
ORIGINAL ARTICLE

The influences of contextualized media on students' science attitudes, knowledge, and argumentation learning through online game‐based activities

Yu‐Ren Lin

Corresponding Author

E-mail address: titiyoyo@msn.com

Institute of Science Education and Communication, Central China Normal University, Wuhan, People's Republic of China

Correspondence

Yu‐Ren Lin, Institute of Science Education and Communication, Central China Normal University, NO.152 Luoyu Road, Wuhan, Hubei, P.R. China 430079.

Email: titiyoyo@msn.com

First published: 21 August 2018

Abstract

The present study defined three levels of contextualized media and investigated their influences on students' science attitudes, comprehension, and argumentation. To achieve this purpose, an online game‐based science argumentation (OGSA) program was developed for the experiments (N = 148). The OGSA included three versions that differed in the contextualization level of the media used for student argumentation: mildly, moderately, and highly contextualized media. For each version, the students had to play two types of games, find‐the‐fault and find‐the‐difference games, to complete one topic of argumentation learning. We found that the students in the highly contextualized group showed the best science learning and argumentation performance. However, producing highly contextualized media was a time‐consuming task, although it provided the students with contextualization cues connecting the learning to their prior knowledge.

Lay Description

What is already known about this topic:

  • Teaching argumentation has become increasingly prevalent and also an essential goal for science education.
  • The use of images in learning plays an important role in facilitating students' understanding of science concepts and theories, because it contextualizes the learning and can boost learning outcomes. However, explanations of the relationship among the use of images, contextualized environments, and learning outcomes remain scarce.
  • A number of online learning environments have been developed to support students' science argumentation. However, few studies have based their designs on elements of gameplay.

What this paper adds:

  • The present study defined three levels of contextualized images and investigated their influences on students' science attitudes, comprehension, and argumentation.
  • High school students needed images that are at least moderately contextualized and related to their prior knowledge in order to achieve high‐quality (that is, rational and evidence‐based) argumentation in terms of claims, warrants, rebuttals, and questions.
  • We found that the contextualization level of images is a factor that significantly affects students' science attitudes, comprehension of science concepts, and argumentation.

Implications for practice and/or policy:

  • The integration of multiple types of game‐based learning (e.g., find‐the‐fault and find‐the‐difference games) into the development of online learning environments would constitute an innovative approach to instruction.
  • We provide an example of how to integrate images of different contextualization levels into the development of an online game‐based program for learning science argumentation.

1 INTRODUCTION

The integration of multimedia and games into the design of online learning would make the learning more effective and educationally entertaining (Alessi & Trollip, 2001). It has been reported that well‐designed online science learning environments support the development of students' science‐related abilities, including science attitudes (Barab & Dede, 2007; Maxmen, 2010; Papastergiou, 2009), understanding of scientific knowledge (Meluso, Zheng, Spires, & Lester, 2012), and scientific argumentation (Yang, Lin, She, & Huang, 2015; Golanics & Nussbaum, 2008), allowing these abilities to be developed more effectively. The existing research regarding text reading comprehension indicates that individuals rely on contextualization cues to obtain meanings (Gumperz, 1986). That is, meaningful learning occurs when a learner's prior knowledge is activated and then applied as the learner interacts with the given new information (Chen & Hwang, 2017; Freitas & Castanheira, 2007; Hwang, Chiu, & Chen, 2015; Tversky, 2011). In this regard, the level to which contextualization cues should be designed and embedded into the online learning environments is a topic that needs to be emphasized and discussed. However, only a limited amount of research regarding the influence of different contextualization cues on students' online science argumentation learning has been conducted (Park, Lee, & Kim, 2009; Van Genuchten, Scheiter, & Schüler, 2012). Based on previous studies regarding prior knowledge and the development of online learning environments (Chen, Wu, & Jen, 2013; Park et al., 2009; Winberg & Hedman, 2008), media with three differing levels of embedded contextualized cues were identified, with these media being termed highly contextualized media, moderately contextualized media, and mildly contextualized media. That is, media were categorized according to the level to which they can activate students' prior knowledge to solve the problem scenarios online. The present study aimed to investigate the influence of these three levels of contextualized media on students' science attitudes, understanding of scientific concepts, and scientific argumentation. Two research questions guided the present study:

  1. Does the level of contextualization for the media used in an online game‐based argumentation learning environment influence the science understanding and science attitudes of students learning in such an environment?
  2. Does the level of contextualization for the media used in an online game‐based argumentation learning environment influence the argumentation of students learning in such an environment in terms of their claims, warrants, rebuttals, and questions?

2 LITERATURE REVIEW

2.1 Argumentation in science classroom

The teaching of argumentation, in which students need to build arguments by considering evidence and counter evidence, using appropriate reasoning, and evaluating alternative standpoints, has become increasingly prevalent and also an essential goal for science education (Duschl, Schweingruber, & Shouse, 2007; Moon, Stanford, Cole, & Towns, 2017; Osborne et al., 2016). It has been reported that argumentation is rarely found in the science discourse of most students, because regular classroom discourse typically follows a pattern in which the teacher initiates discussion by using a question with a known answer, the students respond to the question, and then the teacher evaluates the students' responses (Driver, Newton, & Osborne, 2000; Kilinc, Demiral, & Kartal, 2017; Lemke, 1990).

A number of online learning programs have been developed to support students' use of scientific language in argumentation (Yang et al., 2015; Golanics & Nussbaum, 2008; Hasancebi & Gunel, 2013). These programs situate learners in collaborative, virtual, and explorative environments, enable them to explore relevant information from multiple resources, and support them in making statements and reflections based on multiple views (Meluso et al., 2012). In our previous work, an online argumentation learning environment was developed to facilitate students' use of scientific language and application of scientific evidence in argumentation (Lin, Hung, & Hung, 2017). One argumentation feature was noticed during the analysis of the arguments generated by the students in the low prior knowledge group: they tended to base their arguments on personal experiences, such as what they had done, found, and been told, rather than on scientific knowledge. This finding implied that it is necessary to activate students' prior knowledge and experience in order to help them acquire meaningful learning. However, there have still been only a limited number of empirical studies that have actually investigated game‐based learning (GBL) and the relationship between contextualized factors and science outcomes (Meluso et al., 2012).

In order to provide appropriate scaffolding, research studies have applied Toulmin's argumentation pattern (Toulmin, 1958) as a key theoretical basis (Lin et al., 2017; Erduran, Simon, & Osborne, 2004; Weinberger, Stegmann, & Fischer, 2010). Theoretically, there are six components in Toulmin's model: data, claims, warrants, backings, qualifiers, and rebuttals. Previous studies have indicated, however, that the application of the model can be problematic when clarifying exactly what counts as data, warrants, and backings (Kelly, Druker, & Chen, 1998; Oliveira, Akerson, & Oldfield, 2012). So, like a number of previous studies, we reduced Toulmin's concepts of "warrants," "data," and "backings" into a single category called "warrants" (Erduran et al., 2004; Zohar & Nemet, 2002). Furthermore, it should be noted that argumentation is a kind of social and communicative activity and, relatedly, that the quality of students' argumentation is related to the questioning strategies and questions provided by the teacher and their peers (Chin, 2007; González‐Howard & McNeill, 2016; Martin & Hand, 2009). This implied that it was important to take the utterance of questions into account in our data analyses.

2.2 The effects of GBL on science learning

GBL is thought to be an effective tool that can promote learning experiences (Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012) and motivation (Hung, Sun, & Liu, 2018; Papastergiou, 2009; Perini, Maria, Fradinho, & Manuel, 2017). Kebritchi and Hirumi (2008) identified the following five reasons for applying GBL to instruction: (a) GBL emphasizes peers' interactions, (b) GBL creates personal motivation and satisfaction, (c) GBL accommodates multiple learning styles, (d) GBL reinforces mastery of skills, and (e) GBL provides a collaborative and decision‐making context. Furió, González‐Gancedo, Juan, Seguí, and Rando (2013) indicated that GBL could provide multiple benefits to teachers in terms of helping them to develop appropriate strategies for achieving a number of instructional purposes. When learning through games, students typically take greater interest in and have more opportunities for discovering, talking, exploring, and questioning (Perini et al., 2017; Yang, 2012). However, the use of GBL does not guarantee that learners will consistently achieve better learning results because it is not possible to identify a clear causal relationship between students' gameplay experiences and academic performance (Byun & Joung, 2018; Meluso et al., 2012; Wrzesien & Raya, 2010). As O'Neil, Wainess, and Baker (2005) articulated, educational GBL should take the prior knowledge and life experiences of individuals into account in order to enhance learning outcomes. In one study, Wrzesien and Raya (2010) found that sixth‐grade students who received instruction via a science GBL environment had higher engagement levels than students in a control group; however, there was no direct evidence to prove that the GBL led to significantly better learning achievement among the students who took part in it as compared with the students taught in a traditional class.

In science education, students' science abilities, such as their problem‐solving, questioning, and argumentation abilities, as well as their ability to engage in scientific inquiry, are emphasized in contemporary educational reform, and GBL has been advocated as a promising approach by which to make changes from traditional learning to a more innovative form of learning in which learners' interests and willingness to take part are emphasized (Barab & Dede, 2007; Hung et al., 2018; Maxmen, 2010; Perini, Luglietti, Margoudi, Oliveira, & Taisch, 2017). In fact, even though previous research has yielded both positive and negative evidence regarding the effectiveness of using GBL to enhance learning, researchers still regard GBL as being potentially beneficial to educational reform efforts because it provides a new perspective from which researchers can potentially better understand and explain cognitive as well as affective learning issues.

2.3 Comprehension and contextualized environments

Comprehension theories basically agree on the notion that reading is a constructive process in which readers activate their prior knowledge and integrate it with the new information they are receiving from the text to achieve a coherent understanding (Mason, Tornatora, & Pluchino, 2013; Peterson, 2016). For example, Kintsch and Van Dijk (1978) constructed the construction–integration model based on the assumption that reading comprehension is a bottom‐up process. A reader may build three different types of mental representations of a text: verbatim, semantic, and situational representations (Kintsch, 1998). The initial phase of reading involves meaning construction, in which a reader constructs a network of propositions both from the text and from prior knowledge. This initial phase can be chaotic, regulated by weak production rules, and even result in contradictory outputs. However, the second phase, which is called integration, consists of a refinement process in which "the network of propositions spread their activation until it becomes stable" (Sanjose, Vidal‐Abarca, & Padilla, 2006, p. 7).

Based on Kintsch's theory, Marzano and Kendall (2007) explained that a comprehension process involves integrating and symbolizing phases. To integrate, a learner must recognize the basic structures of any new concept encountered and be able to identify it (i.e., the new concept) as noncritical or critical. To symbolize, a learner must activate their prior knowledge and then create a useful symbolic representation for the new concept under consideration. Previous studies on visual representations have focused on how to use multimedia to enhance contextualized learning and reading comprehension (Fillipatau & Pumfrey, 1996; Freitas & Castanheira, 2007). It has been reported that a reader's interpretations of graphics, texts, and images involve inferential processes (Kearsey & Turner, 1999; Kress & van Leeuwen, 1996). That is, individuals usually rely on contextualization cues to obtain meaning when reading (Gumperz, 1986). For example, Stylianidou, Ormerod, and Ogborn (2002) analyzed the images in science textbooks and students' associated reading comprehension of those books. They suggested that teachers should spend sufficient time discussing textbook images with their students. In other words, constructing a more contextualized environment for students as they engage in reading supports meaningful learning. At the same time, language education researchers have reported that decontextualized environments provide another means by which to learn, with various pieces of evidence showing that such decontextualized environments do improve learners' knowledge acquisition (Hunt & Beglar, 2005; Schmitt, 2008). In this regard, there are at least two different approaches for learning and reading: the decontextualized approach and the contextualized approach. In a study investigating how best to learn a foreign language, Nation (2011) indicated that decontextualized learning constitutes an important approach. This approach is derived from an ancient Greek word and is mostly used for memorization. It is commonly accepted that a learner should expend effort in order to make sense of a new word in multiple contexts; however, the merits of spending too much time to learn just one word have been questioned (Ünaldi, Bardakci, Akpinar, & Dolas, 2013). As such, whether language‐related instruction should be carried out in context or out of context remains a matter of some debate among professionals in language education (Oxford & Scarcella, 1994; Ünaldi et al., 2013).

In science education, we believe that a similar situation exists. That is, the question of how to arrange and present multimedia and concepts and the question of the level to which contextualized environments should be utilized in order to improve students' learning comprehension were important research questions for the present study. An online learning context is usually constructed by applying virtual media, which are created by computer programs, and real‐world media, which are taken from reality (Ge & Er, 2005). One advantage of using virtual media, such as computer simulations, is that doing so eliminates complex factors and avoids dangerous processes that may exist in real‐world situations (Chen et al., 2013; Chen & Hwang, 2017). Importantly, such forms of online learning and simulations have great potential to help learners acquire deeper understandings of science (Kim & Hannafin, 2011; Lai, Hwang, & Tu, 2018; Lindgren, Tscholl, Wang, & Johnson, 2016; Park et al., 2009). In contrast with simulations, digital materials from the real world usually involve details. These details can also be seen as contextualization cues that can assist in activating learners' prior knowledge. The media used for constructing online learning contexts should be based on the learners' prior knowledge in order to support their meaningful learning (Hwang et al., 2015; Taguchi, Li, & Tang, 2017). This principle enabled us to define three levels of contextualized media.

3 METHOD

3.1 Participants

Four 10th‐grade classes totaling 148 students participated in the study and were randomly assigned into four groups: the mildly contextualized group (N = 37), the moderately contextualized group (N = 36), the highly contextualized group (N = 37), and the control group (N = 38). There were five learning topics that were presented to all the participants: (a) balance levers, (b) evolution, (c) refraction and dispersion, (d) plants and seasons, and (e) fermentation.

3.2 Procedure

An online game‐based science argumentation (OGSA) program was created for our quasi‐experimental design. Before the experiment, the students in the three experimental groups attended two 25‐min introductory lessons. The first one introduced them to basic knowledge regarding science argumentation, including the four categories of utterances in argumentation: claims, warrants, rebuttals, and questions, while the second introduced them to the basic operations of the OGSA using the biology exercise topic “how to identify male and female butterflies.” In order to facilitate the students' coconstructions and coevaluations of each other's arguments in the OGSA, a heterogeneous grouping strategy was used in which each larger experimental group was further divided into smaller groups consisting of just four to five members.

After the introductory lessons, the students in the three experimental groups received similar inquiry‐based instruction and then participated in the OGSA in order to complete one topic of learning. Specifically, they were encouraged to investigate the learning topic by conducting experiments in a laboratory, exploring the relevant literature online and in the school library, making observations, and visiting a science museum near the school. Then, they spent two 40‐min classes completing one topic of the OGSA learning in a school computer classroom, with a total of 16 study hours being completed over 3 months. The topic of fermentation is one example of the learning topics covered. For that topic, the students in the three experimental groups were invited to make homemade wine at the beginning of the research semester. In order to acquire a deep understanding of the scientific concept of fermentation, they were encouraged to explore the relevant literature online and interview their friends and family. Moreover, they were asked to record their observations of their homemade wine, including observations of its colour, smell, and other characteristics, by using their cell phone cameras and notebooks every 2 or 3 days throughout the 3 months. A separate cloud database was created for each of the three experimental groups. The experimental group students were asked to import the collected data (literature, reports, notes, photos, and videos) into their respective cloud database. Moreover, they were encouraged to review and discuss the data collected by themselves and their peers in the same experimental group. Only the data collected by the students in the highly contextualized group were reviewed and then selected by our research team as materials for constructing the highly contextualized version of the OGSA. The research team included the teacher, two researchers, and one professor of science education. Among the objects that were selected were the media (including two photos and one 1‐ to 4‐min video) created by the students that could best represent their science inquiry experiences. Before the experiment, we briefly explained to the students in the three experimental groups that the photos and videos they created might be used as materials for constructing the OGSA. We had their permission to use the media; basically, they viewed it as an honour if their media were selected and published. As for the students in the control group, they received traditional instruction that could generally be characterized as teacher‐centred and involving little communication with peers as they covered the five topics. After receiving instruction regarding a given topic, they then had to practice and pass several traditional assessments. Most of the questions in the traditional assessments were well designed so as to determine whether or not the students had acquired a body of knowledge. Hence, in comparison with the students in the experimental groups, the control group students had a learning environment with a minimal level of contextualization.

3.3 The OGSA and definitions of the three levels of contextualized media

The first part of the OGSA provided learners with a brief introduction to the target science concepts and figures in order to engage them in the problem scenarios. After that, the OGSA provided two types of game‐based argumentation for one topic of learning: find‐the‐fault and find‐the‐difference games. The first type consisted of "find fault" argumentation, in which the participants had to find all the faults and unreasonable phenomena in the given images, for example, an upside‐down rainbow. The second type consisted of "find difference" argumentation, which, similarly, required the participants to spot the differences between two given images.

Three versions of the OGSA were developed on the basis of the three respective levels of contextualized media. A main difference among them involved the embedded media they utilized. In our definition, highly contextualized media consist of photos and videos that can effectively activate students' prior knowledge in order to solve the problems in the OGSA. The media created by the highly contextualized group students represented part of their direct experiences. It is thus reasonable to assume that they involved the best contextualization cues in terms of activating the students' prior knowledge. Second, the moderately contextualized media were defined as those media that can provide vicarious experiences. These media were created by our research team using a camera. Any contextualized factors such as backgrounds and voices were removed or limited as much as possible. The students did not participate in any filming tasks. So, the first time they saw the media was when they played the games in the OGSA. Third, as to the mildly contextualized media, they were created by our research team based on the media used in the moderately contextualized version by using drawing and animation software. The mildly contextualized media show science phenomena and experiments (such as experiments involving refraction and dispersion) with as few contextualization cues as possible. With the exception of the media used, the other elements embedded in the OGSA interface, such as the main question for gameplay, student discussion block, and templates for supporting argumentation, were the same. Appendix A shows examples of the three types of contextualized images embedded in the five topics of the OGSA.

Figure 1 shows an interface of the find‐the‐fault gameplay in the moderately contextualized version of the OGSA for the balance levers topic. The students were asked to point out any possible problems with a bamboo dragonfly body that could cause its failure to balance and then propose arguments accordingly. Students were allowed to view a video on the left‐hand side of the interface showing two different bamboo dragonflies. One of them could balance on a small area with its head, while the other one failed to do so. The two photos labelled as “figure 1” and “figure 2” in Figure 1 show the front side and reverse side of the bamboo dragonfly that failed to balance.

Figure 1. Scenario and discussion block for the find‐the‐fault activity in the online game‐based science argumentation (OGSA)

Figure 2 shows an example of a find‐the‐difference activity regarding homemade wine in the highly contextualized version of the OGSA. The two photos in Figure 2 allow for a comparison between a sample of homemade wine on its first day after being made and the same sample after it had been aged for 10 weeks. During gameplay in the OGSA, the students could use a magnifier to spot the differences between the wines in the two photos. Furthermore, the students were allowed to explore related information on the internet and were allowed to review their group members' statements and provide their own responses. The right panel of the OGSA interface was used by the students for entering the four different utterances. To help the students use scientific language in generating their arguments, two layers of templates were provided. The first layer provided definitions (that is, definitions of "claim," "warrant," "rebuttal," and "question"), whereas the second layer provided one to two templates for each argumentation component. As shown on the left side of Figure 2, the interface also provided students with a video of the day on which they made the homemade wine together. Appendix B provides examples of the three versions of the OGSA.

Figure 2. Scenario and discussion block for the find‐the‐difference activity in the online game‐based science argumentation (OGSA)

3.4 Instruments

3.4.1 Scientific Concept Test (SCT)

The Scientific Concept Test (SCT) is a multiple‐choice diagnostic instrument developed to measure students' understanding of the scientific concepts and knowledge covered in the five topics (Cronbach's α = 0.83). Content validity was established with a panel of three evaluators (three master's degree researchers), ensuring that the items were properly constructed and relevant to the five topics. Students received one point for each question answered correctly. There were four items for each topic, so the highest possible score was 20.

3.4.2 Science Attitude Test (SAT)

The Science Attitude Test (SAT) is a 5‐point Likert scale. It was developed on the basis of attitude scales for measuring students' feelings and attitudes about studying science in school (Russell & Hollander, 1975). The SAT included 20 statements about learners' feelings toward science, such as (a) science is very interesting to me; (b) I do not like science, and it scares me to have to take it; and (c) I am always under a terrible strain in a science class. The highest possible score for the scale is 100 (Cronbach's α = 0.86).

3.5 Data analysis

The main data in the present study were the SCT, SAT, and argumentation scores from the OGSA. All the students were requested to complete the SCT and SAT before and after the topic learning. All of the students' arguments proposed in the OGSA were collected. We applied Toulmin's theory and developed a framework for coding the students' dialogic argumentation in the OGSA. Each student statement was read line‐by‐line by the first author and the two teachers. Any off‐task portions of the conversation were removed. Then, we sorted the utterances into the four categories of utterances and evaluated their quality/level based on the analytical framework (Table 1). Cross‐coder reliability was established by two researchers, with a value of 0.87. For example, if a claim included the source of authority, a relevant explanation, or (theoretical or empirical) evidence, we treated it as a Level 2 claim. Otherwise, it was deemed a Level 1 claim. In order to get a complete picture combining both the quality and quantity of the students' argumentation, total scores were calculated, with Level 1 arguments coded as 1 and Level 2 arguments coded as 2.

Table 1. Descriptions, definitions, and examples of the four categories of student utterances

Claim
  Description: Assertions, facts, findings, and views.
  Low quality (1 point): An argument consists of an assertion without any explanation or supportive information.
    Example: "I found that my fruit was dried out."
  High quality (2 points): An argument consists of an assertion with an explanation or supportive information.
    Example: "I found that there was some decay inside the second bottle; however, there was fresh fruit in the first one."

Warrant
  Description: Reasons and explanations used to justify the assertion.
  Low quality (1 point): An argument includes a theory or information but does not connect it to the assertion, or does not clearly address the theory or information.
    Example: "Because of the bacterium."
  High quality (2 points): An argument includes an assertion with supportive theory or information and clearly addresses the connection between the theory and the assertion.
    Example: "The second wine seemed to be oxidized, losing some of its bright red or purple colours and starting to look brown."

Rebuttal
  Description: The exceptional circumstances in which the general authority of the warrant would not hold true, including peers' challenges and counterarguments.
  Low quality (1 point): An argument consists of a weak counterclaim without a clear explanation or reason.
    Example: "Do not listen to her."
  High quality (2 points): An argument consists of a counterclaim with a clearly identifiable explanation and reason.
    Example: "She is wrong. Temperature is very important in the fermentation process, and most wines are best served at a temperature of 16–18 °C."

Question
  Description: Questions from students in argumentation regarding the validity of the data, identifying evidence, giving reasons, and so on.
  Low quality (1 point): A question not based on the topic being discussed, or without a clearly identifiable explanation and reason.
    Example: "How do you know that?"
  High quality (2 points): A question based on the topic being discussed and with a clearly identifiable explanation and reason.
    Example: "Yeasts need glucose to initiate fermentation; so do you know what role yeast plays in the production of wine?"
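
To make the scoring rule described above concrete, the following minimal sketch (not the authors' actual analysis scripts; the data and identifiers are hypothetical) totals coded utterances per student and category, counting each Level 1 utterance as 1 point and each Level 2 utterance as 2 points.

```python
# Minimal sketch with hypothetical coded data: total argumentation scores per
# student, where each utterance contributes its level (Level 1 = 1, Level 2 = 2).
from collections import defaultdict

# (student_id, utterance_category, level) triples produced by the coding framework
coded_utterances = [
    ("S01", "claim", 2),
    ("S01", "warrant", 1),
    ("S01", "question", 2),
    ("S02", "claim", 1),
    ("S02", "rebuttal", 2),
]

def total_scores(utterances):
    """Sum the points per student and per category of utterance."""
    scores = defaultdict(lambda: defaultdict(int))
    for student, category, level in utterances:
        scores[student][category] += level
    return {student: dict(by_category) for student, by_category in scores.items()}

print(total_scores(coded_utterances))
# {'S01': {'claim': 2, 'warrant': 1, 'question': 2}, 'S02': {'claim': 1, 'rebuttal': 2}}
```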

4 RESULTS

4.1 Scientific Concept Test

Table 2 presents a summary of the descriptive measures, the one‐factor analysis of covariance (ANCOVA), and the post hoc test results on the SCT. The analysis was conducted with the pretest of the SCT as a covariate and the contextualized level as the independent variable. The results indicated that the contextualized level had a significant effect on the students' science comprehension (F(3) = 12.69, p < 0.001). Because the variance of the data was homogeneous (F(3, 144) = 0.068, p > 0.05), the Sidak post hoc test was used, and it showed that the students in the highly contextualized group outperformed their peers in the mildly contextualized (mean difference (md) = 2.50, p < 0.05) and the control (md = 5.12, p < 0.001) groups; the moderately contextualized group outperformed the control group (md = 2.48, p < 0.001); and the mildly contextualized group outperformed the control group (md = 2.62, p < 0.05).

Table 2. One‐way ANCOVA on the SCT

Group              N    Pretest mean   Posttest mean   Pretest SD   Posttest SD
Mildly group       37   4.01           10.22           0.882        4.131
Moderately group   36   4.17           11.11           1.00         3.232
Highly group       37   4.05           12.73           0.911        2.704
Control group      38   4.05           7.42            1.13         3.554

Source                 df   F (sig)    Partial η2
Pretest                1    0.448      0.048
Contextualized level   3    12.69***   0.253

  • Note: ANCOVA: analysis of covariance; df: degrees of freedom; SCT: Scientific Concept Test; SD: standard deviation.
  • * p < 0.05
  • ** p < 0.01
  • *** p < 0.001
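
For readers who wish to reproduce this kind of analysis on their own data, the sketch below runs a one‐way ANCOVA with the pretest as a covariate using statsmodels. The data, column names, and effect sizes are simulated for illustration only; this is not the study's dataset or the authors' analysis code.

```python
# Minimal sketch of a one-way ANCOVA (posttest by group, adjusting for pretest).
# Data are simulated for illustration; they are not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(["mild", "moderate", "high", "control"], 37)
pretest = rng.normal(4, 1, groups.size)
group_effect = {"mild": 10, "moderate": 11, "high": 13, "control": 7}
posttest = (np.array([group_effect[g] for g in groups])
            + 0.3 * pretest + rng.normal(0, 3, groups.size))

df = pd.DataFrame({"group": groups, "sct_pre": pretest, "sct_post": posttest})

# Fit posttest ~ pretest + group and print the Type II ANCOVA table
model = smf.ols("sct_post ~ sct_pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# Pairwise comparisons of adjusted means (e.g., with a Sidak correction)
# would follow as a separate post hoc step.
```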

4.2 Science Attitude Test

A one‐factor repeated‐measures analysis of variance (ANOVA) was conducted with the pretest and posttest of the SAT as the repeated factor and the contextualized level as the independent factor. The analysis helped us to see the degree of the students' improvement in science attitudes and to investigate the effect of the contextualized level on the students' science attitudes (Table 3). The results revealed that all four student groups showed a significant improvement after the five topics of learning (F(1) = 541.2, p < 0.001). The contextualized level had a statistically significant effect on the students' science attitudes (F(3, 144) = 42.8, p < 0.001). Because the variances of the data were homogeneous for both the pretest (F(3, 144) = 2.47, p > 0.05) and the posttest (F(3, 144) = 2.45, p > 0.05), the Sidak post hoc test was used. The results showed that the students in the highly contextualized group outperformed their peers in the mildly contextualized (md = 9.28, p < 0.001) and the control (md = 17.42, p < 0.001) groups. The moderately contextualized group outperformed the mildly contextualized group (md = 5.15, p < 0.05) and the control group (md = 13.29, p < 0.001); the mildly contextualized group outperformed the control group (md = 17.42, p < 0.01). There was a significant interaction between the two variables (F(3) = 73.25, p < 0.001). A follow‐up simple main effect analysis indicated that the contextualized level had a statistically significant effect on the posttest of the SAT (F(3) = 97.09, p < 0.001).

Table 3. One‐factor repeated‐measures ANOVA on the SAT

Group              N    Pretest mean   Posttest mean   Pretest SD   Posttest SD
Mildly group       37   52.22          70.16           8.36         10.68
Moderately group   36   52.08          80.61           7.00         9.10
Highly group       37   53.00          87.95           6.72         6.80
Control group      38   52.74          53.37           10.54        10.16

Source                        df   F (sig)    Partial η2
Test (pre/post)               1    541.2***   0.790
Contextualized level          3    42.8***    0.472
Test × contextualized level   3    73.25***   0.604

  • Note: ANOVA: analysis of variance; df: degrees of freedom; SAT: Science Attitude Test; SD: standard deviation.
  • * p < 0.05,
  • ** p < 0.01,
  • *** p < 0.001.
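
A hedged sketch of this pre/post-by-group design is shown below. It uses simulated long‐format data and the pingouin package's mixed_anova routine as one possible way to run a one‐between, one‐within ANOVA; it is not the authors' actual procedure, and the column names are hypothetical.

```python
# Minimal sketch of a mixed (one between-subjects x one within-subjects) ANOVA.
# Data are simulated for illustration; column names are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
group_means = {"mild": (52, 70), "moderate": (52, 81), "high": (53, 88), "control": (53, 53)}

records, subject_id = [], 0
for group, (pre_mean, post_mean) in group_means.items():
    for _ in range(37):  # roughly equal group sizes, as in the study
        records.append({"subject": subject_id, "group": group, "time": "pre",
                        "sat": rng.normal(pre_mean, 8)})
        records.append({"subject": subject_id, "group": group, "time": "post",
                        "sat": rng.normal(post_mean, 9)})
        subject_id += 1
df = pd.DataFrame(records)

# Time (pre/post) is the within-subjects factor; group is the between-subjects factor
aov = pg.mixed_anova(data=df, dv="sat", within="time", subject="subject", between="group")
print(aov.round(3))
```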

4.3 Online game‐based science argumentation

Figure 3 presents a clustered bar chart generated from the descriptive statistics and ANOVA data for comparing the OGSA performance of the students in the highly, moderately, and mildly contextualized groups. A one‐factor repeated‐measures ANOVA with the topic as the repeated factor and the contextualized level as the independent factor was conducted to investigate the students' improvements in argumentation scores (claims, warrants, rebuttals, and questions). It revealed that, in general, all three student groups acquired a significant improvement (Wilks' lambda (4) = 104.1, p < 0.001). A one‐factor multivariate analysis of variance was then conducted to investigate the differences among the three groups across the five topics. The results showed that the contextualized level had a significant effect on the argumentation score in Topic 1 (F(3) = 34.60, p < 0.0001), Topic 2 (F(3) = 33.9, p < 0.0001), Topic 3 (F(3) = 22.2, p < 0.0001), Topic 4 (F(3) = 42.0, p < 0.0001), and Topic 5 (F(3) = 31.9, p < 0.0001). Because the homogeneity of variance assumption did not hold for the five topics, follow‐up Games‐Howell post hoc tests were used. They showed that the highly contextualized group outperformed their peers in the mildly contextualized group (Topic 1, md = 2.14, p < 0.001; Topic 2, md = 5.51, p < 0.001; Topic 3, md = 4.65, p < 0.001; Topic 4, md = 9.16, p < 0.001; Topic 5, md = 9.01, p < 0.001). The moderately contextualized group outperformed the mildly contextualized group (Topic 1, md = 1.62, p < 0.001; Topic 3, md = 4.80, p < 0.001; Topic 4, md = 7.87, p < 0.001; Topic 5, md = 7.10, p < 0.001). The highly contextualized group outperformed their peers in the moderately contextualized group in Topic 2 (md = 4.52, p < 0.001).

Figure 3. Descriptive statistics and ANOVA for comparing the online game‐based science argumentation performance of students in the three groups across the five topics
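
Because the group variances were heterogeneous, the post hoc comparisons reported above used the Games‐Howell procedure. A minimal sketch of such a comparison is given below; the data are simulated, and pingouin is used as one convenient implementation, not necessarily the authors' tool.

```python
# Minimal sketch of a Games-Howell post hoc test, which does not assume
# equal group variances. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
groups = np.repeat(["mild", "moderate", "high"], 37)
location = {"mild": 5.0, "moderate": 7.1, "high": 8.8}   # hypothetical topic scores
spread = {"mild": 1.5, "moderate": 2.4, "high": 2.0}     # unequal variances
scores = np.array([rng.normal(location[g], spread[g]) for g in groups])

df = pd.DataFrame({"group": groups, "argument_score": scores})
print(pg.pairwise_gameshowell(data=df, dv="argument_score", between="group").round(3))
```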

Table 4 presents the results of a one‐factor repeated‐measures ANOVA that was conducted with the scores of the four categories of utterances generated by the students in the five topics as the repeated factor and the contextualized level as the independent factor. The results showed that the students in the three groups made significant progress in terms of producing claims (F(4) = 246.6, p < 0.001), warrants (F(4) = 236.7, p < 0.001), rebuttals (F(4) = 192.7, p < 0.001), and questions (F(4) = 196.3, p < 0.001). The contextualized level had a statistically significant effect on the students' claims (F(2) = 49.67, p < 0.001), warrants (F(2) = 60.33, p < 0.001), rebuttals (F(2) = 10.09, p < 0.001), and questions (F(2) = 13.26, p < 0.001). There were significant interactions between the two factors in the categories of claim (F(8) = 15.08, p < 0.001), warrant (F(8) = 10.41, p < 0.001), rebuttal (F(8) = 3.79, p < 0.01), and question (F(8) = 5.08, p < 0.01). The simple main effect analyses further showed that, for the category of claim, the highly contextualized group outperformed their peers in the moderately (md = 0.92, p < 0.01) and the mildly contextualized groups (md = 2.5, p < 0.001), and the moderately contextualized group outperformed the mildly contextualized group (md = 1.58, p < 0.001). For the category of warrant, the highly contextualized group outperformed their peers in the mildly contextualized group (md = 2.03, p < 0.001), and the moderately contextualized group outperformed the mildly contextualized group (md = 1.67, p < 0.001). For the category of rebuttal, the highly contextualized group outperformed their peers in the mildly contextualized group (md = 0.78, p < 0.001), and the moderately contextualized group outperformed the mildly contextualized group (md = 0.60, p < 0.01). For the category of question, the highly contextualized group outperformed their peers in the mildly contextualized group (md = 0.77, p < 0.001), and the moderately contextualized group outperformed the mildly contextualized group (md = 0.62, p < 0.001).

Table 4. One‐way repeated‐measures ANOVA of the four utterance scores in the OGSA

Claim — mean (SD) by topic
Group              Topic 1       Topic 2       Topic 3       Topic 4       Topic 5
Mildly group       1.11 (0.51)   1.70 (0.96)   1.97 (1.40)   3.30 (0.93)   4.97 (1.50)
Moderately group   1.47 (0.65)   2.08 (1.18)   4.08 (0.90)   6.25 (2.10)   7.08 (2.23)
Highly group       2.01 (1.05)   3.73 (1.75)   4.27 (1.88)   6.73 (1.88)   8.84 (2.04)

Source                         df   F (sig)    Partial η2
Topic                          4    403.9***   0.791
Contextualized level           2    49.67***   0.481
Topic × contextualized level   8    15.08***   0.220

Warrant — mean (SD) by topic
Group              Topic 1       Topic 2       Topic 3       Topic 4       Topic 5
Mildly group       0.14 (0.34)   0.86 (1.03)   1.46 (0.98)   1.81 (1.46)   3.65 (1.20)
Moderately group   1.17 (0.37)   1.44 (0.80)   2.83 (1.25)   4.47 (2.56)   6.36 (1.62)
Highly group       1.27 (0.45)   2.81 (1.48)   2.27 (1.50)   5.24 (2.38)   6.49 (1.48)

Source                         df   F (sig)    Partial η2
Topic                          4    236.7***   0.689
Contextualized level           2    60.33***   0.530
Topic × contextualized level   8    10.41***   0.163

Rebuttal — mean (SD) by topic
Group              Topic 1       Topic 2       Topic 3       Topic 4       Topic 5
Mildly group       0.03 (0.16)   0.08 (0.27)   0.22 (0.47)   0.70 (1.41)   2.86 (1.67)
Moderately group   0.17 (0.37)   0.08 (0.28)   0.97 (1.27)   2.00 (1.17)   3.72 (2.45)
Highly group       0.11 (0.31)   0.95 (1.26)   0.89 (1.04)   2.05 (1.52)   3.78 (2.01)

Source                         df   F (sig)    Partial η2
Topic                          4    192.7***   0.643
Contextualized level           2    10.09***   0.169
Topic × contextualized level   8    3.79*      0.066

Question — mean (SD) by topic
Group              Topic 1       Topic 2       Topic 3       Topic 4       Topic 5
Mildly group       0.00 (0.00)   0.03 (0.16)   0.16 (0.37)   0.46 (1.07)   2.14 (0.34)
Moderately group   0.08 (0.28)   0.06 (0.23)   0.72 (0.84)   1.42 (1.30)   3.56 (2.38)
Highly group       0.03 (0.16)   0.70 (0.84)   1.03 (1.11)   1.41 (1.01)   3.51 (2.05)

Source                         df   F (sig)    Partial η2
Topic                          4    196.3***   0.647
Contextualized level           2    13.26***   0.199
Topic × contextualized level   8    5.08**     0.087

  • Note: ANOVA: analysis of variance; df: degrees of freedom; OGSA: online game‐based science argumentation; SD: standard deviation.
  • * p < 0.05,
  • ** p < 0.01,
  • *** p < 0.001.

5 DISCUSSION AND CONCLUSION

An important finding of the present study was that the "level of contextualization" factor had statistically significant effects on the students' science attitudes, their comprehension of science concepts, and their argumentation in terms of claims, warrants, rebuttals, and questions. The results of the follow‐up post hoc tests showed that the three experimental groups significantly outperformed the control group. One reason explaining these results is that the experimental group students participated in the various inquiry‐based instructions and games in the OGSA. As reported by previous studies, inquiry‐based and innovative instruction supports the development of positive attitudes toward science (Barab & Dede, 2007; Chen et al., 2013; Lindgren et al., 2016; Maxmen, 2010), deeper knowledge understanding, and the use of language and arguments like those used by scientists (Golanics & Nussbaum, 2008; Hasancebi & Gunel, 2013; Meluso et al., 2012; Moon et al., 2017). Specifically, the students in the three experimental groups were allowed to explore information collaboratively, test their ideas with experiments, make observations, take photos and videos for reflection, and communicate their ideas during the OGSA participation. Such integration of inquiry‐based activities and the OGSA supported the students' science learning, which explains the results of the present study.

On the other hand, the comparisons among the three experimental groups in the post hoc tests revealed that the higher the level of contextualization of the media used in the OGSA, the better the performance of the students in terms of their science attitudes, comprehension, and argumentation across the five topics. It is worth noting, however, that there was no significant difference between the highly and moderately contextualized groups in terms of the SAT and SCT results. Moreover, the statistical analyses of the OGSA scores revealed that there was a significant difference between the highly and moderately contextualized groups for only one of the five topics. Overall, the results indicated that the students needed at least moderately contextualized media to achieve better science learning. In other words, the contextualization cues in the highly and moderately contextualized media could activate the students' prior knowledge and support their comprehension, argumentation, and development of positive attitudes. Specifically, during the find‐the‐fault gameplay, the students needed to engage in careful observation of the given media and in reflection on what they had learned in order to point out unreasonable aspects and generate various arguments. Such skills are considered cognitively demanding for most middle school students (Baron, 1995; Osborne et al., 2016). Thus, the details in the highly and moderately contextualized media could be seen as having promoted these skills more strongly than the mildly contextualized media in terms of activating the students' prior science knowledge and promoting their ability to argue scientifically. Take the fermentation activity, for example: when the students reviewed the photos and videos that they took during the homemade wine activity, the media would simultaneously remind them of how they made their own wine, where they obtained related information, the results of their observations, and their direct experiences, such as discussing related topics with their family and friends. Such recall of their experiences and knowledge supported their construction of high‐quality arguments in the OGSA. The results support the assertion that meaningful learning takes place through interactions between new information and the learner's prior knowledge (Kintsch, 1998; Mason et al., 2013; Stylianidou et al., 2002).

With respect to the present study, it is worth noting that it took some amount of time for the students to explore scientific knowledge with inquiry‐based strategies and to record their learning process and upload the media to the OGSA. For example, for the topic of plants and seasons, the students had to take a number of photos of the school campus from the same vantage point but in different months by using their cell phone cameras (i.e., August and December). Although these tasks of taking photos were not complicated, they did take time to complete, and some educators may doubt whether it is worth spending too much time to learn just a few points (Ünaldi et al., 2013). Based on the results of the present study, however, we believe the expended time was worth it in terms of its value to the students' learning (Freitas & Castanheira, 2007; Stylianidou et al., 2002).

6 SUGGESTIONS AND LIMITATIONS

A basic suggestion based on the findings of the present study is to develop contextualized media in order to support students' online GBL, even though doing so may be a time‐consuming task. On the basis of the quantitative results, we found that the students, especially those in the highly contextualized group, gradually abandoned the fixed templates in generating their arguments. They used complex evidence‐based arguments in regard to the later topics, which was another reason for their high performance. Specifically, some students pointed out the irrational refraction angle at the water–air boundary as a fault in Topic 3 of the OGSA learning. They further explained that "such a phenomenon could be real" to critique their peers who believed that the phenomenon would not happen in the real world. Their reasons included the statements that "it is not the water‐air boundary we observed in the lab, the value of which was 1.33" and "the angle depends on the density of the medium," among others. They integrated their prior scientific knowledge into the generation of arguments in a number of ways. However, the present study did not explore such integration via qualitative analysis; this is worthy of being studied in the future. In the development of the OGSA, we used visual media for the two games, as it was difficult for us to construct online GBL environments using only decontextualized materials such as text descriptions. However, such a decontextualized method may also be appropriate, especially for learning abstract scientific knowledge (Lawson, Alkhoury, Benford, Clark, & Falconer, 2000). This is a subject that needs to be further explored.

The present research includes certain limitations. First, we organized a single case study, so the quantitative generalizability of the research is limited. Second, Toulmin's theory includes six components of a sound argument, and we did not include the "qualifier" in our coding system. In argumentation, a qualifier is usually used to reconcile a conflict caused by critiques and to evaluate the validity of an argument (Toulmin, 1958). We had considered that students may use more qualifiers in competitive activities, such as public hearings and debates, rather than in gameplay activities. However, our viewpoint on this matter changed after the experiments, because the students' arguments regarding the later topics were much more complex, as discussed above. This is another research idea that can be explored in our future research. Another limitation of the present study was related to the experimental procedure. There was an unexpected factor that may have affected the learning outcomes of the students even though the students in the three experimental groups received the same instructions and learning tasks. That is, after the second topic of the OGSA learning, they gradually noticed that the media embedded in the OGSA came exclusively from the highly contextualized student group. The highly contextualized group students thus noticed that their media had a greater chance of being published in the OGSA, which potentially encouraged them to spend more time on observations and on creating more accurate media. On the other hand, the students in the other two groups did not receive such encouragement because they also noticed which media were chosen for the OGSA. This aspect of the experimental design could thus be viewed as a limitation insofar as it means that we cannot conclude with certainty that the significant differences among the three experimental groups were entirely due to the level of contextualized media. That is, the awareness of which media were selected and where the media derived from may have influenced the students' learning motivations and outcomes.

APPENDIX A

QUESTIONS AND IMAGES USED IN THE THREE VERSIONS OF THE ONLINE GAME‐BASED SCIENCE ARGUMENTATION

Question of Topic 1:

A bamboo dragonfly can be balanced on a very small area using its head; this shows an example of dynamic balancing. However, the bamboo dragonfly on this page cannot do so; please find the possible problem. The two figures below show the front (figure 1) and reverse (figure 2) sides of the bamboo dragonfly.

[Images for Topic 1: figure 1 (front side) and figure 2 (reverse side) of the bamboo dragonfly, each shown in the mildly, moderately, and highly contextualized versions]

Question of Topic 2:

The two images below show artificial fossils and landscapes used to explain Darwin's theory of evolution. Please point out and explain the unreasonable aspects of the creatures and the arranged environments for each image.

[Images for Topic 2: two scenes of artificial fossils and landscapes, each shown in the mildly, moderately, and highly contextualized versions]

Question of Topic 3:

Please point out unreasonable aspects in the following two images based on your knowledge of refraction and dispersion. One of them shows a curved item (figure 1) and the other one shows a rainbow (figure 2).

[Images for Topic 3: figure 1 (curved item) and figure 2 (rainbow), each shown in the mildly, moderately, and highly contextualized versions]

Question of Topic 4:

The two images below consist of photographs of plants taken in the summer (figure 1) and winter (figure 2), respectively. Please point out and explain the unreasonable phenomena in the two photographs.

[Images for Topic 4: figure 1 (summer) and figure 2 (winter) photographs of plants, each shown in the mildly, moderately, and highly contextualized versions]

Question of Topic 5:

The two images below show homemade wines that have been aged for 2 days (figure 1) and 10 weeks (figure 2), respectively. Please point out and explain the unreasonable phenomena.

[Images for Topic 5: figure 1 (wine aged 2 days) and figure 2 (wine aged 10 weeks), each shown in the mildly, moderately, and highly contextualized versions]

APPENDIX B

SNAPSHOTS OF THE THREE ONLINE GAME‐BASED SCIENCE ARGUMENTATION VERSIONS

1. Highly contextualized version of the online game‐based science argumentation (OGSA): [screenshot]

2. Moderately contextualized version of the OGSA: [screenshot]

3. Mildly contextualized version of the OGSA: [screenshot]