An empirical investigation into students' mathematical word‐based problem‐solving process: A computerized approach
Abstract
This study proposes a computer‐assisted system based on Polya's problem‐solving model. The system is designed to help average and low‐achieving second graders in mathematics with word‐based addition and subtraction questions. The emphasis in using this model was on dividing the problem‐solving procedure into stages and concentrating on the stages that are problematic for students. Specifically, we compared the mathematical word problem‐solving performance and computational skills of students who used the computer‐assisted system with those of students who received general strategy instruction. Participants were 52 second‐grade students randomly assigned to treatment conditions. Students were pretested and posttested with mathematical problem‐solving and computation tests, and repeated measures of their progress in word problem solving were recorded. The results showed a significant difference between the experimental and control groups on the word problem‐solving progress measure, favouring the experimental group. This indicates that providing students with a computer‐assisted system that lets them explore every stage of the problem‐solving procedure is one possible way to enhance their problem‐solving skills.
Lay Description
What is currently known about the subject matter
- Problem‐based learning is recognized as an instruction method that positively influences learning outcomes along with higher order thinking skills.
- The use of computers to implement findings from qualitative research related to problem‐solving teaching strategies can develop reasoning skills in mathematics and problem solving of elementary students with learning difficulties in mathematics.
- Polya's model helps students to solve mathematical word‐based problems.
What the paper adds to this subject
- This paper suggests that efforts to improve word problem solving should focus on episodes students neglect when solving problems.
- This study proposes a computer‐assisted system with a design based on Polya's problem‐solving model.
- This empirical study investigates whether providing students with computer‐based assistance at each stage of the problem‐solving process helps students to overcome their difficulties.
- This study explores whether students fail to solve the problem because they do not understand the specific relations among the three problem quantities, or in choosing a correct solution strategy, or in carrying out the solution strategy correctly.
The implications of study findings for practitioners
- Combining Polya's model with computer assistance at each stage of the solving procedure enhances students' capabilities to solve mathematical word‐based problems.
- Students encountered problems when reviewing their solutions, signifying that they did not recognize the variants of the problem.
1 INTRODUCTION
In traditional teaching, assessment of whether students understand a mathematical problem is based on whether they can describe the correct arithmetic procedure. However, evaluating students' grasp of mathematical concepts and their ability to solve math problems merely from their written answers is not enough (Huang, Liu, & Chang, 2012).
Descriptive evaluation is recognized as an effective method that requires students to write out the problem‐solving process so that teachers can analyse what the students do not understand and help improve their comprehension. During the past 30 years, there has been an increasing emphasis on assessing problem solving by examining the cognitive processes involved while students are engaged in problem solving.
In science education, a word problem is a mathematical exercise where significant background information on the problem is presented as text rather than in mathematical notation (Verschaffel, Greer, & de Corte, 2000). Two steps are needed to solve a math word problem. First, the wording is translated into a numeric equation that combines smaller expressions, and then the equation is solved. Despite the growing attention directed at problem‐solving skills, teachers often experience difficulties in teaching students how to approach problems and how to make use of proper mathematical tools (Harskamp & Suhre, 2006). The difficulties stem in part from the fact that the teaching methods and performance assessment methods are inadequate and limited; the difficulty is greater for elementary school teachers who are not subject‐matter (mathematics) teachers. For example, Stern and Lehrndorfer (1992) reported that word problems depicting the comparison of quantities have been shown to be difficult for elementary school children in several studies.
In order to implement the new national curriculum in Algerian elementary schools, the National Institute for Educational Research promoted many projects such as the “Accompanying Measures for Primary Mathematics Learning.” Along with the projects, some recommendations were made to basically change the performance assessment methods to enable teachers to better evaluate students' knowledge and skills (Institut National de Recherche en Éducation, 2006). The use of multimedia and interactive computer programs may be a viable way to stimulate the acquisition of such mathematical problem‐solving skills. The question therefore arises as to what extent those new technologies facilitate the acquisition and improvement of math problem‐solving strategies and consequently the mathematical performance.
Many previous studies have dealt with students' mathematical problem‐solving difficulties and proposed computer‐assisted systems to help students acquire problem‐solving knowledge (see, e.g., Chang et al., 2006; Schoppek & Tulis, 2010; Huang et al., 2012; Adesina, Stone, Batmaz, & Jones, 2014). However, these systems have not been investigated empirically to determine at which stage of the problem‐solving process students encounter challenges. Chang, Sung, and Lin (2006) reported that previous computer programs have all incorporated the problem‐solving steps within a single stage, making it difficult to diagnose the stages at which errors occur when a student encounters difficulties during problem solving. Efforts to improve problem solving should focus on the episodes students neglect when solving problems and on making them realize that mathematics is not only about results.
The purpose of this paper is to propose a computer‐assisted one‐step word problem‐solving system based on the four problem‐solving stages of Polya (1945): (a) understanding the problem, (b) making a plan, (c) executing the plan, and (d) reviewing the solution. First, at the understanding‐the‐problem stage, we evaluate how well students identify the mathematical concepts needed to solve the problem and how well they understand and apply the information given in the problem. Second, at the making‐a‐plan stage, we evaluate how efficiently students establish relationships among the elements of the problem using pre‐established schematic diagrams. At the executing‐the‐plan stage, we assess how precisely students apply the rules of subtraction, solve subtraction problems with borrowing, and correctly carry out addition problems that involve a carry‐over number. Finally, at the reviewing‐the‐solution stage, we determine how precisely and clearly the students recognize the variants of the solved problem (put‐together, change‐get‐more, change‐get‐less, and compare) in order to verify their solutions.
The primary research questions that we address in the present study are as follows: (a) whether students benefit from computer‐assisted stage‐based mathematical word problem solving and (b) at which stage of the problem‐solving model students encounter difficulties.
The paper is organized as follows: Section 2 reviews the influence of computer‐assisted environments on mathematics instruction and the impact of the resolution model on students' problem solving. In Section 3, we present the proposed problem‐solving process and describe the student–machine interfaces used at each stage, together with the embedded techniques that help students achieve a successful outcome at each stage. Section 4 details the experimental design, and Section 5 presents and discusses the analytical results. Finally, in Section 6, we conclude by assessing the adopted strategy and identifying future work directions.
2 RELEVANT RESEARCH WORK
Arithmetic word problems play an important role in elementary school mathematics curricula in terms of developing general problem‐solving skills (Verschaffel, Greer, & De Corte, 2007). Studies on how children solve addition and subtraction problems date back to the early part of the last century (Arnett, 1905; Browne, 1906). Since that time, a number of researchers have investigated how children solve addition and subtraction problems (see, e.g., Svenson, 1975; Carpenter, Hiebert, & Moser, 1981; Van de Walle, Karp, & Williams, 2007; Reys, Lindquist, Lambdin, & Smith, 2014; Siegler & Jenkins, 2014). With technological advancement and the arrival of the multimedia computer instruction era, more and more investigations have focused on interactive learning methods through multimedia computers. The use of computers to implement findings from qualitative research on problem‐solving teaching strategies may be a viable way to develop the mathematical reasoning and problem‐solving skills of elementary students with learning difficulties in mathematics.
2.1 Computer‐aided problem solving
In recent years, there has been growing research on the influence of computer‐assisted environments on mathematics instruction (see Harskamp & Suhre, 2006; Huang & Ke, 2009; Schoppek & Tulis, 2010; Li & Ma, 2010; Gunbas, 2015). There is empirical evidence that computer‐assisted learning environments enhance students' learning and motivation (Lopez‐Morteo & Lopez, 2007). For example, Panaoura (2012) investigated the improvement of students' mathematical performance with a mathematical model delivered via a computerized approach and showed that students bolstered their problem‐solving abilities. Xin et al. (2016) explored the potential effects of the Please Go Bring Me Conceptual Model‐Based Problem Solving (PGBM‐COMPS) intelligent tutor system compared with a traditional teacher‐delivered intervention for multiplicative problem solving. According to the findings, the PGBM‐COMPS intelligent tutor yielded better outcomes in enhancing participating students' multiplicative problem solving. However, the number of participants in that study was limited, and future studies with larger sample sizes might strengthen the validity of the PGBM‐COMPS program. The relative effectiveness of computer‐ and teacher‐mediated instruction on students' word problem‐solving performance has also been studied (Leh & Jitendra, 2013); however, between‐condition differences were not statistically significant at posttest or at a 4‐week retention test of word problem solving. One of the most popular software programs for word problem solving is “GO Solve Word Problems” (Tom Snyder Productions, 2005). This tool introduces students to the most common types of arithmetical situations reflected in word problems. It uses graphic organizers to help students construct concrete mental models of the situations and relationships from the information supplied in each problem.
This program has the advantage of providing a different diagram for each problem situation. However, it does not separate the problem‐solving episodes, making it difficult to diagnose at which stage students' misconceptions arise.
2.2 Polya's problem‐solving strategy
Polya's (1945) four‐step process has provided a model for teaching and assessing problem solving in mathematics classrooms: understanding the problem, devising a plan, carrying out the plan, and looking back. To help students better cope with the difficulties encountered in solving problems, many researchers have developed computer‐assisted mathematical problem‐solving tools based on Polya's problem‐solving strategy. For example, Ma and Wu (2000) designed a set of engaging active learning materials for teaching, with the system designed according to Polya's problem‐solving strategy; research outcomes indicated that both students' learning interest and their achievement improved. Huang et al. (2012) built a computer‐assisted mathematical problem‐solving system to help low‐achieving second and third graders in mathematics with word‐based questions. They found that the mathematical problem‐solving abilities of experimental group students were significantly superior to those of the controls. The combination of Polya's strategy with schema representation and solution trees has also been studied (Chang et al., 2006). In that study, the authors developed a computer‐assisted problem‐solving system and tested it with elementary school mathematical problems involving addition, subtraction, multiplication, and division. The system was empirically demonstrated to be effective in improving the performance of students with weaker problem‐solving capabilities. Recently, Yang, Chang, Cheng, and Chan (2016) examined how to foster pupils' mathematical communication abilities by using tablet PCs. A reciprocal peer‐tutoring‐enhanced mathematical communication system was designed to support students' math creations (including mathematical representation, solution, and solution explanation of word problems) and reciprocal peer‐tutoring activities.
The system activity flow involved four steps: understanding the problem, drawing a representation, writing a solution, and explaining the solution. These steps were designed according to Polya's (1973) findings about problem solving. The results showed that students' mathematical representations and solution explanations became more accurate after the learning activity.
Taken together, the cited examples indicate that computer‐assisted mathematics problem‐solving systems have a positive impact on children's problem‐solving abilities. Yet certain issues warrant further investigation. Herein, we first explore whether students fail to solve problems because they do not understand the specific relations among the problem quantities, because they choose an incorrect solution strategy, or because they carry out the solution strategy incorrectly. We also explore whether providing students with computer‐based assistance at each stage of the problem‐solving process helps them overcome their difficulties. Therefore, we focus predominantly on supporting students at the various stages to allow them to solve one‐step mathematical word problems on the basis of (a) understanding of the problem, (b) planning of the solution, (c) execution of the plan, and (d) reviewing of the solution.
3 SYSTEM DESIGN AND FRAMEWORK OUTLINE
According to Polya's problem‐solving model, the proposed system is designed to guide average and low‐achieving second‐grade students through the parts of the problem‐solving process that they often ignore or fail to understand by providing steps to identify what is given and what is requested in the problem or how to organize the solving plan. We chose the second‐grade level because students at this point supposedly have robust knowledge of these types of problems (Fuchs & Fuchs, 2005).
As schema representation is very helpful for conceptualizing the semantics of a problem (Jitendra et al., 2007; Reusser, 1996), the learning system helps students think about and solve mathematical problems at the making‐a‐plan stage with a graphical representation consisting of two operand nodes, one operator node, and one result node (see Figure 3). Each operand and result node has two attributes, the label and the value, representing, respectively, the meaning of the node and its numerical value. The values of the two operand nodes and the operator node correspond to the two operands and the operator in the mathematical expression; the value at the result node is the result of the expression.
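The node structure described above can be sketched as a small data model. This is an illustrative reconstruction, not the system's actual code; the labels ("marbles at first", etc.) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: str   # the meaning of the quantity (the node's label attribute)
    value: int   # its numerical value

@dataclass
class PlanSchema:
    """One-step schema: operand1 (operator) operand2 = result."""
    operand1: Node
    operand2: Node
    operator: str        # "+" or "-"
    result: Node

    def evaluate(self) -> int:
        if self.operator == "+":
            return self.operand1.value + self.operand2.value
        return self.operand1.value - self.operand2.value

# Illustrative instance: the result node's value is initially unknown (0)
schema = PlanSchema(Node("marbles at first", 13),
                    Node("marbles given away", 8),
                    "-",
                    Node("marbles left", 0))
schema.result.value = schema.evaluate()
print(schema.result.value)  # 5
```

Comparing a student's schema with the system's built-in one then reduces to comparing the selected labels, values, and operator node by node.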
The problems used in this study are divided into four types based on the classification of Vergnaud (1982): (a) put‐together, (b) change‐get‐more, (c) change‐get‐less, and (d) compare. As all the problems of the learning system involve three quantities and any of these quantities can be unknown, there are three possible problem subtypes within each main problem category. Two of these require subtraction of the two given numbers in the problem, and one requires addition of the two given.
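The type-by-unknown structure can be made concrete with a small enumeration. Treating every problem as a part + part = whole relation is a simplification we make for this sketch; with it, the position of the unknown determines the required operation, matching the count stated above.

```python
PROBLEM_TYPES = ["put-together", "change-get-more", "change-get-less", "compare"]

def operation_needed(unknown: str) -> str:
    """In a part + part = whole relation, an unknown whole is found by
    addition; an unknown part is found by subtracting the two givens."""
    return "addition" if unknown == "whole" else "subtraction"

subtypes = [(ptype, unknown, operation_needed(unknown))
            for ptype in PROBLEM_TYPES
            for unknown in ("whole", "part1", "part2")]

print(len(subtypes))  # 12 subtypes: 3 unknown positions x 4 problem types
# As stated above: per type, one subtype needs addition, two need subtraction
adds = [s for s in subtypes if s[2] == "addition"]
print(len(adds))  # 4 (one per problem type)
```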
3.1 Students' problem‐solving process
Students' problem‐solving guidance process is portrayed in Figure 1. First, an appropriate problem is selected by the system from among the problems stored in the problem‐solving information database. At Stage 1, the assessment module for understanding the problem proposes responses to be selected by the student and other techniques for comprehending the problem. The plan‐making stage enables the student to build a schema representation of their solution. Stage 3 offers, according to the operation type, a calculation interface, followed by a multiple‐question form in which the student validates their solution. Problem information is provided at each stage of the problem‐solving process, and the assessment of each stage is recorded in the student‐tracking database. The system displays feedback messages once problem solving is complete.

3.1.1 Understanding the problem stage
During this stage, the system offers students the possibility to circle important words in the problem. Students also have to distinguish between what is known and what is requested in the problem by selecting appropriate responses. For illustration, Figure 2 displays the problem in the problem frame and check boxes for the needed answers. As the teaching language is in Arabic, the text must be read from right to left, thus adapting the learning system to the student‐specific cultural context.

3.1.2 Making a plan stage
The guiding process of plan elaboration is divided into four steps, as illustrated in Figure 3, to help students express the problem solution graphically. The first step consists of identifying the first operand and its value: the student selects the appropriate label from a list of operands and enters its value. The second step proposes a choice of operator, either addition or subtraction. At the third step, the student selects the second operand and its value. Finally, at the fourth step, all that is required is a result label. Depending on which part of the problem is missing, the result label may instead be requested at the first or the third step.

In the example of Figure 3, the student has selected the second label for the first operand and entered a value of 13; for the second operand, they have likewise chosen the second label with a value of 8 and applied subtraction between the two operands. Labels are presented so as to gauge whether the student understands the meaning of the operator.
After the student has pressed the “next” button, the system compares the solution plan created by the student with that built into the system, and the suggestions regarding the student's problem solving are stored in the student tracking database and displayed after the student completes the problem.
3.1.3 Executing the plan stage
At this stage, the system evaluates the student's procedural knowledge. As shown in Figure 4, the system provides a graphical preview of all the addition and subtraction worksheets in a vertical problem format. Large boxes are reserved for the digits of the operands, and little boxes are used if regrouping is required for subtraction (exchanging one of the tens for 10 units or one of the hundreds for 10 tens) or for addition when the process involves a carry‐over number. At most three steps are required to complete this stage, each aimed at manipulating the ones, tens, and hundreds. The student has to fill in the large boxes (ones, tens, and hundreds) of the result, and the little boxes if regrouping is needed for subtraction or if the addition necessitates carrying over. After the calculations of all the columns are finished, the system assesses the student's answer, and feedback is stored. A last stage is needed to validate the student's problem solution.
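The column-wise procedure the worksheet enforces can be sketched as follows; this is our reading of the interface, not the system's actual code. Each loop pass handles one column (ones, tens, hundreds), producing the result digit for a large box and a carry or borrow flag for a little box.

```python
def column_add(a: int, b: int):
    """Add two numbers (< 1000) column by column, recording the carries."""
    digits, carries, carry = [], [], 0
    for _ in range(3):                     # ones, tens, hundreds
        s = a % 10 + b % 10 + carry
        digits.append(s % 10)              # large box: result digit
        carry = s // 10
        carries.append(carry)              # little box: carry-over flag
        a, b = a // 10, b // 10
    return digits, carries

def column_sub(a: int, b: int):
    """Subtract b from a (a >= b, both < 1000), recording the borrows."""
    digits, borrows, borrow = [], [], 0
    for _ in range(3):
        d = a % 10 - b % 10 - borrow
        borrow = 1 if d < 0 else 0         # little box: regrouping needed?
        digits.append(d + 10 * borrow)     # large box: result digit
        borrows.append(borrow)
        a, b = a // 10, b // 10
    return digits, borrows

print(column_add(47, 85))   # ([2, 3, 1], [1, 1, 0]) -> 132, two carry-overs
print(column_sub(132, 85))  # ([7, 4, 0], [1, 1, 0]) -> 47, two borrows
```

Grading a column then amounts to comparing the student's entries in the large and little boxes against these computed digits and flags.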

3.1.4 Reviewing the solution stage
During this stage, the student answers questions as depicted in Figure 5. In order to validate the solution from the previous stage, the system proposes questions that are related to the problem, and the student must answer with true or false. After completing this stage, the student presses the evaluation button that triggers the system to evaluate the results, and messages appear to indicate whether any mistakes were made. Additionally, the correct problem‐solving steps are displayed simultaneously next to the student's answers.
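Our understanding of the variant check is that the true/false statements come from the fact family of the solved number sentence, that is, from combining addition and subtraction of the two parts and the whole. A minimal sketch under that assumption (the tuple encoding is ours):

```python
def fact_family(part_a: int, part_b: int, whole: int):
    """The four variants combining addition and subtraction of the two
    parts and the whole of the solved problem."""
    return [(part_a, "+", part_b, whole),
            (part_b, "+", part_a, whole),
            (whole, "-", part_a, part_b),
            (whole, "-", part_b, part_a)]

def holds(x: int, op: str, y: int, z: int) -> bool:
    """Is the statement 'x op y = z' true?"""
    return (x + y == z) if op == "+" else (x - y == z)

# For the solved sentence 13 - 8 = 5, the parts are 8 and 5, the whole is 13
variants = fact_family(8, 5, 13)
print(all(holds(*v) for v in variants))  # True: all four variants hold
# A distractor such as "8 - 5 = 13" should be answered 'false'
print(holds(8, "-", 5, 13))  # False
```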

One of the most common strategies for studying the problem‐solving process involves the use of an analytic scoring scale. Analytic scoring is an evaluation method that assigns point values to various dimensions of the problem‐solving episode (Charles, Lester, & O'Daffer, 1987).
The grading rubric assesses solutions in four specific areas. Following expert judgement, a maximum of 10 points is awarded per problem: two points for understanding the problem, three points for the making‐a‐plan stage, three points for procedural skills, and two points for reviewing the solution. Details of the stages' scoring are given in Table 1.
| Stage | Understanding the problem | Making a plan | Executing the plan | Reviewing the solution |
|---|---|---|---|---|
| Criteria | Complete understanding of the problem, with recognition of what is given and what is requested and correct identification of important words. | Accuracy of setting the plan and degree of describing and interpreting the operands and their values, the operation, and the missing value. | Accuracy of computation, with correct manipulation of numbers (adding/subtracting units, tens, and hundreds) and correct use of the regrouping and carrying‐over concepts. | Correct validation of the problem solution by selecting the correct problem–solution variants, obtained by combining addition and subtraction of the problem operands and missing parts. |
| Rubrics and scores | What is known: 0.7 pts; what is requested: 0.7 pts; identifying important words: 0.6 pts. | First operand label: 0.5 pts; second operand label: 0.5 pts; missing label: 0.5 pts; correct number sentence: 1.5 pts. | Units manipulation: 1 pt; tens manipulation: 1 pt; hundreds manipulation: 1 pt (for numbers without hundreds, this point is split equally between units and tens manipulation). | First variant: 0.5 pts; second variant: 0.5 pts; third variant: 0.5 pts; fourth variant: 0.5 pts. |
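The rubric totals exactly 10 points. A small sketch of how a per‐problem score could be assembled from the rubric items (the dictionary keys are our shorthand for the items in Table 1):

```python
RUBRIC = {
    "understanding": {"what_is_known": 0.7, "what_is_requested": 0.7,
                      "important_words": 0.6},                 # 2 pts
    "plan": {"first_label": 0.5, "second_label": 0.5,
             "missing_label": 0.5, "number_sentence": 1.5},    # 3 pts
    "execution": {"units": 1.0, "tens": 1.0, "hundreds": 1.0}, # 3 pts
    "review": {f"variant_{i}": 0.5 for i in range(1, 5)},      # 2 pts
}

def score(achieved: dict) -> float:
    """Sum the rubric points for the items a student got right."""
    return sum(RUBRIC[stage][item]
               for stage, items in achieved.items()
               for item in items)

full_marks = {stage: list(items) for stage, items in RUBRIC.items()}
print(round(score(full_marks), 6))  # 10.0
```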
4 METHODOLOGY
4.1 Participants and experimental design
The participants were 60 Grade 2 students attending four classes in two elementary schools in Tiaret Province in western Algeria. Students were selected on the basis of their average mathematics course scores over the first and second trimesters. All Grade 2 students scoring at or below the 50th percentile were chosen to participate, but the sample was reduced to 52 students on the basis of the provision of informed parental consent. We selected the 50th percentile as our cut‐off score to include children who had basic learning abilities and needed remedial instruction.
This study used a pretest–posttest control group design, a 2 × 2 mixed‐factorial design in which the between‐subjects variable was the treatment and the within‐subjects variable was the test. Participants were divided into two groups. One group was the focus of the experiment (i.e., the experimental group, in which 26 randomly assigned students used the developed system to solve word problems), and the other was the baseline (i.e., the control group, in which the other 26 randomly assigned students used a general strategy to solve word problems without the developed system). The levels of the between‐ and within‐subjects variables are shown in Table 2.
| Treatment vs. test | Pretest | Posttest |
|---|---|---|
| Using the developed system | Students' pretest scores | Students' posttest scores |
| Not using the developed system | Students' pretest scores | Students' posttest scores |
4.2 Materials
This study concentrated on students' solving of one‐step mathematical word problems. We utilized several one‐step word problems derived from various Grade 2 mathematics textbooks to teach word problem solving using the four steps of Polya's strategy instruction. In addition, situations involving the four problem types that did not include unknown information were developed for use during the training (treatment) phase. The related problems were collected, revised, and compiled into a database of 96 problems (25 of put‐together, 21 of change‐get‐more, 22 of change‐get‐less, and 28 of compare) for practising with the developed system. Each problem type involved three quantities, and the position of the unknown in these problems may be any one of the three quantities.
4.3 Procedure
We organized 10 meetings in which the students became familiar with our learning environment and practised with the word problem‐solving program based on Polya's four‐stage model. Training sessions were spread out over 5 weeks (twice a week, 90 min each). Before the first session, we pretested both groups (experimental and control); the test took 60 min, and the collected answers were graded according to the four categories of problems (see Section 3.1). After pretesting, the experimental sessions began. The experimental group students received instruction for solving one‐step problems through the four stages of the problem‐solving model using our developed learning environment. During each session, experimental group students solved eight problems (two of each type), and the control group students solved the same word problems on a paper worksheet. For this group, the problems were scored to give credit for the correct number sentence, computation, and labels in the answers. Finally, we organized a 1‐hr posttest for both groups.
5 RESULTS
5.1 Analysis of group differences
We used Box's (1949) M test of homogeneity of covariance matrices to test the multivariate homogeneity of variance–covariance matrix assumption. A nonsignificant Box's M test indicates that the groups (i.e., experimental and control) do not differ from each other and supports the assumption. The Box's M value of 1.459 was associated with a p value of 0.707, which is nonsignificant. Therefore, the covariance matrices were homogeneous.
5.2 Analysis of students' performance
A two‐way mixed analysis of variance with repeated measures on one factor was conducted to determine whether there was a statistically significant difference between students using the computer‐assisted mathematical problem‐solving system and students receiving general instruction. The independent variables were a between‐subjects variable, the group (experimental vs. control), and a within‐subjects variable, the test, with repeated measures of pretesting and posttesting. The dependent variable was the students' scores on the mathematical problem‐solving pretest and posttest. An α level of 0.05 was used for this analysis. Checks of the model assumptions of normality, homogeneity of covariance, and linearity were deemed satisfactory.
The main effect of test (pretest vs. posttest) was significant, as depicted in Table 3, F(1, 50) = 177.96, p < 0.001. A large effect size was evident (ηp2 = 0.781 > 0.14), indicating that posttest scores differed from pretest scores across the two groups. There was also a significant main effect of the group factor, as presented in Table 4, F(1, 50) = 5.39, p < 0.05, with a moderate to large effect size, uncovering differences in the scores between the two groups. In addition, there was a significant interaction in the score gains between the experimental and control groups, F(1, 50) = 47.04, p < 0.001, providing evidence that the change from pretest to posttest was greater in one group than in the other. The interaction was interpreted with a simple main effects analysis, focusing on the effects of time within each treatment group.
| Source | Time | Type III sum of squares | df | Mean square | F | Significance |
|---|---|---|---|---|---|---|
| Time | Linear | 75.310 | 1 | 75.310 | 177.965 | 0.000 |
| Time × Group | Linear | 19.906 | 1 | 19.906 | 47.040 | 0.000 |
| Error (time) | Linear | 21.159 | 50 | 0.423 |
- Note. Measure: score.
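The F value and effect size reported above can be checked directly from the sums of squares in Table 3, using F = MS_time / MS_error and ηp² = SS_time / (SS_time + SS_error):

```python
# Values taken from Table 3 (time effect)
ss_time, df_time = 75.310, 1
ss_error, df_error = 21.159, 50

ms_time = ss_time / df_time
ms_error = ss_error / df_error            # 0.423, as tabled
F = ms_time / ms_error
eta_p2 = ss_time / (ss_time + ss_error)

print(round(F, 2))       # ~177.96, matching the reported F(1, 50)
print(round(eta_p2, 3))  # 0.781, the reported partial eta squared
```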
| Source | Type III sum of squares | df | Mean square | F | Significance |
|---|---|---|---|---|---|
| Intercept | 3,307.522 | 1 | 3,307.522 | 822.118 | 0.000 |
| Group | 21.695 | 1 | 21.695 | 5.392 | 0.024 |
| Error | 201.159 | 50 | 4.023 |
- Note. Measure: score. Transformed variable: average.
The simple main effects analysis revealed a statistically significant difference between the two groups at posttest, F(1, 50) = 14.88, p < 0.05, but not at pretest, F(1, 50) = 0.012, p = 0.915. We can consequently deduce that problem‐solving capacity in the experimental group (M = 7.38, SD = 0.32) was significantly greater than that in the control group (M = 5.59, SD = 0.32). Moreover, the simple main effect comparison between pretest and posttest showed a large gain in the experimental group and a much smaller one in the control group, indicating that the training produced substantial progress chiefly in the experimental group. As Table 5 shows, the scores of participants in the experimental group increased from the pretest (M = 4.80, SE = 0.25) to the posttest (M = 7.38, SE = 0.33), F(1, 50) = 203.99, p < 0.005, whereas the scores of participants in the control group exhibited only a slight change from the pretest (M = 4.76, SE = 0.25) to the posttest (M = 5.59, SE = 0.33), F(1, 50) = 21.00, p < 0.005.
| Group | Time | M | SE | 95% confidence interval | |
|---|---|---|---|---|---|
| Lower bound | Upper bound | ||||
| Control | 1 | 4.769 | 0.252 | 4.263 | 5.276 |
| 2 | 5.596 | 0.328 | 4.938 | 6.255 | |
| Experimental | 1 | 4.808 | 0.252 | 4.301 | 5.314 |
| 2 | 7.385 | 0.328 | 6.726 | 8.043 | |
- Note. Measure: score.
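The confidence bounds in Table 5 are consistent with M ± t(0.975, df = 50) × SE. A quick check (SciPy is used here only for the t quantile; small discrepancies in the last digit come from the rounding of M and SE in the table):

```python
from scipy import stats

def ci95(mean: float, se: float, df: int = 50):
    """95% confidence interval as mean +/- t(0.975, df) * SE."""
    t_crit = stats.t.ppf(0.975, df)       # ~2.009 for df = 50
    return mean - t_crit * se, mean + t_crit * se

lower, upper = ci95(4.769, 0.252)         # control group, pretest row
print(round(lower, 3), round(upper, 3))   # ~4.263 5.275 (Table 5: 4.263, 5.276)
```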
5.3 Analysis of stages at which students encounter difficulties in the problem‐solving model
By exploring the log files of the problem‐solving stages recorded by the system for all students in the student‐tracking database, we obtained the results summarized in Figures 6 and 7 and learned the following:


First, the results for the experimental group were analysed by averaging the scores of each stage across the solved problems for all students and then comparing the average scores over three points in time (pretest–training–posttest). In general, the graph in Figure 6 shows a net improvement in scores during the first three stages over time, but the scores for the last stage (i.e., reviewing the solution) remained nearly constant, probably because the students were not able to relate the different forms of a unique problem. A deeper investigation is merited to determine the real reasons for this failure. A detailed review of our findings shows that, for most problems, a high score in “understanding the problem” means high scores in both the “making a plan” process and “executing the plan” skills. Although the overall correlation is robust, certain students did not follow this trend.
In addition, as stated in Section 1, our study investigated which stages of the problem-solving model are problematic for most students. The goal of this analysis was to identify the stages at which low achievers encounter difficulties in order to remedy these weaknesses. A visual examination of the word problem-solving progress data in Figure 7 revealed that students achieved about 70% of the first stage's reference score, 93.33% of the second stage's, 76% of the third stage's, and only roughly 44% of the last stage's. However, the score gain between pretest and posttest was highest for the understanding the problem stage (35%) and was 33%, 26.6%, and 4%, respectively, for the making a plan, executing the plan, and reviewing the solution stages.
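The stage-level figures above are simple ratios of achieved score to reference score. A small sketch makes the arithmetic explicit; the achieved/reference pairs are illustrative values chosen to reproduce the reported percentages (the raw stage scores are not published here), and the gain figures are taken directly from the text.

```python
# Stages of Polya's model as used in this study.
stages = ["understanding", "planning", "executing", "reviewing"]

# Illustrative achieved scores against a common reference score of 10,
# chosen to reproduce the reported ratios.
achieved = {"understanding": 7.0, "planning": 9.333, "executing": 7.6, "reviewing": 4.4}
reference = {stage: 10.0 for stage in stages}

# Percentage of each stage's reference score realized by students.
percent_of_reference = {s: round(100 * achieved[s] / reference[s], 1) for s in stages}
print(percent_of_reference)
# {'understanding': 70.0, 'planning': 93.3, 'executing': 76.0, 'reviewing': 44.0}

# Pretest-to-posttest score gains per stage, as reported in the text (%).
gains = {"understanding": 35.0, "planning": 33.0, "executing": 26.6, "reviewing": 4.0}
print("largest gain:", max(gains, key=gains.get))   # largest gain: understanding
print("smallest gain:", min(gains, key=gains.get))  # smallest gain: reviewing
```

The contrast the code surfaces is the one the paper argues from: the reviewing stage is both the weakest in absolute terms (44% of reference) and the least improved by training (4% gain).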
5.4 Analysis of students' opinions regarding using the computer‐assisted system and the resolution model
After the posttest was completed, students were asked to answer a questionnaire about their opinions of the computer-assisted system, the staged problem-resolution model, and the difficulties encountered during resolution. We also wished to learn more about the failures at the reviewing the solution stage. The organizing team helped students understand and answer all the questionnaire items. Table 6 presents the collected information.
| Question | Yes (%) |
|---|---|
| Do you think that the use of the computer‐assisted system helped you acquire more skills in the resolution of word problems? | 92.3 |
| Do you think that the multiple‐choice questions in the understanding the problem stage helped you understand the problem better? | 88.4 |
| Do you think that using the schema representation strategy in the making a plan stage facilitated your description of the solution? | 100 |
| Did the graphical presentation worksheet help you to perform calculations easily? | 100 |
| Do you think that the understanding the problem stage is the simplest stage? | 23 |
| Do you think that the making a plan stage is the simplest stage? | 46.1 |
| Do you think that the executing the plan stage is the simplest stage? | 30.7 |
| Do you think that the reviewing the solution stage is the simplest stage? | 0 |
| Do you think that the reviewing the solution stage is less important than the other resolution stages? | 96.1 |
Concerning the use of the system, Table 6 shows that the proportion of students who thought they had acquired more mathematical problem-solving skills was high (92.3%), suggesting that practising problem solving on the computer was quite valuable. Regarding the assistance provided to support students' problem solving, namely, the multiple-choice questions, the schema representation strategy, and the worksheet, the majority of students (88.4%, 100%, and 100%, respectively) thought these techniques aided their performance during all stages. When asked which stage was the simplest, 46.1% of the students named the making a plan stage, whereas 30.7% and 23%, respectively, considered the executing the plan stage and the understanding the problem stage the simplest. No students thought the reviewing the solution stage was the simplest (0%). These outcomes indicate that students had encountered problems when reviewing their solutions and had failed to match the problem with its variants (the same problem with a different missing part). Consistent with this, 96.1% of students believed that the reviewing the solution stage was less important than the other resolution stages.
6 DISCUSSION
In this paper, two issues have been addressed. First, we investigated what students gain from computer-assisted word problem solving that applies Polya's strategy by providing assistance at each stage. Second, we explored the stages of the problem-solving model that are problematic for most students. Overall, the findings support the potential of a stage-based word problem-solving instructional program to address the knowledge gaps of students with learning difficulties and enhance their problem-solving performance.
6.1 Influence on students' problem‐solving ability
The results of this study support the findings of existing studies proposing that employing Polya's methods can significantly improve the overall performance of students (Craig, 2016; Özsoy & Ataman, 2017; Romiszowski, 2016). The statistical results show that both the students' problem-solving performance and their problem-solving ability improved via the proposed learning approach. It seems that the program, through its use of the model, created a powerful learning environment in which students overcame their difficulties. Scores for participants in the experimental group increased from the pretest (M = 4.80, SE = 0.25) to the posttest (M = 7.38, SE = 0.33). These findings converge with prior findings that providing assistance at each episode of the solving process is very helpful for enhancing students' problem-solving ability (Chang et al., 2006; Huang et al., 2012; Lee, 2017).
Several studies have investigated the effectiveness of visualization support on problem-solving achievement (Jitendra, 2007; Jitendra et al., 2007; Reinhard, Hesse, Hron, & Picard, 1997; Reusser, 1996). This study continued the convention of using such assistance but was also marked by a few major differences from previous studies. The first was the use of multiple-choice questions at the understanding the problem stage, where students had to identify what was known and what was requested in the problem. The second was the use of the schema representation strategy during the making a plan stage to enable students to describe the solution steps in detail by ordering the operands of the problem. The third was the vertical graphical presentation of addition and subtraction worksheets, which made it easier for students to perform regrouping when subtraction was needed to solve the problem or carrying when addition was required.
6.2 Diagnosis of difficult steps of the problem‐solving model
On the basis of the results of this study, students achieved a successful outcome during the first three stages but failed most of the time at the last stage (students achieved about 70% of the first stage's reference score, 93.33% of the second stage's, 76% of the third stage's, and only roughly 44% of the last stage's). This finding indicates that students performed at a high level during the making a plan stage and had strong knowledge at the understanding the problem and executing the plan stages. In contrast, students encountered problems when reviewing their solutions, signifying that they did not recognize the variants of the problem or perhaps felt they had accomplished the objective of their work once their calculations were finished. These results agree with prior findings in the literature (Huang et al., 2012), where only 5.9% of students considered the double-checking step the easiest. In addition, Yang et al. (2016) stated that students became more capable of using mathematical representations and making equations (the making a plan and executing the plan stages) but still needed more practice in using mathematical language to explain their solutions (reviewing their solutions). Nevertheless, Chang et al. (2006) showed that only 16.7% of students highlighted important information (the understanding the problem stage) and only 42% of the students constructed complete solution trees (the making a plan stage). This may seem confusing; however, those authors claimed that their training had resulted in significant progress only in their experimental group.
This means that although the computer-assisted system improved the problem-solving performance of the experimental group, further research is necessary to find ways to better design the reviewing the solution stage. One possible answer is to teach students to recognize variants of a given problem or to translate a problem into requested variant-type problems (put-together, change-get-more, change-get-less, and compare).
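One way to operationalize this suggested remedy is to generate, from a single base addition/subtraction fact, the four variant types named above, each hiding a different quantity so that students must review the same relation from a new angle. The sketch below is a hypothetical illustration; the function name and word-problem templates are our own, not part of the system described in the paper.

```python
# Generate the four word-problem variants (put-together, change-get-more,
# change-get-less, compare) from one base fact a + b = c. Each variant hides
# a different part of the relation, which is what the reviewing stage asks
# students to recognize. Templates are illustrative only.
def make_variants(a: int, b: int) -> dict:
    c = a + b
    return {
        "put-together": f"Ali has {a} marbles and Lina has {b}. "
                        f"How many do they have together? (answer: {c})",
        "change-get-more": f"Ali had {a} marbles and got {b} more. "
                           f"How many does he have now? (answer: {c})",
        "change-get-less": f"Ali had {c} marbles and gave away {b}. "
                           f"How many are left? (answer: {a})",
        "compare": f"Ali has {c} marbles and Lina has {a}. "
                   f"How many more does Ali have? (answer: {b})",
    }

variants = make_variants(3, 5)
for name, text in variants.items():
    print(f"{name}: {text}")
```

Presenting all four variants of the same fact (here 3 + 5 = 8) after a student finishes a problem would give the reviewing stage concrete material, instead of asking students to re-check a calculation they already consider complete.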
6.3 Influence on students' opinions about using the resolution model
The majority of students thought that the resolution model helped them acquire more skills in mathematical problem solving, suggesting that practising problem solving through the computer was quite valuable. This result is consistent with the findings of other researchers (Huang et al., 2012; Yang et al., 2016). However, no students thought the reviewing the solution stage was the simplest.
7 CONCLUSION
Mathematics education in Algeria has lacked practical and effective descriptive methods readily usable in schools, and students who have experienced mathematical learning difficulties have become unable to benefit from expanded mathematical curricula. Our study was designed to offer one possible solution to this problem. We developed a computer‐assisted problem‐based learning system to help low‐achieving elementary students improve their abilities in solving basic word‐based addition and subtraction questions and enhance their willingness to continue learning. Our system is based on Polya's four problem‐solving steps; the emphasis when using this model was on dividing the problem‐solving procedure into stages so as to diagnose those stages at which errors occur when a student encounters difficulties in mathematics.
The results from this work confirmed the effectiveness of the computer‐assisted mathematical problem‐solving system in providing students with the opportunity to solve word‐based addition and subtraction questions through making more prominent the parts of the problem‐solving process they often ignore.
Although this study provides insights into what kinds of assistance determine students' word-based problem-solving performance under the proposed resolution model, further investigation is needed to confirm and extend its results. Several issues should therefore be considered in future research. First, extended studies could involve different groups of subjects who experience different solution-review techniques in the last stage of the resolution process, in order to draw firmer conclusions. Second, further study may add more variables related to word-based problem-solving performance, such as intelligence, learning materials, learning methods, and parents' socio-economic background (Oloruntegbe, Ikpe, & Kukuru, 2010). Finally, the learning approach was validated over a training period of 5 weeks. Future research should extend the duration of intervention programs in order to examine the maintenance of word problem-solving skills over time (on a retention test) and the transfer of the learned skills to a school-administered, standardized mathematics achievement test.
ACKNOWLEDGEMENT
This work is part of AMIAC Extension Research Project 20/U311/4227, supported by the Algerian Ministry of Higher Education and Scientific Research.