
Exploring differences in primary students’ geometry learning outcomes in two technology-enhanced environments: dynamic geometry and 3D printing

Abstract

Background

This paper compares the effects of two classroom-based technology-enhanced teaching interventions conducted with sixth-grade (ages 11–12) classes in two schools. In one school, the intervention involved the use of a class set of 3D Printing Pens; in the other, dynamic geometry environments were used for inquiry-based learning of the relations among the number of vertices, edges, and faces of prisms and pyramids. An instrument guided by the van Hiele model of geometric thinking was designed and administered to the two groups as pretests, posttests, and delayed posttests to assess students’ prior knowledge before the intervention, the learning outcomes obtained immediately after the intervention, and the retention of knowledge after the interventions had been completed for a sustained period of time. The purpose of this study is to explore differences in geometry learning outcomes in two technology-enhanced environments: one involving dynamic, visual representations of geometry and the other involving embodied actions of constructing physical 3D solids.

Results

The results show that students using dynamic geometry improved at a higher rate than those using 3D Pens, whereas students aided by 3D Pens demonstrated better retention of the properties of 3D solids than their dynamic geometry counterparts. Specifically, the posttest results show that the dynamic geometry environment (DGE) group generally outperformed the 3D Pen group across categories, and the DGE group’s outperformance on “advanced” questions implies that the DGE technology had a stronger effect on higher levels of geometric learning. However, the ANCOVA results suggest that the retention effect was more pronounced with 3D Pens.

Conclusions

This study has established evidence that DGE instruction produced strong but relatively short-lived geometry learning outcomes, while 3D Pen instruction helped solidify the knowledge gained. The results further shed light on the effect of visual and sensory-motor experiences on school mathematics learning and corroborate previous work showing that gesture is particularly effective at promoting long-lasting learning.

Technology-enhanced mathematics learning has been widely studied in recent years (see, for example, Ball et al., 2018). Most notably, the last two decades have seen a major line of inquiry on the use of dynamic geometry environments (DGEs) for affording dynamic modes of thinking and interacting with mathematical concepts (Baccaglini-Frank, 2019; Leung, 2008; Ng & Sinclair, 2015). From the lens of semiotic mediation (Bartolini Bussi & Mariotti, 2008), the direct manipulations with mathematical objects in DGEs can be internalized as a set of mathematical signs with invariant properties, thereby leading to generalizations about theoretical definitions in mathematics (Leung, Baccaglini-Frank, & Mariotti, 2013). From a different perspective, the manipulability of DGEs offers an increased degree of embodiment for technology-enhanced learning with the dragging action affording a form of motoric engagement and also gestural congruency when operating on touchscreen devices (Ng, 2019). Indeed, the emerging viewpoint of embodied cognition (Lakoff & Núñez, 2000) emphasizes mathematical cognition as deeply rooted in the body’s interactions with the environment and materials (e.g. tools). Embodied cognitive approaches to learning predict that sensorimotor experiences, including visual perception and bodily actions, strengthen students’ sense-making processes, especially their visualizations and spatial reasoning in science, technology, engineering, and mathematics (STEM) disciplines (Weisberg & Newcombe, 2017). Research has shown that the transition from action to abstraction for mathematics and science learning can be supported through gestures (Novack, Congdon, Hemani-Lopez, & Goldin-Meadow, 2014), translating representations (Stull, Hegarty, Dixon, & Stieff, 2012), and analogical mapping (Jamrozik, McQuire, Cardillo, & Chatterjee, 2016).

The ability to visualize and reason spatially requires strong perceptions of how shapes are positioned and transformed in relation to one another. In other words, visual and kinesthetic experiences, in both digital and tactile environments, can help strengthen one’s ability to visualize and learn geometry. For example, Laborde (2008) shows that the tools available in “3D DGEs” (i.e. DGEs that support manipulations in 3D scenes) may not only assist with visualization processes but also enlarge their range. Specifically, 3D DGEs enable learners to construct, observe, and manipulate geometrical figures in a “3D-like space,” through which their perception of spatial positions and spatial relations can be strengthened (Christou, Jones, Mousoulides, & Pittalis, 2006). On the other hand, the first author (Ng & Chan, 2019; Ng & Ferrara, 2019) has previously examined a Papert-inspired, “learning as Making” pedagogy, one that facilitates geometry learning through the hands-on construction of physical artefacts in a 3D printing environment. The author concluded that the use of 3D Printing Pens provides low-floor, high-ceiling opportunities for students to use their hands to develop their spatial skills and enhance integrated STEM learning (Ng & Chan, 2019). In relation to embodied cognition, the use of 3D Pens facilitated hand movements that supported gestural forms of thinking about mathematics concepts, such as revolution about an axis, perpendicularity, and slope (Ng & Sinclair, 2018). Despite the significant role that the body and visual perception play in cognitive processes, comparisons of their effects on mathematics learning remain relatively scarce.

This study compares two technology-enhanced environments that incorporate multi-sensory experiences into learning early geometry. The first is a computerized environment that uses pre-made applets to support the visualization of 3D figures through virtual transformations (such as resizing, rotating, and displaying nets of solids) in DGEs with mouse input. Unlike the first environment, which engages primarily vision (Fig. 1a, b), the second is a hands-on, artefact-construction approach to learning early geometry; it involves the use of 3D Pens to construct physical 3D solids and thereby explore their 0D, 1D, and 2D parts (Fig. 1c). According to Laborde (2008), construction tasks require a cognitive process of deconstruction. The novelty of the 3D Pen environment lies in the fact that deconstruction is possible not only with 0D or 1D parts, as with paper-and-pencil, but also with 2D or 3D parts. This approach also demands that students actively “make” 3D models with their own hands, besides coordinating their vision with the artefacts. Given our interest in embodied mathematics learning, 3D Pens afford increased immediacy and sensory interactions with mathematical representations that are lacking in screen-based tools (Jackiw & Sinclair, 2009). In this light, this study explores differences in geometry learning in two technology-enhanced environments to further our understanding of the effect of embodied interactions on school mathematics learning.

Fig. 1

a, b A computer applet that performs virtual transformations of various 3D figures. c Constructing a physical artefact (i.e. a cube) with a 3D Pen

Research background

Student difficulties in the learning of 3D geometry

Research literature points out two main difficulties underlying spatial skill development in 3D geometry. First, as much as iconic visualization bearing on a shape helps us “see” the figure in 3D geometry (Duval, 2005), students rely too heavily on the perceptual attributes of 3D figures. According to van Hiele (1986), being able to discern and describe shapes based on their appearance is the first necessary step in developing geometric thinking; however, students remaining at this level will fail to decompose a shape into its basic elements and identify its properties (Hershkowitz, 1989; Vinner & Hershkowitz, 1983). In particular, Hershkowitz (1989) found that the prototype phenomenon existed in both students’ and teachers’ visualization processes. Reasoning based on a prototypical figure is misleading, since the attributes possessed by a prototype do not necessarily reflect the crucial attributes of a generic figure. Moreover, the more crucial attributes a geometrical concept has, the less likely it is that the prototypical figure supports visualization of the concept. Clements, Swaminathan, Zeitler Hannibal, and Sarama (1999) showed that students who reasoned based on the properties of shapes outperformed students using visual explanations in classifying 2D shapes, yet a large number of students still could not distinguish between classes of shapes or justify their class selections.

A study carried out by Hallowell, Okamoto, Romo, and La Joy (2015) pointed out that children have difficulties matching planes with the faces of solids. For example, when shown a triangular-prism manipulative, children were unable to match the plane-rectangle stimulus with the rectangular face of the manipulative. For a solid-cone manipulative, children frequently misidentified it as a match with the solid-pyramid stimulus item based on their mutual pointiness. These studies provide insight into the importance of not over-relying on the visual attributes of shapes, i.e. level 1 of van Hiele’s (1986) model of geometric thought, especially in 3D geometry, where figures’ conjunctive attributes are even more complex and identifying the elements composing a 3D figure is even more difficult. As Duval (2005) explains, iconic visualization, in focusing on the contours of a global object, does not consider the faces, edges, or vertices of the object. A non-iconic visualization, via breaking the object into components or transforming it into another figure, is needed to completely visualize a 3D figure and its properties.

Second, mental rotations and mental transformations of 3D figures are found to be very challenging for school children (Ozdemir, 2010), and this compounds their difficulty in moving toward van Hiele’s (1986) second level of geometric thought, describing shapes on the basis of their properties. Hershkowitz (1989) noted that, influenced by the prototype phenomenon, teachers and students regarded geometric concepts as static and fixed, allowing for little change or flexibility of form in their geometric thinking. Bruce and Hawes (2015) studied various tasks aimed at improving children’s mental rotation ability; the results suggest that improvements on 3D tasks were not as large as on 2D tasks, and that young children made minimal improvements on both. The researchers also identified that children were more successful at mental rotations when engaged with real objects as opposed to images of 3D figures shown on paper. Further research is needed to underscore the effect of sensorimotor experiences in supporting mental rotations and transformations in 3D geometry.

Technologies for learning 3D geometry

As early as the 1980s, Papert pioneered a digital environment, Logo, in which children can construct and explore 2D geometry with commands that embody motions (e.g. forward) and turns (e.g. right). As such, one can go beyond the perceptual level to identify the components comprising a figure and to understand the notion of angles and angular rotation (Papert & Harel, 1991). A key contribution of digital technologies such as Logo, and DGEs in general, was to offer a powerful, temporalized representation of mathematical ideas through the lens of continuity and continuous change (Jackiw & Sinclair, 2009). In terms of 3D geometry learning, the representations generated by a 3D DGE are complicated by the medium of the flat screen through which they are presented. Nonetheless, Laborde (2008) argues that the manipulability of Cabri3D, such as the ability to rotate objects and change views dynamically onscreen, helps strengthen visualization and cognitive processes. A classic example was provided in which middle and high school students often misconceive that two lines intersect in 3D because they intersect on the diagram. The interactive aspects of DGEs can enable opportunities to observe and make sense of 3D geometrical relations that would otherwise be difficult to access in static 2D representational environments, especially since mental rotations are challenging for children (Bruce & Hawes, 2015). Moreover, Laborde (2008) suggests that 3D objects can be used to support induction in Cabri3D before they are used at the level of proof, thereby encouraging geometrical sense-making.

Besides strengthening spatial sense through visual feedback, 3D DGEs may provide an integrated environment for creating, analyzing, and investigating 3D figures. For example, Christou et al. (2006) designed 3DMath, which facilitated informed manipulations such as translations, reflections, and rotations, with the intention of supporting students in exploring and anticipating the results of a given transformation. This is in accordance with the use of DGEs to support dynamic modes of learning geometry, namely identifying crucial attributes and making generalizations about shapes “by generating examples, observing, and experimenting with examples as the basis for generalized conjectures” (Erez & Yerushalmy, 2006, p. 274). In particular, the dragging tool in DGEs allows the continuous transformation of a shape in 2D or 3D to explore critical or defining attributes of different shapes as well as to understand their hierarchical relations. Additional dragging capabilities in 3D DGEs include controlling the speed of rotation, resizing proportionally in all dimensions or only one dimension (Christou et al., 2006), and choosing which 0D, 1D, 2D, and 3D objects to drag (Laborde, 2008). On the other hand, a key concern for researchers is whether DGEs help children transition to van Hiele’s third level of thought, which involves recognizing the critical attributes and their relationships. Further, more empirical evidence is needed to investigate how well children retain their knowledge after receiving instruction with DGEs.

Research purpose and questions

Scholars agree that the use of technological tools encourages learners to become aware of the actions they perform on shapes and space (Sarama & Clements, 2002). In particular, DGEs offer visual means to strengthen one’s ability to visualize in 2D and 3D. Such affordances include resizing, rotating, and performing other geometric constructions in 3D as well as changing views to overcome the difficulties associated with visualizing 3D objects on 2D surfaces (i.e. computer screen). By contrast, the use of 3D Pens allows learners to explore mathematics in intuitive, physical, and embodied ways. For example, the 3D Pens invite students to perform physical compositions of 3D solids by their 0D (vertices), 1D (edges), and 2D (faces) parts with their hands, which may support their visualization and development of spatial skills. Thus, this study investigates two kinds of embodied learning, via the construction of digital (in DGEs) and physical artefacts (with 3D Pens), in terms of facilitating higher mental processes of geometric thinking as well as better retention of concepts. Further, this study uses statistical methods to explore differences in students’ geometry learning outcomes in these two technology-enhanced environments.

This study poses two research questions as follows:

  1. How does students’ geometric knowledge—in relation to understanding the number of vertices, edges, and faces of prisms and pyramids, as well as their relations—compare after engaging in learning activities with DGEs and with 3D Pens?

  2. Does digital-based (in DGEs) or hands-on constructionist learning (with 3D Pens) facilitate better retention of these concepts?

Methodology

Research design, participants, and setting

This study adopts a quasi-experimental research design without a control group to focus on differences of geometry learning outcomes between two intervention groups. This design provides a viable form of research when study units are not randomly assigned to observational conditions because of practical constraints; moreover, the lack of a control group was due to ethical considerations of denying potentially beneficial interventions (i.e. technology-enhanced learning) to students. The interventions varied at the school level instead of the classroom level due to the constraint for local schools to implement their school-based curricula. To strengthen the research design, the authors used the analysis of covariance (ANCOVA) to equalize the groups by treating the pretest score as a covariate.

Seven teachers and 174 sixth-grade (11–12 years old) students from two primary schools participated in this study. At school A, three classes of sixth-grade students (n = 65) participated in the designed classroom intervention in which DGE technology was adopted in instruction (“DGE group”). At school B, four classes of sixth-grade students (n = 101) participated in a different classroom intervention in which 3D Pens were adopted in instruction (“3D Pen group”). Case study screening was performed in the selection of participating schools by considering the schools’ socio-economic background (i.e. average; government-subsidized schools), class size (i.e. fewer than 25 per class), the teaching experience of the teacher participants (i.e. 5 to 10 years), and the students’ academic ability based on the schools’ banding (Yin, 2006). The authors employed a control technique, namely comparing homogeneous subgroups: students were grouped and compared on the basis of having the same pre-requisite knowledge from fifth grade and having learned topics in the sixth-grade curriculum in the same order.

Procedures

In response to the literature review on students’ difficulties with geometry learning, the topic in the “measure, shape, and space” strand in the local sixth-grade curriculum, “Exploration of 3D Shapes,” was chosen as the target learning goal in this study (Hong Kong Curriculum Development Council [HKCDC], 2017). The specific expected learning outcomes for sixth grade were to understand the relations among the number of sides of base, the number of edges, and the number of vertices of a prism, as well as those of a pyramid (HKCDC, 2017).
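The target relations can be made concrete: a prism whose base has n sides has 2n vertices, 3n edges, and n + 2 faces, while a pyramid with an n-sided base has n + 1 vertices, 2n edges, and n + 1 faces. A minimal sketch for checking these counts (illustrative only; not part of the study’s materials):

```python
def prism_counts(n):
    """Vertices, edges, faces of a prism whose base has n sides."""
    return 2 * n, 3 * n, n + 2

def pyramid_counts(n):
    """Vertices, edges, faces of a pyramid whose base has n sides."""
    return n + 1, 2 * n, n + 1

# Both families satisfy Euler's formula V - E + F = 2.
for n in range(3, 11):
    for v, e, f in (prism_counts(n), pyramid_counts(n)):
        assert v - e + f == 2
```

For instance, a triangular prism (n = 3) has 6 vertices, 9 edges, and 5 faces, which matches the chart the classes completed during the whole-class discussion.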

The respective classroom-based interventions took place over two 70-min lessons. The two lesson plans shared the same procedures, time allocations, and learning activities, differing only in the technology used during the activities. The first lesson focused on the target properties of different forms of prisms, while the second focused on those of pyramids. The lesson plans were co-developed by the researcher and the classroom teachers and then implemented in the classrooms of the DGE group and the 3D Pen group, respectively. Before the lessons were implemented, the teachers were briefed as follows to ensure consistency: the classroom teacher first led a review on naming prisms and pyramids by identifying the solids’ bases, e.g. triangular prisms and rectangular prisms. Second, an inquiry-based and student-centered activity took place in which students explored the target properties within the respective technology-enhanced environments.

In the 3D Pen group, every two students shared one 3D Pen to draw different prisms and pyramids. Without being given much guidance on the constructions, each student pair took turns drawing at least one of each kind of prism/pyramid, namely a triangular, a rectangular, and a pentagonal prism/pyramid, within the allotted time. In the DGE group, every two students were paired up at a computer, where they used a pre-made DGE to explore the nets of different kinds of prisms/pyramids and to visualize how they could be folded into the corresponding prisms/pyramids onscreen. The DGE allowed students to increase the number of sides of the prism’s or pyramid’s base conveniently; hence, most students explored prisms/pyramids with bases of up to ten sides during the class period. While the DGE environment supported changing views and varying the measurements and opacity of the 3D figure through mouse input, the construction of prisms/pyramids from their nets was fixed; in other words, it provided only one way of folding the solids. By contrast, the 3D Pen environment supported hands-on, flexible constructions of the 3D solids. For example, when constructing a triangular prism with 3D Pens, some students created the triangular base first, whereas others created one of its rectangular faces first (Ng & Ferrara, 2019).

In both environments, the students were given ample time to complete the activity. During the activity, the classroom teachers prompted the students to count the numbers of vertices, edges, and faces of the solids and to look for patterns in the numbers observed. At the end of the lessons, the teachers in both groups facilitated a whole-class discussion on the patterns observed and on generalizing the relations among the number of vertices, edges, and faces in any prism/pyramid. During this time, the teachers and students together completed a chart recording the number of vertices, edges, and faces for triangular, rectangular, pentagonal, and decagonal prisms/pyramids. Field notes and videotaped lessons were collected for reviewing how the lessons unfolded, but analysing the collected qualitative data was beyond the scope of the current work.

Measures

In accordance with the learning outcomes stated above, an assessment was designed to test students’ knowledge of the learning target before and after the lesson implementations. Consultation with mathematics educators and education experts was sought in the design process. Different versions of tests were used as pretests (T0), posttests (T1), and delayed posttests (T2). The designed tests aim to assess students’ prior knowledge before the teaching experiments started, learning outcomes immediately after intervention, and the retention of knowledge after the interventions had been completed for a sustained period of time. The pre- and posttests took place immediately (i.e. one school day) before and after the teaching experiments, and the delayed posttest took place 5 months after the lessons. The target concepts and relevant topics were not taught to the students during this period.

The pre-, post-, and delayed post-test questions mirrored each other in number and format, namely 15 questions totaling 27 fill-in-the-blank items. The questions in each version were unique, in the sense that pretest questions were regenerated with different numbers in the posttests and delayed posttests. Students received identical test questions across groups for each test; the questions can be further categorized into two levels of difficulty, i.e. “simple” and “advanced.” An example of a “simple” question would be to determine the number of vertices, edges, or faces in a prism or pyramid whose base has n sides (where 5 ≤ n ≤ 13, or n = 100), and an example of an “advanced” question would be to determine the exact name of the prism or pyramid by working backward given the number of vertices, edges, and faces, N (where 9 ≤ N ≤ 24).
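The “working backward” direction can be illustrated with a short sketch. This is a hypothetical helper, not the study’s instrument: it assumes an item supplies the solid’s family and one attribute count, and inverts the counting formulas (prism: V = 2n, E = 3n, F = n + 2; pyramid: V = n + 1, E = 2n, F = n + 1) to name the solid.

```python
def name_from_count(solid, attribute, count):
    """Recover the base's number of sides n from one attribute count,
    then name the solid ("solid" is "prism" or "pyramid")."""
    inverse = {
        ("prism", "vertices"): lambda c: c // 2,
        ("prism", "edges"): lambda c: c // 3,
        ("prism", "faces"): lambda c: c - 2,
        ("pyramid", "vertices"): lambda c: c - 1,
        ("pyramid", "edges"): lambda c: c // 2,
        ("pyramid", "faces"): lambda c: c - 1,
    }
    n = inverse[(solid, attribute)](count)
    names = {3: "triangular", 4: "rectangular", 5: "pentagonal", 10: "decagonal"}
    base = names.get(n, str(n) + "-gonal")
    return base + " " + solid
```

For example, a prism with 15 edges must have a 5-sided base, so `name_from_count("prism", "edges", 15)` yields "pentagonal prism" — the kind of inference the “advanced” items ask students to perform mentally.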

According to van Hiele (1986), recognizing and describing the properties of 3D figures based on visual characteristics correspond to levels 1 and 2 of geometric thinking (visualization and analysis). In turn, “simple” questions were designed to mirror geometric thinking at these levels by gauging students’ ability to visualize and perform mental rotations with or without a visual representation of the 3D figure. In contrast, “advanced” questions reflect students’ knowledge of the relations among the 0D, 1D, and 2D parts of geometric objects and the ability to perform informal logical reasoning with the properties (level 3, abstraction). In what follows, sub-categories are abbreviated by difficulty and element, e.g. “SF” for “simple” questions on faces and “AV” for “advanced” questions on vertices. Table 1 shows a breakdown of the types of questions in the pre-, post-, and delayed post-assessments.

Table 1 A summary of the types of questions in student assessments

Results

Descriptive statistics

Table 2 shows the descriptive statistics of students’ test scores across pretests, posttests, and delayed posttests. In particular, mean score/item (as opposed to raw score) was used to describe test scores in order to allow comparisons between (sub-)categories as well as ranking of them. When grading students’ responses, a correct item was given 1 point; hence, the mean score/item also represents the proportion of correct items in a given (sub-)category.

Table 2 Development of student scores from pretest to delayed posttest

As seen in Table 2, both the 3D Pen and DGE groups’ performances increased remarkably immediately after the interventions (see “change from T0 to T1”). In addition, the two groups took on different patterns of development 5 months after the intervention (see “change from T1 to T2”): the improvement of the 3D Pen group from T1 to T2 was greater than that of its counterpart. For example, the 3D Pen group consistently improved between the posttest and delayed posttest, while the DGE group performed at a lower or the same level between tests on most (sub-)categories. Further analyses follow to elaborate on these observations.

Pretest

After discounting two missing datasets in the 3D Pen group, an item analysis was performed to gauge the reliability of the test instruments. First, item difficulty was examined using the mean score on each item of a high group (HG) and a low group (LG), defined as approximately the top and bottom 27–28% of total scores, respectively. Then, the difference between the HG and LG means on each item was calculated as the item discrimination index. Finally, a t test was performed to determine whether the discrimination between HG and LG was significant for each item.
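The upper–lower item-analysis steps above can be sketched in code. The following is a hypothetical implementation run on synthetic 0/1 response data; the 27% cut-off and the HG–LG t test follow the description in the text, but the function and data are illustrative, not the study’s analysis.

```python
import numpy as np
from scipy import stats

def item_analysis(scores, frac=0.27):
    """Upper-lower item analysis on a (students x items) 0/1 matrix.

    Returns, per item: the discrimination index (HG mean - LG mean)
    and the p-value of a t test between HG and LG on that item.
    """
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    k = max(2, int(round(frac * len(totals))))   # top/bottom ~27% by total score
    low, high = scores[order[:k]], scores[order[-k:]]
    disc = high.mean(axis=0) - low.mean(axis=0)
    pvals = np.array([stats.ttest_ind(high[:, j], low[:, j]).pvalue
                      for j in range(scores.shape[1])])
    return disc, pvals

# Synthetic data: 200 students, 5 items; success driven by a latent ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
scores = (ability[:, None] + rng.normal(size=(200, 5)) > 0).astype(float)
disc, pvals = item_analysis(scores)
```

On ability-driven data like this, every item discriminates positively between the high and low groups, mirroring the pattern the authors report for their instrument.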

For the 3D Pen group, the mean score/item ranged from .29 to .71, the item difficulty ranged from .35 to .65, the item discrimination index ranged from .56 to 1, and the item discrimination levels were all strongly significant at p < .01. For the DGE group, the mean score/item ranged from .15 to .86, the item difficulty ranged from .22 to .78, the item discrimination index ranged from .28 to .94, and the item discrimination levels were all significant at p < .05, except for one “face” (“AF”) item, which had an item discrimination index of .28 and a significance level of p = .06. If this item were excluded from the item analysis, the mean score/item and item difficulty would remain the same, but the item discrimination index would range from .33 to .94. Hence, the item analysis indicated that all items were suitable for assessing student performance on the given concepts, with the exception of one item that was only acceptable in one intervention group. As such, this study kept all 27 items for further analyses.

Both participant groups were sixth-grade students at the time of the study and thus had already learned relevant knowledge about “faces” in their fifth-grade year, in line with the local curriculum. Given this background, it was expected that students would perform at a much higher rate in the pretest on tasks related to “faces” than on tasks related to “edges” and “vertices,” regardless of the difficulty level of the questions. Indeed, as shown in Table 3, students had higher mean scores on “faces” (M = .56 for the 3D Pen group, M = .75 for the DGE group) than on “edges” (M = .42 for the 3D Pen group, M = .27 for the DGE group) and “vertices” (M = .37 for the 3D Pen group, M = .35 for the DGE group). Likewise, the mean scores of “SF” and “AF” consistently ranked the highest among the seven sub-categories for both groups (Table 4). In addition, the two groups differed in their prior knowledge of some sub-categories in the pretest. Namely, the DGE group performed significantly better than their 3D Pen counterparts on “faces” (p = .001), including “SF” (p < .001) and “AF” (p = .021), while the 3D Pen group performed significantly better on “edges” (p = .015), including “SE” (p = .049) and “AE” (p = .012), before the intervention. No significant differences were found between groups on the categories “simple” (p = .843) and “advanced” (p = .971).

Table 3 Descriptive statistics and comparisons between the 3D Pen and DGE groups (by categories)
Table 4 Descriptive statistics and comparison between the 3D Pen and DGE groups (by sub-categories)

Posttest

Posttests were analysed by first conducting item analysis and calculating descriptive statistics. Because t tests on the pretest had shown that the two groups differed significantly on some sub-categories, an ANCOVA with pretest scores as the covariate was used to compare posttest and delayed posttest scores between the 3D Pen and DGE groups, thereby adjusting for pretest differences.
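The logic of an ANCOVA with pretest as covariate can be sketched as an OLS regression of posttest score on pretest score plus a group indicator; the coefficient on the group dummy is then the posttest difference adjusted for pretest performance. The example below uses synthetic data on the 0–1 mean score/item scale and is not a re-analysis of the study’s data.

```python
import numpy as np

def adjusted_group_effect(pre, post, group):
    """ANCOVA-style adjusted effect: the coefficient on the group dummy
    in an OLS regression of post on [intercept, pre, group]."""
    X = np.column_stack([np.ones_like(pre), pre, group])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return beta[2]

# Synthetic scores: both groups gain, group 1 gains 0.10 more after adjustment.
rng = np.random.default_rng(1)
n = 100
pre = rng.uniform(0.2, 0.8, size=2 * n)
group = np.repeat([0.0, 1.0], n)          # hypothetical group labels
post = 0.30 + 0.60 * pre + 0.10 * group + rng.normal(0, 0.05, size=2 * n)
effect = adjusted_group_effect(pre, post, group)
```

With a built-in group effect of 0.10, the recovered coefficient should land close to 0.10, illustrating how the covariate adjustment separates the group effect from pre-existing pretest differences.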

The item discrimination levels and item difficulty levels remained acceptable. Overall, students in the DGE group obtained a mean score/item of no less than .78 in every sub-category of the posttest, and the 3D Pen group no less than .70, indicating that students in both groups excelled in the posttest after the respective interventions, with the DGE group achieving a higher minimum threshold. Indeed, the DGE group’s posttest mean score/item increased from .46 in the pretest (σ = .27) to .85 (σ = .21), while the 3D Pen group achieved a smaller increase, from .45 in the pretest (σ = .33) to .78 (σ = .21).

Results of ANCOVA (Tables 3 and 4) suggest that, though differences between the two groups were not observed on the overall score (p = .103) or the category “faces” (p = .475), the DGE group performed significantly higher on “vertices” (p = .019), “AV” (p = .011), and “advanced” (p = .033), with a marginal advantage on “edges” (p = .055) but no difference on “simple” (p = .441) questions. In summary, students from both groups scored higher in all (sub-)categories after the respective technology-enhanced interventions, with the DGE group improving at a higher rate than the 3D Pen group on “vertices,” “edges,” “advanced,” and “AV.”

Delayed posttest

The retention effect of the interventions can be observed from the descriptive statistics shown in Table 2 and represented graphically in Fig. 2. As Table 2 shows and as noted earlier, the change in mean score/item from T1 to T2 was consistently greater for the 3D Pen group than for the DGE group. For example, the 3D Pen group improved on all sub-categories from T1 to T2, and its improvements exceeded the DGE group’s on “faces,” “edges,” and “vertices” as well as on all sub-categories of “advanced” questions. The DGE group showed a negative change from T1 to T2 in three sub-categories, although it did improve at a better rate on “simple” questions, including “SF” and “SE.” The graphs in Fig. 2 further depict the different trends of development from T1 to T2 between the two groups; that is, the retention of knowledge is shown to be greater for the 3D Pen group than for the DGE group on all but “simple” questions.

Fig. 2

Development of T0, T1, and T2 scores on the total score, vertices, advanced, SV, AF, AV, and AE

The ANCOVA results (Tables 3 and 4) provide further statistical evidence for the retention effect, with pretest scores as control variables. No differences were observed between the two groups on the delayed posttest total scores (p = .562), as with the posttest total scores (p = .103). However, for “vertices” and “advanced” questions, the score difference was significant at T1 (pvertices = .019, padvanced = .033) but not at T2 (pvertices = .235, padvanced = .252). Likewise, in the sub-category “AV,” the score difference was significant at T1 (p = .011) but not at T2 (p = .284). Taken together, these results suggest that the 3D Pen intervention produced a greater retention effect than the DGE intervention, particularly on “vertices,” “advanced,” and “AV” questions.

Discussion

Significance of the study

The goal of this study is to explore differences in geometry learning outcomes in two technology-enhanced environments, one that involves dynamic, visual representations of geometry and another that involves embodied actions of constructing physical 3D solids. The results further shed light on the effect of visual and sensory-motor experiences on school mathematics learning. The posttest results show that the DGE group generally outperformed the 3D Pen group across categories. This outperformance was significant on “vertices,” “advanced,” and “AV” questions, with a marginal advantage on “edges.” In response to the research question, the DGE group’s outperformance on “advanced” implies that the DGE technology had a stronger effect on higher levels of geometric learning (van Hiele, 1986). These results lend support to learning 3D geometry with digital, screen-based tools, not only in terms of mental rotation but also in terms of reasoning about the relations among the 0D, 1D, and 2D parts of geometric objects and performing informal logical reasoning with their properties. Even though 3D concepts are represented on a flat computer screen, the students achieved the desired geometry learning outcomes via an inquiry-based form of learning complemented by the manipulability of DGEs.

However, the ANCOVA results suggest that the retention effect was stronger with 3D Pens: the 3D Pen group’s T2 scores consistently increased from T1, and the difference between the two groups was no longer significant in any sub-category 5 months after the respective interventions were administered. It is concluded that students’ levels of geometric thinking did not differ significantly across groups in the long term, especially on “advanced” questions, in which the 3D Pen group caught up with their DGE counterparts. Hence, this study highlights embodied interactions with 3D Pens as having a positive and sustained effect on geometry learning. Meanwhile, despite minor losses between T1 and T2 in most sub-categories, the DGE scores were still higher than the 3D Pen scores. Therefore, further research on long-term retention of geometric learning is warranted, especially in light of the foundational role of mathematical literacy and sense-making in STEM education (English, 2016; Li & Schoenfeld, 2019).

Positive effects of DGEs on geometry learning

An important finding of the present study is that the DGE group outperformed the 3D Pen group on the posttest in several categories and one sub-category, including a mean score nearly 2 points higher (24.51 out of 29 vs. 22.59 out of 29). As informed by the literature, it is speculated that the DGE group’s outperformance on the posttest may be explained by two characteristics of DGEs that are not available in hands-on but non-digital learning environments, such as those with 3D Pens. First, DGEs are capable of showing a limitless number of examples, usually activated by the dragging tool. As prior research has pointed out, the dragging tool not only facilitates a continuous conception and visualization of mathematics but also readily supports conjecturing about mathematics by showing variance and invariance via visual means (Battista, 2008; Erez & Yerushalmy, 2006). Whereas the students in the 3D Pen group were able to construct and interact with three different kinds of prisms and pyramids during the 70-min lessons, their DGE counterparts were able to interact with more than ten (and, in theory, infinitely many) prisms and pyramids onscreen in the computerized environment, which may have supported their identification of the properties of 3D solids during the intervention.

Second, DGEs offer immediate feedback and manipulation possibilities that are not available in the 3D Pen environment. Christou et al. (2006) suggest that the exploratory nature of DGEs accords with the fallibilist approach to mathematics, which focuses on seeking evidence of whether a conjecture is valid. In the context of learning 3D figures and their properties, DGEs provide feedback as to whether the learner has input the correct number of vertices, edges, and faces of the 3D solids shown onscreen. This context is similar to that of Bokhove and Drijvers (2012), who reported positive effects of a digital intervention aimed at improving students’ algebraic expertise via a digital environment that supports feedback as a form of formative assessment. From the perspective of didactical situations (Brousseau, 1997), the DGE is an instrument that is part of the milieu, or learning situation, proposed by the teacher and which the learner employs as a means to accomplish the proposed task. In technological learning environments, feedback is an important feature that supports the evolution of learners’ strategies and mathematical knowledge development beyond merely validating one’s answers (Artigue, 2007). It is suggested that the combination of limitless examples and immediate feedback contributed to students’ strong performance on “advanced” questions at T1.

Hands-on mathematics and gestures

The present study suggests that both digital and hands-on learning environments improve knowledge acquisition and facilitate adequate retention of learning about 3D geometry. However, the retention effect of embodied interactions with 3D Pens was stronger than that of the DGE, as T2 score differences between the groups were no longer significant across all categories, including the “advanced” questions. These results lend support to the embodied cognition perspective and a hands-on approach to learning mathematics. According to embodied cognition theory, cognition is situated in our bodily and tactile interactions with the physical world. One prevalent use of our body is how we use our hands to touch, move, and make reference to things in the world. Recent research has highlighted the role of gestures, both in how embodied concepts are communicated and in how spatial, abstract, or physical information is encoded (Burte, Gardony, Hutton, & Taylor, 2017). In Novack et al.’s (2014) terms, the hand movements involved in using 3D Pens can be considered concrete gestures that preserve the embodied nature of the interaction found in physical manipulation. Compared with using a 2D input mouse to navigate a 3D scene, the novel 3D Pen environment facilitates a much stronger “connection between mathematical and pedagogic dynamisms” (Jackiw & Sinclair, 2009, p. 418) afforded by direct, hands-on interaction with mathematical representations. Furthermore, the results of this study corroborate previous work showing that gesture is particularly good at promoting long-lasting learning (e.g., Cook, Mitchell, & Goldin-Meadow, 2008). The unique embodied nature of using 3D Pens may thus explain the 3D Pen group’s improved T2 performance in some categories. By the end of T2, students in the 3D Pen group demonstrated knowledge of the properties of prisms and pyramids that matched that of their DGE counterparts, suggesting that visualization and geometrical thinking can be initially supported and then strengthened through hands-on learning with 3D printing technology.

Limitations of the study

As it was not feasible to control for students’ prior abilities, differences in prior knowledge of 3D geometry between the two groups might have influenced their subsequent performances after the interventions. According to the t test results comparing students’ pretest performances, pre-scores differed on “faces,” “edges,” “SF,” “SE,” “AF,” and “AE.” To counter this limitation, this study adopted analyses controlling for pretest scores to avoid possible bias; that is, ANCOVA with pre-scores as covariates was used to overcome the deficiency of t tests when pre-scores were not identical. Moreover, this study did not collect a precise gender breakdown of the participants and hence did not examine potential gender differences. Although the gender representations of the respective schools were not visibly different, there remains room for exploring variables such as gender, spatial skills, and motivation in relation to students’ geometry outcomes (Atit et al., 2020).

Conclusion

The present study has established evidence that the DGE instruction produced strong but relatively temporary geometry learning outcomes, while the 3D Pen instruction helped solidify that knowledge. From this, the specific question arises as to whether the gestures used to construct mathematical concepts with 3D Pens help form or simulate mathematical thinking. The answer to this question will inform wider inquiries into whether and how the gestures mobilized by 3D Pens affect learning and, more broadly, embodiment-oriented research on educational technologies in an era in which novel interfaces and human-computer interaction are becoming increasingly ubiquitous. Indeed, the 3D Pen environment is promising and warrants future research focusing on the immediacy of embodied interactions with mathematical representations. In the same vein, another fruitful line of research is to build on current work on using the touchscreen medium to support more intuitive and tactile experiences for learning geometry with DGEs.

Availability of data and materials

This study’s data is available for access through an institutional repository.

Notes

  1.

    The 3D Printing Pen, or 3D (Drawing) Pen, is a novel handheld 3D printing device that operates in the same manner as a 3D printer. It extrudes small, flattened strings of molten thermoplastic (ABS or PLA) and forms a volume of “ink” as the material hardens immediately after extrusion from the nozzle. As the pen moves along with the hand that holds it, a 3D model is created at once, either on a surface or in the air (for more information, see Ng, Sinclair, & Davis, 2018).

Abbreviations

ANCOVA:

Analysis of covariance

DGE:

Dynamic geometry environment

HKCDC:

Hong Kong Curriculum Development Council

STEM:

Science, technology, engineering and mathematics

References

  1. Artigue, M. (2007). Digital technologies: a window on theoretical issues in mathematics education. In D. Pitta-Pantazi, & G. Philippou (Eds.), Proceedings of the Fifth Congress of the European Society for Research in Mathematics Education (pp. 68–82).

  2. Atit, K., Power, J. R., Veurink, N., Uttal, D., Sorby, S., Panther, G., … Carr, M. (2020). Examining the role of spatial skills and mathematics motivation on middle school mathematics achievement. International Journal of STEM Education, 7. https://doi.org/10.1186/s40594-020-00234-3

  3. Baccaglini-Frank, A. (2019). Dragging, instrumented abduction and evidence, in processes of conjecture generation in a dynamic geometry environment. ZDM: The International Journal of Mathematics Education, 51(5), 779–791.

  4. Ball, L., Drijvers, P., Ladel, S., Siller, H.-S., Tabach, M., & Vale, C. (2018). Uses of technology in primary and secondary mathematics education: tools, topics and trends. Cham: Springer.

  5. Bartolini Bussi, M. G., & Mariotti, M. A. (2008). Semiotic mediation in the mathematics classroom: artifacts and signs after a Vygotskian perspective. In L. English, M. Bartolini Bussi, G. Jones, R. Lesh, & D. Tirosh (Eds.), Handbook of international research in mathematics education, second revised edition (pp. 746–783). Mahwah: Lawrence Erlbaum.

  6. Battista, M. T. (2008). Development of the shape makers geometry microworld. Research on Technology and the Teaching and Learning of Mathematics, 2, 131–156.

  7. Bokhove, C., & Drijvers, P. (2012). Effects of a digital intervention on the development of algebraic expertise. Computers & Education, 58(1), 197–208.

  8. Brousseau, G. (1997). Theory of didactical situations in mathematics. Dordrecht: Kluwer.

  9. Bruce, C. D., & Hawes, Z. (2015). The role of 2D and 3D mental rotation in mathematics for young children: what is it? Why does it matter? And what can we do about it? ZDM: The International Journal of Mathematics Education, 47(3), 331–343.

  10. Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2017). Think3d!: improving mathematics learning through embodied spatial training. Cognitive Research: Principles and Implications, 2. https://doi.org/10.1186/s41235-017-0052-9

  11. Christou, C., Jones, K., Mousoulides, N., & Pittalis, M. (2006). Developing the 3DMath dynamic geometry software: theoretical perspectives on design. International Journal for Technology in Mathematics Education, 13(4), 168–174.

  12. Clements, D. H., Swaminathan, S., Zeitler Hannibal, M. A., & Sarama, J. (1999). Young children’s concepts of shape. Journal for Research in Mathematics Education, 30(2), 192–212.

  13. Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2008). Gesturing makes learning last. Cognition, 106, 1047–1058.

  14. Duval, R. (2005). Les conditions cognitives de l’apprentissage de la géométrie: développement de la visualisation, différenciation des raisonnements et coordination de leurs fonctionnements [Cognitive conditions of geometric learning: developing visualisation, distinguishing various kinds of reasoning and coordinating their functioning]. Annales de Didactique et de Sciences Cognitives, 10, 5–53.

  15. English, L. (2016). STEM education K-12: perspectives on integration. International Journal of STEM Education, 3. https://doi.org/10.1186/s40594-016-0036-1

  16. Erez, M. M., & Yerushalmy, M. (2006). “If you can turn a rectangle into a square, you can turn a square into a rectangle...”: young students experience the dragging tool. International Journal of Computers for Mathematical Learning, 11(3), 271–299.

  17. Hallowell, D. A., Okamoto, Y., Romo, L. F., & La Joy, J. R. (2015). First-graders’ spatial-mathematical reasoning about plane and solid shapes and their representations. ZDM: The International Journal of Mathematics Education, 47(3), 363–375.

  18. Hershkowitz, R. (1989). Visualization in geometry – two sides of the coin. Focus on Learning Problems in Mathematics, 11, 61–76.

  19. Hong Kong Curriculum Development Council (2017). Supplement to mathematics education key learning area curriculum guide: learning content of primary mathematics. Hong Kong: The Printing Department.

  20. Jackiw, N., & Sinclair, N. (2009). Sounds and pictures: dynamism and dualism in dynamic geometry. ZDM: The International Journal of Mathematics Education, 41(4), 413–426.

  21. Jamrozik, A., McQuire, M., Cardillo, E. R., & Chatterjee, A. (2016). Metaphor: bridging embodiment to abstraction. Psychonomic Bulletin & Review, 23(4), 1080–1089.

  22. Laborde, C. (2008). Experiencing the multiple dimensions of mathematics with dynamic 3D geometry environments: illustration with Cabri 3D. The Electronic Journal of Mathematics and Technology, 2(1), 38+.

  23. Lakoff, G., & Núñez, R. (2000). Where mathematics comes from: how the embodied mind brings mathematics into being. New York: Basic Books.

  24. Leung, A. (2008). Dragging in a dynamic geometry environment through the lens of variation. International Journal of Computers for Mathematical Learning, 13(2), 135–157.

  25. Leung, A., Baccaglini-Frank, A., & Mariotti, M. A. (2013). Discernment of invariants in dynamic geometry environments. Educational Studies in Mathematics, 84, 439–460.

  26. Li, Y., & Schoenfeld, A. H. (2019). Problematizing teaching and learning mathematics as “given” in STEM education. International Journal of STEM Education, 6. https://doi.org/10.1186/s40594-019-0197-9

  27. Ng, O. (2019). Examining technology-mediated communication using a commognitive lens: the case of touchscreen-dragging in dynamic geometry environments. International Journal of Science and Mathematics Education, 17(6), 1173–1193. https://doi.org/10.1007/s10763-018-9910-2

  28. Ng, O., & Chan, T. (2019). Learning as making: using 3D computer-aided design to enhance the learning of shapes and space in STEM-integrated ways. British Journal of Educational Technology, 50(1), 294–308. https://doi.org/10.1111/bjet.12643

  29. Ng, O., & Ferrara, F. (2019). Towards a materialist vision of ‘learning as making’: the case of 3D Printing Pens in school mathematics. International Journal of Science and Mathematics Education, 18, 925–944. https://doi.org/10.1007/s10763-019-10000-9

  30. Ng, O., & Sinclair, N. (2015). Young children reasoning about symmetry in a dynamic geometry environment. ZDM: The International Journal of Mathematics Education, 47(3), 421–434. https://doi.org/10.1007/s11858-014-0660-5

  31. Ng, O., & Sinclair, N. (2018). Drawing in space: doing mathematics with 3D pens. In L. Ball, P. Drijvers, S. Ladel, H.-S. Siller, M. Tabach, & C. Vale (Eds.), Uses of technology in primary and secondary mathematics education (pp. 301–313). Cham: Springer. https://doi.org/10.1007/978-3-319-76575-4_16

  32. Ng, O., Sinclair, N., & Davis, B. (2018). Drawing off the page: how new 3D technologies provide insight into cognitive and pedagogical assumptions about mathematics. The Mathematics Enthusiast, 15(3), 563–578.

  33. Novack, M. A., Congdon, E. L., Hemani-Lopez, N., & Goldin-Meadow, S. (2014). From action to abstraction: using the hands to learn math. Psychological Science, 25(4), 903–910.

  34. Ozdemir, G. (2010). Exploring visuospatial thinking in learning about mineralogy: spatial orientation ability and spatial visualization ability. International Journal of Science and Mathematics Education, 8(4), 737–759.

  35. Papert, S., & Harel, I. (1991). Situating constructionism. In I. Harel, & S. Papert (Eds.), Constructionism (pp. 1–11). New Jersey: Ablex Publishing.

  36. Sarama, J., & Clements, D. H. (2002). Building blocks for young children’s mathematical development. Journal of Educational Computing Research, 27(1), 93–110.

  37. Stull, A. T., Hegarty, M., Dixon, B., & Stieff, M. (2012). Representational translation with concrete models in organic chemistry. Cognition and Instruction, 30(4), 404–434.

  38. Van Hiele, P. M. (1986). Structure and insight: a theory of mathematics education. Orlando: Academic Press.

  39. Vinner, S., & Hershkowitz, R. (1983). On concept formation in geometry. ZDM: The International Journal of Mathematics Education, 1, 20–25.

  40. Weisberg, S. M., & Newcombe, N. S. (2017). Embodied cognition and STEM learning: overview of a topical collection in CR:PI. Cognitive Research: Principles and Implications, 2(1), Article 38. https://doi.org/10.1186/s41235-017-0071-6

  41. Yin, R. (2006). Case study methods. In J. Green, G. Camilli, & P. Elmore (Eds.), Handbook of complementary methods in education research (pp. 111–122). Mahwah: Lawrence Erlbaum.

Acknowledgements

The authors would like to thank Prof. Steven Ross for his valuable feedback on this manuscript, as well as the teachers and students for their participation in the study.

Funding

This study is supported by the Research Grants Council (Hong Kong), Early Career Scheme (RGC ref no. 24615919).

Author information

Affiliations

Authors

Contributions

The first author conducted the research procedures and contributed towards major sections of the manuscript. The second author completed the quantitative data analyses, and the third author assisted in both the quantitative analyses and the research design. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oi-Lam Ng.

Ethics declarations

Competing interests

There is no potential conflict of interest in the work reported here.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ng, O., Shi, L. & Ting, F. Exploring differences in primary students’ geometry learning outcomes in two technology-enhanced environments: dynamic geometry and 3D printing. IJ STEM Ed 7, 50 (2020). https://doi.org/10.1186/s40594-020-00244-1

Keywords

  • Technology-enhanced learning
  • 3D printing
  • 3D pen
  • Dynamic geometry
  • Embodied cognition
  • Gestures
  • Mathematics education
  • Classroom interventions
  • ANCOVA