Influence of 3D models and animations on students in natural subjects
International Journal of STEM Education volume 9, Article number: 65 (2022)
Studies comparing the effect of dynamic and static visualization suggest a predominantly positive effect of dynamic visualization. However, the results of individual comparisons are highly heterogeneous. In this study, we assess whether dynamic visualization (3D models and animations) used in the experimental group has a stronger positive influence on the intrinsic motivation and learning outcomes of science students (Biology, Chemistry and Geology) than static visualization used in the control group, and whether selected variables (students’ gender, age, educational level, learning domain, and teacher personality) significantly affect the results.
This study was conducted in 2019 with a sample of 565 students from Czech middle (aged 11–15 years) and high (aged 15–19 years) schools using the following research tools: the Motivated Strategies for Learning Questionnaire, the Intrinsic Motivation Inventory and knowledge tests. The results show that using 3D models and animations in the teaching process significantly increased the students’ intrinsic motivation for learning natural sciences (more specifically, its components: (1) interest, (2) effort to actively participate in the educational process, (3) perceived competence and (4) understanding of the usefulness of the subject matter), with a mean Hedges’ g = 0.38. In addition, students in the experimental group reached a significantly higher level of Chemistry knowledge than their peers in the control group. Furthermore, through moderator analysis, we identified three moderator variables, namely student age, instructional domain and teacher personality, which affect intrinsic motivation in different ways. The strongest positive effect of dynamic visualizations was found among students aged 11–13, whereas the weakest positive effect was identified among students aged 14–16. Regarding the instructional domain, the strongest positive effects of animations and 3D models were found in Chemistry (g = 0.74) and Biology (g = 0.72), whereas the positive impact on Geology was significantly weaker (g = 0.45). Teacher personality was found to be a major moderator of student motivation, with significant differences (g = 0.40–1.24). Teachers’ attitude towards modern technology plays an important role in this effect.
Based on these findings, we conclude that 3D models and animations have a positive effect on students and that teachers should include these visual aids in their lessons. Teachers are therefore encouraged to implement these dynamic visual aids in their lessons regardless of their prior beliefs, and to receive adequate support during the implementation process if necessary.
A lack of Science, Technology, Engineering, and Mathematics (STEM) graduates has long troubled the European Union in general, and the Czech Republic in particular (Gago et al., 2005). Despite the growing number of scientific publications in the STEM field (Li, 2021; Takeuchi et al., 2020), there is currently a shortage of STEM graduates resulting from the relatively low number of high school students who enroll in science and technology degrees at universities (Eurostat, 2020), mostly due to the low popularity of some subjects. For example, Chemistry is an unpopular subject based on the results of student surveys (Beauchamp & Parkinson, 2008; Pavelková et al., 2010), because this subject is apparently too abstract for students, who struggle to visualize some fundamental concepts, such as an atomic orbital (Chen et al., 2015), and to understand the particulate nature of matter (Williamson & Abraham, 1995). Even Biology, which is regarded as an easy subject (Hanzalová, 2019), covers numerous topics requiring a high level of abstraction, including anatomic structures (Mitsuhashi et al., 2009) and cellular biology (Jenkinson, 2018), as well as molecular genetics concepts and processes (Malacinski & Zell, 1996; Rotbain et al., 2006). It is therefore necessary that students are sufficiently motivated to study science subjects, which means increasing students’ interest in the topics taught, along with overcoming challenging (mostly abstract) topics.
Visual representations have been developed to aid thinking, and we generally use them to better understand various data (Mazza, 2009; Ware, 2004). The goal is to visually represent data in such a way that the most important patterns are clearly distinguishable from their surroundings (Mazza, 2009), enabling us to capture a new piece of information and incorporate it into long-term memory (Craik & Lockhart, 1972). This makes visualizations particularly suitable for teaching such abstract topics.
Visualization can be divided into static visualizations (e.g., still illustrations, slides and photographs) and dynamic visualizations (e.g., animations, three-dimensional rotating models, simulations and videos). The latter have been gaining popularity as the use of graphics in computer-based educational environments has increasingly become commonplace (Lin & Atkinson, 2011).
Yet individual comparisons between the differential effects of static and dynamic visualization have yielded highly heterogeneous results (Kaushal & Panda, 2019).
Considering the above, this study aims to identify the best approach to increase students’ internal motivation for science subjects, because students who are more interested in natural sciences are also more motivated to study these subjects (Berg et al., 2003; Klahr & Nigam, 2004) and to understand them (Khishfe & Abd-El-Khalick, 2002). For this purpose, we assess the influence of dynamic visualization on primary and secondary school students in comparison with static visualization in science subjects (biology, chemistry and geology). More specifically, we examine the influence of static and dynamic visualization on students’ internal motivation (interest/enjoyment, effort, perceived competence, value/usefulness) and on the level of acquired knowledge on the subject matter.
One way to increase students’ interest in science subjects and to support their cognitive processes is to use visualization aids (Bilbokaitė, 2015; Nodzyńska, 2012; Popelka et al., 2019; Rotbain et al., 2006; Ryoo & Linn, 2012; Wu et al., 2001). Visual aids can help students understand particularly difficult and abstract topics (Bunce & Gabel, 2002; Harrison & Treagust, 2006) by stimulating their imagination and enhancing their ability to understand the subject matter, thereby improving the memorization of these concepts. Visualization can also enable students to adequately understand preconceptions (Tarmizi, 2010) while preventing the formation of misconceptions. Generally, visualization plays a key role in explaining the subject matter, focusing on features of microelements invisible to the naked eye (DiSpezio, 2010; Gomez-Zwiep, 2008; Herman et al., 2011). Some subjects, such as Biochemistry (Schönborn & Anderson, 2006) and the closely related molecular biology (Jenkinson, 2018; Marbach-Ad et al., 2008), cannot be effectively taught without visualization. Therefore, visualization tools are crucial for understanding and research in the molecular and cellular biological sciences (Schönborn & Anderson, 2006).
The benefits of visualization tools lie in facilitating the understanding process, best described by the so-called scaffolding theory (Eshach et al., 2011; Wood et al., 1976). “The scaffolding metaphor means that given appropriate assistance, a learner can perform a task otherwise outside his/her independent reach” (Eshach et al., 2011, p. 552). The scaffolding theory, originally requiring an adult to assist and help students (Wood et al., 1976), was subsequently extended by Puntambekar and Hübscher (2005) to teaching tools able to control and measure the amount of information given, thus reducing the number of acts needed to reach understanding (Puntambekar & Hübscher, 2005; Tabak, 2004; Wood et al., 1976). More recently, Chang and Linn (2013) showed that interactions with visualization tools are even more beneficial than visualization itself. Therefore, visualization aids that promote further interaction tend to be more effective.
Dynamic visualization aids (e.g., animations, simulations, three-dimensional rotating models and videos) can be used in the teaching process for several purposes. First, animations can serve as a means of gaining attention; this category includes various animated arrows or highlights (Berney & Bétrancourt, 2016). Secondly, animation may be used to demonstrate concrete or abstract procedures that the learner must memorize and perform, such as tying nautical knots (Ayres et al., 2009; Schwan & Riempp, 2004). Thirdly, animation-based teaching is effective in describing processes that change over time and space (Ainsworth & VanLabeke, 2004; Rieber, 1990; Schnotz & Lowe, 2003). Dynamic visualization is especially suitable for abstract objects that students cannot easily imagine. Therefore, teaching through dynamic visualization can be significantly more effective, especially in difficult scientific disciplines in which it supports students’ cognitive processes (Bilbokaite, 2015; McElhaney et al., 2015).
This advantage is most evident in processes that change over time (Ainsworth & VanLabeke, 2004; Rieber, 1990). For this reason, visualization is widely used in areas related to physical, chemical or biological disciplines. McElhaney et al. (2015) specifically mention that dynamic visualization can help pupils/students visualize unobservable dynamic phenomena, such as global climate change, tectonic plate motion, heat transfer, gene expression, cellular respiration and other cellular processes (e.g., cell division), i.e., topics taught in science subjects such as geology, biology and chemistry.
Advantages and disadvantages of dynamic visualization
Both advantages and disadvantages of using dynamic visualization in teaching have been reported in comparison with static visualization. The benefits of dynamic visualization include enabling and facilitating effects (Kühl et al., 2011; Schnotz, 2005; Schnotz & Rasch, 2005), because the continuous representation of changes supports the perceptual and conceptual processing of dynamic information (Berney & Bétrancourt, 2016). Dynamic visualization also prevents students from developing misconceptions and drawing erroneous conclusions from a mere static representation of the curriculum (e.g., misinterpreting a picture), which is associated with an unnecessary cognitive load (Bétrancourt et al., 2001; Kühl et al., 2011). In addition, dynamic visualization reduces the cognitive load associated with gradual steps (Berney & Bétrancourt, 2016) by helping students contextualize separate pieces of knowledge (for example, relationships among pictures or schemes), which lowers working memory demands. Another benefit of dynamic visualization is the ability to control its pace, such as pausing, rewinding or replaying (McElhaney et al., 2015).
Conversely, a disadvantage of dynamic visualization is the large amount of information presented (Ainsworth & VanLabeke, 2004; Bétrancourt & Réalini, 2005), all of which (even transient information) must be processed and stored in working memory, potentially leading to cognitive overload (Chandler, 2004; Chandler & Sweller, 1991; Jones & Scaife, 2000; Lowe, 1999; Mayer & Moreno, 2002). Dynamic visualization presents information only transiently, and (due to working memory overload) this information can be replaced by subsequent information (Bétrancourt & Tversky, 2000). By contrast, static images presenting different states or steps allow students to examine and compare these states, whereas dynamic visualization provides one step at a time. This stepwise presentation results in another disadvantage of dynamic visualization: the inability to compare individual steps (Bétrancourt et al., 2001). A further disadvantage is the split-attention effect: when multiple events overlap in a dynamic visualization (animation), attention may fragment, causing imperfect information acquisition (Lowe, 2003). Other disadvantages include oversimplifying a curriculum problem, which may give students the false impression that they understand it (Schnotz & Rasch, 2005).
Dynamic vs static visualization impact
Considering the widespread use of dynamic visualization, researchers have sought to study its impact on students. In 2000, a review by Bétrancourt and Tversky (2000) compared 17 studies on the differences between common educational methods (extrapolation and analysis, among others) and the educational process supported by animations. Most studies (10 of 17) showed a positive impact of using animations, but the remaining 7 found no effect or only non-significant effects of incorporating animations into the educational process. In 2007, Höffler and Leutner conducted a meta-analysis of 26 studies published in 1973–2003 (Höffler & Leutner, 2007), including 76 pairwise comparisons of the effect of dynamic versus static visualizations. This meta-analysis showed a positive effect of animations compared with static visualizations, with an average effect size d = 0.37, which indicates a small to medium effect. However, the authors also included video-based visualization in their study, and comparisons of static versus video-based visualization showed a significantly higher average effect (d = 0.76; a total of 12 comparisons) than the remaining comparisons based on computer graphics (d = 0.36; a total of 64 comparisons). The total number of participants was not specified.
The research of Höffler and Leutner was closely followed by a similar review study by Berney and Bétrancourt (2016), who analyzed research articles published up to 2013 and also focused on comparing the benefits of static and dynamic visualization. The authors included 61 published studies, totaling 140 pairwise comparisons of dynamic and static visualizations intended for teaching. In contrast to the previous study, which assessed effect size using Cohen’s d (Cohen, 1988), this meta-analysis expressed the magnitude of the effect as Hedges’ g (Hedges, 1981); the results nevertheless confirmed the positive effect of animations compared with static visualizations, with an effect magnitude of g = 0.23, which represents a small effect. As the studies included more than 7000 subjects, the results can be considered reliable. Comparing Cohen’s d from the previous meta-analysis with Hedges’ g from this analysis shows a decrease in effect size; the authors explain that the effect size is smaller because they included more studies and pairwise comparisons in the analysis. The authors further highlight that, although the overall effect was positive, almost 60% of the studies did not show significant differences between dynamic and static visualization.
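The relationship between the two effect-size metrics is mechanical: Hedges’ g is Cohen’s d multiplied by a small-sample bias correction factor, so with large samples the two are nearly identical. A minimal illustrative sketch (the group statistics below are hypothetical, not data from the cited studies):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d scaled by a small-sample bias correction."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # common approximation of the correction factor J
    return d * correction

# Hypothetical post-test scores of an experimental and a control group:
d = cohens_d(75.0, 70.0, 10.0, 10.0, 30, 30)  # 0.50
g = hedges_g(75.0, 70.0, 10.0, 10.0, 30, 30)  # slightly below d
```

Because the correction factor approaches 1 as samples grow, the drop from d = 0.37 (Höffler & Leutner, 2007) to g = 0.23 (Berney & Bétrancourt, 2016) reflects the different sets of included comparisons rather than the change of metric.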
The results from the aforementioned meta-analyses suggest a predominant, albeit slight, positive effect of dynamic visualization (most often in the form of animation). Moreover, the results from individual comparisons are highly heterogeneous. Some studies show the positive influence of animations on the educational process (Lin & Atkinson, 2011; Marbach-Ad et al., 2008; Özmen, 2011), whereas others are less clear in their conclusions (Boucheix & Schneider, 2009; Bulman & Fairlie, 2016; Mayer et al., 2007; Tversky et al., 2002).
Moderator variables influencing the effect of dynamic visualization on students
As detailed in the section above, previous empirical studies exploring the influence of animations lack uniform results (Kaushal & Panda, 2019). These disparities have led researchers to search for potential moderators of the effect of using dynamic visual aids on student performance. These factors, which our study also addressed, include the instructional domain (subject), student gender and education level.
Instructional domain (subject)
The influence of the instructional domain for which an animation was created was studied in the meta-analysis by Höffler and Leutner (2007). The results showed that the instructional domain is a determining factor of the effect size. The highest magnitude of effect was measured in chemistry (d = 0.75; a total of 7 comparisons), followed by mathematics (d = 0.62; 5 comparisons) and physics (d = 0.28; 39 comparisons). The smallest effect was, surprisingly, found in biology (d = 0.13; 12 comparisons). However, due to the low number of comparisons included in the overall result, the statistical power of this comparison was low (Höffler & Leutner, 2007).
The meta-analysis conducted by Berney and Bétrancourt (2016) also examined moderating variables that affect the effectiveness of animations in the teaching process. Among other things, this meta-analysis showed that the subject in which the analysis is performed is likewise a determinant of effect size. The highest effect was measured in “natural sciences” (g = 1.26; 8 comparisons), with a relatively large effect in chemistry as well (g = 0.77; 8 comparisons), but a low effect size in biology (g = 0.20; 33 comparisons). However, even here, only a few comparisons per subject were available, which reduced the statistical power of the results.
In the meta-analysis by Castro-Alonso et al. (2019), the influence of the subject on the effectiveness of dynamic visualization (animation) in teaching was also investigated. The authors focused on STEM and found that dynamic visualization is more effective in geology and other sciences (g = 0.38; 11 comparisons), followed by biology and medical sciences (g = 0.27; 11 comparisons), than in technical or mathematical subjects (g = 0.15; 15 comparisons) or in physics and chemistry (g = 0.19; 23 comparisons). Nevertheless, the number of overall comparisons was relatively low, which reduces the statistical power of the results.
Dividing studies by instructional domain also entails difficulties, such as differences in the attractiveness of the specific topic discussed. The content of a scientific discipline is not homogeneous, and one chapter may be more attractive to students than another, which strongly influences the overall results.
In their meta-analysis, Castro-Alonso et al. (2019) found that student gender is a key factor because dynamic visualizations are less effective in samples with a higher proportion of females. In particular, studies involving fewer than 59% females showed a moderately positive effect of dynamic visualization (g = 0.36, 35 comparisons), whereas studies involving 60% or more females did not show any dynamic visualization effect (g = 0.07, 47 comparisons). The authors suggested that the unequal ratio of females to males in some studies may be a significant factor in explaining variations in effect size across studies.
Unfortunately, the student gender factor has been overlooked in many studies (Garland & Sanchez, 2013; Schnotz et al., 1999; Wang et al., 2011), most of which do not even provide gender ratios for the whole sample (Castro-Alonso et al., 2019). In addition, many studies are conducted with undergraduate pedagogy and psychology students, and males are markedly under-represented in these degrees (Castro-Alonso et al., 2019).
Gender has a strong influence on cognitive load (Bevilacqua, 2017); thus, this factor must be analyzed. In their meta-analysis, Zell et al. (2015) concluded that gender has a significant effect on attention, memory and problem solving (d = 0.22), especially among the participants with the best results. Gender can also affect participants’ spatial imagination (Höffler, 2010; Ikwuka & Samuel, 2017; Wong et al., 2018; Zell et al., 2015).
The level of education plays a major role, mainly because cognitive ability correlates with age (within individual differences) (Damon et al., 2006). This is reflected not only in different subjects (instructional domains), but more specifically in individual topics. The level of education must also be taken into account when choosing teaching methods, because abstract and scientific thinking gradually develops at middle school age (Damon et al., 2006; Goswami, 2010).
The literature shows that dynamic visualizations and animations have a positive impact on school children (Bétrancourt & Chassot, 2008), university students (Jaffar, 2012) and adults (Türkay, 2016). McElhaney et al. (2015) assessed whether the effect of dynamic visualization depended on education level and found that the difference between the effects of dynamic and static visualizations is higher in primary and secondary school students (g = 0.27; 10 comparisons) than in post-secondary students (g = 0.07; 37 comparisons), for whom almost no effect was found.
The variable education level was also examined by Castro-Alonso et al. (2019), who concluded that the use of dynamic visualization is most effective among primary school students (g = 0.53), followed by secondary school students (g = 0.44), and the least effective among university students (g = 0.19) (Castro-Alonso et al., 2019).
A substantial amount of variance in instructional quality can be explained by teacher characteristics such as cognitive ability, personality, professional knowledge, constructivist beliefs and enthusiasm (Baier et al., 2019). Teacher personality plays an important role in the educational process and should not be omitted. Kim et al. (2019) showed that even though domains of teacher personality do not predict academic achievement, they do predict subjective measures of teacher effectiveness as well as evaluations of teaching. Extraversion and enthusiasm in particular have been identified as very strong predictors of instructional quality (Baier et al., 2019). Some of these domains are also crucial factors in the implementation and acceptance of technology in education (Tzima et al., 2019).
Objectives, hypothesis and research questions
The results from empirical studies are not uniform. Thus, further research is required to determine when animations are more effective than static visual aids (Kaushal & Panda, 2019), by continuing to explore dynamic visualizations and by defining potential moderators that may significantly affect the impact of these aids on students.
Currently, many ongoing discussions (especially among teachers and politicians) address the use of dynamic visualizations (e.g., animations, simulations, three-dimensional rotating models and videos) and their impact on the quality of the education process. Furthermore, the Strategy of Digital Education of the Czech Republic has been in effect since its approval in 2014 (MEYS, 2020). This strategy, aimed at the digitalization of education in middle and high schools, prioritized opening up the education system to new teaching methods through the use of digital technologies. Accordingly, new visualization equipment was purchased for 60 Czech schools. However, the effectiveness of these visual aids, their impact on the quality of the educational process and the influence of potential moderator variables must be evaluated before expanding this strategy to the entire country.
Considering the above, we conducted a proof-of-concept study to assess whether using visual aids positively influenced students. The basic research method was a comparative study in the form of a pedagogical experiment, which investigated the impact of dynamic visualization as a teaching tool on chemistry students (and students of other science subjects) in comparison with those taught using static representations. The representation of the curriculum (static vs. dynamic) was chosen as the independent variable, and we investigated whether this difference could cause a change in both the intrinsic motivation of the students and their level of acquired knowledge (dependent variables). Thus, this is research in science didactics using ICT technology as a teaching tool, delivering teaching content and motivating students in the process. Our study was designed and conducted at middle and high schools and mainly focused on the influence of using 3D models and animations in lessons of natural sciences (Biology, Chemistry and Geology), more specifically on students’ intrinsic motivation and on their level of knowledge. Furthermore, the roles of potential moderator variables, such as gender, level of education, instructional domain and teacher personality, are discussed in our research.
The aim of our research was to assess the effect of 3D models and animations used in natural science classes on students. Effect size was measured using Hedges’ g.
The following research questions were developed:
How does the use of 3D models and animations affect students’ intrinsic motivation—more specifically students’: (1) interest; (2) effort to actively participate in the educational process; (3) perceived competence; (4) understanding of the usefulness of the subject matter?
How does this effect change after a three-month intervention of regular use of 3D models and animations?
What is the effect of using 3D models and animations on acquired knowledge in Chemistry and Biology?
What roles do potential moderators (instructional domains, gender, level of education and teacher personality) play in the effectiveness of 3D models and animations?
Based on the results mentioned in the previous section, we set the following hypotheses:
3D models and animations have a positive influence on the intrinsic motivation of students in comparison with static visualization.
3D models and animations have a positive effect on learning outcomes in comparison with static visualization.
The variables of gender, age, educational level, learning domain, and teacher personality significantly affect the results.
The first and second hypotheses are based on the assumption that visualization can serve as a scaffolding tool for learners (Puntambekar & Hübscher, 2005). These hypotheses are supported by the benefits of dynamic visualization reported in Chapters 1.1 and 1.2, i.e., dynamic visualization helps students visualize abstract objects that they struggle to imagine (Bilbokaite, 2015; McElhaney et al., 2015) and unobservable dynamic phenomena (McElhaney et al., 2015), preventing misconceptions (Bétrancourt et al., 2001; Kühl et al., 2011) and reducing cognitive load (Berney & Bétrancourt, 2016). The hypotheses are contradicted by the findings of several meta-analyses (e.g., Berney & Bétrancourt, 2016; Castro-Alonso et al., 2019; Höffler & Leutner, 2007; McElhaney et al., 2015), as summarized in more detail in Chapters 1.3 and 1.4.
In total, 565 (middle and high school) students (321 females and 238 males; 6 students omitted this information), aged 11 to 20, were included in this study and divided into two groups (242 students in the control group and 323 students in the experimental group). Most of them were Biology students (350), in addition to Chemistry (124) and Geology (70) students. All students of both groups had similar educational and socioeconomic backgrounds.
In accordance with the precepts of a proof-of-concept study, the teachers and consequently their students who participated in this research were randomly selected. The teachers involved in this research were required to teach the same subject (Chemistry, Biology or Geology) in two classes of the same grade so that each teacher taught students in both the experimental class and control class, that is, to enable a direct comparison between the two classes. Of the 50 teachers who met the criteria for participation in this study, 11 were randomly selected to participate in this research. The teachers were employed at a middle or high school and had a master’s degree. The median years of experience in teaching science was 15.5 years, and all teachers agreed to use 3D models and visualizations in some of their classes.
As explained above, all teachers taught in two classes of the same grade, an experimental class and a control class. All students of the experimental classes formed the experimental group (EG), whereas all students of the control classes formed the control group (CG).
In this article, experimental teaching is teaching in which EG students were taught using dynamic visualization aids. The teachers incorporated dynamic visualizations (three-dimensional rotating models and animations) into the lessons in the experimental class for 3 months, without changing their teaching methods. Teachers were instructed to use dynamic visualization in almost every lesson depending on the topic under discussion.
Control teaching herein, in turn, is classical (traditional) teaching, conducted in the same way as in the experimental group but without dynamic visualization aids. The teacher could use visual aids in the control class as well (pictures and schemes, among others), but not dynamic visualizations (three-dimensional rotating models or animations).
Each teacher taught the same topics in both control and experimental classes.
Chemistry lessons covered topics from general chemistry (the state of substances, the formation of chemical bonds, ions and acid–base reactions) and organic chemistry (hybridization, stereochemistry, the structure of hydrocarbons and their derivatives). Biology introduced mainly topics from human biology (human anatomy, muscles, blood circulation, the human skeleton and digestive system), zoology (differences in systems and body structure), general biology (prokaryotic and eukaryotic cells) and botany. Lastly, Geology lessons mainly covered external and internal geological processes.
The application software Corinth was used as the source of 3D models and animations. This app was developed by experts from several universities and is designed to support the digitalization of the educational process at middle and high schools (Corinth s.r.o., 2020). In addition, Charles University, primarily its experts in didactics of natural sciences (including authors of this article), helped to develop this application. Corinth is mainly intended for lessons of natural sciences and offers various visual aids for the educational process. The software consists of a library with 1500 visual objects, mostly 3D models, microscope images, videos, photo galleries and animations (Fig. 1). The following topics are covered in this application: Biology, Geology, Chemistry, Physics, Astronomy, Geometry and a few cultural and historical topics. In contrast to common textbooks, online videos or presentations, students can manipulate an object as if they were holding the actual object in their own hands. Therefore, each student can focus on specific details overlooked in 2D images. Moreover, students can turn the 3D models around, zoom in or out on the picture, highlight the objects or look inside them, and pause the animations. All models also provide a short description of individual parts, as well as other important comments and notes, for example visualization in augmented reality (AR). This function uses the device’s camera to project the chosen 3D model or animation onto the background captured in real time. The Corinth application is known in the US thanks to the educational platform Lifeliqe, which received the 2017 Best App for Teaching and Learning award from the American Association of School Librarians (ALA, 2017).
Measures, knowledge test and questionnaires
Several research tools were used in the preliminary study and subsequent research.
Two types of research tools, the MSLQ and the IMI, were used to determine the effect on students’ motivational orientation.
The level of knowledge was determined through knowledge pretests and posttests.
At the beginning of the research, each teacher was interviewed to assess their expectations and experience; at the end of the research, each teacher was interviewed again to provide feedback on the lessons taught in this project.
Standardized questionnaires: MSLQ and IMI
The MSLQ (Motivated Strategies for Learning Questionnaire), compiled by Pintrich, Smith, Garcia and McKeachie, identifies and evaluates students’ motivational orientations and their use of different strategies for self-learning, i.e., in the process of self-regulation (Pintrich et al., 1991). Based on this questionnaire, a Pre-questionnaire was designed by selecting 16 statements from the four following scales:
intrinsic goal motivation (e.g., “in a class like this, I prefer course material that really challenges me so I can learn new things. The most satisfying thing for me in this course is trying to understand the content as thoroughly as possible.”);
self-efficacy for learning and performance (e.g., “I’m confident I can do an excellent job on the assignments and tests in this course. Considering the difficulty of this course, the teacher, and my skills, I think I will do well in this class.”);
extrinsic goal motivation (e.g., “Getting a good grade in this class is the most satisfying thing for me right now. If I can, I want to get better grades in this class than most of the other students.”);
control beliefs (e.g., “It is my own fault if I don’t learn the material in this course. If I don’t understand the course material, it is because I didn’t try hard enough.”).
This Pre-questionnaire was used in both the experimental and control classes at the beginning of the second lesson—before using the experimental teaching methods for the first time.
The IMI (Intrinsic Motivation Inventory) tool is an internal motivation questionnaire based on Ryan’s research (1982), but its final form was compiled by McAuley et al. (1989) and is used to assess the subjective experience related to the student’s internal motivation and personal self-reflection. Based on IMI, three questionnaires (Post-questionnaire 1, Post-questionnaire 2_1 and Post-questionnaire 2_2) were created, each of which consisted of 25 statements from the four following scales:
interest/enjoyment (e.g., “This activity was fun to do. I enjoyed doing this activity very much.”);
effort/importance (e.g., “I put a lot of effort into this. I tried very hard on this activity.”);
perceived competence (e.g., “I was pretty skilled at this activity. I am satisfied with my performance on this task.”);
value/usefulness (e.g., “I think this is an important activity. I believe doing this activity could be beneficial to me.”).
Both tools use a seven-point Likert scale for each statement (Likert, 1932), enabling participants to express their level of agreement with each statement from “strongly agree” = 1 to “strongly disagree” = 7 (Pintrich et al., 1991; Ryan, 1982). Both tools have been used in many earlier studies in the field of intrinsic motivation and self-regulation (Monetti, 2002; Niemi et al., 2003; Wolters, 2004), including studies measuring intrinsic motivation for the natural sciences (Šmejkal et al., 2016). An advantage of these research tools is their flexibility: they are modular and adaptable to specific research needs, so their full versions need not be used (Markland & Hardy, 1997; Pintrich et al., 1991; Rotgans & Schmidt, 2010).
The acquired knowledge was evaluated using knowledge tests. Due to the difficult process of developing these tests, this analysis was performed only at randomly selected schools and in randomly selected classes of those schools, totaling 4 tests (2 Chemistry tests and 2 Biology tests). The tests were created specifically for each school and class by a panel of experts, more specifically two experts in didactics and three teachers of the subject (Chemistry/Biology). The tests were compiled based on the curriculum and on the goals set by the teacher, in line with the revised version of Bloom's taxonomy of cognitive goals (Airasian et al., 2001). In Chemistry in particular, we were able to include a larger number of tasks focused on engaging higher-level thinking skills, such as conceptual and procedural knowledge in the Knowledge Dimension and remembering, understanding and applying in the Cognitive Process Dimension. Each knowledge test was administered twice, once during the first lesson (Pretest) and then during the penultimate lesson (Posttest). The tests were identical for the experimental and control groups.
The research survey was performed in 2019. All teachers involved in the research completed a 2-day training course before the research survey to acquaint themselves with the educational aid, its content and technical aspects (e.g., how to install Corinth software on their mobile device or how to incorporate educational content in presentations and other educational materials). Throughout this study, the teachers were in contact with the researchers and with the Corinth technical support as well. All teachers were also familiarized in detail with the course of the research, all research tools and their purpose in the research, and with the way in which students were supposed to fill in the questionnaires. Students were informed about the pedagogical research, and the research questionnaires were filled in anonymously.
The pedagogical experiment proceeded as follows. Before the actual start of the experiment, an initial interview was conducted with all the teachers. The aim of the initial interview was to find out what the teachers’ expectations are in relation to the implementation of dynamic visualization, specifically the implementation of Corinth in the classroom. The interview included a total of 17 questions, which were thematically divided into four areas: teacher-oriented questions (5), student-oriented questions (4), questions oriented to the content of the Corinth application (5), questions oriented to the school’s attitude towards the implementation of the Corinth application in the classroom (3).
In the first to third lessons, both the experimental and control groups were taught in the classical teaching style. All students filled in the Pretest during the first lesson, the Pre-questionnaire at the beginning of the second lesson, and Post-Questionnaire 1 at the end of the third lesson. From the fourth lesson onward, control group (CG) students continued to be taught in the classical style, whereas the experimental teaching style was implemented in the experimental group (EG). The EG students filled in Post-Questionnaire 2_1 after the first experimental lesson and filled in the same questionnaire again, as Post-Questionnaire 2_2, in the last lesson, after three months of intensive learning with dynamic visualizations. In the penultimate lesson of the pedagogical experiment, the students filled in the Posttests. After the pedagogical experiment, an output interview was conducted with the teachers.
Figure 2 schematically shows the time course of research and the sequence of research tools.
Results and discussion
Data from 565 students were used in the statistical analysis. The anonymized data were processed in the statistical software IBM SPSS using appropriate statistical methods. Significance was assessed using both parametric and non-parametric tests, setting the significance level at α = 0.05. Effect sizes were calculated using Hedges’ g (Hedges & Olkin, 1985).
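Hedges’ g, used throughout the analysis, is the standardized mean difference (Cohen’s d) with a correction for small-sample bias. As a point of reference, the following minimal sketch (not the authors’ analysis code; the sample data are illustrative) shows how it can be computed from two groups of scores:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: standardized mean difference with the small-sample
    bias correction of Hedges & Olkin (1985)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # Pooled standard deviation from the two unbiased sample variances
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp
    # Correction factor J shrinks d slightly for small samples
    return d * (1 - 3 / (4 * (nx + ny) - 9))

# Illustrative Likert-scale scores for an experimental and a control group
g = hedges_g([5, 6, 7, 5, 6], [4, 5, 4, 5, 4])
```

Under the conventional reading also used in this study, values around 0.2 are small, around 0.5 medium, and above 0.8 large effects.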
Reliability of the data from the questionnaires and calculation of the study variables
In all scales, the reliability of each questionnaire mentioned above was assessed by calculating the Cronbach’s alpha coefficient (Cronbach, 1951).
Almost all values of reliability exceeded the generally accepted minimum of 0.70 (Nunnally, 1978), except for the Cronbach’s alpha of “control beliefs” in the Pre-Questionnaire, which was 0.60 (see Table 1). This value was nevertheless close to the required level and was therefore accepted. In conclusion, the data are internally consistent and reliable. Based on this model, previously validated by confirmatory analysis (Šmejkal et al., 2016), new variables were calculated as the average of the items belonging to each of the scales described in the Methodology.
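Cronbach’s alpha for a scale is computed directly from the respondent-by-item response matrix; a brief sketch (illustrative, not the authors’ code) of the standard formula:

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of
    Likert-scale responses belonging to one scale."""
    r = np.asarray(responses, float)
    k = r.shape[1]                          # number of items in the scale
    item_vars = r.var(axis=0, ddof=1)       # variance of each item
    total_var = r.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Perfectly consistent items yield α = 1; values above 0.70 are conventionally taken as acceptable, matching the threshold cited above.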
The influence of 3D models and animations on students’ intrinsic motivation
To assess the influence of using 3D models and animations on students (RQ1), specifically: (1) interest in the subject matter; (2) effort during the educational process; (3) perceived competence; (4) usefulness of the subject matter, two statistical tests were performed.
First, the students’ motivation in the control lesson was evaluated using data from the Pre-Questionnaire and from the Post-Questionnaire 1. Based on the data, we assessed whether the CG and EG significantly differed. Because the data did not conclusively show a normal distribution, the Mann–Whitney U test was used as the appropriate statistical test; it showed no significant difference between the CG and the EG. The p-values of all scales exceeded 0.05, except for “self-efficacy for learning and performance”. Although a significant difference was found in this scale, Hedges’ g demonstrated that the difference was very small (Table 2). Therefore, the students of the EG and CG reached similar values of intrinsic motivation in most scales.
Second, we assessed whether the perception of control and experimental lessons significantly differed among students who had experienced both teaching styles (only students in the EG). The corresponding data were retrieved from the Post-Questionnaire 1 and Post-Questionnaire 2_1 and analyzed statistically. For this purpose, the non-parametric Wilcoxon signed-rank test was used because some of the data did not show a normal distribution. The results from the test highlighted significant differences between the students’ evaluation of the control and the first experimental lessons in all scales (p-values were significantly lower than 0.05 in all scales, see Table 3). The values of Hedges’ g also suggested that using 3D models and animations had a strong positive effect on the students’ intrinsic motivation, particularly in their interest/enjoyment of the teaching process (g = 1.05) and perceived value/usefulness of the subject matter (g = 1.02), in addition to a medium positive effect on perceived competence (g = 0.41) and a low, albeit positive influence on effort/importance (g = 0.27). In short, after the first experimental lesson, the students’ motivation (more specifically, their interest in the subject matter, their effort to understand the subject matter, their perceived competence, and the perceived importance of the subject matter) significantly differed between the control and experimental lessons, with a large weighted mean effect size (g = 0.69). It can be concluded that 3D models and animations have a significant, positive effect on all components of intrinsic motivation when comparing experimental and control lessons, thus corroborating the findings of Berney and Bétrancourt (2016).
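Both non-parametric tests used in this section are available in SciPy. The sketch below (synthetic data, not the study data) illustrates the two kinds of comparison: an unpaired Mann–Whitney U test between independent CG and EG ratings, and a paired Wilcoxon signed-rank test on the same EG students’ ratings of the control versus the experimental lesson:

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(42)

# Unpaired comparison: Likert ratings (1-7) of two independent groups
cg = rng.integers(1, 8, size=40)
eg = rng.integers(1, 8, size=40)
u_stat, p_between = mannwhitneyu(cg, eg, alternative="two-sided")

# Paired comparison: the same EG students rate the control lesson and then
# the experimental lesson (here simulated as systematically higher ratings)
control_lesson = rng.integers(1, 8, size=40)
experimental_lesson = np.clip(control_lesson + rng.integers(0, 3, size=40), 1, 7)
w_stat, p_within = wilcoxon(control_lesson, experimental_lesson)

significant = p_within < 0.05  # reject the null hypothesis at alpha = 0.05
```

The paired design is what allows the within-student comparison reported in Table 3; the unpaired design corresponds to the CG-versus-EG comparison in Table 2.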
Overall, owing to the incorporation of 3D models and animations into the educational process, students are more interested in the subject matter and therefore willing to put more effort into learning new skills, thereby improving their learning outcomes.
To assess whether the positive effect of the application decreases over the 3 months of its incorporation into the educational process (RQ2), data from Post-Questionnaire 2_1 were compared with data from Post-Questionnaire 2_2. Based on the character of the data, the non-parametric Wilcoxon signed-rank test was used for this analysis, rejecting the null hypothesis in 3 of the 4 scales because significant differences were found between the answers of the two questionnaires (see Table 4). The comparison of effect sizes showed that the decreases in the scales were low (g = 0.32) in perceived competence, medium (g = 0.41) in value/usefulness and medium/large (g = 0.60) in interest/enjoyment. However, no significant decrease was found in effort/importance over time. In similar cases, the so-called “Novelty Effect” (Clark & Sugrue, 1988) can be expected after 3D dynamic animations are first introduced. Therefore, the study monitored changes (in motivation and knowledge) depending on the intervention time of using dynamic 3D animations, for it has been shown that the intervention time of using an aid can lead to a decrease in students’ attention and motivation (Tsay et al., 2018).
The comparison between the traditional teaching style and the experimental method after 3 months of intensive use of 3D models and animations in the lessons showed a consistently significant difference in 3 of the 4 scales (based on the Wilcoxon signed-rank test on the data from Post-Questionnaire 1 and Post-Questionnaire 2_2, Table 5 and Fig. 3), with a medium/large positive effect size in the value/usefulness scale (g = 0.64), a medium effect size in the interest/enjoyment scale (g = 0.49) and a small but positive effect size in the effort/importance scale (g = 0.26). Based on these results, using 3D models and animations primarily affects the students’ interest and perceived value of the subject matter. The overall positive effect remained evident even after three months of using the 3D models and animations, as shown by the weighted mean effect size (g = 0.38). In other words, the use of 3D models and animations enhances the perceived importance of the subject matter, most likely by lowering the level of cognitive processing and abstraction necessary for understanding the concepts and phenomena studied in natural sciences (Chandler & Sweller, 1991), which demonstrates the scaffolding potential of the visualizations used. This experimental approach to teaching prevents the decrease (and in some cases even leads to an increase) in the students’ interest in the subject matter. Furthermore, the students are also willing to put more effort into understanding a given topic. From a long-term perspective, these two trends are crucial because effort/importance reaches the same values over time. Accordingly, the occasional use of 3D models and animations helps students understand the importance of the subject, thereby increasing the long-term effort that they put into the educational process (Ryan & Deci, 2000). Based on these findings, the first hypothesis (H1) was confirmed.
In comparison with previous studies, the positive effects of 3D models and animations found here are significantly stronger than the results of the meta-analyses by Berney and Bétrancourt (2016) and Castro-Alonso et al. (2019), which reported an average effect size of 0.23 (Hedges’ g). In turn, the results of this study are similar to those of the meta-analysis by Höffler and Leutner (2007), who reported an average effect size of 0.37 (Cohen’s d). The differences between studies may be caused by the heterogeneity of the studies included in the analyses. Castro-Alonso et al. (2019) also address this issue in their meta-analysis, where they observed significant heterogeneity between effect sizes; they therefore recommend focusing on the different variables influencing these results.
The effect of using 3D models and animations on the level of acquired knowledge
The effect of using 3D models and animations on the level of acquired knowledge (RQ3) was assessed based on the results from knowledge tests.
The reliability of each knowledge test was determined by calculating the corresponding Cronbach’s alpha (see Table 6 for results). The required value of reliability of the test used for individual pedagogical diagnosis is 0.8 (Chráska, 1999). According to George and Mallery (2003), a Cronbach’s alpha value between 0.7 and 0.8 is also acceptable. Therefore, based on the Cronbach’s alpha values calculated in this study, the knowledge tests meet the required reliability standards.
The data were analyzed using the parametric, two-tailed t-test. The results showed no significant difference in the Pretest between the CG and EG at the beginning of the research (Pretest Chemistry: t = −0.192, df = 54, p = 0.848, Mcontrol = 4.29, SD = 2.532, Mexperimental = 4.44, SD = 3.292, g = 0.050; Pretest Biology: t = −1.283, df = 54, p = 0.205, Mcontrol = 15.55, SD = 5.954, Mexperimental = 17.52, SD = 5.402, g = 0.342).
At the end of the research, the students were asked to complete the same knowledge tests (Posttests). The results from the two-tailed t-test conclusively demonstrate that Chemistry students in the EG performed better than their peers in the CG. The calculated difference was significant and deemed large (Posttest Chemistry: t = −3.601, df = 58, p = 0.001, Mcontrol = 16.531, SD = 7.326, Mexperimental = 23.839, SD = 8.394, g = 0.916). Furthermore, in the Biology tests, students in the EG also performed better than the students in the CG, albeit non-significantly (Posttest Biology: t = −1.189, df = 50, p = 0.240, Mcontrol = 25.92, SD = 10.488, Mexperimental = 29.04, SD = 8.373, g = 0.322). One possible explanation is the higher level of abstraction and visual orientation required in Chemistry, which may account for the stronger impact of visual aids in this subject: 3D models demand a lower level of visual orientation from students, and animations support the understanding of abstract concepts, so the combination of these tools improves the understanding of the subject matter and therefore the results of the evaluation phase of the educational process. The second hypothesis (H2) was thus confirmed as well. These results are in line with Cognitive Load Theory (Chandler & Sweller, 1991), according to which visualization decreases the cognitive load and therefore lowers the total number of cognitive steps necessary for succeeding in a given task. The combination of this factor with the significant, positive influence on the students’ interest relates to scientific literacy, as defined by PISA (OECD, 2006), thus opening up new research opportunities.
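Effect sizes of this kind can be recovered from the reported summary statistics alone. The sketch below assumes equal group sizes of 30 per group, which is consistent with df = 58 but is not stated explicitly above, and reproduces the reported Chemistry posttest effect:

```python
import math

def hedges_g_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g from group means, standard deviations and sizes."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))  # small-sample correction

# Posttest Chemistry statistics reported above; n = 30 per group is an assumption
g_chem = hedges_g_from_summary(23.839, 8.394, 30, 16.531, 7.326, 30)  # ~0.92
```

With these assumed group sizes the computed value matches the reported g = 0.916, which supports the equal-split reading of df = 58.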
The results from the knowledge pretests and posttests are summarized in Fig. 4. The chart shows box plots of the results of the 4 tests (2 pretests and 2 posttests) separately for both subjects, that is, Chemistry and Biology. The comparison of the box plots shows no significant differences between the CG and EG in the knowledge pretests (especially in Chemistry), but the differences become more pronounced in the knowledge posttests (again, mainly in Chemistry).
The results shown above are in line with the outcomes of the teacher interviews. The teachers reported increased interest and motivation of the students during the lessons with 3D models and animations.
“The pupils’ interest increased, they were drawn into the lessons, everyone was paying attention, they were enjoying it. The lessons were more interesting for them. Thank you.”
Furthermore, the teachers also reported that passive students were more easily activated.
“The pupils were curious what new things they would see.”
Moreover, the improvements in illustration of the subject matter facilitated the explanation and understanding of abstract concepts for teachers and students, respectively.
“Teaching has become more interesting and students’ imagination and understanding of the subject matter has improved.”
Educators also mentioned that incorporating 3D models and animations is helpful, especially for students with a lower level of visual orientation or abstract thinking.
The influence of potential moderators on dynamic visualizations
The next step of our research was the analysis of the impact of the following potential moderators of the effect of using animations and 3D models on the students’ motivation: student gender, level of education, student age, instructional domain and teacher personality (RQ4). However, the obtained conclusions did not fully confirm the third hypothesis (H3).
Based on the results from the Mann–Whitney U test, student gender played no role in the evaluation of the first experimental lesson (g = 0.10, Nfemale = 151, Nmale = 104), and all intrinsic motivation components were equal between male and female students in the first experimental lesson. Similar results were found when comparing the corresponding data from the first experimental lesson with the data from the control lesson (separately for each gender). The weighted mean effect size on female students (g = 0.69, N = 129) and the weighted mean effect size on male students (g = 0.68, N = 91) were virtually equal (see Table 7). In our study, student gender is not a moderator variable of intrinsic motivation. In contrast, other studies have shown that student gender is a strong moderator variable of the effect of dynamic visualizations on learning (Castro-Alonso et al., 2019). The variability in the findings of these studies may be caused by the differences in individual methodologies, learning environments and male:female ratios of participants. Therefore, this potential moderator must be further explored to find more evidence about its role.
Level of education
The results from the Mann–Whitney U test showed no level-of-education effect on the results of the students (g = 0.13, Nmiddle school = 128, Nhigh school = 129) in the first experimental lesson. Similar results were found when comparing the corresponding data from the first experimental lessons with the data from the control lessons (separately for middle and high school students), as shown by the weighted mean effect sizes for middle (g = 0.67, N = 108) and high school (g = 0.70, N = 112) students (see Table 7). Thus, the level of education shows no effect in our study, in contrast to the findings of Castro-Alonso et al. (2019), who reported that dynamic visualizations were slightly more effective among middle school students than among high school students. This discrepancy could be caused by the non-linear variation of the results with student age.
The first experimental lesson was attended by 257 students aged 11 to 20. Data analysis highlighted a significant, quadratic relationship between student age and all components of intrinsic motivation [interest: F(255) = 5.07, p = 0.007; effort: F(255) = 9.08, p < 0.001; competence: F(255) = 4.60, p = 0.011; value: F(255) = 3.34, p = 0.037; see Fig. 5]. Based on the relationship between these pairs of variables, a few general trends can be described. For example, younger students (11–12 years of age) perceive the incorporation of dynamic visualizations into the teaching process highly positively. However, as the students become older, they gradually evaluate the use of animations and 3D models in the educational process less favorably—the students’ evaluation is the least positive among 15-year-old students (the age at which they graduate from middle school in the Czech Republic). However, student feedback becomes more positive among high school students aged 16–18. Unfortunately, the sample of students older than 18 years was too small to enable any prediction. In any case, the power of the models is low.
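A quadratic age trend of this kind can be fitted by ordinary least squares. The sketch below uses made-up per-age means (the study data are not reproduced here) to show the U-shaped pattern with its minimum in the mid-teens:

```python
import numpy as np

# Hypothetical mean interest/enjoyment ratings per age group (illustrative only)
ages = np.array([11, 12, 13, 14, 15, 16, 17, 18], dtype=float)
interest = np.array([6.2, 6.0, 5.4, 4.9, 4.6, 5.0, 5.3, 5.5])

# Least-squares fit of interest ≈ a*age^2 + b*age + c
a, b, c = np.polyfit(ages, interest, deg=2)

# A positive quadratic coefficient gives a U-shape; the vertex marks the age
# at which the fitted motivation is lowest
vertex_age = -b / (2 * a)
```

For this illustrative data the fitted minimum falls near age 15, mirroring the dip reported above for students graduating from Czech middle school.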
The Kruskal–Wallis test showed a significant influence of the subject (Biology, N = 154; Chemistry, N = 64; Geology, N = 36) on two components of intrinsic motivation, interest/enjoyment (η2 = 0.079) and value/usefulness (η2 = 0.066), in the first experimental lesson. Biology students showed the highest interest in the subject, whereas the lowest interest was found among Geology students. The same trend was observed in the value/usefulness scale. As for the other components of intrinsic motivation (effort/importance and perceived competence), the students in the first experimental lesson reached similar values in each school subject.
Comparing the data from the first experimental lesson with those from the control lesson (separately for each subject), we can summarize the results as follows: animations and 3D models have the strongest positive effect in Chemistry (g = 0.74, NChemistry = 56) and Biology (g = 0.72, NBiology = 133), whereas the positive impact on Geology is significantly weaker (g = 0.45, NGeology = 31) than on the other subjects (see Table 7). Considering these results, the instructional domain is a significant moderator variable. Given the limited number of questionnaires from the experimental lessons in Geology and Chemistry, and since only selected topics were taught, the results cannot be generalized. However, based on our findings, we assume that including dynamic visualization (3D models and animations) in biological, chemical and geological disciplines is beneficial, as evidenced by other authors (Jenkinson, 2018; McElhaney et al., 2015; Mitsuhashi et al., 2009).
In total, 11 teachers (9 females and 2 males) participated in this study. Using the Kruskal–Wallis test, we found a significant influence of teacher personality on all components of intrinsic motivation (η2 is between 0.070 and 0.132) in the first experimental lesson.
The overall effect of animations and 3D models on the students’ intrinsic motivation was evaluated based on the comparison between the data from the first experimental lesson and the data from the first control lesson. The calculated values of the weighted mean effect sizes (Hedges’ g) ranged from 0.40 to 1.21 (see Table 7).
The largest differences in effect size were found between individual teachers. Therefore, teacher personality is a significant moderator variable. However, due to the limited number of questionnaires from the experimental lesson, the power of the test comparing subgroups is low. The analysis of structured interviews with teachers shows that students whose teachers worried about the failure of the educational application, and questioned the positive effect of visualization on learning, evaluated the experimental lesson less positively than students whose teachers were more confident about the experimental teaching process. In their interviews, all teachers also mentioned the time needed to adjust their lesson plans to incorporate the visual aids appropriately.
“Initially, I spent more time and put more effort into the lesson preparation process because I wanted the app to fit into my teaching plan.”
This concern was justified because many teachers had to learn how to work with the application software before they could use its visual aids in the teaching process. Based on these results, teachers must have a positive attitude towards modern technology, as innovators and early adopters in the sense of Aldunate and Nussbaum (2013), and schools must provide adequate technical support in case of any technical difficulties. Furthermore, teachers should be familiar not only with the teaching content of the application software but also with all technical issues. However, teacher personality has not previously been analyzed as a moderator variable of the effectiveness of dynamic visualizations in the teaching process and should therefore be the subject of further studies.
The main limitation of this study is that all data reflect only the students’ attitude towards the teaching process and their own level of understanding of the subject matter. Therefore, the students’ level of self-confidence also interferes with the results. Nevertheless, this effect is partly compensated for by structured interviews with the teachers, who also evaluated the teaching process from their perspective. The data collected from students and teachers matched. During the interviews, teachers stated that the incorporation of 3D models and animations into the teaching process had a positive impact on their students, who found the models interesting, entertaining and attractive. Therefore, the students appeared more motivated to learn and interested in the subject matter. Furthermore, the teachers expressed a deeper interest in 3D models and animations of physiological processes in plants and humans from not only a biological but also a chemical standpoint (reaction mechanisms, for example).
The effect of teacher personality on the results of the experiment was also partly compensated for by the fact that all teachers taught both groups (experimental and control classes of the same grade). Moreover, the teachers included the same topics in the teaching process in both classes, further offsetting this factor. The generalizability of the results might also be limited by the small sample size. In particular, the small number of teachers, with different teaching styles and personality characteristics, may have influenced some of the research results. In this regard, the findings of the present study may provide a good starting point for the design of larger-scale studies aiming toward equal sample sizes and similar education levels.
The use of 3D models and animations in lessons of natural sciences is positively perceived by students at both middle and high schools. These conclusions are supported by the positive impact of dynamic visualizations on intrinsic motivation in comparison with static images of 3 subject matters (Biology, Chemistry and Geology), as shown by the weighted mean effect size (Hedges’ g = 0.38). Accordingly, the Czech educational system must respond to the specific needs of the current generation, by updating education materials in lockstep with the most recent advances in technology, and by introducing subject matter topics in a more dynamic, realistic and effective way. Our research demonstrates that appropriately incorporating visual aids simplifies abstract processes and enhances understanding. As a result, students may be more interested in learning and may even consider studying the subject matter at a higher level (for example at university). For this reason, teachers should include these visual aids in their lessons regardless of their age or beliefs. Similarly, university educators should also train future teachers in working with digital technologies (Evagorou et al., 2015), so that they are more confident in using them without fear of potential technical failures.
Availability of data and materials
Ainsworth, S., & VanLabeke, N. (2004). Multiple forms of dynamic representation. Learning and Instruction, 14(3), 241–255. https://doi.org/10.1016/j.learninstruc.2004.06.002
Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., Raths, J., & Wittrock, M. (2001). Taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (L. W. Anderson & D. R. Krathwohl (eds.)).
Aldunate, R., & Nussbaum, M. (2013). Teacher adoption of technology. Computers in Human Behavior, 29(3), 519–524. https://doi.org/10.1016/j.chb.2012.10.017
Ayres, P., Marcus, N., Chan, C., & Qian, N. (2009). Learning hand manipulative tasks: When instructional animations are superior to equivalent static representations. Computers in Human Behavior, 25(2), 348–353. https://doi.org/10.1016/j.chb.2008.12.013
Baier, F., Decker, A. T., Voss, T., Kleickmann, T., Klusmann, U., & Kunter, M. (2019). What makes a good teacher? The relative importance of mathematics teachers’ cognitive ability, personality, knowledge, beliefs, and motivation for instructional quality. British Journal of Educational Psychology, 89(4), 767–786. https://doi.org/10.1111/BJEP.12256
Beauchamp, G., & Parkinson, J. (2008). Pupils’ attitudes towards school science as they transfer from an ICT-rich primary school to a secondary school with fewer ICT resources: Does ICT matter? Education and Information Technologies, 13(2), 103–118. https://doi.org/10.1007/s10639-007-9053-5
Berg, C. A. R., Bergendahl, V. C. B., Lundberg, B. K. S., & Tibell, L. A. E. (2003). Benefiting from an open-ended experiment? A comparison of attitudes to, and outcomes of, an expository versus an open-inquiry version of the same experiment. International Journal of Science Education, 25(3), 351–372. https://doi.org/10.1080/09500690210145738
Berney, S., & Bétrancourt, M. (2016). Does animation enhance learning? A meta-analysis. Computers and Education, 101, 150–167. https://doi.org/10.1016/j.compedu.2016.06.005
Bétrancourt, M., & Chassot, A. (2008). Making sense of animation. In R. Lowe & W. Schnotz (Eds.), Learning with animation: Research implications for design. Cambridge University Press.
Bétrancourt, M., & Réalini, N. (2005). Le contrôle sur le déroulement de l’animation. 11th Journées d’Étude sur le Traitement Cognitif des Systèmes d’Information Complexes (JETCSIC). https://telearn.archives-ouvertes.fr/hal-00016538/document
Bétrancourt, M., & Tversky, B. (2000). Effect of computer animation on users’ performance: A review (Effet de l’animation sur les performances des utilisateurs: une synthèse). Le Travail Humain, 63(4).
Bétrancourt, M., Tversky, B., & Bauer-Morrison, J. (2001). Les animations sont-elles vraiment plus efficaces. Revue d’intelligence Artificielle, 14(1–2).
Bevilacqua, A. (2017). Commentary: Should gender differences be included in the evolutionary upgrade to cognitive load theory? Educational Psychology Review, 29, 189–194. https://doi.org/10.1007/s10648-016-9362-6
Bilbokaitė, R. (2015). Effect of computer based visualization on students’ cognitive processes in education process. Society, Integration, Education, 4, 349. https://doi.org/10.17770/sie2015vol4.417
Boucheix, J. M., & Schneider, E. (2009). Static and animated presentations in learning dynamic mechanical systems. Learning and Instruction, 19(2), 112–127. https://doi.org/10.1016/j.learninstruc.2008.03.004
Bulman, G., & Fairlie, R. W. (2016). Technology and Education: Computers, Software, and the Internet. In Handbook of the Economics of Education (Vol. 5, pp. 239–280). Elsevier. https://doi.org/10.1016/B978-0-444-63459-7.00005-1
Bunce, D. M., & Gabel, D. (2002). Differential effects on the achievement of males and females of teaching the particulate nature of chemistry. Journal of Research in Science Teaching, 39(10), 911–927. https://doi.org/10.1002/tea.10056
Castro-Alonso, J. C., Wong, M., Adesope, O. O., Ayres, P., & Paas, F. (2019). Gender imbalance in instructional dynamic versus static visualizations: A meta-analysis. Educational Psychology Review, 31, 361–387. https://doi.org/10.1007/s10648-019-09469-1
Chandler, P. (2004). The crucial role of cognitive processes in the design of dynamic visualizations. Learning and Instruction, 14, 353–357. https://doi.org/10.1016/j.learninstruc.2004.06.009
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. https://doi.org/10.1207/s1532690xci0804_2
Chang, H.-Y., & Linn, M. C. (2013). Scaffolding learning from molecular visualizations. Journal of Research in Science Teaching, 50(7), 858–886. https://doi.org/10.1002/tea.21089
Chen, S. C., Hsiao, M. S., & She, H. C. (2015). The effects of static versus dynamic 3D representations on 10th grade students’ atomic orbital mental model construction: Evidence from eye movement behaviors. Computers in Human Behavior, 53, 169–180. https://doi.org/10.1016/j.chb.2015.07.003
Chráska, M. (1999). Didaktické testy: příručka pro učitele a studenty učitelství. Paido.
Clark, R. E., & Sugrue, B. M. (1988). Research on instructional media 1978–88. Libraries Unlimited, Inc.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). L. Erlbaum Associates.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671–684. https://doi.org/10.1016/S0022-5371(72)80001-X
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555
Damon, W., Lerner, R. M., Kuhn, D., & Siegler, R. S. (2006). Handbook of child psychology: Cognition, perception, and language (6th ed.). John Wiley & Sons.
DiSpezio, M. (2010). Misconceptions in the science classroom. Science Scope, 34(1), 16.
Eshach, H., Dor-Ziderman, Y., & Arbel, Y. (2011). Scaffolding the “Scaffolding” metaphor: From inspiration to a practical tool for kindergarten teachers. Journal of Science Education and Technology, 20(5), 550–565. https://doi.org/10.1007/S10956-011-9323-2/TABLES/6
Eurostat. (2020). Distribution of tertiary education students by broad field and sex. https://ec.europa.eu/eurostat/statistics-explained/index.php?title=File:Distribution_of_tertiary_education_students_by_broad_field_and_sex,_EU-27,_2018_(%25)_ET2020.png
Evagorou, M., Erduran, S., & Mäntylä, T. (2015). The role of visual representations in scientific practices: From conceptual understanding and knowledge generation to ‘seeing’ how science works. International Journal of STEM Education, 2(1), 1–13. https://doi.org/10.1186/S40594-015-0024-X/FIGURES/6
Gago, J., Ziman, J., Caro, P., Constantinou, C., Davies, G., Parchmann, I., Rannikmae, M., & Sjoberg, S. (2005). Europe needs more scientists: Report by the high level group on increasing human resources for science and technology. Office for Official Publications of the European Communities, Luxembourg. http://eprints.uni-kiel.de/id/eprint/38088
Garland, T. B., & Sanchez, C. A. (2013). Rotational perspective and learning procedural tasks from dynamic media. Computers and Education, 69, 31–37. https://doi.org/10.1016/j.compedu.2013.06.014
George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference.
Gomez-Zwiep, S. (2008). Elementary teachers’ understanding of students’ science misconceptions: Implications for practice and teacher education. Journal of Science Teacher Education, 19(5), 437–454. https://doi.org/10.1007/s10972-008-9102-y
Goswami, U. C. (2010). The Wiley-Blackwell handbook of childhood cognitive development (2nd ed.). Wiley-Blackwell.
Hanzalová, P. (2019). Oblíbenost témat výuky přírodopisu na 2. stupni základní školy [Univerzita Karlova, Pedagogická fakulta]. https://dspace.cuni.cz/handle/20.500.11956/106185
Harrison, A. G., & Treagust, D. F. (2006). The particulate nature of matter: Challenges in understanding the submicroscopic world. In Chemical Education: Towards Research-based Practice (pp. 189–212). Kluwer Academic Publishers. https://doi.org/10.1007/0-306-47977-x_9
Hedges, L. V. (1981). Distribution theory for glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128. https://doi.org/10.3102/10769986006002107
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Academic Press, Inc.
Herman, G. L., Loui, M. C., & Zilles, C. (2011). Students’ misconceptions about medium-scale integrated circuits. IEEE Transactions on Education, 54(4), 637–645. https://doi.org/10.1109/TE.2011.2104361
Höffler, T. N. (2010). Spatial ability: Its influence on learning with visualizations-a meta-analytic review. Educational Psychology Review, 22, 245–269. https://doi.org/10.1007/s10648-010-9126-7
Höffler, T. N., & Leutner, D. (2007). Instructional animation versus static pictures: A meta-analysis. Learning and Instruction, 17(6), 722–738. https://doi.org/10.1016/j.learninstruc.2007.09.013
Ikwuka, O. I., & Samuel, N. N. C. (2017). Effect of computer animation on chemistry academic achievement of secondary school students in Anambra State, Nigeria. Journal of Emerging Trends in Educational Research and Policy Studies, 8(2), 98–102. https://doi.org/10.10520/EJC-9b95fd597
Jaffar, A. A. (2012). YouTube: An emerging tool in anatomy education. Anatomical Sciences Education, 5(3), 158–164. https://doi.org/10.1002/ase.1268
Jenkinson, J. (2018). Molecular biology meets the learning sciences: Visualizations in education and outreach. Journal of Molecular Biology, 430(21), 4013–4027. https://doi.org/10.1016/j.jmb.2018.08.020
Jones, S., & Scaife, M. (2000). Animated diagrams: An investigation into the cognitive effects of using animation to illustrate dynamic processes. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 1889, 231–244. https://doi.org/10.1007/3-540-44590-0_22
Kaushal, R. K., & Panda, S. N. (2019). A meta analysis on effective conditions to offer animation based teaching style. Malaysian Journal of Learning and Instruction, 16(1), 129–153. https://eric.ed.gov/?id=EJ1219792
Khishfe, R., & Abd-El-Khalick, F. (2002). Influence of explicit and reflective versus implicit inquiry-oriented instruction on sixth graders’ views of nature of science. Journal of Research in Science Teaching, 39(7), 551–578. https://doi.org/10.1002/tea.10036
Kim, L. E., Dar-Nimrod, I., & MacCann, C. (2018). Teacher personality and teacher effectiveness in secondary school: Personality predicts teacher support and student self-efficacy but not academic achievement. Journal of Educational Psychology, 110(3), 309–323. https://psycnet.apa.org/buy/2017-52843-001
Kim, L. E., Jörg, V., & Klassen, R. M. (2019). A meta-analysis of the effects of teacher personality on teacher effectiveness and burnout. Educational Psychology Review, 31(1), 163–195. https://doi.org/10.1007/S10648-018-9458-2/TABLES/2
Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15(10), 661–667. https://doi.org/10.1111/j.0956-7976.2004.00737.x
Kühl, T., Scheiter, K., Gerjets, P., & Gemballa, S. (2011). Can differences in learning strategies explain the benefits of learning from static and dynamic visualizations? Computers and Education, 56(1), 176–187. https://doi.org/10.1016/j.compedu.2010.08.008
Li, Y. (2021). Seven years of development as building a foundation for the journal’s leadership in promoting STEM education internationally. International Journal of STEM Education, 8(1), 1–6. https://doi.org/10.1186/S40594-021-00316-W/TABLES/5
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 44–53.
Lin, L., & Atkinson, R. K. (2011). Using animations and visual cueing to support learning of scientific concepts and processes. Computers and Education, 56(3), 650–658. https://doi.org/10.1016/j.compedu.2010.10.007
Lowe, R. K. (1999). Extracting information from an animation during complex visual learning. European Journal of Psychology of Education, 14(2), 225–244. https://doi.org/10.1007/BF03172967
Lowe, R. K. (2003). Animation and learning: Selective processing of information in dynamic graphics. Learning and Instruction, 13(2), 157–176. https://doi.org/10.1016/S0959-4752(02)00018-X
Malacinski, G. M., & Zell, P. W. (1996). Manipulating the “Invisible”: Learning molecular biology using inexpensive models. American Biology Teacher, 58(7). https://eric.ed.gov/?id=EJ531590
Marbach-Ad, G., Rotbain, Y., & Stavy, R. (2008). Using computer animation and illustration activities to improve high school students’ achievement in molecular genetics. Journal of Research in Science Teaching, 45(3), 273–292. https://doi.org/10.1002/tea.20222
Markland, D., & Hardy, L. (1997). On the factorial and construct validity of the intrinsic motivation inventory: Conceptual and operational concerns. Research Quarterly for Exercise and Sport, 68(1), 20–32. https://doi.org/10.1080/02701367.1997.10608863
Mayer, R. E., DeLeeuw, K. E., & Ayres, P. (2007). Creating retroactive and proactive interference in multimedia learning. Applied Cognitive Psychology, 21(6), 795–809. https://doi.org/10.1002/acp.1350
Mayer, R. E., & Moreno, R. (2002). Aids to computer-based multimedia learning. Learning and Instruction, 12(1), 107–119. https://doi.org/10.1016/S0959-4752(01)00018-4
Mazza, R. (2009). Introduction to information visualization. Introduction to Information Visualization. https://doi.org/10.1007/978-1-84800-219-7
McAuley, E. D., Duncan, T., & Tammen, V. V. (1989). Psychometric properties of the intrinsic motivation inventory in a competitive sport setting: A confirmatory factor analysis. Research Quarterly for Exercise and Sport, 60(1), 48–58. https://doi.org/10.1080/02701367.1989.10607413
McElhaney, K. W., Chang, H. Y., Chiu, J. L., & Linn, M. C. (2015). Evidence for effective uses of dynamic visualisations in science curriculum materials. Studies in Science Education, 51(1), 49–85. https://doi.org/10.1080/03057267.2014.984506
MEYS. (2020). Strategy for the Education Policy of the Czech Republic up to 2030+ . https://www.msmt.cz/uploads/brozura_S2030_en_fin_online.pdf
Mitsuhashi, N., Fujieda, K., Tamura, T., Kawamoto, S., Takagi, T., & Okubo, K. (2009). BodyParts3D: 3D structure database for anatomical concepts. Nucleic Acids Research, 37(SUPPL. 1), D782–D785. https://doi.org/10.1093/nar/gkn613
Monetti, D. M. (2002). A multiple regression analysis of self-regulated learning, epistemology, and student achievement. Dissertation Abstracts International, 62, 3294. http://search.epnet.com/login.aspx?direct=true&db=aph&authdb=epref&an=ABJGBGDGB
Niemi, H., Nevgi, A., & Virtanen, P. (2003). Towards self-regulation in web-based learning. Journal of Educational Media, 28(1), 49–71. https://doi.org/10.1080/1358165032000156437
Nodzyńska, M. (2012). Vizualizace V Chemii a Ve Výuce Chemie. Chem Listy, 106, 519–527. http://chemicke-listy.cz/docs/full/2012_06_519-527.pdf
Nunnally, J. C. (1978). An overview of psychological measurement. In Clinical Diagnosis of Mental Disorders (pp. 97–146). Springer US. https://doi.org/10.1007/978-1-4684-2490-4_4
Özmen, H. (2011). Effect of animation enhanced conceptual change texts on 6th grade students’ understanding of the particulate nature of matter and transformation during phase changes. Computers and Education, 57(1), 1114–1126. https://doi.org/10.1016/j.compedu.2010.12.004
Pavelková, I., Škaloudová, A., & Hrabal, V. (2010). Analýza vyučovacích předmětu na základě výpovědí žáků. Pedagogika. https://doi.org/10.14712/23362189.2018.861
Pintrich, P., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). https://files.eric.ed.gov/fulltext/ED338122.pdf
Popelka, S., Vondrakova, A., & Hujnakova, P. (2019). Eye-tracking evaluation of weather web maps. ISPRS International Journal of Geo-Information, 8(6), 256. https://doi.org/10.3390/ijgi8060256
Puntambekar, S., & Hübscher, R. (2005). Tools for scaffolding students in a complex learning environment: What have we gained and what have we missed? Educational Psychologist, 40(1), 1–12. https://doi.org/10.1207/s15326985ep4001_1
Rieber, L. P. (1990). Using computer animated graphics in science instruction with children. Journal of Educational Psychology, 82(1), 135–140. https://doi.org/10.1037/0022-0663.82.1.135
Rotbain, Y., Marbach-Ad, G., & Stavy, R. (2006). Effect of bead and illustrations models on high school students’ achievement in molecular genetics. Journal of Research in Science Teaching, 43(5), 500–529. https://doi.org/10.1002/tea.20144
Rotgans, J. I., & Schmidt, H. (2010). The Motivated Strategies for Learning Questionnaire: A measure for students’ general motivational beliefs and learning strategies. The Asia-Pacific Education Researcher, 19(2), 357–369.
Ryan, R. M. (1982). Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. Journal of Personality and Social Psychology, 43(3), 450–461. https://doi.org/10.1037/0022-3514.43.3.450
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Ryoo, K., & Linn, M. C. (2012). Can dynamic visualizations improve middle school students’ understanding of energy in photosynthesis? Journal of Research in Science Teaching, 49(2), 218–243. https://doi.org/10.1002/tea.21003
Schnotz, W. (2005). An integrated model of text and picture comprehension. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511816819
Schnotz, W., Böckheler, J., & Grzondziel, H. (1999). Individual and co-operative learning with interactive animated pictures. European Journal of Psychology of Education, 14(2), 245–265. https://doi.org/10.1007/bf03172968
Schnotz, W., & Lowe, R. (2003). External and internal representations in multimedia learning. Learning and Instruction, 13(2), 117–123. https://doi.org/10.1016/S0959-4752(02)00015-4
Schnotz, W., & Rasch, T. (2005). Enabling, facilitating, and inhibiting effects of animations in multimedia learning: Why reduction of cognitive load can have negative results on learning. Educational Technology Research and Development, 53(3), 47–58. https://doi.org/10.1007/BF02504797
Schwan, S., & Riempp, R. (2004). The cognitive benefits of interactive videos: Learning to tie nautical knots. Learning and Instruction, 14(3), 293–305. https://doi.org/10.1016/j.learninstruc.2004.06.005
Šmejkal, P., Skoršepa, M., Stratilová Urválková, E., & Teplý, P. (2016). Chemické úlohy se školními měřicími systémy: motivační orientace žáků v badatelsky orientovaných úlohách. Scientia in Educatione, 7(1), 29–48. https://doi.org/10.14712/18047106.280
Tabak, I. (2004). Synergy: A complement to emerging patterns of distributed scaffolding. Journal of the Learning Sciences, 13(3), 305–335. https://doi.org/10.1207/s15327809jls1303_3
Takeuchi, M. A., Sengupta, P., Shanahan, M. C., Adams, J. D., & Hachem, M. (2020). Transdisciplinarity in STEM education: A critical review. Studies in Science Education, 56(2), 213–253. https://doi.org/10.1080/03057267.2020.1755802
Tarmizi, R. A. (2010). Visualizing students’ difficulties in learning calculus. In A. Tarmizi & R. Ayub (Eds.), International Conference on Mathematics Education Research 2010 (ICMER 2010) (Vol. 8, pp. 377–383). Elsevier Science BV.
Tsay, C.H.-H., Kofinas, A., & Trivedi, S. K. (2018). Novelty effect and student engagement in a technology-mediated gamified learning system. Academy of Management Annual Meeting Proceedings, 2018(1), 13030. https://doi.org/10.5465/AMBPP.2018.13030ABSTRACT
Türkay, S. (2016). The effects of whiteboard animations on retention and subjective experiences when learning advanced physics topics. Computers and Education, 98, 102–114. https://doi.org/10.1016/j.compedu.2016.03.004
Tversky, B., Morrison, J., & Betrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human Computer Studies, 57(4), 247–262. https://doi.org/10.1006/ijhc.2002.1017
Tzima, S., Styliaras, G., & Bassounas, A. (2019). Augmented reality applications in education: Teachers point of view. Education Sciences, 9(2), 99. https://doi.org/10.3390/EDUCSCI9020099
Wang, P. Y., Vaughn, B. K., & Liu, M. (2011). The impact of animation interactivity on novices’ learning of introductory statistics. Computers and Education, 56(1), 300–311. https://doi.org/10.1016/j.compedu.2010.07.011
Ware, C. (2004). Information Visualization: Perception for Design. In Information Visualization (2nd ed.). Morgan Kaufmann.
Williamson, V. M., & Abraham, M. R. (1995). The effects of computer animation on the particulate mental models of college chemistry students. Journal of Research in Science Teaching, 32(5), 521–534. https://doi.org/10.1002/tea.3660320508
Wolters, C. A. (2004). Advancing achievement goal theory: Using goal structures and goal orientations to predict students’ motivation, cognition, and achievement. Journal of Educational Psychology, 96(2), 236–250. https://doi.org/10.1037/0022-0663.96.2.236
Wong, M., Castro-Alonso, J. C., Ayres, P., & Paas, F. (2018). Investigating gender and spatial measurements in instructional animation research. Computers in Human Behavior, 89, 446–456. https://doi.org/10.1016/j.chb.2018.02.017
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x
Wu, H. K., Krajcik, J. S., & Soloway, E. (2001). Promoting understanding of chemical representations: Students’ use of a visualization tool in the classroom. Journal of Research in Science Teaching, 38(7), 821–842. https://doi.org/10.1002/tea.1033
Zell, E., Krizan, Z., & Teeter, S. R. (2015). Evaluating gender similarities and differences using metasynthesis. American Psychologist, 70(1), 10–20. https://doi.org/10.1037/a0038208
The authors thank all the collaborating teachers in this research for their enthusiasm and valuable feedback. The authors also thank Dr. Carlos V. Melo for editing the manuscript.
This work was supported by University research centres of Charles University: UNCE/HUM/024 and funding project Progres Q17.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Teplá, M., Teplý, P. & Šmejkal, P. Influence of 3D models and animations on students in natural subjects. IJ STEM Ed 9, 65 (2022). https://doi.org/10.1186/s40594-022-00382-8