
Improving conceptual understanding of gas behavior through the use of screencasts and simulations

Abstract

Background

Engagement with particle-level simulations can help students visualize the motion and interactions of gas particles, thus helping them develop a more scientifically accurate mental model. Such engagement outside of class, prior to formal instruction, can help meet the needs of students from diverse backgrounds and provide instructors with a common experience upon which to build with further instruction. Yet, even with well-designed scaffolds, students may not attend to the most salient aspects of the simulation. In this case, a screencast in which an instructor narrates use of the simulation and points students towards the important observations may provide additional benefits. This study, which is part of the larger ChemSims project, investigates the use of simulations and screencasts to support students’ developing understanding of gas properties by examining student learning gains.

Results

This study indicates that both students manipulating the simulation on their own and those observing a screencast exhibited significant learning gains from pre- to post-assessment. However, students who observed the screencast were more than twice as likely to transition from a macroscopic explanation to a particle-level explanation of gas behavior in answering matched pre- and post-test questions. Eye-tracking studies indicated very similar viewing and usage patterns for both groups of students overall, including when using the simulation to answer follow-up questions.

Conclusion

Significant learning gains by both groups across all learning objectives indicate that either scaffolded screencast or simulation assignments can be used to support student understanding of gas particle behavior and serve as a first experience upon which to build subsequent instruction. There is some indication that initial use of the screencast may better help students build correct mental models of gas particle behavior. Further, for this simulation, watching the instructor manipulate the simulation in the screencast allowed students to subsequently use it on their own at a level comparable to students who had manipulated it themselves throughout the assignment, suggesting that the screencast students were not disadvantaged by the lack of initial hands-on manipulation.

Introduction

Use of particle-level animations and simulations is becoming more common in introductory chemistry courses as these materials provide a means for students to visualize the motion and interactions of atoms, molecules, and ions (Kelly & Jones, 2008; Sanger, Phelps, & Fienhold, 2000). Research has shown that use of such materials in class can support student development of better mental models of particle behavior (Kelly & Jones, 2007; Williamson & Abraham, 1995), something that we know is important for students to be able to understand and explain macroscopic chemical phenomena (Davidowitz & Chittleborough, 2009). However, with the increasing use of online learning environments, by choice or necessity, it is important to understand how best to support student use of simulations on their own, outside of a classroom environment where instructors can more directly oversee and direct student usage of such simulations. The ChemSims (ChemSims, n.d.) project aims to identify evidence-based practices for the use of simulations and screencasts (short videos illustrating instructor-narrated use of simulations) in supporting student development of core chemistry concepts. This paper focuses specifically on materials designed to support students in developing accurate mental models of gas particle behavior.

Student understanding of gases

Though much of the research on students’ conceptual understanding of gases was done 20–30 years ago, more recent studies still indicate that college-level students have inaccuracies in their particulate mental models as they relate to certain properties of gases (Madden, Jones, & Rahm, 2011). Early research showed that, for ideal gases, students could algorithmically solve for variables, such as temperature or pressure, without being able to answer conceptual questions incorporating these ideas (Pickering, 1990). These early studies suggested that students’ incorrect ideas, such as believing that particles expand in size as a substance becomes a gas (Sanger et al., 2000), or that gas particles rise to the top of a container when heated (Novick & Nussbaum, 1981), stem from the transfer of macroscopic properties to the particles themselves (Brook et al., 1984). More recently, Madden et al. (2011) found that students who possessed lower levels of representational competence tended to use the ideal gas law as an algorithmic tool, often applying it incorrectly, whereas students with better mental models were able to make connections between algebraic, graphical, and pictorial models of gases. Further, this research found that the most frequent inaccuracies in mental models concerned dynamic processes not easily represented with static pictures, and thus recommended the use of animations to support development of these aspects of students’ mental models.

Although the disconnect between algorithmic problem solving and particulate-level conceptual understanding of gases is well documented in research (Nurrenbern & Pickering, 1987; Sanger, Campbell, Felker, & Spencer, 2007; Sanger & Phelps, 2007; Sawrey, 1990), less research has been done on ways to address this lack of conceptual understanding of gases. Development of deep conceptual understanding is critical, as it enables students to provide the causal-mechanistic reasoning that instruction aims for (Talanquer, 2013). Therefore, there exists a crucial need for research on ways to support students in their development of robust conceptual understanding of chemistry topics, including gases.

Causal-mechanistic reasoning

One power of the discipline of chemistry is its ability to explain the macroscopic properties of materials both within and beyond the discipline (Talanquer, 2018). At the same time, being able to provide high-quality explanations is challenging for novice students. Indeed, the quality of students’ explanations of a phenomenon can vary from simple recitation of a memorized relationship to a much deeper causal-mechanistic explanation (Talanquer, 2010; Underwood, Reyes-Gastelum, & Cooper, 2016). It is these latter types of explanations, which use atomic- or molecular-level motion and interactions to explain a phenomenon, that provide strong evidence that students truly understand a concept and with which students need more practice (Talanquer, 2013). Educationally, this implies a need to focus less on students knowing facts or being able to solve algorithmic problems and more on students developing a deeper understanding of the content by building high-quality mental models and using them to construct explanations, an approach that is increasingly being called for in chemistry education (Cooper, 2015; Seery, 2018). For students to be able to explain the how, what, and why of a process, they typically must have a strong understanding of the particulate level and the ability to connect it to the macroscopic level of Johnstone’s triangle (Johnstone, 1982). There is evidence that students are able to achieve this level of success through the use of carefully constructed curricular materials and with suitable question prompts (Becker, Noyes, & Cooper, 2016; Cooper, Kouyoumdjian, & Underwood, 2016). The development of such depth of explanation for acid-base chemistry concepts in general chemistry has been shown to carry into organic chemistry, highlighting the importance of building a strong foundation for introductory chemistry topics (Crandell, Kouyoumdjian, Underwood, & Cooper, 2019).

Gases provide a particularly rich opportunity for students to develop their mental model of particles and their motion. From an ideal gas perspective, the interactions of gas particles are straightforward: they interact through elastic collisions with other particles and the walls of the container. This simplified set of interactions means they can be readily represented in a computer simulation, which may help students visualize the particles’ motion and their interactions, a known challenge for students (Gabel, Samuel, & Hunn, 1987), and assist in addressing the naïve ideas discussed previously.
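
To make concrete how readily these interactions can be represented, the following minimal sketch (our illustration, not the simulation used in this study) models unit-mass ideal-gas particles in a two-dimensional box and estimates pressure from the momentum transferred in elastic wall collisions; particle-particle collisions are omitted, since an ideal gas treats particles as non-interacting points.

```python
# Minimal 2D kinetic molecular theory sketch: unit-mass particles in a box,
# elastic wall collisions only. Illustrative; not the Pearson simulation.
import numpy as np

rng = np.random.default_rng(0)
N, L, dt, steps = 100, 1.0, 1e-4, 5000   # particles, box side, time step, steps

pos = rng.uniform(0, L, size=(N, 2))     # random initial positions
vel = rng.normal(0, 300.0, size=(N, 2))  # velocity components drawn normally,
                                         # as in a Maxwell-Boltzmann distribution
impulse = 0.0                            # |momentum| delivered to the walls
for _ in range(steps):
    pos += vel * dt
    for axis in range(2):
        hit = (pos[:, axis] < 0) | (pos[:, axis] > L)
        impulse += 2.0 * np.abs(vel[hit, axis]).sum()  # elastic bounce: p -> -p
        vel[hit, axis] *= -1
        pos[:, axis] = np.clip(pos[:, axis], 0.0, L)

pressure = impulse / (steps * dt) / (4 * L)      # force per unit wall length (2D)
v_rms = np.sqrt((vel ** 2).sum(axis=1).mean())   # root mean square speed
print(f"pressure ~ {pressure:.0f}, v_rms ~ {v_rms:.0f}")
```

Halving L or increasing the velocity scale raises the computed pressure, mirroring the volume and temperature relationships that such simulations let students explore.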

Theoretical framework

Research in many fields has consistently demonstrated that students learn more when they are actively engaged in learning activities (Chi & Wylie, 2014). This aligns well with constructivist learning theory, in which learners construct understanding through their own experiences with phenomena (Tobin, 2009). Further, research has demonstrated that struggling with a problem prior to formal instruction results in better retention and greater knowledge transfer than receiving instruction in problem solving followed by practice time (Brown, Roediger, & McDaniel, 2014; Schwartz, Chase, Oppezzo, & Chin, 2011). This supports providing students opportunities to explore core chemistry concepts prior to formal instruction using scaffolded screencast/simulation assignments such as the ones we describe here, particularly since classroom use of animations and simulations has been shown to support students’ development of particulate-level mental models of chemical processes (Williamson & Abraham, 1995), which have been identified as important for conceptual understanding in chemistry (Adbo & Taber, 2009; Liu & Lesniak, 2005).

However, when asked to use such simulations on their own outside of a classroom environment, students lose access to the instructor as a resource for clarification and may find it more challenging to remain on task. When an instructor uses a simulation in class, the instructor typically provides the narrative that is critical for helping students make sense of the visualization and directs student attention to the important features of the images (Mayer & Anderson, 1991). According to cognitive load theory, this guidance is critical to supporting student learning because a person’s working memory can only accommodate a limited number of novel interacting elements; thus, instruction should be designed to focus the learner’s attention on the most relevant features (Paas, Renkl, & Sweller, 2003). This is especially important for complex chemistry simulations, which can have many different elements (for example, particle representations, graphs, and variable controls), making it hard for a novice learner to know what to focus on.

A screencast of an expert running the simulation can provide such guidance, but this approach loses the opportunity for students to interact directly with the simulation. Further, cognitive load theory tells us that careful design of instructional materials can reduce extraneous cognitive load (load that focuses attention on things that do not directly contribute to construction of the desired concept) and enhance germane cognitive load (load related to the processing, development, and automation of schemas), which supports learning (Chandler & Sweller, 1991; Paas et al., 2003). Designing scaffolded instructions and screencasts that focus learners on features directly related to the desired concepts, such as patterns in the data or particle interactions, can provide a level of support similar to what an instructor typically provides when using simulations in the classroom.
In previous studies, we have noted that both students’ guided out-of-class use of a simulation and their viewing of a screencast led to increased understanding of the core underlying chemistry concepts and can provide a useful common experience upon which to build subsequent instruction (Herrington, Sweeder, & VandenPlas, 2017; Sweeder, Herrington, & VandenPlas, 2019; VandenPlas, Herrington, Shrode, & Sweeder, 2020). In the case of kinetics, we observed that classes that had statistically different preclass scores ended up at equivalent levels after completing the activity (Sweeder et al., 2019). In some cases, the screencast has shown potential to produce greater learning gains than simulation use alone (Herrington et al., 2017; VandenPlas et al., 2020), and screencasts allow for the inclusion of additional learning materials that can be directly targeted to specific challenges (VandenPlas et al., 2020).

Research questions

Given that supporting students’ conceptual development outside of the classroom requires different considerations and scaffolding than instructor-facilitated development in a face-to-face class, the following research questions guided this study.

  1. What are the impacts of outside-of-class usage of simulations or screencasts on students’ conceptual understanding of the behavior and interactions of gas particles?

  2. How and where do students allocate attention when interacting with a simulation, as compared to a screencast, when coupled with a guided assignment?

Methods

To answer these research questions, a two-part study involving both classroom data collection and eye-tracking interviews of student usage was used. Curricular materials, using a readily available online simulation, were developed to support students in each of two different modes: individual manipulation of the simulation and viewing of an instructor-led screencast. The first part of this study evaluated in situ use of these curricular materials to address research question 1 (RQ1). The second part was an eye-tracking study designed to identify any differences in student usage patterns between the simulation and screencast groups (RQ2). If any differences exist, these patterns may shed light on differences in learning outcomes observed in the much larger classroom portion of the study.

Assignment design

After exploring a variety of potential simulations, the kinetic molecular theory of gases simulation from Pearson Education (Kinetic Molecular Theory of Gases, n.d.) was selected for its effective and accurate representations of the particulate motion of gases and the related graphical representation. The simulation shows a glass container on a hotplate with a piston. The simulation allows students to switch between macroscopic and submicroscopic views, and students can also modify the following parameters: volume or pressure of the container, the number of moles of helium, neon, and argon atoms, and temperature. Within the submicroscopic view (which was used in this study), the students are provided with a histogram indicating the speeds of each of the atoms present and the root mean square speed of any of the gases present.
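
For reference, the root mean square speed that the simulation reports corresponds to the standard kinetic-theory relationship (a textbook result, stated here for context rather than taken from the simulation’s documentation):

$$v_{\mathrm{rms}} = \sqrt{\overline{v^{2}}} = \sqrt{\frac{3RT}{M}}$$

where R is the gas constant, T the absolute temperature, and M the molar mass; at a fixed temperature, the lighter helium atoms therefore show a larger root mean square speed than neon or argon.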

The development of effective scaffolding to guide students’ usage of this simulation and direct their attention towards the patterns and interactions most useful in helping students construct a scientifically accurate mental model of gas particle behavior required a careful, iterative design and evaluation process as outlined in Fig. 1. Using the backward design approach (Wiggins & McTighe, 2005), the first step in this process was to identify key learning objectives related to the understanding of gas particle behavior and interactions. These objectives state that students will be able to:

  1. Use particulate motion to explain how the following properties of gases affect one another: volume, temperature, pressure, and moles of a gas.

  2. Explain what happens to the particles of a gas when it is heated or cooled.

  3. Describe how particles in the gas state move or interact with each other.

Fig. 1 Backward design used for assignment and screencast creation

Based upon these learning objectives, five pretest questions were developed to assess students’ prior knowledge (adapted from ideas found in Suchocki, 2010). The pretest questions were a mix of true/false, multiple choice, and written explanations aligned with common naïve student ideas found in the literature (Novick & Nussbaum, 1981) or observed by the authors. Five follow-up questions, two of which were identical matches to pretest questions, were developed and included at the end of the assignment; they are shown below in Table 1. These questions were aligned with the learning objectives and designed to be analogous to the pretest questions while also not being directly answered in the simulation.

Table 1 Matching questions from pretest to follow-up

Next, the assignments were designed by identifying the important instructions and questions that would support students’ use of the simulation in developing a coherent mental model for gas particle behavior. The first set of guided, exploratory questions was designed to draw students’ attention to important aspects of the simulation. A second set of questions was designed to help students explore the identified learning objectives. The assessment questions and assignment were reviewed by two other instructors for face validity.

The scaffolded simulation assignment was used as a script for the creation of a screencast, which was 6 min in length. The screencast was designed to direct students’ attention to key observations necessary for achieving the learning objectives but did not provide core chemistry explanations. A side-by-side comparison of a segment of the simulation assignment, screencast assignment, and script for the screencast is shown in Table 2. Both the screencast and simulation assignments had students answer similar questions throughout to provide experiences that were matched as closely as possible; however, the questions were not always identical. As shown in Table 2, the simulation students were given more labeled pieces because a previous iteration showed that many students had difficulty interpreting the graph without this additional information. In the screencast, this support was embedded in the narration. After completing the initial simulation assignment or screencast, all students were encouraged to use the simulation on their own to answer a final set of application questions.

Table 2 Simulation and screencast assignment prompts and accompanying narration

As outlined in Fig. 1, student responses were used to guide revisions to the assignments, assessment questions, and screencast. Often, these revisions were necessary to prevent students from misinterpreting the simulation in a manner that only became apparent after a set of students completed the assignment. For example, pilot data showed many students unexpectedly stating that changing the volume changed the average speed of the molecules. This misunderstanding likely arose when students decreased the container volume to its extreme lower limit: to a novice chemist, the more frequently colliding particles appeared to be moving faster because they changed speeds more often, despite graphical evidence that the average speed remained unchanged. To overcome this limitation of the simulation, the question was revised to state that the belief that the speed of the atoms changed was incorrect and to ask students to explain why this interpretation was wrong. After this change, further analysis of students’ responses showed that many students correctly stated that temperature must change to change the average speed of particles. The data presented in this study come from the sixth round of data collection; previous rounds were used to refine the simulation and screencast assignments as described above and to develop coding schemes.

Study design

The primary purpose of this research program is to develop effective classroom materials for learning important chemistry concepts. As such, measuring their efficacy within a course is key to understanding their value as well as to addressing RQ1. By evaluating the results of screencast and simulation assignments across several topic areas, it may then be possible to identify broadly applicable commonalities that may be fruitful for the development of learning materials for other concepts.

Although the classroom study provides insight into the outcome of the guided assignments in the form of student responses, the student behaviors leading to these responses are still a black box. In order to investigate the behavior of students engaging with the screencast and simulation assignments (RQ2), additional data were collected in the form of an eye-tracking experiment. This laboratory-based study provides additional insight into how students allocate attention while completing the guided assignments, helping to better explain the outcomes of the classroom study.

Classroom study

Participants and study design

Our Institutional Review Boards (GVSU Ref. No. 16-012-H; MSU x15-799e) approved this study as exempt. Consenting students who participated in the study were part of a general chemistry class at one of two large public institutions located in the Midwest region of the USA. At one institution, gases are presented at the end of the first semester of general chemistry, and data were collected from a single course in which the 46 students were randomly assigned to one of the two interventions (simulation or screencast). At the other institution, gas laws are presented at the very beginning (second week) of the general chemistry II course. There, data were collected from two different classes with two different instructors: one class completed the simulation assignment (N = 72) and the other the screencast assignment (N = 113). Given that the intervention took place very early in the semester and prior to any in-class instruction on the topic, we anticipated no meaningful impact from the difference in instructors.

As these assignments were designed to be introductory experiences, prior to the first class on gas laws, students had 10–15 min to complete an in-class pretest (Additional file 1: Pretest). Students were then given the assignment packet to complete as homework which was collected at the beginning of the next class. This assignment contained the link for either the screencast or simulation assignment with the full set of guided questions to answer (Additional file 2: Assignment).

Data analysis

Student responses were coded using the coding scheme provided (Additional file 3: Coding). Codes for each question were developed during the first iteration of the materials and revised for subsequent iterations as questions were modified. The coding scheme was developed using an inductive approach where two researchers (DGH and RDS) each took a separate set of responses and identified common themes in the student answers for each question. They compared themes and came to agreement regarding common themes or codes for each question. This coding scheme was then applied deductively by all coders (BLM, DGH, and RDS) to additional sets of responses. Any responses that did not clearly fit under a code were discussed to decide on final coding. As the assignments and questions were modified, a constant comparative approach was used to ensure codes for each question still captured the breadth of student response and the coding scheme was modified as needed (Strauss & Corbin, 1990). To allow for a quantitative measure of changes in student understanding from pretest to follow-up, each open-ended response had a single code that was considered correct and awarded full points. Scoring for question pair 1 additionally allowed for partial credit, as parts a, b, c, and the open-ended response (shown in Table 1) were each worth 1/4 point, while question pairs 2, 3, 4, and 5 were awarded either a full point or no points. Additionally, scoring of the follow-up question from pair 2 omitted the explanation portion of the student response so as to better align with the matched pretest question. The coded data file as an Excel document is provided (Additional file 4: Data).
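
As a compact illustration of this rubric, the sketch below encodes the scoring just described; the function and item names are ours, not drawn from the coding scheme in Additional file 3.

```python
# Illustrative scoring of the five question pairs: pair 1 awards 1/4 point for
# each of parts a, b, c, and the open-ended response; pairs 2-5 are scored
# all-or-nothing. Item names (q1a, ..., q5) are hypothetical.
def score_student(coded: dict) -> float:
    """coded maps item names to True (response matched the correct code)."""
    total = sum(0.25 for part in ("q1a", "q1b", "q1c", "q1_open") if coded[part])
    total += sum(1.0 for q in ("q2", "q3", "q4", "q5") if coded[q])
    return total  # 0 to 5 points

print(score_student({"q1a": True, "q1b": True, "q1c": False, "q1_open": False,
                     "q2": True, "q3": False, "q4": True, "q5": False}))  # 2.5
```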

Statistical analyses of student scores were completed using the Statistical Package for the Social Sciences (IBM SPSS Statistics 25 (Version 25), 2017) to determine how students’ understanding of the behavior of gases may have changed as a result of their use of the simulation and to compare to the screencast group. A mixed-design analysis of variance was used to determine if there was a significant difference between students’ pretest and follow-up scores and if there were any differences between the treatments. Normalized change scores were also calculated to evaluate student improvement (Marx & Cummings, 2007). Normalized change scores are calculated the same as normalized gain for increases but differ slightly for decreases so that they do not overweight decreases on scores that start near the top of the scale.
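
A sketch of the normalized change calculation, under our reading of Marx and Cummings (2007); the function below is illustrative and uses the 5-point scale of this study’s assessments.

```python
# Normalized change (Marx & Cummings, 2007): gains are scaled by the room left
# to improve, decreases by the room left to decline, so a small drop from a
# near-ceiling pretest is not overweighted.
def normalized_change(pre: float, post: float, max_score: float = 5.0):
    if pre == post:
        # students at the ceiling or floor on both tests are dropped
        return None if pre in (0.0, max_score) else 0.0
    if post > pre:
        return (post - pre) / (max_score - pre)  # identical to normalized gain
    return (post - pre) / pre                    # decrease, scaled by pretest

# Applied (illustratively) to the mean pre/post scores reported below:
print(round(normalized_change(2.20, 2.78), 2))   # 0.21
```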

Eye-tracking study

Eye tracking has been used in other chemistry education studies to measure student attention during interaction with a variety of multimedia learning materials, including electrostatic potential maps (Williamson, Hegarty, Deslongchamps, Williamson, & Shultz, 2013), NMR spectra (Topczewski, Topczewski, Tang, Kendhammer, & Pienta, 2016), and particulate-level simulations (Herrington et al., 2017; Sweeder et al., 2019; Tang & Abraham, 2016). This method uses reflected light to triangulate the position of the eye in space, which can then be mapped back to a particular location on the stimulus of interest. This allows a real-time measure of where an individual is looking, how long they spend viewing particular features of the stimulus, and the order in which they view these features. Researchers assume that where the eye fixates, the mind focuses (Just & Carpenter, 1980), allowing us to infer an individual’s cognition from the patterns of their eye movements.

Participants and study design

In this study, twenty-three participants were recruited from first-semester general chemistry courses at a large public institution located in the Midwest region of the USA. These students had not yet been exposed to the topic of gas laws in class and were recruited from classes that had not previously engaged with the screencast or simulation assignments. Participants completed the same paper-and-pencil pretest as those in the classroom study before being seated at the eye-tracking computer. A Tobii T60 eye tracker was used to collect binocular eye position at 60 Hz, with the stimulus displayed on a 17-in. computer monitor. To keep students from having to look away from the screen, the screencast or simulation was displayed on the top 60% of the screen, with the assignment displayed as a PDF on the lower 40%. Students were seated approximately 24 in. from the monitor and had control of both the assignment and the simulation/screencast through use of a mouse. Participants completed either the screencast or simulation assignment while seated at the eye tracker, reporting their answers to assignment questions verbally; these were recorded by an undergraduate student researcher and also captured via audio recording. This was followed by completion of the “going further” application questions, for which all students, regardless of treatment, were encouraged to use the simulation itself. Finally, students completed the follow-up questions offline, using a paper-and-pencil format. Students for whom the tracker collected less than 85% of eye position data were removed from analysis, resulting in 20 students for data analysis: 10 who received the screencast treatment and 10 who received the simulation treatment.

Data analyses

Eye position data were filtered to identify fixations, points in time and space where the eye is relatively still and focused on a particular object, during which the majority of processing is assumed to take place (Holmqvist et al., 2011). To accomplish this, the Tobii Fixation Filter was applied with a threshold of 35 pixels using the Tobii Studio software (Tobii, 2016). Fixations were then mapped onto areas of interest (AOIs), objects on the screen identified as relevant to the research questions and the simulation used in this study. Two major AOIs were used for a first level of analysis: assignment (lower 40% of the screen) versus resource (screencast/simulation: upper 60% of the screen). The resource was then further divided into smaller areas of interest to better probe student use of the resource itself: container, controls, graph, instructions, and molecules.
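
The fixation-to-AOI mapping can be sketched as follows. The 60/40 resource/assignment split is from the study; the display resolution and sub-AOI bounding boxes are hypothetical stand-ins for the AOIs actually drawn in Tobii Studio.

```python
# Map fixations to AOIs. Boxes are checked in order, so an overlapping box
# listed earlier (molecules, inside the container) takes precedence.
SCREEN_W, SCREEN_H = 1280, 1024          # assumed display resolution
SPLIT_Y = 0.60 * SCREEN_H                # resource above, assignment PDF below

RESOURCE_AOIS = [                        # (name, x0, y0, x1, y1), hypothetical
    ("molecules",    300, 150, 650, 450),
    ("container",    200, 100, 700, 500),
    ("controls",     700, 100, 1280, 380),
    ("graph",        700, 380, 1280, 614),
    ("instructions", 0,   0,   1280, 100),
]

def classify_fixation(x: float, y: float) -> str:
    if y >= SPLIT_Y:
        return "assignment"
    for name, x0, y0, x1, y1 in RESOURCE_AOIS:
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "resource_other"

# Summing fixation durations per AOI yields the dependent variable analyzed below.
fixations = [(400, 300, 250), (900, 800, 410), (900, 500, 180)]  # (x, y, ms)
totals: dict = {}
for x, y, dur in fixations:
    aoi = classify_fixation(x, y)
    totals[aoi] = totals.get(aoi, 0) + dur
print(totals)  # {'molecules': 250, 'assignment': 410, 'graph': 180}
```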

Total fixation time in each area of interest was summed and used as the dependent variable in a mixed-design ANOVA. Treatment (simulation vs screencast) was used as a between-subject variable to investigate differences in behavior among students using the different electronic resources. To compare time spent viewing the individual AOIs, AOI was used as a within-subject variable. Finally, to probe how student behavior changed during the less scaffolded “going further” questions, time (assignment vs going further) was added as a second within-subject variable.

Results and discussion

Learning gains

Pre- and post-scores were calculated using the five matched pretest and follow-up questions. Learning gains for students in both the simulation and the screencast groups were evaluated using a 2 (treatment: simulation, screencast) × 2 (assessment: pretest, follow-up) mixed-design ANOVA. The ANOVA indicated a statistically significant, large main effect for assessment for all students, with a 0.58 point increase from 2.20 to 2.78 on a 5-point scale (F1,229 = 39.730, p < 0.001, ηp2 = 0.148) from pre to post. No main effect for treatment or interaction effect between treatment and assessment was found. However, upon further analysis discussed below, we recognized that question pair 3 (Table 1) was not well matched: the follow-up question gave an underestimate of students’ post knowledge compared to prior knowledge, as it asked them to go beyond the scope of the pretest question. When this question was omitted from the analysis, the same mixed-design ANOVA again indicated a statistically significant, large main effect for assessment (F1,229 = 159.941, p < 0.001, ηp2 = 0.411), with mean scores increasing from 1.44 to 2.42 on the resulting 4-point scale, but again no main effect for treatment or interaction effect between treatment and assessment. Figure 2 illustrates the similarity of the pretest and follow-up scores of the two treatment groups, indicating that both sets of students had similar learning gains.
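
For readers without SPSS, the same 2 × 2 analysis can be sketched with the open-source pingouin package; the file name and column names below are our own placeholders for a long-format export of the coded data (one row per student per assessment).

```python
# 2 (treatment) x 2 (assessment) mixed-design ANOVA, a pingouin equivalent of
# the SPSS analysis described in the Methods. Column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("coded_scores.csv")  # columns: student, treatment, assessment, score

aov = pg.mixed_anova(data=df, dv="score", within="assessment",
                     subject="student", between="treatment")
print(aov[["Source", "F", "p-unc", "np2"]])  # main effects and interaction
```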

Fig. 2 Box plot of students’ pretest and follow-up scores on a 4-point scale by treatment. Note: average scores are represented by black squares

Results by learning objectives

To further understand the learning gains shown by students on each of the three learning objectives (LOs) underlying the development of these activities and listed in “Assignment design,” we analyzed each pair of matched questions individually. Each pretest question had an equivalent follow-up question aligned with one or more LOs, as shown in Table 1. Each pair of questions was analyzed using an individual mixed-design ANOVA (mean scores shown in Table 3 for each question pair). Analysis demonstrated main effects for assessment (pretest to follow-up) for all question pairs, with no main effect for treatment or interaction effect between treatment and assessment for any of the question pairs. This indicates that both treatments supported student learning equally well. Together, questions 1, 2, and 4 give a strong indication that students made considerable gains on each of the individual LOs.

Table 3 Summary of results by question

Since students made learning gains on each of the learning objectives, question pair 3, which exhibited a decrease in student scores, warrants further examination. The apparent decrease in learning on this question seems surprising given that results from the other questions suggest that students improved on each of the two individual learning objectives it tested. On the pretest question in this pair, students did extremely well, with 218 students (95%) selecting the correct image. This suggests that the incorrect options were not appealing to the students, and it is possible that some students simply selected the correct answer after eliminating the other two rather than selecting it based on correct conceptual understanding; five students even explicitly stated this in explaining their reasoning. Yet, 174 (80% of the students who circled the correct picture) correctly explained that the particle speed increases. It is plausible that the presence of the correct visual image, with tails on the particles to indicate motion, activated their prior knowledge about the relationship between temperature and speed. The most common incorrect explanation when choosing the correct picture was that the particles were getting “excited,” which, while nonsensical, may be the result of students trying to craft an explanation for the picture they already determined was correct or students misusing the term “excited” in a chemistry context.

In contrast to the pretest question, which only expected students to identify the picture of particles moving faster, the follow-up question asked students to use that knowledge to construct a causal-mechanistic explanation for what happens to the pressure of a gas at fixed volume when the temperature is increased, a much more challenging task that combines two of the learning objectives. Only 83 students (36%) were able to accurately cite the increasing number and speed of collisions with the walls, but analysis of students’ incorrect answers shows that learning was not actually lost. An additional 91 students (39%) recognized some connection between particle speed and collisions; however, they were not explicit about these collisions being with the walls of the container, despite the assignment drawing their attention to this idea. An additional 23 students (10%) recognized that particle speed increased but did not say anything about collisions. All told, a total of 197 students (85%) demonstrated on the follow-up question that they understood the relationship between particle speed and temperature changes, essentially the same as the 174 students (76%) who answered correctly on the pretest. Due to these differences, as described above, we analyzed the data both with and without this question.

Question pair 5 asked students to explain why a weather balloon expands as it ascends, which addresses both LO1 and LO3. A complete causal-mechanistic explanation for this phenomenon requires two things: knowledge that atmospheric pressure decreases at higher altitudes and a strong model of the particulate-level behavior of a gas. As evident from student responses on the follow-up, at least 145 students (62.8%) possessed the required background knowledge regarding atmospheric pressure changing, so the low student performance on question pair 5 (0.29 out of 1) appears to be rooted in students’ mental models of gases. To understand students’ particulate-level models, we looked at answers to three specific assignment questions. Each question had the same format, asking how changing one variable (e.g., temperature), while the others are held constant, changes another variable (e.g., volume) and to explain the relationship based upon particle motion. This included follow-up question 3 and two other analogous questions. If students correctly answered all three questions, the odds that they provided a correct explanation on the weather balloon question were six and a half times higher than those who only answered one or two correctly (odds ratio = 6.65; 95% CI = 2.51–17.6). This strong association strongly supports the idea that a robust mental model of gases and their movement is required to provide complete causal-mechanistic reasoning for how a weather balloon changes as it rises. Yet the overall low level of success by the students indicates how difficult it is to achieve this level of understanding.
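
For transparency about the statistic, the sketch below computes an odds ratio with a Wald 95% confidence interval; the 2 × 2 counts are hypothetical, chosen only to land near the reported value, not the study’s actual cell counts.

```python
# Odds ratio with Wald 95% CI from a 2x2 table. Counts below are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: correct/incorrect in group 1; c, d: correct/incorrect in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(30, 40, 10, 89))  # OR ~ 6.7 with these illustrative counts
```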

Although we did not see any differences between treatments in students’ ability to provide a correct explanation, we did observe differences between treatments in the domain at which students explained the phenomenon (symbolic, macroscopic, or particulate). Although the pretest question prompt explicitly directed students to use “particle motion to explain why this occurs,” 96 students (41.6%) provided an explanation in the macroscopic domain and 99 students (42.9%) gave particulate-domain explanations. There were no differences between treatments (z-test, p > 0.05) in the domain of the pretest explanation, indicating that the treatment groups were equivalent before simulation or screencast instruction. Given that students had not received in-class instruction on gas behavior prior to completing the pretest or assignment, it is not surprising that a large portion of students provided macroscopic-domain answers. However, these are the students with the most potential to develop a particulate-level understanding from the screencast or simulation. If we only consider the students who provided pretest explanations in the macroscopic domain, we observe differences between the two treatments in the distributions of student answers on the identical follow-up question. For the simulation group, only 13.5% of students changed their answers to particulate-domain explanations on the follow-up, whereas 39.0% of the screencast group students moved from the macroscopic to the particulate domain. This represents more than twice the likelihood (odds ratio = 2.30; 95% CI = 1.19–4.45) that the screencast students shifted to the desired particulate-domain answer (although not all particulate-domain answers were entirely correct explanations of the phenomenon). This may indicate the ability of a screencast to help students traverse between different modes of representation, a difficult task for novice learners (Madden et al., 2011). Alternatively, the verbal narration of the screencast may assist students in more fully making sense of the simulation by engaging them in a dual-coding process (Mayer & Anderson, 1991).
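
The comparison of the two groups’ shift rates can likewise be sketched as a two-proportion z-test; the statsmodels call is real, while the counts are hypothetical values chosen to mirror the reported 13.5% and 39.0%.

```python
# Two-proportion z-test comparing the share of students in each treatment who
# shifted from macroscopic to particulate explanations. Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

shifted = [5, 16]    # simulation vs screencast students who shifted domains
at_macro = [37, 41]  # students per group who started at the macroscopic domain
z, p = proportions_ztest(count=shifted, nobs=at_macro)
print(f"z = {z:.2f}, p = {p:.4f}")  # 5/37 = 13.5%, 16/41 = 39.0%
```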

Eye-tracking study

Results from the classroom study demonstrate that both screencast and simulation treatments increase student understanding of gas behavior, but that some differences exist in the explanations students construct to support their understanding. The eye-tracking experiment supports these results by providing a glimpse into how the behavior of these groups compares, showing both overarching similarities in behavior that may account for similarities in performance and some finer-grained differences that may underlie the greater success in constructing explanations demonstrated by the students in the screencast treatment. Results of the 2 (treatment: simulation, screencast) × 2 (questions: initial assignment, going further) × 2 (AOI: assignment, electronic resource) mixed-design ANOVA show overarching similarities in behavior between the treatment groups. A significant interaction effect between questions and AOI (F1,18 = 38.713, p < 0.001, ηp2 = 0.683) shows that all students spend more time viewing the assignment than the electronic resource, and that this difference becomes significantly larger during the going further segment of the assignment. While students spend approximately 416 s (SD = 106 s) viewing the assignment initially, they devote slightly less time, 348 s (SD = 101 s), to viewing the electronic resource. While assignment viewing stays relatively constant during the going further questions (384 s, SD = 99 s), resource viewing drops to only 166 s (SD = 89 s). Although we see slight differences in how students use the resources during the initial assignment, when the two groups are accessing different resources, their behaviors almost completely align during the going further questions.

In fact, no main effect or interaction effect with treatment is seen for total fixation duration. This addresses an interesting question: whether showing students the screencast, without giving them any direct hands-on experience with the simulation, negatively impacted their ability to use it. These results suggest that, once students are directed back to the simulation to answer the going further questions, their behavior patterns are similar regardless of which resource they used to answer the initial assignment questions. Students clearly learn how to manipulate the simulation from watching the screencast and do not require additional time to get up to speed. These results align with the classroom study findings showing similar achievement outcomes for students in the simulation and screencast groups. Both groups share an overall behavior pattern of focusing attention primarily on the assignment, using the electronic resource as a reference as needed. When moving into the going further assignment, the behavior of these groups converges, triangulating the results of the classroom study.

When the electronic resource is broken down into smaller areas of interest based on the structure of the simulation (Fig. 3), mixed-design ANOVAs show that all students, regardless of treatment, spend more time focusing on the simulation controls and particulate-level image than they do on the remaining AOIs, including the graph, the image of the container, and other on-screen areas. This is true for both the original assignment (main effect for AOI: F5,14 = 59.057, p < 0.001, ηp2 = 0.955) and going further questions (main effect for AOI: F5,14 = 15.223, p < 0.001, ηp2 = 0.845).

Fig. 3 Mean fixation time (and standard deviation) in seconds during completion of the assignment

Although no main effect or interaction effect with treatment exists, we do observe small differences in line with what we might expect based on the experimental design. For example, simulation students spend more time viewing the controls during the initial assignment than do screencast students, but screencast students spend slightly longer looking at the controls in the going further assignment (when they are first manipulating the simulation themselves). We also observe the simulation students spending slightly more time on the particulate-level image than the screencast students, which is an interesting finding in light of their lesser use of particulate-level explanations in the classroom study. Finally, it is observed that screencast students spend slightly more time viewing both the graph and container (which displays both pressure and temperature) than do the simulation students. It is possible that this focus allows them to better integrate across representation levels, resulting in the increased ability to describe gas behavior on a conceptual level that was demonstrated in the classroom study.

Limitations

The data for the classroom portion of this study were collected at two institutions and with three different instructors; thus, there was not a consistent instructor for all students. However, previous iterations of data collection have consistently yielded similar results across both institutions regardless of the instructor involved. This suggests that the findings may be more broadly applicable, though both institutions share some common characteristics, as both primarily draw students from the same region. Since the learning activities were completed outside of the classroom, the classroom environment likely had only minimal impact; however, the courses involved in all iterations of this study focus on students developing deep conceptual understanding of chemical phenomena. Thus, different results might be observed with students who have experienced a chemistry curriculum focused more on mathematical calculations.

It is also worth recognizing that these materials are designed to provide an introduction to gases and how the properties of gases arise from particle motion. In this manner, they are meant to provide a foundational experience that instructors can then build upon. As such, they are not intended to provide full coverage of all content that students would likely encounter in most general chemistry courses.

Conclusions and implications

Students begin learning about gases in primary or secondary school, and the most recent US K-12 science standards specifically address particle-level understanding of gases (particles are spaced far apart and only interact when they collide with each other) (National Research Council, 2013). At the same time, students enter the university with varying levels of background knowledge, and numerous naïve ideas about the behavior of gas particles persist. In this project, a pretest focused on the relationship between the macroscopic properties and particulate behavior of gases showed little use of particulate-level (causal-mechanistic) reasoning, even by second-semester general chemistry students. By giving students experience with the manipulation of variables and allowing them to observe how this impacts particulate-level behavior, as well as providing graphical and mathematical representations of the resultant macroscopic behavior, both screencasts and simulations help improve student understanding of the behavior of gas particles. Though certainly not to the level of mastery, the use of this type of activity outside of the classroom prior to instruction can help students begin to build an accurate mental model of gas particle behavior as well as provide a common experience upon which to build subsequent in-class instruction.

As we see very little difference between the two interventions (simulation or screencast), either in allocation of attention as measured by eye tracking or in overall performance from pre- to post-test, it is possible to use either the screencast or the simulation as an out-of-class intervention to help bring students to a more homogeneous level of understanding before beginning classroom instruction. However, there may be some advantage to starting with the screencast, as a greater number of the screencast students moved from a macroscopic-level explanation on the pretest to a particle-level explanation on the post-assessment. It may be that the guided narration of the screencasts allows for better integration across multiple representations, as also suggested by the eye-tracking data. Further, the eye-tracking study indicated that students who initially watched the screencast were as easily able to manipulate the simulation to answer the follow-up questions as were the students who used the simulation from the beginning. This suggests that there were no cognitive disadvantages for students who did not manipulate the simulation variables on their own initially, though this may not hold for simulations that are more complex to manipulate.

Availability of data and materials

The raw data (student assignments and pretests) are not available, to protect student identities and confidentiality. The derived coded data set that supports the findings of this study is included as a supplementary information file.

Abbreviations

ANOVA:

Analysis of variance

AOI:

Area of interest

LO:

Learning objective

SD:

Standard deviation

References

  • Adbo, K., & Taber, K. S. (2009). Learners’ mental models of the particle nature of matter: a study of 16-year-old Swedish science students. International Journal of Science Education, 31(6), 757–786. https://doi.org/10.1080/09500690701799383.


  • Becker, N. M., Noyes, K., & Cooper, M. M. (2016). Characterizing students’ mechanistic reasoning about London dispersion forces. Journal of Chemical Education, 93(10), 1713–1724. https://doi.org/10.1021/acs.jchemed.6b00298.


  • Brook, A., Driver, R., & Briggs, H. (1984). Aspects of secondary students’ understanding of the particulate nature of matter. Centre for Studies in Science and Mathematics Education, University of Leeds, Leeds.

  • Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: the science of successful learning. Harvard University Press, Cambridge. http://www.jstor.org/stable/10.2307/j.ctt6wprs3.

  • Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332.


  • ChemSims. (n.d.). ChemSims: a research project made possible with support by the NSF. Retrieved March 6, 2020, from http://chemsims.com/

  • Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823.


  • Cooper, M. M. (2015). Why ask why? Journal of Chemical Education, 92(8), 1273–1279. https://doi.org/10.1021/acs.jchemed.5b00203.


  • Cooper, M. M., Kouyoumdjian, H., & Underwood, S. M. (2016). Investigating students’ reasoning about acid–base reactions. Journal of Chemical Education, 93(10), 1703–1712. https://doi.org/10.1021/acs.jchemed.6b00417.


  • Crandell, O. M., Kouyoumdjian, H., Underwood, S. M., & Cooper, M. M. (2019). Reasoning about reactions in organic chemistry: starting it in general chemistry. Journal of Chemical Education, 96(2), 213–226. https://doi.org/10.1021/acs.jchemed.8b00784.


  • Davidowitz, B., & Chittleborough, G. (2009). Linking the macroscopic and sub-microscopic levels: diagrams. In J. K. Gilbert, & D. Treagust (Eds.), Multiple representations in chemical education, (pp. 169–191). Netherlands: Springer. https://doi.org/10.1007/978-1-4020-8872-8_9.


  • Gabel, D. L., Samuel, K. V., & Hunn, D. (1987). Understanding the particulate nature of matter. Journal of Chemical Education, 64(8), 695. https://doi.org/10.1021/ed064p695.


  • Herrington, D. G., Sweeder, R. D., & VandenPlas, J. R. (2017). Students’ independent use of screencasts and simulations to construct understanding of solubility concepts. Journal of Science Education and Technology, 26(4), 359–371. https://doi.org/10.1007/s10956-017-9684-2.


  • Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (2011). Eye tracking: a comprehensive guide to methods and measures. Oxford University Press, New York.

  • IBM SPSS Statistics 25 (Version 25) (2017). Computer software. IBM Corp, Armonk. https://www.ibm.com/products/spss-statistics.

  • Johnstone, A. H. (1982). Macro- and microchemistry. School Science Review, 64, 377–379.


  • Just, M. A., & Carpenter, P. A. (1980). A theory of reading: from eye fixations to comprehension. Psychological Review, 87(4), 329–354.


  • Kelly, R. M., & Jones, L. L. (2007). Exploring how different features of animations of sodium chloride dissolution affect students’ explanations. Journal of Science Education and Technology, 16(5), 413–429. https://doi.org/10.1007/s10956-007-9065-3.


  • Kelly, R. M., & Jones, L. L. (2008). Investigating students’ ability to transfer ideas learned from molecular animations of the dissolution process. Journal of Chemical Education, 85(2), 303. https://doi.org/10.1021/ed085p303.


  • Kinetic Molecular Theory of Gases. (n.d.). Retrieved May 14, 2020, from http://dbpoc.com/pearson/chemsims/gold/kmtgold/KMT.php

  • Liu, X., & Lesniak, K. M. (2005). Students’ progression of understanding the matter concept from elementary to high school. Science Education, 89(3), 433–450. https://doi.org/10.1002/sce.20056.


  • Madden, S. P., Jones, L. L., & Rahm, J. (2011). The role of multiple representations in the understanding of ideal gas problems. Chemistry Education Research and Practice, 12(3), 283–293. https://doi.org/10.1039/C1RP90035H.


  • Marx, J. D., & Cummings, K. (2007). Normalized change. American Journal of Physics, 75(1), 87–91.


  • Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: an experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83(4), 484–490. https://doi.org/10.1037/0022-0663.83.4.484.


  • National Research Council (2013). Next generation science standards: for states, by states. The National Academies Press, Washington, DC.

  • Novick, S., & Nussbaum, J. (1981). Pupils’ understanding of the particulate nature of matter: a cross-age study. Science Education, 65(2), 187–196. https://doi.org/10.1002/sce.3730650209.


  • Nurrenbern, S. C., & Pickering, M. (1987). Concept learning versus problem solving: is there a difference? Journal of Chemical Education, 64(6), 508. https://doi.org/10.1021/ed064p508.


  • Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: recent developments. Educational Psychologist, 38(1), 1–4.


  • Pickering, M. (1990). Further studies on concept learning versus problem solving. Journal of Chemical Education, 67(3), 254. https://doi.org/10.1021/ed067p254.


  • Sanger, M. J., Campbell, E., Felker, J., & Spencer, C. (2007). “Concept learning versus problem solving”: does particle motion have an effect? Journal of Chemical Education, 84(5), 875. https://doi.org/10.1021/ed084p875.


  • Sanger, M. J., & Phelps, A. J. (2007). What are students thinking when they pick their answer? A content analysis of students’ explanations of gas properties. Journal of Chemical Education, 84(5), 870. https://doi.org/10.1021/ed084p870.


  • Sanger, M. J., Phelps, A. J., & Fienhold, J. (2000). Using a computer animation to improve students’ conceptual understanding of a can-crushing demonstration. Journal of Chemical Education, 77(11), 1517. https://doi.org/10.1021/ed077p1517.


  • Sawrey, B. A. (1990). Concept learning versus problem solving: revisited. Journal of Chemical Education, 67(3), 253. https://doi.org/10.1021/ed067p253.


  • Schwartz, D. L., Chase, C. C., Oppezzo, M. A., & Chin, D. B. (2011). Practicing versus inventing with contrasting cases: the effects of telling first on learning and transfer. Journal of Educational Psychology, 103(4), 759–775.


  • Seery, M. (2018, May). Take it easy on the equations. Let me explain …. RSC Education. https://edu.rsc.org/feature/take-it-easy-on-the-equations-let-me-explain-/3009025.article

  • Strauss, A., & Corbin, J. (1990). Basics of qualitative research: grounded theory procedures and techniques (2nd ed.). SAGE Publications, Inc.

  • Suchocki, J. A. (2010). Conceptual chemistry: understanding our world of atoms and molecules (4th ed.). Prentice Hall, Upper Saddle River.

  • Sweeder, R., Herrington, D., & VandenPlas, J. R. (2019). Supporting students’ conceptual understanding of kinetics using screencasts and simulations outside of the classroom. Chemistry Education Research and Practice, 20(4), 685–698. https://doi.org/10.1039/C9RP00008A.


  • Talanquer, V. (2010). Exploring dominant types of explanations built by general chemistry students. International Journal of Science Education, 32(18), 2393–2412.


  • Talanquer, V. (2013). When atoms want. Journal of Chemical Education, 90(11), 1419–1424. https://doi.org/10.1021/ed400311x.


  • Talanquer, V. (2018). Importance of understanding fundamental chemical mechanisms. Journal of Chemical Education, 95(11), 1905–1911. https://doi.org/10.1021/acs.jchemed.8b00508.


  • Tang, H., & Abraham, M. R. (2016). Effect of computer simulations at the particulate and macroscopic levels on students’ understanding of the particulate nature of matter. Journal of Chemical Education, 93(1), 31–38. https://doi.org/10.1021/acs.jchemed.5b00599.


  • Tobii AB (2016). Tobii Studio user’s manual.


  • Tobin, K. G. (2009). The practice of constructivism in science education. Routledge, New York.

  • Topczewski, J. J., Topczewski, A. M., Tang, H., Kendhammer, L. K., & Pienta, N. J. (2016). NMR spectra through the eyes of a student: eye tracking applied to NMR items. Journal of Chemical Education, 94(1), 29–37. https://doi.org/10.1021/acs.jchemed.6b00528.


  • Underwood, S. M., Reyes-Gastelum, D., & Cooper, M. M. (2016). When do students recognize relationships between molecular structure and properties? A longitudinal comparison of the impact of traditional and transformed curricula. Chemistry Education Research and Practice, 17(2), 365–380. https://doi.org/10.1039/C5RP00217F.


  • VandenPlas, J. R., Herrington, D. G., Shrode, A. D., & Sweeder, R. D. (2020). Use of simulations and screencasts to increase student understanding of energy concepts in bonding. Manuscript submitted for publication.

  • Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Association for Supervision and Curriculum Development, Alexandria.

  • Williamson, V. M., & Abraham, M. R. (1995). The effects of computer animation on the particulate mental models of college chemistry students. Journal of Research in Science Teaching, 32(5), 521–534.


  • Williamson, V. M., Hegarty, M., Deslongchamps, G., Williamson, K. C., & Shultz, M. J. (2013). Identifying student use of ball-and-stick images versus electrostatic potential map images via eye tracking. Journal of Chemical Education, 90(2), 159–164. https://doi.org/10.1021/ed200259j.



Acknowledgements

The authors thank all the students and instructors for participating in the collection of the data. We also thank Alec Shrode for his thoughts about student responses.

Funding

This material is based upon work supported by the National Science Foundation under Grant Nos. 1705365 and 1702592.

Author information

Contributions

RDS, DGH, and JRV performed an initial literature review that was expanded by BLM. RDS, DGH, and JRV designed the initial pretest, assignments, and screencast. BLM assisted with revisions of the materials based on data analysis from initial iterations of the materials. BLM coded the data from the most recent iteration of the assignment that were used in this analysis. RDS and BLM analyzed the classroom data. JRV collected and analyzed the eye-tracking data. All authors contributed to the writing of the manuscript and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Ryan D. Sweeder.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Martinez, B.L., Sweeder, R.D., VandenPlas, J.R. et al. Improving conceptual understanding of gas behavior through the use of screencasts and simulations. IJ STEM Ed 8, 5 (2021). https://doi.org/10.1186/s40594-020-00261-0

