
Employing technology-enhanced feedback and scaffolding to support the development of deep science understanding using computer simulations

Abstract

Constructivist learning theories consider deep understanding of the content to be the result of engagement in relevant learning activities with appropriate scaffolding that provides the learner with timely and substantive feedback. However, any group of students has a variety of levels of knowledge and cognitive development, which makes providing appropriate individual-level scaffolding and feedback challenging in the classroom. Computer simulations can help meet this challenge by providing technology-enhanced embedded scaffolding and feedback via specific simulation design. The use of computer simulations does not, however, guarantee development of deep science understanding. Careful research-driven design of the simulation and the accompanying teaching structure both play critical roles in achieving the desired learning outcomes. In this paper, we discuss the capabilities of computer simulations and the issues that can impact the learning outcomes when combining technology-enhanced scaffolding and feedback with external teaching structures. We conclude with suggestions of promising research avenues on simulation design and their use in the classroom to help students achieve deep science understanding.

Introduction

Recent educational reform efforts emphasize the need to support students in developing deep science understanding at the K-12 and undergraduate levels. Deep science understanding is reflected in the learner's ability to apply their knowledge to make sense of phenomena and solve problems in real life (Kaldaras, 2020; Kaldaras et al., 2021a, 2021b, 2023; National Research Council [NRC], 2012a, 2012b). Efforts focused on supporting the development of deep science understanding in K-12 education include PISA (OECD, 2016) as well as national standards developed in Germany (Kulgemeyer & Schecker, 2014), Finland (Finnish National Board of Education, 2015), China (Ministry of Education, PR China, 2018), and the US (NGSS Lead States, 2013). At the undergraduate level, there has also been emphasis on supporting students' deep science understanding as opposed to fact-based memorization (NRC, 2012a, 2012b).

Constructivist theories establish that students develop deep understanding through cognitively engaging in relevant learning activities (NRC, 2007) combined with appropriate scaffolding (Piaget, 1929; Smith et al., 2006; Stanberry & Payne, 2018; Vygotsky, 1978) and timely, substantive feedback (Carless et al., 2011; NRC, 1999). A growing body of research suggests that computer simulations are a novel instructional medium with the potential to provide effective scaffolding and feedback to support learning among students from different educational backgrounds (Hilton & Honey, 2011; Irmak & Kaldaras, 2023). However, as we learn more about the power of computer simulations to support learning, new research avenues also emerge. For example, most studies in a recent review of computer simulation use across K-12 and undergraduate settings in the Physical Science disciplines indicated that employing computer simulations in inquiry-based learning environments effectively supports learners in developing conceptual understanding of the relevant science topics, often to a degree equal to or greater than traditional instruction (Irmak & Kaldaras, 2023). Most of the reviewed studies, however, focused on evaluating student learning gains resulting from simulation use. The authors suggest that more research is needed on how different types of technology-enhanced feedback and scaffolding can be used effectively to support learning beyond content understanding, including cognitive and emotional engagement and the acquisition of higher-order skills such as problem solving and knowledge application (Irmak & Kaldaras, 2023). This recommendation parallels an earlier review (Kim et al., 2007) that emphasized the need to scaffold inquiry processes when technology tools are used by students who lack confidence in self-directed learning, in order to avoid cognitive overload. In this context, the authors argue that relevant research is missing and that we should focus our efforts on investigating how to better support learners in fostering meaningful interactions with technology to support various aspects of learning (Kim et al., 2007).

Importantly, realizing simulations’ potential to support deep science understanding for diverse students is not guaranteed. First, the capabilities, scaffolding, and feedback within the simulation need to be carefully designed to support a given learning outcome (Adams et al., 2008a, 2008b; Bumbacher et al., 2018; Podolefsky et al., 2010a, 2010b, 2013). As discussed in the above paragraph, additional research is also needed to help us understand how different types of scaffolding and feedback support different aspects of learning.

Second, teachers and their associated teaching structure play a critical role in providing feedback and scaffolding to achieve the desired learning outcomes, layering on top of and ideally working in tandem with that provided by the simulation (Geelan & Fan, 2014; Moore et al., 2014). Prior work shows that in technology-enhanced learning environments, teacher coaching and questioning are especially useful when students have difficulties applying evidence in the context of scientific explanations (Land & Zembal-Saul, 2003). Further, prior studies also show that pairing technology-enhanced scaffolds with active support from teachers creates more effective learning environments that help students develop deeper science understanding (Ustunel & Tokel, 2018). In general, it has been shown that to be effective in supporting learning, technology-enhanced scaffolds need to be designed to provide support in conjunction with other scaffolds, including teacher-provided scaffolds (Puntambekar & Kolodner, 2005; Sharma & Hannafin, 2007). In this context, teacher support and explanations are critical in guiding students’ use of different forms of scaffolding (Lumpe & Butler, 2002). Kim et al. (2007) presented a framework that could guide teaching and learning in technology-enhanced science classes. The framework describes factors at the macro (systemic) level, the teacher level (teacher community), and the micro level (technology-enhanced class). Factors at the micro level include three types of interactions: (1) student–tool interactions (students meaningfully engage with the tool to accomplish tasks); (2) teacher–tool interactions (teachers make choices regarding the tools and the ways of using them to facilitate learning in the classroom); and (3) teacher–student interactions (teachers provide higher-order scaffolding such as hints and questions to facilitate students’ interactions with the tool and promote learning). Kim et al. point out that there is a growing need for research focused on developing a deeper understanding of best practices for teacher facilitation of student-centered inquiry in technology-enhanced learning environments (Kim et al., 2007). Therefore, while it is clear from prior work that the combination of technology-provided and teacher-provided scaffolding is effective in supporting the development of deep science understanding, additional research is needed to better understand best practices for organizing and balancing teacher and technology-enhanced scaffolds and feedback during the learning process.

To summarize, gaps in prior research suggest that we need more studies focused on designing ways to effectively leverage the capabilities of computer simulations (specifically, technology-enhanced feedback and scaffolding) to support deep understanding and higher-order skills such as problem solving among learners at varying levels of cognitive development. More research is also needed on how to effectively balance teacher-provided and simulation-provided scaffolding and feedback when supporting these outcomes. We hope that this paper will shed light on structuring the learning process and balancing technology-enhanced and teacher-led feedback and scaffolding when using computer simulations to support the development of deep science understanding among learners with diverse levels of cognitive development, which is almost always the case in any educational setting. To achieve these goals, we summarize prior work on effective simulation design and the accompanying teaching structure based on studies of learning from widely used educational simulations (PhET Interactive Simulations, 2022; Wieman et al., 2008). We also focus on conceptualizing and operationalizing the foundational concepts of scaffolding and feedback in the context of technology-enhanced learning with simulations to offer lenses for studying the role of these simulation design features in supporting learning. We will:

  • provide a conceptual framework for learning from simulations grounded in constructivism;

  • discuss how, through good design and technology-enhanced affordances, computer simulations can provide unique scaffolding and feedback to support knowledge construction and the development of deep science understanding;

  • discuss the role of teachers and teaching structures in facilitating learning experiences with computer simulations;

  • provide insights for effectively combining simulation and teacher scaffolding and feedback;

  • propose research directions to further guide how educational technology can achieve its full potential for transforming education across many grade levels.

We hope that discussing the above-mentioned topics will help the field better understand the promises that simulations hold for aiding teachers in supporting the learning process as well as the limitations inherent in simulation design features. We also hope that this discussion will help formulate specific future research directions in this area intersecting simulation and instructional design to support learning among cognitively diverse student populations.

The role of scaffolding and feedback in constructing understanding

Cognitive engagement lies at the core of learning and is implicit in constructivism. Students need to be cognitively engaged to learn any topic. Cognitive engagement involves students explicitly thinking about the phenomena and ideas to be learned. Engagement refers to the intensity and emotional quality of students’ involvement (Connell, 1990). The cognitive aspect of engagement is an important component of any effective learning process (Connell, 1990; NRC, 1999).

Deep cognitive engagement in instructional settings requires a combination of appropriate scaffolding of learning activities, and timely, well-tailored feedback about individual student progress (Ambrose et al., 2010; NRC, 1999). In constructivist theory, feedback and scaffolding are closely related. They are both essential for supporting a learner to build on existing knowledge and construct new knowledge during a learning activity (Ambrose et al., 2010; NRC, 1999). Scaffolding is a support system that defines and structures learning activities, facilitates providing students with timely, well-tailored feedback and helps students transition from assisted to autonomous task completion. The concept of scaffolding originates from Vygotsky’s learning theory that introduces the zone of proximal development (ZPD) as a cognitive gap between what students can do autonomously and what they can achieve with help (Vygotsky, 1978). Scaffolding within learning activities is essential for providing students with appropriate feedback. Feedback is essential for the learner to efficiently construct accurate new knowledge based on their existing knowledge.

The challenge for teaching is that any group of students has a variety of levels of knowledge and cognitive development. This variation makes providing appropriate scaffolding and feedback for each individual challenging in the classroom. Simulations provide a tool for addressing this challenge.

Types of scaffolding and feedback in learning with computer simulations

In the context of a computer simulation, scaffolds can be embedded in the design of a given computer simulation. Alternatively, scaffolding can be external to the simulation. Examples of external scaffolding include direct guidance from the teacher, or indirect guidance provided through the teaching structure such as the nature of the lesson tasks and prompts. Similarly, feedback can be either embedded or external.

Embedded scaffolding and feedback

Embedded scaffolding refers to scaffolding built into the overall design of the learning tool, in this case simulations, through its affordances, constraints, interface design and overall simulation structure (Adams et al., 2008b; Paul et al., 2013; Podolefsky et al., 2010a, 2010b, 2013). These simulation features support students in building their understanding.

Affordances are simulation features that permit certain actions. Simulations can, for example, allow students to dynamically change the values of key parameters (e.g., applied force), show or hide different representations (e.g., vectors, values, fields), reconfigure a scenario (e.g., adding another object or selecting a different skate track), use measurement tools (e.g., a voltmeter), or speed up or slow down time. Affordances invite and help direct student interaction and enable building and testing mental models (Bumbacher et al., 2018; Cock et al., 2021; Fratamico et al., 2017; Paul et al., 2013; Perez et al., 2017; Podolefsky et al., 2010b). Constraints are simulation features that restrict undesirable actions to ensure intended and productive use of a simulation. Simulations can, for example, limit the values on a slider, the possible locations of moveable elements, the number and kinds of objects available, and the parameters that can be adjusted. Constraints keep students engaged in pedagogically productive interactions and parameter spaces, thereby reducing cognitive load and enhancing learning outcomes.
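
As a concrete illustration (a minimal sketch with hypothetical names, not how PhET simulations are actually implemented), an affordance and its accompanying constraint can be encoded directly in a parameter control's configuration:

from dataclasses import dataclass

@dataclass
class ParameterControl:
    """Hypothetical slider-style control: an affordance (the parameter can be
    changed) paired with a constraint (only within a productive range)."""
    name: str
    value: float
    min_value: float   # constraint: lower bound of the slider
    max_value: float   # constraint: upper bound of the slider

    def set_value(self, new_value: float) -> None:
        # Clamp requested values so students stay inside the pedagogically
        # productive parameter space the designers intended.
        self.value = max(self.min_value, min(self.max_value, new_value))

# Example: an applied-force slider limited to 0-500 N
applied_force = ParameterControl("appliedForce", 0.0, 0.0, 500.0)
applied_force.set_value(1200.0)   # a request outside the allowed range...
print(applied_force.value)        # ...is clamped to 500.0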

PhET’s Energy Skate Park: Basics simulation (see Footnote 1) provides an example of both of these scaffolding elements. The first two screens (“Intro” and “Friction”) scaffold the conceptual space, allowing students to first explore energy on frictionless tracks before introducing the complication of friction. In the “Intro” screen, students have access to affordances to build understanding: multiple dynamic representations of energy, a grid to support systematic investigation, time controls to slow down the visualization and connect representations, and exploration of the skater’s start position and mass dependencies. The use of track-selection scenes constrains students to choose one of three productive scenarios to investigate, postponing the freedom to build any track until the last “Playground” screen. This constraining of track selections was driven by studies with middle school students showing that students got lost building unproductive tracks in Energy Skate Park. This led to the current version of the simulation, which uses “scenes” within the Intro screen to give a choice among three track scenarios, providing scaffolding that encourages more productive exploration. Providing specific internal scenes that draw students’ attention to important contrasting cases is another use of this type of embedded scaffolding in some simulations.

Embedded scaffolding goes beyond affordances and constraints to also involve features related to the interface design and the overall simulation structure. Simulations can, for example, scaffold the progression of learning, the layering of concepts, and complexity by explicitly using a sequence of screens, where each screen is an effective “hard stop” of exploration (Paul et al., 2013). An example mentioned above is the simulation’s two screens (“Intro” and “Friction”). There can also be options within a given screen that introduce additional ideas and corresponding complexity. Features that go beyond simple affordances and constraints are often the most helpful for providing suitable entry points for students at different levels of cognitive development.

Through the different types of embedded scaffolding discussed here, students have some agency in their exploration while being constrained and nudged toward productive choices; this guided agency encourages productive inquiry (Adams et al., 2008a, 2008b; Bumbacher et al., 2018; Norman, 2002; Podolefsky et al., 2010a; Ustunel & Tokel, 2018). These scaffolds influence what students explore (Salehi et al., 2015), in what order (Podolefsky et al., 2013), and how they explore (Bumbacher et al., 2015, 2018).

Embedded feedback is a tool for supporting knowledge construction with simulations. Embedded feedback is a change in the displayed information or behavior of the simulation that results from students' interaction with the simulation, i.e., their exploration of different settings for controls, parameters, configurations, etc. This feedback can be immediate and dynamic, allowing students to more easily build connections between variables and behaviors, as well as multi-faceted, using multiple dynamic representations to support knowledge construction. For example, in the Acceleration screen of PhET’s Forces and Motion: Basics simulation (see Footnote 2), when a student increases the applied-force slider, they immediately see the acceleration value increase and the object’s speed increase more rapidly. Even checking and unchecking the “Forces” checkbox, which shows or hides a dynamic vector representation of the magnitude and direction of the applied and friction forces, provides feedback that helps build the physical meaning of the word “Forces”.
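
A minimal sketch of how such embedded feedback can work (hypothetical function and quantities, not PhET code): every change to the applied-force control immediately recomputes and redisplays the dependent quantities, so the student sees the consequence of their action without delay.

def on_applied_force_changed(applied_force, friction_force, mass):
    """Hypothetical handler called whenever the student moves the applied-force
    slider; returns the updated values the simulation immediately displays."""
    net_force = applied_force - friction_force
    acceleration = net_force / mass
    return {"net_force": net_force, "acceleration": acceleration}

# Student drags the slider from 100 N to 200 N (50 N friction, 40 kg mass):
print(on_applied_force_changed(100.0, 50.0, 40.0))  # acceleration: 1.25 m/s^2
print(on_applied_force_changed(200.0, 50.0, 40.0))  # acceleration: 3.75 m/s^2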

Embedded scaffolding and feedback: methods for supporting knowledge construction for individual learners

Here we discuss how embedded scaffolding and feedback can reduce cognitive load, adjust the level of challenge, and support student agency for students with different levels of knowledge about a given topic.

Dynamic and interactive affordances to support model building. Simulations have dynamic and interactive affordances which allow them to present complex phenomena, through dynamic and visual representations, with much less cognitive load than other educational media (Kalyuga, 2009). For example, charge flow is represented by blue balls (electrons) moving through the elements of an electric circuit (see Fig. 1), and when the voltage is increased (see Fig. 2) they can be seen to speed up, resulting in a larger current through each element. Through these affordances, simulations eliminate the cognitive step, required by traditional media, of translating oral or written descriptions, often involving technical language, into these dynamic relational models.
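
The relationship students can recover from this visual feedback is Ohm's law for the circuit elements,

I = V / R,

so increasing the battery voltage V at a fixed resistance R produces a proportionally larger current I, which the simulation displays as the electrons moving faster through every element (compare Figs. 1 and 2).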

Fig. 1 Circuit construction kit: DC simulation. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

Fig. 2 Circuit construction kit: DC simulation (increased voltage). Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

Multiple entry points and exploration paths to match students’ level. The open exploratory design of a simulation, when supported by all the embedded scaffolds and feedback discussed above, offers multiple points of entry for students. Students are free to choose where to start exploring and how their exploration evolves, which tends to match their own curiosity, background knowledge, and interest. The different entry points support cognitive engagement by matching the level of challenge to each student’s individual skill level and readiness, supporting a state of flow (Csikszentmihalyi, 1990). In PhET’s Wave Interference simulation (see Footnote 3) on the topic of waves, for example, if students already understand amplitude, they might choose to focus on exploring frequency instead.

Adjustable complexity for students’ progression of understanding. As students become more familiar with a topic and build some understanding, the complexity of the simulation activity can be increased to match students’ growing understanding (often through the student’s own choice) and thereby facilitate productive learning by maintaining effective cognitive engagement. The complexity of a learning activity can be adapted by adjusting what can be manipulated and measured through offering different options and panels (Podolefsky et al., 2010b). In PhET’s Radio Waves and Electromagnetic Fields simulation, students typically find the two-dimensional representation of the waves overwhelming when they first see it (Fig. 3b), and often opt for the far simpler one-dimensional representation (Fig. 3a). After playing with the simulation for a little while, students understand what the arrows represent and how they relate to the motion of the electrons. They can then move to the 2D representation, intrigued by its richness and complexity and able to work out a deeper understanding of how radio waves depend on location relative to the transmitter throughout space (Adams et al., 2008a, 2008b).

Fig. 3 Radio waves and electromagnetic fields simulation. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0. (a) 1D representation of waves; (b) 2D representation of waves

Students’ agency for individualized engagement. A well-designed simulation allows students to drive what questions to ask and what to explore to match their level of understanding (Podolefsky et al., 2010a). This is a unique educational value provided by educational simulations and games. In a typical effective engagement with a simulation, the learner starts by changing a variety of variables until they find one for which the changes they make are fairly clearly associated with changes in behavior. They then explore this dependence to develop a mental model that predicts the observed changes. They then build on that, exploring the effects of changing other parameters to extend their mental model and see the relationships between different factors. If the changes they make lead to observed behavior that makes no sense, they adjust their explorations to find a level that works. Students starting out with better background knowledge about the topic covered by the simulation will more quickly move to exploring more complex behaviors and relationships, in keeping with their level of cognitive development. This leads to productive, curiosity-driven learning that no traditional medium can provide (Adams et al., 2008a, 2008b; Bland & Tobbell, 2016; Kowalski & Kowalski, 2012; Podolefsky et al., 2013).

The embedded scaffolding and feedback provided by the simulation can allow the teacher to spend more time on delivering higher-level support through external scaffolding and feedback. This includes providing students with opportunities to engage in productive dialogues with teachers and peers about what is done in class to deepen their understanding (Carless, 2016).

It is important to point out that the embedded simulation scaffolds that could potentially facilitate knowledge construction, discussed above, carry certain limitations that need to be considered when designing a learning environment that leverages them. These limitations stem from the bounds on learning and productive engagement that are inherent in the nature of the embedded scaffolding itself. Specifically, it is important to recognize that embedded scaffolds might sometimes be ineffective for specific purposes due to the nature of the content under study, the learning goals, students' prior knowledge of the subject, or their familiarity with the technology itself, among other factors. It is therefore important to investigate the effectiveness of these embedded scaffolds under various circumstances to ensure that they can meaningfully and productively facilitate the learning process. As pointed out above, in classroom settings the teacher may be the ultimate judge of the effectiveness and productive use of these embedded scaffolds.

For example, in the case of the “Dynamic and interactive affordances to support model building” type of embedded scaffolding, it is necessary to investigate what kinds of representations are meaningful and helpful for a diverse range of learners before deciding how certain aspects of the phenomenon under study will be represented in the simulation. This process might involve talking to subject-matter experts and educators as well as testing multiple representations with a diverse range of learners under various learning conditions (Adams et al., 2008a, 2008b). Similarly, for the “Multiple entry points and exploration paths to match students’ level” type of embedded scaffolding, it is important to investigate whether the multiple entry points provided by the simulation afford meaningful and seamless engagement for learners at different levels of cognitive development, and what kind of additional scaffolding structure is needed to facilitate meaningful engagement with these various entry points. This is especially important because we know from prior work that learners can struggle in self-directed learning supported by simulations (Kim et al., 2007). Similar studies should be conducted to evaluate the effectiveness of the “Adjustable complexity for students’ progression of understanding” type of embedded scaffolding.

Special attention should be dedicated to studying the types of experiences and prior knowledge students need in order to productively engage with a simulation at various levels of complexity. This idea goes back to opportunity to learn as discussed by Gee (2008), who emphasizes that learning opportunities are productive if diverse learners can meaningfully engage with them. Therefore, to ensure that a given simulation offers opportunities for learners at various stages of understanding to engage meaningfully, it is important to investigate how the simulation functions with these diverse student populations under various learning conditions. Similar limitations and future research avenues exist for the “Students’ agency for individualized engagement” provided by embedded simulation scaffolds. In this context, it is important to investigate the types of embedded scaffolds that facilitate agency for individualized engagement for learners from diverse academic, cultural, and other backgrounds.

To summarize, while embedded simulation scaffolds hold considerable potential for supporting knowledge construction in individual learners, further research focusing on potential limitations of these scaffolds (discussed above) is needed to better understand how to ensure the full potential of embedded scaffolds is realized in the classroom.

External scaffolding and feedback

Teachers provide various external scaffolds and feedback when their students use simulations. (1) They design or choose the learning activity to be done with the simulation, including deciding on the learning objectives for the activity and how the activity should be orchestrated (timing, individual or group work, etc.). (2) They monitor their students’ progress and provide external feedback, individualized or whole-class. (3) They help solidify what the students learned from the simulation, putting it into the context of the unit being covered in class, often by facilitating follow-up discussion.

However, some external scaffolding and feedback can be detrimental (Quintana et al., 2018; Reiser, 2004). Too much external scaffolding can negatively affect students’ critical thinking abilities and student agency (Hmelo-Silver et al., 2007; Kim, 2021), by limiting what students can or will choose to explore on their own and preventing students from deeply engaging. Effective external scaffolding and feedback is also challenging when working with a large number of students with different individual needs (Tissenbaum & Slotta, 2019).

The effectiveness of simulations for supporting the development of deep understanding is highly dependent on how teachers use them, and multiple research studies have shown the importance of teachers in facilitating simulation use (Rutten et al., 2012; Smetana & Bell, 2012). The embedded scaffolding and feedback are never perfect and are not intended to stand alone (Perkins et al., 2012). Our research has found examples where some students did not explore critical variables or compare important contrasting cases (Salehi et al., 2015). The accompanying learning activities designed by the teacher can profoundly affect the learning achieved, by influencing the range of student exploration and by addressing any shortcomings in the simulation scaffolding (Chamberlain et al., 2014; Rehn et al., 2013).

The ways teachers use simulations in their classrooms are highly variable, from short demonstrations to virtual labs (Perkins et al., 2014; Price et al., 2018), and from telling students to freely explore the simulation and record a few observations to step-by-step instructions on which controls to manipulate and which data to record. We have also observed that many teachers do not fully appreciate the embedded scaffolding and feedback potential of simulations, which leads them to assign activities that undermine the opportunities for students to actively construct their own knowledge. The students end up following a procedure instead of engaging in sensemaking. Below we offer principles for good learning activity design and support.

External feedback (Carless et al., 2011; Tissenbaum & Slotta, 2019). The teacher can provide explicit, higher-level feedback about both the students’ exploration practices within the simulation and their interpretations and conclusions about the science. The teacher is also needed to facilitate discussion of the results of the simulation activity toward consensus and summary, and to help surface connections to other aspects of the course.

Giving students structured agency (Holmes et al., 2020). Providing students with structured agency, i.e., telling them they need to decide but not what those decisions should be, allows them to practice making problem-solving decisions and to develop deeper science understanding. Examples of external scaffolding that still allow students agency in their choices within the simulation include prompts asking students to make a decision about a critical element, achieve a given outcome (e.g., lighting a lightbulb), or collect data to answer a question. An effective activity design, for example, involves crafting a well-designed challenge that supports students in creating their own effective contrasting cases. Depending on the simulation and learning goals, providing higher-level external scaffolding in the form of contrasting cases or worked examples may be necessary to ensure that students manipulate the important variables and look at the right contrasts (Salehi et al., 2015).

Task framing (Chamberlain et al., 2014). The framing of the task given to students can change how they explore the simulation, as shown in the following experiment. Students were given three different sets of instructions with the Acid–Base Solutions pH scale simulation: (1) explore the simulation, (2) create a solution with a pH of X, or (3) figure out what affects the pH. Students in condition 1 tried a wide variety of things with the simulation, but with little incentive for sensemaking. Students in condition 2 homed in on a few key options and systematically changed them to achieve their task, but stopped exploring as soon as they were done and did not follow up on anything interesting they noticed. Students in condition 3 were less targeted but explored many more aspects of the simulation to construct their explanation. In the end, students in condition 3 were the most likely to explain what makes a solution acidic or basic in terms of the molecules involved. Therefore, framing the task so that it engages students in pursuing an overarching driving question broad enough to support them in exploring all aspects of the simulation is the most productive. For example, with the circuit construction kit: “Build a circuit that will light up a light bulb. Then figure out the general requirements for a circuit that will do this.”

Simulations as tools for learning problem solving and sensemaking

As mentioned above, deep science understanding is reflected in students' ability to make sense of phenomena and solve problems in real life (NGSS Lead States, 2013; NRC, 2012a, 2012b). Sections below provide examples of elevating embedded scaffolding in the simulations to support both problem solving and sensemaking for students at different levels of cognitive development.

Problem solving using computer simulations

In science and engineering, problem solving means investigating phenomena and building predictive models and theories about the natural world (science) and designing and developing models and systems (engineering) (NRC, 2012a, 2012b). An authentic problem that fosters such problem-solving competency is knowledge-rich: solving it requires applying conceptual knowledge, the steps to solve it are not clear in advance, and it has multiple potential solutions (Salehi, 2018). Computer simulations offer a unique learning environment that allows students at different levels of cognitive development to engage with such problems, with appropriate scaffolding and feedback, to develop problem-solving competencies (Barzilai & Blau, 2014; Eseryel et al., 2014).

One example of such a learning environment is the black box problem found in a special research variation of the Circuit Construction Kit simulation, Circuit Construction Kit: Black Box Study (Fig. 4). The task presents a simple circuit hidden in a black box, with four wires protruding from the box and visible to the problem solver. The goal is to infer the hidden circuit by interacting with the four wires. This has many elements of real-world troubleshooting (Jonassen & Hung, 2006). Solving this type of problem requires content knowledge and systematic strategies for collecting and interpreting the necessary data. Yet technology-enhanced scaffolding and feedback open opportunities for a wide range of learners.

Fig. 4 The Black Box Problem user interface. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

The simulation provides the necessary scaffolding structure in this troubleshooting task, reducing its complexity by simplifying the components involved (an example of scaffolding what to explore), reducing measurement errors, and making invisible information (e.g., electron flow) visible to students through animations (an example of scaffolding how to explore) (Kalyuga, 2009). At the same time, unlike typical end-of-chapter-style questions that provide all the data needed and expect students to plug in the numbers, the black box problem does not provide any data upfront. This is an important characteristic of an authentic problem-solving task. Students have to decide what data is needed and how to collect that data in the simulation.

Simulation-based problem-solving tasks can provide students with embedded feedback regarding their progress to support knowledge construction. One example we have studied is the special research variation of the mystery weight problem in the Balancing Act simulation (Fig. 5). The goal is to infer the weight of a gift by using a balancing beam with a central pivot point and a few bricks of known weights (e.g., 5 kg); the problem cannot be solved with a single brick. Students have to apply the concepts and equations of torque and consider both the weights of the objects and their distances from the pivot in order to balance the beam and solve for the gift's weight. When the supporting pillars are removed, the simulation displays how the beam tilts based on the weights of the objects placed and their positions on the beam, thus providing instant feedback embedded in the simulation environment. Such embedded feedback enables students with diverse levels of background knowledge to engage with the problem-solving task and bootstrap their understanding of the underlying physics concepts. At the same time, our previous research has found that the instantaneous nature of the feedback inadvertently led a subgroup of students to rely on an unsophisticated trial-and-error approach to problem solving. This insight underscores the need for future research to explore how to provide external feedback on students’ problem-solving strategies to ensure that they reflect on and adjust ineffective strategies.
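
The underlying condition students must apply is torque balance about the pivot. Writing the unknown gift weight as w_gift at distance d_gift from the pivot, and the known bricks as weights w_i at distances d_i on the other side, the beam balances when

w_gift × d_gift = Σ (w_i × d_i), so w_gift = Σ (w_i × d_i) / d_gift.

As an illustrative case (not one taken from the study), a gift placed 1 m from the pivot is balanced by one 5 kg brick at 1 m and another 5 kg brick at 0.6 m only if the gift weighs 5 × 1 + 5 × 0.6 = 8 kg, which shows why combining bricks and distances, rather than a single placement, is needed.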

Fig. 5 The Mystery Weight Problem user interface. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

Making sense of phenomena mathematically using computer simulations

Making sense of phenomena in real life is a critical component of science understanding (Zhao & Schuchardt, 2021). Sensemaking is a process of building or revising an explanation in order to “figure something out”, that is, to formulate a predictive mental model incorporating the mechanism underlying a phenomenon in order to resolve a gap or inconsistency in one's understanding (Odden & Russ, 2019). This requires learning environments that allow for dynamic and interactive exploration of phenomena, continuous accumulation of new evidence associated with changes in the parameters of the system under study, and observation of the resulting behavior to confirm or revise explanations. This type of environment is hard to achieve in a classroom with static scenarios. With their embedded scaffolding and feedback, computer simulations provide a suitable environment for supporting sensemaking toward achieving the simulation goals, even without external scaffolding (Adams et al., 2008b; Podolefsky et al., 2010a). However, developing deep science understanding, reflected in applying relevant disciplinary ideas in real life, requires engaging in sensemaking that goes beyond achieving the simulation goals. No research is available on the types of support students at different levels of cognitive development need to effectively engage in sensemaking for developing deeper science understanding.

Blended mathematical sensemaking in science is a special type of sensemaking that involves developing deep conceptual understanding of quantitative relationships and scientific meaning of equations describing a specific phenomenon (Kuo et al., 2013; Zhao & Schuchardt, 2021).

We studied students using the Acceleration screen of the Forces and Motion: Basics simulation to figure out a mathematical relationship that could describe what is happening in the simulation (Newton’s second law: F = m × a) (Kaldaras & Wieman, 2023a). The simulation affordances (the variables that students can change, the embedded feedback, and the overall design) helped most students identify the variables that should be included in the mathematical relationship and the qualitative patterns of how these variables change with respect to each other. Specifically, the simulation helped most students figure out that applied force, friction force, mass, and acceleration are the variables that should be included. They were also able to conclude that decreasing the mass while keeping the applied force the same causes greater acceleration when accounting for the force of friction. This was possible due to the combination of embedded scaffolding and feedback, which allowed students to engage in sensemaking and explore how these variables affect the resulting acceleration of an object.

Most students were not, however, able to figure out the exact mathematical relationship just by interacting with the simulation (Kaldaras & Wieman, 2023a). External scaffolding in the form of verbal questioning aimed at guiding students in noticing specific qualitative and quantitative patterns among the variables in the simulation was necessary to help even highly proficient students figure out the exact mathematical relationship. Results also showed that students with lower proficiency were not able to figure out the mathematical relationship just by interacting with the simulation. Those students required external scaffolding in the form of additional data based on the simulation, or a list of possible formulas along with the simulation to figure out the relationship (Kaldaras & Wieman, 2023a). Therefore, the combination of embedded scaffolding and feedback (in this case immediate change in numerical values of applied force, acceleration, and speed) with external scaffolding was observed to support mathematical sensemaking, allowing students at various proficiency levels to derive a mathematical relationship for Newton’s second law and relate it to the physical behavior.
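
The target relationship, written in terms of the quantities the simulation displays (applied force, friction force, mass, and acceleration), is

a = (F_applied − F_friction) / m, i.e., F_net = m × a,

so, for example, halving the mass while keeping the net force fixed doubles the displayed acceleration, the kind of qualitative pattern most students identified on their own before external scaffolding helped them pin down the exact form.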

We further built on this study to design an instructional model for teaching blended math–science sensemaking (MSS) by leveraging computer simulations (Kaldaras & Wieman, 2023b). This instructional model contains two main components: a previously validated learning progression (LP) for MSS and interactive computer simulations. The framework guides the design of learning activities around relevant computer simulations, where each task aligns to a specific LP level and engages learners in increasingly sophisticated types of MSS, as described by the LP, during the activity (Kaldaras & Wieman, 2023b). Where appropriate, learners receive feedback on their performance in the form of short text-based explanations of key aspects of the activity. Piloting this instructional approach with undergraduate freshman learners enrolled in introductory STEM courses, from both dominant backgrounds and backgrounds historically marginalized in STEM, showed that this scaffolding structure, which combines elements of embedded and external scaffolds, helps most learners attain the highest level of MSS as defined by the LP. Further, we have also found that this activity structure helps learners develop short- and long-term transferable MSS skills that they can successfully apply in STEM contexts different from those they investigated in the learning activities. These findings suggest that combining embedded and external scaffolds is a productive way of using simulations to support the development of deep science understanding, as reflected in a high degree of transfer. This result is especially promising in light of the growing body of literature suggesting that the majority of students struggle to engage in MSS and especially to transfer their understanding across domains (Becker & Towns, 2012; Kaldaras & Wieman, 2023a, 2023b; Redish, 2017). Our study was conducted in the context of several STEM disciplines, including Physics and Chemistry, which suggests that this approach is appropriate for STEM-related disciplines. Future research is needed, however, to investigate whether this approach is appropriate for fields outside STEM.

In summary, simulations offer a way to provide problem-solving and sensemaking tasks that are optimally challenging and cognitively engaging for students. Simulation design features that are essential for supporting engagement and learning include incorporating characteristics of authentic problem solving and sensemaking, scaffolding the problem by sufficiently constraining the complexity to avoid overwhelming students, and providing immediate feedback. Interacting with computer simulations provides students with opportunities to fully engage in problem solving and sensemaking with less need for teacher involvement than in a regular, static classroom environment. However, appropriate external scaffolding and feedback are still required to best support the development of deep understanding.

Structuring simulation-based activity to support knowledge construction

As mentioned above, it is important to support teachers in using the power of embedded scaffolds to its fullest potential. In general, teachers should provide carefully chosen levels of guidance on how to use a simulation and push students to engage in self-guided inquiry to accomplish the task at hand. Important tips for writing such tasks and questions include using images from the simulation to design the task, structuring the task so that it can be completed using the simulation, and designing it to generate further inquiry (PhET Virtual Workshop for Teachers, 2023). An example of such a task is an open-ended question asking students what acceleration means and how it can be determined. For example, one could ask: What will happen to the acceleration if I increase the mass of the object to which the force is being applied? The question could be asked in the context of a student investigating a relevant simulation (Fig. 6). Similarly, one could ask a different question in the context of the same simulation, such as: Which of the following actions will cause an increase in acceleration? Choices could include increasing the mass, decreasing the mass, increasing the applied force, decreasing the applied force, increasing friction, and decreasing friction, among others. This structure would prompt students to investigate all the listed options and notice how each affects the acceleration.

Fig. 6 Forces and Motion: Basics. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

The same simulation can often also support interpreting different representations and notation. For example, Fig. 7 shows the same simulation but with the objects being pushed in the opposite direction and the applied force indicating both magnitude (163 newtons) and direction, shown by the negative sign. In this context, student understanding of vectors and vector notation could be supported by asking questions that relate the sign and magnitude of the applied force to the direction of the object's motion.

Fig. 7 Forces and Motion: Basics. Simulation image by PhET Interactive Simulations, University of Colorado Boulder, licensed under CC-BY-4.0

Students can engage with these tasks in different formats: individually, in small groups, or as a whole-class inquiry. These are just a few examples of how to design tasks that leverage simulation-based scaffolds. There are many more ways this can be accomplished in various educational settings to meet a variety of learning goals.

It is important to point out that exploring simulations in the ways discussed thus far can have several cognitive and societal advantages for learners compared to hands-on experiments. For example, a study conducted by Finkelstein et al. (2005) examined the effects of substituting a computer simulation for real laboratory equipment in the second semester of a large-scale introductory physics course. The direct-current circuit laboratory was modified to compare the effects of using computer simulations with the effects of using real light bulbs, meters, and wires. Two groups of students, those who used real equipment and those who used a computer simulation that explicitly modeled electron flow, were compared in terms of their mastery of physics concepts and their skills with real equipment. Students who used the simulated equipment outperformed their counterparts both on a conceptual survey of the domain and in the coordinated tasks of assembling a real circuit and describing how it worked. Moreover, students who used the simulated equipment felt safer experimenting with different conditions and were less worried about breaking something or hurting themselves (Finkelstein et al., 2005). Another study showed that whether simulations are more effective than hands-on experiments is likely a question of the affordances that each medium provides rather than of the medium itself (Bumbacher et al., 2018). Specifically, both simulations and hands-on experiments are likely to be effective if both learning environments are conducive to productive inquiry strategies (Bumbacher et al., 2018).

These are just a few examples of studies demonstrating the positive effects of substituting computer simulations for hands-on experiments. We believe that additional studies are needed to further investigate this topic across and beyond STEM fields. Specifically, in terms of cognitive aspects, simulations allow students to investigate unobservable or not directly obvious entities and concepts (like acceleration, current, and waves), and therefore simulations can represent conceptual models better than hands-on experiments. In addition, simulations can help lower cognitive load for learners by minimizing the need to set up and conduct experiments and to account for measurement errors, among other factors, which can represent significant cognitive barriers preventing novice learners from effectively engaging in the learning process. Finally, in terms of societal advantages, simulations can be used in situations where the equipment and resources necessary for conducting experiments are not available, thereby improving equitable learning outcomes. At the same time, given the availability of resources for conducting experiments, it is possible for students to effectively engage with the same content covered by the simulations through hands-on experiments. Hands-on experiments may also be better for different learning objectives, such as technical or observation skills. We believe that additional studies are needed to investigate the differences in the effectiveness of simulations and hands-on experiments for supporting learners at different levels of cognitive development.

Future research avenues

While simulations have shown great value for teaching, more research is needed on designing, building, and using intelligent technology that helps coordinate external and embedded scaffolding. For example, we need technology that can diagnose student thinking while they use the simulation and provide timely and targeted feedback both within the simulation for the student and to the teacher regarding student performance and learning strategies. We call for new research to address the following specific questions:

  • How do we design automated and responsive embedded feedback and scaffolding within a simulation (intelligent scaffolding that is monitoring the student)?

  • How do we evaluate student performance in real time and/or from log data of their interactions with the simulation?

  • How can we support teachers to optimally implement simulations in their teaching?

Advancing embedded feedback and scaffolding in simulations

There has been significant progress in the past decades on designing effective embedded scaffolding and feedback in simulations to support learners at different levels of mastery. Simulations represent a learning environment that is not itself individualized but in which the individual is empowered to individualize it for themselves. New technology is now emerging that allows building more customizable embedded feedback and scaffolding to better support learning at the individual student level. One such technology is PhET-iO, which provides data on student interactions with the simulation and allows real-time changes in what the student sees. This technology offers opportunities for studying customizations of the embedded scaffolding, including but not limited to: how to optimize the embedded scaffolding for the task and student, and how to measure the impact of different starting states or of different simulation versions or affordances within a screen. All these research avenues are important for better understanding how to customize embedded scaffolding to meet the needs of individual learners. An example of intelligent scaffolding would be monitoring student interactions with the simulation and then popping up a dialog when students touch a control you want them to interact with, or giving them positive feedback when they achieve a challenge posed to them.
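
A minimal sketch of what such intelligent scaffolding logic could look like, assuming a stream of named interaction events from an instrumented simulation (the event names and messages below are hypothetical illustrations, not the PhET-iO API):

def scaffold_events(event_stream, target_control="frictionSlider",
                    challenge_goal="circuitComplete"):
    """Watch simulation interaction events and yield scaffolding messages to
    display. Event names are illustrative only, not actual PhET-iO identifiers."""
    shown_dialog = False
    for event in event_stream:
        # Contextual dialog the first time the student touches the control
        # the activity is designed around.
        if event == f"user_changed:{target_control}" and not shown_dialog:
            shown_dialog = True
            yield f"You changed {target_control} -- what do you notice about the motion?"
        # Positive feedback when the posed challenge is achieved.
        elif event == f"goal_reached:{challenge_goal}":
            yield "Nice work -- you met the challenge! Can you explain why it worked?"

# Example usage with a tiny hypothetical event log:
events = ["user_changed:mass", "user_changed:frictionSlider",
          "goal_reached:circuitComplete"]
for message in scaffold_events(events):
    print(message)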

Advancing evaluation of students’ performance and individualized feedback

Simulations with technology such as PhET-iO provide a way to capture students' strategies and practices through log files of all of a student’s actions. This information can be analyzed to reveal meaningful patterns in students' performance and to evaluate the effectiveness of the adopted practices and strategies. Our research has explored how to automate the measurement of problem-solving practices using log data (Wang et al., 2021). We extracted semantically meaningful features from the log data of college students solving a complex problem and mapped these features to both students' solution quality and the effectiveness of problem-solving practices as coded by human researchers. We found that features derived from the log data corresponded to the scores of specific practices assigned by human coders through qualitative observation.
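
A minimal sketch of the kind of feature extraction this involves, assuming a simple chronological log of (timestamp, action, target) tuples (the log format and feature names here are hypothetical illustrations, not the features used in Wang et al., 2021):

from collections import Counter

def extract_features(log):
    """Turn a chronological list of (timestamp, action, target) tuples from a
    student's simulation session into candidate indicators of problem-solving practice."""
    actions = [a for _, a, _ in log]
    targets = [t for _, t, _ in log]
    counts = Counter(actions)
    duration = (log[-1][0] - log[0][0]) if log else 0.0
    return {
        "total_actions": len(log),
        "distinct_controls_used": len(set(targets)),
        "measurements_taken": counts.get("measure", 0),
        # Ratio of measurements to parameter changes: a rough proxy for whether
        # the student checks outcomes before changing something else.
        "measure_to_change_ratio": counts.get("measure", 0) / max(counts.get("change", 1), 1),
        "session_duration_s": duration,
    }

# Example log: (seconds since start, action type, control or tool used)
log = [(0.0, "change", "voltage"), (4.2, "measure", "ammeter"),
       (9.8, "change", "resistance"), (13.1, "measure", "ammeter")]
print(extract_features(log))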

While encouraging, current work on automating the evaluation of student interactions with computer simulations is far from complete. More research is needed on: (a) establishing efficient workflows to extract useful information from the log data of diverse tasks; (b) improving diagnostic and feedback accuracy; and (c) evaluating multi-faceted higher-order competencies and generalizing methods across different simulations and problem-solving tasks. This foundational work is critical for designing more individualized embedded scaffolds and real-time feedback to help students develop effective learning practices. It also leads to the following research questions:

  • What information should be captured? How best to analyze it?

  • How to deal with sparse data sets?

  • What information is most useful to send to the students and when?

  • What information is most useful to send to the teachers, and when?

In analyzing and interpreting the vast amount of information generated in simulation-based tasks, current AI deep learning (DL) models do not have a straightforward way of handling such complex data sets with potentially multiple indicators in the output. Therefore, new approaches need to be developed to enhance currently available DL models with additional information rather than relying exclusively on one-directional (with no cycles or loops) operations for decision-making. This will help improve diagnostics related to capturing problem-solving strategies, sensemaking, and learning in general. AI also faces the challenge that it cannot access the classroom tasks and teacher facilitation. Since context matters greatly, new approaches need to be developed to manage AI’s limited access to context when providing automatic feedback.

Research on optimizing algorithms for capturing student strategies and ways of thinking at different levels of cognitive development (Kaldaras & Haudek, 2022; Kaldaras et al., 2022) also has the potential to improve simulation design and use, helping identify specific learning patterns. This is an important step towards building effective, timely feedback and scaffolding within the simulation environment grounded in data on student interactions with simulations. Specifically, the underlying basis for decisions made by DL models is not easily interpretable by humans, which makes it hard to infer the validity of these decisions with regard to student learning outcomes. Future research is needed to make the decision process of these DL models more explicit and transparent by connecting the architectural components of the DL models to concepts directly related to student learning that are known to educators. Ideally, this research should aim to design AI-based models with multiple inputs (where student interaction with the simulation is just one input) that are capable of identifying student learning trajectories and learning patterns in the context of a given simulation, task, facilitation, etc., and that also provide means for interpreting the DL models’ decisions.
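
As one illustration of what such a multi-input model could look like (a hedged sketch in PyTorch, not an implemented or validated system), log-derived features and a simple encoding of the task and facilitation context can be processed by separate branches whose outputs are combined before the diagnostic decision:

import torch
import torch.nn as nn

class MultiInputDiagnosticModel(nn.Module):
    """Illustrative sketch: combine features derived from simulation log data
    with an encoding of the task/facilitation context, so the model's decision
    is conditioned on more than the interaction stream alone."""
    def __init__(self, n_log_features, n_context_features, n_outputs):
        super().__init__()
        self.log_branch = nn.Linear(n_log_features, 16)
        self.context_branch = nn.Linear(n_context_features, 8)
        self.head = nn.Linear(16 + 8, n_outputs)  # e.g., scores for several practices

    def forward(self, log_features, context_features):
        h = torch.cat([torch.relu(self.log_branch(log_features)),
                       torch.relu(self.context_branch(context_features))], dim=-1)
        return self.head(h)

# Example: 5 log-derived features, 3 context features, 2 diagnostic outputs
model = MultiInputDiagnosticModel(5, 3, 2)
scores = model(torch.randn(1, 5), torch.randn(1, 3))
print(scores.shape)  # torch.Size([1, 2])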

Advancing pedagogical and technical supports for teachers’ effective use of simulations in their instruction

Support for teachers in making effective use of simulations in instruction needs more investment and research along two main threads: (1) teacher professional development to provide the foundational pedagogical knowledge and strategies to use simulations effectively, and (2) technological solutions for providing teachers with the immediate insights and feedback they need about student use of simulations to make instructional decisions that best facilitate student learning. New work addressing teacher professional development needs, e.g., through a 4-course, 60-hour Coursera specialization, “PhET Virtual Workshop for Teachers” (2023), is providing new opportunities for teachers and for researchers.

Conclusion

The above discussion explains how simulation design can provide effective embedded feedback and scaffolding that supports learners at different levels of cognitive development in building deep understanding. Emerging simulation technology offers unique opportunities to further expand this research field by affording rapid design iterations, insights into student actions, easy integration of simulations with various learning platforms, and readily available AI technologies and algorithms that can guide both research and learning. We hope that the ideas outlined here can help future researchers leverage new technology in studying how to conceptualize and operationalize the foundational concepts of technology-enhanced scaffolding and feedback to support learners at different levels of cognitive development in building deep understanding.

It is important to point out that while technology-enhanced feedback and scaffolding offer promising avenues for enhancing science education through computer simulations, they are not without limitations. The effectiveness of feedback and scaffolding depends heavily on the quality of the instructional design and its alignment with learning objectives, which can vary by discipline and teaching context. It also depends on the quality of the scaffolding embedded in the simulation and on how well the simulation's affordances and constraints align with the instructor's learning objectives. Further, students may face technical challenges or lack access to necessary technology, hindering their engagement and learning experiences. Students may also perceive technology-mediated feedback as impersonal, which can hinder their uptake of that feedback. These limitations underscore the importance of thoughtfully integrating and supplementing technology-enhanced strategies with traditional teaching methods to ensure holistic and effective science education.

There are several limitations associated with this study. First, our research focused on simulations in STEM subjects, which could limit the generalizability of our findings to non-STEM simulations. Second, our analysis did not consider simulations powered by emerging technologies such as augmented reality (AR) and virtual reality (VR) (Makransky et al., 2020). These rapidly evolving technologies could afford unique interactions and novel opportunities for scaffolding and feedback. Finally, we did not extend our analysis to the use of simulations in informal educational settings such as libraries, museums, or homes. These informal environments may differ significantly from science classrooms in how simulations are used and experienced by learners and present new design considerations. Despite these limitations, our study makes a significant contribution to the field of STEM education by showing how simulations can be effectively designed and used to achieve deeper learning outcomes for students at varying levels of cognitive development.

Data availability

No data was analyzed or generated for this manuscript.

Notes

  1. https://phet.colorado.edu/sims/html/energy-skate-park-basics/latest/energy-skate-park-basics_all.html.

  2. https://phet.colorado.edu/sims/html/forces-and-motion-basics/latest/forces-and-motion-basics_en.html?screens=4.

  3. https://phet.colorado.edu/sims/html/wave-interference/latest/wave-interference_all.html.

References

  • Adams, W. K., Reid, S., LeMaster, R., McKagan, S. B., Perkins, K. K., Dubson, M., & Wieman, C. E. (2008a). A study of educational simulations part I-Engagement and learning. Journal of Interactive Learning Research, 19(3), 397–419.

  • Adams, W. K., Reid, S., LeMaster, R., McKagan, S., Perkins, K., Dubson, M., & Wieman, C. E. (2008b). A study of educational simulations Part II–interface design. Journal of Interactive Learning Research, 19(4), 551–577.

  • Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley & Sons.

  • Barzilai, S., & Blau, I. (2014). Scaffolding game-based learning: Impact on learning achievements, perceived learning, and game experiences. Computers & Education, 70, 65–79.

  • Becker, N., & Towns, M. (2012). Students’ understanding of mathematical expressions in physical chemistry contexts: An analysis using Sherin’s symbolic forms. Chemistry Education Research and Practice, 13(3), 209–220.

  • Bland, A. J., & Tobbell, J. (2016). Towards an understanding of the attributes of simulation that enable learning in undergraduate nurse education: A grounded theory study. Nurse Education Today, 44, 8–13.

  • Bumbacher, E., Salehi, S., Wierzchula, M., & Blikstein, P. (2015). Learning environments and inquiry behaviors in science inquiry learning: How their interplay affects the development of conceptual understanding in physics. International Educational Data Mining Society.

  • Bumbacher, E., Salehi, S., Wieman, C., & Blikstein, P. (2018). Tools for science inquiry learning: Tool affordances, experimentation strategies, and conceptual understanding. Journal of Science Education and Technology, 27(3), 215–235.

  • Carless, D. (2016). Feedback as dialogue. Encyclopedia of educational philosophy and theory, 1–6.

  • Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.

  • Chamberlain, J. M., Lancaster, K., Parson, R., & Perkins, K. K. (2014). How guidance affects student engagement with an interactive simulation. Chemistry Education Research and Practice, 15(4), 628–638.

  • Cock, J., Marras, M., Giang, C., & Käser, T. (2021). Early Prediction of Conceptual Understanding in Interactive Simulations. International Educational Data Mining Society.

  • Connell, J. P. (1990). Context, self, and action: A motivational analysis of self-system processes across the lifespan. In D. Cicchetti & M. Beeghly (Eds.), The self in transition: From infancy to childhood (pp. 61–97). University of Chicago Press.

  • Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience (pp. 75–77). Harper & Row.

  • Eseryel, D., Law, V., Ifenthaler, D., Ge, X., & Miller, R. (2014). An investigation of the interrelationships between motivation, engagement, and complex problem solving in game-based learning. Journal of Educational Technology & Society, 17(1), 42–53.

  • Finkelstein, N. D., Adams, W. K., Keller, C. J., Kohl, P. B., Perkins, K. K., Podolefsky, N. S., Reid, S., & LeMaster, R. (2005). When learning about the real world is better done virtually: A study of substituting computer simulations for laboratory equipment. Physical Review Special Topics-Physics Education Research, 1(1), 010103.

  • Finnish National Board of Education (FNBE). (2015). National core curriculum for general upper secondary schools 2015. Helsinki, Finland: Finnish National Board of Education (FNBE). Retrieved from. http://www.oph.fi/saadokset_ja_ohjeet/opetussuunnitelmien_ja_tutkintojen_perusteet/lukiokoulutus/lops2016/103/0/lukion_opetussuunnitelman_perusteet_2015

  • Fratamico, L., Conati, C., Kardan, S., & Roll, I. (2017). Applying a framework for student modeling in exploratory learning environments: Comparing data representation granularity to handle environment complexity. International Journal of Artificial Intelligence in Education, 27(2), 320–352.

  • Gee, J. P. (2008). A sociocultural perspective on opportunity to learn. Assessment, equity, and opportunity to learn, 76–108.

  • Geelan, D. R., & Fan, X. (2014). Teachers using interactive simulations to scaffold inquiry instruction in physical science education. Science Teachers’ use of visual representations (pp. 249–270). Springer, Cham.

  • Hilton, M. L., & Honey, M. A. (Eds.). (2011). Learning science through computer games and simulations. National Academies Press.

  • Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107.

  • Holmes, N. G., Keep, B., & Wieman, C. E. (2020). Developing scientific decision making by structuring and supporting student agency. Physical Review Physics Education Research, 16(1), 010109.

  • Irmak, M., & Kaldaras, L. (2023). Virtual learning environments. The International Handbook of Physics Education Research: Teaching Physics, 11–1.

  • Jonassen, D. H., & Hung, W. (2006). Learning to troubleshoot: A new theory-based design architecture. Educational Psychology Review, 18(1), 77–114.

  • Kaldaras, L. (2020). Developing and validating NGSS-aligned 3D learning progression for electrical interactions in the context of 9th grade physical science curriculum (Publication No. 28088258) [Doctoral Dissertation, Michigan State University]. ProQuest Dissertation and Theses Global.

  • Kaldaras, L., Akaeze, H., & Krajcik, J. (2021a). Developing and validating Next Generation Science Standards-aligned learning progression to track three-dimensional learning of electrical interactions in high school physical science. Journal of Research in Science Teaching, 58(4), 589–618.

  • Kaldaras, L., Akaeze, H., & Krajcik, J. (2021b). A methodology for determining and validating latent factor dimensionality of complex multi-factor science constructs measuring knowledge-in-use. Educational Assessment, 26(4), 241–263.

  • Kaldaras, L., Akaeze, H. O., & Krajcik, J. (2023). Developing and validating a Next Generation Science Standards-aligned construct map for chemical bonding from the energy and force perspective. Journal of Research in Science Teaching, 1–38.

  • Kaldaras, L., & Haudek, K. C. (2022). Validation of automated scoring for learning progression-aligned next generation science standards performance assessments. Frontiers in Education, 7, 968289.

  • Kaldaras, L., & Wieman, C. (2023a). Cognitive framework for blended mathematical sensemaking in science. International Journal of STEM Education, 10(1), 18.

  • Kaldaras, L., & Wieman, C. (2023b). Instructional model for teaching blended math-science sensemaking in undergraduate science, technology, engineering, and math courses using computer simulations. Physical Review Physics Education Research, 19(2), 020136.

  • Kaldaras, L., Yoshida, N. R., & Haudek, K. C. (2022). Rubric development for AI-enabled scoring of three-dimensional constructed-response assessment aligned to NGSS learning progression. Frontiers in Education, 7, 983055.

  • Kalyuga, S. (2009). Optimizing cognitive load in instructional simulations and games. In Managing Cognitive Load in Adaptive Multimedia Learning (pp. 198–216). IGI Global.

  • Kim, M. (2021). Student agency and teacher authority in inquiry-based classrooms: cases of elementary teachers’ classroom talk. International Journal of Science and Mathematics Education, 20, 1–22.

  • Kim, M. C., Hannafin, M. J., & Bryan, L. A. (2007). Technology-enhanced inquiry tools in science education: An emerging pedagogical framework for classroom practice. Science Education, 91(6), 1010–1030.

  • Kowalski, F. V., & Kowalski, S. E. (2012, October). Enhancing curiosity using interactive simulations combined with real-time formative assessment facilitated by open-format questions on tablet computers. In 2012 Frontiers in Education Conference Proceedings (pp. 1–6). IEEE.

  • Kulgemeyer, C., & Schecker, H. (2014). Research on educational standards in German science education—towards a model of student competences. EURASIA Journal of Mathematics, Science & Technology Education, 10(4), 257–269.

  • Kuo, E., Hull, M. M., Gupta, A., & Elby, A. (2013). How students blend conceptual and formal mathematical reasoning in solving physics problems. Science Education, 97(1), 32–57.

  • Land, S. M., & Zembal-Saul, C. (2003). Scaffolding reflection and articulation of scientific explanations in a data-rich, project-based learning environment: An investigation of progress portfolio. Educational Technology Research and Development, 51(4), 65–84.

  • Lumpe, A. T., & Butler, K. (2002). The information seeking strategies of high school science students. Research in Science Education, 32(4), 549–566.

  • Makransky, G., Petersen, G. B., & Klingenberg, S. (2020). Can an immersive virtual reality simulation increase students’ interest and career aspirations in science? British Journal of Educational Technology, 51(6), 2079–2097.

  • Ministry of Education, P. R. China. (2018). Curriculum plan for senior high school [普通高中课程方案]. People’s Education Press.

  • Moore, E. B., Chamberlain, J. M., Parson, R., & Perkins, K. K. (2014). PhET interactive simulations: Transformative tools for teaching chemistry. Journal of Chemical Education, 91(8), 1191–1197.

  • National Research Council. (1999). How people learn: Bridging research and practice. National Academies Press.

  • National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8. National Academies Press.

  • National Research Council. (2012a). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. National Academies Press.

  • National Research Council. (2012b). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. National Academies Press.

  • NGSS Lead States. (2013). Next generation science standards: For states, by states. The National Academies Press.

  • Norman, D. A. (2002). The design of everyday things. Basic Books.

  • Odden, T. O. B., & Russ, R. S. (2019). Defining sensemaking: Bringing clarity to a fragmented theoretical construct. Science Education, 103(1), 187–205.

  • OECD. (2016). PISA 2015 Assessment and analytical framework: Science, reading, mathematic and financial literacy. OECD Publishing.

  • Paul, A., Podolefsky, N., & Perkins, K. (2013, January). Guiding without feeling guided: Implicit scaffolding through interactive simulation design. AIP Conference Proceedings (Vol. 1513, No. 1, pp. 302–305). American Institute of Physics.

  • Perez, S., Massey-Allard, J., Butler, D., Ives, J., Bonn, D., Yee, N., & Roll, I. (2017). Identifying productive inquiry in virtual labs using sequence mining. Artificial Intelligence in Education: 18th International Conference, AIED 2017, Wuhan, China, June 28–July 1, 2017, Proceedings 18 (pp. 287–298). Springer International Publishing.

  • Perkins, K. K., Moore, E. B., & Chasteen, S. V. (2014). Examining the use of PhET interactive simulations in US College and high school classrooms. Proceedings of the 2014 Physics Education Research Conference (pp. 207–210).

  • Perkins, K., Moore, E., Podolefsky, N., Lancaster, K., & Denison, C. (2012, February). Towards research-based strategies for using PhET simulations in middle school physical science classes. AIP Conference Proceedings (Vol. 1413, No. 1, pp. 295–298). American Institute of Physics.

  • PhET Interactive Simulations. PhET, https://phet.colorado.edu/.

  • PhET Virtual Workshop for Teachers (2023). PhET, https://phet.colorado.edu/en/teaching-resources/virtual-workshop/.

  • Piaget, J. (1929). The child’s conception of the world. Harcourt, Brace.

  • Podolefsky, N. S., Adams, W. K., Lancaster, K., & Perkins, K. K. (2010a). Characterizing complexity of computer simulations and implications for student learning. In AIP Conference Proceedings (Vol. 1289, No. 1, pp. 257–260). American Institute of Physics.

  • Podolefsky, N. S., Perkins, K. K., & Adams, W. K. (2010b). Factors promoting engaged exploration with computer simulations. Physical Review Special Topics—Physics Education Research, 6(2), 020117.

  • Podolefsky, N. S., Rehn, D., & Perkins, K. K. (2013). Affordances of play for student agency and student-centered pedagogy. In AIP Conference Proceedings (Vol. 1513, No. 1, pp. 306–309). American Institute of Physics.

  • Price, A. M., Perkins, K. K., Holmes, N. G., & Wieman, C. E. (2018). How and why do high school teachers use PhET interactive simulations. Learning, 33, 37.

  • Puntambekar, S., & Kolodner, J. L. (2005). Toward implementing distributed scaffolding: Helping students learn science from design. Journal of Research in Science Teaching, 42(2), 185–217.

  • Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., Edelson, D., & Soloway, E. (2018). A scaffolding design framework for software to support science inquiry. In The journal of the learning sciences (pp. 337–386). Psychology Press.

  • Redish, E. F. (2017). Analysing the competency of mathematical modelling in physics. In Key Competences in Physics Teaching and Learning: Selected Contributions from the International Conference GIREP EPEC 2015, Wrocław Poland, 6–10 July 2015 (pp. 25–40). Springer International Publishing.

  • Rehn, D. A., Moore, E. B., Podolefsky, N. S., & Finkelstein, N. D. (2013). Tools for high-tech tool use: A framework and heuristics for using interactive simulations. Journal of Teaching and Learning with Technology, 2(1), 31–55.

  • Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and problematizing student work. The Journal of the Learning Sciences, 13(3), 273–304.

  • Rutten, N., Van Joolingen, W. R., & Van Der Veen, J. T. (2012). The learning effects of computer simulations in science education. Computers & Education, 58(1), 136–153.

  • Salehi, S., Keil, M., Kuo, E., & Wieman, C. E. (2015). How to structure an unstructured activity: Generating physics rules from simulation or contrasting cases. Physics Education Research Conference Proceedings American Association of Physics Teachers PACS (Vol. 1, No. 40).

  • Salehi, S. (2018). Improving problem-solving through reflection (Publication No. 28114837) [Doctoral dissertation, Stanford University]. ProQuest Dissertations & Theses Global.

  • Sharma, P., & Hannafin, M. J. (2007). Scaffolding in technology-enhanced learning environments. Interactive Learning Environments, 15(1), 27–46.

  • Smetana, L. K., & Bell, R. L. (2012). Computer simulations to support science instruction and learning: A critical review of the literature. International Journal of Science Education, 34(9), 1337–1370.

  • Smith, C. L., Wiser, M., Anderson, C. W., & Krajcik, J. (2006). Implications of research on children’s learning for standards and assessment: A proposed learning progression for matter and the atomic-molecular theory. Measurement: Interdisciplinary Research and Perspectives, 4(1–2), 1–98. https://doi.org/10.1080/15366367.2006.9678570

  • Stanberry, M. L., & Payne, W. R. (2018). Active learning in undergraduate STEM education: A review of research. Research Highlights in STEM Education, 147.

  • Tissenbaum, M., & Slotta, J. (2019). Supporting classroom orchestration with real-time feedback: A role for teacher dashboards and real-time agents. International Journal of Computer-Supported Collaborative Learning, 14(3), 325–351.

  • Ustunel, H. H., & Tokel, S. T. (2018). Distributed scaffolding: Synergy in technology-enhanced learning environments. Technology, Knowledge and Learning, 23(1), 129–160.

  • Vygotsky, L. S. (1978). Mind in society. Harvard University Press.

  • Wang, K., Nair, K., & Wieman, C. (2021). Examining the links between log data and reflective problem-solving practices in an interactive task. LAK21: 11th international learning analytics and knowledge conference (pp. 525–532).

  • Wieman, C. E., Adams, W. K., & Perkins, K. K. (2008). PhET: Simulations that enhance learning. Science, 322(5902), 682–683.

  • Zhao, F., & Schuchardt, A. (2021). Development of the Sci-math Sensemaking Framework: Categorizing sensemaking of mathematical equations in science. International Journal of STEM Education, 8(1), 1–18.

Acknowledgements

We would like to thank the PhET Interactive Simulations project at the University of Colorado Boulder for making this work possible by providing openly licensed simulations.

Funding

This work has been partially funded by a grant from the Yidan Prize Foundation.

Author information

Contributions

All authors contributed equally to developing the outline for the publication. LK served as lead author, with significant contributions and feedback by all authors during manuscript preparation. LK completed the first revision. All authors contributed to the final revisions of the manuscript.

Corresponding author

Correspondence to Leonora Kaldaras.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors have no conflict of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Kaldaras, L., Wang, K.D., Nardo, J.E. et al. Employing technology-enhanced feedback and scaffolding to support the development of deep science understanding using computer simulations. IJ STEM Ed 11, 30 (2024). https://doi.org/10.1186/s40594-024-00490-7

Keywords