Characteristics of well-propagated teaching innovations in undergraduate STEM

Abstract

Background

The undergraduate science, technology, engineering, and mathematics (STEM) education community has developed a large number of innovative teaching strategies and materials, but the majority go unused by instructors. To help understand how to improve adoption of evidence-based education innovations, this study focuses on innovations that have become widely used in college-level STEM instruction. Innovations were identified via a questionnaire emailed to experts in STEM instruction. The identified innovations were validated by preparing a brief description of each, sending it to the original developers (when they could be identified) for feedback, and searching the relevant literature. Publicly available funding data were collected for each innovation. The STEM disciplines surveyed were biology, chemistry, computer science, engineering, geoscience, mathematics, and physics.

Results

The 43 innovations identified were categorized based on two criteria: level of specificity (general, recognizable, branded) and type of change (pedagogical, content, both, neither). The 21 branded innovations were analyzed in more detail. The majority (14/21) require relatively modest changes in pedagogy and no changes in content. In addition, nearly all have received at least 3 million dollars in external funding over at least 10 years.

Conclusions

This paper presents the full list of instructional innovations produced, which can be used by educational innovation developers to understand how their ideas fit within the broader landscape and to identify innovations in one discipline that may have promise for transfer. The findings regarding funding of the branded innovations have important implications for both educational innovation developers and funding agencies. In particular, the study indicates that a long-term mindset and access to long-term funding are vital for broad adoption of new teaching innovations.

Background

Within science, technology, engineering, and mathematics (STEM) disciplines, significant education research has focused on developing teaching innovations and evaluating their efficacy (National Research Council 2012). This research has produced many new instructional strategies and teaching materials that have been shown to improve a variety of student-learning outcomes. However, most of these strategies are not widely used by STEM instructors (Austin 2011; Fairweather 2008; Seymour 2001).

In contrast, some innovative teaching strategies have become well known and widely used in STEM education. To begin to understand why some gain traction while others do not, we ask: are there factors and/or features common to instructional strategies that are widely used? If so, what are the implications for developers of educational innovations?

There have been many calls for reform in STEM education at the college level (Brewer & Smith 2009; The White House 2010) and, as noted above, many innovative teaching strategies have been developed and their efficacy well supported. The limited use of these strategies suggests that we lack a coherent framework for implementing widespread reform in college STEM teaching. This has been an active research area in recent years (D’Avanzo 2013; Gannaway et al. 2011; Kezar 2011; Litzinger et al. 2011; Mckenna, Froyd, & Litzinger, 2014). Most of the work in this area is focused on understanding why current practices typically fail. Here, we take the opposite approach and seek to build knowledge by studying the few educational innovations that have made it to significant levels of use.

There are several bodies of prior work that have influenced our conceptualization of this project and the results presented in this paper. An important way that researchers in many fields think about the spread of innovations is Rogers’ (2003) diffusion of innovations model. For example, this model has been used to examine awareness and implementation of innovations in physics and engineering education (Borrego et al. 2010; Henderson et al. 2012). While diffusion of innovations provides a useful way to conceptualize how and why innovations spread, it is not sufficiently detailed for creating a dissemination plan. Several research groups have seen the need for a more detailed framework for dissemination of education innovations. In Australia, a detailed dissemination framework was evaluated for effectiveness among grant recipients, but the framework was found to be insufficient to promote understanding of dissemination planning (Gannaway et al. 2011): developers used the language the framework provided but not its emphasis on planning, leading to revisions of the framework. Other researchers have conducted literature reviews and studies with grant recipients to explore what leads to successful dissemination (Bourrie et al. 2014; Hazen et al. 2012). These studies find that the interplay between factors such as the innovation itself and potential adopters is complex, and they confirm that the process is consistent with Rogers’ (2003) ideas. For example, Bourrie et al. (2014), in a Delphi study of NSF grant recipients, found that multiple factors lead to an innovation becoming successful, the main one being relative advantage. Still needed are specific factors, identified in the context of STEM education innovations, that can inform practice.

To help identify these factors, we identified a set of educational innovations that are well known and widely used, along with basic information such as how long they have existed and been funded. We refer to these innovations as well-propagated instructional strategies and materials (WePISMs). A small number of these WePISMs have been examined in depth to understand the practices and processes that led to their widespread adoption (Khatri et al., 2016; Khatri, Henderson, Cole, & Froyd, 2014, 2015). The focus of this paper is to analyze the larger set (see Additional file 1 for the full set).

Methods

This study was motivated by a desire to understand the current landscape of well-known and widely used undergraduate teaching innovations within STEM. We used qualitative methods suited to developing an emergent understanding of this previously unexplored landscape (Creswell 2007).

This study was carried out in several stages: initial data collection, validation of the results through additional data collection, and analysis using a new categorization scheme (Table 1).

Table 1 Overview of the three study phases

Initial data collection

An important goal of this study was to identify WePISMs. We began by surveying (via email) experts in research-based undergraduate teaching in the seven disciplines studied (biology, chemistry, computer science, engineering, geoscience, mathematics, physics). We identified experts through membership on national committees (e.g., the NRC DBER committee), professional society leadership, and our professional networks of individuals who serve as journal editors and opinion leaders. We began with a list of at least ten experts from each discipline (except computer science, where we identified nine; see Table 2). If the minimum of five responses was not achieved after contacting these initial experts, we asked the experts who did respond within that discipline to recommend additional experts to contact.

Table 2 Number of experts contacted in each discipline

Each expert was sent an email that briefly introduced the project and asked the expert to respond to the following prompt:

Please respond to this email and identify the five or so ‘new’ learning materials or teaching strategies that you feel have been most successfully propagated in undergraduate [DISCIPLINE]. It will be very helpful if you could also include a short explanation of why each was chosen.

In order to increase the response rate, we sent up to two reminder emails to non-responders. In these follow-ups, we also made a point to mention the names of team members who might be familiar to survey recipients (e.g., mentioning the name of our chemistry team member when emailing the chemistry experts).

Most experts responded via email, while two preferred to set up a phone call. Phone calls were not recorded, but the innovations named and the basic rationale for including them were written down during the call.

Validation of the list of WePISMs

All suggested innovations were included in the initial list, which was then validated in several steps: applying inclusion criteria (discussed below), member checking with the expert responders, and presenting the list for feedback to 70 additional education researchers in various STEM disciplines.

To help determine the extent to which each innovation on the list was widely propagated, we used Google Scholar to identify publications about each innovation. In addition to the expert recommendations and literature search, we held focus group discussions with National Science Foundation (NSF) Transforming Undergraduate Education in STEM (TUES) program directors (Khatri, Henderson, Cole, & Froyd, 2013). The primary motive behind the focus groups was to understand program directors’ views on propagating educational innovations in general terms, but the program directors frequently brought up example innovations from their disciplines, and discussion of those innovations was considered additional evidence of propagation when checking the list. With this information, the list was winnowed using the following inclusion criteria for a well-propagated educational innovation:

  1. Used primarily in college settings. Some items suggested by the experts were designed for and primarily used in K-12 settings, which was not our area of focus.

  2. Used primarily as a teaching tool. Some items suggested by the experts, such as concept inventories, are more frequently used for research and evaluation than for instruction. Although we realize that there is no clear line, we decided to exclude items used primarily for research and evaluation from our list.

  3. Evidence of significant use by others. In addition to meeting the first two criteria, all items on the list required evidence of significant use, which we operationally define as being used by at least 100 institutions or being highly visible in the field. We collected the following sources as evidence of significant use: (i) mention in significant reports or papers authored by non-developers, such as the NRC DBER report (National Research Council 2012); (ii) literature written by innovation adopters who reported their experience in an education journal; (iii) inclusion in a well-attended workshop program, such as the Science Education Resource Center (SERC) On the Cutting Edge workshops (Gosselin et al. 2013); (iv) existence of a conference devoted solely to the innovation (e.g., Dreyfuss 2013); (v) frequency of mentions by experts; and (vi) Internet searches for examples of implementations and/or data provided by the innovation developers. Occurrences in more than one of these sources were required for an innovation to be judged as significantly used.

As an example of applying the criteria, Workshop Physics (see Note 1) is (1) used in college settings and (2) used primarily as a teaching tool. When we apply criterion (3) (significant use), it is not clear that Workshop Physics has ever been used particularly widely. However, it is clearly highly visible in the field and has contributed significantly to advances in how the physics community thinks about undergraduate instruction (Laws 1991). Therefore, Workshop Physics is included on the list.
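To make the winnowing procedure concrete, the sketch below encodes the three criteria as a simple filter. This is our illustration only: the field names and example records are hypothetical, and the actual judgments, including the "highly visible in the field" allowance applied to Workshop Physics, were made qualitatively by the research team.

```python
# Illustrative encoding of the three inclusion criteria as a filter.
# Field names and example records are hypothetical; real judgments
# were made qualitatively by the research team.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    college_level: bool                  # criterion 1
    teaching_tool: bool                  # criterion 2
    evidence_sources: set = field(default_factory=set)  # criterion 3, sources (i)-(vi)
    highly_visible: bool = False         # qualitative alternative for criterion 3

def is_wepism(c: Candidate) -> bool:
    # Criterion 3 requires evidence from more than one of the six
    # source types, or high visibility in the field.
    significant_use = len(c.evidence_sources) >= 2 or c.highly_visible
    return c.college_level and c.teaching_tool and significant_use

candidates = [
    Candidate("Peer Instruction", True, True,
              {"NRC DBER report", "adopter literature", "expert mentions"}),
    Candidate("Workshop Physics", True, True, set(), highly_visible=True),
    Candidate("A concept inventory", True, False, {"adopter literature"}),
]
print([c.name for c in candidates if is_wepism(c)])
# -> ['Peer Instruction', 'Workshop Physics']
```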

Applying these criteria yielded version two of the list, which was sent back to the participants for member checking. Each participant received a list containing only the version-two innovations specific to their discipline. Participants sometimes disagreed with items on this list and suggested additional candidates for inclusion, giving opinions or evidence to support their suggestions. We examined these additional suggestions and applied the same criteria to determine whether they should be added to the WePISM list. We also used participant responses critiquing the spread of some innovations to remove items from the list, yielding version three.

We used several available opportunities to validate version three of the list. The largest was the TUES principal investigator (PI) meeting in January 2013, where the list was presented both in a workshop with 70 participants and in a poster session (Henderson & Cole, 2013). These meeting participants had all received NSF education grant funding and were knowledgeable in their disciplines. We received feedback on the innovations included and suggestions for additional innovations. We also sought and received feedback on the list from our project advisory board. After carefully considering this feedback and the other available evidence, we finalized the list in Spring 2015. The final list contained 43 innovations.

Analysis of the WePISMs

Once the 43 WePISMs were identified, additional analysis was needed to develop a better understanding of them. This involved collecting additional information about each WePISM and developing a categorization scheme to highlight important WePISM characteristics.

The first step in the analysis was gathering additional data to develop a preliminary understanding of each innovation, many of which were outside our own fields. With the aid of digital libraries, literature, project abstracts, and project websites, we wrote a brief (~100-word) description of each. These descriptions were sent to the developers of the WePISMs for review and approval whenever a project leader for the innovation could be identified and contacted.

Categorization scheme for educational innovations

We searched the literature for an existing categorization scheme for educational innovations to begin characterizing the WePISMs. While we found some published schemes, none was suitable for our purposes. The most promising was developed by Ruiz-Primo et al. (2011), who identified four characteristics of educational innovations: conceptually oriented tasks, collaborative learning activities, technology, and inquiry-based projects. They found that many of the 868 papers they analyzed combined more than one of these types, citing Peer Instruction (see Note 2) (Mazur, 1996) as an example that combines technology, collaborative learning, and conceptual tasks. We found, though, that categorizing our list in terms of these characteristics was not always possible and did not lead to meaningful groupings. A major problem was that the scheme uncomfortably grouped all the "technology" innovations together, even though BlueJ (see Note 3) (Kölling et al. 2003), PhET Simulations (see Note 4) (Wieman et al. 2008), and online homework are similar only in that they are all accessed on a computer. The differences in the intentions and uses of these innovations outweigh this similarity.

Therefore, we needed to develop a new categorization scheme that would help us better understand this set of instructional strategies with respect to their successful propagation. A categorization scheme should be both replicable (different researchers classify items into the same categories) and theoretically meaningful (creating a basis for new insights). After many iterations and much discussion, we arrived at the categorization scheme presented in Table 3, which is based on whether use of the innovation requires a change in content, pedagogy, both, or neither. Each innovation on the list was coded separately by all six authors and discussed to reach agreement on its categorization.

Table 3 The authors’ categorization scheme of types of educational innovations

This categorization scheme is discussed in more detail elsewhere (Stanford et al., 2016).

In addition to placing each of the 43 innovations into one of the four categories in Table 3, we needed a second categorization scheme to further differentiate the innovations. Some innovations on the final list are large movements and big ideas in STEM education (e.g., use of metacognition) or umbrella terms covering many other innovations (e.g., active learning), while others are specific, with well-recognized proper names (e.g., the PhET Interactive Simulations). A categorization scheme was needed to differentiate the innovations along this as yet undefined dimension. We used the following scheme: general (the innovation is an idea with various types of implementation), recognizable (the innovation is clearly defined but has no central leadership), and branded (the innovation is clearly defined and has central leadership) (Table 4). This second scheme proved useful when we studied factors that influenced propagation of the innovations (see the following section).

Table 4 Level of specificity of educational innovations
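As a bookkeeping sketch, the two coding dimensions could be recorded as below. The enum values paraphrase Tables 3 and 4, and the example codings are illustrative rather than the study's official assignments.

```python
# Sketch of the two-dimensional coding applied to each innovation.
# Enum values paraphrase Tables 3 and 4; example codings are illustrative.
from enum import Enum

class ChangeType(Enum):
    PEDAGOGY = "change in pedagogy only"
    CONTENT = "change in content only"
    BOTH = "change in both pedagogy and content"
    NEITHER = "no change in pedagogy or content"

class Specificity(Enum):
    GENERAL = "idea with various types of implementation"
    RECOGNIZABLE = "clearly defined, no central leadership"
    BRANDED = "clearly defined, with central leadership"

# Each of the 43 WePISMs receives one code on each dimension.
codings = {
    "Active learning": (ChangeType.PEDAGOGY, Specificity.GENERAL),
    "Peer Instruction": (ChangeType.PEDAGOGY, Specificity.BRANDED),
    "Objects-first learning": (ChangeType.CONTENT, Specificity.RECOGNIZABLE),
}
for name, (change, spec) in codings.items():
    print(f"{name}: {change.value}; {spec.value}")
```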

Using the two categorization schemes, the six authors classified the list of 43 WePISMs. The team then discussed their individual ratings and disagreements were resolved to reach consensus in coding.

Data collection of branded WePISMs

We sought to identify the number of funding sources, years funded, and total amount of funding for each WePISM. We found that this information could be identified for most of the branded innovations but not for the general or recognizable innovations. Thus, this part of the study was conducted only with the 21 branded innovations. We identified the PIs of the branded innovations through project websites and literature, and we used the funding agencies’ award search tools and the innovations’ websites (when appropriate) to gather funding information. We note that not all funding agencies make their funding amounts public; as a result, there may be funding for many innovations beyond what was listed by project websites. The amounts presented in the results section are therefore likely to be low estimates for some innovations.

We sent the ~100-word project description to the original project PI (or, if unavailable, a prominent champion of the innovation) so they could check our understanding of the essence of their innovation (example descriptions in Table 5). We sent them the entire list of branded innovations to place their innovation in context, as we anticipated that without the other short descriptions for comparison, they would consider our description too short. In some cases, developers wrote entirely new descriptions; others gave a simple "Okay" to what we sent.

Table 5 Example descriptions from branded innovations list

Results

We present the results of this study in two parts: (1) the final list of all 43 WePISMs and (2) a more detailed analysis of the 21 branded innovations for which additional publicly available data existed. The attributes of WePISMs discussed here are based on the email questionnaire and member-checking results. For example, identification of the disciplines mentioning an innovation was based only on the email surveys of experts, not on additional literature searches, although examples of WePISMs crossing over into other disciplines can be found in the literature. We took this approach because the items on the list were validated with external sources; while use by other disciplines could be evidence of propagation, it does not imply widespread use in those disciplines. External information (literature, digital libraries) was used to inform coding decisions regarding project type and level of specificity.

Characteristics of the WePISMs overall

The number of WePISMs identified per discipline ranged from 6 to 16 (Fig. 1): geoscience and physics had more, while chemistry, engineering, and computer science had fewer. In addition to potential bias from the number of experts initially contacted in each discipline, disciplinary differences in the number of WePISMs were likely influenced by extraneous factors. For example, the well-documented history of physics education research (Cummings 2011) and the centralized resources in geoscience (SERC) may have made it easier for experts to list innovations that we could confirm were indeed well propagated.

Figure 1. Number of WePISMs reported by surveyed experts in each discipline. Some innovations were mentioned by experts in multiple disciplines (e.g., SCALE-UP, see Note 5).

Further examining the breakdown of WePISMs by level of specificity, there are notable differences among several disciplines (Fig. 1). Most disciplines share an even mix of general, recognizable, and branded innovations. Biology and geoscience experts identified recognizable WePISMs most frequently, but experts in all disciplines identified some of these innovations. Physics experts identified the most branded innovations, followed by mathematics; engineering experts identified no branded innovations. A chi-square test (comparing expected and actual counts) showed significant differences among physics, engineering, and geoscience in the mix of branded, recognizable, and general innovations (Greenwood and Nikulin 1996): physics has more branded innovations, geoscience more recognizable, and engineering more general innovations.
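The sketch below illustrates the kind of chi-square comparison reported here. The contingency table is hypothetical, since the per-discipline counts come from Fig. 1 and are not reproduced in the text; only the qualitative pattern (physics branded-heavy, geoscience recognizable-heavy, engineering general-heavy with no branded innovations) mirrors the result above.

```python
# Sketch of the chi-square comparison; counts are HYPOTHETICAL and
# only echo the qualitative pattern described in the text.
from scipy.stats import chi2_contingency

# Rows: physics, geoscience, engineering.
# Columns: branded, recognizable, general.
observed = [
    [7, 4, 3],   # physics (illustrative counts)
    [2, 9, 4],   # geoscience
    [0, 2, 4],   # engineering
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A small p-value indicates the mix of specificity levels differs
# across the three disciplines, consistent with the reported result.
```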

Figure 2. Type and level of specificity of WePISMs: breakdown of the 43 WePISMs by the categorization scheme described in Table 3, with each column further divided by level of specificity (Table 4).

We also examined WePISMs by type of change, as shown in Fig. 2. Most WePISMs involve only pedagogical changes (60%). These are followed by innovations that require no change in pedagogy or content (28%) and innovations that require a change in both pedagogy and content (9%). Only one innovation required a change in content alone (objects-first learning in computer science).

Branded WePISMs

Using publicly available data about funding, we can offer more detail about the branded WePISMs. We discuss the length of time the branded WePISMs were funded, the amount of funding, and the number of funding sources.

Figure 3 shows box-and-whisker plots of the number of years of funding and the amount of funding for the branded WePISMs. The median duration was 15 years and the median amount was 3.1 million dollars. The boxes represent the second and third quartiles.

Figure 3. Years and amount of funding for the branded WePISMs.

The branded innovations all received funding for a period of at least 8 years, with most receiving continuous or nearly continuous funding for over 10 years (Fig. 3).

The amount of funding covers a wide range, with the lower end at a half million dollars and a median of 3.1 million dollars (Fig. 3). It is important to note that these are low estimates since most innovations have some funding sources that do not disclose funding amounts. For example, unlike public funding agencies such as the National Science Foundation, companies and institutions backing an innovation often do not publically report funding amounts.
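For readers who want to reproduce this style of summary, the sketch below computes the medians and draws Fig. 3-style box plots. The per-innovation values are made up; they are shaped only to match the reported medians (15 years, $3.1M) and minimums (8 years, $0.5M), not drawn from the study's data.

```python
# Fig. 3-style summary with HYPOTHETICAL per-innovation values,
# shaped only to match the reported medians and minimums.
import numpy as np
import matplotlib.pyplot as plt

years_funded = np.array([8, 10, 12, 14, 15, 15, 16, 18, 20, 22, 25])
funding_musd = np.array([0.5, 1.2, 2.0, 2.8, 3.1, 3.1, 3.5, 4.0, 6.0, 9.0, 12.0])

print("median years funded:", np.median(years_funded))   # 15.0
print("median funding ($M):", np.median(funding_musd))   # 3.1

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 4))
ax1.boxplot(years_funded)          # the box spans the 2nd and 3rd quartiles
ax1.set_ylabel("Years of funding")
ax2.boxplot(funding_musd)
ax2.set_ylabel("Funding (millions of USD)")
plt.tight_layout()
plt.show()
```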

Figure 4 shows a box-and-whisker plot of the number of funding sources for the branded WePISMs. The median number of sources was 3. The boxes represent the second and third quartiles.

Figure 4. Number of funding sources for the branded WePISMs.

Many of the innovations had more than one funding source (Fig. 4). Most received funding from two to five sources. Nearly all received funding from federal sources (mainly the National Science Foundation). Additional funding sources were often the institution where the innovation was developed, private foundations, and companies. Notably, computer program innovations such as the PhET Simulations, Geogebra (see Note 6), and Alice (see Note 7) received funding from a large number of sources (20 or more).

Discussion

Disciplinary differences

Many of the WePISMs originating in physics are branded, in contrast to geoscience and engineering, which have more recognizable and general innovations. There are several possible reasons for this difference. Physics education research is one of the older STEM education fields, and some of its well-propagated innovations are well documented as part of the history of the field (Cummings 2011; National Research Council 2012). The difference could also reflect disciplinary structure. Engineering education encompasses many individual engineering disciplines (mechanical, electrical, civil, etc.), and educational innovations adopted in these disciplines may differ from one another as much as innovations in physics differ from those in geoscience; thus, engineering may rely on umbrella ideas more heavily than physics. Geoscience instruction is more place-based, so instructional strategies developed for one setting may not transfer to others, although the overall template for a change might.

Content innovations

In this study, only one innovation focused on content change could be confirmed as well propagated (objects-first learning in computer science). Several more were suggested, such as the Matter and Interactions course and textbook (Chabay and Sherwood 1999), but these candidates could not be verified as well propagated based on the criteria and data available for this study. The predominance of pedagogy may reflect a lack of development of content innovations, or it may be that content innovations are developed but not propagated. Possible barriers to propagation of content-based innovations include disciplinary norms and expectations of content coverage at the departmental and interdepartmental levels. Content innovations may require extensive cooperation between individuals and departments and thus be slow to implement, while innovations focused on pedagogy may be more easily adopted within existing institutional structures. This is an area that requires more investigation. Although we tend to think of pedagogy as firmly entrenched in higher education, our findings suggest that content may be even more so. If content-based innovations are, in fact, less likely to spread, it is important for education researchers to ask about the desirability of this state of affairs.

Funding implications

The findings regarding funding of the branded innovations have implications for educational developers. First, based on this study, characteristics of innovations likely to be broadly adopted can be identified at the proposal review stage. This study found that innovations requiring content changes are unlikely to propagate widely using existing strategies. Therefore, if the goal of a project is broad adoption, then projects expecting significant content change should either propose significantly different propagation strategies or not be undertaken. Second, branded, broadly adopted innovations received significant funding over a minimum of 8 years. For projects aiming for broad propagation and expecting pedagogical change, propagation plans should be developed with long time horizons. Educational development projects may have goals other than broad adoption within a 10-year time horizon; for example, projects may be funded to stimulate consideration of a variety of very different content in some established courses. If this is the case, then the goals of these projects should be clear to both developers and any organizations funding them.

These findings have implications for modifying existing funding structures. A typical grant for an education project lasts 3 or 4 years, so getting to the 8–10 years of funding that we found as a minimum for successful propagation means pursuing multiple grants. Developers often think that publishing and presenting results of the work at the end of a 3-year grant will mean the innovation reaches others. However, the implication here is that in addition to having a good idea, developers need to be willing and able to spend a decade or more working on an innovation and pursuing funding opportunities in order to develop something that can be well-propagated. As a result, funding agencies may wish to consider extension funding mechanisms for educational innovations that have demonstrated progress on developing, implementing, and evaluating propagation plans during the initial 3–4-year grant period.

Future research

While this study focused on innovations that were well propagated, another avenue is to study innovations that were funded but did not reach significant use. One study has done something similar by analyzing the propagation plans of a set of funded proposals and examining outcomes several years later (Stanford et al., in press). A comparative analysis of well-propagated and less-well-propagated funded innovations could further illuminate factors related to propagation.

Conclusions

The purpose of this article was to describe the characteristics of instructional strategies and materials that have spread well within undergraduate STEM education and to consider some of the factors associated with their propagation. We refer to these strategies and materials as WePISMs and identified 43 of them, with multiple examples in each STEM discipline surveyed. Across all 43 WePISMs, most disciplines had similar mixes of general, recognizable, and branded innovations. However, engineering, geoscience, and physics differed significantly from each other: engineering had more general, geoscience more recognizable, and physics more branded innovations. Overall, WePISMs largely represent changes to pedagogy rather than changes to content, and the branded WePISMs share significant levels of external funding (median US$3.1 million) over an extended period (median 15 years).

We hope these findings, the list and description of WePISMs, and the new vocabulary introduced in this paper to discuss educational innovations, will help developers think more explicitly about the type of change they wish to create and a propagation plan to support their goals.

Notes

  1. Workshop Physics: Instructional format in which the traditional lectures and weekly laboratory sessions of a calculus-based introductory physics course are replaced with inquiry-oriented activities and occasional demonstrations.

  2. Peer Instruction: Lecture-based strategy in which the instructor intersperses brief presentations with conceptual questions (i.e., ConcepTests) and asks students to respond. After responding, students discuss their answers in pairs and then respond again.

  3. BlueJ: Introductory programming environment based on objects-first teaching, intended for introductory Java instruction.

  4. PhET Interactive Simulations: Over 125 free online and downloadable simulations targeting a large number of physics and astronomy concepts (with more recently added simulations in chemistry, geoscience, biology, and mathematics).

  5. SCALE-UP: Instructional format in which students work in small cooperative groups on activities, many of which are hands-on.

  6. Geogebra: Interactive software that joins geometry with algebra and calculus: rather than just showing and manipulating shapes, it links shapes with algebraic expressions and spreadsheets in different views.

  7. Alice: Introductory programming environment for object-oriented programming.

Abbreviations

STEM: Science, technology, engineering, mathematics

NSF: National Science Foundation

TUES: Transforming Undergraduate Education in STEM

PI: Principal investigator

WePISM: Well-propagated instructional strategy or material

SERC: Science Education Resource Center

References

  • Austin, A. E. (2011). Promoting evidence-based change in undergraduate science education. Fourth committee meeting on status, contributions, and future directions of discipline-based education research, 1–25.

  • Borrego, M., Froyd, J., & Hall, T. S. (2010). Diffusion of engineering education innovations: a survey of awareness and adoption rates in U.S. engineering departments. Journal of Engineering Education, 99(3), 185–207.

  • Bourrie, D. M., Cegielski, C. G., Jones-Farmer, L. A., & Sankar, C. S. (2014). Identifying Characteristics of Dissemination Success Using an Expert Panel. Decision Sciences Journal of Innovative Education, 12(4), 357–380. http://doi.org/10.1111/dsji.12049.

  • Brewer, C. A., & Smith, D., eds. (2009). Vision and Change in Undergraduate Biology Education: A Call to Action. AAAS. Washington, D.C. Retrieved from http://visionandchange.org/files/2013/11/aaas-VISchange-web1113.pdf.

  • Chabay, R. W., & Sherwood, B. A. (1999). Bringing atoms into first-year physics. American Journal of Physics, 67(12), 1045. http://doi.org/10.1119/1.19180.

  • Creswell, J. W. (2007). Qualitative Inquiry and Research Design: Choosing among Five Traditions. Thousand Oaks, CA: SAGE Publications.

  • Cummings, K. (2011). A developmental history of physics education research. National Academies of Science.

  • D’Avanzo, C. (2013). Post-vision and change: do we know how to change? CBE Life Sciences Education, 12(3), 373–82. http://doi.org/10.1187/cbe.13-01-0010.

  • Dreyfuss, A. E. (2013). A history of peer-led team learning—1990-2012. In Peer-Led Team Learning International Society (pp. 1–5).

  • Fairweather, J. (2008). Linking Evidence and Promising Practices in Science, Technology, Engineering, and Mathematics (STEM) Undergraduate Education.

  • Gannaway, D., Hinton, T., Berry, B., & Moore, K. (2011). A review of the dissemination strategies used by projects funded by the ALTC Grants Scheme. Sydney: Australian Teaching and Learning Council.

  • Greenwood, P. E., & Nikulin, M. (1996). A guide to chi-squared testing. John Wiley & Sons: New York.

  • Gosselin, D. C., Manduca, C., Bralower, T., & Mogk, D. (2013). Transforming the Teaching of Geoscience and Sustainability. Eos, Transactions American Geophysical Union, 94(25), 221–222. http://doi.org/10.1002/2013EO250002.

  • Hazen, B. T., Wu, Y., Sankar, C. S., & Jones-Farmer, L. A. (2012). A proposed framework for educational innovation dissemination. Journal of Educational Technology Systems, 40(3), 301–321. http://doi.org/10.2190/ET.40.3.f.

  • Henderson, C., Dancy, M., & Niewiadomska-Bugaj, M. (2012). Use of research-based instructional strategies in introductory physics: where do faculty leave the innovation-decision process? Physical Review Special Topics - Physics Education Research, 8(2), 20104. http://doi.org/10.1103/PhysRevSTPER.8.020104.

  • Henderson, C. & Cole, R. (2013). Propagating Educational Innovations to have an Impact on Faculty Practice. Workshop A2. Workshop presented at the NSF-TUES PI meeting, Washington, DC.

  • Kezar, A. (2011). What is the best way to achieve broader reach of improved practices in higher education? Innovative Higher Education, 36(4), 235–247. http://doi.org/10.1007/s10755-011-9174-z.

  • Khatri, R., Henderson, C., Cole, R., & Froyd, J. (2013). Successful propagation of educational innovations: Viewpoints from principal investigators and program directors. In Proceedings of the Physics Education Research Conference (Vol. 218, pp. 218–221). http://doi.org/10.1063/1.4789691.

  • Khatri, R., Henderson, C., Cole, R., & Froyd, J. (2014). Over One Hundred Million Simulations Delivered: A Case Study of the PhET Interactive Simulations. In Proceedings of the Physics Education Research Conference (pp. 205–208). http://doi.org/10.1119/perc.2013.pr.039.

  • Khatri, R., Henderson, C. R., Cole, R., & Froyd, J. E. (2015). Learning About Educational Change Strategies: A Study of the Successful Propagation of Peer Instruction. In Proceedings of the 2014 Physics Education Research Conference (pp. 131–134). http://doi.org/10.1119/perc.2014.pr.029.

  • Khatri, R., Henderson, C., Cole, R., Froyd, J. E., Friedrichsen, D., & Stanford, C. (2016). Designing for sustained adoption: A model of developing educational innovations for successful propagation. Physical Review Physics Education Research, 12(1), 10112. http://doi.org/10.1103/PhysRevPhysEducRes.12.010112.

  • Kölling, M., Quig, B., Patterson, A., & Rosenberg, J. (2003). The BlueJ System and its pedagogy. Computer Science Education, 13(4), 249–268. http://doi.org/10.1076/csed.13.4.249.17496.

  • Laws, P. W. (1991). Calculus-based physics without lectures. Physics Today, 44(12), 24–31. http://doi.org/10.1063/1.881276.

  • Litzinger, T. A., Zappe, S., Borrego, M. J., Froyd, J., Newstetter, W., & Tonso, K. (2011). Writing effective evaluation and dissemination/diffusion plans. In ASEE Annual Conference and Exposition. Vancouver, Canada.

  • Mazur, E. (1996). Peer Instruction: A User’s Manual. Upper Saddle River, NJ: Prentice Hall.

  • Mckenna, A. F., Froyd, J., & Litzinger, T. (2014). The complexities of transforming engineering higher education: Preparing for next steps. Journal of Engineering Education, 103(2), 188–192. http://doi.org/10.1002/jee.20039.

  • National Research Council. (2012). Discipline-Based Education Research: Understanding and improving learning in undergraduate science and engineering. S. R. Singer, N. R. Nielsen, & H. A. Schweingruber (Eds.). Washington, D.C.: The National Academies Press.

  • Ruiz-Primo, M. A., Briggs, D., Iverson, H., Talbot, R., & Shepard, L. A. (2011). Impact of Undergraduate Science Course Innovations on Learning. Science, 331(6022), 1269–1270. http://doi.org/10.1126/science.1198976.

  • Seymour, E. (2001). Tracking the processes of change in US undergraduate education in science, mathematics, engineering, and technology. Science Education, 86(1), 79–105. doi:10.1002/sce.1044.

  • Stanford, C., Cole, R., Froyd, J. E., Henderson, C. R., Friedrichsen, D. M., & Khatri, R. (In Press). Analysis of propagation plans of NSF-funded education development projects. Journal of Science Education and Technology.

  • Stanford, C., Cole, R., Froyd, J., Friedrichsen, D., Khatri, R., & Henderson, C. (2016). Supporting sustained adoption of education innovations: The Designing for Sustained Adoption Assessment Instrument. International Journal of STEM Education, 3(1), 1–13. http://doi.org/10.1186/s40594-016-0034-3.

  • The White House. (2010). President Obama Expands “Educate to Innovate” Campaign for Excellence in Science, Technology, Engineering, and Mathematics (STEM) Education. Office of the Press Secretary.

  • Wieman, C. E., Adams, W. K., & Perkins, K. K. (2008). PhET: Simulations that enhance learning. Science, 322(5902), 682–683.


Funding

This work was supported by the National Science Foundation under grants #1122446, #1122416, and #1236926. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Authors’ contributions

The authors collaborated on the design and execution of the study, including data collection and validation. RK took primary responsibility for data analysis, collecting additional publicly available data for each innovation, and writing the article. The remaining authors assisted with the development of the categorization schemes and with data analysis. All authors read, edited, and approved the manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Raina Khatri.

Additional file

Additional file 1:

Full set of Well-Propagated Strategies and Materials. (DOCX 34 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Khatri, R., Henderson, C., Cole, R. et al. Characteristics of well-propagated teaching innovations in undergraduate STEM. IJ STEM Ed 4, 2 (2017). https://doi.org/10.1186/s40594-017-0056-5
