Supporting sustained adoption of education innovations: The Designing for Sustained Adoption Assessment Instrument

Abstract

Background

Every year, significant effort and resources are expended around the world to develop innovative instructional strategies and materials to improve undergraduate Science, Technology, Engineering, and Mathematics education. Despite convincing evidence of efficacy with respect to student learning, most of these innovations struggle to achieve widespread use. To help developers improve their propagation plans and to encourage sustained adoption, we have developed an assessment instrument that supports development, analysis, evaluation, and refinement of propagation plans.

Results

Based on our synthesis of the literature, our analysis of successfully propagated innovations, and our analysis of a subset of funded NSF CCLI proposals, we argue that a primary reason for the lack of adoption is that developers focus their efforts on dissemination (spreading the word) instead of propagation (promoting successful adoption). To help developers focus on development of more effective propagation plans, the Designing for Sustained Adoption Assessment Instrument (DSAAI) is based on three primary bodies of literature: (1) change theory, (2) instructional systems, and (3) effective propagation strategies. The assessment instrument was designed in the form of a rubric to help education developers identify strengths and areas for improvement of their propagation plans. Based on extensive testing across several different groups of users, the DSAAI is divided into four main sections: identification of product type, features of target instructional strategies and/or materials, a propagation activities checklist, and aspects of the propagation plans that influence likelihood of successful propagation.

Conclusions

The instrument has proven useful for a variety of audiences to evaluate and improve proposals and current research projects. Education developers who provided feedback during the development of this assessment instrument found it useful because it helped them not only evaluate their propagation plans but also become aware of other strategies they could use to help develop and disseminate products of current projects, as well as strategies for supporting future adopters. Grant writing consultants can use it to provide feedback to their clients, and funding agencies can use parts of the DSAAI framework to evaluate grant proposals.

Background

Every year, many educational innovation development projects are funded to improve undergraduate Science, Technology, Engineering, and Mathematics (STEM) education. Many of these projects have produced instructional strategies and materials that have been extensively evaluated and shown to be effective with respect to student learning. However, there is concern that proven instructional strategies and materials (products of research and development projects) are not being widely adopted (National Research Council 2012). Currently, almost all educational innovations struggle to propagate because project teams tend to focus primarily on telling people about their product (dissemination) (Henderson et al. 2011a; National Research Council 2012); while necessary, dissemination is only one step of a process that leads to successful propagation of educational innovations.

In addition to an almost exclusive focus on dissemination, there are other reasons for failure to propagate. First, educational developers tend to rely on the belief that “good ideas spread naturally” (Seymour 2002; Henderson et al. 2011a). Second, most education developers develop their products without feedback from potential adopters. Third, once they have developed and tested the educational strategy or material, developers disseminate their work via traditional academic means, such as journal papers, conference presentations, and websites (Tront et al. 2011; McMartin et al. 2012). These passive dissemination methods may raise awareness but are not sufficient to promote widespread and sustained change in instructional practices (Borrego et al. 2010; Lund and Stains 2015). Failure to propagate stems, in part, from developers treating dissemination and propagation as synonymous rather than considering the varied factors that influence adoption of instructional practices.

In this paper, we distinguish between the terms dissemination and propagation because although they have different meanings, education developers often use them interchangeably. Dissemination is about spreading ideas: getting the word out to potential adopters about trying an innovation, e.g., publishing a paper or posting resources on a website. Propagation, on the other hand, has occurred only when a new teaching strategy is used successfully by non-developing instructors. Propagation requires developing an effective innovation (product) compatible with the needs, interests, and context of potential users; getting the word out to potential users and motivating them to try the innovation; and developing mechanisms to support users so that they are successful in implementation. One way to frame the difference is that propagation of an educational innovation is the goal and dissemination is one part of reaching that goal.

To propagate educational innovations, that is, to promote broader, sustained adoption, the National Science Foundation (NSF) funded the authors (Henderson et al. 2011b) to create resources that educational developers can use to develop and implement effective propagation plans. One resource is the Designing for Sustained Adoption Assessment Instrument (DSAAI), a tool to characterize and assess a propagation plan. In this paper, we describe development and features of the instrument.

Why develop an instrument to analyze propagation plans?

Much is known about effective propagation that would help project teams develop innovations that are more likely to propagate widely. However, this knowledge resides in many different journals representing a wide variety of disciplines (Henderson et al. 2011a; National Research Council 2012). There are considerable knowledge bases on both approaches to promoting adoption of innovations (Rogers 2003; Wejnert 2002; Strang and Soule 1998) and organizational change (Weick and Quinn 1999; Kotter 1996; Van de Ven and Poole 1995). Resources are available that describe effective dissemination (Froyd 2001; Litzinger et al. 2011) and that provide organizational frameworks related to change in STEM education (Henderson et al. 2011a; Borrego and Henderson 2014; Hinton et al. 2011). While all of these resources exist, educational developers who want to apply these frameworks must synthesize their own approaches and evaluation tools. One resource, the D-Cubed framework (Hinton et al. 2011), explains how developers can build propagation strategies into their project design and suggests the framework could be applied to evaluate a propagation plan, but it does not provide tools to facilitate such evaluation. Therefore, we synthesized these frameworks to create the DSAAI to support development, analysis, evaluation, and refinement of propagation plans.

The DSAAI was designed with features similar to those of an analytic rubric. Rubrics are effective at increasing consistency of judgment by individuals when assessing tasks, increasing consistency of scoring by different raters, and helping provide focused feedback about the task (Jonsson and Svingby 2007). We envision the primary use of this tool to be self-assessment of propagation plans by education developers. The assessment instrument is also useful as an analytical tool for researchers and grant writing consultants to examine and assess proposals and project plans. In this way, it provides a common evaluative instrument with which to compare multiple proposals and project plans and to summarize the results of such analysis.

Literature background for the assessment instrument

The Designing for Sustained Adoption Assessment Instrument is based on three primary bodies of literature: (1) change theory (Borrego and Henderson 2014; Henderson et al. 2011a; Henderson et al. 2012a), (2) the instructional system (Lattuca and Stark 2009; Lattuca 2011), and (3) effective propagation strategies (Fixsen et al. 2005; Cuban 1999). In addition to grounding this assessment instrument in the literature, it is based on analysis of successfully propagated STEM innovations, as identified by members of the STEM community (Khatri et al. 2013a; Khatri et al. 2013b; Khatri et al. 2014), and analysis of a significant subset of proposals funded by the NSF Course, Curriculum, and Laboratory Improvement (CCLI) program (Cole et al. 2014; Stanford et al. 2014).

Change theory

Adopting an educational innovation requires an individual to make changes to implement the developed product. Many people are aware of the general framework developed by Rogers (2003) that describes the process adopters take: awareness, interest, evaluation, trial, and adoption. If a negative decision is made after evaluation, the process ends there. If use does not continue after the initial trial, we refer to it as “Adopt and Drop.” The goal is sustained use of the innovation, or “Sustained Adoption.”

While it is important for developers to consider how adopters decide to learn about an innovation (Henderson et al. 2012b), it is not sufficient, particularly if the goal is for potential adopters to reach the trial and adoption stages. Change theory literature suggests several important factors, summarized in a white paper (Henderson et al. 2012a), which should be considered when promoting instructional changes. These factors include the ideas that:

  1. Change takes time—it is a process, not an event.

  2. Developing awareness is only the first stage in adoption.

  3. Different strategies are required to facilitate change based on the type of change required.

  4. Facilitating adoption/adaptation of different types of products may require altering instructor beliefs.

  5. Effective change strategies must consider multiple elements of the instructional system, such as resources, instructor beliefs, and institutional context.

These factors are important because changes to teaching practices are made within a complex instructional system that influences instructional decisions.

Instructional system

To promote and support changes to instruction, educational developers must understand the instructional system within which potential adopters practice, acknowledging that adoption decisions do not occur in isolation and many things influence decision-making about teaching and learning (Borrego et al. 2010; Kezar and Eckel 2002). The instructional system, shown in Fig. 1, can be visualized in four levels: the individual, the departmental, the institutional, and the extra-institutional.

Fig. 1 Components of the instructional system

Each of these levels has its own associated values and beliefs and is influenced by different factors that affect whether an educational innovation will be adopted. All of these levels interact with one another to create the instructional system in which teaching and learning occur. Understanding the instructional system includes identifying the elements that affect adoption of an innovation and understanding how those elements are influenced by other elements in the system. Understanding what will change, what elements of the system will aid adoption (enablers), and what elements will hinder adoption (barriers) can help improve the product and increase the likelihood of adoption of an innovation (Bergquist and Pawlak 2008; Schein 1992). Depending on the type of innovation, some levels of the instructional system are more important than others. Lattuca and Stark (2009) provide an instructional system framework to help visualize different components and their interactions. It is important to understand the instructional system in which potential adopters reside in order to select the propagation strategies that will be most effective for reaching the intended audience.

Effective propagation strategies

Many people believe that a “good idea, supported by convincing evidence of efficacy, will spread ‘naturally’ – that, on learning about the success of particular initiatives, others will become convinced enough to try them” (Seymour 2002). However, there is no evidence that this happens in practice, particularly in education. It is therefore important for project developers to discover the factors that influence faculty adoption of an educational innovation. For example, adopter engagement with the product influences its adoption (Rogers 2003). One way that developers can encourage adoption is to engage potential adopters throughout the development process to identify their needs and determine motivators that facilitate adoption (Blank and Dorf 2012). By engaging those who might use the innovation, developers are able to receive feedback and improve the product. In addition to engaging users, multiple dissemination strategies should be used to help inform potential users about an innovation (Litzinger et al. 2011). Dissemination strategies should be selected based on the type of innovation, but all educational developers should use a mixture of passive dissemination methods (conference presentations, journal articles, websites) and interactive dissemination methods (workshops, leveraging existing communities, personal connections) (Cuban 1999; Fixsen et al. 2005; Froyd 2001) to promote propagation of their innovations. Lastly, developers should provide some form of support to assist new adopters in overcoming barriers when initially implementing an innovation and to help in sustaining adoption (Henderson et al. 2011a; National Research Council 2012). Support can come in several forms, including materials that aid implementation and people-oriented support provided both by the project team and by external sources.

Our synthesis of the literature suggests that improving the likelihood that a project will be successful and have an impact on undergraduate STEM education depends on project design and strategies that facilitate adoption and adaptation of new learning materials and teaching strategies from the beginning. The authors applied what is known about change theory, the instructional system, and effective propagation strategies and developed the Designing for Sustained Adoption Assessment Instrument to analyze strengths and weaknesses of a propagation plan. Our goal is to help developers improve their efforts in creating new learning materials and teaching strategies with an eye towards eventual use by others, employing more effective strategies for developing instructor expertise in using new learning materials and teaching strategies, and addressing challenges to widespread implementation of educational innovations.

Methods

Three main sources of data were used to develop this assessment instrument: (1) the literature on change theory, the instructional system, and effective propagation; (2) analysis of innovations that were identified by members of the STEM education community to be successfully propagated; and (3) analysis of a subset (N = 76) of proposals funded by the NSF CCLI program in 2009. The project team decided to study CCLI proposals because this program was the primary funding source for the development of undergraduate STEM educational innovations at the time. Proposals funded in 2009 were chosen with the assumption that by the time this study was conducted, successful projects would have been far enough along to evaluate initial propagation of their products. In total, 76 CCLI proposals provided by principal investigators (PIs) were analyzed. These proposals constituted a representative sample of all funded projects according to project type (CCLI types 1, 2, and 3) and discipline.

The assessment instrument was developed using an iterative approach as described by Moskal and Leydens (2000). This process begins with defining the assessment purpose and objectives, followed by the development of scoring criteria for each objective. The final step involves reflection and refinement to ensure that all of the objectives are measured and that extraneous scoring criteria have been eliminated. For the DSAAI, an initial draft of the assessment instrument was developed to assess specific components of propagation. The first draft was based on the literature on change theory (Henderson et al. 2011a; Henderson et al. 2012a), the instructional system (Borrego et al. 2010; Lattuca and Stark 2009), and what is known about effective propagation (Seymour 2002; Blank and Dorf 2012; Cuban 1999; Fixsen et al. 2005). A range of criteria for scoring each component was determined. The authors then tested the criteria by rating a small subset (N = 3) of the 76 CCLI proposals and revised the draft using the results of this rating. Subsequent development followed an iterative process. In each iteration, the current draft was tested by having members of the project team read and individually score three of the overall set of 76 funded 2009 proposals. The team then met to analyze the scores, discuss results, and reach consensus on each item of the assessment instrument. Afterwards, modifications were made to the assessment instrument to clarify items and improve the consistency of ratings. This process was repeated for four iterations using a total of 12 proposals. After the DSAAI was finalized, all 76 proposals were analyzed, including the 12 used during development.

The authors found that in order to appropriately evaluate the propagation plan in each proposal, the nature of the innovation had to be identified; it was therefore necessary to identify the product type and features of each innovation. To create the product type and feature sections of the DSAAI, the authors categorized 47 successfully propagated innovations identified by members of the STEM education community (Khatri et al. 2013b; Khatri et al. 2015) in addition to using the literature and the 76 funded CCLI proposals. Because the project team did not have a detailed description of the propagation plan for each of the 47 successful innovations, their propagation plans could not be analyzed, but these innovations were helpful in developing the product type and feature sections of the DSAAI. Finally, a sixth research team member who did not take part in the initial development of the assessment instrument also tested the instrument by independently rating proposals.

During the development process, the authors wanted to ensure the instrument could be consistently interpreted by users other than its developers. After three rounds of revision and testing, the authors had the opportunity to ask 70 participants at the January 2013 NSF TUES PI (Transforming Undergraduate Education in STEM Principal Investigator) Conference to evaluate two proposals using a draft of the assessment instrument. The assessment instrument was also reviewed by members of the Analytical Chemistry Process Oriented Guided Inquiry Learning (ANAPOGIL) consortium and members of our NSF project advisory board. These groups confirmed the face validity of the instrument and provided suggestions to improve ease of use. These users reported that the DSAAI highlighted aspects relevant to supporting propagation and was very helpful in evaluating the different components required for propagation of educational innovations. They also suggested making the language more accessible to individuals without a strong background in change theory and further dividing the feature section so that pieces of a propagation plan could be more easily distinguished.

After the instrument was revised using feedback from these groups, the next draft of the DSAAI was used by eight education developers and a grant writing consultant (representing two of our target audiences) during a face-to-face workshop in October 2014. Participants used the DSAAI to analyze and improve the propagation plans for their recently funded projects or proposals in development. These users found the DSAAI valuable for evaluating propagation plans and reported that it provided new ideas about propagation that they had not considered before. The workshop participants also found the language to be very accessible and suggested only a few minor alterations to improve clarity. In addition, the participants were able to use the DSAAI consistently but acknowledged they required a small amount of training to achieve that consistency. The need for training was also noted by the project team in subsequent workshops.

Asking different groups to use the instrument provided feedback about its usability and helpfulness. After each group tested the DSAAI, the authors used the feedback to modify the instrument. The process continued until members of the STEM community were able to consistently score and interpret the DSAAI. A complete timeline for the development and testing of the DSAAI is shown in Fig. 2.

Fig. 2 Timeline for the development and testing of the DSAAI

Development

In this section, we describe the development of the instrument that emerged from the iterative process described above. The description is organized around the four main sections of the final version of the assessment instrument: Product Type, Features of Target Curricula and/or Pedagogies, Propagation Activities, and Aspects of Propagation Strategies that Influence the Likelihood of Success. (See Additional file 1 for the complete version of the assessment instrument.)

Section 1: Product type—Categorization of successfully propagated innovations

The categories for the product type section came from an attempt to organize a list of 47 successfully propagated innovations, as identified by members of the STEM community (Khatri et al. 2013b). Initially, the authors used the scheme developed by Ruiz-Primo et al. (2011)—conceptually oriented tasks, collaborative learning activities, technology, and inquiry-based projects. However, we found that successfully propagated innovations did not fit neatly into these four categories. The next round of coding was more open. Each member of the research team used the descriptions of each innovation to group similar types. As a team, we examined the differences and similarities in the groupings and discussed strengths and weaknesses of the different categorizations. This discussion eventually resulted in four categories: no change in pedagogy or content, change in pedagogy, change in content, and change in pedagogy and content. Subcategories were added to most categories to refine descriptions.

Section 2: Features of target curricula and/or pedagogies

The first iteration of this section was based on common themes in the literature and focused on (i) identifying what changes were required to properly implement the innovation, (ii) estimating the degree to which the materials and procedures were expected to be modified by an adopter, and (iii) identifying implicit assumptions made by the developers about what factors would lead to adoption by others (Bergquist and Pawlak 2008; Schein 1992; Henderson et al. 2011a). It became apparent when analyzing the CCLI proposals that these three categories contained multiple factors and that some aspects needed to be unpacked further. The item on deviation from the normative approach was expanded into two categories: (1) the degree of change to traditional teaching practices required by an instructor to adopt the innovation and (2) the degree of structural change required to adopt an innovation. The Implicit Assumptions Behind the Propagation Strategy item was removed from the list and became its own section; its evolution is explained in more detail later.

The revised categories, (a) the degree of change to traditional teaching practices required by an instructor to adopt the innovation, (b) the degree of structural change required to adopt an innovation, and (c) the degree the materials and procedures could be modified by an adopter, led to greater consensus among the project team, but there were often multiple interpretations for the degree of structural changes. Eventually, this item was split into two separate items—cooperation among adopters and resources required for implementation—because many innovations require cooperation or resources, but not always both. The final version of the rubric classified innovations in terms of four features: (1) amount of modification expected, (2) degree of change to normal teaching practices, (3) degree of cooperation required for implementation, and (4) resources required for implementation.

Section 3: Propagation activities checklist

The change literature describes a variety of assumptions about propagation held by many developers, which influence how they develop and disseminate their innovations. Our first attempt to classify factors that should influence propagation/dissemination as part of our research project included an item called Implicit Assumptions Behind the Propagation Strategy, which required raters to infer the underlying reasoning developers used to select propagation activities. The following are examples of underlying assumptions that might be inferred based on propagation activities presented in the proposal:

  1. If I build it and talk about it, people will use it—focus on talking about it (selling it to people).

  2. Good quality materials will naturally find an audience—focus on making the material work really well (usability studies).

  3. Academic journals are a good way to reach potential users.

  4. It will take several encounters before people actually adopt something.

  5. Effective project design and implementation requires a “systems” approach.

  6. Workshops are required for faculty to most effectively adopt curricula.

  7. User guides are helpful for curricular adoption.

  8. Being able to adopt components rather than the entire “thing” makes it easier to adopt curricular materials.

Because developers might use more than one strategy, these implicit assumptions became a separate section of the assessment instrument, where the rater would select all that applied. As the research team continued to analyze proposals, there was always a large amount of discrepancy between raters about the selected assumptions, and it was difficult to come to a consensus. After much discussion, it was determined that we were trying to identify two separate things in this section of the assessment instrument: the primary focus of the “change strategy” and what project teams were doing to help propagate the innovation. It was at this point that the Implicit Assumptions Behind Propagation Strategies section was removed from the assessment instrument and divided into two separate pieces: a checklist of the Propagation Activities used and the Primary Focus of the “Change Strategy.” Testing of the assessment instrument led to the decision that the Primary Focus of the “Change Strategy” section was not useful to developers for developing and rating their own propagation plans. Based on feedback from education developers, this section was found to be too abstract to understand without extensive knowledge of change theory and did not provide meaningful information to education developers on the strengths and weaknesses of their propagation plan or how to improve it. The research team has continued to use this analysis to characterize proposals, but it is not included in the DSAAI itself.

Section 4: Aspects of propagation strategies that influence the likelihood of success

The items in the evaluative section of the assessment instrument were based on the literature summarized above. Through comparative analysis and discussion, we refined and simplified the descriptors and assumptions to improve the consistency of the ratings. An additional goal was to make the assessment instrument relatively jargon-free and simple to understand so that those without a background in educational change theory would still be able to interpret the meaning and gradations of each item. The order of items was structured to facilitate the analysis. This section of the instrument focused on the following questions:

  1. How detailed is the description and rationale provided for who the project team expects to adopt products?

  2. How extensive and detailed is the plan for attracting, training, supporting, or following up with potential adopters?

  3. To what extent does the project team plan to engage potential adopters early in the project to identify potential barriers for adoption and strategies that could be used to overcome these barriers?

  4. To what extent has the project team considered the instructional system in which potential adopters reside and identified which elements of the instructional system will need to be addressed to support propagation?

  5. How detailed is the propagation plan and how much explanation is provided to indicate the reasoning behind why selected strategies were chosen?

  6. How well do the propagation strategies take into consideration the intended audience, the degree to which the innovation deviates from standard practice, and other factors that will influence adoption?

Each of these six questions formed the basis of a criterion in the analytical rubric portion of the instrument. The wording of the descriptors themselves was modified through the revision process to increase reliability in scoring and to facilitate utility of the instrument to target audiences.
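
To illustrate how ratings on these six criteria might be recorded for analysis, the sketch below is a hypothetical illustration only: the criterion labels are paraphrased from the questions above, and the 1–5 scale and all names are assumptions rather than the published DSAAI scale.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Criterion labels paraphrased from the six questions above (illustrative only).
CRITERIA = [
    "intended adopters described",
    "plan for attracting, training, and supporting adopters",
    "early engagement of potential adopters",
    "consideration of the instructional system",
    "detail and rationale of the propagation plan",
    "fit of strategies to audience and innovation",
]

@dataclass
class SectionFourRating:
    """Hypothetical record of one rater's scores on the evaluative section."""
    proposal_id: str
    scores: Dict[str, int] = field(default_factory=dict)

    def rate(self, criterion: str, score: int) -> None:
        # The 1-5 range is an assumption; low = little evidence of best practices,
        # high = clear evidence, mirroring the anchors described later in the article.
        if criterion not in CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        if not 1 <= score <= 5:
            raise ValueError("Scores assumed to run from 1 (little evidence) to 5 (clear evidence)")
        self.scores[criterion] = score

    def weakest_criteria(self) -> List[str]:
        """Criteria receiving the lowest score, i.e., where the plan most needs work."""
        low = min(self.scores.values())
        return [c for c, s in self.scores.items() if s == low]
```

A developer self-assessing a plan could focus revision effort on whatever weakest_criteria() returns, which parallels the intended use of this section for identifying strengths and areas for improvement.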

Reliability and validity

The goal of the project was to develop a reliable assessment instrument that is useful to individuals with varying degrees of knowledge of change theory who are proposing, implementing, or evaluating educational development projects. Since this assessment instrument has many features similar to rubrics, we followed many of the rubric testing procedures to help establish the validity and reliability of the DSAAI. According to the literature, when establishing reliability and validity for rubrics, it is important to test them with different members of the intended audience (Jonsson and Svingby 2007; Reddy and Andrade 2010; Moskal and Leydens 2000). An instrument is considered more reliable when scores are consistent across different raters and settings. Validity is frequently tested by gathering expert opinions, correlating with similar instruments, and reflecting on the purpose of the instrument to ensure the language is appropriate and addresses all aspects being evaluated. Reliability was initially established through consistent use of the instrument among research team members. The project team coded a subset of funded NSF CCLI proposals from 2009 and agreed within one score 80 % of the time on each item of the DSAAI. It is typical for rubric reliability to be established using consensus agreement. According to the literature, 70 % agreement is a typical threshold for acceptable reliability; however, it should be noted that agreement depends heavily on the number of levels in a rubric (Brookhart and Chen 2014; Jonsson and Svingby 2007).
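
As a concrete illustration of the consensus statistic reported here (agreement within one score), the following minimal Python sketch computes the fraction of items on which two raters' rubric scores differ by at most one level; the function name and example scores are hypothetical, not data from the study.

```python
from typing import Sequence

def within_one_agreement(rater_a: Sequence[int], rater_b: Sequence[int]) -> float:
    """Fraction of items on which two raters' scores differ by at most one rubric level."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of items")
    close = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= 1)
    return close / len(rater_a)

# Hypothetical scores from two raters on one DSAAI item across six proposals
scores_a = [1, 3, 4, 2, 5, 3]
scores_b = [2, 3, 2, 2, 5, 4]
print(f"Agreement within one score: {within_one_agreement(scores_a, scores_b):.0%}")  # 83%
```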

The next step was to test the instrument with different populations to ensure that members of the intended audience for the DSAAI were consistent in interpreting the assessment instrument and scoring the propagation plans of different educational innovations, including their own. As described above, we invited eight education developers and one grant writing consultant to participate in a workshop to test the products we had developed, including the DSAAI. For this workshop, each participant applied the instrument to a short summary of a propagation plan for their own recently funded project or proposal in development. While our initial purpose for the DSAAI was to characterize proposals, the workshop participants also found that it helped them frame their thinking about propagation, gave them ideas for how to more explicitly address the different components of an effective propagation plan, and highlighted how propagation fits into the larger project when designing an educational innovation. Prior to any training, the initial use of the DSAAI to evaluate the participants’ proposals resulted in 55 % agreement within one score for each category. However, after instruction and discussion regarding the features required to design a successful propagation plan (using our “How-To” Guide), agreement increased to over 80 % for subsequent use of the DSAAI. This demonstrated that to apply the DSAAI successfully, members of the intended audience need to develop a basic understanding of change theory and the instructional system in order to shift their thinking about what constitutes effective propagation. This information can be learned by attending a workshop or by working through the How-To Guide and workbook designed to be used in conjunction with the DSAAI (Henderson et al. 2015). The instrument has been further tested with about 30 participants in a session at the 2015 National Organization of Research Development Professionals (NORDP) annual conference and 24 faculty members who were developing IUSE proposals to be submitted in either November 2015 or January 2016.

The face validity of the DSAAI has been established through its alignment with the literature and assessment by other experts in the field. Interviews with experts indicated the DSAAI provides a scale to demonstrate what should be done to effectively propagate an innovation, ideas for people to use to help propagate their innovation during the life of the project, and a resource where the literature has been distilled down to a manageable amount for those with only a basic understanding of the theory.

To assess the ability of the DSAAI to predict the likelihood that a successful innovation will be adopted by others, we used the DSAAI to analyze a representative sample of all funded 2009 NSF CCLI proposals. For 31 of the 76 proposals analyzed, we conducted a search for evidence that the products of these proposals had propagated. The most common evidence found included various forms of dissemination such as conference presentations, journal articles, project websites, and course materials. This collected evidence was then used to determine what propagation strategies were used when developing the innovation and how members of the intended audience were being supported.

The initial analysis of these findings indicates the instrument can reasonably predict the likelihood of successful propagation, particularly given that there are often many deviations between what is proposed and what is enacted in practice. For five of the 31 proposals, almost no evidence of the project could be found. For the projects categorized as unlikely to propagate, there is no evidence that any materials have been adopted by others beyond the development team. There is some evidence that projects categorized as likely to successfully propagate have been adopted by others outside of the development team. These projects demonstrated that the PIs used a combination of passive and active dissemination strategies, engaged others in the development phase, and provided several forms of support to help others adopt their innovation. Furthermore, evidence was found that products from many of these projects had been adopted by others and that the PIs had secured additional sources of funding to help propagate their innovations. The results of this analysis will be presented in more detail in a forthcoming manuscript (Stanford et al. Expected 2016).

Overall, the DSAAI has been tested with members of our intended audience, including 70 TUES PIs, 58 education developers across STEM fields, and 31 grant writing consultants. In addition, NSF program directors have responded positively to the usefulness an instrument like the DSAAI could provide to members of the STEM community in the context of the broader impact of educational innovations developed with funding from NSF grants.

Results and discussion

In this section, we describe the instrument that emerged from the design process. There are four main sections of the final version of the assessment instrument, as shown in Table 1: Product Type, Features of Target Curricula and/or Pedagogies, Propagation Activities, and Aspects of Propagation Strategies that Influence the Likelihood of Success. (See Additional file 1 for the complete version of the assessment instrument.) In addition to these four sections, a cover page provides an overview of the goals and importance of each section of the assessment instrument. The cover page also orients users on how to most effectively apply the assessment instrument.

Table 1 Overview of the Designing for Sustained Adoption Assessment Instrument

Section 1: Product type

As stated above, propagation plans should depend on the type of educational innovation. This section of the assessment instrument is included to classify the various types of educational innovations. Classifying product type helps identify the primary goal of the innovation and the major changes potential adopters must make to adopt it. For example, educational innovations vary greatly in the resources, cooperation, and degree of change to traditional teaching methods needed for successful adoption and implementation. Innovations that require more effort to implement require a more extensive propagation plan to achieve sustained adoption of the developed products.

The categories for product type are shown in Table 2. When using this portion of the assessment instrument, users are expected to select only one of the categories. Choosing the major category (C1, C2, C3, or C4) is appropriate if the subcategories do not quite fit a product or if the product fits multiple subcategories. Categorizing the type of educational innovation is an important first step in evaluating a propagation plan because different types of educational innovations require different propagation strategies (Henderson et al. 2015). In order to successfully select the propagation strategies that align best with the innovation, developers must be able to identify what they are trying to propagate. While this may seem trivial, in workshops where participants were asked to review proposals (or structured summaries of proposals), we found that they initially struggled to determine exactly what the proposal author intended to propagate. With this feedback and upon further discussion and reflection, proposal authors were able to revise their proposals to be more explicit about what they were developing and what was intended for adoption by others.

Table 2 Product type classification

Section 2: Features of target curricula and/or pedagogies

The nature of the changes required by an adopter to implement an educational innovation has significant impact on what is required to convince a potential adopter to try the innovation. This section of the assessment instrument focuses on the features of the target curricula and/or pedagogies and the degree of change required for adoption/adaptation. Ratings in this section are descriptive, not evaluative.

Table 3 illustrates the four main features of an innovation, including details for the extremes of each item. A complete copy of the instrument can be found in Additional file 1. It is important to note that for many innovations, there is a range in the degree to which an instructor could change their teaching practice and in the cooperation and resources needed for implementation. Therefore, ratings should be based on the minimal degree of change that maintains the critical features of the product. Unless there is additional knowledge about the target population, ratings should assume that current teaching practices are instructor-centered with an emphasis on didactic lectures and that current classrooms are designed for teacher presentations. Identifying the key features of an innovation is important because it indicates how much change is expected of the potential adopter and what is needed to successfully implement the innovation. These factors play a key role in selecting appropriate propagation strategies that support sustained adoption of the innovation.

Table 3 Features of target curricula and/or pedagogies

Section 3: Propagation activities checklist

Propagation requires developing an effective innovation (product) compatible with the needs, interests, and context of potential users; getting the word out to potential users and motivating them to try the innovation; and developing mechanisms to support users so that they are successful in implementation. A variety of activities that change as a project matures are required to meet these requirements.

The goal of the Propagation Activities checklist, summarized in Table 4, is to encourage broad thinking about propagation by listing possible actions to encourage adoption. See Additional file 1 for the complete list. The list was generated based on the literature on effective propagation strategies, personal experiences of the research team, and analysis of well-propagated innovations and the subset of NSF CCLI proposals. This is not a complete list of all possible activities but a list of the most commonly used strategies. The types of propagation activities can be divided into three major categories: activities that are useful in developing an educational innovation that meets the needs of potential adopters, activities helpful in disseminating information about an innovation, and activities that assist the developer in supporting adopters to encourage sustained and effective implementation.

Table 4 List of propagation activities

Different propagation activities are important during the different stages of a project: getting started, refinement, and expansion. While a particular proposal may only focus on one or two of these stages, it is important to always keep the big picture in mind. Identifying the different types of propagation activities used helped raters determine the thoroughness of the propagation plan and the stage at which a project team intended to consider propagation. It is not expected that developers of educational innovations should use every method but that project teams select the methods that fit best with the product type and key features of the innovation being developed. This checklist is descriptive, not evaluative.
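
As a hypothetical illustration of how a project team might organize such a checklist for its own planning, the sketch below keys activities by the three categories and three stages named above; the specific activities listed are common examples drawn from this article, not the complete DSAAI checklist.

```python
from typing import Dict, List

# Illustrative only: categories and stages follow the text above; the specific
# activities are common examples, not the full DSAAI list.
PROPAGATION_ACTIVITIES: Dict[str, Dict[str, List[str]]] = {
    "develop": {
        "getting started": ["interview potential adopters about their needs"],
        "refinement": ["pilot materials with instructors outside the team"],
        "expansion": ["adapt materials for new course contexts"],
    },
    "disseminate": {
        "getting started": ["present at a disciplinary conference"],
        "refinement": ["publish a journal article", "maintain a project website"],
        "expansion": ["run hands-on workshops", "leverage existing communities"],
    },
    "support": {
        "getting started": ["draft an implementation guide"],
        "refinement": ["offer consultations to early adopters"],
        "expansion": ["build a community of practice for adopters"],
    },
}

def planned_activities(stage: str) -> List[str]:
    """Collect the activities a team might plan for a given project stage."""
    return [activity for category in PROPAGATION_ACTIVITIES.values()
            for activity in category.get(stage, [])]

print(planned_activities("refinement"))
```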

Section 4: Aspects of propagation strategies that influence the likelihood of success

Once the innovation and its product(s) have been characterized and propagation activities have been identified, the ultimate goal of the DSAAI is to evaluate the degree to which the developers intend to employ strategies identified in the literature as necessary for, or supportive of, successful propagation of education innovations. In this section of the instrument, a low score indicates that there is little evidence that the project team will use best practices and a high score indicates that there is clear evidence that the project team intends to use best practices. The high end of the assessment instrument represents ideal actions, which may not be feasible for all projects, particularly those at the pilot level or with very small budgets.

Table 5 illustrates the varying degrees in which developers can try to address each of the previously stated questions. (See Additional file 1 for the complete instrument and detailed descriptions of each item.)

Table 5 Aspects of propagation strategies that influence the likelihood of success

The descriptors for each criterion serve to aid in the evaluation of a propagation plan and as a model for the design of more effective propagation plans. Ultimately, the last section of the instrument is intended to determine how well the propagation strategies identified by the developers take into consideration the intended audience, the degree to which the innovation deviates from standard practice, the type of support needed by adopters, and other factors that will influence adoption.

Conclusions

Increasing attention is being given to the reality that instructional strategies and materials developed to improve student learning have not been widely adopted. Based on our synthesis of the literature, our analysis of successfully propagated innovations, and our analysis of a subset of funded NSF CCLI proposals, we argue that an important reason for the lack of adoption is that developers primarily focus their efforts on dissemination (spreading the word) instead of propagation (promoting successful adoption). To some degree, this focus is in response to requests for proposals that require dissemination plans but do not explicitly require intentional activities to support adoption by others. However, the funding landscape is beginning to change, as evidenced in a recent NSF program solicitation (National Science Foundation 2015) that required transferability and propagation to be addressed throughout a project’s lifetime. Based on conversations with members of our intended audience, it is also clear that many developers (and program officers) are frustrated with the lack of adoption of successful products but lack knowledge of what is required for such propagation to occur. In this article, we described the development and testing of a resource (the DSAAI) to help higher education stakeholders (developers, funding agencies, etc.) plan for successful propagation.

The DSAAI is part of a set of related resources that include a How-To Guide (Henderson et al. 2015), an executive summary, and a workbook. Our goal is for these resources to support efforts to increase the impact of education innovations. One way education developers can use the DSAAI is to shape and evaluate their proposals and projects by identifying strengths and areas for improvement in their propagation plans. The evaluation of proposals is important in determining the likelihood that an innovation can be successfully propagated based on the propagation strategies selected and how well they match the features of the innovation. Even pilot projects that focus on development of an education innovation should engage in best practices that make it more likely that a successful product can be propagated at a later stage of the project. The users who provided feedback during the development of this assessment instrument found it useful because it helped them not only evaluate their propagation plans but also become aware of other strategies they could use to help develop and disseminate their products, as well as strategies for supporting future adopters. Grant writing consultants can use it to provide feedback to their clients, and funding agencies can use parts of the DSAAI framework to evaluate grant proposals. We hope that these resources will serve to improve propagation practices and accelerate additional research into what is required for effective propagation of education innovations.

Abbreviations

ANAPOGIL: Analytical Chemistry Process Oriented Guided Inquiry Learning

CCLI: Course, Curriculum, and Laboratory Improvement

DSAAI: Designing for Sustained Adoption Assessment Instrument

NSF: National Science Foundation

PI: Principal Investigator

STEM: Science, Technology, Engineering, and Mathematics

TUES: Transforming Undergraduate Education in STEM

References

  • Bergquist, WH, & Pawlak, K (2008). Engaging the six cultures of the academy. San Francisco: Jossey-Bass.

  • Blank, S, & Dorf, B (2012). The startup owner’s manual: the step-by-step guide for building a great company. Pescadero: K and S Ranch, Inc.

  • Borrego, M, Froyd, JE, & Hall, TS (2010). Diffusion of engineering education innovations: a survey of awareness and adoption rates in U.S. engineering departments. Journal of Engineering Education, 99(3), 185–207. doi:10.1002/j.2168-9830.2010.tb01056.x.

  • Borrego, M, & Henderson, C (2014). Increasing the use of evidence-based teaching in STEM higher education: a comparison of eight change strategies. Journal of Engineering Education, 103(2), 220–252.

  • Brookhart, S. M., & Chen, F. (2014). The quality and effectiveness of descriptive rubrics. Educational Review, 2014, doi: 10.1080/00131911.2014.929565.

  • Cole, R., Stanford, C., Henderson, C., Froyd, J., Gilbuena, D., & Khatri, R. (2014). Designing for impact: Recommendations for curriculum developers and change agents. Paper presented at the IUPAC International Conference on Chemistry Education, Toronto, CA.

  • Cuban, L (1999). How scholars trumped teachers: change without reform in university curriculum, teaching, and research, 1890–1990. New York: Teachers College Press.

  • Fixsen, DL, Naoom, SF, Blase, KA, Friedman, RM, & Wallace, F (2005). Implementation research: a synthesis of the literature. Tampa: University of South Florida, National Implementation Research Network.

  • Froyd, JE (2001). Developing a dissemination plan. Paper presented at the Frontiers in Education Conference, Reno, NV.

  • Henderson, C., Beach, A., & Finkelstein, N. (2011a). Facilitating change in undergraduate STEM instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48(8), 952–984, doi: 10.1002/tea.20439.

  • Henderson, C., Cole, R., & Froyd, J. (2011b). Increasing the Impact of TUES Projects through Effective Propagation Strategies: A How-To Guide for PIs. National Science Foundation.

  • Henderson, C., Cole, R., Froyd, J., Gilbuena, D., Khatri, R., & Stanford, C. (2015). Designing Educational Innovations for Sustained Adoption: A How-to Guide for Education Developers Who Want to Increase the Impact of their Work. Kalamazoo, MI: Increase the Impact.

  • Henderson, C., Cole, R., Froyd, J., & Khatri, R. (2012a). Five Claims about Effective Propagation. A White Paper prepared for January 30–31, 2012 meetings with NSF TUES Program Directors.

  • Henderson, C., Dancy, M. H., & Niewiadomska-Bugaj, M. (2012b). Use of research-based instructional strategies in introductory physics: Where do faculty leave the innovation-decision process? Physical Review Special Topics - Physics Education Research, 8(2), 020104.

  • Hinton, T, Gannaway, D, Berry, B, & Moore, K (2011). The D-cubed guide: planning for effective dissemination. Sydney: Australian Teaching and Learning Council.

  • Jonsson, A, & Svingby, G (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review, 2, 130–144.

  • Kezar, A. J., & Eckel, P. D. (2002). The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive concepts? The Journal of Higher Education, 73(4), 435-460.

  • Khatri, R., Henderson, C., Cole, R., & Froyd, J. (2013a). Over One Hundred Million Simulations Delivered: A Case Study of the PhET Interactive Simulations. In  Physics Education Research Conference.

  • Khatri, R., Henderson, C., Cole, R., & Froyd, J. (2014). Learning About Educational Change Strategies: A Study of the Successful Propagation of Peer Instruction. In  Physics Education Research Conference.

  • Khatri, R., Henderson, C., Cole, R., Froyd, J., Gilbuena, D., & Stanford, C. (2013b). Increase the Impact Resources: List of Well-Propagated Strategies and Materials. http://www.increasetheimpact.com/resources. Accessed May 5, 2015.

  • Khatri, R., Henderson, C., Cole, R., Froyd, J. E., Friedrichsen, D., & Stanford, C. (2015). Characteristics of well-propagated undergraduate STEM teaching innovations. In  Physics Education Research Conference.

  • Kotter, JP (1996). Leading change. Watertown: Harvard Business School Press.

  • Lattuca, L. R. (2011). Influences on Engineering Faculty Members’ Decisions about Educational Innovations: A Systems View of Curricular and Instructional Change. A White Paper Commissioned for the Characterizing the Impact of Diffusion of Engineering Education Innovations Forum, retrieved from https://www.nae.edu/File.aspx?id=36674.

  • Lattuca, LR, & Stark, JS (2009). Shaping the college curriculum: academic plans in context (2nd ed.). San Francisco: Jossey-Bass.

  • Litzinger, TA, Zappe, SE, Borrego, M, Froyd, JE, Newstetter, W, Tonso, KL, et al. (2011). Writing effective evaluation and dissemination plans for innovations in engineering education. Paper presented at the ASEE Annual Conference & Exposition, Vancouver, BC, Canada.

  • Lund, TJ, & Stains, M (2015). The importance of context: an exploration of factors influencing the adoption of student-centered teaching among chemistry, biology, and physics faculty. International Journal of STEM Education, 2(13), doi:10.1186/s40594-015-0026-8.

  • McMartin, F, Tront, J, & Shumar, W (2012). A tale of two studies: is dissemination working? In Proceedings of the JCDL Conference.

  • Moskal, BM, & Leydens, JA (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research & Evaluation, 7, 10.

  • National Research Council (2012). Discipline-based education research: understanding and improving learning in undergraduate science and engineering. Washington DC: The National Academies Press.

  • National Science Foundation (2015). Improving Undergraduate STEM Education (IUSE: EHR) Program Solicitation NSF 15-585. Retrieved from http://www.nsf.gov/pubs/2015/nsf15585/nsf15585.pdf.

  • Reddy, YM, & Andrade, H (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

  • Rogers, EM (2003). Diffusion of innovations (5th ed.). New York: Free Press.

  • Ruiz-Primo, MA, Briggs, D, Iverson, H, Talbot, R, & Shepard, LA (2011). Impact of undergraduate science course innovations on learning. Science, 331(6022), 1269–1270.

  • Schein, EH (1992). Organizational culture and leadership (2nd ed.). San Francisco: Jossey-Bass.

  • Seymour, E. (2002). Tracking the processes of change in US undergraduate education in science, mathematics, engineering, and technology. Science Education, 86(1), 79-105, doi:10.1002/sce.1044.

  • Stanford, C., Cole, R., Froyd, J., Gilbuena, D., Henderson, C., & Khatri, R. (2014). Increasing the Impact of STEM Education Projects. Paper presented at the Biennial Conference on Chemical Education, Grand Rapids, MI.

  • Stanford, C., Cole, R., Froyd, J. E., Friedrichsen, D., Khatri, R., & Henderson, C. (Expected 2016). Designing for sustained adoption: Analysis of propagation plans in NSF-funded education development projects. Journal of Science Education and Technology.

  • Strang, D, & Soule, SA (1998). Diffusion in organizations and social movements: from hybrid corn to poison pills. Annual Review of Sociology, 24, 265–290.

  • Tront, J, McMartin, F, & Muramatsu, B (2011). Improving the dissemination of CCLI (TUES) educational innovations. In Proceedings of the ASEE/IEEE Frontiers in Education Conference.

  • Van de Ven, AH, & Poole, MS (1995). Explaining development and change in organizations. Academy of Management Review, 20(3), 510–540.

  • Weick, KE, & Quinn, RE (1999). Organizational change and development. Annual Review of Psychology, 50(1), 361–386.

  • Wejnert, B (2002). Integrating models of diffusion of innovations: a conceptual framework. Annual Review of Sociology, 28, 297–326.

Acknowledgements

This work was supported by the National Science Foundation under grants #1122446, #1122416, and #1236926. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We are grateful for the contributions of PIs who have allowed us to analyze their proposals and members of the STEM education community for sharing their knowledge of well-propagated innovations.

Author information

Corresponding author

Correspondence to Renée Cole.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors collaboratively developed, tested, and validated the assessment instrument. CS took primary responsibility for the data analysis and writing the article. The remaining authors assisted with the data analysis and read, edited, and approved this manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Additional file

Additional file 1:

Designing for Sustained Adoption Assessment Instrument. Full version of the rubric. (PDF 191 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Stanford, C., Cole, R., Froyd, J. et al. Supporting sustained adoption of education innovations: The Designing for Sustained Adoption Assessment Instrument. IJ STEM Ed 3, 1 (2015). https://doi.org/10.1186/s40594-016-0034-3
