
The STEM Faculty Instructional Barriers and Identity Survey (FIBIS): development and exploratory results

Abstract

Background

As institutions and science, technology, engineering, and mathematics (STEM) departments emphasize faculty use of evidence-based instructional practices (EBIPs), various barriers and faculty perceptions hinder that process. While a large body of literature exists in this area, no survey instrument has been developed to comprehensively and systematically capture all these factors. We developed and piloted a survey instrument, the Faculty Instructional Barriers and Identity Survey (FIBIS), to delineate university STEM faculty perceptions of barriers to using EBIPs and to examine the composition and sources of faculty professional identity, as well as faculty use of and dissatisfaction with these practices.

Results

Initial pilot study results with a small, targeted sample (n = 69) show how FIBIS can be used to understand factors that impact instructional practices. For our sample, we found that higher perceived departmental barriers and limited supports correlate with lower work identity. Even at a research university, we do not see a negative correlation between research and teaching identities; STEM faculty can have both, despite hypothesized tensions in the literature. We also found that sense of belonging and faculty community were descriptively higher for underrepresented minority than for non-underrepresented minority faculty. As previous work has suggested, use of EBIPs varied by department. Finally, descriptive differences were seen in faculty barriers for those who were and were not satisfied with EBIPs. This suggests another layer to add to models of faculty decision-making regarding the use of EBIPs: dissatisfaction with implementing these practices once they are attempted. Further work is needed to confirm and understand these initial findings.

Conclusions

Many of the exploratory findings from our pilot of FIBIS align with previous qualitative work, suggesting that FIBIS is a tool that can capture faculty identity, use of EBIPs, and barriers to instruction. While we do not intend to generalize our claims, the following suggestions for our institution may demonstrate how FIBIS can be used to try to reduce STEM faculty barriers to implementing EBIPs: (1) developing a strong teaching community (especially needed for persistence of faculty from underrepresented minorities), (2) helping faculty connect to the university as a whole, and (3) working with departments to better support implementation of EBIPs. The results presented and the implications of these findings demonstrate the potential of FIBIS as a tool for examining factors that influence STEM faculty instructional practice. Future work includes further validating the barriers component of the survey so that FIBIS can be used to identify and support change in institutions of higher education.

Introduction

Despite the abundance of evidence demonstrating the positive impact of active learning on student outcomes in undergraduate science, technology, engineering, and mathematics (STEM) courses (e.g., Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt, & Wenderoth, 2014; Prince, 2004), there are, overall, still only limited shifts in instruction toward more evidence-based instructional practices (EBIPs) (Stains et al., 2018). Prior research suggests that these limited changes in instruction may be impacted by tensions between professional identity related to teaching and research (Brownell & Tanner, 2012) and instructor-perceived barriers (e.g., Henderson & Dancy, 2007; Foster, 2014; Shadle, Marker, & Earl, 2017). Further, instructional practices may differ based on disciplinary contexts and course level (e.g., Drinkwater, Matthews, & Seiler, 2017), and researchers argue that these contextual factors should be more thoroughly explored in order for instructional change efforts to be effective (Lund & Stains, 2015). However, no validated comprehensive instrument exists to systematically and efficiently assess the factors that may promote or impede instructors’ implementation of EBIPs. In the present study, we report on the development and initial validation of the Faculty Instructional Barriers and Identity Survey (FIBIS), share preliminary results from the instrument, and discuss the implications of using FIBIS to support departmental and institutional change efforts in undergraduate STEM education.

Literature review

EBIPs can be defined as “instructional practices that have been empirically demonstrated to promote students’ conceptual understanding and attitudes” (Lund & Stains, 2015, p. 2). There is a complex relationship between faculty beliefs about student learning, contextual barriers and affordances to teaching, and practice as they relate to EBIPs; those seeking to change practice need to consider how all of these interact with one another (Hora, 2012; Hora, 2014).

Synthesizing change research studies across faculty development, physics education, and higher education, Henderson, Beach, and Finkelstein (2011) concluded that there are four categories of change strategies for undergraduate instruction: disseminating curriculum and pedagogy, developing reflective teachers, developing policy, and developing shared vision. They also concluded that making tested “best practice” materials available to faculty and having “top-down” policies meant to influence the use of EBIPs do not work to bring about change. Instead, what works are approaches that seek to change faculty beliefs, utilize long-term interventions, and recognize that institutions are complex. After a review of the literature using a systems approach, Austin (2011) explained that faculty decisions are shaped by both individual characteristics (i.e., prior experience, doctoral socialization, the discipline, career stage, appointment type, and faculty motivation) and their current contexts (i.e., institution, department, and external contexts). From this systems perspective, there are four “levers” that can serve as barriers or promoters of faculty choices: reward systems, work allocation, professional development (PD), and leadership (Austin, 2011). Similarly, in their model of faculty curriculum decision-making, Lattuca and Pollard (2016) pulled from the literature to conclude that influences external to the institution affect both individual (i.e., faculty identity, experiences, and beliefs/knowledge) and internal (e.g., institutional, departmental) influences. Their model suggests that external influences affect individual influences, which in turn affect motivation for change and, finally, the decision to engage in change. Below, we review the literature on these factors and the ways in which they impact instructional practice and demonstrate the gaps in the previous work that the present study seeks to address.

Barriers to implementing EBIPs

There is a body of mostly qualitative work published on faculty perceptions of barriers to implementing EBIPs (e.g., Michael, 2007; Henderson & Dancy, 2007; Dancy & Henderson, 2008; Walder, 2015). This literature suggests that there may be different barrier foci at different universities; however, there are some common themes across these studies to characterize instructional barriers. Based on our review and synthesis of the literature, we categorized these barriers into professor barriers, student barriers, technical/pedagogical barriers, and institutional/departmental barriers, similar to how Walder (2015) categorized barriers (Table 1).

Table 1 Overview of research focused on faculty perceptions of barriers to implementing EBIPS

Though there is general agreement on time being the greatest issue for faculty, studies performed at different universities have found different barrier foci. For example, Michael (2007) explored barriers to faculty use of active learning at a workshop with 29 faculty participants, some of whom were in STEM, with results falling into three categories: student characteristics, teacher characteristics/problems, and pedagogical issues that affect student learning. The most cited barriers were time, financial obstacles, teaching resistance, student commitment, and technical complexity. Walder (2016) interviewed 32 professors who had won teaching awards at a Canadian university and divided their barriers to innovation into six categories: professors, technical aspects, students, institution, assessment, and discipline. Professor-related obstacles were by far the most frequently mentioned barriers for this population. Turpen, Dancy, and Henderson (2016) interviewed 35 physics faculty across several institutions to examine barriers to the use of peer instruction (PI) by users and non-users. Non-users of PI were most concerned with time while PI users were most concerned with implementation difficulties, suggesting that faculty barriers differ between those who have and have not used EBIPs. In a study of six tenured physics faculty from four institutions, Henderson and Dancy (2007) found the largest barriers to the use of EBIPs to be the need to cover content, students’ attitudes toward school, lack of time due to heavy teaching load and research, departmental norms of lecture, student resistance, class size, room layout, and fixed or limited time structure. The differences observed in barriers to use of EBIPs across each study could be a result of the institutional context. However, without a common instrument to measure barriers, it is challenging to draw conclusions across studies on the most common or significant barriers for faculty. The development of FIBIS seeks to address this gap in the literature.

Several studies have examined barriers at the same institution across departments (e.g., Austin, 2011; Borrego, Froyd, & Hall, 2010; Lund & Stains, 2015; Shadle et al., 2017; Stieha, Shadle, & Paterson, 2016) and have found that they vary—the implication being that each environment is unique and must be understood before effective reform efforts can move forward. In particular, Lund and Stains (2015) found that the physics faculty at their university possessed primarily student-centered views of teaching and viewed contextual influences positively, while the chemistry faculty had the most teacher-centered views and viewed contextual influences as barriers to the adoption of EBIPs; biology faculty fell in between. Conversely, Landrum et al. (2017) found that chemistry faculty at their institution had the lowest rates of EBIP adoption while computer science faculty had the highest. These studies demonstrate that not only does EBIP implementation vary across departments within the same university, but EBIP implementation within the same department also varies across universities. Punctuating this, Reinholz and Apkarian (2018) reviewed the literature extensively and concluded that departments are key to systemic change in STEM fields. Therefore, it is important that barriers are understood within the context of both departments and institutions.

Despite the evidence supporting the use of EBIPs, many faculty members choose not to utilize such practices because of their beliefs (Addy & Blanchard, 2010), in particular, deeply held beliefs about teaching and learning (Pajares, 1992). Andrews and Lemons (2015) showed that faculty tend to continue thinking and teaching as they always have until a personal experience persuades them otherwise. A number of studies have shown that dissatisfaction with current practice is necessary for instructors’ beliefs to change (Bauer et al., 2013; Gess-Newsome, Southerland, Johnston, & Woodbury, 2003; Gibbons, Villafane, Stains, Murphy, & Raker, 2018; Windschitl & Sahl, 2002). Reflection on teaching has been found to be vital in creating the dissatisfaction that can lead faculty to change their practices (e.g., Kane, Sandretto, & Heath, 2004). Among graduate students and postdocs, reflecting upon novel teaching experiences helped these instructors experience conceptual change in their teaching beliefs and practices (Sandi-Urena, Cooper, & Gatlin, 2011). The discipline-based education research (DBER) report reviewed the beliefs literature and found that teacher beliefs are usually the most common barrier to implementing EBIPs (National Research Council [NRC], 2012). When there is a mismatch between faculty beliefs and their practice, faculty are more likely to become dissatisfied with their teaching and be open to EBIPs. Madson, David, and Tara (2017) found that the best predictor of faculty use of interactive methods was faculty beliefs about the positive results of these methods for themselves and their students, more important than the extra time needed to implement them or characteristics of the course or instructor. However, the literature contains no comprehensive survey assessing barriers to instruction, including faculty beliefs, that would give researchers a systematic way to compare and contrast how these factors affect one another. The closest survey in scope and focus is the Survey of Climate for Instructional Improvement (Walter, Beach, Henderson, & Williams, 2014). However, this instrument investigates departmental climate for instructional improvement rather than barriers to instruction directly. A full listing of instruments and their foci is included in Table 2; most of these instruments are focused on teaching practices in a specific discipline. None of these instruments contain a comprehensive, quantitative measure of barriers to using EBIPs or of identity. Further, one of the two instruments that focus on barriers (Shell, 2001) explores faculty barriers to implementing critical thinking in nursing courses, not barriers to STEM EBIPs more broadly. The other recently published work using a barriers-related survey (Bathgate et al., 2019) explores the presence or absence (checked or unchecked boxes) of 30 paired barriers and supports (60 items in total) for faculty using evidence-based teaching methods. Unlike these other instruments, the FIBIS instrument also connects a host of demographic and background information with use of EBIPs, barriers to using EBIPs (and the extent to which these are barriers), and issues related to teaching and research identity that may affect faculty’s use of EBIPs.

Table 2 List of survey instruments related to faculty use of EBIPs including measures of faculty teaching practices, beliefs, and institutional climate; barriers; and supports to using EBIPs. (Note: list is not intended to be exhaustive but to demonstrate what is being done in these areas)

Professional identity

In addition to barriers, researchers suggest that faculty professional identity can impact instructional practices (e.g., Abu-Alruz & Khasawneh, 2013; Brownell & Tanner, 2012). Professional identity can be defined as the core beliefs, values, and assumptions about one’s career that differentiate it from others (Abu-Alruz & Khasawneh, 2013) and includes professional values, professional location, and professional role (Briggs, 2007). Identity also includes the sense of professional belonging (Davey, 2013) and is constantly developing over the course of a lifetime (e.g., Henkel, 2000). Professional identity in academia most generally relates to research and teaching that are discipline-specific (Deem, 2006). While there is not an agreed-upon definition of professional identity in the higher education literature, based on the results of their meta-analysis, Trede, Macklin, and Bridges (2012) suggest three main characteristics of professional identity: (1) individuals develop “knowledge, a set of skills, ways of being and values” that are common among individuals of that profession, (2) these professional commonalities differentiate the individual from others in different professions, and (3) individuals identify themselves with the profession (p. 380). The authors also argue that faculty professional identity begins developing when individuals are students, and much of the research on professional identity focuses on the development of identity in undergraduate students (e.g., Nadelson et al., 2017; Hazari, Sadler, & Sonnert, 2013; Ryan & Carmichael, 2016), graduate students (e.g., Schulze, 2015; Gilmore, Lewis, Maher, Feldon, & Timmerman, 2015; Hancock & Walsh, 2016), and faculty (e.g., Abu-Alruz & Khasawneh, 2013; Barbarà-i-Molinero, Cascón-Pereira, & Hernández-Lara, 2017; Sabancıogullari & Dogan, 2015).

Overall, the research suggests that there are both internal (e.g., beliefs, prior experiences) and external (e.g., social expectations, departmental contexts) factors that may influence faculty’s professional identity (e.g., Abu-Alruz & Khasawneh, 2013; Samuel & Stephens, 2000; Starr et al., 2006). Faculty, particularly those at research-intensive universities, can identify with both the teacher/educator profession and the research profession. However, many times, these identities can be in tension, which may be due to the institutional and individual value placed on these two responsibilities (e.g., Brownell & Tanner, 2012; Fairweather, 2008; Robert & Carlsen, 2017).

First, the institutional value placed on research and teaching may have an impact on faculty’s professional identity. In a study of research productivity of 25,780 faculty across 871 institutions, Fairweather (2002) found that the more time faculty spend on teaching, the lower their average salary, regardless of institution type. Furthermore, Fairweather (2002) found that only 22% of faculty studied were able to attain high productivity in both teaching and research during the study’s two 2-year periods, with untenured faculty being the least likely to be highly productive in both. Exploring faculty perceptions of institutional support for teaching in Louisiana (n = 235), Walczyk, Ramsey, and Zha (2007) determined that faculty tended to include teaching as part of their professional identity when they perceived their institution as valuing teaching in the tenure and promotion process. These studies suggest that the ways in which institutions value teaching, or not, may play an important role in faculty’s professional identity.

Second, the value of teaching and research perceived by an individual faculty member, as well as how others (e.g., colleagues, advisors) perceive the value of teaching and research, may impact faculty professional identity. For example, Fairweather (2005) describes faculty perceptions that time spent improving teaching leads to less time for research, which leads to less faculty involvement in reform, even when faculty are committed to student learning (Leslie, 2002). This may suggest that, despite a strong teaching identity, faculty members’ own perceptions may negatively impact this identity. Robert and Carlsen (2017) explored the tension of teaching and research through a case study approach of four professors and concluded that personal career goals and aspirations, likely derived during their graduate school experiences, may be leading faculty to devalue teaching. Further, Brownell and Tanner (2012) hypothesized that tensions between “scientists’ professional identities—how they view themselves and their work in the context of their discipline and how they define their professional status—may be an invisible and underappreciated barrier to undergraduate science teaching reform, one that is not often discussed, because very few of us reflect upon our professional identity and the factors that influence it” (p. 339). Despite the perceived importance of both teaching and research identities for faculty in implementing EBIPs, no instrument to our knowledge measures both. Thus, the FIBIS incorporates both teaching and research identity components into the survey to capture these potential tensions.

Conceptual framework for EBIP implementation

The research described above suggests that there are a variety of factors that can promote or impede faculty implementation of EBIPs. These factors include:

  • Internal and external-related barriers

  • Teaching and research identities

  • Faculty beliefs

  • Institutional supports

Further, these factors can be influenced by:

  • Departmental contexts

  • Mentors and colleagues

  • The tension between teaching and research identities

Table 3 lists and describes several different models related to instructor decision-making and use of EBIPs used in the education literature. Rather than attempt to provide a new model for interpreting faculty decisions to use EBIPs, we adapted Lattuca and Pollard’s (2016) model of faculty decision-making about curricular change. This model describes a number of important factors while maintaining simplicity and a specific focus on instructional barriers in higher education. To Lattuca and Pollard’s original model, we added other elements from the literature, such as dissatisfaction with practice (Andrews & Lemons, 2015) and the role of prior experiences, as opposed to institutional environment, in shaping beliefs about teaching and the decision to engage in EBIPs (Oleson & Hora, 2014). As seen in Fig. 1, there are extra-institutional influences (on faculty and the institution/department), external influences (from the institution and department on the faculty), and internal influences (from the faculty themselves in the form of beliefs, identity, prior experiences, and knowledge). This aligns with Henderson et al.’s (2015) bridge model for sustained innovation showing individual, department, institution, and extra-institution factors affecting teaching practices. In Lattuca and Pollard’s model, however, the extra-institutional and external influences are interpreted through faculty’s internal lens, informing their motivation to change and, ultimately, their decision to engage in the use of EBIPs. In this study, we examine identity, which includes components of extra-institutional, external, and internal influences, as well as barriers to the use of EBIPs, which include external and internal influences. We also look at the dissatisfaction component of motivation to change.

Table 3 List of frameworks related to faculty decision-making and use of EBIPs
Fig. 1

Model for faculty’s decision-making process on using EBIPs. Underlined terms were included in the FIBIS instrument and measured in this study. (Note: arrows indicate theoretically-based relationships, not empirically tested ones.)

Purpose and rationale

The prior research paints a complex picture around faculty instruction and the choices made to implement or not implement EBIPs. As reviewed in detail by Williams et al. (2015), there are a variety of self-report instruments that measure faculty teaching practices in specific STEM disciplines (e.g., Borrego, Cutler, Prince, Henderson, & Froyd, 2013; Henderson & Dancy, 2009; Marbach-Ad, Schaefer-Zimmer, Orgler, Benson, & Thompson, 2012) and a few that measure practices in STEM higher education broadly (e.g., Hurtado, Eagan, Pryor, Whang, & Tran, 2011). Currently, only one instrument we know of quantitatively measures faculty professional identity in higher education (Abu-Alruz & Khasawneh, 2013); however, no instrument measures research identity alongside teaching identity. A few previously developed instruments include small sections on barriers to EBIPs (e.g., Lund & Stains, 2015; Prince, Borrego, Henderson, Cutler, & Froyd, 2013); however, these sections do not comprehensively cover the potential barriers elucidated in the qualitative literature. To our knowledge, only two surveys quantitatively measure barriers to teaching in significant depth. A survey used by Shell (2001) focused on barriers to teaching critical thinking in general (no other EBIPs) and was developed for a nursing context only. Recently, Bathgate et al. (2019) developed a survey that includes sections on use of evidence-based teaching and 30 questions assessing only the presence or absence of barriers and their associated supports (60 questions in total). While an important contribution to the field, this work is limited in its nuance (presence or absence of barriers rather than magnitude of the barriers), does not address faculty identity or extensive faculty background details, and does not provide details on the validity of the instrument or publish the instrument itself. Despite the plethora of work on faculty use of EBIPs, faculty identity, and barriers to using EBIPs, no studies seek to validate an instrument that possesses both breadth and nuance in order to systematically understand the relationships between these particular variables. The present study aims to fill this gap by developing and initially validating an instrument, the Faculty Instructional Barriers and Identity Survey (FIBIS), to examine relationships between (a) use of EBIPs, (b) barriers to using EBIPs, and (c) tensions in teaching and research identity. From our initial pilot of the FIBIS, we sought to answer the following research questions:

  1. What are faculty members’ reported use of and satisfaction with EBIPs, perceived instructional barriers, and professional identity?

  2. What relationship exists between these variables?

  3. Where do differences in barriers, identity, and reported use of EBIPs exist?

Methodology

Below, we describe the steps in the development and initial validation of the FIBIS instrument. We then detail the pilot study data collection and analysis methods. The final instrument can be viewed in the supplementary material accompanying the online article (Additional file 1: FIBIS Instrument Supplement). Note that all research was conducted with permission from our institution’s Institutional Review Board and with informed consent of all voluntary participants whose data were used in this study.

Survey development and initial validation

The FIBIS instrument was developed based on best practices in instrument development derived from the literature (e.g., American Educational Research Association [AERA], 2014; Sawada et al., 2002; Walter et al., 2016), which helped identify the steps in our process, detailed below. The FIBIS includes both Likert-scale and open-ended questions to quickly, yet descriptively, capture information on factors that the literature suggests influence the use of EBIPs but have not all been quantified. Our survey development and validation steps included (1) reviewing the literature to construct the initial FIBIS sections and questions, (2) obtaining face and content validity of FIBIS from an expert panel, and (3) revising FIBIS based on panel feedback. The subsequent section overviews the FIBIS pilot study. The supplementary material accompanying the online article includes additional details about the survey development and initial validation.

Construction of initial survey

We began construction of the instrument by searching the literature to find questions that could be modified and adapted for FIBIS. This search resulted in the three main components of the FIBIS: (1) faculty use of and satisfaction with EBIPs (modified from Lund & Stains, 2015), (2) barriers to faculty implementation of EBIPs (developed from the qualitative literature), and (3) professional identity of academics who conduct research and teach (modified from Abu-Alruz & Khasawneh, 2013). Informed by the literature on changing instructional practice, we also included a section on contextual factors, which covered the types and sizes of classes taught, department, level of engagement in the university’s teaching center, prior experiences, and satisfaction with current practice. The specific details follow in the next few paragraphs.

To identify instruments focused on understanding which EBIPs faculty use, we searched the terms “evidence-based practices,” “research-based practices,” or “active learning” combined with “higher education” and “STEM.” We found a survey instrument that included a portion in which instructors reported their familiarity and use of defined EBIPs (Lund & Stains, 2015), which included 11 total EBIPs: Think-pair-share, just-in-time-teaching, case studies, Process Oriented Guided Inquiry Learning (POGIL), SCALE-UP, interactive lecture, collaborative learning, cooperative learning, teaching with simulations, and peer instruction. After reading the EBIP name and definition, participants indicated their use of these EBIPs on a scale from 1 = never heard of it to 6 = I currently use all or part of it. In the initial stages of development, we used these 11 EBIPs in their original form and added an additional open-ended question for those who indicated they had used EBIPs to understand the extent to which participants were satisfied or dissatisfied with their practice.

An initial search of the literature using the search terms “barriers,” “higher education,” and “STEM” resulted in no instruments that measured the variety of barriers to EBIPs. Lund and Stains included a few barrier questions on their survey but not an extensive section. Walter et al.’s (2014) SCII survey focused on climate for improvement rather than barriers themselves, serving a different, but related, purpose than ours. One instrument from nursing education related to barriers to the use of critical thinking (Shell, 2001) was found, which we used as a point of comparison for the barriers questions in the FIBIS. With no other instruments to begin with at that time, we developed our own barrier questions based upon a review of the barriers literature described above. For this part of the instrument, the list of barriers was compiled from the most relevant and extensive barrier articles (Brownell & Tanner, 2012; Dancy & Henderson, 2008; Elrod & Kezar, 2017; Henderson & Dancy, 2007; Michael, 2007; Porter, Graham, Bodily, & Sandberg, 2016; Turpen et al., 2016; Walder, 2015). The questions developed were organized around the four barrier types found in Table 1, which were similar to categories developed by Walder (2015).

Finally, we searched the literature for questions on professional identity with search terms “professional identity” or “teaching and research” combined with “higher education” and “STEM.” We found one instrument involving faculty identity (Abu-Alruz & Khasawneh, 2013), which organized faculty professional identity around four dimensions:

  1. Work-based (e.g., connection with the university community)

  2. Student-based (e.g., commitment to supporting students in the classroom)

  3. Self-based teaching (e.g., connection to the teaching community)

  4. Skill-based teaching (e.g., commitment to improving teaching)

However, there were no questions related to faculty research identity. We used this instrument as the basis to build the identity portion of our survey. We added two sections to their original survey to create a self-based research identity component and skill-based research identity component that mirrored the already present teaching identity sections. We also modified the overall work section of this survey to include analogous research identity statements. For example, for the statement “I am committed to the university mission, vision, and goals for teaching,” we added the analogous statement “I am committed to the university mission, vision, and goals for research.”

Face and content validity

Next, we sought to obtain face and content validity (AERA, 2014; Trochim, Donnelly, & Arora, 2016) of the survey through review by a panel of four experts. Both face and content validity seek to determine the degree to which a construct is accurately translated into the operationalization. Face validity examines the operationalization at face value to determine whether or not it appears to be a good translation of the construct, while content validity examines the operationalization compared to the construct’s relevant content area(s). To ensure the EBIP terms and definitions from Lund and Stains (2015), descriptions of barriers, and characterization of identity were valid, we included a science educator, two STEM faculty members, and the director of the institution’s teaching center in our expert panel. The two faculty members served as face validity experts and gave feedback regarding survey question clarity and readability. The director and educational researcher served as content and face validity experts, providing feedback on the extent to which the questions captured the ideas of the constructs.

Survey revision

Based on the panel feedback, we made small modifications to the barriers to implementation and professional identity sections. The barriers statements were revised based on the committee’s comments, with a couple of relatively redundant statements removed and some rewording for clarity and accuracy. For example, the statement “Educational support faculty/staff (e.g., educational developers, instructional designers, educational researchers) do not value my experience” was changed based on committee feedback to “Faculty/staff who support instructional development (e.g., educational developers, instructional designers, educational researchers) do not value my experience.” The identity portion was modified to a small extent for clarity but mostly left untouched.

Descriptions of the various EBIPs were more extensively modified based on a glossary developed by our university’s teaching center, comments by the review committee, and a search of the extant literature. We also went back to the definitions used by the authors who branded and/or researched each EBIP. References included the SCALE-UP website (http://scaleup.ncsu.edu/), cooperative learning’s seminal article (Slavin, 1980), a review of active learning (Prince, 2004), POGIL.org, and peer instruction’s designer (Crouch & Mazur, 2001). Finally, the panel agreed that the Likert scale for reported use of EBIPs should be collapsed to 1 = never heard of it, 2 = heard of it, 3 = familiar but have not used it, 4 = familiar and plan to implement, and 5 = used it in the past or currently use it.

The modified version of FIBIS was then sent to the director of the university teaching center for final review feedback. With the director’s comments on the modified version, a final meeting between the director and the researchers was held, and a consensus was reached on the final version of the instrument with a limited number of final modifications. The final version of the survey was entered into Qualtrics, at which point it was emailed to the two faculty reviewers and the two researchers who designed the survey; they took the survey to verify that the Qualtrics version was working properly and had no discernible problems or errors in the questions. A few comments were made regarding flow, and appropriate changes were made. The final survey sent to faculty for this study consisted of four sections: awareness and adoption of EBIPs (11 Likert questions with two scales, five free response), barriers to implementation (46 Likert questions, two free response), professional identity (31 Likert questions, one multiple choice, four free response, two fill-in-the-blank), and demographics/background (ten multiple choice questions, two free response).

Pilot study

Data collection

The FIBIS survey was sent via Qualtrics in November of 2017 to 150 STEM faculty members who were part of a larger research project at our institution. Of the 150 faculty emailed, 86 (57%) completed the survey. Of those, 69 respondents indicated they both taught and conducted research within their discipline or around teaching and learning (as opposed to only teaching courses). Since the survey included questions on both teaching and research identities, only those 69 respondents were used for the survey analysis. Table 4 overviews the demographics of these participants.

Table 4 Overview of participant demographics

Data coding

Responses to the 11 EBIP questions were first coded according to whether the instructor was using or not using the practice (i.e., chose 5 = I have used it in the past or I currently use it). Given previous qualitative research on time/effort being an important barrier to implementing EBIPs (e.g., Michael, 2007; Walder, 2016; Shadle et al., 2017), we chose to group the EBIPs, with feedback from our expert panel, based on the perceived effort needed to implement the practice (Table 5). This helped us organize the types of EBIPs, better interpret the data on barriers to using EBIPs, and elucidate why faculty might or might not be using particular practices. Each participant received a percent score for each EBIP effort category, reflecting how many of that category’s EBIPs they indicated they were implementing. For example, if a participant indicated that they used POGIL, case studies, simulations, and peer instruction, they would receive a 33% for high-effort EBIPs, 25% for moderate-effort EBIPs, and 25% for low-effort EBIPs.

Table 5 EBIP effort-to-implement groups
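To make the scoring concrete, the following is a minimal sketch in Python of the effort-category percent scores described above. It is an illustration rather than our analysis code, and the effort groupings shown are placeholders standing in for the actual assignments in Table 5.

```python
# Illustrative sketch of the EBIP effort-category scoring described above.
# The group assignments below are placeholders; the actual groupings are in Table 5.
HIGH_EFFORT = ["POGIL", "SCALE-UP", "just-in-time teaching"]                  # assumed
MODERATE_EFFORT = ["case studies", "cooperative learning",
                   "collaborative learning", "teaching with simulations"]      # assumed
LOW_EFFORT = ["think-pair-share", "peer instruction", "interactive lecture"]   # assumed

def effort_scores(ebips_used):
    """Percent of each effort category's EBIPs that a participant reports implementing."""
    used = set(ebips_used)
    pct = lambda group: 100 * len(used & set(group)) / len(group)
    return {"high": pct(HIGH_EFFORT),
            "moderate": pct(MODERATE_EFFORT),
            "low": pct(LOW_EFFORT)}

# A participant reporting POGIL, case studies, simulations, and peer instruction;
# the resulting percentages depend on the actual Table 5 groupings.
print(effort_scores(["POGIL", "case studies",
                     "teaching with simulations", "peer instruction"]))
```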

We also calculated the amount of dissatisfaction participants had with implementing EBIPs. Based on participants’ responses about their dissatisfaction with the EBIPs they had implemented, each participant received a count of the number of EBIPs with which they were dissatisfied. The percent dissatisfaction was calculated by dividing this count by the total number of EBIPs they reported implementing. For example, if a participant reported implementing six EBIPs and was dissatisfied with three of them, they would receive a score of 50%.
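As an illustration (not our analysis code), a minimal sketch of the percent-dissatisfaction calculation:

```python
def percent_dissatisfaction(implemented, dissatisfied_with):
    """Percent of a participant's implemented EBIPs with which they report dissatisfaction."""
    if not implemented:
        return None  # undefined when no EBIPs are implemented
    n_dissatisfied = len(set(dissatisfied_with) & set(implemented))
    return 100 * n_dissatisfied / len(implemented)

# Example from the text: six EBIPs implemented, dissatisfied with three -> 50.0
print(percent_dissatisfaction(
    ["case studies", "peer instruction", "think-pair-share",
     "collaborative learning", "cooperative learning", "interactive lecture"],
    ["peer instruction", "think-pair-share", "interactive lecture"]))
```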

Identifying constructs

After coding the data, we then calculated reliability for the constructs/dimensions under instructional barriers and professional identity. Since the professional identity questions were modified from a prior instrument (Abu-Alruz & Khasawneh, 2013), the constructs were based upon the prior instrument’s dimensions of work-based, self-based, and skill-based identity. Based on the framework in Fig. 1, these dimensions included extra-institutional (e.g., how connected they feel to the greater professional community), external (e.g., how connected they feel to the university), and internal (e.g., their passion for research and teaching) influences. The barriers to instruction questions were developed for this survey and were not previously organized by construct. Due to the small sample size, the data set was not appropriate for exploratory factor analysis (EFA) to identify barrier groupings (Costello & Osborne, 2005); thus, we developed initial groupings from the literature as well as feedback from our expert panel.

Grouping the barrier questions into meaningful constructs was an iterative process of (1) reviewing correlations between questions, (2) calculating a Cronbach’s alpha reliability score for the group of questions (Cronbach, 1951), and (3) identifying questions to remove based on low/high correlations and improvement of reliability upon removal. Initially, we used the four categories of barriers that were formed from the literature review. However, when analyzed, these categories were not reliable. Through iterative grouping and checking of reliability, we formed seven groupings, five of which were reliable (α > .68), divided into external and internal categories based on our framework. Thus, there were a total of five professional identity dimensions and five instructional barriers constructs (all αs > .68) (Table 6). These ten constructs were used in the analysis described in the next section. We acknowledge the limitations of our methods in validating the barriers section of the instrument and are conducting a follow-up study in order to psychometrically evaluate the validity of these barriers questions.

Table 6 Overview of instructional barriers and professional identity constructs
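The reliability checks in this iterative grouping process can be sketched as follows. This is an assumed illustration using pandas, not our analysis code, and the item column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a group of Likert items (one column per item)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha of the grouping with each item removed, flagging items whose removal improves reliability."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})

# Hypothetical usage with a tentative barrier grouping:
# group = fibis_df[["dept_barrier_1", "dept_barrier_2", "dept_barrier_3", "dept_barrier_4"]]
# print(group.corr())            # step 1: inter-item correlations
# print(cronbach_alpha(group))   # step 2: reliability of the grouping
# print(alpha_if_deleted(group)) # step 3: candidates for removal
```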

Quantitative data analysis

To answer research question 1, we descriptively explored the data by calculating means, standard deviations, and frequencies for different constructs and sub-constructs. For example, we calculated the frequency of participants who reported using and were dissatisfied with each of the EBIPs. As another example, we calculated means and standard deviations for each of the five professional identity dimensions. To understand participants’ professional identity, we ran four paired t tests to test the hypothesis that participants at our research-intensive university would have higher research identities than teaching identities. We used a more conservative p value to account for additional statistical tests conducted (p < .05/4).
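A minimal sketch of the four paired t tests with the adjusted threshold is shown below; the specific pairings of identity dimensions and the column names are assumptions for illustration rather than our exact analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical pairings of teaching vs. research identity dimensions
PAIRS = [("self_teaching", "self_research"),
         ("skill_teaching", "skill_research"),
         ("work_teaching", "work_research"),
         ("overall_teaching", "overall_research")]
ALPHA = 0.05 / len(PAIRS)  # more conservative threshold: p < .05/4

def paired_identity_tests(df: pd.DataFrame) -> None:
    for teaching_col, research_col in PAIRS:
        t_stat, p_val = stats.ttest_rel(df[teaching_col], df[research_col],
                                        nan_policy="omit")
        print(f"{teaching_col} vs {research_col}: "
              f"t = {t_stat:.2f}, p = {p_val:.3f}, significant = {p_val < ALPHA}")
```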

For research question 2, we ran correlations to understand relationships between use of EBIPs, instructional barriers, and identity dimensions. For research question 3, we calculated descriptives for subgroups of participants (e.g., participant race, prior experience, instructor type). For example, we calculated means and standard deviations for under-represented minority (URM) and non-URM participants. A Mann-Whitney U non-parametric test was used to explore differences in reported use of EBIPs, barriers, and professional identities between male and female faculty.
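For illustration, a minimal sketch of the research question 2 correlations and the research question 3 Mann-Whitney U comparison, with hypothetical column and group labels (not our analysis code):

```python
import pandas as pd
from scipy import stats

def construct_correlations(df: pd.DataFrame, constructs: list) -> pd.DataFrame:
    """Pairwise correlations among EBIP use, barrier, and identity construct scores."""
    return df[constructs].corr()

def gender_comparison(df: pd.DataFrame, outcome: str):
    """Mann-Whitney U test comparing an outcome between male and female faculty."""
    male = df.loc[df["gender"] == "male", outcome].dropna()
    female = df.loc[df["gender"] == "female", outcome].dropna()
    return stats.mannwhitneyu(male, female, alternative="two-sided")

# Hypothetical usage:
# print(construct_correlations(fibis_df, ["low_effort_pct", "dept_barriers", "work_identity"]))
# print(gender_comparison(fibis_df, "think_pair_share_use"))
```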

Qualitative data analysis

The survey contained several open-ended questions regarding participants’ satisfaction/dissatisfaction with use of EBIPs, most significant perceived barriers, and experiences in graduate school related to teaching and research. These data were used to inform portions of the quantitative results and revisions to the instrument itself. The first author coded the free response questions in the general inductive content analysis tradition (Patton, 2002; Creswell, 2003) using a cyclical process of open coding and analysis of emergent themes. Responses were read initially for common themes, tentative nodes were created, and the responses were then coded into those nodes. Revision of the nodes occurred as the analysis proceeded. Once responses were coded, nodes were reviewed for overall themes and some were combined and modified as appropriate until a final list of categories was created. The final list of codes was discussed between the two researchers to ensure the data were represented by the coding.

Pilot data results

The intent of developing the FIBIS was to provide STEM stakeholders a quick, yet descriptive, method to capture information on factors that impact instructors’ use of and satisfaction with EBIPs. Our sample is small and specific to a set of STEM instructors at our institution and is meant to demonstrate the ways in which FIBIS can be used rather than make definitive claims. What our pilot data results, described below, suggest is that FIBIS has the potential to identify faculty-reported use of and satisfaction with EBIPs, perceived instructional barriers, and professional identity. When possible, we demonstrate how FIBIS can be used to identify significant differences between groups and constructs. When small subgroup sample sizes limit the ability to make inferential claims, we use descriptive data that, with a larger sample, could be used to make inferential claims. Further work on FIBIS is being conducted with a larger sample to more rigorously validate the instrument and these findings.

RQ1: Characterizing use of EBIPs, instructional barriers, and professional identity

Use of EBIPs

Overall, participants were most familiar with and most often used interactive lecture and collaborative learning in their STEM courses (Fig. 2). Of those that did not use EBIPs, the largest percentage of participants appeared to be familiar with the EBIP but did not intend to use it (i.e., 3-familiar but not used in Fig. 2). The least known EBIP for participants was SCALE-UP, with over a third of participants never having heard of the practice.

Fig. 2

Participant familiarity and use of EBIPs. Dotted lines emphasize differences in percentages for each EBIP

When exploring faculty-reported use of and dissatisfaction with these practices, we observed that the largest percentage of faculty was dissatisfied with the least often used practices (Table 7). The third largest percentage of dissatisfaction in practice, however, was for faculty who implemented collaborative learning.

Table 7 Participant use and dissatisfaction with EBIPs

Professional identity

Participants overall held both strong teaching and research professional identities (Table 8). There were no significant differences in participants’ teaching and research identities. Fifteen participants (21.7%) indicated that they conducted education research outside their disciplinary research.

Table 8 Participant professional identity descriptives

However, there were significant differences in participants’ self-based (i.e., connection to the community) and skill-based (i.e., commitment to improving) identities. For both the teaching and research identities, participants’ skill-based dimension was significantly higher, suggesting that these participants are interested in and willing to improve in both domains.

Instructional barriers

Both the Likert data and the inductive coding of the open-ended barrier responses point to similar top barriers cited by participants as impacting their implementation of EBIPs (Table 9). Typical issues of time, institutional value of teaching, and appropriate classrooms were most cited by faculty when asked about barriers to using EBIPs, but when they were asked what they were dissatisfied with in their teaching, internal influences relating to beliefs, confidence, and knowledge were by far the most reported. For example, one faculty member explained, “I don’t feel that what I've done is enough to take advantage of the benefits of active learning strategies.” Another acknowledged, “I am overly reliant on lecture and I aspire to make the student’s in-class experience more interactive. There is some interactiveness built into my courses (more than many of my peers), but I can do a better job and plan to do a better job.” These types of comments were more frequent than comments of dissatisfaction related to external factors.

Table 9 Overlap in top reported barriers to instruction from qualitative and quantitative data

RQ2: Relationship between EBIPs, barriers, and professional identity

There appeared to be significant moderate and strong relationships between reported implementation of EBIPs, professional identity, and perceived barriers (Table 10). There were a few significant relationships observed between EBIPs when organized by effort level. Participants who reported implementing more low-effort EBIPs (e.g., think-pair-share) also tended to report implementing moderate-effort EBIPs (e.g., case studies) and tended to have a stronger teaching identity in both the self and skill dimensions. Participants who reported implementing more moderate-effort EBIPs tended to have significantly lower barriers related to their own beliefs (e.g., negative student evaluations).

Table 10 Correlations between implementation of EBIPs, perceived barriers to implementation, and professional identity

There was no significant relationship between participants’ implementation of low-, moderate-, or high-effort EBIPs and their percent dissatisfaction with implementing EBIPs. In other words, participants who reported using more high-effort EBIPs did not have a higher level of dissatisfaction with their EBIP practice than participants who reported using fewer high-effort EBIPs. Participants’ level of dissatisfaction with the use of EBIPs was significantly and positively correlated with half of the instructional barriers categories. Participants who had a higher percent dissatisfaction tended to also perceive students as higher barriers to instruction, held negative beliefs about EBIPs that more often barred implementation, and reported more barriers related to negative prior experiences with EBIPs.

Of the five barrier constructs, barriers related to faculty beliefs appeared to be correlated most often with other constructs. Faculty who held more negative teaching beliefs about EBIPs tended to also have perceptions of students as barriers as well as perceived departmental barriers. There was a strong positive correlation between perceived departmental barriers and perceived limited supports. Participants who had negative prior experiences with EBIPs as either an instructor or student also tended to perceive students as barriers.

Finally, of the five professional identity dimensions, work identity appeared to be most often correlated with other constructs. Faculty who perceived higher departmental barriers tended to have lower work identities (i.e., feel less connected with the university and the larger academic community). Work identity was also significantly and positively correlated with self-teaching, self-research, and skill-research dimensions of professional identity.

Not surprising was the strong, positive correlation between the two teaching identity dimensions. In other words, participants who felt connected to the teaching community at the university also felt committed to improving their teaching. The same relationship was true for the two research dimensions. While no EBIPs effort groupings were correlated with the professional identity constructs, there were significant relationships between identity and perceived departmental barriers. Participants who perceived higher departmental barriers had a lower work identity. Similarly, participants who felt more committed to improving their research skills had lower perceived departmental barriers related to implementing EBIPs.

RQ3: Differences in EBIP implementation, identity, and barriers

Demographic differences

Differences existed between faculty of different status, gender, and ethnicity. First, there were significant differences among faculty of different status in their beliefs about implementing EBIPs, F(3, 65) = 3.76, p = .015. Post-hoc comparisons between instructor types, using a Bonferroni adjustment, indicated that tenure-track professors held teaching beliefs that were significantly more of a barrier to use of EBIPs (M = 2.44, SD = .54) than teaching faculty (M = 1.83, SD = .47). Second, when exploring participant gender, female participants reported implementing significantly more think-pair-share (M = 4.48, SD = 1.12) than their male counterparts (M = 3.39, SD = 1.68) (U = 421, n1 = 22, n2 = 45, p < .01). Males reported significantly greater familiarity with and use of simulations in teaching (M = 3.65, SD = 1.37) than females (M = 2.83, SD = 1.37), F(1, 67) = 5.58, p = .021. Third, descriptively, there appeared to be differences in URM professional identity and perceived instructional barriers (Table 11). While the sample is small, these data may suggest that URM participants have a higher self-teaching identity (M = 4.52, SD = .71) and self-research identity (M = 4.43, SD = .68) than non-URM participants (M = 4.39, SD = .43 and M = 4.18, SD = .99, respectively). Further, URM participants’ scores for beliefs about implementing EBIPs were lower (M = 1.78, SD = .73) than non-URM participants’ (M = 2.06, SD = .61), and their percent implementation of high-effort EBIPs was higher (URM: M = 38.09%, SD = 35.63; non-URM: M = 32.80%, SD = 29.87).

Table 11 Comparison of URM professional identity with perceived instructional barriers
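The instructor-status comparison reported above can be sketched as a one-way ANOVA followed by Bonferroni-adjusted pairwise comparisons. The code below is an assumed illustration with hypothetical column and group labels, not our analysis code.

```python
from itertools import combinations
import pandas as pd
from scipy import stats

def status_anova_with_posthoc(df: pd.DataFrame, outcome: str = "belief_barriers") -> None:
    """One-way ANOVA across instructor-status groups, then Bonferroni-adjusted pairwise t tests."""
    groups = {name: grp[outcome].dropna() for name, grp in df.groupby("status")}
    f_stat, p_val = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t_stat, p_raw = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni adjustment
        print(f"{a} vs {b}: t = {t_stat:.2f}, adjusted p = {p_adj:.3f}")
```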

Dissatisfaction with practice

Looking at just the participants who reported implementing a particular EBIP, there were descriptive differences in perceived barriers and self-based teaching identities. For participants who reported implementing collaborative learning (n = 54), there were differences in perceived barriers between those who were and were not satisfied with using collaborative learning (Fig. 3). While the number of dissatisfied participants was low (< 6) for the remaining EBIPs, the trends appeared similar. Thus, there may be a relationship between participants’ perceived barriers and their satisfaction with implementation.

Fig. 3

Descriptive differences in perceived barriers for participants who were satisfied and dissatisfied with their use of collaborative learning

Similarly, participants who implemented collaborative learning but were dissatisfied with their practice held self-based teaching identities that were lower than those of participants who were satisfied with their practice of implementing collaborative learning (Fig. 4). There were virtually no differences between the two groups on their work identity, skill-based teaching dimension, or either of the research-based dimensions. These data represent a small number of participants; however, they may suggest that participants’ connection to the teaching community may differ when they are implementing EBIPs that do not go well for them.

Fig. 4

Descriptive differences in professional identity dimensions for participants who were satisfied and dissatisfied with their use of collaborative learning

Departmental contexts

Participants from different departments appeared to have different levels of implementation of EBIPs (Table 12). While the numbers are small for each department and may not be representative of the department itself, there did appear to be differences in the frequency of faculty who were using particular EBIPs. The majority of social science and physics/astronomy department participants reported implementing very few of these EBIPs. Conversely, the majority of participants from the chemistry and computer science departments reported implementing many of the practices.

Table 12 Percentage of faculty who implement EBIPs in each department

Interestingly, the relationships between participants’ knowledge and use of EBIPs, identity, and barriers differed across departments. For the chemistry department, participants who implemented a larger percentage of the EBIPs tended to have a stronger work identity (r = .756, p = .049) and skill-based research identity (r = .898, p = .006), along with lower barriers related to their own beliefs about EBIPs (r = − .762, p = .046). No other significant relationships existed between the percent of familiar EBIPs used and other variables for participants in other departments.

Discussion

The present study sought to develop, initially validate, and pilot the Faculty Instructional Barriers and Identity Survey (FIBIS). Our pilot data provided feedback that was used to make small, final revisions to the instrument. Although this was a pilot study with a small sample, we were able to demonstrate how FIBIS can be used to characterize STEM faculty’s use of and satisfaction with EBIPs, barriers to implementing EBIPs, and professional identity. Below, we discuss how our FIBIS results align with previous work exploring these different factors to provide evidence for the construct validity, or the degree to which the factors measure what they were intended to measure (AERA, 2014). We also use our pilot data to suggest a modification to the decision-making framework, which will be tested in future work. Lastly, we demonstrate how FIBIS could be used at other institutions for meaningful and practical STEM education transformation.

EBIPs, barriers, and identity

Prior research suggests that science faculty may have tensions between their research and teaching responsibilities (e.g., Brownell & Tanner, 2012), which drove the development of the identity section of FIBIS with both teaching and research identity questions. Given our own context, we hypothesized that there would be significant differences between the teaching and research identity components of FIBIS in our sample of faculty at our research-intensive university. However, our data did not show any relationship between research and teaching identities. Such a finding aligns with a larger body of literature that shows a lack of relationship (e.g., Hattie & Marsh, 1996; Jenkins, 2004) or a small, positive correlation (uz Zaman, 2004) between research and teaching. Our data may suggest that FIBIS can identify when faculty have both strong teaching and research professional identities. However, further FIBIS testing and follow-up interviews with faculty would help confirm the validity of FIBIS in identifying tensions, or lack thereof, between teaching and research identities for STEM faculty.

Despite the small pilot sample, FIBIS was able to elucidate descriptive differences between URM and non-URM faculty identity dimensions. Given the concern with individuals’ persistence in STEM (Waldrop, 2015) and the importance of peer and mentor relationships for faculty of color (Ong, Smith, & Ko, 2018), understanding differences in STEM URM and non-URM faculty identity is important. Thus, being able to connect demographics and identity within FIBIS could potentially be a powerful yet practical way to identify how connected faculty are to their fields, teaching, and the institution. In future larger-scale studies, the FIBIS may be used to explicate whether significant differences in professional identity exist for different subgroups of faculty.

We also found that FIBIS was able to identify how STEM faculty members’ internal characteristics (e.g., teaching identity, beliefs) played an important role in implementation of EBIPs at different levels of effort. These findings from FIBIS align with prior research suggesting that the background and beliefs faculty bring to the university play a key role in how resistant they are to implementing EBIPs (White, 2016; Robert & Carlsen, 2017; Oleson & Hora, 2014). Further, when asked what they were dissatisfied with when teaching with EBIPs, faculty in this pilot study most often reported internal influences relating to beliefs, confidence, and knowledge. These results also align with research suggesting that implementation of EBIPs can be challenging (e.g., Stains & Vickrey, 2017) and demonstrate that FIBIS has the potential to identify not only faculty members’ decisions to use EBIPs but also whether, and why, they might be dissatisfied with their implementation. Unfortunately, these internal barriers cannot be mitigated by the university making more active learning classrooms available or giving faculty rewards; they point to deeply held beliefs founded on prior experiences with teaching and learning. Further exploration of FIBIS results for faculty with differing prior experiences with teaching and learning would be important for understanding these relationships.

A suggested modification to the framework

Prior studies have captured and conceptualized the factors that may impact whether or not faculty choose to implement EBIPs (e.g., Andrews & Lemons, 2015; Henderson, Dancy, & Niewiadomska-Bugaj, 2012; Gess-Newsome et al., 2003). In our study, we found an additional layer to understanding STEM faculty barriers: faculty who choose to implement EBIPs may then be satisfied or dissatisfied with their practice. Models of implementation and decision-making (e.g., Lattuca & Pollard, 2016) should account for faculty satisfaction with newly attempted EBIPs; therefore, we have updated the conceptual framework presented in the literature review (Fig. 1) to a slightly revised model based on the initial results of this study (Fig. 5). Note that we have drawn the arrows as uni-directional, but they could also be bi-directional; this would need to be tested in future quantitative work.

Fig. 5

Suggested revised model for faculty’s decision-making process on using EBIPs. Additions to the framework based on results of the FIBIS pilot data are in blue/white. The bolded arrow indicates a stronger influence in the relationship. Note that arrows indicate theoretically based, as opposed to empirically tested, relationships. This model will need to be confirmed in future studies

The revised model presented in Fig. 5 aligns with the fifth and final step of Rogers’ (2003) innovation-decision model, in which decision-making regarding use of an innovation occurs in five stages: (1) knowledge about the innovation, (2) persuasion about the benefits of the innovation, (3) decision to use the innovation, (4) implementation of the innovation, and (5) confirmation of continued implementation of the innovation. Our work highlights the importance of this last step and its connection to faculty dissatisfaction and beliefs, as well as the need to study it further and to provide the support required to sustain the use of an innovation once faculty decide to try an EBIP. Indeed, Henderson and Dancy (2007) concluded in their qualitative study of barriers that physics faculty needed support in anticipating the situational barriers they would likely face before implementing an innovation. In their study, when faculty faced situational barriers while implementing, they often quit using the innovation. Our work suggests this extends to STEM faculty more broadly and supports the previously cited work regarding the importance of supporting faculty both before and during implementation of EBIPs in order to sustain use of those practices.

Our qualitative data demonstrated that internal factors may be most important to STEM faculty who implement EBIPs but are dissatisfied with their practice. While our results are preliminary and need confirmation from a larger study, they may suggest that external factors matter most for initial adoption of EBIPs, whereas internal factors matter most for STEM faculty satisfaction with their implementation of EBIPs. Figure 5 reflects this with a bolded arrow denoting a more influential relationship between internal influences and the decision to quit or persist when faced with dissatisfaction with EBIPs. Future research should explore the impact of internal and external influences with a larger STEM faculty sample across institutions and confirm this suggested revised framework.

Importance of context

Descriptively, we found differences in EBIP implementation across STEM departments, which aligns with prior studies demonstrating differences in EBIP use across departments (e.g., Landrum et al., 2017; Lund & Stains, 2015). Taken together with these two previous studies, the FIBIS pilot results suggest that EBIP implementation varies not only across departments within the same university but also within the same discipline across universities. In addition, the types of faculty (e.g., non-tenure teaching-track faculty vs. physics researchers) seen engaging most in the use of EBIPs across the three studies may indicate that another factor, such as faculty PD utilization (not reported in any of the studies), may be influencing faculty’s use of EBIPs. Indeed, one successful reform program pointed to extensive support from disciplinary experts trained in pedagogy and education research as vital to their faculty beginning and continuing to use EBIPs (Wieman, Deslauriers, & Gilley, 2013). Thus, FIBIS could be used not only to identify differences in faculty across departments but also to examine the impact of faculty PD. Further research is warranted regarding reported EBIP use, ways to reduce barriers, and how to shift professional identity.

Like Henderson et al. (2012), we did not find that faculty rank, years of teaching, or research identity influenced the results. However, these results contrast with two other studies of STEM faculty decision-making: Hora (2012) found that faculty identified tenure/social status as a moderator of their decision-making, and Landrum et al. (2017) showed a difference in adoption of EBIPs at their university between tenure-track and non-tenure-track faculty. Nevertheless, Hora also found that faculty decision-making was moderated by individual factors such as personal initiative, faculty autonomy that encourages individual approaches to instruction, and doctoral training, with which our study’s initial qualitative results appear to agree. With further psychometric testing of FIBIS to validate the barriers section of the survey, we aim to use FIBIS in a national study across institutions to determine which factors matter when and where.

Limitations

While the FIBIS has been developed and initially validated through the present study, there are still limitations to be considered. First, due to the limited sample size (n = 69), we were unable to conduct a factor analysis to further establish the validity of the barriers section of the instrument. Our current work with the revised FIBIS will allow for psychometric testing to further validate FIBIS. Second, the sample of faculty with whom we piloted FIBIS is likely not representative of the entire STEM faculty population at the university, nor is it representative of all faculty. This may limit our ability to understand how reliably and accurately FIBIS captures faculty use of EBIPs, barriers, and identity. Third, the EBIP questions used a scale that asked whether faculty implemented the practices but not how frequently, which limits our understanding of the nuances of the data presented. The revised FIBIS, described below, includes a refined EBIP scale to better elucidate both the presence and frequency of EBIP use. Adding this nuance will allow for further exploration of the relationship between EBIP use, barriers, and identity.
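
As an illustration of the kind of psychometric testing planned for the barriers section, the sketch below outlines an exploratory factor analysis using the third-party factor_analyzer Python package. This is not the authors’ analysis; the file name and the choice of four factors are hypothetical placeholders, and the appropriate number of factors would need to be determined empirically (e.g., via parallel analysis or scree inspection) once a larger sample is available.

```python
# Illustrative sketch only: the kind of EFA the authors plan for the barriers
# items, not their actual analysis. Assumes the `factor_analyzer` package and a
# hypothetical CSV of Likert-scale barrier responses (one column per item).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

items = pd.read_csv("fibis_barrier_items.csv")  # hypothetical item-level data

_, kmo_model = calculate_kmo(items)             # sampling adequacy check
print(f"KMO = {kmo_model:.2f}")                 # values above ~.6 are usually acceptable

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")  # 4 factors is a placeholder
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))                        # inspect which items load together
```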

Final FIBIS revision

From the pilot data presented in the results section, we made small changes to the FIBIS to create the final version provided in the supplementary online material of this paper. For example, the qualitative coding of open-ended questions about participant satisfaction with using EBIPs and barriers to instruction was used to inform and confirm the Likert barrier statements: we compared the barriers elucidated from the qualitative data to the Likert barrier questions to identify any additional items that should be added to the quantitative barriers section. For a complete description of what was changed and why, see the supplementary material accompanying the online article (Additional file 2: Survey Development Supplement). Note that these final changes were not tested in this study; the revised version is the instrument provided in the online supplementary material (Additional file 1).

Potential uses of FIBIS

The intent of developing FIBIS was to provide a tool that can be used to (1) inform faculty development programs, (2) support institutional efforts to reduce faculty barriers to implementing EBIPs, and (3) systematically study STEM faculty and teaching in higher education. While we feel confident in our ability to achieve the first two goals, researchers should be cautious about using the barriers section of FIBIS for research until further analysis, using exploratory factor analysis and a larger STEM faculty sample, has been done to validate this section.

Based on the evidence we collected in our pilot study, we have developed a list of ways in which we will use the information at our own institution to address both external and internal barriers for our STEM faculty. This list is not intended to be used by other institutions as-is but rather to provide an example of how FIBIS data could be used to shift external influences:

  • There may be a need for the university to address external issues of institutional values and rewards, funding for appropriate classrooms, and the facilitation of a climate in which active learning is normalized among students. Faculty developers at this institution could focus their efforts on finding ways to reduce these barriers for STEM faculty.

  • PD programs at this institution could be created to concentrate on external supports: developing a strong teaching community for STEM faculty and helping faculty feel connected to the university as a whole (i.e., improving faculty work identity), since our data indicated that faculty who perceived higher departmental barriers tended to have lower work identities. Further, encouraging faculty cohorts within departments to engage in center programs and working directly with STEM department chairs may also promote implementation of EBIPs.

  • To support STEM URM faculty, STEM educators and faculty developers could partner with URM faculty to build diverse communities of practice across the university.

The FIBIS can also help identify internal influences that can be leveraged to promote change:

  • Faculty developers could identify STEM faculty who score high on the skill-based teaching dimension of FIBIS as potentially receptive to PD.

  • Faculty developers should consider demographics, prior experiences, and contexts as they work to support faculty in using EBIPs in undergraduate STEM classrooms.

  • Faculty developers could give focused support to STEM faculty during implementation of EBIPs to help address dissatisfactions that arise.

These approaches for addressing internal barriers align with aspects of effective faculty PD programs, such as sustained support for faculty during PD (e.g., Gehrke & Kezar, 2017; Rathbun, Leatherman, & Jensen, 2017), especially support from disciplinary experts trained in pedagogy (Wieman et al., 2013). Recent work has also highlighted the importance of forming communities of practice to help faculty make the shift to using EBIPs, especially in large-enrollment courses (Tomkin, Beilstein, Morphew, & Herman, 2019), and the need to focus on improving supports, such as developing social support networks for faculty implementing EBIPs, rather than focusing solely on addressing barriers (Bathgate et al., 2019). Further research is warranted to understand the ways in which faculty PD can improve reported EBIP use, reduce barriers, and shift professional identity.

Conclusion

This study sought to develop and initially validate the FIBIS instrument. Many of the exploratory findings from our FIBIS pilot align with previous work, suggesting that FIBIS can be used to capture faculty identity, use of and satisfaction with EBIPs, and barriers to instruction. While we cannot generalize our claims, the following suggestions for our institution may demonstrate how results from FIBIS can inform efforts to reduce STEM faculty barriers to implementing EBIPs: (1) developing a strong teaching community (especially needed for the persistence of URM faculty), (2) helping faculty connect to the university as a whole, and (3) working with departments to better support implementation of EBIPs. The results presented and the implications of these findings demonstrate the potential of FIBIS as a tool for examining factors that influence faculty instructional practice. Future work includes further validating the barriers component of the survey so that it can be used to support change efforts in institutions of higher education. Beyond the development of the FIBIS instrument itself and the ways in which it can be used, we hope this article gives readers a glimpse of the work that has already been done in a variety of fields. By connecting these disparate bodies of literature, we hope researchers will build on previous work conducted across fields in order to meet our shared goal of bringing about change in higher education.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available because they contain information that could compromise research participant privacy, but anonymized data are available from the corresponding author upon request.

Abbreviations

DBER: Discipline-based education research
EBIPs: Evidence-based instructional practices
EFA: Exploratory factor analysis
FIBIS: Faculty Instructional Barriers and Identity Survey
PBL: Problem-based learning
PD: Professional development
POGIL: Process-oriented guided inquiry learning
RBISs: Research-based instructional strategies
STEM: Science, technology, engineering, mathematics
TCSR: Teacher-Centered Systemic Reform model
URM: Underrepresented minority

References

  • Abu-Alruz, J., & Khasawneh, S. (2013). Professional identity of faculty members at higher education institutions: A criterion for workplace success. Research in Post-Compulsory Education, 18(4), 431–442.

  • Addy, T. M., & Blanchard, M. R. (2010). The problem with reform from the bottom up: Instructional practices and teacher beliefs of graduate teaching assistants following a reform-minded university teacher certificate programme. International Journal of Science Education, 32(8), 1045–1071.

  • American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education. (2014). Standards for educational & psychological testing. Washington, DC: American Educational Research Association.

  • Andrews, T. C., & Lemons, P. P. (2015). It's personal: Biology instructors prioritize personal evidence over empirical evidence in teaching decisions. CBE-Life Sciences Education, 14(1), 1–18.

  • Austin, A. (2011). Promoting evidence-based change in undergraduate science education. A white paper commissioned by the National Academies National Research Council Board on Science Education.

  • Barbarà-i-Molinero, A., Cascón-Pereira, R., & Hernández-Lara, A. B. (2017). Professional identity development in higher education: Influencing factors. International Journal of Educational Management, 31(2), 189–203.

  • Bathgate, M. E., Aragón, O. R., Cavanagh, A. J., Waterhouse, J. K., Frederick, J., & Graham, M. J. (2019). Perceived supports and evidence-based teaching in college STEM. International Journal of STEM Education, 6(11), 11–25. https://doi.org/10.1186/s40594-019-0166-3.

  • Bauer, C., Libby, R., Scharberg, M., & Reider, D. (2013). Transformative research-based pedagogy workshops for chemistry graduate students and postdocs. Journal of College Science Teaching, 43(2), 36–43.

  • Borrego, M., Cutler, S., Prince, M., Henderson, C., & Froyd, J. (2013). Fidelity of implementation of Research-Based Instructional Strategies (RBIS) in engineering science courses. Journal of Engineering Education, 102(3), 394–425.

  • Borrego, M., Froyd, J. E., & Hall, T. S. (2010). Diffusion of engineering education innovations: A survey of awareness and adoption rates in U.S. engineering departments. Journal of Engineering Education, 99(3), 185–207.

  • Briggs, A. R. J. (2007). Exploring professional identities: Middle leadership in further education colleges. School Leadership and Management, 27(5), 471–485.

  • Brownell, S. E., & Tanner, K. D. (2012). Barriers to faculty pedagogical change: Lack of training, time, incentives, and tensions with professional identity? CBE-Life Sciences Education, 11(4), 339–346.

  • Buehl, M. M., & Beck, J. S. (2014). The relationship between teachers’ beliefs and teachers’ practices. In H. Fives & M. G. Gill (Eds.), International handbook of research on teachers’ beliefs (pp. 66–84). New York: Routledge.

  • Costello, A. B., & Osborne, J. W. (2005). Exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(7), 1–9.

  • CPRIU. (2012). Faculty Survey of Student Engagement (FSSE). Retrieved from http://fsse.indiana.edu/pdf/2012/FSSE12_TS.pdf

  • Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage.

  • Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555.

  • Crouch, C., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977.

  • Dancy, M., & Henderson, C. (2008). Barriers and promises in STEM reform. Commissioned paper for National Academies of Science Workshop on Linking Evidence and Promising Practices in STEM Undergraduate Education. Washington, DC.

  • Davey, R. (2013). The professional identity of teacher educators: Career on the cusp? London and New York: Routledge.

  • Deem, R. (2006). Changing research perspectives on the management of higher education: Can research permeate the activities of manager-academics? Higher Education Quarterly, 60(3), 203–228.

  • Drinkwater, M. J., Matthews, K. E., & Seiler, J. (2017). How is science being taught? Measuring evidence-based teaching practices across undergraduate science departments. CBE - Life Sciences Education, 16(1), 1–11.

  • Durham, M. F., Knight, J. K., & Couch, B. A. (2017). Measurement Instrument for Scientific Teaching (MIST): A tool to measure the frequencies of research-based teaching practices in undergraduate science courses. CBE-Life Sciences Education, 16(4), ar67. https://doi.org/10.1187/cbe.17-02-0033.

  • Elrod, S., & Kezar, A. (2017). Increasing student success in STEM: Summary of a guide to systemic institutional change. Change: The Magazine of Higher Learning, 49(4), 26–34.

  • Fairweather, J. (2008). Linking evidence and promising practices in Science, Technology, Engineering, and Mathematics (STEM) undergraduate education. A status report for the National Academies National Research Council Board of Science Education.

  • Foster, R. (2014). Barriers and enablers to evidence-based practices. Kairaranga, 15(1), 50–58.

  • Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. PNAS Proceedings of the National Academy of Sciences of the USA, 111(23), 8410–8415.

  • Froyd, J. E., Borrego, M., Cutler, S., Henderson, C., & Prince, M. J. (2013). Estimates of use of Research-Based Instructional Strategies in core electrical or computer engineering courses. IEEE Transactions on Education, 56(4), 393–399.

  • Gehrke, S., & Kezar, A. (2017). The roles of STEM faculty communities of practice in institutional and departmental reform in higher education. American Educational Research Journal, 54(5), 803–833.

  • Gess-Newsome, J., Southerland, S. A., Johnston, A., & Woodbury, S. (2003). Educational reform, personal practical theories, and dissatisfaction: The anatomy of change in college science teaching. American Educational Research Journal, 40(3), 731–767.

  • Gibbons, R. E., Villafañe, S. M., Stains, M., Murphy, K. L., & Raker, J. R. (2018). Beliefs about learning and enacted instructional practices: An investigation in postsecondary chemistry education. Journal of Research in Science Teaching, 55(8), 1111–1133.

  • Gilmore, J., Lewis, D. M., Maher, M., Feldon, D., & Timmerman, B. E. (2015). Feeding two birds with one scone? The relationship between teaching and research for graduate students across the disciplines. International Journal of Teaching and Learning in Higher Education, 27(1), 25–41.

  • Hancock, S., & Walsh, E. (2016). Beyond knowledge and skills: Rethinking the development of professional identity during the STEM doctorate. Studies in Higher Education, 41(1), 37–50.

  • Hattie, J., & Marsh, H. W. (1996). The relationship between research and teaching: A meta-analysis. Review of Educational Research, 66(4), 507–542.

  • Hazari, Z., Sadler, P., & Sonnert, G. (2013). The science identity of college students: Exploring the intersection of gender, race, and ethnicity. Journal of College Science Teaching, 42(5), 82–91.

  • Henderson, C., Beach, A., & Finkelstein, N. (2011). Facilitating change in undergraduate STEM instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48(8), 952–984.

  • Henderson, C., Cole, R., Froyd, J., Friedrichsen, D. G., Khatri, R., & Stanford, C. (2015). Designing educational innovations for sustained adoption: A how-to guide for education developers who want to increase the impact of their work. Kalamazoo, Michigan: Increase the Impact.

  • Henderson, C., & Dancy, M. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Special Topics: Physics Education Research, 3(2), 020102-1-020102-14.

  • Henderson, C., & Dancy, M. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Physical Review Physics Education Research, 5(2), 020107–002016.

  • Henderson, C., Dancy, M., & Niewiadomska-Bugaj, M. (2012). Use of research-based instructional strategies in introductory physics: Where do faculty leave the innovation-decision process? Physical Review Physics Education Research, 8(2), 020104-1-020104-15.

  • Henkel, M. (2000). Academic identities and policy change in higher education. London: Jessica Kingsley.

  • Hora, M. T. (2012). Organizational factors and instructional decision-making: A cognitive perspective. Review of Higher Education, 35(2), 207–235.

  • Hora, M. T. (2014). Exploring faculty beliefs about student learning and their role in instructional decision-making. Review of Higher Education, 38(1), 37–70.

  • Hurtado, S., Eagan, K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010-2011 HERI faculty survey. Available at https://heri.ucla.edu/publications-fac/

  • Jenkins, A. (2004). A guide to the research evidence on teaching-research relations. York: Higher Education Academy.

  • Kane, R., Sandretto, S., & Heath, C. (2004). An investigation into excellent tertiary teaching: Emphasising reflective practice. Higher Education, 47(3), 283–310.

  • Landrum, R. E., Viskupic, K., Shadle, S. E., & Bullock, D. (2017). Assessing the STEM landscape: the current instructional climate survey and the evidence-based instructional practices adoption scale. International Journal of STEM Education, 4(1), 25–35. https://doi.org/10.1186/s40594-017-0092-1.

  • Lattuca, L. R., & Pollard, J. R. (2016). Toward a conceptualization of faculty decision-making about curricular and instructional change. In Organizing Academic Work in Higher Education: Teaching, Learning and Identities (pp. 89-108). Taylor and Francis Inc.

  • Leslie, D. W. (2002). Resolving the dispute: Teaching is academe's core value. The Journal of Higher Education, 73(1), 49–73.

  • Lund, T. J., & Stains, M. (2015). The importance of context: An exploration of factors influencing the adoption of student-centered teaching among chemistry, biology, and physics faculty. International Journal of STEM Education, 2(13), 1–21. https://doi.org/10.1186/s40594-015-0026-8.

  • MacDonald, R. H., Manduca, C. A., Mogk, D. W., & Tewksbury, B. J. (2005). Teaching methods in undergraduate geoscience courses: Results of the 2004 On the Cutting Edge Survey of U.S. faculty. Journal of Geoscience Education, 53(3), 237–252.

  • Madson, L., David, T., & Tara, G. (2017). Faculty members' attitudes predict adoption of interactive engagement methods. Journal of Faculty Development, 31(3), 39–50.

  • Marbach-Ad, G., Schaefer-Zimmer, K. L., Orgler, M., Benson, S., & Thompson, K. V. (2012). Surveying research university faculty, graduate students and undergraduates: Skills and practices important for science majors. Paper presented at the annual meeting of the American Educational Research Association, Vancouver.

  • Michael, J. (2007). Faculty perceptions about barriers to active learning. College Teaching, 55(2), 42–47.

  • Nadelson, L. S., McGuire, S. P., Davis, K. A., Farid, A., Hardy, K. K., Hsu, Y., Kaiser, U., Nagarajan, R., & Wang, S. (2017). Am I a STEM professional? Documenting STEM student professional identity development. Studies in Higher Education, 42(4), 701–720.

  • National Center for Education Statistics [NCES]. (2004). National Study of Postsecondary Faculty (NSOPF). National Center for Education Statistics. Available at http://nces.ed.gov/surveys/nsopf/pdf/2004_Faculty_Questionnaire.pdf

  • National Research Council. (2012). Discipline-Based Education Research: Understanding and improving learning in undergraduate science and engineering. Washington, DC: The National Academies Press.

  • Oleson, A., & Hora, M. T. (2014). Teaching the way they were taught? Revisiting the sources of teaching knowledge and the role of prior experience in shaping faculty teaching practices. Higher Education, 68(1), 29–45.

  • Ong, M., Smith, J. M., & Ko, L. T. (2018). Counterspaces for women of color in STEM higher education: Marginal and central spaces for persistence and success. Journal of Research in Science Teaching, 55(2), 206–245.

  • Pajares, M. (1992). Teachers' beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62(3), 307–332.

  • Patton, M. Q. (2002). Qualitative research & evaluation methods. Thousand Oaks, CA: Sage.

  • Porter, W. W., Graham, C. R., Bodily, R. G., & Sandberg, D. S. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education. The Internet and Higher Education, 28(1), 17–27.

  • Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231.

  • Prince, M., Borrego, M., Henderson, C., Cutler, S., & Froyd, J. (2013). Use of research-based instructional strategies in core chemical engineering courses. Chemical Engineering Education, 47(1), 27–37.

  • Rathbun, G. A., Leatherman, J., & Jensen, R. (2017). Evaluating the impact of an academic teacher development program: Practical realities of an evidence-based study. Assessment & Evaluation in Higher Education, 42(4), 548–563.

  • Reinholz, D. L., & Apkarian, N. (2018). Four frames for systemic change in STEM departments. International Journal of STEM Education, 5(3), 1–10. https://doi.org/10.1186/s40594-018-0103-x.

  • Robert, J., & Carlsen, W. S. (2017). Teaching and research at a large university: Case studies of science professors. Journal of Research and Science Teaching, 54(7), 937–960.

  • Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

  • Ryan, M., & Carmichael, M. (2016). Shaping (reflexive) professional identities across an undergraduate degree programme: A longitudinal case study. Teaching in Higher Education, 21(2), 151–165.

  • Sabancıogullari, S., & Dogan, S. (2015). Effects of the professional identity development programme on the professional identity, job satisfaction and burnout levels of nurses: A pilot study. International Journal of Nursing Practice, 21(6), 847–857.

  • Samuel, M., & Stephens, D. (2000). Critical dialogues with self: Developing teacher identities and roles – a case study of South African student teachers. International Journal of Educational Research, 33(5), 475–491.

  • Sandi-Urena, S., Cooper, M. M., & Gatlin, T. A. (2011). Graduate teaching assistants’ epistemological and metacognitive development. Chemistry Education Research and Practice, 12(1), 92 https://doi.org/10.1039/c1rp90012a.

  • Sawada, D., Piburn, M., Judson, E., Turley, J., Falconer, K., Benford, R., & Bloom, I. (2002). Measuring reform practices in science and mathematics classrooms: The Reformed Teaching Observation Protocol. School Science and Mathematics, 102(6), 245–253.

  • Schulze, S. (2015). The doctoral degree and the professional academic identity development of female academics. South African Journal of Higher Education, 29(4), 260–276.

  • Shadle, S. E., Marker, A., & Earl, B. (2017). Faculty drivers and barriers: Laying the groundwork for undergraduate STEM education reform in academic departments. International Journal of STEM Education, 4(8), 1–13. https://doi.org/10.1186/s40594-017-0062-7.

  • Shell, R. (2001). Perceived barriers to teaching for critical thinking by BSN nursing faculty. Nursing and Health Care Perspectives, 22(6), 286–291.

  • Slavin, R. (1980). Cooperative Learning. Review of Educational Research, 50(2), 315–342.

  • Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., ... Young, A. M. (2018). Anatomy of STEM teaching in North American universities. Science, 359(6383), 1468-1470.

  • Stains, M., & Vickrey, T. (2017). Fidelity of implementation: An overlooked yet critical construct to establish effectiveness of Evidence-Based Instructional Practices. CBE-Life Sciences Education, 16(1), 1–11.

  • Starr, S., Haley, H. L., Mazor, K. M., Ferguson, W., Philbin, M., & Quirk, M. (2006). Initial testing of an instrument to measure teacher identity in physicians. Teaching and Learning in Medicine, 18(2), 117–125.

  • Stieha, V., Shadle, S. E., & Paterson, S. (2016). Stirring the pot: Supporting and challenging general education science, technology, engineering, and mathematics faculty to change teaching and assessment practice. Journal of General Education, 65(2), 85–109.

  • Sunal, D. W., Hodges, J., Sunal, C. S., Whitaker, K. W., Freeman, L. M., Edwards, L., Johnston, R. A., & Odell, M. (2001). Teaching science in higher education: Faculty professional development and barriers to change. School Science and Mathematics, 101(5), 246–257.

  • Tomkin, J. H., Beilstein, S. O., Morphew, J. W., & Herman, G. L. (2019). Evidence that communities of practice are associated with active learning in large STEM lectures. International Journal of STEM Education, 6(1), 1–18. https://doi.org/10.1186/s40594-018-0154-z.

  • Trede, F., Macklin, R., & Bridges, D. (2012). Professional identity development: A review of the higher education literature. Studies in Higher Education, 37(3), 365–384.

  • Trigwell, K., & Prosser, M. (2004). Development and use of the approaches to teaching inventory. Educational Psychology Review, 16(4), 409–424.

  • Trochim, W., Donnelly, J. P., & Arora, K. (2016). Research methods: The essential knowledge base (2nd ed.). Boston, MA: Cengage Learning.

  • Turpen, C., Dancy, M., & Henderson, C. (2016). Perceived affordances and constraints regarding instructors’ use of peer instruction: Implications for promoting instructional change. Physical Review Physics Education Research, 12(1), 010116-1-010116-18.

  • Uz Zaman, M. Q. (2004). Review of the academic evidence on the relationship between teaching and research in higher education. London: Department for Education and Skills.

  • Walczyk, J. J., & Ramsey, L. L. (2003). Use of learner-centered instruction in college science and mathematics classrooms. Journal of Research in Science Teaching, 40(6), 566–584.

  • Walczyk, J. J., Ramsey, L. L., & Zha, P. (2007). Obstacles to instructional innovation according to college science and mathematics faculty. Journal of Research in Science Teaching, 44(1), 85–106.

  • Walder, A. M. (2015). Obstacles to innovation: The fear of jeopardising a professorial career. British Journal of Education, 3(6), 10–16.

  • Waldrop, M. M. (2015). Why we are teaching science wrong, and how to make it right. Nature News, 523(7560), 272.

  • Walter, E., Beach, A., Henderson, C., & Williams, C. (2014). Describing instructional practice and climate: Two new instruments. Paper presented at the Transforming Institutions: 21st Century Undergraduate STEM Education Conference, Indianapolis, IN.

  • Walter, E. M., Henderson, C. R., Beach, A. L., & Williams, C. T. (2016). Introducing the Postsecondary Instructional Practices Survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. CBE-Life Sciences Education, 15(4), 1–11.

  • White, B. A. (2016). Exploring the ways new faculty form beliefs about teaching: A basic interpretive study (Doctoral dissertation). Retrieved from University of Tennessee TRACE (No. 3883).

  • Wieman, C., Deslauriers, L., & Gilley, B. (2013). Use of research-based instructional strategies: How to avoid faculty quitting. Physical Review Physics Education Research, 9(2), 023102-1-023102-5.

  • Wieman, C., & Gilbert, S. (2014). The teaching practices inventory: a new tool for characterizing college and university teaching in mathematics and science. CBE-Life Sciences Education, 13(3), 552–569.

  • Williams, C. T., Walter, E. M., Henderson, C., & Beach, A. (2015). Describing undergraduate STEM teaching practices: a comparison of instructor self-report instruments. International Journal of STEM Education, 2(18), 1–14. https://doi.org/10.1186/s40594-015-0031-y.

  • Windschitl, M., & Sahl, K. (2002). Tracing teachers’ use of technology in a laptop computer school: The interplay of teacher beliefs, social dynamics, and institutional culture. American Educational Research Journal, 39(1), 165–205.

  • Woodbury, S., & Gess-Newsome, J. (2002). Overcoming the paradox of change without difference: A model of change in the arena of fundamental school reform. Educational Policy, 16(5), 763–782.

  • Zieffler, A., Park, J., Delmas, R., & Bjornsdottir, A. (2012). The Statistics Teaching Inventory: A survey of statistics teachers’ classrooms practices and beliefs. Journal of Statistics Education, 20(1), 1–29.

Acknowledgements

We would like to thank Dr. Jenn Maeng, Dr. Brian Helmke, and Dr. Michael Palmer for their invaluable contributions to the development of this survey instrument.

Funding

This study, including survey distribution and analysis, did not require funding.

Author information

Contributions

HS reviewed the literature and designed the initial survey instrument with input from LW. LW helped revise the initial instrument and discuss framework choices. HS cleaned the data and analyzed the qualitative data, and LW oversaw the quantitative analyses. HS and LW interpreted the data, discussed instrument revisions, and both contributed to the writing of the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Hannah Sturtevant.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

FIBIS Instrument. (DOCX 33 kb)

Additional file 2:

Survey Development Supplement. (DOCX 15 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Sturtevant, H., Wheeler, L. The STEM Faculty Instructional Barriers and Identity Survey (FIBIS): development and exploratory results. IJ STEM Ed 6, 35 (2019). https://doi.org/10.1186/s40594-019-0185-0

  • DOI: https://doi.org/10.1186/s40594-019-0185-0

Keywords