
Initial implementation of active learning strategies in large, lecture STEM courses: lessons learned from a multi-institutional, interdisciplinary STEM faculty development program

Abstract

Background

A faculty development program was implemented over four years at a 4-year regional comprehensive university and two partnering community colleges. This project was focused on improving student learning in introductory Science, Technology, Engineering, and Math (STEM) courses at each institution, by helping faculty adopt inclusive, student-centered pedagogies. Survey data were combined with data from classroom videos, faculty interviews, and student questionnaires collected during the first two years of the project to give insight into how faculty initially implemented the theory and strategies they learned and how the students perceived instruction from participating faculty.

Results

These data sources were combined to generate four overall themes to characterize our project and guide future projects. These are: (1) implementation of student-centered learning took a variety of forms; (2) quality implementation of student-centered teaching practices lagged behind understanding of the theory behind those practices; (3) the most robust perceived barriers to implementation of student-centered teaching stayed constant, while more moderate barriers were ranked differently from year 1 to year 2; and (4) faculty perceptions of student-centered learning practices were not always the same as students’ perceptions. These themes build from the extant faculty development literature in that they are drawn from the unique context of a multidisciplinary, multi-institutional project, and that they represent an “on the ground” perspective from case studies combined with “big picture” findings from surveys.

Conclusions

This paper describes the faculty development project, as well as our collection and interpretation of data from surveys and case studies, to ultimately develop the four themes. Recommendations deriving from these themes are also described. These include modeling a variety of pedagogies; adopting realistic expectations for faculty change; institutionalizing faculty development so it can take place over multiple years; being transparent with faculty about known barriers and aligning supports with those barriers; and helping faculty develop strategies for transparency with students about student-centered pedagogies.

Introduction

The benefits of active, or student-centered, learning in introductory science classrooms are well established, both in terms of overall student learning (Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt, & Wenderoth, 2014), and in terms of reducing gaps in course grades, performance on concept inventories, and/or failure rates between underrepresented (in STEM) groups and majority students (Beichner, Saul, Abbott, Morse, Deardorff, Allain, & Risley, 2007; Eddy & Hogan, 2014; Haak, Hillerislambers, Pitre, & Freeman, 2011). However, uptake of student-centered practices in postsecondary teaching is slow, even if such practices are known and understood to promote student understanding (American Association for the Advancement of Science, 2019; Stains et al., 2018). Reasons for this include the systemic nature of transformation toward student-centered practices (American Association for the Advancement of Science, 2019) as well as barriers such as large class sizes, lack of time to develop materials and pedagogy, perceived pressure to “cover” a certain amount of content, and lack of student buy-in (Henderson & Dancy, 2007; Shadle, Marker, & Earl, 2017). Recent research suggests that changes to teaching practices at the postsecondary level are occurring, especially in cases where a systemic approach to change is taken (American Association for the Advancement of Science, 2019; Gess-Newsome, Southerland, Johnston, & Woodbury, 2003; Henderson, Beach, & Finkelstein, 2011; Laursen, Austin, Soto, & Martinez, 2015) and faculty are involved in creating a vision for their classroom practice (Henderson & Dancy, 2007; Shadle et al., 2017). In such cases, barriers can become less important than supports or drivers for change. Drivers such as ease of implementation and department support can mitigate some of these barriers, and may be stronger predictors of adoption of student-centered practices than barriers (Bathgate et al., 2019; Shadle et al., 2017).

Unlike K-12 teaching, which usually requires a certificate, training in teaching methods is not typically required for one to graduate from a doctoral program or to be deemed qualified for a faculty position at a college or university. Though some programs exist for preparing doctoral candidates for postsecondary teaching (Pruitt-Logan, Gaff, & Jentoft, 2002), not all faculty can be expected to have had access to training in pedagogical methods by the time they begin teaching their first courses. Professional development (PD) for faculty is therefore crucial to broadening the use of student-centered approaches. Here, we describe one such project, called Change at the Core (C-Core), which was a PD program for faculty in Science, Technology, Engineering, and Math (STEM) departments at three interlinked institutions. As such programs become more common (American Association for the Advancement of Science, 2019; Borrego & Henderson, 2014), there is an increasing need for the sharing of resources, outcomes, and lessons learned among those seeking to facilitate faculty adoption of student-centered practices. We describe some general trends in the uptake of student-centered learning from one cohort of faculty participants in C-Core, and examine a few participants’ experiences in more depth through case studies. Our study adds to the literature in terms of (1) the context of the study, the data having been drawn from a multidisciplinary faculty development project that involved collaboration among three institutions generally underrepresented in the change literature (a regional, primarily undergraduate university and two community colleges), and (2) the combination of survey and case study foci, which allows us to paint a picture at the individual faculty level while putting their experiences in a bigger-picture context of STEM education reform at the three institutions. The implications we draw from our data thus have the potential to inform a variety of other faculty development projects.

Faculty development in STEM

Despite demonstration of the inadequacy of passive, teacher-centered pedagogies such as lecture in promoting student understanding in STEM (Freeman et al., 2014), these methods persist as a major mode of instruction in higher education (Stains et al., 2018). Faculty PD has been identified as a high-impact lever of change toward more student-centered practices (American Association for the Advancement of Science, 2019). Research on faculty development has identified several necessary conditions for change, which include the translation of research on teaching and learning into practical, actionable steps toward improving practice (American Association for the Advancement of Science, 2019); long-term interventions (Henderson et al., 2011); programs that attend to multiple parts of a system in parallel (American Association for the Advancement of Science, 2019; Henderson et al., 2011; Laursen et al., 2015); and interventions that take into account and seek to change the beliefs of participants (Gess-Newsome et al., 2003; Henderson et al., 2011). Further, PD should attend to institutional and departmental contexts in order to generate products and practices that are adaptable to those contexts (Lund & Stains, 2015). Finally, faculty PD that creates a “sustained community” (American Association for the Advancement of Science, 2019, p. 167) is necessary for changing cultures of academic units, which is in turn needed for lasting change in instructional practices.

Change at the Core

C-Core was a collaboration between two 2-year colleges, identified here as Community Colleges 1 and 2 (CC1, CC2), and one 4-year primarily undergraduate Regional University (RU). This project was intended to (1) improve undergraduate STEM teaching and learning in participating institutions, (2) increase the engagement and success of underrepresented students in STEM majors, and (3) create an adaptive model for the adoption of student-centered practices. C-Core adopted a diffusion of innovations model for change, working to reach a tipping point through the development of a large enough contingent of adopters of inclusive, student-centered teaching practices to change the culture of the institutions involved and promote further spreading of the practices (Rogers, 2003). The program focused on improving the learning of all students in STEM courses, with an emphasis on closing gaps in achievement between underrepresented (in STEM) and majority students. For the purposes of C-Core, students identifying as LatinX, African American, Native American, Pacific Islander, and/or women were considered underrepresented.

C-Core followed a cohort model, where each faculty cohort participated for two years. This paper focuses on the first of three cohorts, with some data from the second cohort used for comparison. Cohort A faculty participated in two 5-day summer institutes, three Saturday workshops during each of two academic years, and monthly meetings of faculty professional learning communities (PLCs). The adoption of a 2-year model with embedded PLCs was a response to research showing the efficacy of longer-term, work-embedded interventions over “one-shot” workshops (Henderson et al., 2011; Owens et al., 2018; Wei, Darling-Hammond, Andree, Richardson, & Orphanos, 2009). Participants included contingent faculty as well as tenure track faculty at the assistant, associate, and full professor ranks at all three institutions. The first cohort included faculty from biology, chemistry, environmental science, and geology. Subsequent cohorts also included computer science, engineering, math, physics, and astronomy. This diversity in faculty rank and discipline was intentional, as the aim of the project was to create “bottom-up” change guided by an emergent vision representing multiple institutions, disciplines, and ranks. Such an approach falls into the “Shared Vision” category of systemic change proposed by Henderson et al. (2011).

Professional development for all C-Core cohorts began with, and continually connected activities back to, core principles of learning rooted in cognitive science (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010; Bransford, Brown, & Cocking, 2000). This structure was intentional as implementation of active learning strategies without an understanding of their basis in the psychology of learning can lead to unproductive adaptations of practices (Henderson & Dancy, 2009; Turpen & Finkelstein, 2009). The learning theory that we continually connected to was constructivism, the idea that students actively build their understanding (Baviskar, Hartle, & Whitney, 2009; Colburn, 2000; Piaget, 1978; Vygotsky, 1978). Because constructivism is a learning theory and does not prescribe any particular pedagogies (Baviskar et al., 2009), C-Core’s foundation in this learning theory provided a basis from which to choose and adapt pedagogical strategies in an informed manner. This structure was crucial in allowing for participants to be responsive to their own contexts, an important facet of successful education reform efforts (Lund & Stains, 2015).

Formative assessment, as envisioned by Wiliam (2011), served as a framework for selecting and sequencing content and pedagogy to align with constructivist theory. The broad interpretation of formative assessment we used starts with sharing learning targets, and follows with selecting and using evidence-generating instructional tasks, responding to evidence from the tasks, and activating learners as resources for themselves and each other (Wiliam, 2011). Although this framework was originally developed for K-12 contexts, we found it to be extremely relevant to our faculty participants when it came to connecting high-impact pedagogies with important curricular goals. The connection to core ideas and frameworks further responds to Shadle et al.’s (2017) call to “take a more holistic approach, and propose a broader vision for the transformation of teaching, rather than focusing solely on the adoption of [Evidence Based Instructional Practices]” (p. 11).

Inclusive teaching was embedded throughout C-Core activities, starting with participants’ examination of their own identities, and following with exploration of students’ identities and experiences. Participants then learned about and planned for the use of inclusive pedagogical strategies, both in and outside of the classroom (Tanner, 2013). Many of these strategies, such as structured, small group discussions and frequent, formative feedback, overlapped with constructivist-based pedagogies participants were learning about. Other strategies, such as creating a welcoming syllabus and setting classroom norms, attended to the development of an inclusive classroom climate. We started by attending to identities in order to help foster inclusive mindsets that go beyond the implementation of specific practices (Sathy & Hogan, 2019). Inclusive, student-centered pedagogical strategies such as paper voting cards, jigsaw, structured discussions, and quick-writes were explicitly modeled by facilitators throughout C-Core and continually connected back to the core principles of constructivism and framework of formative assessment.

A significant percentage of each C-Core event was devoted to structured work time, in order to help address lack of time, a frequently cited barrier to the adoption of student-centered practices (Henderson & Dancy, 2007; Shadle et al., 2017). This work time balanced structure, in the form of tools and coaching, with flexibility, in the form of participant choice of how to use the time. Participants worked on such tasks as building learning progressions (Popham, 2008), designing formative assessment tasks, planning inclusive pedagogical strategies, and designing tutorials and other learning materials. C-Core mixed activities that were done in discipline-specific teams, such as the development of learning materials, with opportunities for cross-disciplinary discussions of pedagogy. Some of the cross-disciplinary work, as well as special sessions for chairs and deans, focused on institutional adaptations to allow for change, addressing calls for work across departments and systems to create irreversible transformation (American Association for the Advancement of Science, 2019; Gess-Newsome et al., 2003; Henderson et al., 2011; Laursen et al., 2015).

The other important component of C-Core, besides the formal PD events, was the professional learning communities (PLCs). Compared to many 1-year faculty learning community (FLC) models (Richlin & Cox, 2004; Tinnell, Ralston, Tretter, & Mills, 2019), our 2-year PLC model is somewhat unique. We used a model for PLCs originally proposed for K-12 teachers to work together to enact formative assessment (Wiliam, 2007). In this model, teams of teachers regularly meet to work on problems of practice, discuss peer observations, or analyze student work. A protocol for PLC meetings ensures the rotation of leadership, the follow-up of discussions with action items, and accountability through report-back mechanisms. Teams are crucial in facilitating broad adoption of student-centered practices, because they provide safe spaces in which to experiment, as well as peer support and accountability (Cox, Richlin, & Cox, 2004). However, effective PLCs can take many forms (Cox, 2001; Cox, Richlin, & Cox, 2004; Olmstead, Beach, & Henderson, 2019; Stoll, Bolam, McMahon, Wallace, & Thomas, 2006). Therefore, though most of the teams were discipline-based, some crossed disciplines and even institutions. Summer institutes and Saturday workshops incorporated time for the PLCs to work together.

Research questions

Because C-Core cohorts were active for 2 years, the opportunity to investigate faculty change within the C-Core model is somewhat unique, compared to FLCs cited in the literature, many of which were active for 1 year (Richlin & Cox, 2004; Tinnell et al., 2019). The goal of this article is to illustrate how faculty members implemented the knowledge and practices they learned through this 2-year PD model, and how students perceived the changes that were implemented. The specific research questions that guided this study are:

RQ1: What changes did faculty make to their classroom instruction and learning environments to improve students’ understanding of core ideas for their course?

RQ2: What supported or constrained the changes faculty attempted to make to their instruction?

RQ3: How did instructors’ implementation of student-centered practices align with students’ perceptions?

In order to facilitate data collection and analysis, we focused on participants in the first C-Core cohort, beginning in year 1 (cohort A), including some data from participants starting in year 2 (cohort B) for comparison. A combination of survey and case study data allowed us to gain a view of general trends while describing some of the details of what implementation of student-centered instruction looked like for specific individuals. Though survey data provided us with a useful starting point for determining some of the outcomes of the C-Core program, the majority of this paper focuses on the use of four case studies. The case study approach adds depth to the quantitative data and affords the opportunity to triangulate several different types of data. Further, a cross-case approach (Yin, 2003) allowed us to explore commonalities among and contrasts between the subjects.

Methods

Study context

This study took place at a mid-sized, primarily undergraduate, master’s-granting regional university and two community colleges in the Pacific Northwest. All three institutions are on the quarter system, with three 10-week academic quarters (fall, winter, spring) and one summer quarter. All data were gathered during the academic year. Data were gathered from faculty teaching in a range of contexts. Courses met from 3 to 6 hours per week and had enrollment caps of 20–200 students. Faculty typically taught 1–3 courses per quarter, sometimes teaching multiple sections of the same course. Some courses had linked laboratory meetings (at the regional university, these were often taught by graduate or undergraduate teaching assistants); others had associated labs taught as separate courses; others had a mixed lab and lecture format; and still others had no associated labs.

Data sources

There were two main sources of data. First, survey data gathered by the external evaluators, Horizon Research, Inc., were examined for trends in cohort A and, for comparison, cohort B faculty practices and beliefs, measured quantitatively through an annual survey given to all STEM faculty at each of the three institutions. Second, qualitative data were collected from four members of cohort A to examine unique examples of the development of these individuals’ practices. The sources for this second data set were classroom video recordings, interviews, and a student perceptions questionnaire. Table 1 summarizes the data sources, their alignment to the research questions, and data collection and analysis procedures. Informed consent was obtained for all faculty and students whose data are represented in this paper.

Table 1 Summary of the data collection and analysis for each research question

External evaluation surveys

To address research question 2 (What supported or constrained the changes faculty attempted to make to their instruction?), we examined data from three questions, each containing a set of Likert scale items, in a survey the external evaluators developed and administered to all STEM faculty across the three institutions. The questions and items were developed iteratively by project leadership and the external evaluators to maximize their alignment with C-Core’s PD goals. In answering the survey questions, faculty were asked to focus on the first introductory (100–200 level) STEM course they taught each week (or an upper-division course if they did not teach any introductory courses). The questions asked them to report (1) how prepared they felt to use specific student-centered practices (in the survey, these were called reform-oriented practices), (2) how often they implemented those practices, and (3) how strongly they agreed with statements expressing well-known supports and barriers (Henderson & Dancy, 2007) to implementing student-centered practices. The questions about perceived preparation and implementation of student-centered practices used the same items, which included having clear learning goals, using formative assessment data to build and adjust instruction based on students’ thinking, and having students monitor and assess their thinking and learning. However, different rating scales were used for the question about preparedness (a four-point scale from Not at all prepared to Very well prepared) versus implementation (a four-point scale from Not at all to Weekly). The third question, about perceived supports and barriers to reform-oriented instruction, used a six-point Likert scale from Strongly Disagree to Strongly Agree and included items about time, departmental expectations, and student resistance. The survey questions are included in Additional file 1.

The surveys were administered to all STEM faculty at each institution, regardless of C-Core participation, in the spring of years 1 and 2 of C-Core, with 60% (n = 167) and 55% (n = 157) response rates, respectively. Table 2 shows the characteristics of each sample. At the time of administration of the year 1 survey, cohort A (34 faculty) had completed two Saturday workshops, and cohort B (24 faculty) had not yet begun the program. At the time of administration of the year 2 survey, cohort A had completed five Saturday workshops and one Summer Institute and cohort B had completed two Saturday workshops.

Table 2 Characteristics of external evaluation survey samples

Items within questions 1 and 2 (the same 14 items, with different scales, as described above) were combined to create two composite variables: (1) Perceptions of Preparedness to use Reform-Oriented Teaching Practices, and (2) Use of Reform-Oriented Teaching Practices. Each composite was calculated by summing the responses to the relevant items and then dividing by the total points possible. Composite scores range from 0 to 100 points; someone who marks the lowest point on every item in a composite receives a score of 0, and someone who marks the highest point on every item receives a score of 100. Cronbach’s alpha was determined to be 0.873 and 0.810 for composite variables 1 and 2, respectively, suggesting strong internal consistency among the items within each composite variable. Confirmatory factor analysis using structural equation modeling produced standardized root mean square residuals (SRMR) of 0.089 and 0.085, and root mean square error of approximation (RMSEA) of 0.112 and 0.081 for the two composite variables, respectively. In all cases except the RMSEA for the Perceptions of Preparedness variable, these fit indices fall within the accepted range of 0.05–0.1 (Marsh, Wen, & Hau, 2004). Although the RMSEA did not fall within this range for the Perceptions of Preparedness variable, the fact that the other statistics fall within the accepted range (SRMR of 0.089 and Cronbach’s alpha of 0.873) supports this variable as a reliable measure.
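To make the scoring concrete, the following is a minimal sketch (not the authors’ code) of how a 0–100 composite score and Cronbach’s alpha could be computed from item responses. The 0–3 coding of each item (so that marking the lowest point on every item yields 0), the data layout, and the example data are assumptions for illustration only.

```python
# Sketch: composite scoring and Cronbach's alpha for a set of Likert items.
# Assumes items are coded 0 (lowest) to 3 (highest); rows = respondents.
import numpy as np
import pandas as pd

def composite_score(items: pd.DataFrame, max_points: int) -> pd.Series:
    """Sum item responses and rescale so all-lowest = 0 and all-highest = 100."""
    return 100 * items.sum(axis=1) / max_points

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical example: 20 respondents rating 14 items on a 0-3 scale
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(0, 4, size=(20, 14)))
print(composite_score(responses, max_points=14 * 3).head())
print(round(cronbach_alpha(responses), 3))
```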

A two-level hierarchical linear model (HLM) was used to test whether there were statistically significant differences over time on the composites regarding perceptions of preparedness and the use of reform-oriented instructional practices (Eqs. 1, 2, and 3).

$$ \text{Level 1: } \text{Composite}_{ti} = \pi_{0i} + \pi_{1i}\left(\text{Time2}\right) + e_{ti} $$
(1)
$$ \text{Level 2: } \pi_{0i} = \beta_{00} + r_{0i} $$
(2)
$$ \pi_{1i} = \beta_{10} $$
(3)

Level 1 of the HLM included the two time points at which the composites were measured (the year 1 pre-survey and the year 2 post-survey, indicated by Time2), which were nested within faculty members (level 2). No other variables were included in the model. Thus, the intercept (π0i) represents the composite pre-survey mean score in year 1 and the slope equation at level 2 (π1i) represents the change in composite means from the pre- to post-survey. Residuals for the model were normally distributed with no outliers, indicating the appropriateness of the model for the data. The intraclass correlation coefficient (ICC) was 0.65, indicating that 65% of the variation in composite scores was across time points and 35% was among faculty members.

In order to make comparisons between years 1 and 2 within groups (institutions or cohort), the model was run with different subsets of data: One run included all responses, and six included subsets of the responses to represent the following groups: RU only, CC1 only, CC2 only, non-participants, cohort A, and cohort B. Because introductory courses were oversampled, weights were calculated to reflect instruction across courses. The HLM model was used to test for statistical significance at a level of α = 0.05. Effect sizes were calculated as the regression coefficient for the difference in the composite scores between the two time points, divided by the standard deviation of the composite across time points.
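As a rough illustration of this analysis, the two-level model can be fit as a random-intercept mixed model, with the effect size computed as described above. This is a sketch under stated assumptions, not the authors’ analysis code: the long-format layout and the column names faculty_id, time2, and composite are hypothetical, and the survey weights and subgroup runs are omitted.

```python
# Sketch: random-intercept model Composite_ti = pi_0i + pi_1i*(Time2) + e_ti,
# with pi_0i = beta_00 + r_0i varying by faculty member and a fixed slope.
import pandas as pd
import statsmodels.formula.api as smf

def fit_hlm(df: pd.DataFrame):
    """df has one row per faculty member per survey year:
    columns faculty_id, time2 (0 = year 1, 1 = year 2), composite (0-100)."""
    model = smf.mixedlm("composite ~ time2", data=df, groups=df["faculty_id"])
    result = model.fit()

    # Variance components: between-faculty (random intercept) vs. residual
    var_between = result.cov_re.iloc[0, 0]
    var_within = result.scale
    icc = var_between / (var_between + var_within)

    # Effect size: the Time2 coefficient divided by the SD of the composite
    # pooled across both time points
    effect_size = result.fe_params["time2"] / df["composite"].std(ddof=1)
    return result, icc, effect_size

# Usage (hypothetical file): significance of the year 1 -> year 2 change is
# read from the time2 coefficient in result.summary().
# df = pd.read_csv("survey_composites_long.csv")
# result, icc, effect_size = fit_hlm(df)
```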

Finally, we examined responses to the items in the perceived supports and barriers question to gain insight into the most common barriers faculty perceived they were facing in implementing student-centered practices. For each item, the percentages of faculty responding Slightly Agree, Moderately Agree, and Strongly Agree were summed, and the items were ranked by this total. The ranked lists were compared across years to identify potential shifts in perceived barriers.
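A minimal sketch of this ranking step is below; the item names, response labels, and data frame layout are hypothetical.

```python
# Sketch: rank barrier/support items by the percent of faculty agreeing
# (Slightly, Moderately, or Strongly Agree) with each statement.
import pandas as pd

AGREE_LABELS = {"Slightly Agree", "Moderately Agree", "Strongly Agree"}

def rank_by_agreement(responses: pd.DataFrame) -> pd.Series:
    """responses: rows = faculty, columns = items, cells = Likert labels.
    Returns items sorted by percent of respondents agreeing at any level."""
    pct_agree = responses.isin(AGREE_LABELS).mean() * 100
    return pct_agree.sort_values(ascending=False)

# Comparing rank_by_agreement(year1_responses) with
# rank_by_agreement(year2_responses) would reveal shifts in the ranked lists.
```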

Case studies

In addition to the survey data, we collected data from four participants of cohort A to feature as case studies, addressing all three research questions. We took a cross-case approach to analysis, exploring the four cases for themes and contrasts (Yin, 2003). This approach allowed us to gain more detailed insight into what participants experienced when they made changes to their instruction, as well as the variations and similarities in those experiences. We selected all case study subjects from cohort A and aimed to maximize variation in institution, discipline, and gender. All case study data were collected from instances of subjects teaching introductory (100 or 200-level) STEM courses. Most of these courses are structured as foundational courses for majors in the discipline, so they cover a wide range of content. Table 3 describes these four individuals, who are referred to using pseudonyms. We use the word “subject” to refer to one or more of the case study individuals, while “participant” refers more generally to faculty involved in C-Core. Since C-Core was the first formal faculty development program in student-centered teaching and learning in STEM at all three institutions, it represented most participants’ first engagement with PD in STEM teaching and learning.

Table 3 Summary of data collected from each case study subject, and information from the courses in which they were collected, during the seven academic quarters of data collection

Findings from the case studies were based on three sources of data: classroom video recordings, interviews of the subjects, and a student perceptions questionnaire. We used a combination of existing protocols and thematic analysis (Braun & Clarke, 2006) as a first-order approach to coding the data and searching for recurring themes.

Classroom videos

We performed observations of classroom instruction for the four subjects using video recordings to address research questions 1 (What changes did faculty make to their classroom instruction and learning environments to improve students’ understanding of core ideas for their course?) and 3 (How did instructors’ implementation of student-centered practices align with students’ perceptions?). The videos sampled up to two class sessions per quarter per subject. We chose to analyze only those sessions that represented typical day-to-day instruction, rather than sessions featuring new materials developed during a C-Core event. Although we did record video of most of the subjects employing new materials they had created as a result of C-Core (such as an active learning tutorial), these sessions were relatively rare (1–4 course meetings per quarter) and we wanted to be conservative in our interpretations of day-to-day instruction. Therefore, we did not include these videos in the subsequent analysis.

We analyzed the videos using the Classroom Observation Protocol for Undergraduate STEM (COPUS) (Smith, Jones, Gilbert, & Wieman, 2013) and the Assessing the Impact of Math and science projects (AIM) observation protocol (Weiss, Pasley, Smith, Banilower, & Heck, 2003). The use of these two protocols allowed us to both describe and evaluate classroom practice, thus helping us gain a richer picture of participants’ teaching practices than either protocol would give us alone. The COPUS protocol is a way of capturing behaviors of the instructor and students. Many of these behaviors (students discussing, instructor asking clicker questions) align with the pedagogies modeled in the professional development, as well as with participants’ plans to change their practice. The AIM protocol is aligned with the same cognitive research on learning that C-Core participants were exposed to (Banilower, Cohen, Pasley, & Weiss, 2008; Bransford et al., 2000) and allowed us to evaluate how well the practices used in the classroom, whether active or not, supported students in engaging in deep learning.

The COPUS involved coding the actions of the instructor and students every 2 minutes and therefore yielded descriptive data. Before coding the videos, two members of the research team engaged in a COPUS training protocol using videos and keys obtained from personal communication with the COPUS developers. The researchers first coded each of two training videos. After each video, they compared their codes to the key and discussed the meaning of each code. In subsequent coding, the researchers coded individually, compared their codes, and then reached consensus through discussion of instances in which their codes initially disagreed. We then categorized each activity as passive engagement (students listening, instructor lecturing) or active engagement (all other activities on COPUS, including asking/answering questions, small or large group discussions, working on tutorials, etc.) and reported the percentage of time spent on these two categories of activities. The decision to group in this way represents the most conservative possible interpretation of passive learning, giving the benefit of the doubt to all other codes besides listening and lecturing.
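The following is one plausible way (a sketch, not the authors’ code) to compute the active/passive split from 2-minute COPUS intervals. The data structure is an assumption; the code labels Lec (instructor lecturing) and L (students listening) follow common COPUS abbreviations but should be checked against the protocol as used.

```python
# Sketch: percent of 2-minute COPUS intervals counted as active engagement.
# An interval is "passive" if only lecturing/listening codes were marked;
# any other code (e.g., clicker questions, group work) makes it "active."
PASSIVE_CODES = {"Lec", "L"}  # instructor lecturing, students listening

def percent_active(intervals: list[set[str]]) -> float:
    """intervals: one set of consensus COPUS codes per 2-minute interval."""
    active = sum(
        1 for codes in intervals
        if any(code not in PASSIVE_CODES for code in codes)
    )
    return 100 * active / len(intervals)

# Hypothetical example: three 2-minute intervals
example = [{"Lec", "L"}, {"Lec", "CQ", "AnQ"}, {"WG", "MG"}]
print(f"{percent_active(example):.0f}% active")  # prints "67% active"
```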

The AIM was developed to evaluate the quality of each lesson in terms of its likelihood to result in deep understanding of important ideas. This protocol includes six indicators for effective science teaching: (1) important science content, (2) opportunities to elicit initial ideas, (3) engaging with examples/phenomena, (4) using evidence to make claims, (5) opportunities for sense-making, and (6) classroom culture. Each of these is further explicated in the Effective Science Instruction (ESI) framework, which is a constructivist-based framework for classroom instruction (Banilower et al., 2008). Multiple undergraduate research assistants were trained on the AIM protocol through iterative practice with stock videos and discussion of results with C-Core’s internal evaluator (Hanley), who was familiar with the protocol. Each video was then separately rated by two research assistants on a scale from 1 (low) to 4 (high) on each of the six indicators. The research assistants then compared, discussed, and reached consensus on their ratings. The multiple sets of codes for each of the video observation protocols were combined (summed for the COPUS and averaged for the AIM) for each case study subject, in order to capture their overall use of student-centered practices as well as the opportunities for constructivist-based learning they afforded the students.

Interviews

Semi-structured interviews were conducted by undergraduate research assistants with each case study subject at the beginning (entrance) and end (exit) of most quarters during which their instruction was video recorded. Only data from the exit interviews are included in this paper, because these addressed research question 1 (What changes did faculty make to their classroom instruction and learning environments to improve students’ understanding of core ideas for their course?) and research question 2 (What supported or constrained the changes faculty attempted to make to their instruction?). The exit interview protocol consisted of the following questions: (1) What changes did you make to your instruction this quarter?; (2) How did those changes affect your class, in terms of (a) student engagement with the materials, and (b) student interactions with each other and the instructor?; (3) What supported the changes you made this quarter?; and (4) What hindered the changes you made this quarter? During most of the interviews, probing questions were used to gain further insight into subjects’ responses.

Audio recordings of the interviews were transcribed and coded by research assistants in consultation with C-Core project leadership. Thematic analysis (Braun & Clarke, 2006) was employed to develop emergent codes from the transcripts. In developing these codes, we looked for (1) instances where the subjects mentioned instructional strategies they implemented, and (2) the supports and constraints to implementation that were verbalized. After a list of codes was developed by two separate coders, the larger research team met in order to refine code definitions, as well as merge similar codes together or separate unique codes from each other in order to best represent the interview transcripts. After the list of codes was developed, each interview was coded separately by two research assistants who then discussed their codes and reached consensus.

Student perceptions questionnaire

In order to address research question 3 (How did instructors’ implementation of student-centered practices align with students’ perceptions?), the subjects administered a student survey at the end of most quarters of data collection, using the Student Assessment of their Learning Gains (SALG) questionnaire (Wiese, Seymour, & Hunter, 2000) as a template. This instrument comes with stock questions but allows for customization. Our version asked students to rate their perceptions of the class according to a number of indicators and finally to respond to two open-ended questions about the strengths and weaknesses of the class. The two questions that were ultimately analyzed for the purposes of this report were (1) “Please comment on how the instructional approach, class activities, assignments, exams, resources, information, and/or support in this course helped your learning”, and (2) “Please comment on how the instructional approach, class activities, assignments, exams, resources, information, and/or support in this course could be improved.” The questionnaires were administered online and results were gathered anonymously, with built-in consent. Only responses from consenting students were analyzed. The instructor did not have access to responses before grades were due. Research assistants analyzed the responses to the two questions about strengths and weaknesses in consultation with C-Core leadership. The qualitative analysis software Atlas.ti was used to assign and perform subsequent analyses on the codes. Two levels of codes were assigned. Level one codes identified what course features (e.g., lecture, tutorials) students found effective or ineffective. These codes were labeled positive (e.g., “Lecture was incredibly helpful”), negative (e.g., “The tests were so unclear”), or suggestion (e.g., “More in-class activities would be beneficial”). The most frequent level one codes were then identified to guide the next round of data analysis. In this next round (level two), we coded for why the students thought the features were effective or ineffective (e.g., high student engagement, unclear explanations). Each questionnaire was coded separately by two research assistants who then discussed their codes and reached consensus.

Synthesis of data

Table 3 summarizes the data collected from case study subjects. Once all the data were coded, we synthesized them through multiple comparative approaches (Algozzine & Hancock, 2006). First, subsets of the research team (1–2 researchers in each case) wrote subject-specific summaries to describe the important details of each subject’s case pertaining to the three research questions, using the codes from each data source. The syntheses were written after multiple meetings among the entire research team to refine the codes and the coding procedures for each of the data sources. The summaries were then reviewed and discussed by the entire research team. After revising the summaries based on the discussions and comparisons with the coded data, subsets of the research team consisting of those who did the first-order analysis of the relevant data (1–2 researchers in each case) looked at one data source at a time across subjects. They then wrote data source-specific summaries capturing the important themes relating to the research questions, developed from each of the three types of data (video, interview, and student perceptions questionnaire). Again, these were reviewed and compared to the coded data in a meeting of the entire research team. Finally, in a data triangulation meeting, the entire research team read and discussed the case-specific and data source-specific summaries, and collaboratively developed themes that cut across multiple data sources from multiple cases in response to the research questions.

Results

RQ1: changes to instructors’ practices

Research question 1 asks, “What changes did faculty make to their classroom instruction and learning environments to improve students’ understanding of core ideas for their course?” To address this question, we combined data from the external evaluation surveys with data from video observations and interviews of our case study subjects.

External evaluation surveys

Figure 1 summarizes the findings from the Perceptions of Preparedness and Use of Reform-Oriented Practices questions on the external evaluation surveys in year 1 and year 2, broken out both by institution and by C-Core participation. When asked about their perceived preparedness to implement student-centered practices in their classroom, the ratings of the respondents improved from year 1 to year 2 for RU and CC1, with small effect sizes, and these improvements were statistically significant (p = 0.002, effect size = 0.30 for RU, p = 0.005, effect size = 0.32 for CC1). Ratings from CC2 respondents remained the same from year 1 to year 2; however, these ratings began at the highest level in year 1 compared to the other two institutions. Further, cohort A participants showed the largest increase in perceived preparedness, with a medium effect size, and this gain was statistically significant (p < 0.001, effect size = 0.60). When asked about actual implementation of student-centered approaches, on the other hand, only cohort A participants showed a statistically significant gain, coupled with a medium effect size (p = 0.037, effect size = 0.47). Comparison to cohort B is instructive here: Because the second survey was administered at the beginning of cohort B’s participation and the middle of cohort A’s participation (cohort A had completed one summer academy and five Saturday workshops; cohort B had completed only two Saturday workshops), we can extract some cross-sectional data for how cohorts look at different time periods. On neither set of surveys did cohort B participants show statistically significant gains. However, it should be noted that on average, cohort B participants rated their initial (year 1) perceptions of preparedness and use of reform-oriented practices higher than cohort A, which may at least partially explain why their year 1–year 2 changes were not significant.

Fig. 1 External evaluation survey findings from perceptions of preparedness and use of practices questions for years 1 and 2. *Indicates a statistically significant difference (p < 0.05)

Classroom videos

To get a sense for what implementation of student-centered strategies looked like in the classroom, we analyzed videos of everyday instructional practice for each of the case study subjects. Coding with the COPUS protocol revealed a large degree of variation between subjects. When the activities were categorized into passive (instructor lecturing, students listening) and active engagement (all other activities), the active portion ranged from 9 to 97% of the total lesson time. When we looked more closely at the activities that composed the active engagement periods, we found that answering questions posed by the instructor represented the largest fraction of these periods in each class (tied with whole class discussion in Celeste’s class), but the other active engagement activities varied widely (Table 4). Such variation is common in the use of the COPUS and reflects daily variation in teaching (Stains et al., 2018).

Table 4 COPUS and AIM protocol results for each case study subject

The COPUS can only give us a glimpse of the activities in which the students and instructors were engaged during a class—it cannot generate information about how well those activities helped students engage with the content. Further, the COPUS results above may reflect daily variation in teaching more than trends in the use of student-centered teaching practices. Thus, to gain a more complete picture of how the practices used supported student learning, we also coded each video using the AIM protocol. Though there was high variability within and between cases, the highest scores for all the videos were in the area of course content, which focused on the accuracy and alignment between lesson activities and content, while the lowest scores were for either sense-making, which focused on the extent to which students were guided to construct ideas themselves, or initial ideas, which focused on the extent to which students had opportunities to examine and make public their incoming knowledge. The low scores in the initial ideas category could be due to the timing of the observations, which may have come in the middle of a lesson instead of at the beginning, when initial ideas are more likely to be elicited. Examination of the qualitative data yielded no patterns within or between subjects in strengths, but there were two observations that were frequently documented as challenges: first, there were few opportunities for student engagement, and second, when there were such opportunities, there was little structure in place to ensure equitable, accountable participation around important ideas.

Interviews

To gain more understanding of the specific details of how participants took up the information and practices they learned about through C-Core activities, the four case study subjects were interviewed at the end of most of their courses during the first and second years in the program. Though there was wide variation in their intended changes, the most common of these across all four subjects was use of ABCD voting cards to make lectures more interactive. ABCD cards are best described as “low-tech clickers” where students fold a sheet of paper, on which A, B, C, and D are displayed in different colors on the different quadrants of the paper, to display an answer to a multiple choice question. The most common goals for using ABCD cards, as expressed in entrance interviews, were to improve student engagement during class and promote deeper learning. Subjects also reported using newly developed tutorials or worksheets to help students develop ideas, having students represent their ideas on portable whiteboards, and incorporating group discussions into class meetings.

RQ2: supports and constraints to implementation

Research question 2 asks, “What supported or constrained the changes faculty attempted to make to their instruction?” To address this question, we combined data from external evaluation surveys with case study subject interviews.

External evaluation surveys

To learn about the supports and barriers toward implementing inclusive, student-centered learning strategies to their fullest extent, we examined data from the question on the external evaluation survey asking participants to rate their agreement with several statements aligning with known barriers and supports. Table 5 shows the rankings of these supports and barriers by the percentage of slight, moderate, or strong agreement in both years. Most (80% or more) faculty members agreed with statements expressing the effectiveness of student-centered learning, their desire to incorporate student-centered practices, and their familiarity with those practices, in both years. Encouragingly, more faculty members agreed with the statement about familiarity with student-centered approaches in year 2 (89%) compared to year 1 (80%). Fewer faculty members agreed with statements asking about alignment of student-centered teaching with department policies and practices.

Table 5 Percent of faculty respondents agreeing with various statements about the use of student-centered teaching practices, by year

When asked about the challenges they faced in implementing student-centered learning strategies, the most often cited barriers were the perceived need for content coverage (94% of faculty responding slightly, moderately, or strongly agree in year 1 and 87% in year 2) and perceived effectiveness of lecture (79% in year 1 and 70% in year 2). However, in year 2, class layouts not conducive to group work replaced fear of student resistance to student-centered practices among the three highest-cited barriers. Class layouts received 49% agreement in year 1 and 55% in year 2, while fear of student resistance received 57% agreement in year 1 and 46% in year 2. There were many barriers that received mid-level (40–50%) agreement in both years, and these shifted in their rankings somewhat from year 1 to year 2.

Case study subject interviews

To corroborate the results from the surveys, we examined the transcripts from the exit interviews for the four case study subjects. All subjects reported improvement in student engagement and peer-to-peer interactions and some reported observing improvements in the students’ understanding of the topics and exam scores, compared to previous courses in which they used student-centered strategies less frequently. For example, Peter said, “[ABCD card] questions seemed to keep students engaged, as they require students to process and apply concepts … [and] provide a means for students to interact with both the instructor and each other.” However, not all subjects were convinced that the changes in their teaching were the cause of these improvements. For example, Travis said, “I think the exam scores, well, the averages were higher than my historic averages, but I’m not yet quite sure how much of that is from the changes I’ve done in teaching versus the changes I’ve done in the exams I write.” All subjects mentioned the importance of collaboration with their colleagues through PLCs in making changes to their practice and felt supported by their PLCs and department administrators. Furthermore, they all planned to continue using inclusive, student-centered learning strategies but expressed the need to further modify them to fit their classroom contexts. For example, Celeste said, “I will keep the changes, I just want to work on fitting them in in a more meaningful way.”

Importantly, three of the subjects noted that their departments acted as supports, rather than barriers, to their work in implementing student-centered practices. Peter (CC1) said, “Because my division chair and dean are supportive of this work, there were no institutional barriers for making these changes.” Phillip expressed a great deal of freedom in experimenting with instructional strategies: “I’m supported by my chair and my department... I don’t feel constrained that I have to do certain things or teach a certain way, or make sure that we cover this and cover this.” Travis, after he had taken a position at CC1, expressed support from his departmental colleagues: “This institution that I teach at is very much an active learning or teaching first institution, so they promote using these sorts of methods in their teaching. Many of my colleagues are using these methods.”

The most frequently expressed barriers to implementing student-centered practices had some overlaps with those expressed in the external evaluation surveys: Time constraints and not enough collaboration with colleagues in developing new materials, classroom layouts not conducive to group work, and student resistance to active learning strategies were the most frequently cited. For example, Peter said in an exit interview, “The only thing that hindered these changes was the additional time and effort needed to make these changes.” In response to a question about whether her PLC should have met more frequently, Celeste said, “We were supposed to kind of rotate jobs in the PLC… It just seemed like the group that I was in, no one really wanted to take the time to make it happen.” Travis, when he was at RU, listed some logistical barriers as well as lack of time: “I have found that there is a limit to the degree of changes that can be done, given the facilities, the time and our level of support. I feel there needs to be more assistant support and there is a limit to what you can do in those classrooms.” Later in the program and after taking a position at CC1, he noted group dynamics as the most important barrier: “I think the group dynamic is always an issue. Some students don’t want to be in a group, and you can see some groups click and some groups don’t…”.

RQ3: intersections between instructors’ instructional practices and students’ perceptions

Research question 3 asks, “How did instructors’ implementation of student-centered practices align with students’ perceptions?” To address this question, we examined responses to student perception questionnaires.

Student perceptions questionnaire

To investigate how students viewed the strategies their instructors were implementing, we collected student satisfaction data at the end of most of the case study subjects’ courses using the student perceptions questionnaire. The most frequent course features identified during the first-level coding (what?) process were course organization, examples, lecture, homework, slides, and tutorials. When these features were examined for reasons why they played an important role in the students’ learning (either positive or negative), five second-level (why?) codes were identified: (1) Practice: practicing and applying concepts and skills; (2) Preparation: preparing for exams or homework, reviewing material; (3) Pace: rate at which material was covered, time allotted for tests and assignments; (4) Coherence: organization/flow of content, alignment between content presented and exams/assignments; and (5) Engagement: student connection with the course content, peers, and/or the instructor during class.

The distribution of these codes across the four case study subjects’ student comments is shown in Table 6. Positive student comments centered most heavily around the features lecture and homework. When we examined the reasons for these positive responses, comments focused most often on preparation and engagement. In other words, students valued lecture and homework the most because they thought those helped them prepare for exams and other assessments, and/or because they helped them engage in the course. When asked about parts of the course that could be improved, students most often cited course organization and homework, because they contributed to pacing that was too fast, did not help them prepare for assessments, and/or did not help them engage in the course. We found it instructive that some students perceived value in homework and others saw it as an obstacle. The main suggestions were to improve the organization of the class, because students thought it would improve content preparation.

Table 6 Summary of codes from SALG data. First level (what?) codes are in normal text, and second level (why?) codes are in italics

Discussion and implications

By synthesizing the various data sources for this project, we were able to make some preliminary inferences about what classroom-level changes as a result of C-Core looked like. These findings can inform the creation and implementation of other faculty development projects. We organize these findings into the following broader themes, summarized in Table 7: (1) Implementation of student-centered learning took a variety of forms; (2) Quality implementation of student-centered teaching practices lagged behind understanding of the theory behind those practices; (3) The most robust perceived barriers to implementation of student-centered teaching stayed constant, while more moderate barriers were ranked differently from year 1 to year 2; and (4) Faculty members’ perceptions of student-centered learning practices were not always the same as students’ perceptions. The novelty of these findings comes from two main sources. First, the fact that the data come from faculty representing multiple STEM disciplines and institution types working together gives us a sense of the broad implications of faculty development that spans these contexts. The contexts themselves, a regional, primarily undergraduate institution and two community colleges, tend to be underrepresented in literature describing postsecondary STEM education reforms (Stains et al., 2018). Projects involving collaborations between faculty across those institution types, and across STEM disciplines, are less well represented. Though the case study approach limits broad generalizability of these findings, it generated implications that can be considered across institution types and disciplines in planning faculty development programs. Second, data from the case studies give a richer picture of what implementation of inclusive, student-centered practices looks like at the level of individual faculty, and when combined with survey data can help explain some of the trends. This unique combination of data can help provide a more nuanced picture of what faculty development looks like “on the ground.” Each of these themes is discussed below.

Table 7 Summary, implications, and recommendations drawn from themes identified in this study

Variety of implementation

Although there were a few practices that were used by a large number of participants, variation in strategies was more the rule than the exception. Case study subject interviews revealed the use of ABCD cards, whiteboards, group discussion, and tutorials, among other strategies. Responses to the student perceptions questionnaire revealed students noticed many of these strategies. Additionally, COPUS results from video recordings varied widely between instructors. This variety may reflect the approach we took to professional development. Priority was placed first and foremost on helping participants understand theories of learning, notably constructivism, while specific strategies were introduced and modeled as ways of supporting student learning within a constructivist paradigm. The focus on core ideas first and specific pedagogies second was intended to help faculty be nimble in attending to their specific contexts (Lund & Stains, 2015), choosing and adapting pedagogies appropriately to meet their goals and avoid potentially unproductive adaptations (Offerdahl, McConnell, & Boyer, 2018). Faculty may have experimented with different approaches in order to get students closer to the depth of learning they were hoping for. The implication of this theme is that in order for faculty to have the flexibility to try different approaches and improve over time, professional development should be grounded in a theory of learning. This way, faculty have a vision in mind and can experiment with strategies to move toward that vision.

Lag between theory and practice

Although implementation of strategies appeared to be grounded in theory, we observed a lag between the learning of that theory and the implementation of related practices. Results from the external evaluation survey show faculty consistently ranked their understanding of the research on teaching and learning higher than their implementation of practices that stemmed from that research (Fig. 1). Differences from year 1 to year 2 were smaller for implementation than for perceived preparedness on the external evaluation survey, and were smaller on both measures for faculty newer to C-Core (cohort B) than for those who had participated for longer than one year (cohort A). Notably, on average, cohort B rated both their perceptions of preparedness and their use of reform-oriented instructional practices higher in year 1 than did cohort A participants. This difference may stem from an increased realization of the difficulty of implementing such strategies as one becomes more familiar with them. These higher initial ratings may also be partly responsible for the differences in significance between the two years for cohort B.

Evidence from video observations also supports the theme of a lag between theory and practice. While COPUS codes revealed a range of student-centered strategies, sense-making was consistently the lowest-rated factor in the AIM video coding protocol. We often saw a pattern in which an instructor would try a strategy, such as ABCD cards, but would not necessarily know how to respond to the student input such practices generated (e.g., moving on with the lecture after asking a question). The observation of a lag in implementation is not surprising, as other studies of faculty change have documented much more widespread knowledge of and interest in student-centered pedagogies than implementation of them (Lund & Stains, 2015; Stains et al., 2018). One possible interpretation is that it takes time to gain enough mastery of the strategies to use them in a way that is consistent with the research in which they are grounded. An implication for professional development is to offer ongoing opportunities for learning and practice, spaced over a long enough period of time to give instructors a chance to evolve.

Robustness in highly vs. moderately ranked barriers

Comparison of year 1 and year 2 external evaluation survey results reveals the consistency of certain barriers to implementation of student-centered practices and potential shifts in others. Fear of student resistance to student-centered practices was a highly cited barrier in year 1, while non-ideal class size/layout became more important in year 2. However, the two highest ranked barriers, perceived need for content coverage and perceived effectiveness of lecture, were the same in both years. This consistency may indicate that a belief in the purpose of teaching as conveying information is robust among our participants. If conveying information is the purpose, then lecture is certainly an effective mode of instruction and the need to cover that content is paramount. Further, examination of the supports shows less agreement among faculty with statements that the policies and practices of their departments align with the implementation of student-centered practices. Thus, while case study subjects described departmental support for trying new practices, the goals of instruction may still have been encoded as content delivery in departmental standards.

Differences in ratings of other barriers that had mid-level agreement from year 1 to 2 may reflect differences in needs and foci as participants began to implement student-centered teaching strategies. For example, participants may have seen more limitations to their classroom contexts after they began implementing practices, but had enough positive interactions with students around these practices to allay, to some extent, their fears about students not buying into those practices. To be sure, no barriers appeared to have gone away completely and there was a fair amount of shifting in the rankings of barriers that had moderate agreement. We take this to mean barriers shift according to where faculty members are in their knowledge and implementation of student-centered strategies. Anecdotally, this sentiment was relayed by several C-Core participants throughout the project. Content coverage and worries about student resistance, however, remained perennial challenges, consistent with the literature (Henderson & Dancy, 2007; Shadle et al., 2017).

One implication of these findings is that faculty members’ beliefs about teaching and learning should be uncovered early in, and addressed throughout, a professional development program. The importance of attending to, and collectively developing, common visions of and beliefs about teaching and learning is illustrated in other studies as well (Henderson et al., 2011; Shadle et al., 2017). Another implication is that professional development programs must be flexible enough, and take place over a long enough period of time, to meet different needs during different stages of faculty learning. Additionally, such programs must be ready to address the content coverage barrier by helping faculty engage their departmental colleagues in conversations about curriculum unburdening, in order to make room for student-centered learning. Though we made the argument early on that “less is more,” partly through a review of the literature (Luckie et al., 2012; Schwartz, Sadler, Sonnert, & Tai, 2008), we found that faculty were more willing to engage in unburdening after trying, and having some success with, student-centered learning strategies in their classrooms, and realizing they needed to make room for them. Further, to address the time barrier, programs should consider ways of forming faculty groups, such as PLCs, so that instructors can share the burden of designing materials and implementing new practices. A final implication is that instructors need to be prepared to handle potential student resistance. This preparation can include practicing uncomfortable conversations, planning how to frame student-centered work on the first day of class, and hearing about positive student responses to student-centered instruction. Strategies for addressing these barriers are laid out in the National Academies report, Reaching Students (Kober, 2015), which we had cohort A participants use as a planning tool during the final summer institute.

Student vs. faculty perceptions

The final theme we uncovered is that instructors’ and students’ perceptions of student-centered learning were not always aligned. In interviews and on the external evaluation surveys, participants reported implementing various student-centered learning strategies. However, the most noticed and most highly rated strategy on the student perceptions questionnaire was lecture. This could be because implementation of student-centered practices was a slow and evolutionary process in some classrooms, and lecture was still the predominant form of instruction. It could also reflect differences between students and faculty in their views of learning: students may have seen learning as a more passive activity, whereas the case study subjects were thinking about learning as an active, constructivist process. Finally, the summative assessments, which were mostly exams, may have influenced the type of learning students valued, as many students described lecture as helping them prepare for exams. A possible implication is that professional development should incorporate discussions about how to be transparent with students about why certain strategies are being used and how they align with research on learning. Summative assessments can also be developed to better reflect this research, for example, by emphasizing reasoning over simply arriving at the right answer.

Conclusion

Transforming classrooms is a complex undertaking, requiring work across disciplines and at different levels of a system (American Association for the Advancement of Science, 2019; Gess-Newsome et al., 2003; Henderson et al., 2011; Laursen et al., 2015). In this paper, we described some of the outcomes of C-Core at the faculty/classroom level. Through examination of various data sources, we have gained a better understanding of the kinds of strategies being implemented, as well as some of the affordances and barriers faculty experienced in implementing them. The stories of faculty change are as varied as the faculty themselves, but we were able to identify some important themes relevant to faculty development program design. It is clear from these stories that faculty development toward effective use of student-centered pedagogies is a career-long pursuit and that different measures of success can be expected at different stages. For example, it may be reasonable to expect a solid understanding of how people learn and its implications for teaching, but only rudimentary changes in practice, even after one to two years of sustained faculty development. For this reason, institutionalized, sustained faculty development is required for lasting change.

Limitations

The results described in this paper are limited in their generalizability to other contexts, first, because of the nature and sampling limitations of the external evaluation surveys; second, because of the case study approach; and third, because of difficulties in attributing changes to the C-Core program. For the external evaluation survey, we had response rates of 60% and 55% for years 1 and 2, respectively. We cannot assume that the respondents are representative of the overall study population; in fact, respondents may have had greater knowledge about, and preparation to implement, student-centered practices than those who did not respond, leading to overestimates for the population as a whole. Furthermore, because this survey was given to all STEM faculty in the three institutions, there is likely variation in respondents from year 1 to year 2. This variation may explain some of the differences in faculty members’ agreement with barriers and supports. Finally, the survey was developed as part of project evaluation, not as part of a research project, and therefore closely reflects the goals of this particular project. A survey developed and validated for a wider audience would be needed for more generalizable claims.

Because the case study approach sought data from four individuals in the program, the results from case study data are unique to those individuals’ contexts as well as their baseline knowledge and beliefs. Further, the limited number of videos analyzed for each faculty member limits our ability to capture precisely the variation in the strategies they used, and the variation can only be described between subjects, not within a single subject. The variation between subjects was captured by triangulating the videos with other sources of data (interviews and the student perceptions questionnaire), but cannot be generalized to larger populations. Also, the COPUS is primarily a descriptive instrument, and further research is needed to investigate to what degree the various “active” learning behaviors align with deep student learning. Our use of the AIM was intended to supplement the descriptive nature of the COPUS with more information about student learning, but we do not have enough coded video data to make any claims about whether the active learning behaviors captured by the COPUS align with the affordances of instruction toward deep student learning captured by the AIM. Finally, not every data source was collected for every subject during every quarter, due to resource limitations of the project. More data would be needed to describe in richer detail the practices, perceptions, and experiences of these instructors and their students.

Finally, as with most initiatives in complex systems, C-Core existed among a constellation of other activities and contextual factors at each institution. This makes it impossible to attribute the changes we document in this paper solely to C-Core. The faculty interviews give us some indication that, for the case study subjects, C-Core activities informed their beliefs and their attempts to implement various practices, but we cannot and do not claim C-Core was the sole factor influencing our participants. We do not offer our results with the expectation that they will be generalized to other populations. Rather, we have compiled our data to generate themes that may be important to consider when constructing other faculty development programs.

Abbreviations

AIM: Assessing the Impact of Math and science projects

CC1: Community College 1

CC2: Community College 2

C-Core: Change at the Core professional development program

COPUS: Classroom Observation Protocol for Undergraduate STEM

HLM: Hierarchical Linear Model

ICC: Intraclass Correlation Coefficient

PD: Professional Development

PLC: Professional Learning Community

RMSEA: Root Mean Square Error of Approximation

RU: 4-year Regional University

SALG: Student Assessment of their Learning Gains

SRMR: Standardized Root Mean Square Residual

STEM: Science, Technology, Engineering, and Math

References

  • Algozzine, R., & Hancock, D. R. (2006). Doing case study research: A practical guide for beginning researchers. New York: Teachers College Press.

  • Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching (1st ed.). San Francisco, CA: Jossey-Bass.

  • American Association for the Advancement of Science. (2019). Levers for Change: An assessment of progress on changing STEM instruction. Retrieved from https://www.aaas.org/resources/levers-change-assessment-progress-changing-stem-instruction, [November 2019].

  • Banilower, E. R., Cohen, K., Pasley, J., & Weiss, I. R. (2008). Effective science instruction: What does research tell us? Second edition. Portsmouth, NH: RMC Research Corporation, Center on Instruction.

  • Bathgate, M., Aragón, O., Cavanagh, A., Waterhouse, J., Frederick, J., & Graham, M. (2019). Perceived supports and evidence-based teaching in college STEM. International Journal of STEM Education, 6(1), 1–14. https://doi.org/10.1186/s40594-019-0166-3.

  • Baviskar, S. N., Hartle, R. T., & Whitney, T. (2009). Essential criteria to characterize constructivist teaching: Derived from a review of the literature and applied to five constructivist-teaching method articles. International Journal of Science Education, 31(4), 541–550.

  • Beichner, R. J., Saul, J. M., Abbott, D. S., Morse, J. J., Deardorff, D. L., Allain, R. J., . . . Risley, J. S. (2007). The student-centered activities for large enrollment undergraduate programs (SCALE-UP) project. In E. Redish & P. J. Cooney (Eds.), Research-Based Reform of University Introductory Physics (Vol. 1). Available: http://www.per-central.org/document/ServeFile.cfm?ID=4517 [November 2019].

  • Borrego, M., & Henderson, C. (2014). Increasing the use of evidence-based teaching in STEM higher education: A comparison of eight change strategies. Journal of Engineering Education, 103(2), 220–252. https://doi.org/10.1002/jee.20040.

  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.: National Academy Press.

  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa.

  • Colburn, A. (2000). Constructivism: Science education’s “grand unifying theory”. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 74(1), 9–12. https://doi.org/10.1080/00098655.2000.11478630.

  • Cox, M. D. (2001). Faculty learning communities: Change agents for transforming institutions into learning organizations. To Improve the Academy, 19(1), 69–93. https://doi.org/10.1002/j.2334-4822.2001.tb00525.x.

  • Cox, M. D., Richlin, L., & Cox, M. D. (2004). Introduction to faculty learning communities. New Directions for Teaching and Learning, 2004(97), 5–23. https://doi.org/10.1002/tl.129.

  • Eddy, S. L., & Hogan, K. A. (2014). Getting under the Hood: How and for whom does increasing course structure work? CBE Life Sciences Education, 13(3), 453–468. https://doi.org/10.1187/cbe.14-03-0050.

  • Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111.

  • Gess-Newsome, J., Southerland, S. A., Johnston, A., & Woodbury, S. (2003). Educational reform, personal practical theories, and dissatisfaction: The anatomy of change in college science teaching. American Educational Research Journal, 40(3), 731–767. https://doi.org/10.3102/00028312040003731.

  • Haak, D. C., Hillerislambers, J., Pitre, E., & Freeman, S. (2011). Increased structure and active learning reduce the achievement gap in introductory biology. Science, 332(6034), 1213–1216. https://doi.org/10.1126/science.1204820.

  • Henderson, C., Beach, A., & Finkelstein, N. (2011). Facilitating change in undergraduate STEM instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48(8), 952–984. https://doi.org/10.1002/tea.20439.

  • Henderson, C., & Dancy, M. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Physical Review Special Topics - Physics Education Research, 5(2), 020107. https://doi.org/10.1103/PhysRevSTPER.5.020107.

  • Henderson, C., & Dancy, M. H. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Special Topics - Physics Education Research, 3(2), 020102.

  • Kober, N. (2015). Reaching students: What research says about effective instruction in undergraduate science and engineering. Board on Science Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

  • Laursen, S. L., Austin, A. E., Soto, M., & Martinez, D. (2015). ADVANCing the agenda for gender equity. Change: The Magazine of Higher Learning, 47(4), 16–24.

  • Luckie, D. B., Aubry, J. R., Marengo, B. J., Rivkin, A. M., Foos, L. A., & Maleszewski, J. J. (2012). Less teaching, more learning: 10-yr study supports increasing student learning through less coverage and more inquiry. Advances in Physiology Education, 36(4), 325–335. https://doi.org/10.1152/advan.00017.2012.

  • Lund, T., & Stains, M. (2015). The importance of context: An exploration of factors influencing the adoption of student-centered teaching among chemistry, biology, and physics faculty. International Journal of STEM Education, 2(1), 1–21. https://doi.org/10.1186/s40594-015-0026-8.

  • Marsh, H. W., Wen, Z., & Hau, K.-T. (2004). Structural equation models of latent interactions: Evaluation of alternative estimation strategies and Indicator construction. Psychological Methods, 9(3), 275–300. https://doi.org/10.1037/1082-989X.9.3.275.

  • Offerdahl, E. G., McConnell, M., & Boyer, J. (2018). Can I have your recipe? Using a fidelity of implementation (FOI) framework to identify the key ingredients of formative assessment for learning. CBE Life Sciences Education, 17(4), 1–9. https://doi.org/10.1187/cbe.18-02-0029.

  • Olmstead, A., Beach, A., & Henderson, C. (2019). Supporting improvements to undergraduate STEM instruction: An emerging model for understanding instructional change teams. International Journal of STEM Education, 6(1), 1–15. https://doi.org/10.1186/s40594-019-0173-4.

  • Owens, M. T., Trujillo, G., Seidel, S. B., Harrison, C. D., Farrar, K. M., Benton, H. P., et al. (2018). Collectively improving our teaching: Attempting biology department-wide professional development in scientific teaching. CBE Life Sciences Education, 17(1), 1–17. https://doi.org/10.1187/cbe.17-06-0106.

  • Piaget, J. (1978). Success and understanding. Cambridge, MA: Harvard University Press.

  • Popham, W. J. (2008). Transformative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

  • Pruitt-Logan, A. S., Gaff, J. G., & Jentoft, J. E. (2002). Preparing future faculty in the sciences and mathematics: A guide for change. Retrieved from http://www.preparing-faculty.org/PFFWeb.PFF3Manual.pdf, [December, 2019].

  • Richlin, L., & Cox, M. D. (2004). Developing scholarly teaching and the scholarship of teaching and learning through faculty learning communities. New Directions for Teaching and Learning, 2004(97), 127–135. https://doi.org/10.1002/tl.139.

  • Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

  • Sathy, V., & Hogan, K. A. (2019). Want to reach all of your students? Here's how to make your teaching more inclusive. Chronicle of Higher Education, July 22.

  • Schwartz, M. S., Sadler, P. M., Sonnert, G., & Tai, R. H. (2008). Depth versus breadth: How content coverage in high school science courses relates to later success in college science coursework. Science Education, 93(5), 798–826.

  • Shadle, S., Marker, A., & Earl, B. (2017). Faculty drivers and barriers: Laying the groundwork for undergraduate STEM education reform in academic departments. International Journal of STEM Education, 4(1), 1–13. https://doi.org/10.1186/s40594-017-0062-7.

  • Smith, M. K., Jones, F. H. M., Gilbert, S. L., & Wieman, C. E. (2013). The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE Life Sciences Education, 12(4), 618–627.

  • Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., Dechenne-Peters, S. E., et al. (2018). Anatomy of STEM teaching in North American universities. Science, 359(6383), 1468–1470. https://doi.org/10.1126/science.aap8892.

  • Stoll, L., Bolam, R., McMahon, A., Wallace, M., & Thomas, S. (2006). Professional learning communities: A review of the literature. Journal of Educational Change, 7(4), 221–258. https://doi.org/10.1007/s10833-006-0001-8.

  • Tanner, K. D. (2013). Structure matters: Twenty-one teaching strategies to promote student engagement and cultivate classroom equity. CBE Life Sciences Education, 12(3), 322–331.

  • Tinnell, T. L., Ralston, P. A. S., Tretter, T. R., & Mills, M. E. (2019). Sustaining pedagogical change via faculty learning community. International Journal of STEM Education, 6(1). https://doi.org/10.1186/s40594-019-0180-5.

  • Turpen, C., & Finkelstein, N. D. (2009). Not all interactive engagement is the same: Variations in physics Professors’ implementation of “peer instruction”. Physical Review Special Topics - Physics Education Research, 5(2), 020101. https://doi.org/10.1103/PhysRevSTPER.5.020101.

  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.

  • Wei, R. C., Darling-Hammond, L., Andree, A., Richardson, N., & Orphanos, S. (2009). Professional learning in the learning profession: A status report on teacher development in the United States and abroad. Retrieved from https://learningforward.org/docs/default-source/pdf/nsdcstudytechnicalreport2009.pdf, [December, 2019].

  • Weiss, I. R., Pasley, J. D., Smith, S. P., Banilower, E. R., & Heck, D. J. (2003). Looking inside the classroom: A study of K-12 mathematics and science education in the United States. Retrieved from: http://www.horizon-research.com/insidetheclassroom/reports/looking/, [December, 2019].

  • Wiese, D., Seymour, E., & Hunter, A. B. (2000). Creating a better mousetrap: On-line student assessment of their learning gains. Retrieved from https://salgsite.net/docs/SALGPaperPresentationAtACS.pdf, [December, 2019].

  • Wiliam, D. (2007). Changing classroom practice. Educational Leadership, 65(4), 36–42.

  • Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.

  • Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.

Acknowledgments

We are grateful for the participation of all C-Core faculty, especially our four case study subjects, as well as for the guidance of external evaluators Drs. Eric Banilower and Peggy Trygstad; advisory board members Drs. Jeanne Narum, Sarah Miller, Heather MacDonald, Bill Penuel, and Darryl Williams; project co-PIs Drs. Joann Otto and Ed Harri; and faculty catalysts Drs. Jessica Cohen, Regina Barber-DeGraaff, Kaatje Kraft, Tran Phung, Tony St. John, and Gabe Mast.

Availability of data and material

Please contact the corresponding author for data requests.

Funding

This work was funded in part by the National Science Foundation under the Widening Implementation and Development of Evidence-based Reforms (WIDER) program, DUE-1347711. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Author information

Authors and Affiliations

Authors

Contributions

EB contributed to the project and study design, drafted the manuscript, and organized revisions and responses to reviewers. ES contributed to qualitative data analysis, drafted sections of the manuscript, and participated in manuscript revision. DH contributed to the study design, guided data analysis, and participated in manuscript revision. EG contributed to the project design and participated in manuscript revision. SW directed the project and participated in manuscript revision. CI and LS contributed to qualitative data analysis. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Emily Borda.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

External evaluation survey questions about perceptions of preparedness to implement reform-oriented instructional strategies, use of reform-oriented instructional strategies, and supports and barriers to implementing reform-oriented instructional strategies.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Borda, E., Schumacher, E., Hanley, D. et al. Initial implementation of active learning strategies in large, lecture STEM courses: lessons learned from a multi-institutional, interdisciplinary STEM faculty development program. IJ STEM Ed 7, 4 (2020). https://doi.org/10.1186/s40594-020-0203-2

Keywords