Investigating how faculty social networks and peer influence relate to knowledge and use of evidence-based teaching practices

Abstract

Background

Calls for science education reform have been made for decades in the USA. The recent call to produce one million new science, technology, engineering, and math (STEM) graduates over 10 years highlights the need to employ evidence-based instructional practices (EBIPs) in undergraduate STEM classes to create engaging and effective learning environments. EBIPs are teaching strategies that have been empirically demonstrated to positively impact student learning, attitudes, and achievement in STEM disciplines. However, the mechanisms and processes by which faculty learn about and choose to implement EBIPs remain unclear. To explore this problem area, we used social network analysis to examine how an instructor’s knowledge and use of EBIPs may be influenced by their peers within a STEM department. We investigated teaching discussion networks in biology and chemistry departments at three public universities.

Results

We report that tie strength and tie diversity vary between departments, but that mean indegree is not correlated with organizational rank or tenure status. We also show that teaching discussion ties can often be characterized as strong ties based on two measures of tie strength. Further, we compare peer influence models and find consistent evidence that peer influence in these departments follows a network disturbances model.

Conclusions

Our findings with respect to tie strength and tie diversity indicate that the social network structures in these departments vary in how conducive they might be to change. The correlation in teaching practice between discussion partners and the results of the peer influence models suggest that change agents should consider local social network characteristics when developing change strategies. In particular, change agents can expect that faculty may serve as opinion leaders regardless of their academic rank and that faculty can increase their use of EBIPs even if those they speak to about teaching use EBIPs comparatively less.

Introduction

Calls for science education reform in the USA have been made for decades. A recent call for universities to produce one million additional college graduates with degrees in science, technology, engineering, and mathematics (STEM) by 2022 puts significant pressure on institutions to implement successful instructional strategies throughout the undergraduate curriculum but especially in the large gateway courses, where the retention rates are typically low (President’s Council of Advisors on Science and Technology, 2012). Reform efforts aim to promote the widespread adoption of evidence-based instructional practices (EBIPs) by college and university faculty. EBIPs are instructional strategies and methods that have been empirically demonstrated to impact student learning, attitudes, and achievement in STEM disciplines, with especially large gains shown for students from underrepresented groups (Daempfle, 2006; Handelsman et al., 2004; Handelsman, Miller, & Pfund, 2006; Schroeder, Scott, Tolson, Huang, & Lee, 2007; Wise & Okey, 1983). Despite their promise, EBIPs have not become the predominant teaching mode (Borrego, Froyd, & Hall, 2010; Durham, Knight, & Couch, 2017; Henderson & Dancy, 2009; Stains et al., 2018).

Among the issues affecting implementation, higher education lacks proven models and frameworks for catalyzing change (Henderson, Beach, & Finkelstein, 2011), reflecting a gap in understanding of how to shift faculty practice toward crafting learning environments that include EBIPs. Research on barriers to changing teaching practices shows the process to be a complex interplay between personal and contextual factors (Andrews & Lemons, 2015; Austin, 2011; Brownell & Tanner, 2012; Gess-Newsome, Southerland, Johnston, & Woodbury, 2003; Henderson & Dancy, 2007; Lund & Stains, 2015; Shadle, Marker, & Earl, 2017).

Faculty social networks represent one factor that has been hypothesized to influence change (Kezar, 2014; Quardokus & Henderson, 2015). Indeed, networks of interpersonal relationships are thought to facilitate the spread of ideas and affect individual decisions to change behaviors (Banerjee, Chandrasekhar, Duflo, & Jackson, 2013; Harrison, Sciberras, & James, 2011; Kezar, 2014). Changes often involve risk-taking, and being part of a network can buffer such risks if others are willing to share costs and benefits (Valente, 1995). Studies also suggest a connection between what an individual learns through a change effort and the strength of the social relationships (i.e., ties) in that individual’s network (Tenkasi & Chesmore, 2003). If faculty are not effectively connected to each other or if the networks do not enable communication and support for teaching, educational reforms may not easily be initiated and implemented (Kezar, 2014). Peers can play as much of a role in a person’s decision to engage in the change process as organizational norms (Kezar, 2014). Furthermore, a strong relationship exists between the social network characteristics of a system targeted by change efforts and the impact of the change efforts (Kezar, 2014).

For the use of EBIPs to be widespread and sustained, change efforts must move beyond dissemination of ideas and instead target relevant environments and structures that relate to behaviors (e.g., Henderson et al., 2019; Henderson & Dancy, 2007; Henderson, Dancy, & Niewiadomska-Bugaj, 2012; Pollock & Finkelstein, 2008). Department social networks represent important structures that can potentially be leveraged to influence behaviors and environments (Henderson et al., 2019). In the USA, departments often serve as the primary arenas for change efforts because they are relatively independent, contain their own organizational structures, play a central role in promotion decisions, and harbor significant social interaction (AAAS, 2011; Edwards, 1999; Henderson et al., 2019; Wieman, Perkins, & Gilbert, 2010).

The K-12 literature recognizes social networks as an important component in educational change (Daly, 2010; Henderson et al., 2019), but STEM education researchers have only recently applied social network analyses at the postsecondary level (e.g., Andrews, Conaway, Zhao, & Dolan, 2016; Knaub, Henderson, & Quardokus Fisher, 2018; Ma, Herman, Tomkin, Mestre, & West, 2018; Mestre, Herman, Tomkin, & West, 2019; Quardokus & Henderson, 2015). These studies describe social networks within and across STEM departments and identify leaders of instructional change, often referred to as change agents. For example, Quardokus and Henderson (2015) collected surveys to characterize faculty teaching networks within five science departments at one institution. They used these data to introduce and illustrate how certain social network measures can inform instructional change initiatives. In a second study, Andrews et al. (2016) examined teaching networks in four life-sciences departments by conducting surveys and follow-up interviews. Their goal was to characterize who was talking to whom about teaching (i.e., ties) and the content of these conversations. They reported that interactions about undergraduate teaching were uncommon in the departments studied. Less than half of the respondents reported such interactions with more than one colleague monthly. Furthermore, discipline-based education research (DBER) faculty were overrepresented as resource providers (e.g., provided instructional materials, social support, feedback, information) and change agents. In a third study, Knaub et al. (2018) contrasted two methods for identifying leaders of STEM education reform. The first method had respondents nominate individuals as current or potential leaders in STEM reform. A person was identified as a leader if they received two or more nominations. The second method used network metrics of a person’s position in a teaching discussion network to identify leaders. Respondents were faculty members in six departments at a large comprehensive doctoral institution, surveyed multiple times. The authors found that respondent nominations yielded a different set of leaders than social network analysis (SNA) and concluded that facilitators of change initiatives should use multiple methods for identifying leaders.

Additional studies of two STEM education reform efforts focused on the social networks that emerge from deliberate communities of practice (CoP). Ma et al. (2018) and Mestre et al. (2019) showed that mentors who served as leaders of CoPs played a key bridging role in connecting faculty in conversation and collaboration around teaching. Further, the CoPs that more effectively supported the adoption of EBIPs included all CoP members in conversation, although a few core participants drove the adoption of new teaching approaches (Ma et al., 2018).

Finally, Quardokus Fisher and Apkarian performed a reanalysis of data collected across three studies consisting of 22 STEM departments using a social capital and social network lens (Quardokus Fisher & Apkarian, 2019). While data collection methods differed in the original three reference studies, the authors took steps to filter and process the data in ways that enabled them to make comparisons. This reanalysis reported on the ways in which departments show variability across a range of social network attributes related to teaching and learning, including tie density, connectedness, and tie distribution. Additionally, they highlighted how some departments had prominent central actors, while others had ties that were more evenly distributed. Finally, the authors provided case study descriptions of how the social network characteristics of two departments can inform change agents and reform efforts.

These descriptive studies broadly demonstrate how SNA can inform the design and implementation of change initiatives. Ma et al. (2018) suggest that programs successfully supporting EBIP adoption must promote both bridging ties and strong ties in the network. How EBIP adoption spreads in the absence of such a specially structured program remains unclear. Thus, there is still much to explore about how teaching innovations such as EBIPs propagate through networks.

Social networks and institutional change in higher education

Kezar’s (2014) extensive review of the change-related SNA literature identifies a series of social network characteristics hypothesized to be important for change in higher education. Here, we describe three predictions concerning social network characteristics investigated in the current study.

First, Kezar (2014) predicts that on-campus networks will primarily contain weak and diffuse ties (Fig. 1), which she proposes can impede change. According to Granovetter, tie strength reflects a “combination of the amount of time, the emotional intensity, the intimacy (mutual confiding) and reciprocal services which characterize the tie” (Granovetter, 1973, p. 1361). Petróczi, Nepusz, and Bazsó (2007) provide a comprehensive review of the indicators of tie strength, two of which can be used with data collected for the current study. The first is whether a tie is reciprocated between two individuals. Reciprocated ties are regarded as strong and unreciprocated ties as weak for the purposes of analysis (Blumstein & Kollock, 1988; Eckmann & Moses, 2002; Friedkin, 1980, 1982; Larsen & Lewis, 2017; Mathews, White, Soper, & von Bergen, 1998; Memic, 2009; Perlman & Fehr, 1987). The second indicator is whether a tie is multiplex, that is, does the connection occur only in one context (uniplex) or in multiple contexts (Basov & Brennecke, 2017; Blumstein & Kollock, 1988; Marsden & Campbell, 1984; Perlman & Fehr, 1987). A multiplex tie is viewed as stronger than a uniplex tie. For example, faculty might talk about different activities, such as teaching, research, or departmental affairs. If a faculty member talks to another about both research and teaching, that could be considered a stronger tie than if they only speak about teaching. As for Kezar’s position that weak ties impede change, the literature suggests a more nuanced picture. Weak ties allow for the introduction of new ideas because they indicate a less insular network (Granovetter, 1973); however, strong ties are predicted to be more likely to promote change because they allow for the exchange of complex ideas (Balkundi & Harrison, 2006; Kezar, 2014; Tenkasi & Chesmore, 2003). Recent work by Centola (2018) argues that the spread of a behavior through a network, compared to the spread of information, depends more on the clustering driven by strong ties rather than the bridging provided by weak ties.

Fig. 1 Social network characteristics. a Type of tie strength and central actor. b Diversity of ties

The second prediction important to our study is that Kezar proposes that having a diversity of ties (i.e., heterophily, Fig. 1) will promote the dissemination of new ideas by providing access to new information that can help solve teaching-related problems. Ties are diverse when they connect individuals that have different characteristics (e.g., men with women, tenure-track faculty with non-tenure-track faculty). Social ties are often homophilous, which means they occur disproportionately between individuals who share socially important characteristics. Sociologists unpack homophily into baseline homophily and inbreeding homophily (McPherson, Smith-Lovin, & Cook, 2001, p. 419). Baseline homophily is the proportion of ties that are expected between individuals that share an attribute (i.e., ingroup ties) based on the frequency of that attribute within the network. For example, in a network evenly split between men and women, 50% of all ties would occur by chance within gender and 50% of the ties between genders. Inbreeding homophily refers to the excess of ingroup ties over the baseline expected and is measured by a “coefficient of inbreeding,” which expresses how much the proportion of ingroup ties exceeds chance.
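To make the distinction between baseline and inbreeding homophily concrete, the following sketch works through one common version of the inbreeding calculation for a hypothetical department. The 60/40 gender split and tie counts are illustrative only, and this simple index is not necessarily the τ estimator reported later in the paper.

```r
# Illustrative (hypothetical) calculation of baseline vs. inbreeding homophily.
# Suppose a department is 60% men and 40% women, and we observe 100 teaching
# ties, 70 of which connect faculty of the same gender.

p <- c(men = 0.6, women = 0.4)   # group proportions (hypothetical)
baseline <- sum(p^2)             # share of ingroup ties expected by chance = 0.52
observed <- 70 / 100             # observed share of ingroup ties = 0.70

# Excess of ingroup ties relative to the maximum possible excess.
inbreeding <- (observed - baseline) / (1 - baseline)
inbreeding  # ~0.375: ties are more homophilous than chance mixing would produce
```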

Third, Kezar (2014) describes opinion leadership and high centrality as two characteristics that can affect change leadership and predicts that opinion leadership is likely to be more influential than high centrality (cf. Knaub et al., 2018). Opinion leaders are faculty perceived by others as having influence on their attitudes and opinions, whereas central actors (i.e., those with high centrality; Fig. 1) are faculty who have connections to many others in their organization. In social network analysis, the two metrics overlap when networks consist of directed ties. In that case, opinion leadership is often operationalized in terms of indegree centrality: the intuition is that a person often nominated by others is a key individual whose opinion counts more than a person rarely nominated by others. While the two are conflated within social networks, the extent to which the combined construct is relevant to change in higher education is investigated in the current study.

Peer influence models

One way that faculty social networks may be important to educational change rests on the idea that faculty are subject to peer influence. If some faculty adopt EBIPs, the peer influence process could facilitate the spread of EBIP adoption from these “seeds” to their associated colleagues. Thus, a change in behavior, specifically the pedagogical techniques used by faculty, could be facilitated by leveraging the peer influence process within faculty networks.

Statistical models for peer influence exist in the social network literature (Marsden & Friedkin, 1993) and have been applied in a variety of contexts (Duke, 1993; Gimpel & Schuknecht, 2003; Mizruchi, Stearns, & Marquis, 2006; O’Malley & Marsden, 2008). These models estimate the manner and degree to which network connections affect some outcome variable, such as knowledge or use of EBIPs. Leenders (2002) provides the standard reference for peer influence studies in social network analysis. Within these peer influence models, an individual’s opinion or behavior depends on the individual’s own intrinsic characteristics (i.e., covariates) as well as on their interactions with peers. Here, we explore two models, which reflect differences in underlying mechanism about how influence occurs. In the “network effects” model, an individual’s opinion (i.e., knowledge or use of EBIPs) is based on their own intrinsic opinion (determined by covariate values) as well as the opinions of those with whom they communicate. In the context of this study, that would appear as a faculty member’s (i.e., the individual’s) teaching practices becoming more like those with whom they discuss teaching. A faculty member who regularly uses EBIPs would always decrease in their use if their discussion partners used EBIPs less than that faculty member. In the “network disturbances” model, an individual again has an intrinsic opinion. However, here, comparison with peers causes that individual to adjust their behavior based on how others deviate from expectations, ultimately modifying their opinion in the same direction that others deviate from what is predicted by their covariates. In the context of this study, faculty members would have a shared tendency to depart from the opinion predicted by their attributes, rather than having directly correlated opinions. A faculty member who regularly uses EBIPs could increase their use even when speaking to faculty who had a lower use of EBIPs, as long as the faculty they spoke to deviated positively from their predicted use.

In the current study, we aimed to test predictions about the characteristics of faculty social networks hypothesized to influence instructional change efforts. We also aimed to explore how a faculty member’s knowledge and use of EBIPs relate to the knowledge and use of their peers. We conducted data collection in biology and chemistry departments at three large public research universities in the USA. In particular, we sought to answer the following research questions:

  1. What is the frequency and relative strength of ties among faculty in each of these STEM departments?

  2. How diverse are ties among faculty overall and within STEM fields?

  3. Do faculty of higher rank (i.e., potential opinion leaders by virtue of status) tend to occupy more central positions across the sample?

  4. Does a faculty member’s knowledge and use of EBIPs correlate with that of their discussion partners across the sample?

  5. What peer influence model best describes the relationship between faculty social networks and their teaching practices across the sample?

Methods

Aims, design, and setting

To address our research questions about the role of networks in relation to teaching, we developed and used an online survey to gather data from faculty. The three institutions are located in different regions of the country and vary in what teaching development resources are available to faculty (e.g., centers for teaching and learning, workshops, professional development programs, mentoring programs). The universities also have different department leadership structures and promotion and tenure practices. As such, they reflect a range of department and institutional dynamics. Each of these institutions has administrators and faculty interested in systemic change in STEM education with a particular focus on understanding the factors that lead to greater EBIP adoption, which partially motivated the focus on these institutions.

Characteristics of participants

Survey recipients were all full-time permanent faculty in biology and chemistry departments during the 2015–2016 academic year. In one institution, the discipline of biology is housed in two departments, both of which were included in the study.

Table 1 presents the basic department-level demographics for gender, rank, and tenure status generated from our survey. We include these variables as control variables in our network change models. Because of the diversity of position titles across the three institutions, participants are divided according to tenure status, with all non-tenurable positions (such as “lecturer” or “instructor”) in a non-tenure track category. Tenurable positions are divided into untenured (assistant professor) and tenured (associate and full professor). We also report department size and response rates in Table 1. Response rates ranged from 40 to 80% per department, which is comparable to or somewhat higher than rates reported in recent studies of university STEM departments (e.g., Henderson et al., 2019).

Table 1 Department demographics

Processes and methodologies employed

The project was introduced at respective department meetings, and invitations to the survey were subsequently emailed to faculty. We developed and administered the survey in Qualtrics in the 2016 spring semester. The survey consisted of a consent form, a question about academic rank, prompts to generate information on participant social networks within and outside their department, and questions to measure the extent of their own EBIP knowledge and use.

We used the roster method to generate networks within departments. Faculty were asked to choose from a complete list of peers in their department with whom they discussed matters in each of three domains within the last year, with no limit to the number of individuals selected within their department. The faculty list was limited to those who had a teaching assignment during the last academic year and did not include postdoctoral researchers, graduate students, or other staff. The discussion domains included teaching activities (e.g., teaching strategies, student learning, grading, student achievement), research activities (e.g., your research topics, their research topics, mutual collaborations, funding opportunities), and general department and university affairs (e.g., course scheduling, administrative policies, faculty governance). These domains were selected based on the standard appointments and official responsibilities of faculty within these departments and do not necessarily represent completely distinct domains. We used the name generator approach to elicit ties outside of departments and outside of the university. Respondents could list up to seven names outside their department and another seven outside their university. These outside ties were not included in statistical models due to challenges in collecting their demographic characteristics. Finally, respondents were asked to select the three to five individuals with whom they discussed matters in each domain most frequently. Networks generated in each of these domains were combined to form a multiplex network with variable tie strength.
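As an illustration of how the three domain networks could be combined into a single tie-strength measure, the sketch below assumes each domain has already been coded as a binary adjacency matrix; the object names and toy data are ours, not from the study instrument. A teaching tie takes strength 1, 2, or 3 depending on how many additional domains it co-occurs with.

```r
# Hypothetical sketch: combine three binary adjacency matrices (rows = nominators,
# columns = nominees) into an integer-valued tie-strength matrix for teaching ties.
set.seed(1)
n <- 10  # toy department size
teach    <- matrix(rbinom(n * n, 1, 0.2), n, n); diag(teach)    <- 0
research <- matrix(rbinom(n * n, 1, 0.2), n, n); diag(research) <- 0
affairs  <- matrix(rbinom(n * n, 1, 0.3), n, n); diag(affairs)  <- 0

# Strength is defined only where a teaching tie exists:
# 1 = teaching only, 2 = teaching + one other domain, 3 = teaching + both.
strength <- teach * (1 + research + affairs)

table(strength[teach == 1])  # counts of uniplex, double-, and triple-stranded teaching ties
```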

The final section of the survey included two questions about EBIPs, defined on the survey as “active learning techniques, such as just-in-time teaching, peer instruction, think-pair-share, cooperative learning, team-based learning, and many others.” One question asked respondents about their knowledge of EBIPs on a 5-point scale from “extremely knowledgeable” to “not knowledgeable at all,” and a second question asked about use of EBIPs on a 5-point scale from “used extensively” to “not used at all” with a sixth option of “No courses I teach are appropriate courses for EBIP application.” In the analysis, this sixth category was merged with the fifth category.
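As an illustration of the scoring, the recoding of the sixth response option into the fifth category might look like the following; the response labels are paraphrased from the survey and the variable names are ours.

```r
# Hypothetical recoding of the EBIP-use item: responses 1 ("used extensively")
# through 5 ("not used at all"), with option 6 ("no appropriate courses")
# merged into category 5 for analysis.
use_raw <- c(2, 5, 6, 3, 1, 6, 4)        # toy responses
use     <- ifelse(use_raw == 6, 5, use_raw)
table(use)
```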

We used automated email follow-ups to enhance response rates in the survey. We closed the survey after three follow-ups. Data collection and analysis was reviewed and approved by the Institutional Review Boards at Boise State University (935-SB16-056), University of Nebraska-Lincoln (16000), and University of South Florida (Pro00025701).

Statistical analyses

Descriptive analysis of faculty social networks was conducted in R (R Core Team, 2014) using R’s general statistical utilities for procedures like t tests and ANOVA and using the package sna written specifically for social network analysis, which includes the routine to fit peer influence models (Butts, 2008). Descriptive analyses were used to address the first four research questions and identify any differences between groups.

The analysis of social influence processes used linear network autocorrelation models, a variant of regression analysis, to explore two potential mechanisms by which an individual’s knowledge and use of EBIPs can become correlated with that of their discussion partners. In the “network effects” model, an individual’s opinion (i.e., EBIP score) is based on their own covariates and adjusted to be more like that of the people connected to them. In the “network disturbances” model, an individual’s opinion is based on their own covariates and adjusted based on how the opinions of their peers differ from what is expected based on covariates alone. The linear equations for these two effects formalize these points. Using Leenders’ (2002) notation, Eq. 1 below specifies the network effects model, Eq. 2 the network disturbances model, and Eq. 3 a model that combines both effects.

$$ y = \rho W y + X\beta + \varepsilon $$
(1)
$$ y = X\beta + \varepsilon, \quad \varepsilon = \rho W \varepsilon + \nu $$
(2)
$$ y = \rho_1 W_1 y + X\beta + \varepsilon, \quad \varepsilon = \rho_2 W_2 \varepsilon + \nu $$
(3)

In these equations, y is a vector of opinions, with the ith element being person i’s opinion, and X is a matrix of individual covariates (the columns), with the ith row corresponding to person i’s values on the covariates. The correlation effect through networks is captured by the ρ1 and ρ2 coefficients: the term W1y captures the weighting of partners’ opinions on a respondent’s opinion, and the term W2ε captures the weighting of partners’ deviations from their expected opinions on a respondent’s deviation from his or her expected opinion. The deviations themselves are denoted by ε and ν, with ε being the first-order deviation modeled by the network disturbances process and ν the residual deviation remaining once the correlated disturbance is accounted for.

Central to both models are the W matrices, which capture the weight accorded partners’ opinions or deviations. We have two ways to specify the W matrix: according to the simple binary adjacency matrix or according to the tie strength adjacency matrix. Regardless of which specification is chosen, the W matrix is row normalized to capture the assumption that a respondent accords influence to partners in proportion to the tie strength to each. In the first specification, all ties have the same strength, but in the second, strength varies. Formally, the W matrices can be defined as shown in Eq. 4 and 5.

$$ W_{ij}=\frac{X_{ij}}{\Sigma_j X_{ij}}\ \mathrm{for}\ X_{ij}\in \{0,1\} $$
(4)
$$ W_{ij}=\frac{S_{ij}}{\Sigma_j S_{ij}}\ \mathrm{for}\ S_{ij}\in \{0,1,2,3\} $$
(5)
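A minimal sketch of the row normalization in Eqs. 4 and 5, using a three-actor toy example; an actor who names no one keeps a zero row rather than dividing by zero.

```r
# Toy binary teaching matrix and corresponding tie-strength matrix
# (actor 3 nominated no one).
teach    <- rbind(c(0, 1, 1),
                  c(1, 0, 0),
                  c(0, 0, 0))
strength <- rbind(c(0, 3, 1),
                  c(2, 0, 0),
                  c(0, 0, 0))

# Row-normalize so each respondent's outgoing weights sum to 1 (zero rows stay zero).
row_normalize <- function(m) sweep(m, 1, pmax(rowSums(m), 1), "/")

W_unweighted <- row_normalize(teach)     # Eq. 4: each nominee weighted equally
W_weighted   <- row_normalize(strength)  # Eq. 5: weight proportional to tie strength
W_weighted                               # row 1 weights: 0.75 and 0.25
```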

Table 7 presents the results of the analysis for each specification of the weight matrix and for each of the three models, that is, network effects only, network disturbances only, and both.
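Models of this form can be fit with the lnam routine in the sna package (Butts, 2008). The sketch below is a schematic of that workflow on simulated data with a hypothetical gender covariate; it is not the study’s analysis script, and the reported estimates would come from the fitted objects’ summaries.

```r
library(sna)

# Toy data standing in for one department.
set.seed(2)
n <- 30
W <- rgraph(n, tprob = 0.15)                  # toy directed discussion network
W <- sweep(W, 1, pmax(rowSums(W), 1), "/")    # row-normalize (Eq. 4)
gender <- rbinom(n, 1, 0.3)                   # hypothetical covariate
X <- cbind(intercept = 1, gender = gender)
y <- 3 - 0.5 * gender + rnorm(n)              # hypothetical EBIP-use scores

fit_effects      <- lnam(y, x = X, W1 = W)          # network effects only (Eq. 1)
fit_disturbances <- lnam(y, x = X, W2 = W)          # network disturbances only (Eq. 2)
fit_combined     <- lnam(y, x = X, W1 = W, W2 = W)  # both processes (Eq. 3)

summary(fit_disturbances)  # coefficient estimates and fit statistics for comparison
```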

A key question is the effect of response rates on the peer influence analysis. If individuals who were commonly nominated by respondents as teaching discussion partners did not respond, data on their knowledge and use are missing and that impacts the assessment of peer influence. However, if the non-respondents were never nominated as discussion partners (i.e., have an indegree of zero), then their absence has no effect on assessing peer influence. In our data, the number of nominations received by non-respondents is significantly lower than the number received by respondents (3.21 vs 5.59, t(193.95) = − 5.47, p < 0.001) and over half of the non-respondents received two or fewer nominations (20 received 0, 18 received 1, and 9 received 2). Furthermore, Huisman’s simulation study concludes that in many cases, simple imputation is inferior to ignoring missing data and that reciprocity is stable with up to 40% missing data (Huisman, 2014).

Results

Strength of social relationships

First, we report the frequency and relative strength of ties for each of the departments in this study. Table 2 reports the percent of ties that are reciprocated in each department. For benchmarking purposes, data from a law firm studied by Lazega (2001) are presented. These data have been widely used in social network analysis as a benchmark for research of a substantive (Lazega & van Duijn, 1997) and methodological nature (Snijders, Pattison, Robins, & Handcock, 2006). In the law firm data, percent strong (i.e., reciprocated) ties vary from 24 to 53%. In the seven departments in our study, the percent strong (reciprocated) ties for the teaching discussion relation vary from 33 to 65%, percentages that are comparable to the law firm benchmarks.
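For readers wishing to reproduce this kind of tally, the share of ties that are returned (edgewise reciprocity) can be computed directly with the sna package used for our analyses; the adjacency matrix below is a toy stand-in, not a department from the study.

```r
library(sna)

# Toy directed teaching discussion network (1 = row actor nominated column actor).
g <- rbind(c(0, 1, 1, 0),
           c(1, 0, 0, 0),
           c(0, 0, 0, 1),
           c(0, 0, 1, 0))

# Proportion of ties that are reciprocated (edgewise reciprocity): 4 of 5 here.
grecip(g, measure = "edgewise")
```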

Table 2 Relative frequency of strong ties defined as reciprocated nominations

Another way to examine the strength of ties is to assess the multiplexity of connections, which has been used as a measure of tie strength since Fischer (1982) (see also Blumstein & Kollock, 1988). In our survey, we asked respondents about the discussions they had with colleagues in the domains of teaching, research, and university affairs. Survey prompts generated three separate networks, which can be summed to form an integer-valued measure of tie strength. Uniplex teaching ties comprise the weakest type; the next strongest are ties that have a teaching tie and one of the others; and the strongest type is a teaching tie that co-occurs with both of the other types of ties. Table 3 shows that overall, only 26% of the teaching ties are uniplex (“weak”), 42% occur in two domains (“moderate”), and 32% are multiplex across all three domains (“strong”). Additionally, teaching ties paired with department affairs ties are three times as common as teaching ties paired with research ties.

Table 3 Count and percent of multiplex teaching ties

The distribution of multiplexity types varies significantly across the departments as the large value of χ2 indicates. Specifically, Bio1 and Bio3 have significantly more and Chem2 and Chem3 have significantly fewer of the strongest type of tie, the triple-stranded connection, than expected by chance. Also, Bio1, Bio3, and Chem3 have proportionately more teaching ties co-occurring with departmental affairs ties than expected by chance (Table 3).

Diversity of social relationships

Second, we assessed the diversity of ties between faculty overall and within fields. Based on a homophily analysis for gender, rank, and tenure status, in most cases, τ (the inbreeding homophily coefficient, which ranges from 0.0 to 1.0) is significant but small relative to scores found in other studies (Table 4, Additional file 1: Tables S1–S3 present the complete results of the homophily analysis for teaching discussions). For example, Skvoretz (2013) reports homophily scores for over 50 cases in which the ties are marriage, dating, and cohabitations, and the grouping variables are ethno-racial group, education, and religion. Those homophily scores vary from 0.085 to 0.853 with a mean score of about 0.450. In our data, the largest value—the most homophily—occurs for the chemistry departments with respect to tenure status (Table 4). Indeed, inbreeding homophily as measured is noticeably stronger in the chemistry departments than in the biology departments.

Table 4 Summary of homophily overall and by field

A second sense of tie diversity is whether teaching ties reach out beyond the local context of the department. We examine this notion of diversity by analyzing the naming of discussion partners outside the department but inside the university (nt.u) and also partners outside the university (nt.o); we note that a maximum of seven names could be given to each prompt. We know of no other data set that can serve as a benchmark, but we can explore variation in these extra-departmental ties across our seven departments. Additional file 1: Figure S1 displays the distributions of these variables. By far, the most common response is to give no names in response to each prompt. Across the sample, a conventional (paired) t test yields a statistically significant difference (at the 0.05 level) between the means of nt.u (M = 1.63, SD = 2.16) and nt.o (M = 1.08, SD = 1.92), t(131) = 3.11, p = 0.002. Although it is possible that different fields may have different practices regarding teaching consultations, the comparisons of fields on their means of nt.u (Biology mean 1.70, SD = 2.09; Chemistry mean 1.55, SD = 2.24; t(128.75) = 0.39, ns) and nt.o (Biology mean 1.04, SD = 1.82; Chemistry mean 1.11, SD = 2.02; t(127.65) = − 0.19, ns) indicate no statistically significant difference between these chemistry and biology departments. Thus, not surprisingly, respondents name more teaching contacts outside their department but in their university than outside their university, and the hypothesis that the fields do not differ in their reaching out cannot be rejected.
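The comparison of within-university and outside-university contact counts is a paired t test over the same respondents. Schematically, with simulated counts in place of the survey responses:

```r
# Hypothetical counts of extra-departmental teaching contacts per respondent
# (capped at seven by the survey instrument).
set.seed(6)
nt.u <- pmin(rpois(132, 1.6), 7)   # outside department, inside university
nt.o <- pmin(rpois(132, 1.1), 7)   # outside university

t.test(nt.u, nt.o, paired = TRUE)  # paired comparison across the same respondents
```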

Roles of key players

Third, we determined if faculty of higher rank tend to occupy more central positions. Table 5 presents the mean indegree (number of people who nominated a given individual) of teaching discussion nominations by rank and by tenure status. There is variation over categories, but ANOVA (controlling for non-independence in indegree scores) shows that indegree centrality is not associated with organizational position (either rank [F(3, 128) = 1.002, p = 0.394] or tenure status [F(2, 129) = 1.331, p = 0.268]) across the sample. Thus, individuals of a certain rank or tenure status are not nominated more often as teaching discussion partners than those of any other rank or tenure status.

Table 5 Centrality and organizational position

Each department’s score on indegree centralization, a department-level metric that measures the amount of variation in individual centrality relative to the maximum possible, is significantly higher than expected by chance indicating that ties within these departments tend to favor certain individuals rather than being evenly spread across the networks. The centralization score varies theoretically in the interval [0, 1] with higher numbers being more centralized (Anderson, Butts, & Carley, 1999). Scores for the seven departments Bio1, Bio2, Bio3, Bio4, Chem1, Chem2, and Chem3 are 0.444, 0.472, 0.457, 0.314, 0.321, and 0.207, respectively. Taken together, these analyses of potential key players suggest that targeting central actors for innovation seeding is superior to selection on the basis of high-status organizational position.
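Both quantities reported here, individual indegree and department-level indegree centralization, are standard computations in the sna package used for our analyses; the calls below are a sketch on a toy network rather than the study data.

```r
library(sna)

# Toy directed teaching discussion network.
set.seed(3)
g <- rgraph(8, tprob = 0.25)

indeg <- degree(g, cmode = "indegree")  # nominations received by each actor
indeg

# Freeman-style indegree centralization for the whole network
# (0 = ties evenly spread, 1 = maximally concentrated on one actor).
centralization(g, degree, cmode = "indegree")
```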

Knowledge and use of EBIPs

Fourth, our survey asked respondents about their knowledge and use of EBIPs, which we use to assess if knowledge or use of EBIPs correlates between discussion partners across the sample. Additional file 1: Figure S2 shows the distribution of responses. The middle category is the most frequent on both knowledge and use but with clear variation on both questions (Additional file 1: Figure S3). We first examine whether knowledge and use are related to individual attributes like gender, rank, and tenure status to determine what control variables are required for the peer influence models; second, we assess whether an individual’s knowledge and use are correlated with the levels of knowledge and use of their teaching discussion partners; and third, we propose and estimate specific models of peer influence.

Gender, rank, and tenure status define different groups among which average scores on knowledge and use may differ. For the first dimension, a simple t test can be used; for the others, ANOVA. Results for the two EBIP items are found in Table 6. For the first item, knowledge of EBIPs, none of the three dimensions makes a significant difference. For the second item, use of EBIPs (in which a lower score indicates greater use), gender makes a difference but not rank or tenure status. Specifically, women (n = 42) have greater use of EBIPs than men (n = 90), which is shown by a lower score on the use item, and the difference is significant at 0.05. Because the difference in use is significant between genders, we also calculated the magnitude of the effect using Hedges’ g (0.44), which is recommended for t tests with unequal sample sizes. This finding implies gender should be included as an individual-level factor in the specific models of peer influence.
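A sketch of the gender comparison and effect size calculation, using simulated scores in place of the survey responses; the Hedges’ g formula is written out directly rather than taken from an add-on package.

```r
# Hypothetical EBIP-use scores (lower = more use), simulated for illustration only.
set.seed(4)
use_women <- rnorm(42, mean = 2.6, sd = 1.0)
use_men   <- rnorm(90, mean = 3.1, sd = 1.1)

t.test(use_women, use_men)   # Welch t test of the gender difference

# Hedges' g: standardized mean difference with pooled SD and small-sample correction.
hedges_g <- function(a, b) {
  na <- length(a); nb <- length(b)
  sp <- sqrt(((na - 1) * var(a) + (nb - 1) * var(b)) / (na + nb - 2))
  d  <- (mean(b) - mean(a)) / sp
  d * (1 - 3 / (4 * (na + nb) - 9))   # small-sample correction factor
}
hedges_g(use_women, use_men)
```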

Table 6 EBIP scores and individual attributes

The second step determines if a respondent’s knowledge and use of EBIPs are related to that of their teaching discussion partners. We have two measures of the tie sent: first, a binary indicator for the presence/absence of a teaching discussion tie, and second, a count of the multiplexity of the teaching discussion. In the first case, everyone nominated by a respondent has equal influence potential. In the second case, those connected by additional strands (research and/or departmental affairs discussion) are presumed to have stronger ties and a greater influence potential.

We can first assess whether there is any relationship between a respondent’s knowledge and use and that of their partners before estimation of specific models. To do this, we calculate two averages: one where partners are weighted equally and one where partners are weighted proportional to tie strength, and then correlate these averages with the respondent’s knowledge and use scores. These correlations from across the sample are presented in Additional file 1: Table S4. First, knowledge and use are correlated to a substantial degree (0.77) meaning that those reporting more knowledge about EBIPs also report more use of them. Second, the correlations between the respondent’s response and the average response of their peers are small but significant, and this confirms the value of estimating specific peer influence models. The correlation between a respondent’s score and average of their partners is 0.24 for the knowledge item and 0.23 for the use item, and these correlations are similar irrespective of whether partner averages are unweighted or weighted. For both EBIP items, the two versions of partner average are highly correlated: for knowledge, the correlation is 0.98, and for use, it is 0.98. These correlations suggest the more complicated measure of tie strength does not differ much from the simple measure and either has about the same relationship to a respondent’s attitude. Sociograms of the unweighted and weighted teaching discussion networks in each department visually demonstrate the substantial similarity between the two (Additional file 1: Figure S3).
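The unweighted and weighted partner averages are simply the products of the row-normalized weight matrices with the response vector. A schematic of that computation, using toy objects of our own construction, is:

```r
# Hypothetical objects: 'use' is a vector of EBIP-use scores, and the W matrices
# are the row-normalized binary and tie-strength matrices from Eqs. 4 and 5.
set.seed(5)
n <- 12
teach    <- matrix(rbinom(n * n, 1, 0.25), n, n); diag(teach) <- 0
strength <- teach * sample(1:3, n * n, replace = TRUE)
row_norm <- function(m) sweep(m, 1, pmax(rowSums(m), 1), "/")
W_unweighted <- row_norm(teach)
W_weighted   <- row_norm(strength)
use <- sample(1:5, n, replace = TRUE)

partner_mean_unweighted <- as.vector(W_unweighted %*% use)
partner_mean_weighted   <- as.vector(W_weighted %*% use)

cor(use, partner_mean_unweighted)                    # respondent vs. partner average
cor(partner_mean_unweighted, partner_mean_weighted)  # similarity of the two averages
```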

Peer influence

Fifth, we assessed which peer influence model best describes the relationship between faculty social networks and their teaching practices across the sample. Gender was chosen as an individual covariate for peer influence analysis, since previous analysis indicated a gender effect on the EBIP variables. Gender is entered as a binary dummy variable, so the constant refers to the effect of the reference category, male, on the dependent variable. Table 7 provides the results from three models of peer influence: network effects, network disturbance, and a combined model.

Table 7 Peer influence models

With respect to attribute effects, gender affects use but not knowledge across models. Female respondents report more use of EBIPs (lower scores from the reference category), and the difference is statistically significant at the 0.05 level under both the weighted and unweighted specifications of the influence matrix. While female respondents also report consistently more knowledge, none of the differences reach statistical significance. These results for gender are consistent with earlier findings by Henderson, Dancy, and Niewiadomska-Bugaj (2010).

With respect to the peer influence process itself, there are three general findings. First, the correlation between a respondent’s knowledge and use and the knowledge and use scores of their partners is more consistent with a “network disturbances” process than a “network effects” process. The values of the network effects coefficient ρ1 are not significantly different from 0, while the values of the network disturbances coefficient ρ2 are all positive and statistically significant at the 0.05 level. Thus, the respondent-partner correlation arises from a shared tendency to depart from the opinion predicted by individual attributes rather than adjustment in response to the opinions themselves. In plain words, two individuals tied in the teaching discussion network have positively correlated scores on knowledge and positively correlated scores on use because being connected means they move in the same direction relative to the scores predicted from attributes alone. This is different from network effects, where the average of partners’ scores affects a respondent’s score directly. Second, the AIC measure (i.e., an estimate of model quality relative to other models that considers both goodness of fit and model simplicity) shows that using the more complicated weighted matrix does not improve fit over the simpler option of the unweighted matrix. As a result, including the strength of a teaching discussion tie as determined by its overlap with the two other types of ties (e.g., research and departmental affairs) does not produce a better fitting model than one based on the simple existence of a teaching discussion tie. Third, the simplest model that provides the best fit according to AIC is the network disturbances only model. There is no evidence for a network effects process (either alone or in a combined model) producing correlation in the knowledge and use of EBIPs by individuals tied in the discussion network.

Discussion

The current research is motivated by the idea that successfully answering the calls for STEM reform requires understanding faculty social networks and the part they can play in institutional change initiatives. Studies of faculty networks in higher education are limited in number, so our work adds to a growing empirical base. Including multiple institutions and departments allows us to observe our results in multiple contexts. We focus analysis on variation in tie strength and diversity, the relationship between indegree centrality and organizational position, and the influence that peers may have on one another with respect to key concerns of STEM reform efforts, specifically, the knowledge and use of EBIPs. Our study is the first to investigate peer influence models in this context. By analyzing our data with these models, we can consider how teaching reform initiatives might leverage peer influence processes. Future work could test these models in additional contexts and purposefully compare across institutions and departments while achieving sufficient response rates.

First, we find that teaching discussion ties are often strong ties by two measures: reciprocity-based and multiplexity-based. The presence of strong ties indicates that the departments studied may be conducive to behavioral changes (Centola, 2018) suggesting good prospects for instructional reform. Second, the teaching discussion ties are diverse. They link faculty in different demographic (gender) and organizational groups (rank and tenure status). Teaching discussion ties also occur to colleagues outside the local department, another form of diversity. We find that departments in our study vary in tie diversity (i.e., heterophily), which serves as further encouragement for exploration about the relationship between tie diversity and diffusion of new practices, as suggested by Kezar (2014). Third, opinion leadership, defined here as indegree centrality, in the teaching discussion network is not associated with faculty rank or tenure status. Support of organizationally higher ranked faculty does not provide advantage to change efforts because these individuals are no more likely to be opinion leaders than lower ranked faculty (at least in a network sense). However, variation in centrality means targeting opinion leaders may be a useful strategy. These opinion leaders might be explicitly supported to function similarly to the CoP mentors studied by Ma et al. (2018) and Mestre et al. (2019). Finally, there is evidence of peer influence on knowledge and use of EBIPs, supporting the motivating premise that faculty social networks matter to change initiatives. The specific type of peer influence, however, is complex: faculty do not adjust their knowledge and use to match that of their discussion partners but instead adjust when their partners are more or less knowledgeable or more or less engaged in use than predicted for those partners.

This work has relevance for leaders of instructional change initiatives. Assuming the seven departments studied across three institutions are representative, the existence of strong ties suggests potential levers for change, and the diversity of ties indicates further opportunities. These two positive change indicators may reflect that each research site includes groups of STEM faculty who are interested in systemic change in STEM education. The effect caused by the presence of these faculty could explain why this result is different than that predicted by Kezar (2014). Other universities with less supportive climates may fit the expected pattern of mostly weak and not diverse ties.

Indegree centrality, or how highly connected an individual is within a network, is not associated with academic rank in these departments. On the one hand, those of higher organizational rank are not more likely to be central or influential. If the support of higher ranks is essential to success, this finding does not bode well because these individuals do not necessarily have greater network access. On the other hand, there is variation in indegree centrality. Targeting central actors who do have a high indegree centrality to begin seeding innovation would be superior to selecting people at random or, in this case, those with high academic ranks. An implication is that prior to any change initiative, agents could use a social network approach as part of a set of strategies to determine leverage points within an institution or department. However, this finding does not mean that academic rank has no bearing on instructional change. Indeed, individuals of high academic rank may still influence departmental teaching practices through network-independent mechanisms, such as by setting precedent through their own instructional reputation, influencing teaching climate based on comments at faculty meetings, or affecting policy-level decisions. As recommended in Knaub et al. (2018), change agents should draw from multiple sources of data including SNA to identify those most likely to be harnessed to promote change.

Our peer influence analysis reinforces the importance of teaching discussions among faculty. In particular, we provide the first empirical investigation that suggests the peer influence process among undergraduate science faculty within academic departments follows a network disturbance model. According to this model, the teaching practices of connected individuals vary together likely based on a joint response to factor(s) not accounted for in their personal attributes. This response is not necessarily a conscious behavior, and it could arise in individuals together based on discussions or separately based on shared experiences related to their ties. While the model cannot pinpoint a specific causative factor, we can hypothesize a process by which network disturbance could have arisen. For example, if a department changed how teaching was evaluated (e.g., through the development of new criteria), an individual might be inclined to change based on discussions. Alternatively, if instructional designers worked preferentially with introductory-level instructors and these instructors tend to talk with each other, they could produce correlated EBIP scores (even though the instructors may not directly discuss the causative factor). There is no evidence for a network effects mechanism which, assuming positive autocorrelation, would produce a convergence of opinions towards the average of the opinions of discussion partners.

The network disturbance model also has key implications for change agents. Our data suggest that instructional change does not arise simply from individuals seeking to conform their teaching to those of their colleagues (i.e., the network effects model). Rather, individuals appear to be inclined to adjust their teaching practices based on how others deviate from predictions. Thus, the goal of change agents should not be to recruit individuals who implement high levels of EBIPs in hopes that others will seek to emulate this behavior. Change agents could instead seek to create opportunities for positive growth, which would appear in the network as correlated change between discussion partners. This finding has significance for the change movement, because it implies that individuals can be affected by a positive change among any of their discussion partners, regardless of their partners’ absolute EBIP implementation levels.

Other considerations

There are several limitations that arise from working with a small number of departments as well as from SNA in general. First, the amount of data collected depends on survey response rates, and descriptive results such as the homophily and indegree centrality analyses must be tempered accordingly.

Second, we are logistically limited to a modest number of departments and faculty. Given the relatively small size of the average STEM department and the reliance on survey responses, homophily and indegree centrality analyses cannot be performed at the department level because of small sample sizes. Furthermore, there are no specific hypotheses as to why and how departments may differ. Performing analyses at the department level would therefore be unmotivated by research questions.

Third, models of peer influence assume that the structure of the network affects the opinion or deviation thereof. However, we cannot claim a causal relationship between the network disturbance model and knowledge and use of EBIPs. We have shown evidence that is consistent with the network influencing this knowledge and use, but it is not conclusive. On a related note, we cannot provide conclusive evidence that the network disturbances model is the best fit to the data. Rather, the network disturbances model is consistent with our data and more supported than the network effects model.

Finally, the self-report data on knowledge and use of EBIPs were not confirmed through data triangulation for this research. While some studies have questioned the validity of faculty self-reported survey data (e.g., Ebert-May et al., 2011), other evidence suggests that faculty responses can align closely with those of a third-party observer regarding the frequency of active learning in a course (Durham et al., 2018). An intriguing question for future investigation is whether peer influence results differ based on whether teaching practices are self-reported versus measured by a third-party observer.

Conclusions

SNA has great potential for answering core questions about how information and change spread throughout a department, but also can be used as a tool by change initiatives to better understand departmental dynamics and thereby purposefully design change strategies best suited for that environment. For example, in the chemistry and biology departments at these three institutions, change agents can expect that faculty may serve as opinion leaders regardless of their academic rank. Also, faculty in these departments can be influenced to increase their knowledge and use of EBIPs, even if their discussion partners have comparatively lower EBIP knowledge and use. Future studies should strive to capture departmental diversity considering disciplines, institutions, and the presence or absence of already occurring change initiatives to continue to answer fundamental questions about the dynamics of STEM departments.

Availability of data and materials

The data sets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

Assoc:

Associate Professor

Asst:

Assistant Professor

CoP:

Communities of practice

df:

Degrees of freedom

EBIP:

Evidence-based instructional practice

ns:

Not significant

nt.o:

Number of discussion partners outside the university

nt.u:

Number of discussion partners inside the university

SNA:

Social network analysis

tt:

Tenure track

References

  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.

  • Anderson, B. S., Butts, C., & Carley, K. (1999). The interaction of size and density with graph-level indices. Social Networks, 21(3), 239–267.

  • Andrews, T. C., Conaway, E. P., Zhao, J., & Dolan, E. L. (2016). Colleagues as change agents: How department networks and opinion leaders influence teaching at a single research university. CBE-Life Sciences Education, 15(2), ar15.

  • Andrews, T. C., & Lemons, P. P. (2015). It’s personal: Biology instructors prioritize personal evidence over empirical evidence in teaching decisions. CBE-Life Sciences Education, 14(1), ar7.

  • Austin, A. (2011). Promoting evidence-based change in undergraduate science education: A white paper commissioned by the National Academies National Research Council Board on Science Education.

  • Balkundi, P., & Harrison, D. A. (2006). Ties, leaders, and time in teams: Strong inference about network structure’s effects on team viability and performance. Academy of Management Journal, 49(1), 49–68.

  • Banerjee, A., Chandrasekhar, A. G., Duflo, E., & Jackson, M. O. (2013). The diffusion of microfinance. Science, 341(6144), 1236498.

  • Basov, N., & Brennecke, J. (2017). Duality beyond dyads: Multiplex patterning of social ties and cultural meanings. In P. Groenewegen, J. E. Ferguson, C. Moser, J. W. Mohr, & S. P. Borgatti (Eds.), Research in the sociology of organizations: Structure, content and meaning of organizational networks: Extending network thinking 53 (pp. 87–112). Greenwich: JAI Press.

  • Blumstein, P., & Kollock, P. (1988). Personal relationships. Annual Review of Sociology, 14(1), 467–490.

  • Borgatti, S. P., Everett, M. G., & Freeman, L. C. (2002). UCINET for Windows: Software for social network analysis. Harvard: Analytic Technologies.

  • Borrego, M., Froyd, J. E., & Hall, T. S. (2010). Diffusion of engineering education innovations: A survey of awareness and adoption rates in us engineering departments. Journal of Engineering Education, 99(3), 185–207.

  • Brownell, S. E., & Tanner, K. D. (2012). Barriers to faculty pedagogical change: Lack of training, time, incentives, and… tensions with professional identity? CBE-Life Sciences Education, 11(4), 339–346.

  • Butts, C. T. (2008). Social network analysis with sna. Journal of Statistical Software, 24(6), 1–51.

  • Centola, D. (2018). How behavior spreads: The science of complex contagions. Princeton: Princeton University Press.

  • Daempfle, P. A. (2006). The effects of instructional approaches on the improvement of reasoning in introductory college biology: A quantitative review of research. Bioscene: Journal of College Biology Teaching, 32(4), 22–31.

  • Daly, A. (Ed.). (2010). Social network theory and educational change. Cambridge, MA: Harvard Education Press.

  • Duke, J. B. (1993). Estimation of the network effects model in a large data set. Sociological Methods & Research, 21(4), 465–481.

  • Durham, M. F., Knight, J. K., Bremers, E. K., DeFreece, J. D., Paine, A. R., & Couch, B. A. (2018). Student, instructor, and observer agreement regarding frequencies of scientific teaching practices using the Measurement Instrument for Scientific Teaching-Observable (MISTO). International Journal of STEM Education, 5(1), 31.

  • Durham, M. F., Knight, J. K., & Couch, B. A. (2017). Measurement Instrument for Scientific Teaching (MIST): a tool to measure the frequencies of research-based teaching practices in undergraduate science courses. CBE-Life Sciences Education, 16(4), ar67.

    Article  Google Scholar 

  • Ebert-May, D., Derting, T. L., Hodder, J., Momsen, J. L., Long, T. M., & Jardeleza, S. E. (2011). What we say is not what we do: Effective evaluation of faculty professional development programs. BioScience, 61(7), 550–558.

    Article  Google Scholar 

  • Eckmann, J. P., & Moses, E. (2002). Curvature of co-links uncovers hidden thematic layers in the World Wide Web. Proceedings of the National Academy of Sciences USA, 99(9), 5825–5829.

    Article  Google Scholar 

  • Edwards, R. (1999). The academic department: How does it fit into the university reform agenda? Change: The Magazine of Higher Learning, 31(5), 16–27.

    Article  Google Scholar 

  • Fischer, C. S. (1982). To dwell among friends: Personal networks in town and city. Chicago: University of Chicago Press.

    Google Scholar 

  • Friedkin, N. (1980). A test of structural features of Granovetter’s strength of weak ties theory. Social Networks, 2(4), 411–422.

    Article  Google Scholar 

  • Friedkin, N. (1982). Information flow through strong and weak ties in intraorganizational social networks. Social Networks, 3(4), 273–285.

    Article  Google Scholar 

  • Gess-Newsome, J., Southerland, S. A., Johnston, A., & Woodbury, S. (2003). Educational reform, personal practical theories, and dissatisfaction: The anatomy of change in college science teaching. American Educational Research Journal, 40(3), 731–767.

    Article  Google Scholar 

  • Gimpel, J., & Schuknecht, J. (2003). Political participation and the accessibility of the ballot box. Political Geography, 22(5), 471–488.

    Article  Google Scholar 

  • Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.

    Article  Google Scholar 

  • Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., et al. (2004). Scientific teaching. Science, 304(5670), 521–522.

    Article  Google Scholar 

  • Handelsman, J., Miller, S., & Pfund, C. (2006). Scientific teaching. New York: W.H. Freeman & Company, in collaboration with Roberts & Company Publishers.

    Google Scholar 

  • Harrison, F., Sciberras, J., & James, R. (2011). Strength of social tie predicts cooperative investment in a human social network. PLoS One, 6(3), e18338.

    Article  Google Scholar 

  • Henderson, C., Beach, A., & Finkelstein, N. (2011). Facilitating change in undergraduate stem instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48(8), 952–984.

    Article  Google Scholar 

  • Henderson, C., Dancy, M., & Niewiadomska-Bugaj, M. (2012). Use of research-based instructional strategies in introductory physics: Where do faculty leave the innovation-decision process? Physical Review Special Topics-Physics Education Research, 8(2), 020104.

    Article  Google Scholar 

  • Henderson, C., & Dancy, M. H. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Physics Education Research, 3(2), 020102.

    Article  Google Scholar 

  • Henderson, C., & Dancy, M. H. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Physical Review Physics Education Research, 5(2), 020107.

    Article  Google Scholar 

  • Henderson, C., Dancy, M. H., & Niewiadomska-Bugaj, M. (2010). Variables that correlate with faculty use of research-based instructional strategies. In C. Singh, M. Sabella, & S. Rebello (Eds.), Proceedings of the 2010 AAPT physics education research conference (p. 169). Melville: American Institute of Physics.

    Google Scholar 

  • Henderson, C., Rasmussen, C., Knaub, A., Apkarian, N., Daly, A. J., & Fisher, K. Q. (Eds.). (2019). Researching and enacting change in postsecondary education: Leveraging instructors’ social networks (Vol. 28). Routledge.

  • Huisman, M. (2014). Imputation of missing network data: some simple procedures. Encyclopedia of Social Network Analysis and Mining, 707–715.

  • Kezar, A. (2014). Higher education change and social networks: A review of research. The Journal of Higher Education, 85(1), 91–125.

    Article  Google Scholar 

  • Knaub, A. V., Henderson, C., & Quardokus Fisher, K. (2018). Finding the leaders: An examination of social network analysis and leadership identification in STEM education change. International journal of STEM education, 5(1), 26.

    Article  Google Scholar 

  • Larsen, J. M., & Lewis, J. I. (2017). Ethnic networks. American Journal of Political Sciences, 61(2), 350–364.

    Article  Google Scholar 

  • Lazega, E. (1998). Social networks and relational structures. Paris: University Presses of France.

    Google Scholar 

  • Lazega, E. (2001). The collegial phenomenon: The social mechanisms of cooperation among peers in a corporate law partnership. Oxford, UK: Oxford University Press.

    Book  Google Scholar 

  • Lazega, E., & van Duijn, M. (1997). Position in formal structure, personal characteristics and choices of advisors in a law firm: A logistic regression model for dyadic network data. Social Networks, 19(4), 375–397.

    Article  Google Scholar 

  • Leenders, R. T. A. J. (2002). Modeling social influence through network autocorrelation: Constructing the weight matrix. Social Networks, 24(1), 21–47.

    Article  Google Scholar 

  • Lund, T. J., & Stains, M. (2015). The importance of context: An exploration of factors influencing the adoption of student-centered teaching among chemistry, biology, and physics faculty. International Journal of STEM Education, 2(1), 13.

    Article  Google Scholar 

  • Ma, S., Herman, G. L., Tomkin, J. H., Mestre, J. P., & West, M. (2018). Spreading teaching innovations in social networks: The bridging role of mentors. Journal for STEM Education Research, 1(1-2), 60–84.

    Article  Google Scholar 

  • Marsden, P. V., & Campbell, K. E. (1984). Measuring tie-strength. Social Forces, 63, 482–501.

    Article  Google Scholar 

  • Marsden, P. V., & Friedkin, N. E. (1993). Network studies of social influence. Sociological Methods & Research, 22(1), 127–151.

    Article  Google Scholar 

  • Mathews, K. M., White, M. C., Soper, B., & von Bergen, C. W. (1998). Association of indicators and predictors of tie-strength. Psychological Reports, 83, 1459–1469.

    Article  Google Scholar 

  • McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415–444.

    Article  Google Scholar 

  • Memic, H. (2009). Testing the strength of weak ties theory in small educational social networking websites. ITI 2009 31st International Conference on Information Technology Interfaces (pp. 273-278). IEEE.

  • Mestre, J. P., Herman, G. L., Tomkin, J. H., & West, M. (2019). Keep your friends close and your colleagues nearby: The hidden ties that improve STEM education. Change: The Magazine of Higher Learning, 51(1), 42–49.

    Article  Google Scholar 

  • Mizruchi, M. S., Stearns, L. B., & Marquis, C. (2006). The conditional nature of embeddedness: A study of borrowing by large US firms, 1973-1994. American Sociological Review, 71(2), 310–333.

    Article  Google Scholar 

  • O’Malley, A. J., & Marsden, P. V. (2008). The analysis of social networks. Health Services and Outcomes Research Methodology, 8, 222–269.

    Article  Google Scholar 

  • Perlman, D., & Fehr, B. (1987). The development of intimate relationship. In D. Perlman & S. Duck (Eds.), Intimate relationships (pp. 13–42). Newbury Park: Sage.

    Google Scholar 

  • Petróczi, A., Nepusz, T., & Bazsó, F. (2007). Measuring tie-strength in virtual social networks. Connections, 27(2), 39–52.

    Google Scholar 

  • Pollock, S. J., & Finkelstein, N. D. (2008). Sustaining educational reforms in introductory physics. Physical Review Physics Education Research, 4(1), 010110.

    Article  Google Scholar 

  • President’s Council of Advisors on Science and Technology. (2012). Report to The President -- Engage to excel: Producing one million additional college graduates with degrees in science, technology, engineering, and mathematics.

    Google Scholar 

  • Quardokus Fisher, K., & Apkarian, N. (2019). Instructor discussion networks across 22 STEM departments. In C. Henderson, C. Rasmussen, A. V. Knaub, N. Apkarian, K. Quardokus Fisher, & A. J. Daly (Eds.), Researching and Enacting Change in Postsecondary Education (pp. 106–134). Routledge.

  • Quardokus, K., & Henderson, C. (2015). Promoting instructional change: Using social network analysis to understand the informal structure of academic departments. Higher Education, 70(3), 315–335.

    Article  Google Scholar 

  • R Core Team. (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. URL http://www.R-project.org.

  • Schroeder, C. M., Scott, T. P., Tolson, H., Huang, T. Y., & Lee, Y. (2007). A meta-analysis of national research: Effects of teaching strategies on student achievement in science in the United States. Journal of Research in Science Teaching, 44(10), 1436–1460.

    Article  Google Scholar 

  • Shadle, S. E., Marker, A., & Earl, B. (2017). Faculty drivers and barriers: Laying the groundwork for undergraduate STEM education reform in academic departments. International Journal of STEM Education, 4(1), 8.

    Article  Google Scholar 

  • Skvoretz, J. (2013). Diversity, integration, and social ties: Attraction versus repulsion as drivers of intra- and intergroup relations. American Journal of Sociology, 119(2), 486–517.

    Article  Google Scholar 

  • Snijders, T. A. B., Pattison, P. E., Robins, G. L., & Handcock, M. S. (2006). New specifications for exponential random graph models. Sociological Methodology, 36(1), 99–153.

    Article  Google Scholar 

  • Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., et al. (2018). Anatomy of STEM teaching in North American universities. Science, 359(6383), 1468–1470.

    Article  Google Scholar 

  • Tenkasi, R. V., & Chesmore, M. C. (2003). Social networks and planned organizational change: The impact of strong network ties on effective change implementation and use. The Journal of Applied Behavioral Science, 39(3), 281–300.

    Article  Google Scholar 

  • Valente, T. W. (1995). Network models of the diffusion of innovations. Cresskill: Hampton Press.

    Google Scholar 

  • Wieman, C., Perkins, K., & Gilbert, S. (2010). Transforming science education at large research universities: a case study in progress. Change: The Magazine of Higher Learning, 42(2), 6–14.

    Article  Google Scholar 

  • Wise, K. C., & Okey, J. R. (1983). A meta-analysis of the effects of various science teaching strategies on achievement. Journal of Research in Science Teaching, 20(5), 419–435.

    Article  Google Scholar 

Funding

This work was funded by NSF DUE grants to Boise State University (1726503), the University of Nebraska-Lincoln (1726409), and the University of South Florida (1726330). This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746051 and by the National Science Foundation under Grant No. 1849473. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Author information

Contributions

AKL was a major contributor to writing and editing the manuscript. JS performed the analysis, worked on the experimental design, and was a major contributor to writing the manuscript. JZ, BAC, JEL, LBP, SES, and MS provided major feedback on the analysis, contributed to the writing, and worked on the experimental design. BE and JDM contributed significant feedback on the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to J. Skvoretz.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Supplemental Materials includes additional tables and figures. (DOCX 7701 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Lane, A.K., Skvoretz, J., Ziker, J.P. et al. Investigating how faculty social networks and peer influence relate to knowledge and use of evidence-based teaching practices. IJ STEM Ed 6, 28 (2019). https://doi.org/10.1186/s40594-019-0182-3

Keywords