- Open Access
Finding the leaders: an examination of social network analysis and leadership identification in STEM education change
© The Author(s). 2018
- Received: 21 November 2017
- Accepted: 23 May 2018
- Published: 20 June 2018
Social network analysis (SNA) literature suggests that leaders should be well connected and can be identified through network measurements. Other literature suggests that identifying leaders ideally involves multiple methods. However, it is unclear whether SNA alone is sufficient for identifying leaders for higher education change initiatives. We used two sets of data, teaching discussion network data collected at three different times and respondent nominations for leaders, to determine whether these two methods identify the same individuals as leaders.
Respondent-nominated leaders have more direct and indirect ties on average than non-leaders, which aligns with the SNA literature. However, when looking at individuals as leaders, many respondent-nominated leaders would not be identified using SNA because they are poorly connected. Also, many individuals who were not nominated would have been considered leaders because they are well connected. Further examining these results did not indicate why there is such a difference between the SNA-identified and respondent-nominated leaders.
While these two methods identify some of the same individuals as leaders, there are many differences between the two methods. Using just one method may not be adequate for ensuring that suitable individuals are selected to lead these projects. We recommend multiple methods when selecting leaders.
- Social network analysis
- Higher education change
- Higher education leadership
STEM change initiatives, multiple projects undertaken to broadly improve STEM education, have been part of the landscape to improve teaching practices in STEM higher education for decades. The success of these resource-intensive endeavors relies on multiple individuals who are considered “champions,” enthusiastic persons who can lead efforts (e.g., Scherr et al. 2014). One suggested way to identify these leaders is using social network analysis (SNA) because it can provide a way to study the underlying, hidden structure of departments (Quardokus and Henderson 2015). However, it is unclear whether SNA data alone are adequate for identifying leaders. This paper builds on Quardokus and Henderson’s work by determining whether SNA data by themselves are adequate for identifying leaders. The data were collected three times over 4 years at a single institution and analyzed to find differences in network position between leaders and non-leaders.
Social network analysis and change in higher education
Researchers from a wide variety of disciplines (e.g., sociology, physics, public health) use social network analysis (SNA) to study relationships among entities (Wasserman and Faust 1994). Relationships can be any type of connection or tie including self-selected ties (e.g., friendships) or ties by circumstances (e.g., kinship, department membership). Entities can be people, groups, or organizations, and they can be referred to by multiple names. For ease of discussion, we use the term “actors” to refer to people within the network.
Much can be discovered from understanding relationships among actors. In K-12 education reform, network structure has been shown to influence change efforts through multiple mechanisms such as teachers providing advice and support to one another (Daly 2010; Coburn et al. 2012; Finnigan and Daly 2012; Penuel et al. 2012). In these networks, teachers having ties to each other contributed to the success of these change efforts.
There are many ways to study network structure. Network metrics can describe either individual actors or the entire network. Common metrics include centralities (i.e., who is the most important actor in the network?), groups (e.g., how large and tightly connected is a group?), and density (i.e., how connected are the actors to one another within the network?). Metrics can also examine how the network changes over time. SNA metrics can be further analyzed using descriptive statistics (e.g., averages) and comparative statistics (e.g., t tests). Attributes of actors in the network can be useful for understanding how different identities can impact networks.
The attribute of interest for this study is leadership because of its critical role in groups. Teams with leaders who are central to team members tend to perform better (e.g., Balkundi and Harrison 2006). Leaders can also play a role in exchanging knowledge and diffusing information between groups (Burt 1999), a dynamic that can be seen in STEM education. For example, Andrews et al. (2016) found that colleagues knowledgeable about education, such as discipline-based education research (DBER) faculty, are often considered opinion leaders by peers interested in changing teaching practices.
Currently, SNA is not often used to study instructional practices or change in higher education. Biancani and McFarland’s (2013) study found only 117 SNA articles related to higher education faculty, with no articles on higher education faculty teaching networks and most on co-authorship or citation networks. Since the Biancani and McFarland article, a few articles that study higher education teaching networks have been published (e.g., Quardokus and Henderson 2015; Andrews et al. 2016). Despite the lack of current use, there have been some calls to use SNA to inform change initiatives (e.g., American Association for the Advancement of Science 2015).
Changing teaching practices in STEM higher education and identifying those to lead efforts
Sustaining changed teaching practices in STEM higher education
Changing teaching practices is a social activity from learning about teaching innovations (Borrego et al. 2010; Dancy and Henderson 2010; Turpen et al. 2016) to implementing the new practice. Work such as American Association for the Advancement of Science’s (AAAS) Vision and Change (American Association for the Advancement of Science 2015) and Association of American Colleges and Universities (AAC&U)’s Project Kaleidoscope (Elrod and Kezar 2014) recommend creating coalitions of instructors and administrators to work on change initiatives. Although many STEM education change initiatives perceive individuals as the unit of change, these change initiatives are believed to be more successful when the entire department is involved (Wieman et al. 2010; Henderson et al. 2011; Zhu and Engels 2013; Foote et al. 2016) in order to gain support from multiple members.
Leadership identification in higher education change
Based on their work on Project Kaleidoscope, Elrod and Kezar (2014) posit that leadership is needed to create widespread change. These leaders act as champions, members of the organization who can garner support for an innovation and overcome challenges that may block adoption of the innovation (Rogers 2003). A successful champion or leader should have skills such as knowing how to create the conditions for change and be able to work with others when implementing change initiatives (e.g., Foote et al. 2016; Knaub et al. 2016).
Finding faculty who are willing to lead efforts can be challenging (American Association for the Advancement of Science 2015). While one could call upon the same individuals who are typically involved, this can create issues: only a small pool of faculty is engaged, which may limit change efforts (Wieman et al. 2010); individuals may grow weary of being called upon (Rogers 2003); and potential leaders may go unmentored by more senior leaders (e.g., Martin and Marion 2005). At the same time, simply selecting anyone to lead is not a solution. Not all individuals want to improve STEM education, and some may actively resist efforts to do so (Wieman et al. 2010; Foote et al. 2016).
Few studies exist in higher education regarding leadership identification. Those that do tend to delve into formal leadership identification rather than informal leadership (e.g., Bisbee’s (2007) exploratory study on administrative leaders such as deans). Because of this research gap, public health literature has informed this study. Public health seeks to change behaviors using a variety of techniques, including having individuals lead initiatives that encourage healthy behaviors (e.g., Kelly et al. 1991; Atkins et al. 2008; Kelly 2004). This is akin to how STEM education initiatives work to change teaching behaviors.
Public health research on programmatic efforts found that organizations identify leaders in many ways including role or title (e.g., an institution names a provost), self-nomination (e.g., individuals nominate themselves as leaders), nomination by members of the group (e.g., individuals nominate others), nomination by formal leaders or staff members working on the program, expert nomination (e.g., staff members nominate known experts in the area), and sociometric measures (e.g., using SNA to determine who is a leader based on some network metric) (Valente and Davis 1999; Valente and Pumpuang 2007).
In a review of literature on health-related leadership, Valente and Pumpuang (2007) found that each type of leadership selection method has pros and cons. For nomination methods, some of the pros include selecting individuals who are trusted by the targeted community and being fairly easy to implement. For nomination methods, some of the cons include selection bias (e.g., selecting only one’s friends regardless of suitability), leaders’ abilities may not be adequate if respondents only vaguely know those they select, volunteer leaders may not be capable of leading, and the nominated person may not be interested in leading. For sociometric methods, the pros include being able to select a leader based on their position in the social structure (e.g., selecting a leader who is highly connected) and typically being able to evaluate leaders based on multiple measures (e.g., selecting a leader who has many direct and indirect ties). The cons include that the methods can be time-consuming and are contingent upon who provides information on the network. They suggest that when possible, using multiple methods is ideal to identify leaders.
Although not ideal, perhaps using one method, such as SNA, is adequate for leadership identification. SNA literature implies that leaders may have networks that are different from non-leaders (Balkundi and Kilduff 2006; Watts and Dodds 2007; Katona et al. 2011). Some studies (e.g., Jonnalagadda et al. 2012; Xu et al. 2014) suggest that leaders can be identified through their metrics, such as centralities, and do not suggest using multiple methods.
Knowing whether SNA data alone are adequate for selecting a leader can better inform SNA use in this area. Given that using SNA to inform higher education change is a relatively new but growing area, providing guidance to novices can support their success in using this tool, which in turn can support the success of their change initiatives. Departments engaged in STEM education change initiatives may already have SNA data and may plan on using these data to identify leaders. If SNA data are not adequate for selecting leaders, alternative methods should be considered.
Summary and research question
Although it is infrequently used in higher education change, SNA shows great potential to deepen our understanding in this area. Changing teaching practices in STEM higher education is a complex process involving the interactions of multiple people. SNA, which is used as a tool to describe social interactions, has a rich body of literature that indicates social interactions influence people’s behaviors.
Leadership identification is one possible way SNA data could be used. Having suitable leaders involved in higher education change who are interacting with others is important. In ideal circumstances, leadership identification is done using multiple methods. However, despite best efforts, ideal practices do not always occur. While triangulation may be ideal, it is unclear how imperfect using one method is. In particular, it is unclear whether each method yields a different set of leaders or many of the same leaders.
Based on this ambiguity, our core research question asks: can SNA metrics alone be used to predict or identify recognized leaders in STEM education change?
The data in this manuscript are part of a large study of a single institution involved with a grant-funded STEM change initiative that began in 2012. The Carnegie classification of this institution is a large, 4-year primarily residential, very high research activity comprehensive doctoral institution. The change initiative features many projects that require social interactions: faculty learning communities, live-learn communities for students that involve multiple teaching staff members, and course reform. The change initiative originally involved five departments but expanded to six by the second year.
Social network data were initially collected as part of a long-term study to see if teaching discussion networks change as the change initiative projects matured and expanded. Data were collected at three different timestamps: spring 2012, spring 2013, and spring 2016.
Table 1 Survey response rates by data collection timestamp (for each of data collections nos. 1–3: individuals invited to the survey and response rate, %)
Regardless of administration year, the survey began with a question asking respondents to select their teaching discussion partners from the academic year. “Teaching discussions” were not defined on the survey, but interview data from Quardokus and Henderson (2015) suggested that these discussions covered course and pedagogy issues as well as course logistics and staffing issues.
Respondents used a drop-down menu to select a maximum of seven teaching discussion partners from a roster. Limiting the number of discussion partners to seven was based on pilot data. The roster was based on the current teaching staff in the department. For timestamps 2 and 3, we also included individuals who were not currently teaching but were listed on prior versions of the survey. This was to see whether they continued to be active in teaching discussions despite having no teaching responsibilities.
The 2016 version of the survey included additional questions related to leadership in STEM education change. Respondents were asked to identify current and potential leaders using a drop-down menu that contained the roster for their department, identical to the drop-down menu for teaching discussion. Respondents were given the opportunity to name additional individuals. We limited the number of responses to five for each type of leader. We did not define current or potential leaders, but we emphasized the prior success and importance of the change initiative to improve STEM education and the need to have individuals lead efforts. For the potential leader question, we emphasized the importance of fostering the next generation of leaders to take over when current leaders step down.
We treated the teaching discussion ties as undirected. For example, suppose individual A selected individual B on a survey. If these data were treated as undirected, it is assumed individual B also selected individual A, regardless of whether individual B did. For leadership identification, many SNA studies treated their data as directed, which means ties are not automatically mutual. Using the prior example, if the data were treated as directed, it is not automatically assumed individual B selected individual A. However, these studies also asked questions regarding who influences the respondent. An advice network would be an example of a directed network. Teaching discussion can be influential but is not necessarily directed.
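The undirected treatment described above can be sketched as follows. This is an illustrative helper, not the authors' actual data pipeline; the function name and data shapes are assumptions:

```python
def undirected_ties(responses):
    """Collapse one-way survey nominations into undirected ties.

    `responses` maps each respondent to the discussion partners they
    selected; a tie A-B exists if either A named B or B named A.
    (Illustrative sketch, not the study's actual processing code.)
    """
    ties = set()
    for ego, alters in responses.items():
        for alter in alters:
            # frozenset makes the pair orderless, so A-B and B-A coincide
            ties.add(frozenset((ego, alter)))
    return ties

# A names B, but B names no one: one undirected tie A-B still results.
print(undirected_ties({"A": ["B"], "B": []}))
```

With directed treatment, by contrast, the tie A→B would exist without implying B→A.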
The data presented only include respondents and actors that respondents selected. If a member of the department did not take the survey and was not named by a respondent, the member was not included in the network because their ties in the network are unknown.
Degree centrality counts the number of ties an actor has (Wasserman and Faust 1994). Someone with high degree centrality may be able to diffuse information more broadly than someone with low degree centrality. In Fig. 1, the red middle actor has two ties while each blue actor has one tie. For this study, degree centrality is the total number of ties an actor has.
Reach centrality counts the number of actors that the ego, the actor of interest, can contact within k ties. This metric suggests the possibility of messages traveling between actors (Wasserman and Faust 1994). We anticipated that this would be an important measure because the ego’s ideas can travel indirectly to more actors. In Fig. 1, the red actor has a two-step reach of three because it has two direct ties plus one tie that can be reached in two steps.
We used a specific type of reach centrality, two-step centrality. This can be thought of as an ego’s ties plus the friends of friends. While studies have indicated that behavioral influence can occur as far as three steps away (e.g., Christakis and Fowler 2007; Christakis and Fowler 2009), an ego is only aware of the activities of their direct ties and ties two steps away (e.g., friends of friends) (Friedkin 1983). Because those leading change initiatives should be knowledgeable of actors in their departments, we used two-step centrality rather than three-step centrality.
Because each year and department had different numbers of actors, we used relative measures for each centrality. Relative measures make comparison possible when the number of actors varies significantly from network to network (Scott 2012). Relative measures were found by dividing a measure of an individual actor by n - 1 where n is the total number of actors in the network. For example, suppose department X has 11 members. If one actor has six ties, relative degree centrality of this actor is 0.6.
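As a concrete sketch, the two relative centralities can be computed as below on a toy four-actor chain. The network, actor names, and function names are illustrative assumptions, not the study's data or code:

```python
def adjacency(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def relative_degree(adj):
    # divide each actor's tie count by the n - 1 possible ties
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def relative_two_step_reach(adj, k=2):
    # actors reachable within k steps (excluding the ego), over n - 1
    n = len(adj)
    reach = {}
    for v in adj:
        seen, frontier = {v}, {v}
        for _ in range(k):
            frontier = {u for w in frontier for u in adj[w]} - seen
            seen |= frontier
        reach[v] = (len(seen) - 1) / (n - 1)
    return reach

# Toy chain A-B-C-D: B has 2 of 3 possible ties (relative degree ~0.67)
# and reaches A, C, and D within two steps (two-step reach = 1.0).
adj = adjacency([("A", "B"), ("B", "C"), ("C", "D")])
print(relative_degree(adj))
print(relative_two_step_reach(adj))
```

In practice, packages such as NetworkX provide the degree computation directly (its `degree_centrality` already normalizes by n - 1).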
As mentioned earlier, these metrics can be analyzed like other social science data. As we were interested in comparisons among groups (i.e., leaders and non-leaders), we used ANOVAs to determine if there were any statistically significant differences among groups. ANOVAs were used instead of t tests because, as we will discuss in the next section, we had three categories. We used post hoc tests to determine which two groups had statistically significant differences. Post hoc Tukey tests were used when variance was equal and Games-Howell when it was not.
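The group comparison can be illustrated with a hand-rolled one-way ANOVA F statistic. In practice one would use a statistics package (e.g., `scipy.stats.f_oneway` and `scipy.stats.tukey_hsd`); the sample values below are made up for illustration:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    grand = [x for g in groups for x in g]
    grand_mean = sum(grand) / len(grand)
    means = [sum(g) / len(g) for g in groups]
    # between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # within-group sum of squares, df = N - k
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = len(groups) - 1, len(grand) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Made-up scores for three well-separated groups of actors
leaders, maybe_leaders, non_leaders = [1, 2, 3], [11, 12, 13], [21, 22, 23]
print(one_way_anova_f([leaders, maybe_leaders, non_leaders]))  # -> 300.0
```

A large F relative to the F distribution with (k - 1, N - k) degrees of freedom indicates that at least one pair of groups differs, which is what the post hoc tests then localize.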
Definitions of leadership categories
- Leader: actor who has received 2+ nominations as a leader (current or potential)
- Both leader: actor who has received an almost equal number of nominations in the current and potential categories (maximum absolute difference between the two categories is 1)
- Current leader: actor who has received 2+ nominations as a leader, with the majority of nominations as a current leader (2+ more nominations as a current leader than as a potential leader)
- Potential leader: actor who has received 2+ nominations as a leader, with the majority of nominations as a potential leader (2+ more nominations as a potential leader than as a current leader)
- Maybe leader: actor who has received only one nomination as a leader
- Non-leader: actor who has received no nominations as a leader
Initially, we examined our data for two categories: non-leaders and leaders. Actors who were not nominated as leaders were considered non-leaders. We were interested whether actors were genuinely leaders and assumed that the more nominations an actor received, the more likely that an actor was truly a leader. However, we did not want to simply categorize actors who received few nominations as non-leaders and hypothesized they may be different from non-leaders and leaders. Looking at frequencies of leadership nominations, we found that the largest break was between 1 and 2 nominations. Thus, we categorized those with one nomination as maybe leader and those with 2+ nominations as leaders.
Our original categories, current and potential, also needed to be modified. Some individuals were only nominated within one category (e.g., respondents only selected actor A as a current leader), making it clear in which category the respondents thought the actor belonged. Many other actors were nominated at least once in each category, making it less clear in which category they belonged.
To determine in which category an actor belonged, we first found the absolute difference between these two categories. We subtracted the nominations for current leader from the nominations for a potential leader and took the absolute value. For example, if an actor received six nominations as a current leader and seven for a potential leader, the absolute difference would be 1. To create the both category, we looked at the frequency of absolute differences to determine where the largest break in the data was. Actors who were nominated equally for current and potential, as well as actors who had an absolute difference of 1 were placed in the both category. If the difference was greater than 1, actors were placed in the majority category. For example, an actor who was nominated three times as a potential leader and five times as a current leader would be considered a current leader.
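The categorization rules above can be expressed as a small decision function. This is a sketch of our reading of the rules, not the authors' published code; the function name is an assumption:

```python
def categorize(current, potential):
    """Assign a leadership category from nomination counts.

    Rules as described in the text: no nominations -> non-leader;
    exactly one nomination -> maybe leader; with 2+ nominations, a
    current/potential absolute difference of at most 1 -> both leader,
    otherwise the majority category wins.
    """
    total = current + potential
    if total == 0:
        return "non-leader"
    if total == 1:
        return "maybe leader"
    if abs(current - potential) <= 1:
        return "both leader"
    return "current leader" if current > potential else "potential leader"

# The text's examples: 6 current / 7 potential nominations -> both leader
# (difference of 1); 5 current / 3 potential -> current leader.
print(categorize(6, 7))  # -> both leader
print(categorize(5, 3))  # -> current leader
```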
The primary limitation of this study is the data may be incomplete. Departmental response rates range from 28 to 61% over the three timestamps (see Table 1). Departmental networks represent 46 to 87% of the possible actors. This means that some actors may not appear in the data even though they were involved in teaching discussion. For the 2016 version of this survey, there may be more leaders within each department but those who would nominate them did not take the survey.
Non-responsiveness is common with SNA using survey data (Žnidaršič et al. 2012; Morris and Deckro 2013). While there are various techniques (e.g., exponential random graph models, imputation methods that add ties to the network) to attempt to rectify this issue, there is currently no standard method for handling missing data (Stork and Richards 1992; Morris and Deckro 2013). Methods to rectify missing data can also introduce other issues as they are built on assumptions that ties would exist for non-respondents (Žnidaršič et al. 2012; Morris and Deckro 2013).
One suggested method for communication networks is to consider the data undirected (Stork and Richards 1992). This is the method we chose. The caveats are that the respondents and non-respondents should be similar in demographics and that the type of communication is undirected (e.g., discussion is undirected, advice may be directed) (Stork and Richards 1992). As seen in Table 1, respondents and non-respondents are present in each department.
Table: Comparison of respondents and non-respondents (columns: N respondents involved; N non-respondents involved; N respondents not involved; N non-respondents not involved)
Another limitation is that this study focused on one institution engaged in change initiative activities that either directly (e.g., faculty learning communities) or indirectly (e.g., course reform) encourage faculty and staff to communicate. These findings may be different for institutions not engaged in a change initiative. Network metrics may also be different by institution type (e.g., a small liberal arts college). These data should be considered the beginning of this line of work; further research at both similar and different institutions is needed to determine what is typical and whether teaching discussion network metrics vary by institution type or other variables.
Three-hundred fifteen actors appear as a teaching discussion partner in at least one of the timestamps. Respondents nominated a total of 150 individuals (47.6% of all actors in the network) as leaders within their departments. The average number of times an individual was nominated as a leader is 4 ± 1.1. In four of the six departments, nine respondents nominated themselves as leaders. All self-identified leaders except for one were nominated multiple times by other respondents.
Leadership category sizes: non-leaders (N = 165), maybe leaders (N = 57), and leaders (N = 93); within the leaders category: current leaders (N = 32), potential leaders (N = 29), and both leaders (N = 32).

Table: Descriptive statistics of individuals identified as leaders and maybe leaders by department (reported as % of each department's roster)
Department B has the highest percentage of nominated leaders (40%) while department F has the lowest (18.3%). Department F’s low percentage of leaders may be due to a lower response rate for the 2016 survey; we do not know why their response rate was lower than the other departments in 2016 or lower than their 2012 response rate. Department B’s response rate was comparable to the other departments. There may be other factors at play, such as department size (e.g., the department is small enough so that members are aware of their colleagues’ STEM education leadership activities) or department culture (e.g., the department encourages STEM education leadership).
We analyzed the data using ANOVAs to see if there were any differences among departments in the two centralities (degree and reach). There were no statistically significant differences at p < 0.05.
Can SNA metrics be used to predict or identify recognized leaders in STEM education reform?
Social network metrics: ANOVA results comparing leaders, maybe leaders, and non-leaders (mean ± SE; *p < 0.05)

Data collection no. 1
- Degree centrality: F(2, 57) = 8.30*; leaders 0.18 ± 0.02 (N = 42); maybe leaders 0.09 ± 0.02 (N = 24); non-leaders 0.1 ± 0.009 (N = 75)
- Two-step reach centrality: F(2, 138) = 7.75*; leaders 0.48 ± 0.04 (N = 42); maybe leaders 0.30 ± 0.04 (N = 24); non-leaders 0.31 ± 0.03 (N = 75)

Data collection no. 2
- Degree centrality: F(2, 72) = 26.7*; leaders 0.19 ± 0.14 (N = 61); maybe leaders 0.1 ± 0.01 (N = 31); non-leaders 0.08 ± 0.007 (N = 134)
- Two-step reach centrality: F(2, 223) = 23.5*; leaders 0.56 ± 0.03 (N = 61); maybe leaders 0.38 ± 0.04 (N = 31); non-leaders 0.32 ± 0.02 (N = 134)

Data collection no. 3
- Degree centrality: F(2, 106) = 22.8*; leaders 0.17 ± 0.01 (N = 83); maybe leaders 0.1 ± 0.01 (N = 42); non-leaders 0.08 ± 0.007 (N = 82)
- Two-step reach centrality: F(2, 204) = 25.3*; leaders 0.54 ± 0.02 (N = 83); maybe leaders 0.41 ± 0.03 (N = 42); non-leaders 0.33 ± 0.02 (N = 82)

Presence in network at least once at each timestamp (max. 3)
- F(2, 128) = 2.65 (not significant); leaders 2 ± 1 (N = 93); non-leaders 2 ± 1 (N = 165); maybe leaders 2 ± 1 (N = 57)
We were also interested in whether actors appear in all three timestamps, regardless of how many ties they have. Our hypothesis was that leaders would appear in more timestamps than maybe leaders and non-leaders and that simply being present in multiple timestamps was as important as an actor’s number of ties (i.e., degree centrality) or indirect ties (i.e., reach centrality). This perhaps could be another way of identifying leaders using SNA. Actors in all three categories appeared the same number of times. This suggests that simply being present in the network does not make one a leader.
Can early SNA data predict leaders?
False SNA positive: those who were not nominated as leaders on the survey but have higher than average centralities.
False SNA negative: those who were nominated as leaders on the survey but have lower than average centralities.
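These two definitions can be sketched as a simple classification against an above-average centrality cutoff. This is a hypothetical helper for illustration; the paper's actual comparison considered both degree and reach centralities:

```python
def sna_agreement(centrality, mean_centrality, nominated):
    """Compare the SNA cutoff (above-average centrality) with nominations.

    Returns 'false SNA positive' for well-connected actors whom nobody
    nominated, 'false SNA negative' for nominated leaders with
    below-average centrality, and 'agreement' otherwise.
    (Hypothetical helper mirroring the definitions in the text.)
    """
    well_connected = centrality > mean_centrality
    if well_connected and not nominated:
        return "false SNA positive"
    if not well_connected and nominated:
        return "false SNA negative"
    return "agreement"

# A nominated leader whose relative degree (0.05) is below the network
# mean (0.10) would be missed by SNA alone:
print(sna_agreement(0.05, 0.10, nominated=True))  # -> false SNA negative
```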
Accuracy of leadership identification via degree and reach centralities
False SNA negatives (% of total nominated leaders)
Leaders identified by both SNA data and nominations (% of total nominated leaders)
False SNA positives (% of leaders identified from SNA)
13 of 25 (52%)
17 of 38 (44.7%)
15 of 44 (34.1%)
39 of 81 (48.1%)
6 of 33 (18.2%)
16 of 69 (23.2%)
Do recent SNA data provide better leadership identification?
Degree and reach centrality
Are some leadership categories more prone to false negatives?
False negatives broken down by leadership category
Total false negatives from the SNA data
Are respondents identifying leaders based on title, and does that lead to false SNA negatives or false SNA positives?
One concern regarding respondent-nominated leaders was that respondents would select those who have formal leadership titles (e.g., dean, department chair) and overlook those who are leading but lack a title. Perhaps the false SNA negative and false positive issues were due to title-identified leaders. Those with titles may not be as active in teaching discussion because they have other responsibilities and thus, this would lead to false negatives. Alternatively, those with formal titles could also not be considered leaders in STEM education change by their departments but still interact often regarding teaching as a result of their title, leading to false positives.
We examined the leaders category to see how many actors with formal leadership titles pertinent to education were also respondent-nominated leaders. These titles include department chair, director of undergraduate studies, director of graduate studies, course coordinator, named leader for the change initiative, and membership on a committee related to education. Using each department’s website, we found 36 individuals with one of these titles. Twenty were identified as leaders. Four were considered maybe leaders. All six chairs were nominated as some type of leader.
This result suggests that although title-designated leaders were often respondent-nominated leaders, merely having a title is not sufficient to be nominated as a leader. Twelve individuals (33%) with relevant titles were not identified as a leader or maybe leader by respondents; these twelve individuals also were not false negatives or false positives. Respondents also nominated many leaders without titles. Of the 93 leaders, 69 (74%) do not have a formal leadership title.
This study sought to examine whether teaching discussion social network data alone can be used to identify leaders for STEM education change initiatives. We found that SNA data yield a different set of leaders than respondent nomination. In the following sections, we discuss these results and their implications for using SNA to inform change initiatives.
Discussion: can SNA metrics be used to predict or identify leaders in STEM education reform?
The SNA data do depict the expected distribution: many actors who are nominated as leaders in STEM education change work tend to have many direct ties (i.e., high degree centrality) as well as a high total of direct and “friends of friends” ties (i.e., high two-step reach centrality). Many actors who are not considered leaders tend to have lower metrics. However, this is not universally true. We saw a non-trivial number of false SNA negatives, those who were nominated as leaders by survey respondents but would not have been identified merely by SNA data. We also saw a number of false SNA positives, those who were not nominated as leaders by survey respondents but would have been identified as leaders based on SNA data.
Further analysis of the false SNA negatives did not yield much more insight. Within the three categories of leaders (current leaders, potential leaders, and both leaders), many of the false SNA negatives were in the both leaders category. Being categorized as both leaders may have been because respondents interacted less with those actors and thus were less sure how to categorize them. However, a non-trivial number were in the current leaders and potential leaders categories for each timestamp. Potential leaders may have had fewer ties because they were newer to change work or to the department. In the future, they may have enough ties to indicate they are leaders. However, it is curious why those perceived as current leaders would be false negatives. We assumed that actors who were perceived as current leaders would be active in teaching discussion.
We also considered whether respondents simply selected title-designated leaders. We thought this could have led to false SNA positives (e.g., their jobs required them to be active in teaching discussion, even if they were not considered leaders) or false negatives (e.g., those with titles do not have the time to be active in the teaching discussion network due to their job responsibilities). Title-designated leaders did not contribute to false SNA positives or false negatives. Some title-designated leaders were not nominated leaders.
Reflecting on the data, we considered how response rates could impact the results. While we took care to mitigate this issue (i.e., by treating the data as undirected), response rates might still affect the results. Recall that part of the network could be missing entirely if no one from that section responded. Had more respondents answered the survey, perhaps the false positives would have been nominated as leaders or the false negatives would have had higher centralities. However, it is also possible that the false positives and false negatives would persist even with a nearly perfect response rate.
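The mitigation mentioned above, treating directed survey reports as undirected ties, can be sketched as follows with hypothetical respondents and reports. Symmetrizing means a nonrespondent still appears in the network through the ties that others report with them.

```python
# Directed survey reports: (respondent, colleague they discuss teaching with).
# Actor "N" never responded but is named by others (hypothetical data).
reports = [("A", "B"), ("B", "A"), ("A", "N"), ("C", "N")]

# Treat a tie as present if either party reported it (symmetrize);
# frozenset deduplicates reciprocal reports like (A, B) and (B, A)
undirected_edges = {frozenset(pair) for pair in reports}

ties = {}
for pair in undirected_edges:
    u, v = tuple(pair)
    ties.setdefault(u, set()).add(v)
    ties.setdefault(v, set()).add(u)

print(sorted(ties["N"]))  # prints ['A', 'C']: N keeps two ties despite not responding
```

Note the limitation the text describes: if neither endpoint of a tie responds, symmetrizing cannot recover it, so whole sections of the network can still be missing.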
The response rate issue could be perceived as a flaw in this study. However, in Andrews et al.'s (2016) study of similar teaching discussion networks, response rates from four departments ranged from 33 to 63%. Our response rates are comparable, suggesting the issue is not unique to our study. The practical implication is that even if survey-generated teaching discussion SNA data with a perfect response rate were suitable for identifying leaders, obtaining a perfect response rate is unlikely in practice.
While these results may seem to contradict both the SNA literature on leaders and our own finding that respondent-nominated leaders have higher metrics than non-leaders on average, we note that we are ultimately interested in identifying individuals as leaders. The SNA studies and our results in Table 5 report averages, and the details of individuals can get lost in averages. When using such data to select individuals, researchers should collect and analyze the data so that individuals can readily be found. In our case, plotting the data as histograms was useful for seeing the degree and two-step reach centralities of individual actors.
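A text-based version of such a histogram might look like the following sketch. The degree values and nominations are hypothetical; the point is that individual outliers are visible in a way that averages hide.

```python
from collections import defaultdict

# Hypothetical degree values per actor, with respondent-nominated leaders flagged
degrees = {"A": 9, "B": 7, "C": 2, "D": 8, "E": 1, "F": 6, "G": 2}
nominated = {"A", "C", "F"}  # respondent-nominated leaders

# Group actors by degree; mark nominated leaders with *
rows = defaultdict(list)
for actor, deg in degrees.items():
    rows[deg].append(actor + ("*" if actor in nominated else ""))

for deg in sorted(rows, reverse=True):
    print(f"{deg:2d} | {' '.join(sorted(rows[deg]))}")

# A starred actor at low degree (here C*) is a "false SNA negative";
# an unstarred actor at high degree (here D) is a "false SNA positive".
```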
Because selecting leaders is important to the success of a change initiative, these data suggest using multiple methods of identifying leaders, as Valente and Pumpuang (2007) recommended. Based on the literature on changing teaching practices in STEM higher education, we know that change leaders should possess relevant skills (e.g., understanding how to overcome barriers) and be well connected. If leaders in STEM education initiatives were selected using network data alone, there is a non-trivial chance of overlooking a leader or of selecting someone whom departmental members do not recognize as a leader. If leaders were selected based on respondent nomination alone, the chosen leader may lack the ties that are useful for spreading ideas or gaining new perspectives. Either scenario could hinder change efforts and threaten success.
Beyond leadership identification, a similar study could serve as a tool for leadership planning. If nominated leaders are not well connected, those working on the change initiative could devise mechanisms to help connect them. Likewise, there may be individuals who are well connected and interested in leading STEM education projects but who are not nominated as leaders; it may be useful to consider why they are not nominated.
Conclusions, recommendations, and further research
- Leaders on average have more ties and reach in their networks than non-leaders.
- SNA yields a somewhat different set of leaders than respondent nominations.
Both methods have advantages: nominations reveal whom departments trust for this work, and SNA reveals who is well connected. Using nominations alone could lead to selecting individuals who are poorly connected; using SNA alone could lead to selecting individuals who are not seen as leaders. Using multiple methods of identification, as we have done, mitigates both issues.
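Combining the two methods amounts to simple set operations on the two candidate lists. The sets below are hypothetical and stand in for the study's SNA-identified and respondent-nominated leaders.

```python
# Hypothetical leader sets produced by the two methods
sna_leaders = {"A", "B", "D"}  # well connected per SNA metrics
nominated = {"A", "C", "F"}    # respondent-nominated leaders

agreed = sna_leaders & nominated            # identified by both methods
false_negatives = nominated - sna_leaders   # nominated but poorly connected
false_positives = sna_leaders - nominated   # well connected but not nominated

print(agreed, false_negatives, false_positives)
```

Actors in the intersection are the strongest candidates; the two difference sets flag exactly the cases where relying on a single method would mislead.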
- Determining further differences among the current, potential, and both leaders categories. There may be aspects these data do not capture. This could help in understanding what experiences potential leaders need to transition into current leaders and why some leaders are not so distinctly categorized.
- Investigating "false SNA positives" and "false SNA negatives." It would be useful to know why some actors have many ties in the teaching discussion network yet are not perceived as leaders and, conversely, why some nominated leaders have few teaching discussion ties.
- Studying how change initiatives affect how respondents nominate leaders. The presence of a change initiative may make leaders more visible and more readily identified by respondents: approximately 30% of all actors in this study were considered leaders of some kind. Institutions without a change initiative, or with a new one, may have fewer nominated leaders. It would also be important to see whether leaders remain visible after a change initiative ends.
- Examining other types of relevant network data in a similar study. As noted earlier, teaching discussion networks are important for instructional change (e.g., Sun et al. 2013), and a teaching discussion network seemed suitable for identifying leaders because we hypothesized that leaders would be more active in discussing teaching. However, other networks in higher education may be better suited for identifying leaders; one example could be a network of which actors participate in which teaching activities (e.g., learning communities).
By continuing to expand our knowledge regarding how leadership operates in STEM education change initiatives, we can support the next generation of STEM education leaders and sustain or even grow current efforts.
Unless otherwise noted, all error measurements are standard error of the mean.
AVK collected the latest data for this study, did the current analysis, and wrote this manuscript. KQF created the initial study and collected data in prior years. Both KQF and CH provided feedback on the manuscript. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- American Association for the Advancement of Science (2015). Vision and change in undergraduate biology education: chronicling change, inspiring the future. Washington, DC: American Association for the Advancement of Science.
- Andrews, TC, Conaway, EP, Zhao, J, Dolan, EL. (2016). Colleagues as change agents: how department networks and opinion leaders influence teaching at a single research university. CBE-Life Sciences Education, 15(2). https://doi.org/10.1187/cbe.15-08-0170.
- Atkins, MS, Frazier, SL, Leathers, SJ, Graczyk, PA, Talbott, E, Jakobsons, L, Bell, CC. (2008). Teacher key opinion leaders and mental health consultation in low-income urban schools. J Consult Clin Psychol, 76(5), 905.
- Balkundi, P, & Harrison, DA. (2006). Ties, leaders, and time in teams: strong inference about network structure’s effects on team viability and performance. Acad Manage J, 49(1), 49–68.
- Balkundi, P, & Kilduff, M. (2006). The ties that lead: a social network approach to leadership. Leadership Q, 17(4), 419–439.
- Biancani, S, & McFarland, DA (2013). Social networks research in higher education. In MB Paulsen (Ed.), Higher Education: Handbook of Theory and Research. Dordrecht: Springer.
- Bisbee, DC. (2007). Looking for leaders: current practices in leadership identification in higher education. Planning and Changing, 38(1/2), 77.
- Borgatti, SP, Everett, MG, Freeman, LC (2002). Ucinet for Windows: Software for Social Network Analysis. Harvard: Analytic Technologies.
- Borrego, M, Froyd, JE, Hall, TS. (2010). Diffusion of engineering education innovations: a survey of awareness and adoption rates in US engineering departments. J Eng Educ, 99(3), 185–207.
- Burt, RS. (1999). The social capital of opinion leaders. Ann Am Acad Pol Soc Sci, 566(1), 37–54.
- Christakis, NA, & Fowler, JH. (2007). The spread of obesity in a large social network over 32 years. New Engl J Med, 357(4), 370–379.
- Christakis, NA, & Fowler, JH (2009). Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives. Little, Brown.
- Coburn, CE, Russell, JL, Kaufman, JH, Stein, MK. (2012). Supporting sustainability: teachers’ advice networks and ambitious instructional reform. Am J Educ, 119(1), 137–182.
- Daly, AJ (2010). Social Network Theory and Educational Change. Cambridge: Harvard Education Press.
- Dancy, MH, & Henderson, C. (2010). Pedagogical practices and instructional change of physics faculty. Am J Phys, 78(10), 1056–1063.
- Elrod, S, & Kezar, A. (2014). Developing leadership in STEM fields: the PKAL summer leadership institute. J Leadersh Stud, 8(1), 33–39.
- Finnigan, KS, & Daly, AJ. (2012). Mind the gap: organizational learning and improvement in an underperforming urban system. Am J Educ, 119(1), 41–71.
- Foote, K, Knaub, A, Henderson, C, Dancy, M, Beichner, RJ. (2016). Enabling and challenging factors in institutional reform: the case of SCALE-UP. Phys Rev Phys Educ Res, 12(1), 010103.
- Friedkin, NE. (1983). Horizons of observability and limits of informal control in organizations. Soc Forces, 62(1), 54–77.
- Henderson, C, Beach, A, Finkelstein, N. (2011). Facilitating change in undergraduate STEM instructional practices: an analytic review of the literature. J Res Sci Teach, 48(8), 952–984.
- Jonnalagadda, S, Peeler, R, Topham, P. (2012). Discovering opinion leaders for medical topics using news articles. J Biomed Semantics, 3(1), 2.
- Katona, Z, Zubcsek, PP, Sarvary, M. (2011). Network effects and personal influences: the diffusion of an online social network. J Mark Res, 48(3), 425–443.
- Kelly, JA. (2004). Popular opinion leaders and HIV prevention peer education: resolving discrepant findings, and implications for the development of effective community programmes. AIDS Care, 16(2), 139–150.
- Kelly, JA, St Lawrence, JS, Diaz, YE, Stevenson, LY, Hauth, AC, Brasfield, TL, Andrew, ME. (1991). HIV risk behavior reduction following intervention with key opinion leaders of population: an experimental analysis. Am J Public Health, 81(2), 168–171.
- Knaub, AV, Foote, KT, Henderson, C, Dancy, M, Beichner, RJ. (2016). Get a room: the role of classroom space in sustained implementation of studio style instruction. Int J STEM Educ, 3(1), 1–22.
- Martin, JS, & Marion, R. (2005). Higher education leadership roles in knowledge processing. Learn Organ, 12(2), 140–151.
- Morris, JF, & Deckro, RF. (2013). SNA data difficulties with dark networks. Behav Sci Terrorism Political Aggress, 5(2), 70–93.
- Penuel, WR, Sun, M, Frank, KA, Gallagher, HA. (2012). Using social network analysis to study how collegial interactions can augment teacher learning from external professional development. Am J Educ, 119(1), 103–136.
- Quardokus, K, & Henderson, C. (2015). Promoting instructional change: using social network analysis to understand the informal structure of academic departments. High Educ, 70(3), 315–335.
- Rogers, EM (2003). Diffusion of Innovations (5th ed.). New York: Free Press.
- Scherr, RE, Plisch, M, Goertzen, RM (2014). Sustaining Programs in Physics Teacher Education: A Study of PhysTEC Supported Sites. College Park, MD: American Physical Society.
- Scott, J (2012). Social Network Analysis: A Handbook (2nd ed.). London: Sage Publications.
- Stork, D, & Richards, WD. (1992). Nonrespondents in communication network studies: problems and possibilities. Group Org Manag, 17(2), 193–209.
- Sun, M, Frank, KA, Penuel, WR, Kim, CM. (2013). How external institutions penetrate schools through formal and informal leaders. Educ Adm Q, 49(4), 610–644.
- Turpen, C, Dancy, M, Henderson, C. (2016). Perceived affordances and constraints regarding instructors’ use of peer instruction: implications for promoting instructional change. Phys Rev Phys Educ Res, 12(1), 010116.
- Valente, TW, & Davis, RL. (1999). Accelerating the diffusion of innovations using opinion leaders. Ann Am Acad Pol Soc Sci, 566(1), 55–67.
- Valente, TW, & Pumpuang, P. (2007). Identifying opinion leaders to promote behavior change. Health Educ Behav, 34(6), 881–896.
- Wasserman, S, & Faust, K (1994). Social Network Analysis: Methods and Applications (vol. 8). Cambridge: Cambridge University Press.
- Watts, DJ, & Dodds, PS. (2007). Influentials, networks, and public opinion formation. J Consum Res, 34(4), 441–458.
- Wieman, CE, Perkins, KK, Gilbert, S. (2010). Transforming science education at large research universities: a case study in progress. Change, 42(2), 6–14.
- Xu, WW, Sang, Y, Blasiola, S, Park, HW. (2014). Predicting opinion leaders in Twitter activism networks: the case of the Wisconsin recall election. Am Behav Sci, 58(10), 1278–1293.
- Zhu, C, & Engels, N. (2013). Organizational culture and instructional innovations in higher education: perceptions and reactions of teachers and students. Educ Manag Adm Leadersh, 42(1), 136–158.
- Žnidaršič, A, Ferligoj, A, Doreian, P. (2012). Non-response in social networks: the impact of different non-response treatments on the stability of blockmodels. Soc Networks, 34(4), 438–450.