Development and application of the Action Taxonomy for Learning Assistants (ATLAs)
International Journal of STEM Education volume 7, Article number: 1 (2020)
The success of the learning assistant (LA) model has largely been attributed to LA facilitation of active learning tasks. A deeper understanding of how LAs facilitate these tasks would inform LA training and support successful adoption of the LA model. Our investigation of LA actions during their interaction with students in the classroom contributes to that understanding. We present and discuss the development of the action taxonomy for learning assistants (ATLAs), as well as illustrate its applicability by presenting some analyses that were conducted on sample data.
The LAs carried out several different actions that we categorized broadly as LA-Directed Facilitation, LA-Guided Facilitation, Advice, Feedback, Course-Related Talk, and Non-Course-Related Talk. LA-Directed Facilitation and LA-Guided Facilitation were the most common types of actions observed. We found that LA actions varied by course.
ATLAs is a tool that can be used to examine LA actions. In our sample data set, LAs undertook many different actions during interactions with students which indicates that LAs play several different roles in the classroom. These findings have practical implications not only for faculty seeking to implement a peer instruction model such as the LA model, but also for instructors wanting to utilize LAs in their courses more effectively. Understanding what the LAs are doing during interactions with students can provide us insight into the different roles that LAs undertake. Knowledge of these roles will guide effective training, feedback, and direction of LAs, particularly during the pedagogy course.
Peer teaching models have increasingly been introduced into undergraduate instructional practices throughout the USA as a way to address the need for a student-centered approach to science, technology, engineering, and mathematics (STEM) education (Henderson & Dancy, 2011). It is well documented that the USA needs improved undergraduate STEM education in order to better prepare its STEM graduates (Talanquer, 2014). These reform efforts aim to support increased critical thinking and authentic scientific practices (Kim, Sharma, Land, & Furlong, 2013). Active learning increases students’ engagement in the learning process and has improved students’ learning outcomes and lowered failure rates compared to classes using traditional lecture methods (Freeman et al., 2014). Peer teaching models can provide a way to facilitate active learning in classrooms (Freeman et al., 2014). Research has consistently shown that the support of peer coaches during active learning activities can help the student to discuss the material more thoroughly (Knight, Wise, Rentsch, & Furtak, 2015). However, despite these encouraging findings, little has been done to understand the peer instruction models from the perspective of the peer instructors. A better understanding of the peer teaching experience, and what peer instructors do to facilitate student learning, will help to better prepare, implement, and sustain peer instructional models.
There are several models of peer teaching that are used in higher education today, including peer helpers, tutors, teaching assistants, and learning assistants (Colvin, 2007; Knight & Wood, 2005; Otero, Pollock, McCray, & Finkelstein, 2006). Faculty, students, and the peer instructors can all benefit from the implementation of a peer learning model (Whipple, 1987). Findings support the notion that cognitive processing of material in these interactions is different from that of learning for assessment performance (Bargh & Schul, 1980). These peer-to-peer interactions can be viewed as tutor-tutee relationships which can lead to the co-construction of knowledge (Chi, 1996). Active learning instructional practices and peer teaching models have consistently increased students’ engagement in the learning process and improved students’ learning outcomes compared to classes using traditional lecture methods (Freeman et al., 2014). However, despite these encouraging findings, little has been done to understand the specific aspects of these interactions and facilitation by peer instructors during active learning tasks. The purpose of this paper is to present a newly developed taxonomy that will enable researchers to identify the different actions that learning assistants undertake during their interactions with students.
The learning assistant (LA) model is a peer instruction model that has been effective in a number of ways, including supporting student learning, faculty and course transformation, institutional change, and teacher preparation. For example, numerous studies have found that students in LA-supported courses experience better learning outcomes, usually measured as learning gains on concept inventories (Nelson, 2010; Pollock, 2009; Sellami, Shaked, Laski, Eagan, & Sanders, 2017; Talbot, Hartley, Marzetta, & Wee, 2015). Other studies have found high student satisfaction with the teaching and learning experiences in LA-supported courses (Talbot et al., 2015). Research has also found that LAs contribute to course transformation and changes in faculty practice. For example, Pollock and Finkelstein (2013) describe the process of curricular change and pedagogical transformation in introductory physics, in which LAs are a central part of the transformation effort. Adoption of the LA model also supports institutional transformation. Also, the LA model has been shown to be a successful teacher preparation mechanism whereby the LAs who went on to be secondary teachers displayed more reform-based teaching practices compared to their peers who had not been LAs (Gray, Webb, & Otero, 2016).
There have also been benefits for the LAs themselves. For example, Close, Conn, and Close (2016) found that LAs develop stronger content mastery and science identities. Top, Schoonraad, and Otero (2018) investigated LAs’ pedagogical knowledge and found that the language of “student ideas” was the pedagogical principle that was most likely to be taken up by LAs over the course of a semester. In their work, “student ideas” refers to prior ideas that students bring into the classroom with them.
Although research on LAs is growing, more needs to be understood about the specific actions that LAs undertake in the classroom to facilitate student learning. There has been some work in this area. For example, Knight et al. (2015) looked at what LAs do to support student learning by examining how cues from LAs before clicker questions impacted student discourse. When students interacted with LAs, there were significant changes in the students’ use of questioning and duration of discussion. They also identified that different LA prompts elicited specific student responses. Particularly, questioning prompts elicited more student reasoning, but explanation prompts resulted in shorter student discussion. Thus, LA prompts directly influenced students’ interactions during classroom discussions. While Knight and colleagues coded both student and LA statements during an interaction, they focused on the students’ responses to LA prompts. In addition, Davenport, Amezcua, Sabella, and Van Duzor (2018) have looked at LA-faculty partnerships, which provide a window into their classroom interactions. In their work, Davenport et al. developed the Preparation Session Observation Tool (PSOT), which is meant to capture the partnerships that LAs and faculty develop in their weekly prep meetings. This work helps to identify and reflect on the intended or planned-for interactions that LAs and students will have in the supported class. Much of the research described in this section can be accessed by visiting the Learning Assistant Alliance Resource page (https://sites.google.com/view/laa-resources/assessment-research-and-results).
Our work builds on the previous research but focuses specifically on what the LAs are doing during their interactions with students. In this paper, we present the development of a new tool, the action taxonomy for learning assistants (ATLAs). This tool allows us to investigate the following research question: What are LAs doing during their interactions with students? We illustrate the applicability of ATLAs by presenting analysis of sample data to explore what LAs do at the course, individual, and interaction levels.
Context: the LA model
The LA model was started in 2003 at the University of Colorado Boulder. Since then, over 80 institutions have adopted the LA model and built their own LA programs based on that model (https://www.learningassistantalliance.org/). The LA model supports institutional change by providing and training LAs who support faculty teaching and student learning in undergraduate courses, mainly in STEM. In our research, the LA programs under study focus on large introductory science courses. The LA model provides faculty with the opportunity to implement more student-centered instructional practices, such as active learning. One of the central stated roles of the LAs in these programs is to promote student discourse and argumentation.
In our work, we investigated the actions and interactions of LAs from programs at two universities: one urban teaching and research institution in the Mountain West, and the other a more rural teaching and research institution in the Midwestern USA. Shared goals of both LA programs were (1) to improve the learning of undergraduate students, (2) to support institutional change towards student-centered learning environments, and (3) to recruit math and science majors into K-12 teaching licensure programs and teaching careers. LAs are recruited by faculty through a competitive process. Figure 1 shows the three main aspects of the LA experience.
Vygotsky’s sociocultural theory of learning states that learning is a social process situated in cultural, historical, or institutional factors. “Human beings are viewed as coming into contact with, and creating their surroundings as well as themselves through the actions in which they engage” (Wertsch, 1991). Accordingly, action and interaction are the analytic categories that provide insight into mental functioning. In our work, LAs are performing actions and engaging in interactions with students around the tasks and activities that the students are completing. Thus, the tasks and activities are the artifacts which mediate these interactions. For example, a complex task that requires collaborative problem-solving might allow for a deeper or longer interaction when compared to a factual recall task. Further, the actions that an LA takes within these interactions will differ based on the task or activity.
These interactions between students and LAs are situated in the context of the classroom, which has its own norms for community interaction. These norms are created implicitly or explicitly by the instructor and the other participants in the classroom and are also shaped by historical and institutional factors, for example, whether the class is an introductory course, a “weed-out” course, or a required course. The LAs are scaffolding the learning by interacting around the tasks with students and exploring the students’ zone of proximal development (ZPD). While this is occurring, the LAs are also gaining experience in engaging in discourse around different topics with students and are becoming more expert in the concepts and skills being covered. In this way, learning is occurring for both the LAs and the students.
Theories of situated learning can also help us understand how the LAs interact with students in the classroom context. Following the work of Close et al. (2016), we view the LAs as participating in a community of practice (CoP; Lave & Wenger, 1991). The actors (instructors, students, LAs) in our classroom communities are assumed to be mutually engaged in the joint enterprise of learning science. Of specific interest to us in this work is the shared repertoire that develops as the instructor, students, and LAs negotiate and determine norms of interacting and engaging around mediational artifacts. Taken together, this mutual engagement, joint enterprise, and shared repertoire are defining characteristics of a CoP (Lave & Wenger, 1991). Because we are ultimately interested in the roles that LAs play in the supported courses as they engage in the shared repertoire, CoP provides an appropriate theoretical lens.
Although our study focuses on the actions of the LAs, and not the mediational means and mechanisms themselves or the contexts in which the LAs are operating, taking a sociocultural and situated approach provides us with a lens to help classify and categorize LA actions in terms of their mediated social interaction with students. The work presented in this paper is one aspect of a much larger research project which examines the entire classroom system and interactions between other parts of that system (Talbot et al., 2016).
Development of ATLAs
In this section, we describe our open-ended study to investigate what LAs do in their interactions with students in the classroom and present the resulting taxonomy of LA actions in that context. LAs can perform many roles in the classroom such as providing feedback to the instructor, consulting with other LAs, facilitation of student discussion, and administration. However, for the purpose of our work, we examined only the portions of class time where LAs are interacting with students and sought only to characterize their actions during this time. Since LAs are the focus of our study, we followed individual LAs and recorded their interactions with multiple and different students and groups of students. We recruited LAs to wear point-of-view cameras throughout entire class periods. LAs either clipped the cameras to their shirts or wore them on a lanyard around their neck. This provided us with audio and video of class time from their perspective.
The taxonomy is based on video data collected across two universities. University A is a diverse, urban, public, teaching and research university in the Mountain West of the USA. University B is a more rural, teaching and research university in the upper Midwest. Due to the potentially sensitive nature of wearing a personal camera, we relied on a volunteer sample of LAs for this study. From that sample, only LAs working for instructors who consented to the data collection in their courses were selected to participate. In addition, LAs in a variety of science courses were prioritized. In total, video data was collected from 25 LAs in 9 different courses across the two universities. The LA program at each institution aims to hire a group of LAs that are representative of the student body. We strived to have the same representation in our data sample. The demographics of the university population, LA program population, and our data sample population are shown in Table 1.
Each of the 25 LAs wore the camera on one class day. Since our goal was to gain an understanding of what LAs do in their interactions with students, we prioritized getting data from LAs in a variety of course contexts over getting more data from individual LAs. Details regarding the courses in which the 25 LAs were working are shown in Fig. 2. Our sample included 6 courses from university A and 3 courses from university B. We sampled courses across biology, chemistry, and physics over the introductory and upper-division levels. The courses were taught in both auditorium- and studio-style classrooms. There was also a range in the opportunities that LAs had to interact across the courses. We capture this difference here through the amount of active learning used in each of the courses. We conducted observations of each course (more than 8 observations per course) using the Classroom Observation Protocol for Undergraduate STEM (COPUS; Smith, Jones, Gilbert, & Wieman, 2013) and calculated the percentage of active learning by looking at the number of 2-min time intervals where students were active (i.e., individual thinking, clicker questions, group worksheets, or other group activity). For each course, there was at least a 20% difference between the lowest and highest amount of active learning among the observed class days. We can categorize our courses as having low (average 10%), medium (average 30–40%), and high (average 55–60%) amounts of active learning. Further details about the courses including class size, instructor experience teaching LA-supported courses, and percentage active learning from observed courses are presented in the Additional file 1.
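The percent-active calculation described above reduces to counting 2-min intervals in which any active-learning behavior was observed. A minimal sketch follows; the behavior labels and the sample observation are illustrative stand-ins, not actual COPUS codes or study data.

```python
# Sketch of the COPUS-based percent-active-learning calculation: class
# time is split into 2-min intervals, each interval is marked with the
# observed student behaviors, and an interval counts as "active" if any
# active-learning behavior occurred in it.
# The behavior labels below are illustrative, not official COPUS codes.

ACTIVE_BEHAVIORS = {"individual_thinking", "clicker_question",
                    "group_worksheet", "other_group_activity"}

def percent_active(intervals):
    """intervals: one set of observed behavior labels per 2-min interval."""
    if not intervals:
        return 0.0
    # An interval is active if it shares any label with ACTIVE_BEHAVIORS.
    active = sum(1 for behaviors in intervals if behaviors & ACTIVE_BEHAVIORS)
    return 100.0 * active / len(intervals)

# One hypothetical 10-min observation (five 2-min intervals):
obs = [{"lecture"}, {"clicker_question"}, {"lecture"},
       {"group_worksheet", "lecture"}, {"lecture"}]
```

Here two of the five intervals contain an active behavior, so `percent_active(obs)` returns 40.0; averaging this quantity over a course's observed class days yields the low/medium/high categorization used above.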
LAs wore the camera for the entire class, so in addition to capturing the LA-student interactions that we wanted to examine, it also recorded portions of the class including lecture, LA-LA interaction, and LA-instructor interaction that were not of interest. Before analysis, one researcher watched all the videos to extract all the instances of LA-student interaction. The point-of-view nature of the recordings provided us with high-quality audio of the interactions and some visual features that helped us interpret contextual aspects of the interactions (e.g., what materials the interaction was based around, gestures, or how many students were involved in the interaction). An example capture from a video is shown in Fig. 3.
Two researchers watched the LA-interaction clips for three LAs from university A together with the goal of creating codes to characterize LA “moves” during their interactions with students. The discussion focused both on describing each of the LA’s observable actions during an interaction and on deciding a grain size for coding. This initial analysis was grounded in the data but was of course influenced by previous literature, our beliefs about learning and facilitation, and our knowledge of LA training. We decided on the LA turn of talk as our unit of coding, with the possibility of multiple LA moves being coded per turn of talk. Therefore, while one LA turn of talk could be coded with a single action code, it is also possible for a number of LA action codes to be seen in one LA turn of talk. Commonalities in our descriptions of LA actions across clips for the three LAs led to the identification of a set of primary codes, which we also grouped into parent codes. At this point in the analysis, we decided that the developed codes would not require the student’s voice to be heard or interpreted for an action code to be assigned. While the students’ voices are an important aspect of the interaction, and could be prompting the LA’s response or shaping the sequence of action codes, it was important to develop a set of codes that could be assigned by a researcher who was interested in looking at the LA’s moves but who did not necessarily have the content expertise to interpret the depth of the content being covered. This decision led to the collapsing of several codes into others. For example, we initially had a code “Expand on Student Reasoning” that was merged with “Explain.” In the initial code, the LA would continue the line of reasoning introduced by the students or use the ideas mentioned by students to continue with a question or task.
In terms of LA talk, this looked identical to “Explain” and it could only be coded through interpreting the disciplinary content in both the student and LA talk. When a researcher who was not a content expert coded turns of talk, these two codes were not distinguished consistently.
Following the decision on grain size and our identification of a preliminary set of LA action codes, the two researchers coded the remainder of the data collected at university A. This was an iterative process with each researcher coding a set of selected clips independently and then comparing and discussing codes. This process led to further refinements and additions to code descriptors and further primary code collapsing. For example, the codes “Sharing Perspectives” and “Sharing Approaches” were merged into the new code “Sharing Perspectives and Approaches” because they were being double coded very often, and their separation was not adding nuanced information about what the LAs were doing. On the final set of clips, the researchers had 100% agreement on codes assigned to each turn of talk. What resulted from this process was a set of 25 primary codes. These primary codes were categorized into 6 parent codes: LA-Directed Facilitation, LA-Guided Facilitation, Feedback, Advice, Course-Related Talk, and Non-Course-Related Talk. Reliability of ATLAs coding is discussed in more detail in a specific section following the ATLAs taxonomy.
Our final set of codes comprises ATLAs, which is shown in Table 2. Table 2 outlines descriptors for all the primary and parent codes along with illustrative LA quotes. A visual overview of ATLAs can be seen in the appendix. Given that we are examining LA actions during their interactions with students, typically around active-learning tasks, we could likely group all the identified LA actions broadly under the term “facilitation.” However, here we use facilitation in the first two parent codes (LA-Directed Facilitation and LA-Guided Facilitation) to describe LA actions that help students move forward with the specific task (worksheet, problem, etc.) in which they are engaged. One can think of “LA-Directed” as exemplifying a more univocal discourse and “LA-Guided” as a more dialogic one, in which the LA and student(s) are mutually engaged as peers in the joint enterprise of knowledge construction (Chi, 1996; Lave & Wenger, 1991). It is also important to highlight the bounding of each of the parent codes. For example, while an LA discussing what course to take next may be considered advice in a general sense, it belongs in “Non-Course-Related Talk” based on our categorization. Our “Advice” parent code is constrained to LA actions related to how to do well in the course and in school. Providing information on how to do well in school will also pertain to the current course and in fact be influenced by the current context.
Our specific meaning of each code is operationalized in the code descriptor. While the name of some of the codes may seem similar based on colloquial usage, we have been careful in our selection of language to most closely represent our meaning. The descriptor and illustrative quote help to outline our meaning more explicitly. For example, the “Feedback” codes “Validate Student” and “Affirm Student” have important distinctions in our operationalizations, with validate describing LA moves that support the value of students’ ideas and affirm describing more affective actions to acknowledge student feelings and struggles.
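The structure of the taxonomy (primary codes nested under parent codes, with one turn of talk possibly carrying several codes) can be represented as a small data structure. The sketch below uses only primary-code names that appear in this paper; the parent assignments marked as assumed are our illustrative guesses, not the published Table 2 mapping.

```python
# Minimal representation of the ATLAs coding structure: 25 primary codes
# are grouped under 6 parent codes, and each LA turn of talk may carry
# one or more primary codes. Only a handful of primary codes mentioned
# in the text are shown; parent placements flagged below are assumptions.

PARENT_OF = {
    "Explain": "LA-Directed Facilitation",
    "Check Knowledge": "LA-Guided Facilitation",                       # placement assumed
    "Sharing Perspectives and Approaches": "LA-Guided Facilitation",   # placement assumed
    "Validate Student": "Feedback",
    "Affirm Student": "Feedback",
}

def parent_codes(turn_codes):
    """Map the primary codes assigned to one turn of talk to parent codes."""
    return {PARENT_OF[code] for code in turn_codes}

# One turn of talk coded with two actions from different parents:
turn = ["Explain", "Validate Student"]
```

A turn like this one would contribute to both the LA-Directed Facilitation and Feedback parent-code counts, which is how the parent-code distributions reported later in the paper are tallied.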
Since the same two researchers had created and refined the codes through moderation, we recognized the need to conduct further rater agreement checks in the development of ATLAs. An additional independent researcher was recruited to code video clips for reliability purposes. This researcher was trained to use the code set in two parts. First, the two initial researchers talked through their coding of a number of video clips with the third researcher. Then, the third researcher coded some video clips independently and discussed differences in assigned codes with the other two. On the next set of 65 video clips, the third researcher had 100% agreement with the other two researchers on the codes assigned to each turn of talk.
In addition to checking rater agreement during the development of ATLAs, we also conducted a rater agreement study for the use of ATLAs. Because this rater agreement study focused on the uses of ATLAs, this work therefore contributes to our validity argument discussed in the next section. For this work, three of the authors (two of whom were involved in the development of the ATLAs codes, and one who was not) coded five out of 25 total LA cam videos chosen at random (20% of all videos), which included 88 interactions out of the total of 298 interactions observed (30% of all interactions). These 88 interactions included 482 coded turns of talk. For all three coders and across all of these coded turns of talk, we calculated interrater agreement (IRA) as described by Gisev, Bell, and Chen (2013). IRA describes “the extent to which different raters assign the same precise value for each item being rated” (Gisev et al., 2013, p. 331). IRA is calculated as the number of concordant responses divided by the total number of responses, times 100%. For all 482 coded turns of talk, IRA was 82%. Within each video, IRA across the three coders ranged from 72 to 89%.
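The IRA formula above is a simple proportion of concordant ratings. A sketch, using invented example codes rather than study data:

```python
# Interrater agreement (IRA) as described by Gisev et al. (2013): the
# number of turns of talk on which all raters assigned the same code,
# divided by the total number of turns, times 100%.
# The example ratings below are invented for illustration.

def interrater_agreement(ratings_per_turn):
    """ratings_per_turn: one tuple of rater-assigned codes per turn of talk."""
    concordant = sum(1 for ratings in ratings_per_turn
                     if len(set(ratings)) == 1)  # all raters agree exactly
    return 100.0 * concordant / len(ratings_per_turn)

# Four turns of talk coded by three raters (hypothetical):
ratings = [("Explain", "Explain", "Explain"),
           ("Validate Student", "Validate Student", "Affirm Student"),
           ("Explain", "Explain", "Explain"),
           ("Check Knowledge", "Check Knowledge", "Check Knowledge")]
```

In this toy example the raters agree on three of four turns, so `interrater_agreement(ratings)` returns 75.0; the study's reported 82% over 482 turns was computed the same way.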
In developing a validity argument for ATLAs, we follow the principles and procedures set forth by Kane (1992) and the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, National Council on Measurement in Education, & Joint Committee on Standards for Educational and Psychological Testing (U.S.), 2014). Although not a “test,” developing a validity argument for ATLAs can still follow the same procedure as outlined in the Standards. To that end, we first specifically define the proposed use of ATLAs and interpretation of data resulting from its use. ATLAs is meant to be used to categorize and classify the actions that LAs undertake in their interactions with students. Results from applications of ATLAs are intended to be used to identify specific LA actions in the supported classroom in order to inform research, LA preparation, and faculty professional development. Evidence to support the validity of ATLAs comes from four separate sources: (1) observations of classroom practice, (2) video of stimulated interviews with LAs, (3) application of ATLAs coding to a unique context, and (4) course logs administered to LAs about the classroom practice. We discuss each source in turn.
Observations of classroom practice were conducted using the Classroom Observation Protocol for Undergraduate STEM (COPUS; Smith et al., 2013) as discussed previously. The COPUS is a time-segmented observation protocol, where observers mark the occurrence of individual student and instructor actions during discrete 2-min intervals. As part of the larger research project, our team added a section to the COPUS to mark the occurrence of LA actions. These codes closely mirrored the COPUS codes for assigning instructor actions. From our observations, we can therefore determine how much of the class time LAs were moving through the room during group work time and how often they interacted with students one-on-one or one LA with a small group. From this data, we were able to validate that the number of interactions we captured on the LA cams was representative of the time LAs were interacting with students during the class session, as observed at the class level using the COPUS.
We also conducted stimulated interviews with 11 LAs from university A who wore the LA cams. In these interviews, LAs watched and reflected on three or four clips of their interactions with a student or group of students. Clips were selected based on how common the type of interaction was or whether there was uncertainty among coders about whether a code should be assigned to a turn of talk. From these interviews, we were able to both further develop and validate our coding of specific interactions. For example, we found that “Explain” was always coded accurately with respect to the LAs’ comments and intentions. We also discovered that our interpretation of “Check Knowledge” needed to be refined in order to be coded accurately. These interviews were of great importance in developing a valid taxonomy.
We also applied the ATLAs coding taxonomy to the videos from the LAs at university B. The courses at university B were all taught in studio-style classrooms and two of the three courses had very high amounts of active learning. This constituted a very different context than university A courses, which were all taught in stadium-style lecture halls with less active learning. We were able to apply the ATLAs codes to all actions in all turns of talk in these videos, which indicates that ATLAs itself is applicable to diverse contexts.
Finally, in the larger research project, we had administered course logs to all LAs that asked them to self-report the amount of time they spent performing certain actions during every supported class session. The actions that LAs reported on in these logs were taken from the COPUS list of actions. In this way, the logs served as a validity check for our COPUS observations. In these daily logs, LA self-reports of time spent interacting with students were consistent with our observations and also with the amount of interaction time we observed from the LA cams for the different courses.
Application of ATLAs
Following the development of the taxonomy, we conducted an applicability study. Our goal for this study was to see if we could identify specific LA actions in the LA-supported classroom, in order to inform research, LA preparation, and faculty professional development (i.e., our stated interpretation of ATLAs, from the validity argument). Specifically, in this study, we wanted to investigate LA actions both across and within courses. As such, we collected deeper samples (i.e., video from more class days) from a smaller number of LAs across a semester. In this section, we present our methods and findings.
Video data were collected from 11 LAs across four courses at university A. These data were collected throughout the semester, with video being collected in the beginning, middle, and end of the semester. Each LA wore the point-of-view cameras between four and eight times (mode = six) throughout the semester. Once again, we strived to have the same representation in our data sample as the whole LA program. LAs were recruited from four courses taught by instructors willing to participate (biology instructor B, biology instructor E, biology instructor F, chemistry instructor B). Details of the four courses are shown in Table 3, with a fuller description presented in the Additional file 1. In total, 63 videos were collected. These were divided among two researchers and coded using ATLAs.
When applying ATLAs to our sample data, LA-Directed Facilitation and LA-Guided Facilitation were the two most common categories of actions that LAs carried out during their interactions with students, while Non-Course-Related Talk was the least common. This was expected since facilitation is one explicitly intended role of the LA in the supported course. LA-Directed Facilitation and LA-Guided Facilitation also contain the most primary codes, and Non-Course-Related Talk contains the fewest. ATLAs was applied to analyze LA actions at different levels, ranging from macro, big-picture analysis using parent code distributions to micro-level analysis using the primary codes of ATLAs. These levels include analysis of LA actions across and within courses, LAs, and individual interactions.
T tests were conducted to compare the percent of parent codes between courses. Our use of ATLAs revealed that the behavior of the LAs varied significantly depending upon the course in which they were working. The average percentage of each parent code differed among the LAs in each of the four courses represented in Fig. 4. The COPUS profiles for these courses are given in the Additional file 1. LAs in course B had a significantly higher percent of LA-Directed Facilitation compared to the other three courses (p < 0.05). Courses A and B had significantly higher proportions of the Advice parent code compared to courses C and D. Courses C and D had higher average percentages of Feedback and Course-Related Talk, and greater variability in the percent of each of these codes. This last trend may have been driven by three instances in which the LAs engaged in relatively few actions, all of which were categorized as Course-Related Talk. The greater amount of time spent on Feedback and Course-Related Talk in courses C and D is consistent with their COPUS profiles (see Additional file 1), which show a greater amount of active time for students in a different classroom context (a studio classroom) as compared to courses A and B, which were held in auditorium classrooms and had less active learning time.
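The course-level comparison above can be sketched as follows: each LA (or video) yields a percentage for a given parent code, and the samples from two courses are compared with an independent-samples t test. The sketch below computes Welch's t statistic from scratch; the percentages are invented, and in practice a statistics package would also supply the p value from the t distribution.

```python
# Sketch of the between-course comparison: percent of turns coded with a
# given parent code, per LA, compared across two courses with Welch's
# (unequal-variance) t test. Sample values are hypothetical, not study data.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples a and b."""
    # Standard error term uses each sample's variance over its size.
    se_sq = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se_sq ** 0.5

# Hypothetical percent LA-Directed Facilitation per LA in two courses:
course_b = [62.0, 58.0, 65.0, 60.0]
course_c = [41.0, 45.0, 38.0, 44.0]

t = welch_t(course_b, course_c)
```

A large positive t here would correspond to the kind of course-B-versus-others difference in LA-Directed Facilitation reported above; converting t to a p value requires the t distribution (e.g., `scipy.stats.ttest_ind` with `equal_var=False`).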
Comparison of LAs by characteristics
ATLAs can be used to compare LAs within a course by a characteristic or attribute. To illustrate this, we compared LAs by gender and by LA experience. It is reasonable to expect differences in the nature and frequency of LA actions between female and male LAs, since it is well documented that males and females experience the science classroom differently (Clark Blickenstaff, 2005). We therefore expected male and female LAs to communicate and facilitate learning differently within this context. Similarly, based on research into novice versus experienced teachers' beliefs and classroom practices, one might expect experienced LAs to interact with and respond to students differently than new LAs (Westerman, 1991).
We conducted t tests to compare the percentages of parent codes between groups of LAs. Our use of ATLAs showed that male and female LAs, as well as new and returning LAs, were generally similar in their interactions with students (Table 4): no significant differences were observed between male and female, or between new and returning, LAs in the percentages of any of the parent codes. Although no statistically significant differences were found in this data set, we provide the data in Table 4 for descriptive purposes and to display the type of data that ATLAs can yield.
Comparing individual LAs
So far in this paper, we have used ATLAs descriptively to show what LAs are doing. We now deepen that description by considering who the LAs are, which could lead us to ask why they might be doing what they are doing. In this way, we can use ATLAs to move from description toward the foundation of inference. In this section, we further describe the actions of two different LAs working in the same context and begin to make sense of their observed actions based on stimulated-recall interviews that we conducted with the LAs. We present this in order to demonstrate another promising use of ATLAs.
ATLAs can also be used for a finer-grained microanalysis of an LA's actions, either by looking at the distribution of primary codes within each parent code or by examining individual interactions for a given LA. For example, we compared two LAs, Ryder and Sara. Both LAs were in course A, interacting around the same curricular material and under the same classroom norms and expectations from their faculty member. Ryder is a returning LA and a white male. Sara is a new LA, an international student, and an Asian female. Their average distributions of parent codes were similar. However, when we compare their primary code distributions for a single class period (one video for each LA, on the same day), we see differences. While both had two actions coded as Course-Related Talk, Ryder's actions were both checking in with students about their quiz, whereas Sara clarified the goal of the activity and conducted an administrative task. Figure 5 shows the sequence of LA actions for Ryder and Sara. Both interactions were 2 min in length and occurred in the same lecture, while students were engaging around the same tasks. In timeline 1, we can see that Sara carries out just two LA actions. Both "Explain" and "Answer Student Question" are LA-Directed Facilitation actions. In contrast, Ryder carries out five LA actions, including Check Understanding, Elicit Student Thinking, Point Out Incorrectness, and Confirm Correctness. These codes come from the LA-Guided Facilitation and Feedback parent categories. The blank parts of the timeline represent student talk. These timelines illustrate just how contextual and varied each interaction between LAs and students is, even within the same classroom context. We do not present these data to make inferences or draw conclusions but rather to illustrate the utility of ATLAs for microanalysis, as opposed to the broad course-level comparisons that we presented previously.
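To make the micro-level bookkeeping concrete, here is a minimal sketch (not the authors' analysis code; the action sequence and code names below are hypothetical and illustrative) of how a coded interaction, recorded as (primary code, parent code) pairs, can be summarized into the parent-code percentage distribution used in these comparisons:

```python
from collections import Counter

# Hypothetical coded action sequence for one LA-student interaction;
# each tuple is (primary_code, parent_code).
actions = [
    ("explain", "LA-Directed Facilitation"),
    ("check understanding", "LA-Guided Facilitation"),
    ("elicit student thinking", "LA-Guided Facilitation"),
    ("confirm correctness", "Feedback"),
]

def parent_code_percentages(actions):
    """Percentage of coded actions falling under each parent code."""
    counts = Counter(parent for _, parent in actions)
    total = sum(counts.values())
    return {parent: 100 * n / total for parent, n in counts.items()}
```

The same tally, applied per video or per LA, yields the distributions that support both the course-level averages and the single-interaction timelines described above.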
Although coding the student voice section of the timeline is beyond the scope of this study, this could be a fruitful avenue for further investigation.
We have presented the development and application of ATLAs, a tool for examining what LAs do in the classroom. ATLAs enables researchers to code actions, or LA "moves," from video data in a way that supports course-level, individual-level, and interaction-level analysis. From our sample data, we saw significant differences in LA actions at the course level but only qualitative differences at the LA and interaction levels.
Action and interaction are analytic categories that provide insight into mental functioning (Wertsch, 1991). ATLAs can therefore be used as a springboard to examine how LAs approach an interaction with students and why LAs undertake certain actions. Since ATLAs allows us to look at LA moves around classroom tasks and activities, it can provide an avenue to examine variation in LA actions around different types of activities. For example, a research question that could be answered using this tool is "How do LA actions vary depending on the nature of the activity in which the student is engaging?" Such a study would help us understand how different activities or artifacts mediate these interactions.
LA-student interactions are deeply situated in the context of the classroom, which has its own norms for community interaction. Use of ATLAs could provide insight into how LA actions are influenced by the rules and norms of the classroom. Furthermore, the LA actions that we observe provide insight into the roles that they undertake in the classroom. Understanding classroom norms and the roles that LAs play would enable us to more effectively characterize how LAs support student learning and engagement within the complex social learning space of the classroom, which is one of the broader goals of our research program.
Not all videos from LAs in the same course were collected on the same days due to logistical issues during data collection. For example, on some days, not all of the LAs were present in the lecture, so we had to collect additional data for some LAs on other days. Because of the potentially sensitive nature of wearing a point-of-view camera, and then watching the footage back in an interview setting, we had to rely on a volunteer sample of LAs. This could have been a self-selecting group of LAs who were confident enough to wear cameras and more confident interacting with students. In one instance, we had consenting LAs but the faculty member was not comfortable having cameras in the course. The courses were therefore also self-selecting, in that they were taught by instructors who were open to participating in our research and to having cameras in their classrooms. A more representative sample (e.g., more course subjects) would strengthen the inferences that one could make from ATLAs. Our sample included an intentional oversampling of females and traditionally underrepresented minority groups.
ATLAs provides an exhaustive list of LA actions and revealed variability in LA actions at the course level in our dataset. ATLAs can be used to explore the interactions between LAs and students from a pedagogical standpoint. It can be used to observe LA actions across and within courses as well as to analyze a single LA-student interaction. ATLAs can be used to train LAs to have richer interactions with their students. For example, LAs can be shown interactions in which they engage in a number of different action codes in a single interaction with a student. ATLAs can also be used by faculty to examine what their LAs are doing during class, and as a tool to monitor what types of interactions are happening in different settings and contexts (e.g., studio classrooms or different disciplines).
ATLAs can be used to explore variation among the interactions of LAs of different fixed identities (e.g., gender identity, race). Although our application in this area is preliminary and inconclusive, this use of ATLAs could help us examine larger issues of inclusion and access for underrepresented groups in science education (Malcom & Feder, 2016). Because programs based on the LA model typically facilitate course transformations, it is important to investigate questions of equity and diversity in LA-student interactions. By understanding potential differences in LA-student interactions in relation to fixed identities, institutions with LA programs may be better able to acknowledge students' varied experiences and histories in a more culturally responsive way.
Although some may want to use ATLAs as a tool to evaluate LAs, that is not its intended purpose. We conceptualize the categorization of an LA's actions using ATLAs as a description of those actions within a specific context. ATLAs is not meant to support inferences about the intent driving LA actions; for our purposes, it is purely a taxonomy that helps us categorize and examine LA actions. If one were to use ATLAs as an evaluative tool, establishing its validity for that purpose would be incumbent upon the end user, who would need to construct a new validity argument and ensure that the tool is appropriate for the context in which it is being used.
ATLAs could also be used to help examine the pedagogical decision making of LAs. For example, why do LAs choose to carry out certain actions during an interaction with a student? What influences their pedagogical decisions? These are important next steps that could be undertaken in part by using ATLAs.
ATLAs provides insight into the roles that LAs play in the classroom. LAs are not just facilitators of active learning tasks; they can also provide valuable advice and feedback to students and act more in a mentor role, while maintaining a peer-to-peer presence with a smaller power differential than the instructor-student dynamic. These additional roles need to be better understood and articulated in order to maximize an LA's presence in the classroom.
This research is a gateway for further investigation into the student voice during the LA-student interaction. We hope that researchers working in the context of LA-supported courses will use ATLAs in their work to help us better understand the nature of LA-student interactions. If we can start to gain resolution on the most supportive types and aspects of these interactions, then we can work to better support students and foster the development of high-quality peer learning support programs.
ATLAs: Action Taxonomy for Learning Assistants
CoP: Communities of practice
SCALE-UP: Student-Centered Active Learning Environment with Upside-down Pedagogies
SEM: Standard error of the mean
STEM: Science, technology, engineering, and mathematics
ZPD: Zone of proximal development
American Educational Research Association, American Psychological Association, National Council on Measurement in Education, & Joint Committee on Standards for Educational and Psychological Testing (U.S.) (2014). Standards for educational and psychological testing. American Educational Research Association.
Bargh, JA, & Schul, Y. (1980). On the cognitive benefits of teaching. Journal of Educational Psychology, 72(5), 593–604. https://doi.org/10.1037/0022-0663.72.5.593.
Chi, MTH. (1996). Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10(7), 33–49. https://doi.org/10.1002/(SICI)1099-0720(199611)10:7.
Clark Blickenstaff, J. (2005). Women and science careers: leaky pipeline or gender filter? Gender and Education, 17(4), 369–386. https://doi.org/10.1080/09540250500145072.
Close, EW, Conn, J, Close, HG. (2016). Becoming physics people: development of integrated physics identity through the learning assistant experience. Physical Review Physics Education Research, 12(1), 010109. https://doi.org/10.1103/PhysRevPhysEducRes.12.010109.
Colvin, JW. (2007). Peer tutoring and social dynamics in higher education. Mentoring & Tutoring: Partnership in Learning, 15(2), 165–181. https://doi.org/10.1080/13611260601086345.
Davenport, F, Amezcua, F, Sabella, MS, Van Duzor, AG (2018). Exploring the underlying factors in learning assistant—faculty partnerships, (pp. 104–107). Cincinnati: Proceedings of the Physics Education Research Conference. https://doi.org/10.1119/perc.2017.pr.021.
Freeman, S, Eddy, SL, McDonough, M, Smith, MK, Okoroafor, N, Jordt, H, Wenderoth, MP. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111.
Gisev, N, Bell, JS, Chen, TF. (2013). Interrater agreement and interrater reliability: key concepts, approaches, and applications. Research in Social & Administrative Pharmacy: RSAP, 9(3), 330–338. https://doi.org/10.1016/j.sapharm.2012.04.004.
Gray, KE, Webb, DC, Otero, VK. (2016). Effects of the learning assistant model on teacher practice. Physical Review Physics Education Research, 12(2), 020126. https://doi.org/10.1103/PhysRevPhysEducRes.12.020126.
Henderson, C, & Dancy, MH (2011). Increasing the impact and diffusion of STEM education innovations. In Characterizing the impact and diffusion of engineering education innovations forum http://create4stem.msu.edu/sites/default/files/discussions/attachments/HendersonandDancy10-20-2010.pdf.
Kane, MT. (1992). An argument-based approach to validity. Psychological Bulletin, 112(3), 527–535.
Kim, K, Sharma, P, Land, SM, Furlong, KP. (2013). Effects of active learning on enhancing student critical thinking in an undergraduate general science course. Innovative Higher Education, 38(3), 223–235. https://doi.org/10.1007/s10755-012-9236-x.
Knight, JK, Wise, SB, Rentsch, J, Furtak, EM. (2015). Cues matter: learning assistants influence introductory biology student interactions during clicker-question discussions. CBE Life Sciences Education, 14(4), 41. https://doi.org/10.1187/cbe.15-04-0093.
Knight, JK, & Wood, WB. (2005). Teaching more by lecturing less. Cell Biology Education, 4(4), 298–310. https://doi.org/10.1187/05-06-0082.
Lave, J, & Wenger, E (1991). Situated learning: legitimate peripheral participation. Cambridge: Cambridge University Press; http://dx.doi.org/10.1017/CBO9780511815355.
Malcom, S, & Feder, M (Eds.) (2016). Barriers and opportunities for 2-year and 4-year STEM degrees: systemic change to support students' diverse pathways. Washington, DC: National Academies Press.
Nelson, MA. (2010). Oral assessments: improving retention, grades, and understanding. PRIMUS, 21(1), 47–61. https://doi.org/10.1080/10511970902869176.
Otero, V, Pollock, S, McCray, R, Finkelstein, N. (2006). Who is responsible for preparing science teachers? Science, 313(5786), 445–446. https://doi.org/10.1126/science.1129648.
Pollock, SJ. (2009). Longitudinal study of student conceptual understanding in electricity and magnetism. Physical Review Special Topics - Physics Education Research, 5(2), 020110. https://doi.org/10.1103/PhysRevSTPER.5.020110.
Pollock, SJ, & Finkelstein, N. (2013). Impacts of curricular change: implications from 8 years of data in introductory physics. AIP Conference Proceedings, 1513(1), 310–313. https://doi.org/10.1063/1.4789714.
Sellami, N, Shaked, S, Laski, FA, Eagan, KM, Sanders, ER. (2017). Implementation of a learning assistant program improves student performance on higher-order assessments. CBE Life Sciences Education, 16(4). https://doi.org/10.1187/cbe.16-12-0341.
Smith, MK, Jones, FHM, Gilbert, SL, Wieman, CE. (2013). The Classroom Observation Protocol for Undergraduate STEM (COPUS): a new instrument to characterize university STEM classroom practices. CBE Life Sciences Education, 12(4), 618–627. https://doi.org/10.1187/cbe.13-08-0154.
Talanquer, V. (2014). DBER and STEM education reform: are we up to the challenge? Journal of Research in Science Teaching, 51(6), 809–819. https://doi.org/10.1002/tea.21162.
Talbot, RM, Doughty, L, Nasim, A, Hartley, L, Le, P, Kramer, L, Kornreich-Leshem, H, Boyer, J (2016). Theoretically framing a complex phenomenon: student success in large enrollment active learning courses. In DL Jones, L Ding, A Traxler (Eds.), 2016 physics education research conference proceedings, (pp. 344–347). https://doi.org/10.1119/perc.2016.pr.081.
Talbot, RM, Hartley, LM, Marzetta, K, Wee, BS. (2015). Transforming undergraduate science education with learning assistants: student satisfaction in large-enrollment courses. Journal of College Science Teaching, 44(5), 24–30.
Top, LM, Schoonraad, SA, Otero, VK. (2018). Development of pedagogical knowledge among learning assistants. International Journal of STEM Education, 5(1), 1. https://doi.org/10.1186/s40594-017-0097-9.
Wertsch, JV (1991). Voices of the mind: a sociocultural approach to mediated action. Cambridge: Harvard University Press.
Westerman, DA. (1991). Expert and novice teacher decision making. Journal of Teacher Education, 42(4), 292–305. https://doi.org/10.1177/002248719104200407.
Whipple, WR. (1987). Collaborative learning: recognizing it when we see it. AAHE Bulletin, 4, 6 https://eric.ed.gov/?id=ed289396.
The authors thank the colleges, teachers, and LAs whose willing participation made this study possible. We thank our research team at the University of Colorado Denver for their valuable contributions to this project.
The authors of this manuscript were funded by the National Science Foundation DUE award #1525115.
Availability of data and materials
Please contact the author for data requests.
Ethics approval and consent to participate
The participants were all adults who volunteered for the program. The participating institutions/colleges have an approved IRB protocol (14-0028) with the Colorado Multiple Institutional Review Board.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Thompson, A.N., Talbot, R.M., Doughty, L. et al. Development and application of the Action Taxonomy for Learning Assistants (ATLAs). IJ STEM Ed 7, 1 (2020). https://doi.org/10.1186/s40594-019-0200-5