
Correlations between modes of student cognitive engagement and instructional practices in undergraduate STEM courses

Abstract

Background

Within STEM education, research on instructional practices has focused on ways to increase student engagement and thereby reap the associated benefits of increased learning, persistence, and academic success. These meaningful-learning goals have been tied most specifically to cognitive engagement, a construct that is often difficult for instructors to assess on their own. While it has been shown that certain instructional practices are tied to higher cognitive engagement in students, tools to measure instructional practices and student engagement have remained largely isolated in their development and use.

Results

This research uses previously developed instruments to simultaneously assess modes of cognitive engagement in students (Student Course Cognitive Engagement Instrument [SCCEI]) and instructional practices (Postsecondary Instructional Practices Survey [PIPS]) within a course. A sample of 19 STEM courses was recruited to participate in this study, with instructors and students each self-reporting data. Responses from the instructor and students in each course were scored, and ANOVA and partial correlation analysis were conducted on the sample. ANOVA indicated a significant effect of classroom structure on student engagement. From the correlation analysis, a significant relationship was found between four student-reported modes of cognitive engagement and instructor-reported teaching practices.

Conclusions

With an understanding of student engagement response to classroom structure, instructors may consider their teaching environment when implementing instructional practices. Moreover, Interactivity with Peers, the deepest mode of cognitive engagement suggested by previous research, was correlated with instructional practices in our study, suggesting that instructors may be able to shape their students’ learning by encouraging collaboration in the classroom. We also found that assessment played a role in students’ cognitive engagement; this indicates that instructors may wish to thoughtfully consider their methods of assessment to facilitate modes of cognitive engagement associated with deeper learning of course material. By understanding factor correlations, the PIPS and SCCEI can be used in tandem to understand impacts of instructional practices on student cognitive engagement within a course. We conclude that there is a need for ongoing research to study the interplay of instructional practices and student cognitive engagement as instruments are developed to measure such phenomena.

Introduction

STEM education research aims to benefit students in science, technology, engineering, and math disciplines. Student engagement and its relationship with instructional practices are deeply influential in understanding how to better students’ learning experiences. As we seek to offer an understanding of the interplay of instructional practices and student engagement, we first present the relevant literature related to each. We also address the ways in which the literature suggests a relationship between instructional practices and engagement and the need for further empirical evidence of how the two are correlated.

Cognitive engagement

STEM education communities are interested in better understanding student engagement, as it has been hallmarked as a key factor in increased student grades, retention, and knowledge gains (Freeman et al., 2014; Prince, 2004). Engagement is a multi-dimensional construct (Appleton, Christenson, Kim, & Reschly, 2006) that is often best understood as comprising several key components: behavioral, emotional, and cognitive (Fredricks, Blumenfeld, & Paris, 2004). Behavioral engagement can be thought of as a student’s active involvement in learning and course tasks (Sinatra, Heddy, & Lombardi, 2015). This active involvement is observable and includes students’ participatory behaviors and adherence to rules (Fredricks et al., 2004). Emotional engagement is defined by a student’s affective or emotional responses to an academic subject or course (Fredricks et al., 2004). Positive and negative emotions have been shown to play an important role in choices surrounding emotional engagement. Like behavioral engagement, some of these positive or negative emotional engagement choices are external (Pekrun, 2006) and can be observed as happiness, sadness, interest, or boredom (Fredricks et al., 2004). Cognitive engagement is more abstractly defined in part by students’ psychological investment and motivation (Sinatra et al., 2015). While the construct of cognitive engagement is more difficult to clearly define and measure, it has been shown to have a positive influence on student performance, persistence, and goal orientation (Appleton et al., 2006; Meece, Blumenfeld, & Hoyle, 1988).

A prominent framework for assessing cognitive engagement was pioneered by Chi (2009), who first began to establish different modes of cognitive engagement based on observable behaviors in students. Later, Chi and Wylie (2014) published the ICAP framework, which empowered educators to observe their students and interpret their cognitive engagement as one of four modes: Interactive, Constructive, Active, or Passive. Our recent work focused on the development of a self-report instrument that utilized the ICAP observational schema (Barlow et al., in press). This instrument, the Student Course Cognitive Engagement Instrument (SCCEI), was validated for the engagement of STEM students related to a specific course. We prompted students to reflect on both their behaviors and cognition, finding that students responded consistently to five factors: Interactivity with Peers, Constructive Notetaking, Active Processing, Active Notetaking, and Passive Processing. The SCCEI is intended to provide STEM educators with a depiction of how students are cognitively responding to their instructional practices.

Instructional practices

Researchers in STEM education have spent significant effort on uncovering what factors within a classroom influence a student’s engagement (e.g., see Felder & Brent, 2005; Heller, Beil, Dam, & Haerum, 2007, 2010; Ohland et al., 2008; Stump, Hilpert, Husman, Chung, & Kim, 2011). It has been shown that instructional practices often play a central role; the structure of exams, lectures, and student interactions have all been shown to influence the engagement of students (Nicol & Macfarlane-Dick, 2006; Prince, 2004; Wang, Yang, Wen, Koedinger, & Rosé, 2015). Problem-based learning, cooperative learning, and flipped classrooms have likewise all been researched for their impact on student engagement (Dard, Lison, Dalle, & Boutin, 2010; Prince, 2004; Smith, Sheppard, Johnson, & Johnson, 2005). Yet, it remains important for instructors to consider the practices of their own classrooms, even those that do not fit clearly within the bounds of a particular strategy. To do so, educators can be asked to report on their own instructional practices surrounding factors notably important to engagement, including assessment, student interactions, and delivery of the content.

Methods of assessing instructional practices include student surveys, self-report surveys, interviews, class observations, and artifact analysis (American Association for the Advancement of Science [AAAS], 2012). Quantitative measurement and analysis of phenomena such as engagement or instructional practices is inherently limited; any method of collecting and analyzing data provides insight, and thereby measurement, from a particular vantage point. It is therefore important to consider which data analytic method will provide appropriate insight for the study. Here, self-report has the unique advantage of collecting data from the perspective of the instructors, who have the power to enact change in their own classroom. A review of 12 prominent instructor self-report instruments was conducted by Williams et al. in 2015. They found half (6) of the instruments were related to specific disciplines, while the others had been validated in a variety of STEM disciplines. Broader instruments either emphasized teaching specifically or teaching and other elements of faculty work (Williams, Walter, Henderson, & Beach, 2015). Of these instruments, the Postsecondary Instructional Practices Survey (PIPS) (Walter, Henderson, Beach, & Williams, 2016) is intended to span all disciplines and was found to focus most heavily on instructional practices (Williams et al., 2015).

PIPS is a validated, externally reviewed instrument composed of 24 instructional practice-related items (Walter et al., 2016). Validation studies at a broad range of institutions and departments support a breakdown of items into either 2- or 5-factor models. Authors suggest that the 5-factor solution is most appropriate when more details on the instructional practices of a participant are valuable (Walter et al., 2016). These factors are as follows: Student-Student Interactions, Content Delivery, Student-Content Engagement, Formative Assessment, and Summative Assessment. Within the original study, the PIPS was used to understand correlations between teaching practices and class size, instructor gender, and years of teaching experience (Walter et al., 2016). Though authors acknowledge that some of their constructs are intended to reflect how engaging the practices of the instructor are (see Student-Content Engagement of the 5-factor model), PIPS has yet to be used in conjunction with measures of student cognitive engagement.

Environmental factors’ relationship to instructional practices and student engagement

When considering student cognitive engagement, it has been important to consider the context in which students are asked to engage. Past research suggests that students are impacted by their environment, including the physical structure of the classroom (Lund et al., 2015). Here, we consider the relationship of the physical structure of the classroom to the SCCEI modes of student engagement; consideration of this relationship provides further insight as to how student reports may be responsive to these course features.

Both the instructor and the student have the potential to be influenced by the classroom structure. Research has suggested that the implementation of student-centered instructional practices may be limited by classroom structure (Henderson & Dancy, 2007). Though classroom layout can be perceived as a barrier, it is not systematically tied to instructional practices that feature student-centered learning (Bathgate et al., 2019; Stains et al., 2018). Some have gone on to suggest instructors may be less likely to abandon newly adopted active learning-based instructional practices when the physical structure of their classroom is modified to specifically accommodate the teaching style (Knaub, Foote, Henderson, Dancy, & Beichner, 2016). When considering how students respond to classroom structure, Foote et al. indicated that studio classrooms are associated with higher levels of active learning (2014). Yet, others have suggested that student-student interactions can be facilitated even in classrooms with fixed, amphitheater-style seating (Lund et al., 2015). With some discrepancy in the literature, it becomes important to better understand how reported modes of student engagement are related to the classroom structure in which they learn.

Correlations between cognitive engagement and instructional practices

While the PIPS has yet to be used in conjunction with measures of student cognitive engagement, factors within the instrument suggest its relevance to engagement. Student-Student Interactions have been studied in the past through collaborative learning environments, finding that there are many circumstances when learning together is beneficial (Nokes-Malach, Richey, & Gadgil, 2015). This aligns with ICAP and the SCCEI, which posit that Interactive Engagement is the most sophisticated and beneficial for student learning (Chi & Wylie, 2014). Student-Content Engagement as measured by PIPS is what the ICAP framework was originally intended to measure—exploring how students make choices to cognitively engage with course content (Chi, 2009). Educators can design and redesign the curriculum to increase student engagement, yet there is often far too little information available about what mode of engagement is achieved with a given assignment. Research continues to assess how Content Delivery Practices impact cognitive engagement, including the influence of flipped versus traditional lecture courses, project or problem-based learning, and online course offerings. Finally, the Formative and Summative Assessment factors have notable relationships with cognitive engagement, including how students choose levels of sophistication to match those with which they are tested.

Overview of research

We see each factor of the PIPS as poised to reveal aspects of student cognitive engagement. Missing from the literature is an empirical correlation between modes of cognitive engagement and instructional practices as measured by instruments with evidence of validity. There is also a lack of empirical evidence on how classroom structure explains variance in the mode at which students cognitively engage. We therefore sought to answer the questions: (1) How does classroom structure differentiate modes of student engagement? and (2) What are the correlations between SCCEI modes of cognitive engagement and PIPS factors of instructional practices? Our aim was that educators may be empowered to make changes in instructional practices in their course with knowledge of how their practices correlate with students’ cognitive engagement, as well as have an understanding of how contextual features of classroom structure may impact their change efforts. To answer research question (1), we conducted an ANOVA and generated data visualizations to represent the means of each of the SCCEI factors’ correspondence with class structure. To address research question (2), we utilized partial correlation analysis to understand the relationship between instructional practices and modes of student engagement. We scored instructional practices using the PIPS and students’ modes of engagement using the SCCEI; the partial correlation analysis allowed us to understand the correlations between the two instruments’ factors within a course. Results indicate a statistically significant relationship between classroom structure and students’ mode of cognitive engagement.

Methods

The “Methods” section below describes participant selection and data analysis. We also outline our sampling strategy, overview the function of the two instruments, and present the items of the PIPS and SCCEI.

Sampling

We aimed to recruit STEM courses from a variety of institutions that differed in their course level, classroom structure, and primary means of instructional practice. In all, over 100 courses were recruited for participation in this study from universities and community colleges across the USA. The sample began as a sample of convenience and was followed by snowball sampling (Berg & Lune, 2014). Instructors were recruited via email by the research team for participation in the study. Of the 100+ courses recruited, 37 indicated their interest in participating in the study. Once an instructor agreed to participate, all students in the course were recruited for participation in the study via the course webpage. In this way, both the instructor and students in the class were considered participants in the study. Thirty-seven courses distributed the SCCEI to their students, and of those, 19 courses generated response rates greater than 10% and were included in the final study. Each of the 19 courses was taught by a different instructor (i.e., no instructor was surveyed and included in analysis twice). There were 645 student responses to the SCCEI, with an overall response rate of 58%; the average response rate across courses was 51% with a standard deviation of 32%. The demographics of the courses in the study can be found in Table 1, and a summary of the student demographics can be found in Table 2.

Table 1 Summary of course demographics
Table 2 Summary of student population demographics

Notes on instrument use

Both the PIPS and the SCCEI underwent a development process to ensure that a set of items measured a single construct or factor. Detailed information on the evidence of validity for the PIPS and SCCEI instruments can be found elsewhere (Barlow et al., in press; Walter et al., 2016). The factors of both the PIPS and SCCEI were derived using oblique rotations, meaning that there is an assumed correlation between factors (e.g., Interactive Engagement is assumed to correlate somewhat with Constructive Notetaking, and Content Delivery Practices with Formative Assessment). The focus of our research was to explore correlations of factors across the two instruments; we do not present the correlation of factors within instruments due to the similarity between samples of the original studies and the participants in the present work. The analyses utilized to develop factors (exploratory and confirmatory factor analyses [EFA and CFA]) assume normal distribution and linearity in the data. While we test the relevance of such assumptions here, we rely on the more robust samples present in the original studies for assumptions of normality and linearity of factors.

Measure of instructional practices

The PIPS facilitated instructor self-report along five distinct factors: Student-Student Interaction, Content Delivery, Student-Content Engagement, Formative Assessment, and Summative Assessment (Walter et al., 2016). Each factor of the PIPS comprises multiple items that measure alignment with a given construct. The factor Student-Student Interaction contains items that measure how instructors facilitate students’ interaction with one another in the classroom, including both the structure of the course and the required activities of students. Content Delivery items relate to how instructors deliver information to students, particularly through how the course is structured. Student-Content Engagement measures how instructors provide students with activities in the course from which they can reflect on or make meaning of the material. The Formative and Summative Assessment factors each address how students are tested within a course; Formative Assessment indicates testing that offers feedback to both instructors and students to shape the trajectory of learning, whereas Summative Assessment measures formal testing and grading within a course. The items and descriptions as they relate to each of the five PIPS factors can be seen in Table 3 below.

Table 3 The 5-factor model of the PIPS survey

Instructors were given access to the PIPS via Qualtrics (2005), an online survey platform. Each instructor was directed to respond to items with regards to a single term of a single course (the same course where students completed the SCCEI). The PIPS was deployed to instructors when the term was approximately 75% completed. This allowed instructors to reflect on a term of a course without generating undue pressure at the completion of the term. A randomized order of items was used to minimize fatigue effect. The response scale was the original scale from the study—a 5-point Likert; a score of zero was given to not at all descriptive of my teaching, with values increasing by one up to a score of four for very descriptive of my teaching. Walter et al. explicitly indicate the ways in which the PIPS is to be scored: values relating to items for each factor are summed, then divided by the total value possible for that factor. Thus, for each factor, faculty members were given a score between 0 and 100 (i.e., a percent alignment with the factor). No items required reverse coding. Higher scores were indicative of a more descriptive fit of the factor, not necessarily more preferable for engagement.
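To illustrate the scoring procedure described above, the following sketch (not the authors' code) sums each instructor's item responses per factor and rescales the sum to a 0–100 percent alignment. The item names and item-to-factor mapping are hypothetical placeholders; only the arithmetic follows the description above.

```python
# Minimal sketch of the PIPS factor-scoring rule (hypothetical item names and mapping).
import pandas as pd

# One row per instructor; items scored 0 ("not at all descriptive") to 4 ("very descriptive").
responses = pd.DataFrame({
    "pips_q01": [3, 1], "pips_q02": [4, 2], "pips_q03": [2, 0], "pips_q04": [4, 3],
})

factor_items = {
    "student_student_interactions": ["pips_q01", "pips_q02"],  # hypothetical mapping
    "content_delivery": ["pips_q03", "pips_q04"],
}

def pips_factor_scores(df: pd.DataFrame, mapping: dict) -> pd.DataFrame:
    """Sum item responses per factor and rescale to a 0-100 percent alignment."""
    scores = {}
    for factor, items in mapping.items():
        max_possible = 4 * len(items)  # each item has a maximum value of 4
        scores[factor] = df[items].sum(axis=1) / max_possible * 100
    return pd.DataFrame(scores)

print(pips_factor_scores(responses, factor_items))
```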

In addition to the questions related to instructional practices, a question was added to the instructor instrument to better understand the physical structure of the classroom in which instruction took place. Instructors were asked what is the physical structure of the course’s primary classroom? and were given the options of individual desks, students facing instructor; rows of tables, students facing instructor; and pods of desks/tables, students facing other students. Previous work used similar response options for reporting classroom structure (Lund et al., 2015). This question facilitated an initial study of the relationship between student cognitive engagement and physical classroom structure.

Measure of student modes of cognitive engagement

In addition to the measurement of instructional practices, we sought to measure students’ cognitive engagement at the course-by-course level. To understand modes of cognitive engagement of students within a course, it is important to clearly define differentiable modes. The ICAP framework provides a foundational understanding of modes of cognitive engagement (Chi & Wylie, 2014), with slight modifications being included from student self-report findings (Barlow et al., in press). Interactive Engagement or Interactivity with Peers references a dialogue between two students in which they add further definition to a course construct via an equally participatory conversation (Chi & Wylie, 2014). Interactively Engaged students will co-create knowledge and report high alignment with I discuss my position with others regarding the course content. Constructively engaged students will generate knowledge beyond that which is presented to them in a course. The SCCEI measures Constructive Engagement as students take notes (Constructive Notetaking); these students will integrate information and have a high alignment with I add my own notes to the notes provided by the teacher. Active Engagement, according to Chi and Wylie (2014), requires focused attention and a basic level of information manipulation (i.e., underlining or highlighting). Work from the SCCEI measures two components of Active Engagement: Active Notetaking and Active Processing. Active Notetaking is related to overt activities during notetaking which are indicative of an underlying cognitive state, including statements of I take verbatim notes (meaning word for word directly from the board/PowerPoint slide/doc camera, etc.) (Barlow et al., in press). Active Processing is directly related to students’ reports on their own cognition, where I think about previous concepts covered in the course would be reported with high alignment. Active Processing highlights the focused attention component of Chi and Wylie’s definition, while Active Notetaking emphasizes the basic information manipulation. Passive Engagement is characterized by orienting towards and receiving the course content (Chi & Wylie, 2014). Passively Engaged students will listen without doing anything else and report I listen when my teacher or whomever is speaking.

Both the SCCEI and the original ICAP framework proposed by Chi & Wylie assume that students who are more deeply cognitively engaged will fall into Interactive Engagement, while students who are less cognitively engaged will be considered Passively Engaged. In this sense, the student measurement assigns a value assessment to modes of engagement: Interactive > Constructive > Active > Passive. The SCCEI differentiates modes of engagement along factors; items as they relate to each mode of engagement can be found in Table 4.

Table 4 The SCCEI used to measure student cognitive engagement

Students were given access to the survey via the course website and asked to participate by their instructors. Student responses were anonymous, although some students forwent anonymity to earn extra credit in their course (courses where extra credit was offered provided an equally weighted alternative extra credit assignment). The randomized survey was administered via Qualtrics (2005) when the term was approximately 75% complete.

We sought to determine how the reported instructional practices were related to the overall student cognitive engagement present in the course, as reported by students. Therefore, courses, not students, were scored as a result of the SCCEI. Modes of engagement were measured by two 3-point Likert scales. For each item, students were asked the frequency with which they behave/think in such a manner both inside and outside of the classroom. Students were scored using only the in-class frequency scale, the scale used in previous instrument development studies. A score of zero corresponded with a low frequency (few to no lecture periods), while a score of two was given to the highest level of frequency (most lecture periods). For each course, five sums were generated, one for each of the cognitive engagement factors. These sums were then divided by the total possible score for each factor (the number of student responses times the total possible value in a given mode for each student). Similar to instructor survey scoring, courses also received scores ranging from 0 to 100 that pertained to their mode of engagement (i.e., their percent alignment with each factor). Data from the instructor and student surveys were then combined for further analysis.
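The course-level scoring described above can be sketched in the same way. In the sketch below (not the authors' code; column names and item sets are hypothetical), each course's score on a factor is the sum of all of its students' responses to that factor's items, divided by the number of respondents times the maximum possible value per student, expressed as a percent.

```python
# Minimal sketch of course-level SCCEI scoring (hypothetical columns and item set).
import pandas as pd

# One row per student response; in-class frequency items scored 0 (few to no lecture periods)
# to 2 (most lecture periods).
students = pd.DataFrame({
    "course_id": ["A", "A", "B", "B", "B"],
    "sccei_q01": [2, 1, 0, 2, 1],  # e.g., an Interactivity with Peers item
    "sccei_q02": [1, 2, 0, 1, 0],
})

interactivity_items = ["sccei_q01", "sccei_q02"]  # hypothetical item set for one factor

def course_factor_score(df: pd.DataFrame, items: list) -> pd.Series:
    """Per course: total of student responses on the factor's items divided by
    (number of respondents x maximum possible value per student), as a percent."""
    grouped = df.groupby("course_id")
    total = grouped[items].sum().sum(axis=1)
    max_possible = grouped.size() * 2 * len(items)  # each item maxes at 2
    return total / max_possible * 100

print(course_factor_score(students, interactivity_items))
```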

ANOVA data analysis

We sought to analyze the relationship between class structure and SCCEI scores; we did not consider the relationship of class structure to PIPS scores due to the small sample of instructors present in this study. As each student generated a score for each of the five SCCEI factors, the study can be considered a crossover repeated measures experimental design (Ramsey & Schafer, 2002). Therefore, a two-way repeated measures analysis of variance (ANOVA) was chosen to individually evaluate the effect of the physical classroom structure on student engagement scores across the five SCCEI factors. Origin Pro 9.4, statistical analysis software, was used to conduct the ANOVA and post hoc tests. A Tukey HSD post hoc analysis allowed us to compare the mean engagement score differences pairwise for the three classroom structures within each of the five SCCEI factors, and significance was determined using these results. Additionally, visual representations were generated to present the means of each comparison.
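The sketch below illustrates only the pairwise comparison step of this analysis; it is not the authors' procedure (which used Origin Pro), and a full two-way repeated measures ANOVA with a Greenhouse-Geisser correction would require dedicated software. File and column names are hypothetical.

```python
# Tukey HSD comparison of classroom structures within one SCCEI factor (hypothetical data).
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long format: one row per student per SCCEI factor.
scores = pd.read_csv("sccei_scores_long.csv")  # columns: student_id, factor, structure, score

subset = scores[scores["factor"] == "Interactivity with Peers"]
tukey = pairwise_tukeyhsd(endog=subset["score"],
                          groups=subset["structure"],  # individual desks / rows of tables / pods
                          alpha=0.05)
print(tukey.summary())
```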

Correlation data analysis

We posited that factors measured by the PIPS and SCCEI would indeed bear a relationship. To test this relationship, statistical analyses were considered for their relevance to the dataset. Correlation analysis is useful to symmetrically explore factors that are independent (Lindley, 1990). The PIPS asks instructors to report on their instructional practices and the SCCEI asks students to report on their cognitive engagement within the classroom. While related, the factors of these two instruments are independent; students were asked to report how they elected to engage in a course, not on how the instruction impacted their engagement. Therefore, a correlation analysis was utilized to determine the relationship between the PIPS and SCCEI factors. A partial correlation was selected—partial correlation is useful when it is desirable to remove the effect of a selected variable when determining the association of the remaining factors (Cohen, Cohen, West, & Aiken, 2003). In this analysis, the effect of class was removed in order to determine correlations between the PIPS and SCCEI factors across the sample.

SPSS version 25 was used to conduct a parametric correlation analysis. Parametric correlation uses Pearson’s r as an indicator of significance, requiring normality and linearity in the dataset. Linearity was visually inspected with scatterplots of each factor dataset. The Shapiro-Wilk test was used to test for normality, as it has been shown to be useful for small sample sizes (n < 50) (Razali & Wah, 2011). A significant Shapiro-Wilk result indicates a non-normal distribution at the 95% confidence level (p < 0.05) (Shapiro & Wilk, 1965).
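As a rough illustration of these checks and of the partial correlation itself, the sketch below (not the authors' SPSS procedure; file and column names are hypothetical) runs a Shapiro-Wilk test on each factor score and then computes a partial Pearson correlation by regressing each factor on the control variable and correlating the residuals.

```python
# Normality check and partial correlation between one PIPS and one SCCEI factor
# (hypothetical file and column names; not the authors' SPSS procedure).
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("course_factor_scores.csv")  # one row per course: pips_*, sccei_*, class_id

# Shapiro-Wilk: a significant p-value (p < 0.05) flags a non-normally distributed factor.
for col in ["pips_content_delivery", "sccei_active_notetaking"]:
    w, p = stats.shapiro(df[col])
    print(f"{col}: W = {w:.3f}, p = {p:.3f}")

def partial_corr(x, y, z):
    """Pearson correlation of x and y after removing the linear effect of control variable z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x regressed on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y regressed on z
    return stats.pearsonr(rx, ry)

r, p = partial_corr(df["pips_content_delivery"].to_numpy(dtype=float),
                    df["sccei_active_notetaking"].to_numpy(dtype=float),
                    df["class_id"].to_numpy(dtype=float))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```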

Results and discussion

Here, we present statistical evidence in the form of descriptive statistics, ANOVAs, and correlation analyses to answer the research questions: (1) How does classroom structure differentiate modes of student engagement? and (2) What are the correlations between SCCEI modes of cognitive engagement and PIPS factors of instructional practices?

Overview of data

Presentation of the descriptive statistics serves a dual purpose: first, the reader is empowered with a foundational understanding of the dataset and how factors relate to one another; second, descriptive statistics provide support for the subsequent statistical testing on the dataset. Scores for each factor are presented as a percent alignment, with a zero-score indicating the factor does not at all describe the teaching practices (PIPS) or does not frequently occur in the class period (SCCEI). A score of 100 indicated perfect alignment with the factor. The n for all factors was 19, as each instructor generated a single score for each factor and each class of student respondents generated a single score for each factor. Descriptive statistics for each factor are presented in Table 5 below.

Table 5 Descriptive statistics

The PIPS factors showed relatively large standard deviations, averaging 18%. This indicates that the sample was relatively diverse in nature, particularly with respect to Content Delivery, Student-Student Interactions, and Summative Assessment; instructors showed substantial variation in their alignment to each factor. Formative Assessment and Student-Content Engagement were less diverse across the sample, suggesting that instructors in the study were more similarly aligned along these factors. Means remained relatively similar across factors, with Student-Content Engagement being notably higher than other factors. The Shapiro-Wilk value was not significant for any of the PIPS factors, indicating normal distribution of the dataset.

Means of the SCCEI factors depicted a generally decreasing trend as modes of engagement increased in sophistication; Interactivity with Peers was found to have the smallest mean and Passive Processing the greatest. This suggests that some students remain at a particular engagement level as modes of cognitive engagement deepen. These findings align with the underpinnings of ICAP theory, which is hierarchical in nature: students who are Constructively engaged would be actively and passively engaged as well (Chi & Wylie, 2014).

The Shapiro-Wilk value was significant only for Passive Processing. This indicates that within this sample, Passive Processing was not normally distributed. Visual inspection indicates a negative skew of responses, where students indicated consistent high alignment with Passive Processing. While some have affirmed that normality is important to understanding the power of Pearson’s significance (Kowalski, 1972), others have suggested normality is a needless assumption (Nefzger & Drasgow, 1957), particularly when samples do not largely deviate from normal (Edgell & Noon, 1984). Because our sample did not largely deviate from normal distribution, we proceeded with parametric correlation analysis, using Pearson’s r to indicate significance.

ANOVA results

The descriptive statistics presented above represented the five student cognitive engagement factors of the SCCEI and the five instructional practice factors of the PIPS across all courses. The instructors of the courses studied varied in the physical environment in which they taught. The mean difference in student engagement scores resulting from the interaction of classroom structure and the five SCCEI modes of engagement was evaluated using two-way repeated measures ANOVA. Descriptive statistics for data used in the ANOVA are presented in Table 6. A Greenhouse-Geisser corrected repeated measures ANOVA determined that there was a statistically significant effect on student engagement scores for the interaction of classroom structure and SCCEI factors [F(6.50, 2068.47) = 6.60, p < 0.001]. Results from the Tukey post hoc analysis are presented in Table 7; bar graphs are presented to help visualize trends on the impact of course structure on SCCEI scores.

Table 6 Descriptive statistics of ANOVA datasets
Table 7 Tukey results indicating the significance of pairwise comparisons

Classroom structure was considered as a potential influence on student cognitive engagement factors; it stands to reason that students may feel more or less comfortable in implementing particular types of learning activities based on the physical structure of the seating (e.g., discussions with classmates may be more frequent when students are seated in pods of desks). Students reported significantly more interactivity when seated in pods than when seated in either individual desks or rows of tables. This supports reports of barriers suggested by instructors, which indicate that seats bolted to the floor make interactivity more difficult (Dancy & Henderson, 2008). Additionally, students reported significantly higher Constructive Notetaking when seated at rows of tables than when in individual desks. Though the reason for this difference remains unclear, one possibility is students have more physical room available to them when seated at tables than at smaller desks. When in pods of desks, students reported significantly higher levels of Active Processing than when in individual desks. As can be seen in Fig. 1, rows of tables exhibited advantages in notetaking, while exhibiting higher engagement scores (not always significantly) in nearly every category when compared with individual desks. Pods of desks resulted in lower (though not always significantly lower) engagement in modes that required notetaking; for modes of engagement where interaction or processing was required, mean scores of students in pods were at or near the greatest. These results arguably point to a need to minimize individual desks in classrooms and instead provide rows of tables to facilitate student engagement through notetaking and pods to facilitate student engagement through interactivity.

Fig. 1 Variance in SCCEI factor means based on classroom structure

Correlations between instruments

A partial correlation analysis was conducted in SPSS version 25, using Pearson’s r to indicate significance, to generate a correlation matrix. The effect of class was removed by indicating class number as a control variable. The development of both the PIPS and SCCEI suggests some correlation between factors within either instrument; we therefore removed these correlations and their significance values from the matrix (e.g., the correlation between the PIPS factors Summative Assessment and Formative Assessment is not shown). In Table 8, we present the correlation matrix for factors between the PIPS and SCCEI. Correlations represent the strength of the relationship between factors, ranging from − 1 to 1. Large negative values indicate a strong inverse relationship between the factors—as one factor increases, the other factor decreases. Large positive values indicate that factors are directly correlated. Significance indicates the percent likelihood that the correlation is a result of error. Confidence levels can be derived by converting significance into percentages and subtracting from 100%. At the 95% confidence level, three correlations were found to be significant. At the 90% confidence level, an additional correlation was observed to be significant.

Table 8 Partial correlation matrix of SCCEI and PIPS

Active Notetaking was seen to have a strong correlation with Content Delivery. We see this as evidence of agreement between how instructors report on their practices and how students respond to them. Content Delivery items include I guide students through major topics as they listen and take notes and My class sessions are structured to give students a good set of notes; Active Notetaking items include I take verbatim notes (meaning word for word directly from the board/PowerPoint, etc.). A positive correlation between the factors indicates that as instructors report stronger agreement with items suggesting they provide students with structured notes, students likewise report increased frequency of copying notes from the board. Active Notetaking was also positively correlated with Summative Assessment at the 95% confidence level. Summative Assessment items include My test questions contain well-defined problems with one correct solution. Results suggest that as instructors report increasingly high agreement with providing assessment with singular correct answers, students report increasing frequency of taking verbatim notes on course content. This aligns with work that suggests students’ learning strategies are influenced by the assessment demands of the course (Lucas & Ramsden, 1992).

A negative correlation at the 95% confidence level was observed between Passive Processing and Formative Assessment. Passive Processing items include I follow along with the activities that take place during the course, with high alignment indicating students frequently listen to instruction. Formative Assessment items include I use student assessment results to guide the direction of my instruction during the semester, with high alignment indicating soliciting feedback in the form of assessment is descriptive of the course. A negative correlation between Passive Processing and Formative Assessment reveals that as instructors increase their feedback in the form of assessment, their students report less alignment with listening or following along in class. While at the outset this may appear counterintuitive, we see alignment between these findings and the literature. Just as Chi and Wylie (2014) suggested in their original work with ICAP, Passive Engagement is simply orientation towards instruction. By definition, active learning goes beyond listening and is extended to higher-order learning through activity (Freeman et al., 2014). This is echoed in work surrounding formative assessment—as students are presented with assessment that directs their learning, personal reflection and extension of knowledge are required (Kulasegaram & Rangachari, 2018). Here, we echo these findings and propose that as instructors are more aligned with Formative Assessment, their students will report lower frequencies of simply listening through Passive Processing in their courses.

At the 90% confidence level, the correlation between Interactivity with Peers and Student-Student Interactions was seen to be significant. Interactivity with Peers items included I discuss my position with others regarding the course content, and Student-Student Interactions items included I structure class so that students constructively criticize one another’s ideas. This correlation is strong evidence for the direct influence of instructional practices on student cognitive engagement; as instructors reported that facilitating student interaction was descriptive of their courses, students reported meaningfully sharing their ideas with their peers. This supports work that indicates instructional activities can either support or inhibit collaboration in the classroom (Nokes-Malach et al., 2015). Furthermore, this suggests that both instruments are indeed measuring the same construct (interactivity in the classroom) with their respective factors. The correlation of a single construct measured by factors in two separate instruments points towards the usefulness of interpreting results based on the findings of multiple independent instruments.

Conclusions

STEM educators have been observed to be resistant to change, notably as they are prompted to implement research-based instructional practices (Henderson & Dancy, 2007). Active learning techniques to increase student cognitive engagement are well-researched instructional practices (Freeman et al., 2014; Prince, 2004; Smith et al., 2005)—practices the STEM education community sees great value in instructors implementing. Our research aligns itself with existing work in the STEM education community on the development of instruments to measure both instructional practices (PIPS) and student cognitive engagement (SCCEI).

Researchers have suggested instructors need contextual understanding of how to implement strategies, lest they deem them ineffective (Hutchinson & Huberman, 1994). Others have suggested resistance to change emerges as instructors believe their students oppose research-based strategies in the classroom—particularly interactivity with their peers (Henderson & Dancy, 2011). We utilized both the PIPS and SCCEI in an effort to provide instructors with a more holistic understanding of their courses by correlating their own report on their practices to their students’ experience of them. Our results showed that indeed as instructors reported greater alignment with facilitating Student-Student Interactions, students reported higher alignment with lecture periods where they interacted with their peers (Interactivity with Peers). This may be one early step in aiding instructors in understanding the contextualization of instructional practices, while breaking down the notion that students are unwilling to engage interactively.

Dancy and Henderson also suggested that in order to facilitate change in instructional practices, STEM researchers ought to connect their models with models in their discipline and the broader STEM education research community (Dancy & Henderson, 2008). By utilizing previously developed instruments from the STEM community, we participated in a foundational movement towards connecting the outcomes of multiple research projects. The use of tools that already have evidence of validity not only adds credibility to such tools but also expands their findings. Here, we noted the significant effect of class structure on modes of student cognitive engagement. Our findings reinforced those in the literature that discuss the influence of classroom structure on instructors’ practices; such significance is supporting evidence of the interconnectivity of instructional practices and student cognitive engagement.

We found that Content Delivery was significantly correlated with Active Notetaking. This becomes important as the continual development of these instruments and others is considered. Though empirical validation is often extensive during the instrument development process—as it was for both the PIPS and SCCEI—clarity on the construct being measured can be overlooked. Using two instruments measuring related constructs in tandem allows for greater clarity on the construct measured by either instrument. Here, we gained a better understanding of what was actually being measured by Content Delivery and Active Notetaking due to their positive correlation—instructors leading their students through content and students responding in turn by taking notes.

More broadly, the findings of this work suggest future instrument development ought to consider its alignment not just with instruments measuring similar phenomena, but also with those measuring related and influenced phenomena. While both the PIPS and SCCEI are independently useful to instructors, their use together tells more than either could apart. From the study, it was seen that students are responsive to their instructor’s practices, including assessment, content delivery, and peer work in class. Though student cognitive engagement and instructional practices are indeed individual constructs, their interconnectivity becomes important. As instructors, departments, and institutions seek to implement best practices in the classroom, a more holistic understanding of such constructs may be key. More work is needed to explore not only how cognitive engagement is related to instructional practice, but also how other constructs important to STEM are related.

Limitations and future work

The small sample of this study inherently limits the reliability of the work. Additionally, classrooms with low student response rates are susceptible to misrepresentation. That is to say, when only 10% of the students respond (the minimum response rate in this study), the perceptions of 90% of the students are not included in the analysis. While 10% of the students do provide some insight into the classroom engagement experience, greater response rates would increase the integrity of the findings. Future work may wish to include only classes with higher response rates; this emphasis will likely result in other concessions, such as only including courses where instructors are willing to offer extra credit or courses with smaller class sizes where higher response rates are more achievable. Future work may also seek to increase the number of courses included in the study to add validity evidence to the findings presented here.

We cannot make claims that the PIPS and SCCEI will always correlate in a consistent manner; instead, we present evidence for continued study to better understand how these instruments, and others, may be utilized together to better understand STEM courses. Data collected does not represent the objective reality of either the classroom or students; rather, data is indicative of the perspectives held by the participants of the study. Future studies may find their participants to hold different perspectives. The ongoing discovery of how instructors and students report on their practices and modes of engagement respectively adds to the growing conversation on how both can be optimized for student learning. We welcome studies that continue to explore these relationships. In particular, we suggest implementing the PIPS and SCCEI in a broad range of STEM courses and correlating the results. Furthermore, we support work with these instruments and others that measure related constructs and the expansion of such work on to other related constructs across STEM.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

STEM: Science, Technology, Engineering, and Mathematics

PIPS: Postsecondary Instructional Practices Survey

SCCEI: Student Course Cognitive Engagement Instrument

ANOVA: Analysis of variance

References

  • American Association for the Advancement of Science (AAAS). (2012). Describing and measuring undergraduate STEM teaching practices. A report from a national meeting on the measurement of undergraduate science, technology, engineering and mathematics (STEM) teaching. Washington, DC: AAAS.

  • Appleton, J. J., Christenson, S. L., Kim, D., & Reschly, A. L. (2006). Measuring cognitive and psychological engagement: Validation of the student engagement instrument. Journal of School Psychology, 44, 427–445. https://doi.org/10.1016/j.jsp.2006.04.002

  • Barlow, A. J., Lutz, B. D., Pitterson, N. P., Hunsu, N., Adesope, O., & Brown, S. A. (in press). Development of the Student Course Cognitive Engagement Instrument (SCCEI).

  • Bathgate, M. E., Aragón, O. R., Cavanagh, A. J., Waterhouse, J. K., Frederick, J., & Graham, M. J. (2019). Perceived supports and evidence-based teaching in college STEM. International Journal of STEM Education, 6(1). https://doi.org/10.1186/s40594-019-0166-3

  • Berg, B. L., & Lune, H. (2014). Qualitative research methods for the social sciences (8th ed.). https://doi.org/10.2307/1317652

  • Chi, M. T. H. (2009). Active-Constructive-Interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73–105. https://doi.org/10.1111/j.1756-8765.2008.01005.x

  • Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823

  • Cohen, J., Cohen, P., West, S., & Aiken, L. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). New York, NY: Routledge.

  • Dancy, M. H., & Henderson, C. (2008). Barriers and promises in STEM reform. National Academies of Science Promising Practices Workshop.

  • Dard, D. B., Lison, C., Dalle, D., & Boutin, N. L. (2010). Predictors of student’s engagement and persistence in an innovative PBL curriculum: Applications for engineering education. International Journal of Engineering Education, 26(3), 1–12.

  • Edgell, S. E., & Noon, S. M. (1984). Effect of violation of normality on the t test of the correlation coefficient. Psychological Bulletin, 95(3), 576–583. https://doi.org/10.1037/0033-2909.95.3.576

  • Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57–72.

  • Foote, K. T., Neumeyer, X., Henderson, C., Dancy, M. H., & Beichner, R. J. (2014). Diffusion of research-based instructional strategies: The case of SCALE-UP. International Journal of STEM Education, 1(1), 1–18. https://doi.org/10.1186/s40594-014-0010-8

  • Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. Retrieved from http://www.jstor.org/stable/3516061

  • Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111

  • Heller, R. S., Beil, C., Dam, K., & Haerum, B. (2007). Student and faculty perceptions of engagement in engineering. Journal of Engineering Education, 253–262.

  • Heller, R. S., Beil, C., Dam, K., & Haerum, B. (2010). Student and faculty perceptions of engagement in engineering. Journal of Engineering Education, 99(3), 253–261. https://doi.org/10.1002/j.2168-9830.2010.tb01060.x

  • Henderson, C., & Dancy, M. H. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Special Topics - Physics Education Research, 3(2), 020102. https://doi.org/10.1103/PhysRevSTPER.3.020102

  • Henderson, C., & Dancy, M. H. (2011). Increasing the impact and diffusion of STEM education innovations. Characterizing the Impact and Diffusion of Engineering Education Innovations Forum. Retrieved from http://create4stem.msu.edu/sites/default/files/discussions/attachments/HendersonandDancy10-20-2010.pdf

  • Hutchinson, J. R., & Huberman, M. (1994). Knowledge dissemination and use in science and mathematics education: A literature review. Journal of Science Education and Technology, 3(1).

  • Knaub, A. V., Foote, K. T., Henderson, C., Dancy, M., & Beichner, R. J. (2016). Get a room: The role of classroom space in sustained implementation of studio style instruction. International Journal of STEM Education, 3(1). https://doi.org/10.1186/s40594-016-0042-3

  • Kowalski, C. J. (1972). On the effects of non-normality on the distribution of the sample product-moment correlation coefficient. Journal of the Royal Statistical Society, Series C (Applied Statistics), 21(1), 1–12.

  • Kulasegaram, K., & Rangachari, P. K. (2018). Beyond “formative”: Assessments to enrich student learning. Advances in Physiology Education, 42(1), 5–14. https://doi.org/10.1152/advan.00122.2017

  • Lindley, D. V. (1990). Regression and correlation analysis. In Time Series and Statistics (pp. 237–243). https://doi.org/10.1007/978-1-349-20865-4_30

  • Lucas, P., & Ramsden, P. (1992). Learning to teach in higher education. British Journal of Educational Studies. https://doi.org/10.2307/3120902

  • Lund, T. J., Pilarz, M., Velasco, J. B., Chakraverty, D., Rosploch, K., Undersander, M., & Stains, M. (2015). The best of both worlds: Building on the COPUS and RTOP observation protocols to easily and reliably measure various levels of reformed instructional practice. CBE-Life Sciences Education, 14(2), 1–12. https://doi.org/10.1187/cbe.14-10-0168

  • Meece, J. L., Blumenfeld, P. C., & Hoyle, R. H. (1988). Students’ goal orientations and cognitive engagement in classroom activities. Journal of Educational Psychology, 80(4), 514–523.

  • Nefzger, M. D., & Drasgow, J. (1957). The needless assumption of normality in Pearson’s r. American Psychologist, 12(10), 623–625. https://doi.org/10.1037/h0048216

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

  • Nokes-Malach, T. J., Richey, E., & Gadgil, S. (2015). When is it better to learn together? Insights from research on collaborative learning. Educational Psychology Review, 27. https://doi.org/10.1007/s10648-015-9312-8

  • Ohland, M. W., Sheppard, S. D., Lichtenstein, G., Eris, O., Chachra, D., & Layton, R. (2008). Persistence, engagement, and migration in engineering programs. Journal of Engineering Education, 260–278. https://doi.org/10.1002/j.2168-9830.2008.tb00978.x

  • Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18. https://doi.org/10.1007/s10648-006-9029-9

  • Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x

  • Qualtrics. (2005). Qualtrics. Provo, UT: Qualtrics.

  • Ramsey, F., & Schafer, D. (2002). The statistical sleuth. Pacific Grove, CA: Duxbury.

  • Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33.

  • Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3), 591–611. Retrieved from https://pdfs.semanticscholar.org/1f1d/9a7151d52c2e26d35690dbc7ae8098beee22.pdf

  • Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1–13. https://doi.org/10.1080/00461520.2014.1002924

  • Smith, K. A., Sheppard, S. D., Johnson, D. W., & Johnson, R. T. (2005). Pedagogies of engagement: Classroom-based practices. Journal of Engineering Education, 94(1), 87–101. https://doi.org/10.1002/j.2168-9830.2005.tb00831.x

  • Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., … Young, A. M. (2018). Anatomy of STEM teaching in North American universities. Science. https://doi.org/10.1126/science.aap8892

  • Stump, G., Hilpert, J., Husman, J., Chung, W.-T., & Kim, W. (2011). Collaborative learning in engineering students: Gender and achievement. Journal of Engineering Education, 100(3), 475–497.

  • Walter, E. M., Henderson, C. R., Beach, A. L., & Williams, C. T. (2016). Introducing the Postsecondary Instructional Practices Survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. CBE-Life Sciences Education, 15(4), ar53. https://doi.org/10.1187/cbe.15-09-0193

  • Wang, X., Yang, D., Wen, M., Koedinger, K., & Rosé, C. P. (2015). Investigating how student’s cognitive behavior in MOOC discussion forums affect learning gains. International Conference on Educational Data Mining. Retrieved from https://files.eric.ed.gov/fulltext/ED560568.pdf

  • Williams, C. T., Walter, E. M., Henderson, C., & Beach, A. L. (2015). Describing undergraduate STEM teaching practices: A comparison of instructor self-report instruments. International Journal of STEM Education, 2(1), 18. https://doi.org/10.1186/s40594-015-0031-y


Acknowledgements

The authors would like to thank all of the participants of the study, including both instructors who were willing to distribute the survey in their courses and students who agreed to complete the SCCEI/PIPS.

Funding

This project was funded under NSF grant number 1664250. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Author information


Contributions

AB was the primary author of the work in this manuscript. AB managed the recruitment, distribution, and statistical analysis of the surveys for both instructors and students. AB conducted the write-up of the findings of this study. SB participated in data collection, analysis, and write up of the work via oversight and weekly discussions. SB developed the research questions regarding the exploration of correlations between two survey instruments. Additionally, SB provided multiple rounds of revisions and review of the work. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Allyson Barlow.

Ethics declarations

Ethics approval and consent to participate

All ethics were overseen by the Institutional Review Board; participants were invited to consent to participate and allowed to opt out of participation with no consequence at any time.

Consent for publication

All participants were informed that consent included agreement to the publication of findings in aggregate. No personalized data is shared in the publication materials.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Barlow, A., Brown, S. Correlations between modes of student cognitive engagement and instructional practices in undergraduate STEM courses. IJ STEM Ed 7, 18 (2020). https://doi.org/10.1186/s40594-020-00214-7

