
Barriers to collecting student participation and completion data for a national STEM education grant program in the United States: a multiple case study

Abstract

Background

Billions of dollars are spent annually on grant-funded STEM (science, technology, engineering, and mathematics) education programs. These programs help students stay on track toward STEM careers when standard educational practices do not adequately prepare them for these careers. It is important to know that reliable and accurate student participation and completion data are being collected about these programs. This multiple case study investigates how student data are collected and reported for a national STEM education program in the United States, the National Science Foundation (NSF) Advanced Technological Education (ATE) program. Our overall aim is to provide insights to funding agencies, STEM education faculty, and others who are interested in addressing issues related to the collection and reporting of student participation and completion data within their own contexts. Emphasis is placed on the barriers encountered in collecting participation and completion data, particularly with regard to unduplicated participation counts and marketable credential data. The ATE program was selected for this study because there is already a mechanism (known as the ATE Survey) in place for annually collecting systematic data across all projects within the program.

Results

A multiple case study, including interviews with principal investigators, allowed for in-depth analysis of the ATE Survey’s point-in-time data on project-level participation in various activities, and for identification of the following barriers to tracking student-level data: a lack of time and help to gather these data, the lack of a consistent system for tracking students across different institutions, and a perceived lack of guidance from the funding agency about what data to track. We also found that different data are needed from different projects to determine a project’s true impact; defining “success” the same way across all projects is inadequate.

Conclusions

Although these findings cannot be generalized to the larger ATE population due to the limited sample size, they provide specific insights into the various barriers that projects encounter in collecting participation and completion data.

Introduction

STEM education has been defined as a standards-based approach in which science, technology, engineering, and mathematics are taught in an integrated manner and are addressed and treated as being “one lively, fluid study” (Brown & Brown, 2011). The National Science Foundation (NSF) began emphasizing STEM in the United States in the early 1990s, although the approach did not experience widespread support at first (Martín‐Páez et al., 2019). Martín‐Páez et al. (2019), citing Friedman (2005) and Sanders (2009), argued that this began to change when the first graduate degree in STEM education was offered by Virginia Tech in 2005. The primary goal of this first STEM degree was to develop STEM educators, leaders, scholars, and researchers “prepared as catalysts of change for teaching, disseminating, and investigating integrative approaches to STEM education” (Wells, 2013). STEM education has since expanded both nationally and internationally and has gained increased recognition from legislators, educational administrators, and education researchers (Li et al., 2020; White, 2014). The U.S. federal government spends billions of dollars on STEM education programs annually (Li, 2014), and the NSF plays a key role in these expenditures (Gonzalez, 2012).

As a result of interest in and support for STEM on the part of both government agencies and various private grant funding institutions, STEM education scholarship has developed tremendously in recent years, particularly since 2010 (Li et al., 2020). STEM education has been shown to be effective in preparing a diverse range of students for an ever-growing number of STEM career opportunities (Li et al., 2019) and has been linked to the economic success of nations (McGarr & Lynch, 2017). Other benefits of STEM education have also been identified, including strengthening students’ ability to transfer acquired knowledge to other contexts, contributing to the development of student creativity, and increasing students’ interest in and commitment to STEM subjects (Martín‐Páez et al., 2019).

The growing body of research has identified issues affecting the recruitment and retention of STEM students, including the quality of STEM programming in secondary schools. According to one study, most college students pursuing STEM degrees make the decision to do so in high school or earlier, and only 20% believe their high school education prepared them “extremely well” to succeed in STEM fields (Ejiwale, 2013). In addition, Shahali et al. (2019) found that STEM interest among middle school students increased after participation in a hands-on engineering design program but was not sustained two years after the program. The authors attributed the lack of sustained interest to relatively poor STEM instruction after the intervention ended.

The importance of identifying factors that affect the recruitment and retention of students in STEM fields early in their education is also reflected in the large number of assessment instruments developed recently to measure students’ STEM perspectives. These instruments focus on measuring the STEM interest levels, attitudes, and self-efficacy of K-12 students (e.g., Luo et al., 2021; Roller et al., 2020; Summers & Abd-El-Khalick, 2018). They have also addressed students’ science capital, that is, the academic, social, and cultural factors that influence students’ interest and participation (or lack thereof) in STEM (Jones et al., 2021).

Grant-funded STEM programs can be a vital component in addressing student retention in STEM, as they can help underperforming students stay on track (Armour-Garb, 2017). This is likely a factor that contributes to the significant amount of funding that goes into supporting these programs. Given this level of commitment, it is important that reliable and accurate data about these programs are being collected.

This research uses a multiple case study approach. Data were collected from a sample of nine case projects funded by the NSF Advanced Technological Education (ATE) program. This program was selected because it already had a mechanism in place for systematically collecting data across all projects within the program.

The study examines the extent to which participation and completion data are being collected, particularly data related to students obtaining marketable credentials (such as degrees and industry certificates) that help them successfully enter the STEM workforce.

This paper also examines the barriers that project principal investigators encounter in collecting and reporting participation and completion data, and considers what can be done to improve the process. While principal investigators can share data from their projects in many ways, including depositing them in a data archive, making them available online through an institutional website, or sharing them directly with other researchers (Irwin, 2012), such data sharing is not typically part of their formal training (Logan, 2021). Obtaining these data for all projects across a STEM grant program can be a challenging undertaking, particularly because a grant program does not typically provide a standard framework for reporting data consistently across all of its funded projects. Principal investigators may, therefore, have many different ways of collecting and reporting data, which can make it difficult to get a clear picture of the impact of a program as a whole.

The overall aim of our study is to provide insights to public and private funding agencies, STEM education faculty, and others who are interested in addressing issues related to the collection and reporting of student participation and completion data within their own contexts. Our study was inspired by the reporting requirements in Public Law 115–402, the Innovations in Mentoring, Training, and Apprenticeship Act (2018). This act tasked NSF with enhancing associate degree programs and applied learning opportunities in STEM fields for the purpose of keeping the U.S. workforce competitive in the global economy. Section 5 of the legislation, which covers evaluation and reporting requirements, asks for reporting in the following areas:

  • Assessment of the effectiveness of the grant programs in expanding apprenticeships, internships, and other applied learning opportunities offered by employers in conjunction with junior or community colleges, or institutions of higher education, as applicable.

  • Assessment of the number of students who participated in the grant programs.

  • Assessment of the percentage of students participating in the grant programs who successfully complete their education programs.

The importance of reporting accurate and valid data about student participation and success is evident in this request, as it has been throughout years of discussions between ATE principal investigators, NSF program officers, and EvaluATE (the NSF-funded evaluation resource hub for the ATE program). In response, we began to address barriers to and solutions for reporting through an investigation of existing data supplemented by nine case projects. As a first attempt to shed light on the expectations listed in Public Law 115–402, our multiple case study collected in-depth data, aligned with existing data sources, about student participation and successful completion of education programs.

The Advanced Technological Education (ATE) Program

The NSF Advanced Technological Education (ATE) program is used as an example of a national STEM education program in the United States. The ATE program is focused on strengthening the education of technicians in high-technology fields, particularly within 2-year institutions of higher education. According to the ATE solicitation (NSF, 2021), the ATE program “involves partnerships between academic institutions (grades 7–12, IHEs), industry, and economic development agencies to promote improvement in the education of science and engineering technicians at the undergraduate and secondary institution school levels. The ATE program supports curriculum development; professional development of college faculty and secondary school teachers; career pathways; and other activities” (NSF, 2021). The ATE program aims to enhance the STEM technical workforce through strengthening education programs, supporting faculty development, and engaging with business and industry. Examples of high-technology fields of interest include advanced manufacturing, biotechnology, energy and environmental technologies, engineering, information technologies, and nanotechnologies.

ATE is a fairly large program, making 45 to 80 new awards each year. Approximately $75 million is available annually for new or continuing awards, according to the most recent solicitation (NSF, 2021). New awards make up the majority of each year’s funding, totaling approximately $69 million. Grants are awarded in a wide variety of sizes and durations.

While all ATE efforts are working towards the service of students, not all ATE projects directly interact with students. ATE activities that serve students indirectly include faculty development, creation of educational materials, and curriculum development for use at institutions other than the grant’s host organization. Of the 325 active ATE projects in 2020, approximately 53% engaged in professional development for faculty, 47% developed educational materials, and 35% developed educational courses (Marshall et al., 2020). Other activities directly served students; for example, they developed academic programs, courses, or pathways, or they provided student support services or opportunities for workplace-based learning. Approximately 35% of projects engaged in course development, 33% offered workplace-based learning opportunities, and 28% offered direct support to students obtaining certifications or licensing (Marshall et al., 2020). An individual ATE project can engage in multiple activities, serving students both indirectly and directly.

Overall, ATE is an important program for increasing the number of students who are getting involved in STEM at the two-year college level. In accordance with the program’s emphasis, the majority of ATE projects (77%) are located at two-year colleges (Marshall et al., 2020; see Note 1). As mentioned above, many students do not believe their high school experiences adequately prepared them to succeed in STEM fields (Ejiwale, 2013). Community colleges can play a key role in filling this gap, as they are often the most responsive segment of higher education in meeting the immediate needs of local communities (Lowry & Thomas-Anderson, 2017). Community colleges also play an essential role in providing students with educational opportunities that serve as foundations for their careers (Mullin, 2010).

The ATE Survey

The ATE program provided researchers with a unique opportunity to examine this issue, as it has a system already in place, hereafter referred to as the ATE Survey, for consistently collecting participation data across all projects within the program through an annual online survey of principal investigators. The ATE Survey has been administered by EvaluATE, the evaluation hub for the ATE program, since 2000 as part of a program-wide monitoring of ATE projects. The EvaluATE team is based at The Evaluation Center at Western Michigan University and supports the advancement of high-quality evaluation in the ATE program and in STEM education more broadly through open-access trainings and resources for project staff and evaluators on how to conduct, manage, and use evaluation. Through the ATE Survey, EvaluATE asks principal investigators to report the number of students who participated in various student service activities conducted during the previous calendar year. For example, to accommodate the time required to assemble each year’s data, the 2020 survey asked about student participation in activities conducted in 2019. Student service activity categories include the following:

  • Took at least one course in an academic program developed or modified with ATE funds.

  • Completed a course developed or modified with ATE funds.

  • Participated in workplace-based learning.

  • Received mentoring.

  • Participated in a student competition.

  • Used an instrument acquired with ATE funds.

  • Received business or entrepreneurial skills development.

  • Participated in a bridge or transition program.

The survey does not ask principal investigators to provide unduplicated counts of student participants across student service activities. As a result, the totals reported on the survey are considered to be duplicated counts, because the same students are often counted for several categories. Thus, the survey responses do not provide an accurate or complete picture of how many students are actually participating in a project. Nevertheless, data from this survey served as a stepping-stone in our exploration of the issues and barriers related to collecting and reporting accurate and reliable participation and completion data.
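
To illustrate the distinction between these two kinds of counts, the following minimal sketch (in Python, using invented student identifiers and a subset of the activity categories; it is not part of the ATE Survey itself) shows how summing per-category counts double-counts students who participate in more than one activity, whereas counting unique identifiers yields an unduplicated total:

```python
# Hypothetical illustration of duplicated vs. unduplicated participant counts.
# Student IDs and the selection of categories are invented for this sketch.
participation = {
    "completed_ate_course": {"S01", "S02", "S03"},
    "workplace_based_learning": {"S02", "S03"},
    "received_mentoring": {"S03", "S04"},
}

# Summing the per-category counts (as the ATE Survey totals are tallied)
# counts students S02 and S03 more than once.
duplicated_total = sum(len(students) for students in participation.values())  # 7

# Taking the union of student identifiers across categories removes duplicates.
unduplicated_total = len(set().union(*participation.values()))  # 4

print(duplicated_total, unduplicated_total)
```

Producing an unduplicated total in this way requires student-level identifiers in project records, which, as described later, not all projects maintain.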

Methods

We used a multiple case study method. In the simplest form of this design, the researcher selects two or more cases among similar situations, with the multiple case inquiry focusing on why certain outcomes might have occurred (Yin, 2018). The findings of such a study are portraits that contribute to our understanding of the issues, both individually and collectively. The multiple case study design aligns with our goal of speaking to data collection and barriers at the program level. The purpose of our multiple case study was to produce a descriptive analysis, for a sample of nine projects, of (a) available data on the number of students served by ATE project activities, and (b) student completion of ATE academic programs. We also set out to document barriers to collecting and reporting these data and to describe challenges that would need to be overcome for consistent, program-wide collection of these data.

The case unit of analysis for this study was individual ATE-funded projects. We used a variety of data sources to assess and investigate counts of students served by each project. These included projects’ responses to the 2019 ATE Survey, internal and institutional records (participation and completion data provided by project principal investigators or by another representative of their institution, such as institutional research staff), and interview data (videoconference call interviews with project principal investigators and other project staff).

Research questions

The research team analyzed data from all sources described above to determine the following for each case project:

  1. To what extent did students participate in any ATE project activities in 2019, based on the sample of nine ATE projects?

  2. To what extent did students who began ATE-funded programs obtain marketable credentials (which are indicative of program completion) from those programs, or from other programs in related fields, in 2019, based on the sample of nine ATE projects?

  3. If the ATE program wanted to compile information on student participation and completion program-wide, what barriers would need to be considered or overcome?

Sample selection process

Data from the 2020 ATE Survey were used to select projects to participate in the multiple case study. The 2020 survey had a response rate of 91%, with 294 out of the 325 ATE grantees responding. A count of participating students as reported on the ATE Survey was tallied for each of the 294 projects by adding the counts for each of the eight student service activity categories (identified above). As previously mentioned, the final count for each project is considered to be “duplicated,” because several students are assumed to be counted under more than one category.

ATE projects that met the following inclusion criteria were kept in consideration for sample selection:

  • The ATE project remained active at the time of the data collection.

  • The ATE project directly served students, according to the project’s response on the 2020 ATE Survey (in other words, at least one student participated in at least one ATE activity).

  • The ATE project was located at a two-year institution, the primary institution type served by the ATE program.

  • The ATE project was identified as a “project” or “small project for institutions new to ATE,” the primary ATE award types. Other ATE award types (such as “coordination networks,” “centers,” and “targeted research”) tend to be more removed from student services, serving students indirectly or through intermediaries.

This yielded 64 projects in the study’s sampling frame. Nine of these sites (14%) were selected for the sample. Due to the time-intensive nature of collecting case-level data, it was determined that only nine projects could be included in the study. The final sample selection took into account three selection factors: number of students served, designation as a minority-serving institution, and belonging to a large community-college system. These factors were chosen to ensure that a variety of conditions were included in the study, as we expected results to differ across these factors. For example, variations across projects in access to resources and the capacity of project staff might affect the barriers each project experienced in collecting student data.

The 64 eligible projects were grouped into four categories based on their total duplicated counts of students, as shown in Table 1. The category cutoffs were chosen to ensure that projects of various sizes were included in the study. The research team considered dividing the sample into quartiles (or thirds) based on the total number of students, but that method would have biased the sample toward smaller projects. Two projects were selected from each grouping category. Due to the other selection factors, the ninth project was selected from the “100 to 499 students” category.

Table 1 Grouping categories (based on the total duplicated count of students)

Data collection and analysis

The principal investigators for each of the nine selected case projects were contacted through email and invited to participate in interviews. The email provided an overview of the study. The interview questions were emailed to the principal investigators after they agreed to participate so they could look them over in advance. Formal interviews were scheduled and conducted via a videoconferencing service.

Interview questions were divided into the following categories:

  • Questions about the unduplicated number of students participating in ATE project activities. Interviewees were presented with the student participant counts they reported on the 2020 ATE Survey for each of the student service activity categories. Through a series of questions, the interviewees worked in collaboration with the interviewer to determine the extent to which the sum of their numbers represented an unduplicated count of students impacted by the project.

  • Questions about the percentage of ATE students who obtained a marketable credential. Interviewees were asked if they tracked the number of ATE students who obtained marketable credentials related to the program’s field of study. If they did, they were asked to provide this information. If they did not, they were asked to identify how this information could be acquired. A marketable credential was defined in this study as an industry certificate, associate degree, or other credential that is intended to increase the employability of students who obtain it. We asked about marketable credentials, rather than degrees and certificates alone, to acknowledge the variety of ways students can be served by community colleges and ATE programs. In addition to associate degrees or industry certificates, students may receive micro-credentials or badges that increase their employability, and this study wanted to acknowledge those credentials as indicators of success.

  • Questions about challenges and barriers to collecting student participation and completion data. Interviewees were asked to identify challenges and barriers they have encountered in tracking students impacted in their ATE grant, as well as what they would need to improve the process (resources, funding, assistance from others, institutional changes, etc.). This included identifying barriers to tracking marketable credentials.

A research team member worked closely with each of the principal investigators who initially could not provide unduplicated counts and/or identify the number of ATE participating students who obtained marketable credentials.

Three principal investigators were able to provide unduplicated counts of student participants at the time of the interview, by confirming either that the counts reported in the ATE Survey were already unduplicated, or that some of the category counts were subcounts of other categories which were unduplicated; no further follow-up related to participant counts was conducted with them. The remaining six, however, did not initially know how to acquire unduplicated counts of student participants and needed to look more deeply into their records or contact others at their institutions for help. Determining unduplicated participant counts, therefore, included work that occurred outside the scheduled interviews. This process required ample support from the research team; for example, the team provided the principal investigators with template formats into which the information could be inserted, and several email exchanges and follow-up phone calls between the research team and the principal investigators took place over a 3-month period. The research team ultimately acquired unduplicated counts of students for all nine case projects. Each principal investigator provided unduplicated participant count data in one of the following ways:

  • By confirming that the counts reported in the ATE Survey were already unduplicated, or that some of the category counts were subcounts of other categories which were unduplicated (five cases, including three who confirmed this during the initial interview).

  • By reviewing participant records and removing the duplicated counts of students, then providing an overall unduplicated count of students (two cases).

  • By providing detailed digital records of all activities each ATE student engaged in; researchers used these records to remove the duplicates (two cases; a minimal sketch of this process is shown below).
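
As an illustration of the third approach, the sketch below (assuming pandas is available; the record layout and column names such as “student_id” and “activity” are hypothetical, not the projects’ actual formats) shows how an activity-level log with one row per student per activity can be reduced to an unduplicated participant count:

```python
import pandas as pd

# Hypothetical activity log: one row per student per ATE activity.
# The column names are invented for this sketch.
records = pd.DataFrame({
    "student_id": ["S01", "S02", "S02", "S03", "S03", "S03"],
    "activity": [
        "completed_ate_course",
        "completed_ate_course",
        "workplace_based_learning",
        "completed_ate_course",
        "received_mentoring",
        "student_competition",
    ],
})

# Per-activity counts correspond to the duplicated figures reported on the
# ATE Survey (summing them counts S02 and S03 more than once).
per_activity_counts = records.groupby("activity")["student_id"].nunique()

# Counting distinct student identifiers across all activities yields the
# unduplicated participant count.
unduplicated_count = records["student_id"].nunique()  # 3

print(per_activity_counts)
print("Unduplicated participants:", unduplicated_count)
```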

None of the nine principal investigators had credential data immediately at hand when the initial interview took place. To acquire the information, a follow-up process similar to the one described for unduplicated counts took place. In the end, eight of the nine projects were able to provide data about the number of students who obtained marketable credentials. Each principal investigator provided credential data in one of the following ways:

  • By providing researchers with detailed project records that documented every marketable credential (certificate, degree, etc.) received by every single ATE student (one project).

  • By providing researchers with an overall count of the total number of ATE students who received any marketable credential, with breakdowns of the counts of those who received each credential type (seven projects).

One principal investigator who was unable to provide data for students obtaining credentials stated that only “head counts” (not names) were collected for student participants, and thus there was no way to track participants’ attainment of credentials. Another principal investigator noted that their project did not collect marketable credential data, because the certificates currently offered in their department did not reflect the focus of their ATE project. They eventually obtained the information via a request to their institution’s research office. The delay in obtaining the data was significant, owing to an already tight workload and to office personnel working from home because of COVID-19.

Results and discussion

Unduplicated count of students who participated in ATE activities in 2019

The research team was successful in determining an unduplicated count of all students who participated in any ATE project activity implemented by the nine case projects in 2019. The total unduplicated student count for all case projects combined was 4,060, which was 88% of the total (duplicated) count derived by summing the counts of the eight student service activities reported on the 2020 ATE Survey. Table 2 shows these counts.

Table 2 Unduplicated counts by project

Percentages of duplication varied considerably across the nine projects, suggesting that one should not use the overall percentage as a measure to estimate an unduplicated count for any individual project. Individual projects’ unduplicated counts (Table 2) ranged from 40% of the original count reported in the ATE Survey (indicating the majority of the students were counted more than once on the ATE Survey) to 121% (indicating principal investigators were able to identify additional unique students who were not included in the original ATE Survey count). Two projects had percentages of 100%, indicating that the numbers reported on the survey were an exact count of unique students, and seven projects had percentages under 100%, indicating that some students were counted more than once for the ATE Survey response.
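
Stated as a formula (a restatement of how the Table 2 percentages were computed, not an additional analysis), each project’s figure is the ratio of its unduplicated count to its original ATE Survey total:

\[
\text{Percentage}_p = \frac{U_p}{D_p} \times 100,
\]

where \(U_p\) is the unduplicated count of students obtained for project \(p\) and \(D_p\) is the duplicated total derived by summing the eight student service activity counts that project reported on the 2020 ATE Survey. Values below 100 indicate that some students were counted in more than one category, and values above 100 indicate that additional students were identified beyond the original survey response.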

Before producing unduplicated counts, two projects revised their ATE Survey responses, having identified additional students they had not originally reported. Each provided the researchers with a revised count before beginning the effort to determine an unduplicated count. These principal investigators acknowledged during the interview that their actual counts were “higher than I reported” or that their survey response was “probably not complete.” The fact that two principal investigators were able to identify additional students not originally counted on the ATE Survey suggests that some principal investigators are actually providing estimates on the ATE Survey, rather than complete counts based on their available records. This was confirmed by one of the principal investigators who stated during an interview, “I would rather underestimate the counts than overestimate them, so I tended to undercount.”

Number of students who participated in an ATE project who obtained a marketable credential

The total number of students confirmed to have obtained some sort of marketable credential was 341, which was 14% of the total (unduplicated) number of students who engaged in ATE project activities in 2019 for the eight projects that had access to student credentialing data (Table 3). These percentages should not be interpreted as the proportion of ATE students who will ultimately receive a marketable credential. Many of the ATE students were still in middle or high school, while others were college students who were still working on their degrees and certificates or who planned to receive degrees at other institutions in the future. The final number who receive a marketable credential will not be known for several years to come.

Table 3 Number of ATE students who have so far obtained a marketable credential

Barriers to collecting student participation and completion data

From the start of this study, the research team recognized that existing data from the ATE Survey could not provide an unduplicated count of students who were served by ATE projects or completed academic programs. Therefore, one of the motivations behind this study is to more fully understand the barriers and challenges ATE projects experience when collecting and reporting data on students, particularly unduplicated student counts and counts of credentials obtained. Several barriers were identified through the case interviews. These barriers occurred on multiple levels, including the project, institutional, and funding agency levels, and they often involved factors related to more than one of these levels. Barriers generally fell within three overarching themes: time and personnel, use of a consistent data system, and guidance from the funding agency.

Time and personnel

This was the most commonly identified barrier that occurred at the project level. Principal investigators expressed willingness to more thoroughly document participation and credential completions, but felt they did not have enough time and help from others. One remarked, “It takes so much time.” Another asserted this even more strongly: “I am only one person. I need help so badly. It is hard to find the time to do this.” Although it may have been possible to write additional personnel or financial support into their budgets when first proposing their grants, these principal investigators seemed to indicate that they had been unaware that they would need this extra support and had not anticipated the need to track participation and credential completion data. One remarked, “If feedback from the [proposal reviewers] said that they wanted us to put a number on these transitional students and track them until they have acquired a credential, we would have followed through with that, but we weren’t asked to do that, and [nobody saw] a need to do that.”

Financial constraints within their community colleges were also described as contributing to their lack of time and help, indicating that this barrier was not simply at the project level but was influenced by factors occurring at the institutional level. One principal investigator described the financial environment of many community colleges as being “dire,” adding that “data is not going to be a priority when you are in survival mode.” The discretionary nature of state support for community colleges and vague funding initiatives have often led to inconsistent and unpredictable funding for two-year institutions (Phelan, 2014). The resulting financial instability can reduce the resources and personnel that are available to assist principal investigators with data collection at the project level. Challenges related to the global pandemic only seemed to add to this problem. As one principal investigator stated, “It has all gotten muddled with the pandemic.”

Use of a consistent data system

Another commonly identified barrier was a perceived inability to adequately track students after they leave the community college. Approximately half of first-year American undergraduates attend two-year institutions (Shapiro et al., 2015), often for non-academic reasons: proximity to home, a need for a more flexible schedule, and lower cost (Glynn, 2019). Persistence rates tend to be lower at community colleges than at four-year institutions, because many community college students eventually transfer to other institutions to complete their degrees. Data sharing is essential for tracking students’ progress across these institutions, but principal investigators generally felt that such sharing was not taking place. The reasons appear to involve barriers at both the institutional and project levels.

At the institutional level, a consistent approach or system for tracking students was lacking. Principal investigators reported that the same system for tracking students is not used by every institution. One remarked, “Everyone tracks differently and does it their own way.” Another compared it to a competition: “We don’t have a good way to track credentials. We have a count that is like a contest among all the community colleges about who can get the students with the most credentials.” This is further complicated by the different types of ATE activities and audiences served. For example, tracking the progress of high school students after they complete a bridge program can require a different approach than tracking the progress of college students after they complete a course.

At the project level, there seemed to be a general unawareness among the principal investigators that resources do exist for tracking students across institutions (e.g., the National Center for Education Statistics (NCES)). Principal investigators generally were not making use of such tools or encouraging others to do so. Although data sharing is essential for tracking students’ progress, one principal investigator remarked, “Few people are advocating for [this].” This is also a capacity issue, as principal investigators and their staff seemed to lack the knowledge to make use of such resources even if they knew they existed. This was not universal: one principal investigator was already aware that such resources existed and was using one of them: “We’re using a software product called Handshake that if anybody puts [our college] within their bio, it automatically [generates] some sort of a connection of information to be able to collect data as to where the student is, what they are doing, and how they are doing it years after they leave our institution.” Overall, however, the lack of awareness and capacity to make use of available systems remains a barrier for most projects.

Guidance from the funding agency

Factors at the funding agency level also contributed to the perceived barriers to collecting consistent data across projects. Specifically, principal investigators reported a lack of clarity about the kinds of data NSF expected them to collect. One stated, “NSF should just tell us what information we should collect. That should be required.” Another remarked, “If NSF wants [specific data] consistently over time, that needs to be clearly communicated to the principal investigators and onward to [our project] evaluator so we know what to do, because you cannot always go back and reconstruct what happened before.”

Overall, principal investigators expressed willingness to gather more detailed data if given enhanced guidance about what to collect. One explained, “I’m willing to collect anything. I only need to know what…to collect.” Another commented that he “would have worked harder at that” if he had known what to focus on. There was also one who suggested that some principal investigators are unwilling to collect specific kinds of data unless they are required to: “Nobody is going to [collect tracking data] unless we are asked to do that.”

Other considerations about a project’s impact or success

Through the interviews, two additional themes were identified that did not necessarily fall into the category of “barriers” but that should nonetheless be considered when collecting participation counts and completion data. These themes are broader in nature than the barriers identified above. They highlight the variability that can exist from project to project in what data are most needed to determine impact, and the inadequacy of defining success solely in terms of completion data.

Different data may be needed from different projects to determine impact

Principal investigators did not all agree about what data the funding agency should require projects to collect. One noted that it can be difficult to know what information is needed to determine a project’s full impact. For example, if a project is developing materials for others to use, principal investigators need clarification about whether to count how many students are being impacted at every institution that uses those materials. This principal investigator asked, “If the original scope of the grant was to create curriculum, do I also report [the number of] students who are touched by the curriculum?” Deciding the level of data collection to require can be considerably more complicated than it first appears.

Defining success in terms of completion data is inadequate

While having specific requirements for data collection might provide more focus for the principal investigators, not all principal investigators felt that having these requirements was necessarily a good idea. Rather, principal investigators felt that ATE (and NSF) should more strongly emphasize measures of success beyond participation and credential counts. For example, some students who begin an ATE program acquire a job in their field without completing a degree or credential. Focusing on credential counts fails to acknowledge that there are other ways projects define success. Particularly in the context of smaller projects, counts may have less meaning. One principal investigator stated during a case interview, “Not…everything that can be counted is important. If you are dealing with small programs like [ours], then counts don’t make a lot of sense.”

This is consistent with student success literature in higher education, which indicates that a wide variety of measures are needed to assess and understand student success. For example, Perna and Thomas (2008) developed a conceptual model that defines student success through 10 indicators associated with four student transitions (Table 4). The conceptual model suggests that the 10 student success indicators (along with others not identified by the model) are influenced by multiple levels of context (policy, school, family, and internal), student attitudes, and student behaviors.

Table 4 Student success indicators

Given the complexity of understanding and measuring student success, program-level data collection systems should incorporate diverse indicators of success aligned with desired outcomes and goals. The need for diverse measures of student success is particularly important at the community college level. Open admissions policies at community colleges mean that the student population has a wide variety of educational goals. As a result, intermediate measures (such as course credit completion, percentage of program completion, and gateway course completion) may be more appropriate than completion or credential counts (Goldrick-Rab, 2010). Currently, the American Association of Community Colleges is piloting a voluntary framework of accountability (https://www.aacc.nche.edu/programs/voluntary-framework-accountability/), the first national system of accountability specifically designed for community colleges. This set of metrics takes into account the diversity of outcomes and considerations for student success at community colleges.

Limitations

The counts provided above do not necessarily represent the full impact of these ATE projects. The reach and influence of an ATE project can be extensive, affecting students well beyond its immediate program activities. For example, one of the case study projects focused on refining existing materials for use in wider contexts. This project’s principal investigator described their effort as a “scaling” project: their goal was to refine their curriculum so that it could be used by other institutions. Thus, the full number of students impacted by their curriculum extended well beyond their host institution and could not be reported. The principal investigator asked during the interview, “[How] do I report [the number of] students who are touched by the curriculum, which would be many times what I could determine?”

Also, most of the barriers identified above were based on comments made by a subsample of ATE principal investigators. It is possible that additional barriers that were not identified at these case sites would be encountered in a large-scale study. It is also possible that some barriers may not be as widespread as reported by these case sites.

Conclusions

The ATE Survey provides useful point-in-time data on project-level participation in various activities. It also provides a limited view of the number of students who complete their academic programs in a given year. These measures allow for the tracking of general trends at the program level. This being said, they do not allow for tracking individual student-level outcomes (e.g., credentials, job attainment, transfers to other institutions). The case interviews conducted for this study allowed for more in-depth analysis of data gathered through the ATE Survey and a better understanding of barriers and challenges involved in tracking student-level data in the ATE program.

ATE project principal investigators and staff experience many barriers to collecting these data. The processes can be surprisingly challenging and are influenced by factors at the project, institutional, and grant funding levels. For example, community college faculty often lack the time, help, and resources that are needed to collect student information completely and accurately. Current strains on institutional budgets create further challenges. A lack of principal investigator or project staff experience in systematically collecting data about students is also a barrier, as some simply do not know what information they need to collect and how to collect it. However, comments made during the case interviews indicate that principal investigators are open to collecting data, given sufficient guidance from the grant agency.

Other barriers may prove to be more difficult to overcome, such as the limitations of student activity and credential completion counts as measures of student success at community colleges (the primary institution type served by ATE grants). Many students transfer to other institutions to complete their education, and it can be challenging to track their progress after they move on. In addition, some students attain employment in the field without completing a college degree, indicating that some aspects of student success are not captured by completion and credential data. Additional, intermediate measures of success, such as course credit completion, percentage of program completion, and gateway course completion, should be considered (Goldrick-Rab, 2010). The ATE program also has other areas of focus for which student counts are inappropriate measures, including faculty development, curriculum development, and business and industry engagement.

Recommendations

Although findings from this multiple case study cannot be generalized to the larger ATE project population due to the limited sample size, the findings can be used to identify specific ways grant agencies and principal investigators can address the various barriers to collecting participation and completion data. While many of the recommendations below are based on findings from a subsample of nine ATE projects, the shared perspectives of case project principal investigators provide insights into how the data collection process can be improved for STEM-funding agencies in general.

Grant agencies should be explicit in the data they ask projects to collect

This includes how these indicators are defined and operationalized, and how they should be reported. Standardizing these elements would provide project staff with a clear process and a clear set of expectations. For example, none of the sampled principal investigators were initially able to provide tracking information on students who obtained marketable credentials; many had to search to acquire it. Explicitly defined metrics would help principal investigators provide such data more readily. Principal investigators understand that it is important to document and track student data, but many lack the experience to know exactly how to do it. One remarked during an interview, “Those running these sorts of programs just don’t have experience with how to [acquire] unduplicated counts.” One drawback is that principal investigators may need to budget for relevant staff, such as institutional research personnel and evaluators, to assist with data collection; because budgets are capped, this could reduce the funding available for other aspects of their projects. It should also be remembered that participation and marketable credential data do not always tell the whole story of a project’s impact and success. There should be some flexibility in data requirements, with the degree of flexibility dependent on the focus and goals of an individual project.

Grant agencies may consider investing in grant- or contract-funded initiatives to support project-level data collection and reporting

These initiatives should help principal investigators and their project staff to become more aware of existing data sharing services. At least one case project principal investigator was already aware that such resources existed and was using one called Handshake that allowed them to track students’ further education and work after they leave the community college. Initiatives should also involve targeted trainings and virtual Q-and-A sessions about how to access and use existing resources and services, and how to develop templates or standard formats for data collection. Support people could be assigned as points of contact to assist principal investigators and their staff with completing these templates or standard formats.

Principal investigators should build student tracking into project operations during the planning phase

This should be viewed by ATE project principal investigators as essential. Plans for tracking students should be outlined at the beginning of the project’s development stage. Having these plans clearly articulated and carried out from the start will spare project teams the stress of trying to gather information after the fact (as the principal investigators in this multiple case study had to do). There may simply not be enough time to reconstruct these data once the project is underway.

Principal investigators should engage their project evaluators to assist with tracking students

Project evaluators should be brought into the process as early as possible. Those who feel overwhelmed by a lack of time may be able to alleviate some of their stress through discussions about what responsibilities the evaluator can take on. Evaluators will likely already be collecting participation records for various ATE activities through surveys and other means. Principal investigators should also have discussions about what additional ideas evaluators have for tracking credential data.

Principal investigators should seek out technical support to help meet data collection and evaluation expectations

While principal investigators expressed willingness to collect any information requested, they were not always inclined to do so without being explicitly asked. Principal investigators should draw on resources other than NSF for guidance about how to collect student participation/tracking data and better meet evaluation expectations. In addition to engaging the skills of their evaluator as early as possible (as described above), they should seek out technical staff or others within their institutions who have experience and skill with data tracking and reporting. This may involve requesting assistance from an institutional research office, the registrar’s office, or others.

Availability of data and materials

Deidentified data from case interviews and the ATE Survey that support the findings of this study are available from the corresponding author on reasonable request.

Notes

  1. Other project host organizations include four-year colleges (16%), nonprofit organizations (5%), and other types of organizations (2%) (Marshall et al., 2020).

Abbreviations

ATE: Advanced Technological Education Program

EvaluATE: The evaluation hub for the ATE program

NCES: National Center for Education Statistics

NSF: National Science Foundation

STEM: Science, technology, engineering, and mathematics


Acknowledgements

The authors would like to thank Lori Wingate, Arlen Gullickson, Valerie Marshall, Mike Lesiecki, Connie Della-Piana, and Celeste Carter for their input and suggestions. We would also like to thank all of the case participants, as well as everyone who responded to the ATE Survey. Without their engagement and reporting, this study would not have been possible.

Funding

This study was supported by the National Science Foundation under Grant No. 1600992. Any opinions, findings, conclusions, and/or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Author information


Contributions

RR conducted the case interviews, oversaw the follow-up efforts to acquire participation and marketable credential data, analyzed ATE Survey and case interview data, and drafted this manuscript. CW contributed to the analysis of the case interviews and to manuscript reviews. LB and MZ contributed to the analysis of ATE Survey data and to manuscript reviews. LB and CW conceptualized this study. All authors read and approved the final manuscript.

Authors’ information

All authors work at Western Michigan University and were funded under the EvaluATE grant (No. 1600992) for this study.

Corresponding author

Correspondence to Robert J. Ruhf.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ruhf, R.J., Williams, C.T., Zelinsky, M. et al. Barriers to collecting student participation and completion data for a national STEM education grant program in the United States: a multiple case study. IJ STEM Ed 9, 30 (2022). https://doi.org/10.1186/s40594-022-00348-w

