
ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis

Abstract

Background

With the increasing demand brought on by the onset of the fourth industrial revolution in the era of post-digital education and bio-digital technology, artificial intelligence (AI) has played a pivotal role in supporting human intelligence and contributing to intellectual work within science, technology, engineering, and mathematics (STEM) and in the broader field of higher education. This study therefore examines how writers for mainstream STEM journals and higher education magazines perceive the impact of ChatGPT, a powerful AI chatbot, on STEM research and higher education. ChatGPT can generate realistic texts based on user prompts. However, the platform also poses ethical challenges for academic integrity, authorship, and publication.

Results

Using a comparative media discourse analysis approach, this study analyzes 72 articles from four media outlets: (a) Springer Nature; (b) The Chronicle of Higher Education; (c) Inside Higher Ed; and (d) Times Higher Education. The results show that the writers expressed various concerns and opinions about the potential conflicts and crises caused by ChatGPT in three areas: (a) academic research and publication; (b) teaching and learning; and (c) human resources management.

Conclusions

This study concludes with policy implications and suggestions for future research on ChatGPT and AI ethics in academia, re-illuminating the most pressing policy concerns related to ethical writing in STEM research and higher education and the persistent blind spots around authorship and academic integrity among diverse stakeholders.

Introduction

With the increasing demand brought on by the onset of the fourth industrial revolution (4IR) in the era of post-digital education and bio-digital technology, artificial intelligence (AI) has played a pivotal role in supporting human intelligence (Gan & Bai, 2023; Knox, 2019; MacKenzie et al., 2022; Miller, 2019; Moreno-Guerrero et al., 2022; Peters et al., 2023a; Salas-Pilco & Yang, 2022). Briefly defined, AI is a computing system that supports human intelligence, through which individuals use their cognitive ability and intellectual knowledge to solve problems and tasks (Knox, 2019). AI chatbots thus extend how human intelligence can learn, adapt, synthesize, self-correct, and use data for complex tasks, and they comprehend natural language requests through automatic responses and immediate assistance (Salas-Pilco & Yang, 2022). Indeed, AI has become a prominent vehicle for human intelligence in dealing with big data and in increasing understanding of brain–computer interface (BCI) technology and information and communication technology (ICT) (Miller, 2019). AI has also promoted a knowledge-based economy and human capital within science, technology, engineering, and mathematics (STEM) and in the broader field of higher education. Notably, STEM has long contributed to developing the infrastructures of higher education and think-tank research, facilitating numerous human projects and tasks in diverse professional fields (Hughes et al., 2022; Li, 2014; Li et al., 2020, 2022; Marín-Marín et al., 2021; Peters, 2017; Wu et al., 2022).

Despite the rapid evolution of AI, the recent advent of the Chat Generative Pre-Trained Transformer (henceforth: ChatGPT) has influenced the knowledge ecology system, affecting public administration, medical and healthcare management, business enterprises, social agencies, cultural organizations, and educational institutions at large (Gan & Bai, 2023; Peters et al., 2023a, 2023b; Thorp, 2023). ChatGPT was introduced by OpenAI and officially launched on November 30, 2022. It optimizes language models for dialogue and provides detailed responses to specific questions, challenging incorrect premises and rejecting inappropriate requests (Kim, 2023; Thorp, 2023; see also OpenAI, 2015–2023). According to Dowling and Lucey (2023), ChatGPT's language model was trained "with a blend of reinforcement learning algorithms and human input over 150 billion parameters" (p. 1).

Research problems and gaps

In practice, a growing body of literature has underlined both the positive influences of and the challenges to utilizing AI chatbots. Before the emergence of ChatGPT, scholars highlighted mostly positive aspects of the rapid evolution of AI chatbots. For example, Dimitriadou and Lanitis (2023) argued that AI chatbots could contribute to STEM education as educational technology for designing smart classrooms, in which teachers and learners collectively develop innovative curricula and promote pedagogical content knowledge. Furthermore, Salas-Pilco and Yang (2022) observed that higher education administrators could practically utilize AI chatbots to manage documents. For instance, administrators can easily organize charts on student dropout and retention rates, design extracurricular activities and service-learning programs, and measure various performances among teachers and learners.

Concerning the advent of ChatGPT, a rapidly growing body of evidence (e.g., philosophical and positional papers, literature reviews, and monographs) has argued that ChatGPT will be helpful to numerous professionals, instructors, and students (Gan & Bai, 2023; Rayner, 2023; Shen et al., 2023; Stojanov, 2023; Peters et al., 2023b). For instance, Rayner (2023) stated that diverse stakeholders, such as scientists, researchers, instructors, and students in business economics, mathematics, physics, data science, and information systems, could improve their creative writing, coding skills, and common-sense reasoning, although ethical dilemmas will frequently arise in their academic tasks. In medical and health sciences, radiologists have viewed ChatGPT as a "double-edged sword" (Shen et al., 2023, p. 1). Even though the platform is undoubtedly convenient for documentation work (i.e., automatic summarization, machine translation, and question answering), it may generate incorrect answers. ChatGPT often follows fixed instructions rather than engaging in genuine interaction; thus, if users do not provide sufficiently specific requests, the AI assumes their demands and needs (Shen et al., 2023).

Noticeably, as of May through September 2023, OpenAI has launched ChatGPT-4, which has prompted academics to rethink the rapid evolution of AI chatbots and their language models, despite OpenAI's claims about creativity, safety, and benefits for all of humanity (Barash et al., 2023; Lewandowski et al., 2023; Peters et al., 2023b; Tülübaş et al., 2023; see also OpenAI, 2015–2023). Despite the existing body of literature on AI chatbots in general and the recently growing body of evidence regarding the advent of ChatGPT, far too little empirical research has been conducted in this area. Accordingly, globally renowned academic publishers and their STEM and education journals (e.g., Springer Nature, Science, and Routledge) have called upon scholars to pay more attention to the impact of ChatGPT on research ethics, authorship, and academic integrity in STEM research and higher education development (Kim, 2023; Peters et al., 2023a, 2023b; Thorp, 2023). In this regard, an editorial in Science on January 26, 2023, titled "ChatGPT is fun, but not an author," raised concerns about how ChatGPT will change education; the author further shared:

It certainly can write essays about a range of topics. I gave it both an exam and a final project that I had assigned students in a class I taught on science denial at George Washington University. It did well finding factual answers, but the scholarly writing still has a long way to go…Machines play an important role, but as tools for the people posing the hypotheses, designing the experiments, and making sense of the results (Thorp, 2023, p. 313).

Another editorial, in Maxillofacial Plastic and Reconstructive Surgery on March 8, 2023, argued that even though ChatGPT is an innovative tool for scientists and medical researchers, they will face moral dilemmas regarding authorship, which "is an ethical issue of significant importance in scientific articles, and it has become a critical matter in scientific journals. A recent publication in Nature stated that an AI chatbot cannot be listed as an author of a scientific article since it cannot take responsibility for the article's claims" (Kim, 2023, p. 1). Earlier, an editorial in Educational Philosophy and Theory on January 15, 2021, anticipated a growing public concern about the use of AI and the future of human intelligence in higher education. In the current age of post-digital education and bio-digital technology, human beings can utilize AI pragmatically. However, AI will potentially influence human knowledge and roles. Thus, academics must use their intellectual roles to engage with the public and promote mutual dialogues in a public forum setting (Peters et al., 2023a).

Research thesis, purpose, and questions

The potential conflicts and crises in academic research and publication, teaching and learning, and human resources (HR) management are visible among diverse stakeholders in STEM research and higher education development (i.e., journal editors versus authors, professors versus students, and higher education leaders versus academic faculty and administrative bodies). Hence, specific empirical research is needed to examine these contradictory problems, thereby building a more comprehensive understanding of the impact of AI chatbots on STEM research and the future of human intelligence in higher education (Crompton & Burke, 2023; Kim, 2023; Peters et al., 2023b; Thorp, 2023; Tlili et al., 2023). In this regard, Altheide and Schneider (2013) maintained that media sources are prominent elements of empirical data, as professional writers' experiences, observations, and reflections can demonstrate real-life stories as social phenomena. Each writer encounters sociopolitical and sociocultural problems, so their written and textual data can provide specific frames and discourses associated with particular academic subjects, theories, human behaviors, and social factors, such as STEM ideology and arts-based learning, as well as museology, visualization, and technology use (Bai & Nam, 2020), or cultural politics and the intersectionality of class, gender, race, ethnicity, and national origin (Deeb & Love, 2018; Denzin & Lincoln, 2011).

The purpose of this study is to investigate how writers in mainstream STEM journals and higher education magazines perceive the impact of AI chatbots on STEM research and the future of human intelligence in higher education. Therefore, this study asks the following primary research questions:

  • RQ1: How do writers of mainstream STEM journals, STEM newspapers, and higher education magazines perceive the potential conflict and crisis in academic research and publication?

  • RQ2: How do writers of mainstream STEM journals, STEM newspapers, and higher education magazines perceive the potential conflict and crisis in teaching and learning?

  • RQ3: How do writers of mainstream STEM journals, STEM newspapers, and higher education magazines perceive the potential conflict and crisis in HR management?

To this end, this study adopts the concepts of research ethics, academic publishing, and integrity within the context of the knowledge ecology system and human capital in the age of post-digital education and bio-digital technology at a general level and, in turn, uses conflict theory and crisis management within the context of STEM research and higher education development, more specifically.

Definitions of the key concepts, theoretical relevance, and methodological applicability

Briefly defined, research ethics is closely intertwined with authors' morality, authorship, and integrity in securing intellectual property by avoiding misconduct and respecting ground rules in academic publishing. Given this, academic editors are the primary stakeholders who promote the knowledge ecology system as leaders, judges, advisors, and mediators (Peters et al., 2016). Authorship denotes each academic writer's rights and ownership, an asset for sharing scientific knowledge with members of the intellectual society (Moorehead, 1966). Furthermore, human capital in education refers to the human resources and workforce in a specific field who can devote themselves to promoting a knowledge-based economy as symbolic power (Spring, 2015). Moreover, post-digital education defines the relationship between human intelligence and technology, more specifically understood as "posterior," suggesting "a different stage in the perception and use of technology" (Knox, 2019, p. 359) in the ongoing era of bio-digital technology, which constitutes an intrinsic and significant portion of post-digital ideas (Peters et al., 2023a, p. 3). Conflict theory and crisis management are also intertwined as strategic tools and coping mechanisms for diverse stakeholders with different interests based on their divergent beliefs, norms, and benefits (Giddens & Sutton, 2014; Hong & Hardy, 2022).

Accordingly, grounded in a qualitative approach, this study uses a comparative media discourse analysis (CMDA) to navigate the research questions about how various academic editorials, newspapers, and magazines have framed potential conflicts and crises in academic research and publication, teaching and learning, and HR management at a general level. This study also aims to develop discourses about ethical issues and risk factors that may influence contemporary STEM research and, more specifically, higher education development. In this vein, CMDA can interpret whether there are similar perceptions, different reflections, or potential biases toward specific study subjects (Altheide & Schneider, 2013). Overall, examining diverse stakeholders' empirical voices can build a more comprehensive understanding of the conflicts in the knowledge ecology system and carry practical implications for STEM scholars and higher education policy decision-makers regarding crisis management strategies in the age of post-digital education and bio-digital technology.

Review of the literature and theoretical framework

Research ethics, academic publishing, and integrity in the knowledge ecology system

When considering STEM research and higher education development, ethics and integrity are crucial parts of the knowledge ecology system: academic publishing, teaching, and learning. First, concerning academic publishing, one significant aspect of research ethics involves human subjects: protecting participants' identities and confidentiality (Creswell, 2013; Merriam & Tisdell, 2016). Beyond human subjects, research ethics also encompasses authors' integrity, morality, and conscience in undertaking academic publication projects. Academic journals strictly control authorship and plagiarism, intervening in conflicts of interest among authors and holding them responsible and accountable for their claims (Kim, 2023; Moorehead, 1966; Peters et al., 2016).

According to Peters et al. (2016), editors' philosophy of academic publishing and their journal ecosystem comprises various dimensions, such as "new knowledge ecologies and the global ecosystem of scholarly communications"; "enlightenment continuities"; "universal access and democracy"; and "ownership and rights," among others (p. 1402). Notably, ownership and rights are the material products of each author's intellectual labor; the knowledge displayed in publications is the author's property and asset (Moorehead, 1966). Therefore, academic editors' interventions concerning authors' ownership and rights in academic publishing relate to both the tangible and intangible results of the scientific research society, respecting academics' intellectual endeavors. Publishers provide robust intellectual profits by promoting human knowledge and creative thinking through authors' scientific writing (Moorehead, 1966; Peters et al., 2016).

In addition, one of the foremost values of teaching and learning in higher education is to educate students to develop academic integrity, helping them cultivate ethics and morality in academic writing in a classroom setting. In general, many students' course assignments or thesis projects are not aimed at publication. However, their writing practices are entwined with academic integrity as moral behaviors and practices. Hence, the role of educators is to foster the next generation of educational and societal leaders (Besley et al., 2023; Jandrić et al., 2022; Nam et al., 2023). From the perspective of students, academic integrity is a form of trustworthiness and responsibility that promotes their commitment and the spirit of collegiality. To foster academic integrity, educators encourage their students to undertake collective writing projects, practice peer review and editing, and construct a sense of collective academic identity (Jandrić et al., 2022).

Human capital in the age of post-digital education and bio-digital technology

Human capital refers to human resources, manpower, and the workforce in labor markets (Nam et al., 2019). From sociological and political–economic perspectives, Giddens and Sutton (2014) argued that the division of labor illustrates occupational trajectories in specific labor markets and socio-economic status. Given this, the formation of social structures accompanies bureaucracy, education, consumerism, capitalism, organization, and so forth, in which individuals strive to gain better social positions.

From the viewpoint of post-digital education, the late 1990s through the early 2010s were considered the digital era. The rapid development of the Internet, digital and electronic media, and new technology provided numerous benefits to human society. The digital-based knowledge economy produced human capital in labor markets. It promoted radical STEM and ICT advancement and shifts in the political–economic climate, such as the knowledge-based society, network society, and cyber society (Gan & Bai, 2023). In this regard, concern has grown about digital gaps and cultural leaps for socioeconomically marginalized individuals; teachers, students, and employees may have faced various challenges associated with digital capitalism, and their academic and professional goals could be shaped by their level of digital literacy and technical proficiency with new technology and media (Gan & Bai, 2023).

Furthermore, Peters et al. (2023a) illuminated bio-digital technology at the nexus of human intelligence and AI, showing the relationship between digital technology and the bio-economy. Indeed, STEM education today is closely intertwined with "post-digital knowledge ecologies" and "bio-digital philosophy" (Peters et al., 2023a, p. 1). Thus, bio-digital dialogues entail philosophical and ideological aspects of higher education in the age of 4IR and develop scholarly conversations about morality and ethics in technoscience, the bio-economy, and HR management (Peters et al., 2023a). In addition, Peters and his colleagues undertook a collective Educational Philosophy and Theory (EPAT) writing project with the specific theme "AI and the future of humanity: ChatGPT-4, philosophy and education—Critical responses" (Peters et al., 2023b, p. 1). A total of 15 educational philosophers and theorists shared their perceptions of the use of ChatGPT. They promoted competing discourses about "the dawn of augmented intelligence" in the currently mobilizing age of 4IR and ChatGPT, covering the nature of (a) "mass industrial societies"; (b) "data-driven economies"; (c) "work and learning"; (d) "human cultural evolution"; and (e) "critical reasoning and situated ethics" (Peters et al., 2023b, p. 17). The most overarching argument by Peters et al. (2023b) was that many intellectuals anticipate that ethical concerns will continue to arise as advanced countries and their companies participate in cosmopolitan AI competitions. Hence, the future of humanity may need to negotiate with AI chatbots in the age of post-digital education and bio-digital technology.

Conflict and crisis management in STEM research and higher education development

Conflicts are power struggles among individuals and groups with divergent ideologies that produce segmentation and dissension (Giddens & Sutton, 2014). According to Nam et al. (2018), conflicts can occur in any relationship when people compete for perceived values, interests, and advantages. In power dynamics, human beings often harm or eliminate their counterparts or negotiate and collaborate with the other parties. Thus, Nam et al. (2018) argued that conflicts are often entwined "with decision-making processes in which individuals or groups often face challenges due to unexpected situations" (p. 600).

In addition, crisis management is related to strategic planning and problem-solving across diverse ecological and environmental dimensions of sustainability, entailing social, economic, cultural, political, organizational, institutional, and technological factors (Hong & Hardy, 2022). Crisis management illustrates how leaders identify risk factors and incidents and prepare strategic plans to cope with emergencies. They also resolve conflicts of interest among different parties as social justice advocates, critical mentors, mediators, facilitators, and influencers (Hansen, 2008; Nam, 2020). Peters et al. (2017) viewed these practices as the roles of public intellectuals.

Pertinent to the current study, the recent advent of ChatGPT can produce diverse conflicts of interest and crises in STEM research and higher education development, especially regarding research ethics, authorship, rights, and ownership. Given this context, numerous editorial concerns relate to ethical writing issues in the scientific research society at this early stage of ChatGPT (Kim, 2023; Peters et al., 2023a; Thorp, 2023). Furthermore, ethical writing issues involve both academic faculty's publication work and students' writing practices in classrooms.

In addition, regarding the future of human intelligence in higher education, some have argued that AI chatbots are practical instruments that can provide various stakeholders with enormous amounts of data and information, helping them save time on academic tasks in various disciplines (Moreno-Guerrero et al., 2022). However, the rapid evolution of AI and the recent advent of ChatGPT have been increasing anxiety about human intelligence, leaving numerous academics facing ethical dilemmas in their research activities and teaching practices and struggling to retain their academic career positions. Thus, examining key stakeholders' perceptions of the advent of ChatGPT and its potential conflicts and crises in STEM research and higher education development is necessary.

Given this, the current study focuses on developing scholarly dialogues about how writers of editorials and newspapers in mainstream STEM journals and higher education magazines perceive the advent of ChatGPT and the potential conflict and crisis in academic research and publication. STEM journal editors write editorials on behalf of their board members and academics in their disciplines and call upon scholars to pay close attention to specific issues or events. STEM journalists and higher education columnists are not academics but professional writers affiliated with academic journals and magazines. They provide diverse news articles related to particular issues and events. Their empirical voices through texts can be powerful in developing a critical media discourse.

Methods

A comparative media discourse analysis approach

This study adopted a comparative media discourse analysis (CMDA) approach as the primary methodological lens. Given the nature of the qualitative inquiry, a discourse analysis approach, in general, "is often descriptive, narrating stories about commonly shared ideas and norms" (Bai & Nam, 2020, p. 270). This approach calls upon intellectuals to give more attention to untold or already told stories that are quite paradoxical, encouraging them to participate in scholarly debates in a public forum setting (Fairclough, 1992). Given the context, researchers consider the public forum as a particular social structure, sphere, and space, drawing "empirical" voices to contextualize "reality" (Fairclough, 2003, p. 14). This approach benefits researchers seeking to develop dialogues about particular public concerns, social issues, and cultural events in real-world situations (Denzin & Lincoln, 2011; Fairclough, 1992). Moreover, a media discourse analysis approach interprets the perceptions of members of media societies using "a hermeneutic and textual analysis" to explore the "embodied assumptions" or to draw "human experiences" as "empirical evidence" (Bai & Nam, 2022, p. 887).

In addition, Altheide and Schneider (2013) stated that CMDA is the so-called “Ethnographic Content Analysis” (ECA), through which media analysts use different sampling and data collection strategies to interpret the commonalities and differences among divergent societal group members and attempt to conceptualize certain social phenomena (p. 26). Thus, this approach entails a content analysis of written and textual data sources of evidence, such as newspapers, magazines, and electronic documents from different media outlets (Altheide & Schneider, 2013). Therefore, this study aims to promote scholarly conversations about the impact of AI chatbots on STEM research and the future of human intelligence in higher education through the lenses of different media outlets, including STEM editorials, STEM newspapers, and higher education magazines.

Data collection

All media outlets selected for this study are valid according to qualitative and naturalistic inquiry (Lincoln & Guba, 1985), through which the authors attempted to bolster the trustworthiness and quality of the CMDA. Accordingly, the data collection strategies adopted entail transferability, confirmability, credibility, and dependability (Creswell, 2013; Denzin & Lincoln, 2011; Shenton, 2004). Transferability illustrates how data can be generalizable within qualitative inquiry (Denzin & Lincoln, 2011). As aforementioned, journalists and columnists affiliated with these outlets are professional writers, and their editors must approve their written reports. For confirmability, the authors conducted systematic data collection and reviewed all written and textual sources of evidence (Shenton, 2004). They directly visited the official websites of each outlet and manually used search engines to collect news articles, contextualizing these data sources to develop units of discourse analysis.

Specifically, the authors collected a total of 72 articles from four key outlets: Springer Nature (n = 20); The Chronicle of Higher Education (n = 11); Inside Higher Ed (n = 32); and Times Higher Education (n = 9). ChatGPT was launched on November 30, 2022, and the specific period examined was between December 1, 2022, and February 23, 2023, totaling 85 days. Concerning the rationale for these media outlet selections, the authors chose Springer Nature as their starting point among many publishers, drawing on several of its journals, including Nature, Nature Machine Intelligence, Nature Biotechnology, Nature Biomedical Engineering, and Nature Astronomy. This publisher is not only one of the most influential academic publishers in STEM, providing exceptional news articles, but also a pioneering platform in covering research ethics, authorship, and integrity issues. Notably, as aforementioned, this publisher's editorial boards were among the first to express concerns about the absence of ground rules for authorship, declaring that ChatGPT or any other AI chatbot will not be listed as a co-author because it cannot take responsibility for a paper's claims (see Kim, 2023). In addition, editorials in the Springer Nature journals offer opinions or discuss topical issues on behalf of board members and academics. The primary role of editorials in academic journals is to shed light on problematic public concerns and social issues. They share their viewpoints and opinions, call upon scholars to pay close attention to these concerns, and encourage academics to engage in scholarly conversations (Kim, 2023; Li, 2014). The other outlets represent mainstream higher education magazines, also known as trade publications. Their journalists and columnists are experts in higher education administration, policies, teaching, and research.

The third of the four key factors listed above, credibility, defines "how" data can be triangulated and "what" essential contents can be constructed (Creswell, 2013). The authors collected various news and magazine articles, such as editorials, opinions, daily briefings, guest posts, teaching notes, and essay reviews. In the initial search phase, several keywords were considered, such as "ChatGPT," "artificial intelligence," "AI," and "higher education." This produced 141 news articles, many of which were duplicates or cross-posted across the search terms. Hence, the authors conducted a second search phase, adding more specific key terms to narrow the scope of the contents: "ethics," "academic," "teaching," "research," "professor," "student," "writing," "essay," "paper," "misconduct," "cheating," and "human intelligence," among others. In this manner, the authors selected the 72 most relevant and applicable articles for a CMDA of the advent of ChatGPT.
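For illustration only, this two-phase selection logic can be expressed as a small filtering script. The sketch below is a hypothetical reconstruction rather than the authors' procedure (the study's search and screening were manual), and the article fields (title, body, url) are assumptions:

    import re

    # Hypothetical sketch of the two-phase article selection described above.
    # The study's screening was manual; field names here are assumptions.
    BROAD_TERMS = ["chatgpt", "artificial intelligence", "ai", "higher education"]
    NARROW_TERMS = ["ethics", "academic", "teaching", "research", "professor",
                    "student", "writing", "essay", "paper", "misconduct",
                    "cheating", "human intelligence"]

    def mentions_any(article, terms):
        """Check whether an article's title or body contains any search term."""
        text = (article["title"] + " " + article["body"]).lower()
        # Word-boundary matching so a short term like "ai" does not match "said".
        return any(re.search(r"\b" + re.escape(term) + r"\b", text) for term in terms)

    def select_articles(raw_articles):
        # Phase 1: broad keyword search (141 hits in the study).
        broad_hits = [a for a in raw_articles if mentions_any(a, BROAD_TERMS)]
        # Drop duplicates and cross-posted items, keyed by URL.
        unique = list({a["url"]: a for a in broad_hits}.values())
        # Phase 2: narrow with the more specific key terms (72 articles retained).
        return [a for a in unique if mentions_any(a, NARROW_TERMS)]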

Finally, the fourth key factor, dependability, describes how data collection is traceable, logical, and documented (Denzin & Lincoln, 2011). The authors followed the "Altheide Research Team Protocol for News Reports" (see Altheide & Schneider, 2013, p. 48), adopting several vital components, such as publication titles, author names, volume/issue/page numbers, publication dates, and source information (i.e., article links). Code names and numbers for each article were assigned following these key elements (see Table 1).

Table 1 Information about media representations of ChatGPT
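As a minimal illustration of the protocol components above, each article record can be thought of as a structured entry like the following sketch; the field names are hypothetical stand-ins for the elements listed, not the authors' actual coding instrument:

    from dataclasses import dataclass

    @dataclass
    class NewsReportRecord:
        """One coded article, mirroring the protocol elements named above."""
        code_name: str         # e.g., "Editorial-5", "Nature-7", "IHE-22"
        outlet: str            # e.g., "Inside Higher Ed"
        title: str             # publication title
        authors: str           # author name(s)
        volume_issue_pages: str
        publication_date: str  # e.g., "2023-01-26"
        source_link: str       # article URL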

Data analysis

This CMDA specifically used an inductive content analysis of textual and written data, guided by the primary research questions exploring how different writers of mainstream STEM journals and higher education magazines have framed potential crises in academic research and publication, teaching and learning, and HR management, and how they expressed concerns about ethical issues and risk factors influencing contemporary STEM research and higher education development. The authors adopted and applied Altheide and Schneider's (2013) approach to CMDA, which comprises several stages of textual and written data analysis: protocol data collection, formatting, framing, and discoursing.

Initially, in the protocol data collection and formatting stages, the authors manually reviewed all collected articles multiple times. They identified keywords and crucial points related to the rapid evolution of AI and the emergence of ChatGPT, along with the chosen conceptual maps and writers' positionalities. They openly coded articles to identify four crucial items: (a) publication date; (b) positive influences; (c) challenges; and (d) neutrality. In the subsequent framing stage, the authors carefully reviewed all articles again, focusing on clustering specific types of articles and common or divergent ideas among writers to classify domains. In doing so, the authors considered the potential content and developed media frames, such as general concerns among editorials, positive influences, and potential challenges or crises in higher education at a general level. They found that most articles covered diverse topics rather than a single specific topic. Hence, they noted the media frames' presence, frequency, and illustrative core ideas (Altheide & Schneider, 2013; Deeb & Love, 2018). At the same time, they selected influential terms, phrases, clauses, and sentences in coding schemes (Denzin & Lincoln, 2011; Fairclough, 1992).
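To illustrate how the noted presence and frequency of such coded items could be tallied, consider the sketch below; the coding itself was manual, and the data shape is a hypothetical example rather than the study's records:

    from collections import Counter

    # Hypothetical coded items carrying the open-coding labels named above.
    coded_articles = [
        {"code_name": "Editorial-2", "codes": ["positive influences", "neutrality"]},
        {"code_name": "Nature-4", "codes": ["challenges"]},
    ]

    def frame_frequency(articles):
        """Tally how often each coded item appears across all articles."""
        counts = Counter()
        for article in articles:
            counts.update(article["codes"])
        return counts

    print(frame_frequency(coded_articles))
    # Counter({'positive influences': 1, 'neutrality': 1, 'challenges': 1})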

Accordingly, the authors determined three primary media frames aligned with the research questions: (a) the potential conflict and crisis in academic research and publication; (b) the potential conflict and crisis in teaching and learning; and (c) the potential conflict and crisis in HR management. In this phase, the authors collapsed frames that were too general or too obvious, such as general concerns in academia and the neutral position that views ChatGPT as a pragmatic instrument. However, some articles that covered the neutral position were incorporated into the primary media frames as units of discourse analysis. Hence, the authors elaborated on illustrative core ideas to show media representations of AI and the advent of ChatGPT and their overarching concerns (see Table 2).

Table 2 Information about the categorical media frames and illustrative core ideas

In the next stage, the authors categorized the different media outlets and articles based on the positionalities of each writer, thereby identifying the characteristics of each unit of media discourse content: (a) STEM editorials; (b) STEM newspapers; and (c) higher education magazines. Given the methodological nature of discourse analysis, drawing specific and rich quotations and verbatim excerpts from textual data can enhance the quality of descriptive and narrative analysis in a storytelling manner (Fairclough, 1992, 2003). Hence, the authors carefully reviewed all textual data and selected articles that included thick and rich descriptions of the established media frames. However, they excluded articles lacking depth or mentioning key terms without specific descriptions, implications, or inferences. Accordingly, the authors focused on comparing and contrasting how writers of divergent media outlets viewed the advent of ChatGPT, analyzing similarities, differences, and biases related to the primary media frames (see Table 3). Finally, the authors shed new light on the most overarching concerns identified in the results and interpreted these issues in the discussion section in view of the research purpose, questions, and theoretical lenses.

Table 3 Media frames, units of discourse analysis contents, selected articles, and the summary of the CMDA contents

Overall, the authors acknowledge that each writer and media outlet may hold certain biases toward specific subject–object relationships based on their positionalities and characteristics. For example, while STEM editorials focused more on potential conflict and crisis in academic research and publication, higher education magazines focused more on potential conflict and crisis in teaching and learning, though both types of outlets illustrated potential conflict and crisis in HR management. Nevertheless, it is meaningful to find collective thinking among diverse individuals and groups in the media society (Altheide & Schneider, 2013; Bai & Nam, 2020, 2022).

Results

The potential conflict and crisis in academic research and publication

STEM editorials: research ethics, scientific writing, authorship, and ground rules

Despite growing concerns about unclear ethical boundaries, STEM journal editorials discussed research ethics associated with scientific writing, authorship, and ground rules in publishable scholarship. Although most editorials represented negative aspects of using ChatGPT in academic research and publication work, Editorial-2 took a neutral position, acknowledging that ChatGPT can be used as a practical tool to organize data in scientific research. For example, this editorial underlined, "[G]raph neural networks are a type of machine learning algorithm that use graph structures to encode spatial relationships between objects." In this regard, the algorithms can be utilized to generate "spatial context for various applications, including image segmentation, disease classification, and tissue analysis." Hence, Editorial-2 emphasized that scientists and researchers can use ChatGPT as a means of organizing data and charts, promoting the quality of scientific research (i.e., biomedical engineering, pathology, and radiology).

Nonetheless, other editorials showed different viewpoints. For instance, Editorial-1 raised a critical question regarding “AI ethics” and highlighted diverse potential problems in academic research and publication: “2022 has seen eye-catching development in AI applications. Work is needed to ensure that ethical reflection and responsible publication practices are keeping pace.” The editorial further addressed potential risk factors related to large language models (LLMs) in STEM research:

A persistent problem with many experimental AI tools, such as those based on LLMs, is that they have many limitations that are not sufficiently understood, but that could lead, intentionally or unintentionally, to harmful applications. Those who contribute to AI developments, therefore, need to engage more with ethical processes to ensure responsible publication and release of AI tools. This is urgent and necessary given the reach of AI, with many applications being pervasive in society and posing a substantial risk of potential harm and misuse.

Furthermore, Editorial-3 highlighted potential risk factors related to scientific writing and authorship. Because ChatGPT is built on OpenAI's 175-billion-parameter LLM and its self-trained mode, researchers can easily access an enormous amount of data on the internet. Thus, this editorial expressed concerns about publication ethics:

It has become an essential, largely underappreciated part of science publishing to carry out various quality checks such as whether authors and affiliations actually exist and whether parts of the text have been previously published elsewhere. ChatGPT’s ability to produce large amounts of plausible-sounding content and to rewrite the existing text in different styles, making plagiarism detection near-impossible, may stretch the current system to its limits and undermine trust.

Another concern in scientific writing is that a user’s prompt may generate text from ChatGPT that includes content that the user does not understand, but which the user may be tempted to incorporate into their writing…A downside is that ChatGPT may normalize a new form of writing in which the human user merely curates large swaths of text by rearranging the output from multiple prompts.

Notably, Editorial-5 clarified that ChatGPT threatens transparent science and urged researchers and publishers to recognize some ground rules. AI chatbots entail "[t]he big worry in the research community" in that they can produce unreliable work. This editorial clarified scientific societies' positions on research ethics and publication, noting that "[s]everal preprints and published articles have already credited ChatGPT with formal authorship," and further asserted:

That’s why it is high time researchers and publishers laid down ground rules about using LLMs ethically. Nature, along with all Springer Nature journals, has formulated the following two principles, which have been added to our existing guide to authors….First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility…Second, researchers using LLM tools should document this use in the methods or acknowledgments sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

STEM newspapers: authorship and moral dilemmas in utilizing ChatGPT

Similar to journal editorials, professional writers for STEM journals directly mentioned or implied the potential crisis in academic research and publication. The risk factors vary, including authorship, content development, the absence of standard publication metrics, and moral dilemmas in utilizing ChatGPT. According to Nature-11, before the advent of ChatGPT, STEM researchers could improve their scientific writing through academic editing services; they had to spend an enormous amount of money on hiring copy editors, but they can now use ChatGPT to improve their manuscripts and save time and research funding. This news article provided an illuminating example of how scientists may feel favorably toward AI chatbots:

In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn’t always run smoothly, but the final manuscripts were easier to read—and the fees were modest, at less than US$0.50 per document.

Nevertheless, the other STEM journal newspapers had different opinions; they disapproved of listing ChatGPT as a co-author in academic publications. For instance, Nature-7 represented a clear position that "many scientists disapprove," reflecting growing concerns about preprints or positional papers that list ChatGPT as a co-author after its "formal debut in the scientific literature—racking up at least four authorship credits on published papers and preprints." Similarly, a daily briefing issued by Nature-8 took the same position and critiqued growing concerns related to authorship: "Publishers are starting to ban AI authorship." Furthermore, Nature-6 observed that ChatGPT "can write passable abstracts" and that AI-generated abstracts fool scientists, yet such abstracts are neither appropriate nor acceptable and cannot persuade academic editors.

Higher education magazines: ambiguous authorship as research assistants, editors, or collaborators

Writers of higher education magazines directly mentioned or implied authorship issues involving academic research. Some of the most illuminating examples were related to ambiguous authorship. In other words, the writers raised critical questions about how higher education researchers should view the roles of ChatGPT and its AI chatbots. According to IHE-22, the AI capabilities of ChatGPT have left many academics "both drastic and ecstatic, for the end of essay writing." This quote implies a question: if AI chatbots are doing the writing, who are the authors? IHE-8 also raised crucial points regarding manuscript development and asked readers how researchers should use the chatbots: "What will the writing process look like for them? Will they use models as research assistants? As editors?"

Indeed, ChatGPT provides diverse answers about areas of faculty work. Given this, CHE-5 addressed:

Much academic research reads as if it were prepared by artificial intelligence. It follows strict conventions of form and objectivity and goes unread all too often. Artificial intelligence can teach academics the importance of having a distinctive writing voice—one [that] has been conditioned by the experience of being a human and that a robot would have trouble replicating.

IHE-4 also implied ambiguous authorship but urged academics to rethink viewing "AI simply as an automation tool or as an assistant," further emphasizing: "[W]e might, instead, think of it as a collaborator—as a resource that we can use in research, writing and thinking. I feel today a bit as I did in 1993 when the internet browser was introduced."

The potential conflict and crisis in teaching and learning

STEM editorials: academic integrity issues of students

While most editorials focused on discussing potential issues involving scientific research and publication, Editorial-5 covered potential academic integrity issues of students. This editorial implied that many students might have experienced the use of ChatGPT in their academic assignments:

ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams, and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them. Worryingly for society, it could also make spam, ransomware, and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them.

STEM newspapers: academics’ anxiety versus concerns about students’ essays

The primary debate in this discourse was about academics' anxiety and concerns about students' essays. Nature-2 introduced the use of ChatGPT in a smart classroom setting: "The growth in tools based on [AI] that can generate text in response to a question has transformed how people use smartphones and computers…students can use such software to summarize articles, clean up text and even write code. But some worry that this type of software could lead to scientific misconduct." Furthermore, Nature-4 suggested that ChatGPT may "kill the essay assignment" and noted that "[a]cademics worry about students using artificial intelligence tools to write their homework." Nature-9 also covered AI competitions in a STEM classroom setting. A news article entitled "'Arms race with automation': professors fret about AI-generated course work" portrayed that "[i]nstructors are rethinking student assignments to tackle an anticipated surge in bogus essays…[with] the rapid development and evolution of [AI] chatbots, students can generate seemingly insightful writing with the click of a button." However, along with the rapid evolution of AI and the advent of ChatGPT, Nature-9 offered: "Although some academics blame these tools for the death of the college essay, a poll of Nature readers suggested that the resulting essays are still easy to flag, and it is possible to amend existing policies and assignments to address their use."

Higher education magazines: biases toward students’ academic integrity in writing despite the reality

Along with growing concerns about the impact of AI chatbots on teaching and learning, especially regarding academic writing and essay assignments in a classroom setting, writers of higher education magazines not only acknowledged diverse risk factors but also expressed how instructors should negotiate students' academic integrity issues, indicating somewhat neutral positions. Initially, CHE-1 noted: "AI and the future of undergraduate writing – Teaching experts are concerned, but not for the reasons you think," and further asked: "Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?" CHE-3 also remarked: "Why I'm Not Scared of ChatGPT – The limits of the technology are where real writing begins." Furthermore, as CHE-8 pointed out, "ChatGPT has everyone freaking out about cheating," and continued, "This is not the first time a new technology has kindled worries among faculty, who have long feared that students will take shortcuts instead of doing their own work."

Other news articles implied that instructors can handle students' academic integrity issues in writing by setting clear rules and guidelines. For instance, IHE-15 mentioned, "[T]o help put text generators in the proper perspective, we need to turn toward each other to determine guidelines for the use of such tools." IHE-18 also discussed how "ChatGPT raises questions about what we value in writing instruction," noting that many instructors are "worried about ChatGPT." Given this, the opinion article stated, "Don't be." These quotes implied that at the end of the semester, many students are writing their final papers while instructors are "exhausted from grading," and both parties may consider using ChatGPT when they struggle to finalize their tasks on time. However, IHE-18's texts implied that ChatGPT's current ability to enhance students' essays in a specific, logical manner is no greater than that of resources such as Wikipedia or Google.

Despite ongoing debates regarding how instructors should negotiate students' academic integrity issues, some other news articles demonstrated a clear position that using ChatGPT is unethical and will potentially undermine the value of higher education and its justification for existence. Notably, IHE-27 characterized the advent of ChatGPT metaphorically as "a plague upon education" and stated:

Today we are facing a new sort of plague, one that threatens our minds more than our bodies. ChatGPT, the artificial intelligence chatbot that can write college-level essays, is going viral…A lecturer at an Australian university found that a fifth of her students had already used ChatGPT on their exams. Scores of Stanford University students reportedly used it on their fall 2022 final exams mere weeks after its release. A critical mass, a superspreader event, is clearly forming…While headlines warning about ChatGPT have populated the news cycle daily for more than a month now, most educators have yet to really feel the brunt of this viral sensation directly.

THE-5 stated that "ChatGPT can pass US medical license exams" and that "AI-generated answers showed 'new, non-obvious and clinically valid' insights" in tests usually taken by students after years of study. In a nutshell, while some higher education magazine articles argued that instructors should be steadfast, actively challenge, and negotiate students' academic integrity issues, other articles pointed out that such approaches seem problematic, revealing potential biases toward the promises and pitfalls of ChatGPT and its AI chatbots.

The potential conflict and crisis in HR management

STEM editorials: anxiety about the future of human intelligence in research communities

Even though there are advantages to using ChatGPT in scientific research communities, some editorials also expressed concerns about the future of human intelligence. One of the primary concerns was humans' limited ability, compared with AI algorithms, to analyze research data and make predictions. While AI chatbots forecast results and show better performance, many scientists may feel hollow and uncertain about their research capabilities. For example, in astronomy, Editorial-4 noted "the potential for AI to replace human astronomers" and argued:

While AI algorithms can be very effective at analyzing data and making predictions, they cannot replace the human ability to ask questions, make creative connections and think critically about the data. There is a risk that the reliance on AI could lead to a reduction in human creativity and curiosity in the field of astronomy…It is important to be aware of the pitfalls of AI, including the risk of inaccurate predictions and the potential for it to replace human thinking and creativity. By being mindful of these potential pitfalls, astronomers can make the most of the benefits of AI while also maintaining the unique strengths of human intelligence.

In addition, Editorial-6 predicted the continual growth of AI, which may influence the future of human intelligence. This editorial stated, “In 2022, over $1.37 billion was invested into generative AI companies, and as this software gains more traction in the biomedical space, this amount is likely to increase. There have been predictions that generative AI could result in $1 trillion in value for the healthcare industry by 2040.” Given the context, the future of human intelligence is uncertain without specific knowledge and creative thinking in the scientific research society.

STEM newspapers: the knowledge competition between AI and human intelligence in STEM research

Writers of STEM journal newspapers suggested the potential impact of ChatGPT and its AI chatbots on human intelligence in the scientific research job market. Some news articles, including daily briefings and career features, provided texts and developed debates regarding AI versus human intelligence (HI). For example, Nature-1 portrayed: "Are Chat[GPT] and Alpha[-]code going to replace programmers?" and "OpenAI and DeepMind systems can now produce meaningful lines of code, but software engineers shouldn't switch careers quite yet." Nature-5 noted: "Abstracts written by ChatGPT fool scientists" and "Researchers cannot always differentiate between AI-generated and original abstracts." Nature-10 described: "Science urgently needs a plan for ChatGPT" and "How artificial intelligence tools might remake the scientific enterprise." Nature-12 wrote: "Approaches to capturing the benefits of research on society are improving—but huge challenges remain."

More specifically, Nature-12 suggested the potentially limited role of scientists in the age of AI and raised critical questions: "Every researcher wants their work to matter—and increasing competition for funding is compelling scientists to show their worth. But what is the real value of an experiment, a finding, or a public lecture?" In addition, Nature-1 warned about the potential human intelligence crisis in the scientific research society:

Artificial intelligence (AI) researchers have been impressed by the skills of AlphaCode, an AI system that can often compete with humans at solving simple computer-science problems. Google sister company DeepMind, an AI powerhouse based in London, released the tool in February and has now published its results in Science, showing that AlphaCode beat about half of humans at code competitions.

Higher education magazines: will AI replace human intelligence in higher education or co-exist?

Numerous higher education magazine articles directly mentioned the term "human intelligence" and implied issues involving the future of HR management in higher education. As in the STEM editorials and newspapers, the most dominant debate was related to academic faculty. Nevertheless, some articles provided deeper and richer explanations of the potential risk factors beyond academic faculty positions. Initially, CHE-10 mentioned a general concern about academic faculty: "[I]t's not easy to write like a human when AI or the worn-in grooves of scholarly habits are right there at hand." This means that each faculty member has their own writing style and scholarly habits. They are concerned about "what this technology means for academic integrity, writing instruction, and essay assignments. But in the meantime, ChatGPT offers a clear message about another major area of faculty work: scholarly writing." Given this context, ChatGPT may influence future academic job markets and faculty careers, as academic writing abilities in one's chosen discipline are significant.

Beyond a simple mention of HR management in higher education, some writers have raised critical questions concerning future employment. For instance, IHE-24 illustrated the landscape of academic job markets and demographic shifts:

As a field that has remained relatively unchanged over decades, higher education is overdue for a major makeover to adjust to the changes over the decades in our society. We have lost affordability and relevance to many prospective students and employees. As a result, four million fewer students are attending college than a decade ago. Now, fewer employers are requiring college degrees. The advent of new technical capabilities such as generative artificial intelligence promises to create even greater pressure to replace positions with less expensive and more efficient AI applications. These factors combine with other socioeconomic conditions to create a downward pressure on the budgets of colleges and universities.

Although some articles offered more specific accounts of the future academic faculty job market, others discussed non-faculty positions in higher education, pointing to a certain blindness toward the questions AI may provoke about the future of higher education. For instance, IHE-31 provided an illuminating example of admissions counselors. Specifically, these admissions professionals are significant. Considering their roles, IHE-31 stated that "industry standards such as travel season (visiting high schools and attending college fairs) are viewed as a rite of passage," and these professionals "all have fond (or not-so-fond) memories of scouring the earth to meet prospective students, collecting information, and racking up hotel points." However, their roles may become limited, because parents and students themselves can collect general information about college admissions and develop their own admissions strategies using ChatGPT.

Finally, THE-2 raised a critical question about ChatGPT as a “tool or terminator” and generated a debate regarding the potential human intelligence crisis in higher education: “AI will replace academics unless [their] teaching challenges students.” This article concluded with a reader’s comment:

The emergence of AI and ChatGPT is an inevitable evolution. It's only a threat to educational institutions if they don't evolve with it. This is an opportunity to finally rid ourselves of traditional assessment formats, which disadvantaged many anyway (by punishing those who were slow to develop skills to write academically) and were increasingly open to misconduct, with many lazily relying on Turn It In. Unfortunately, I strongly suspect that institutions will be very slow to react and even to respond on how staff should deal with it.

Discussion

This study explored how writers of mainstream STEM journals and higher education magazines perceived the impact of AI chatbots on STEM research and the future of human intelligence in higher education. Three prominent writer groups, namely STEM editorials, STEM newspapers, and higher education magazines, were chosen to develop the CMDA around the advent of ChatGPT, and the study asked how these stakeholder groups engaged in scholarly dialogues concerning potential conflicts and crises in academic research, publication, teaching, learning, and HR management. These stakeholder groups mentioned ethical issues, moral dilemmas, and risk factors that may influence individuals’ academic integrity and future careers in scientific research communities and higher education. However, contradictory and problematic debates have arisen regarding the promises and pitfalls of ChatGPT and similar AI chatbots.

Among the diverse media outlets, STEM journal editorials anticipated growing public concern about a potential conflict and crisis in academic research and publication. In the current age of post-digital education and bio-digital technology, the editorials took precise positions on the intellectual knowledge that should be protected, focusing more on authorship and integrity issues, while higher education magazines expressed concerns about academic integrity, especially the potential conflict and crisis in teaching and learning relationships. Although both parties acknowledged that the advent of ChatGPT could influence the future of human intelligence in STEM research and higher education development, reforms of appropriate policies and practices have remained limited. More specifically, despite Springer Nature Journals’ urging of ground rules for authorship, specific ground rules for teaching and learning were largely overlooked. Thus, in this CMDA, the writers’ critical voices and empirical thinking can offer implications for research ethics and academic norms, suggesting plausible guidelines that recognize the significance of protecting intellectual assets in the knowledge ecology system.

Within STEM research and higher education development, crisis management entails strategic planning and problem-solving. Academic editors and higher education leaders intervene among conflicting interest groups and address institutional and technological factors that may hinder the effectiveness of scientific research, the knowledge-based economy, and human capital development (Hong & Hardy, 2022; Nam et al., 2019). Hence, academic leaders and policy-decision makers should recognize potential risk factors and incidents, preventing crises before they become emergencies. They can serve their societies as public intellectuals, critical mentors, mediators, facilitators, and influential figures who promote critical conflict resolutions (Hansen, 2008; Nam, 2020; Peters et al., 2022). Accordingly, the most overarching conflicting relationships, potential crises, and risk factors, as well as plausible crisis management strategies and praxis, are presented in this discussion section.

Philosophy of academic publishing and ground rules for authorship

The first research question asked how mainstream STEM journals and higher education magazines perceived the potential conflict and crisis in academic research and publication. In general, the primary conflict is between academic editors and authors. In the contemporary scientific research society, the roles of academic editors are significant: they are key stakeholders and ethical judges in the global academy, and they play pivotal roles in intervening in conflicts of interest among authors. They must encourage investigators to follow ethical guidelines, adopt experimental and innovative approaches, and develop critical thinking and problem-solving skills in academic publishing; these researchers must pursue their curiosities by raising and answering critical questions (Peters et al., 2016). Indeed, publication is a form of academic labor, entailing trustworthiness, candor, integrity, and perseverance. Editors are responsible for protecting intellectual benefits and respecting fairness and justice for all academics (Kim, 2023; Peters et al., 2016). However, the advent of ChatGPT has raised persistent concerns that now pervade academic research and publication.

In the current study, the results showed that despite most writers’ concerns about potential ethical writing issues, especially authorship, ownership, and rights, a growing body of literature on ChatGPT and AI chatbots has neglected these considerations, adding ChatGPT as a co-author or as a developer of research design content. Notably, prominent journal editorials (i.e., Springer Nature and Science) and monographs have expressed concerns about countless position, preprint, and conference papers (i.e., non-empirical and often non-peer-reviewed papers) that test the ability of ChatGPT and share the authors’ perceptions and opinions about its use. They argued that academics should not negotiate with AI chatbots (Kim, 2023; Thorp, 2023).

More recently, Tülübaş et al. (2023) tested human-AI collaboration to generate scientific research projects via ChatGPT-3.5 and ChatGPT-4. The investigators interviewed ChatGPT about emergency remote teaching (ERT) and generated texts regarding the research themes and evaluations. The results showed that ChatGPT-4 responded with synthesized and detailed information about the investigators’ requests. This study discussed bias issues involving content development via ChatGPT but neglected to provide an in-depth discussion of authorship and rights. Meanwhile, Peters et al. (2023b) discussed using ChatGPT-4 and authorship issues: “…it emphasizes the collaborative and distributed nature of knowledge production across various entities, including non-human ones. Similarly, ChatGPT generates responses based on a vast corpus of data, rather than on the authority of a single author or expert, and does not offer a response to the idea of a ‘public intellectual’” (p. 19).
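To make such a procedure concrete, the sketch below shows how an “interview” with ChatGPT can be scripted rather than typed into the web interface. This is a minimal illustration, not Tülübaş et al.’s actual protocol; it assumes the openai Python SDK (v1 or later), an OPENAI_API_KEY environment variable, and the model name and interview questions shown are illustrative only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative interview questions on emergency remote teaching (ERT);
# the real study's prompts are not reproduced here.
questions = [
    "What were the main challenges of emergency remote teaching (ERT)?",
    "How should instructors evaluate learning outcomes under ERT conditions?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model would serve for this sketch
        messages=[{"role": "user", "content": q}],
    )
    # Each generated answer is kept alongside its prompt for later thematic coding.
    print(q, "->", resp.choices[0].message.content)

Saving the prompt-response pairs in this way yields a text corpus that can then be coded like any other qualitative dataset.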

Springer Nature Journals have initiated editorial policies and practices concerning ground rules for authorship: ChatGPT cannot be a co-author because of responsibility and accountability issues, and texts generated by AI chatbots will not be acceptable (Kim, 2023). Nevertheless, numerous open-access journals, especially potentially predatory publishers or journals (i.e., those on Beall’s list) (see Peters et al., 2016), and archive organizations still disregard the ethical and moral standards of academic publishing (Peters et al., 2023b). Therefore, promoting a continuum of social change in academic research and publication is significant.

How to safeguard students’ academic integrity in writing

The second research question asked how mainstream STEM journals and higher education magazines perceived the potential conflict and crisis in teaching and learning. The primary conflict is between instructors and students. As the results showed, ChatGPT and similar AI chatbots are readily available to college students. Instructors are anxious about students’ academic violations, such as cheating and misconduct, and the penalties they entail. Teachers and students may lose faith in each other because of unforeseen conflicts arising from growing concerns about ChatGPT.

In the relevant literature, academic integrity instills in students ethical and moral behaviors, encouraging them to use their own thinking and abilities to perform academic tasks (Besley et al., 2023). Higher education institutions value students’ reliability and accountability as they actively take on academic performance and writing tasks. Educators monitor students’ morale, encourage them to develop academic integrity, and suggest guidelines for developing subjectivities and citizenship behaviors while undertaking essay assignments and collective writing projects (Jandrić et al., 2022; Nam et al., 2023).

Nevertheless, the current study’s findings underlined that students’ misconduct and cheating have long been growing concerns in higher education, and the recent advent of ChatGPT has accelerated these issues. Despite the importance of ethical writing, some scholars maintained that course instructors should allow students to use AI chatbots appropriately. For example, Tlili et al. (2023) conducted an empirical case study using chatbots for educational purposes in a smart classroom setting. The investigators conducted in-depth interviews with three educators and 19 learners about their perceptions as ChatGPT users. They developed content categories as units of analysis: (a) educational transformation; (b) response quality; (c) personality and emotions; (d) usefulness; and (e) ethics. Notably, one of the primary discussions was about embracing the technology rather than banning it; despite growing concerns about ethical dilemmas in teaching and learning, ChatGPT provides educational opportunities for smart learners, which should be negotiated alongside the rapid evolution of AI. However, there are still no clear ethical boundaries, ground rules, or policies in teaching and learning, even though editors of prominent journals have begun to raise critical questions regarding ground rules for authorship.

Jandrić et al. (2022) suggested that educators should encourage students to undertake collective writing projects to safeguard students’ academic integrity in the current age of post-digital education. Students can serve their classmates as peer reviewers and collaborators, building mutual trust, responsibility, accountability, and respect for intellectual property. The students can also learn the nature of the academic “ecosystem of new (and original) ideas (without foundations)” and the “ethical system—trust, integrity, and collegiality” (Jandrić et al., 2022, pp. 20–21). To enhance the ground rules for students’ academic writing, Besley et al. (2023) suggested that course instructors can draft an integrity statement, set clear course policies, and avoid abuses of academic power, thereby boosting mutual teaching and learning relationships.

Rethinking STEM ideology at the nexus between AI and human intelligence in the 4IR era

The third and final research question asked how mainstream STEM journals and higher education magazines perceived the potential conflict and crisis in HR management. The major conflict involved academic faculty members in teaching and research, as they are the stakeholders most exposed to the structural problems that could affect HR management, future academic job markets, and employment patterns. As the results of the current study demonstrated, many members of the scientific research society and college instructors may feel anxious about their future careers and muse on the potential crisis of whether AI may replace human intelligence in STEM research and higher education development. In reality, as THE-2 stated:

If history tells us anything, it's that focusing on policy and punitive measures for academic misconduct was not an adequate solution to essay mills. I hope HEIs [higher education institutions] don't make the same mistake with AI. The difference here is that AI is going to be incredibly useful for studying and for employment, so hopefully that's recognized and quickly.

In addition, numerous columnists and influencers on public media outlets (e.g., The New York Times, Wall Street Journal, and CBS News) and social media platforms (e.g., Twitter, YouTube, and Facebook) have voiced concerns about the implications of AI and the impact of ChatGPT on human labor relations, as well as the anticipated AI competition among companies in the ICT industry (Cao, 2023; Cerullo, 2023; Hao, 2023; Hsu & Thompson, 2023). The age of digital capitalism has been shifting toward post-digital capitalism, in which ICT and STEM companies in “Silicon Valley” have been promoting cosmopolitan market competition (Fast, 2021, p. 1616). ICT and STEM industry leaders (e.g., Elon Musk and Mark Zuckerberg) have begun to expect recruits armed with cutting-edge technology skills, such as AI, robotics, DNA mapping, 3D printing, nanotechnology, and biotechnology (see Cao, 2023; Khine & Areepattamannil, 2019; Peters, 2017). However, these columnists have also warned that numerous white-collar jobs may disappear in the near future due to the rapid evolution of AI (Cao, 2023; Cerullo, 2023; Hao, 2023; Hsu & Thompson, 2023). The current study claims that it is significant to reflect on the nature of STEM ideology and praxis in the age of 4IR and to refine the role of STEM scholars and their contributions to higher education development.

To recall, in recent decades, STEM scholars have been promoting scientific knowledge and learning outcomes with equity for students in the contemporary neoliberal academic and capitalist market economic structures (Gan & Bai, 2023; Hughes et al., 2022). They have conceptualized the smart campus and classroom, which promotes the digitalization of education by combining high-end technologies to support teachers and learners. They have also encouraged teachers and learners to use ICT tools and digital devices like the Internet, AI chatbots, smartphones, and robotics (Cox, 2021; Dimitriadou & Lanitis, 2023).

Moreover, STEM scholars have also conceptualized Technological Pedagogical and Content Knowledge (TPACK) to increase a practical understanding of teaching and learning in higher education (Gan & Bai, 2023; Jiang et al., 2023; Khine & Areepattamannil, 2019; Soler-Costa et al., 2021). For example, Soler-Costa et al. (2021) stated that TPACK was introduced by Mishra and Koehler (2006) to support instructors and learners in understanding content-focused learning strategies. This framework helps teachers and students accomplish their learning goals by cultivating new technology proficiency and creative thinking skills. Likewise, STEM scholars have collectively attempted to construct the knowledge ecology system and foster a knowledge-based economy and human capital in the current age of post-digital education and bio-digital technology (Peters et al., 2023a).

Finally, previous scholars have refined the meaning of post-digital capitalism and the future of human intelligence in STEM education and praxis: human intelligence has the characteristics of conscious human practical activity and emphasizes art as an essential practical feature thereof, and STEM theories and their ideological principles are related to human thinking and spiritual communication with machines (i.e., AI chatbots). These are also associated with the nexus between spiritual and material production, in which AI cannot rule human intelligence (Gan & Bai, 2023). However, other scholars have raised a crucial point regarding intellectuals’ careers in the current era of 4IR: risk factors other than AI chatbots, such as social, economic, or environmental conditions (i.e., the COVID-19 pandemic and shrinking academic job markets), can also influence their occupational opportunities. In this sense, the potential conflict and crisis in HR management cannot yet be generalized, at least in the current early stage of the ChatGPT era. As numerous scholars have represented their positionalities as neutral, the public dialogues about the impact of AI chatbots on STEM research and the future of human intelligence in higher education have remained an ongoing sociocultural and sociopolitical discourse, viewing the promises and pitfalls of ChatGPT as a “double-edged sword” (Shen et al., 2023, p. 1).

Overall, there were similarities and differences regarding the impact of AI chatbots on STEM research and the future of human intelligence in higher education, provoked by the recent advent of ChatGPT. The STEM editorials expressed concerns related to publication ethics. Notably, Kim (2023) presented Springer Nature Journals’ ground rules and clear standpoints on authorship: ChatGPT cannot assume full responsibility or accountability in the way human intelligence can. The current study also claims that intellectual property is of utmost value in academic publishing. By the same token, higher education magazines lamented the limited roles of educational policymakers and thus issued calls for policy reforms toward a new AI policy paradigm in higher education. Reflecting their different characteristics, these academic and professional writers focused on their specific areas, whether research ethics, teaching, learning, or HR management. However, they commonly raised crucial points regarding morality and integrity in academic writing. These are their critical discourses regarding the advent of ChatGPT and the uncertainty about the future of human intelligence in STEM research and higher education development.

Practical implications, limitations, and future research

The current study calls for educational policy reforms and urges more transparent ground rules within STEM research and higher education development. Although Springer Nature Journals proclaimed that AI chatbot-generated articles will not be considered publishable scholarship, there are no clear ethical boundaries among many other publishers and their journals. Furthermore, the writers of the chosen articles lamented the limited roles of educational policymakers and thus issued calls for policy reforms toward a new AI policy paradigm in higher education. Regarding scientific research and publication, Editorial-3 acknowledged that there is no clear policy yet, noting that it should “evolve when the impacts of large language models on scientific publishing to resolve legitimate and unwanted applications of AI-generative tools, and we will be actively involved in discussions about this.” By the same token, Nature-12 stated that science-policy researchers should contribute to improving specific policies and practices to measure “societal impact” and “how to go beyond standard publication metrics”, given that currently “approaches to capturing the benefits of research on society are improving–but huge challenges remain.”

Concerning teaching and learning in higher education, CHE-8 addressed that “many faculty members are debating what ChatGPT might mean for the future of teaching and academic integrity.” However, no clear policies exist, which may produce diverse conflicts between teachers and learners. Furthermore, IHE-2 pointed out that “AI will augment, not replace”, which will cause “a likely series [of] freaking out about ChatGPT” among diverse stakeholders. Notably, IHE-7 addressed ethical college admissions influenced by ChatGPT: many high school students may consider using the chatbot to craft accounts of their life experiences and qualifications when applying to universities.

At this point, this study claims that students could describe imaginary and virtual reflections of their personal identities in their admissions essays. If such students obtain admission to high-profile universities, many higher education institutions may face challenges in recognizing applicants’ actual qualifications and endowments. Nevertheless, there is no clear policy to control this ethical writing issue. The current study further argues that the promotion of AI tools for students may hinder the effectiveness of teaching and learning, which may jeopardize students’ academic integrity and actual learning goals. In this rapid sociocultural and sociopolitical shift toward the ChatGPT era, higher education faces numerous crises without plausible policy reform agendas, and academic integrity issues remain a significant limitation. Hence, future scholars should continue investigating ways to strengthen ground rules for mutual teaching and learning. In addition, even though written and textual data from mainstream STEM journal editorials and higher education magazines are prominent empirical sources of evidence, researchers’ and students’ perceptions of the impact of ChatGPT on academic research and on teaching and learning in higher education will be valuable, as they are also primary stakeholders. Finally, as aforementioned, OpenAI has launched ChatGPT-4, and future versions or similar AI platforms will continually be introduced. Therefore, future scholars should consider both quantitative and qualitative investigations of this topic, expanding existing knowledge in global scholarship.

Conclusion

This study examined the impact of AI chatbots on STEM research and the future of human intelligence in higher education. Accordingly, the current study asked how the writers of mainstream STEM journals and higher education magazines framed potential conflicts and crises in academic research and publication, teaching and learning, and HR management. The results showed commonalities and differences based on the writers’ positionalities and the characteristics of their disciplines or occupational areas. In retrospect, numerous scholars within both STEM research and higher education development have explored the recent advent of ChatGPT and dedicated themselves to deepening the understanding of the knowledge competition between AI and human intelligence. However, empirical explorations of potential conflicts and crises in key risk areas, namely academic research and publication, teaching and learning, and HR management, have so far been limited. Accordingly, the current study presented a CMDA of the expert groups in STEM research and higher education development as an exemplar. The empirical voices of academic editors and professional writers in mainstream STEM journals and higher education magazines can be powerful means to build a more comprehensive understanding of the movement toward post-digital education and bio-digital technology in the age of 4IR.

By addressing the most overarching concerns, this study focused on promoting scholarly dialogues among the chosen writer groups, emphasizing the social importance of research ethics and academic publication, and urging scientists and scholars to pay more attention to the philosophy of academic publishing and its ground rules. The current study also illuminated growing concerns about students’ academic writing and professors’ anxieties about their teaching morale and faith in their classes. These issues are not only instructors’ concerns but also higher education policy-decision makers’ tasks. Hence, this study calls upon more intellectuals to take up their educative, social, and political roles, engage with the public, and share their authentic voices to enhance ground rules in teaching and learning.

Meanwhile, there were a few notable limitations. First, although this study identified various risk factors, incidents, and conflicts of interest among academic editors, instructors, and students, their positionalities concerning the use of AI chatbots in STEM research and their interpretations of the future of human intelligence in higher education remain ambiguous. Thus, this study recommends that future scholars expand the current research topic and continue to investigate the future of human intelligence in the global academy. Furthermore, this study was limited in its ability to draw on the empirical voices of key policy-decision makers (higher education leaders such as presidents, deans, and chairs) and their opinions about potential conflicts and crises in STEM research and higher education development. Therefore, the current study suggests that future scholars investigate the perceptions of key policy-decision makers and their ideas about strategic planning and critical conflict resolution strategies. Overall, this study contributes to the field of STEM education and promotes a continuum of policy reforms in higher education.

Availability of data and materials

All data utilized for this research are publicly available online. The specific data materials are listed in Table 1.

Notes

  1. Indeed, STEM represents “sparking innovation,” “ensuring opportunity for all,” and “strengthening the teaching profession,” namely expanding existing knowledge in global scholarship for larger audiences (Li, 2014, p. 1). The contemporary STEM research society acknowledges a broader context of interdisciplinary studies in higher education, including “STEM + the arts” (STEAM) (Li et al., 2020, p. 8). Today, STEAM broadly includes but is not limited to the humanities, arts, languages, and social sciences that utilize various forms of new technology, including AI, ICT, Internet-based learning, digital learning, 3D printing, DNA mapping, biotechnology, nanotechnology, and so forth (Gan & Bai, 2023; Khine & Areepattamannil, 2019; Peters, 2017). Yet, the current study will consistently use the acronym “STEM” throughout this paper, thereby helping readers “focus attention on and efforts in STEM education” and its research development and scholarship (Li et al., 2022, p. 4).

  2. Empirical research refers to examining the lived experiences of individuals and generalizing these experiences as particular social phenomena. Thus, investigators undertake either quantitative or qualitative investigations to theorize particular social phenomena (see Creswell, 2013; Merriam & Tisdell, 2016).

  3. The authors analyzed the data manually. One recent strategy is to use computer-assisted qualitative data analysis software (CAQDAS) (Merriam & Tisdell, 2016). However, Seale (2008) argued that CAQDAS packages are not significantly influential for “discourse analysis” because discourse analysis requires researchers to use their own logic to contextualize data (p. 242). Notably, in conventional qualitative research, Denzin and Lincoln (2011) maintained that investigators spend an enormous amount of time and effort collecting and analyzing data and reporting their analysis systematically and coherently, so “there is no such thing as value-free inquiry” (Lim et al., 2015, p. 35). Accordingly, the authors relied on their own time and logic to develop the media frames and units of the CMDA; a minimal sketch of what automated tallying would look like, and why it falls short, follows below.
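For illustration only, the sketch below shows the kind of keyword tally that a CAQDAS package automates. The frame labels and keywords are invented for this example, the authors coded their corpus manually, and such counts cannot substitute for the contextual judgment that discourse analysis requires.

from collections import Counter

# Hypothetical frames and keywords, invented for illustration only.
FRAMES = {
    "authorship": ["author", "co-author", "ownership"],
    "integrity": ["cheating", "misconduct", "plagiarism"],
    "employment": ["job", "career", "replace"],
}

def tally_frames(article_text: str) -> Counter:
    """Count how often each frame's keywords appear in one article."""
    text = article_text.lower()
    return Counter(
        {frame: sum(text.count(k) for k in kws) for frame, kws in FRAMES.items()}
    )

print(tally_frames("ChatGPT cannot be a co-author; misconduct concerns grow."))
# Counter({'authorship': 2, 'integrity': 1, 'employment': 0})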


Acknowledgements

We are grateful to Dr. Feng Gan in the School of Art at Southeast University. His scholarly advice and encouragement helped us complete this research successfully. We also appreciate his endless emotional support.

Funding

This research received grants from a general project of the National Fund of Philosophy and Social Science of the People’s Republic of China [Project No. 23BA022] and a major project of the National Fund of Philosophy and Social Science of the People’s Republic of China [Project No. 21ZD11].

Author information

Contributions

BHN conceptualized this study and its research design. BHN and QB collected and analyzed the data together. Both authors contributed equally to the manuscript.

Corresponding author

Correspondence to Qiong Bai.

Ethics declarations

Ethics approval and consent to participate

This research does not involve human subjects. As this research adopted a qualitative media analysis method, ethics approval by the university ethics committee does not apply.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Nam, B.H., Bai, Q. ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis. IJ STEM Ed 10, 66 (2023). https://doi.org/10.1186/s40594-023-00452-5

