
Fostering computational thinking through unplugged activities: A systematic literature review and meta-analysis

Abstract

Unplugged activities have emerged in recent years as a low-cost approach to fostering computational thinking (CT) skills. However, current evidence of the effectiveness of unplugged activities in promoting students’ CT skills has been inconsistent. To understand the potential of unplugged activities for developing CT skills, a systematic review and meta-analysis were conducted. Our review of 49 studies examined the influence of unplugged activities on students’ CT skills in K–12 education between 2006 and 2022. The literature review showed that studies on CT skills were mainly (81.64%) conducted in computer science and STEM education, with board and card games being the most common unplugged activities for fostering CT skills in K–12 education. CT diagnostic tools (36.73%) were the most frequently used assessment tools. A follow-up meta-analysis of 13 studies with 16 effect sizes showed a generally large overall effect size (Hedges’s g = 1.028, 95% CI [0.641, 1.415], p < 0.001) for the use of unplugged activities in promoting students’ CT skills. The analysis of several moderator variables (i.e., grade level, class size, intervention duration, and learning tools) and their possible effects on CT skills indicated that unplugged activities are a promising instructional strategy for enhancing students’ CT skills. Taken together, the results highlight the affordances of unplugged pedagogy for promoting CT skills in K–12 education. Recommendations for policy, practice, and research are provided accordingly.

Introduction

The last decade has witnessed an increased interest in using computer programming and coding to foster students’ learning of computational thinking (CT), one of the most crucial twenty-first century skills. CT is defined as a problem-solving and thinking process composed of computer science ideas and skills that can be applied to better understand the world around us (Wing, 2006). Guided by Wing’s definition and call to action since 2006 (Grover & Pea, 2018), CT is “the thought processes involved in formulating problems and their solutions such that an information-processing agent may efficiently carry out the solutions” (Wing, 2011). The concept has been further redefined as “the thought processes involved in formulating problems such that their solutions may be expressed as computational steps and algorithms” (Aho, 2012).

Although initiatives to include CT in school curricula are relatively recent, the concept itself is not new. Alan Perlis advocated in the 1960s that all college students should understand programming and the “theory of computing” (Guzdial, 2008). The roots of CT in education can be seen in Papert’s work from the 1980s, which focused on how children might use computer programming to hone their thinking skills. CT as an essential skill for all students has received much interest from education systems worldwide. Countries such as the United States, the United Kingdom, China, Finland, Korea, and Japan have adopted initiatives and policies to develop CT skills through compulsory schooling and their national curricula (Israel et al., 2015; Kim et al., 2021; Kong, 2016; Kucuk & Sisman, 2017). Moreover, CT is recognized as the foundation of all science, technology, engineering, and mathematics (STEM) disciplines (Henderson et al., 2007; Li et al., 2020), especially in K–12 education (Grover & Pea, 2013) as scholars believe that STEM provides a natural setting for incorporating CT (Li et al., 2019; Özdinç et al., 2022; Weintrop et al., 2016; Ye et al., 2023). With such a focus on the importance of both STEM and computer science, the idea that computer science should be a part of STEM to promote CT and integrate the strengths of both disciplines is becoming increasingly prevalent (Barr & Stephenson, 2011; Century et al., 2020; Johnson et al., 2013; Sengupta et al., 2013; Shute et al., 2017). According to Weintrop et al. (2016), CT makes scientific and mathematics instruction more compatible with contemporary professional practices in these subjects. The potential benefits of CT have led to its incorporation into the K–12 STEM curriculum (Hurt et al., 2023). Specifically, the US Next Generation Science Standards (NGSS) recognize CT as a key scientific and engineering practice (NGSS Lead States, 2013), and it was also mentioned in the 2012 NRC K–12 science education framework (Grover & Pea, 2013).

Unplugged activities and plugged-in exercises are the two main approaches educators and researchers use to teach CT (Brackmann et al., 2017). Unplugged activities have become popular because of their low cost and their ability to help students understand computer science concepts and grasp CT skills without using a computer, digital devices, or any specific hardware (Bell et al., 2009; Busuttil & Formosa, 2020). Moreover, such activities enhance students’ cognitive capabilities, since children in the preoperational stage rely on their perceptions to solve problems (Sigelman & Rider, 2012); therefore, tangible materials should be used to cultivate CT and problem-solving skills (Chevalier et al., 2020). One possible strategy is to offer unplugged (without devices) CT activities prior to their plugged-in (with devices) counterparts (Looi et al., 2018; Saxena et al., 2020). Board games, cards, stickers, and physical movements are common unplugged activities that have gained recent research interest (Busuttil & Formosa, 2020; Chen & Chi, 2020; Csizmadia et al., 2019; Minamide et al., 2020; Saxena et al., 2020; Threekunprapa & Yasri, 2020).

CT has been examined in several prior literature reviews to better understand its characteristics and outcomes. For example, Shute et al. (2017) reviewed earlier studies on CT from the perspectives of definitions, interventions, models, and assessment. Their work classified CT into six aspects: decomposition, abstraction, algorithm design, debugging, iteration, and generalization. Similarly, Grover and Pea (2018) proposed CT practices including problem decomposition, creating computational artifacts, testing, debugging, iterative refinement (incremental development), and collaboration and creativity (now regarded as cross-cutting twenty-first century skills). Moreno-León et al. (2018) concluded that the focus should shift from general programming to CT. However, these studies are limited, as they only discussed the components of CT, associated outcomes, and tools used to foster CT skills. Based on a synthetic review of the purpose, theoretical basis, scope, type, and research design of the literature on CT, Kalelioglu et al. (2016) proposed a framework for the notion, scope, and elements of CT. Moreover, regarding educational level, extant studies have focused on the K–12 level (Huang & Looi, 2021) and, quite recently, on early childhood education (Bati, 2022). Taken together, despite several literature discussions of CT skills, a large-scale review of studies on unplugged activities is still missing, and there is a lack of comprehensive understanding of unplugged activities in the CT education literature (Kite et al., 2021). More recently, a review by Huang and Looi (2021) critically analyzed how appropriate K–12 “unplugged” pedagogies could support CT development. However, that analysis was limited to the field of computer science and prioritized pedagogical issues. Thus, the current literature overlooks a thorough picture of how unplugged activities have been implemented and how effectively they foster CT skills. It is crucial to evaluate the efficacy of an (unplugged) curriculum that incorporates CT elements (Grover & Pea, 2013), and whether in-class interventions produce the desired results has yet to be answered (Settle et al., 2012; Shute et al., 2017).

To fill these knowledge gaps, this review aims to provide a clear picture and deep understanding of the current state of CT and unplugged activities in education by synthesizing and summarizing previous work, focusing on the landscape, methodology, and design of unplugged activities for fostering CT skills. In addition to the literature review, we conducted a meta-analysis to provide evidence of the effectiveness of unplugged activities on the development of CT skills.

Literature review

Computational thinking

Computational thinking has received tremendous attention from educational researchers in the last decade, and the concept of CT has been understood from different perspectives. For example, CT has been related to problem-solving, the construction of artifacts, situated learning, the use of cognitive tools, and “thinking like a computer scientist” (Wing, 2006). Researchers have highlighted the top five CT skills, namely abstraction, algorithmic thinking, problem-solving, pattern recognition, and design-based thinking (Kalelioglu et al., 2016). It is evident that the definition of CT includes thinking types such as algorithmic and design-based concepts (Kalelioglu et al., 2016). Other researchers developed several definitions for their own research fields. For example, Barr and Stephenson (2011) suggested that in K–12, CT requires problem-solving abilities and specific dispositions, such as confidence and persistence, when tackling specific issues. Furthermore, CT refers to “students using computers to model their ideas and develop programs” (Israel et al., 2015).

In addition to the definitions, various CT frameworks have also been proposed. For example, Brennan and Resnick (2012) stated that CT has three key dimensions: computational concepts, computational practices, and computational perspectives. Kalelioglu et al. (2016) developed a framework for teaching CT skills via a problem-solving process. Weintrop et al. (2016) categorized CT into four major groups: data practices, modeling and simulation practices, computational problem-solving practices, and systems thinking practices. More recently, Tsai et al. (2020) indicated that CT can be understood from either a domain-general or a domain-specific perspective. The domain-general perspective refers to the competencies required for methodically solving problems in daily life and across all learning domains. By contrast, the domain-specific perspective characterizes CT as the abilities necessary to systematically address problems in the subject domain of computer science or computer programming (Tsai et al., 2020).

Unplugged activities in CT education

Since Wing’s (2006) work promoted CT as a transversal competence for every child to learn and use, extensive efforts have been made to operationalize CT in the K–12 context. Unplugged activities are commonly described as “learning computer science without a computer” (Bell et al., 2009) and are implemented with tools such as board games, toys, cards, puzzles, and paper. Unplugged CT education offers several benefits, such as low cost, independence from computers, no demand on teachers’ information and communication technology (ICT) skills, and ease of implementation (Busuttil & Formosa, 2020; Minamide et al., 2020). Some researchers have conducted action research or case studies using or developing new unplugged games for CT training courses. For example, Tsarava et al. (2019b) developed three life-size board games to provide an unplugged, gamified, low-threshold introduction to CT for primary school children. Minamide et al. (2020) described unplugged programming activities using stickers and a scanner, which were popular among children. Torres-Torres et al. (2019) presented two graph paper games—Avatar and Carpet—taken from a series of activities implemented with primary school students to spark girls’ interest in STEM and to strengthen educational equity and empowerment in the STEM area.

Other researchers have leveraged quasi-experimental studies to assess the application of unplugged activities in the development of CT. For instance, Brackmann et al. (2017) carried out a quasi-experiment with unplugged tools from the “Hello Ruby” book and the “Code Master” board game, demonstrating that unplugged activities have a significant positive effect on the development of CT. Furthermore, Tonbuloğlu and Tonbuloğlu (2019) used a nested mixed design to show that unplugged activities with coding worksheets positively affect the improvement of students’ CT skills without significantly changing their problem-solving skills. Delal and Oner (2020) employed a one-group pre-experimental design with pre-test and post-test to examine the effect of the Bebras challenge on fostering CT skills. Del Olmo-Muñoz et al. (2020) experimented with both unplugged and plugged-in activities among children in second grade, finding that unplugged activities with text blocks appear beneficial, as they significantly increased the children’s level of CT skills and learning motivation. Sun et al. (2021a) conducted a quasi-experimental study to compare a learner-centered unplugged activity mode based on games and puzzles with a traditional instructor-directed lecturing mode. The findings showed that students in the learner-centered unplugged activity mode scored higher on programming knowledge, behaviors, and attitudes.

The results of these prior endeavors are promising, as they hold the potential to expand and deepen our understanding of unplugged activities in CT education. However, although different tools (e.g., board games and blocks) to foster CT skills have been implemented in those studies, there is little consensus regarding the effectiveness of these tools for the acquisition of CT skills. Thus, a systematic review is needed to summarize recent empirical studies on unplugged activities and to examine the influence of various factors on promoting CT skills through unplugged activities.

Factors of unplugged activity designs for promoting CT skills

Prior studies have shown that different research designs may influence the promotion of CT skills (Sun et al., 2021c). Specifically, (1) Students’ grade level affects perceived CT skills differently (Hu et al., 2021). (2) The intervention sample size can exert different degrees of influence on the experimental results (Chen et al., 2018). (3) The length of intervention in cultivating students’ CT varies from several minutes to one academic year. (4) Investigating the effectiveness of educational tools is the primary research purpose in many CT studies (Sun et al., 2021b). In the current study, we considered the research design factors involved in unplugged educational activities, such as grade level, class/group size, length of intervention, and unplugged learning tools.

Grade level

Researchers have tried to foster students’ CT skills across different educational levels. Atmatzidou and Demetriadis (2016) found that students of different academic levels varied in specific dimensions of CT skills in plugged-in activities. Although unplugged activities have also been used at various grade levels, from early education to secondary school (Delal & Oner, 2020; Saxena et al., 2020), only a few studies have explored how grade level affects students’ development of CT skills in the context of unplugged activities. Hu et al. (2021) emphasized that the level of education was a significant moderator variable influencing academic achievement in block-based visual programming learning. Fidai et al. (2020) conducted a meta-analysis of “Scratch”-ing CT with Arduino, and the results showed that grade level had no significant moderating effect on students’ CT concepts, practices, and perspective skills. This indicates a lack of systematic understanding of how grade-level variations contribute to the use of unplugged activities to promote CT skills. Therefore, in the present study, we opted for a meta-analysis to understand whether grade level makes a relevant difference in the effect of unplugged activities on students’ CT skills. We consider grade level to be a possible moderator in this study, as we investigate the differences in effectiveness between the studies.

Class size

The class size (or sample size) is an essential factor affecting learning outcomes in both traditional and online learning environments (Breton, 2014; Li & Konstantopoulos, 2016; Parks-Stamm et al., 2017; Shen & Konstantopoulos, 2022). Quasi-experimental studies of unplugged activities and CT skills vary greatly in class size, from as small as 25 students (Busuttil & Formosa, 2020) to as many as 148 students (Miller et al., 2018). According to Sun et al. (2021b), educational games have a different impact on students’ CT skills based on sample sizes, as a sample size from 0 to 50 has the greatest effect. Moreover, Sun et al. (2021c) indicated that when the sample size increased, the impact of programming on students’ CT skills gradually decreased. In summary, the current evidence of the impact of sample size on CT skills is insufficient. Therefore, the current review analyzed quasi-experimental studies’ class/group size to discover whether there were differences in the effects of unplugged activities on CT achievement. This enables us to recommend good practices regarding sample size for instructional designers and researchers.

Length of intervention

The effectiveness of plugged-in activities on CT skills is related to the length of the intervention. Fidai et al. (2020) conducted a meta-analysis on computational thinking with Arduino and Scratch, which pointed out that the length of intervention positively affected CT achievement. However, the length of unplugged interventions differs across studies, with some being as short as one or two hours (Chen & Chi, 2020; Tsarava et al., 2019b) and others extending over 10 weeks (Hsu & Liang, 2021; Tsarava et al., 2017; Twigg et al., 2019), and its effect on CT skills has yet to be assessed. Therefore, in this study, we investigated whether the length of the intervention might explain the differences in effectiveness between the studies.

Unplugged learning tools

In CT education, different programming tools may produce different teaching effects (Sun et al., 2021c). Unplugged activities positively affect students’ confidence in understanding CT concepts (Hermans & Aivaloglou, 2017). Many unplugged learning tools are currently used to help students learn CT, such as board games, blocks, graph paper games, and coding worksheets. However, it is pivotal to examine whether different unplugged tools have different effects on students’ CT skills. According to Grover and Pea (2013), current learning tools for fostering students’ CT skills seem to vary in effectiveness. Thus, unplugged learning tools are considered as a factor to explore which tool is most effective for cultivating CT.

The present study aims to synthesize and examine evidence on promoting students’ CT skills through unplugged activities. We conducted a systematic literature review and a meta-analysis to comprehensively understand the current state of unplugged activities concerning CT skills and evaluate the efficacy of unplugged activities on students’ CT skills. Overall, we formulated five research questions:

RQ1:

What are the landscapes (publication type, publication year, country, and educational settings) of the identified studies?

RQ2:

What are the characteristics of the methodology (i.e., research type, research method) among the identified studies?

RQ3:

How have unplugged activities been designed to support the learning of CT skills?

RQ4:

Do unplugged activities effectively enhance K–12 students’ CT skills?

RQ5:

Do the research design factors (i.e., grade level, class size, length of intervention, and unplugged tools) moderate the effect of unplugged activities on the development of students’ CT skills?

To answer these research questions, a systematic review of unplugged activities in K–12 CT education was conducted to address RQ1, RQ2, and RQ3. Further, a meta-analysis was carried out to examine the average impact of unplugged activities on students’ development of CT skills (RQ4), and moderator analyses were used to explore effective unplugged activity designs (RQ5).

Methods

Data sources and search strategy

The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) paradigm (Moher et al., 2009), a commonly used approach that ensures a transparent and rigorous framework for conducting and reporting systematic reviews (Taherian Kalati & Kim, 2022). Following the guidelines, we searched the literature via electronic databases, i.e., Web of Science, Springer Link, Taylor & Francis Online, Wiley Online Library, ERIC, and ScienceDirect, due to their comprehensive coverage of publications in the fields of education, computer science, and psychology. Since CT research began to flourish in 2006, motivated by Wing’s work (in which she promoted the concept of computational thinking), we searched for research published from January 2006 to May 2022 to synthesize empirical evidence from the broadest range of qualified papers. Search terms were combined in different ways with Boolean operators (AND and OR): (1) unplugged, board games, without computer, story, or game; and (2) computational thinking, CT. For example, ‘unplugged* AND (computational thinking OR CT)’. A pool of 479 records was retrieved from the databases. After deleting duplicate records, 446 articles were considered for further screening.

Inclusion and exclusion criteria

We applied selection criteria to refine the search results. Since the study aimed to explore unplugged tools and activities that foster students’ computational thinking, the inclusion criteria limited the literature scope to the full-text empirical studies published in English between 2006 and 2022. The full inclusion and exclusion criteria are described in Table 1.

Table 1 Inclusion and exclusion criteria for the literature review

To conduct the meta-analysis, we applied the following additional criteria to the collected articles: (1) the study must be an experimental or quasi-experimental study containing at least one control group or a pre-test/post-test design; (2) the study used tests, questionnaires, scales, or tasks to examine the impact of unplugged activities on CT skills; and (3) the study reported sufficient quantitative data to calculate effect sizes, such as means, standard deviations, t/F values, and sample sizes.

Screening process

Two authors of this research applied the inclusion and exclusion criteria to screen the 446 articles by reading the titles, keywords, and abstracts. As a result, 366 records were removed, and 80 articles passed the initial screening. After that, the authors read the full texts to exclude articles that met the exclusion criteria. Ultimately, 49 articles were included in the literature review, and 13 articles (16 effect sizes) were eligible for the meta-analysis. During the screening and selection process, disagreements were discussed and resolved by the authors. Cohen’s kappa was computed to test interrater reliability and reached a coefficient of .93, suggesting “perfect agreement” (Hsu & Field, 2003). The article selection process is shown in Fig. 1. In addition, an overview of the identified studies can be found in the Appendix (Table 10), and a full list of the reviewed articles can be found in Additional file 1.
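To illustrate the interrater reliability check, the following minimal Python sketch (not part of the authors’ actual workflow; the screening decisions shown are hypothetical) computes Cohen’s kappa for two raters making include/exclude decisions.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical decisions."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)   # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for six records
rater_1 = ["include", "exclude", "include", "exclude", "include", "include"]
rater_2 = ["include", "exclude", "include", "exclude", "exclude", "include"]
print(round(cohens_kappa(rater_1, rater_2), 2))
```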

Fig. 1 PRISMA flow diagram of the article screening process

Coding and data analysis

Systematic literature review coding

A coding schema was developed that included publication type, publication year, affiliation of the first author, participants (i.e., grade level, age, number of participants), subject area in which unplugged activities were researched (e.g., computer science, STEM), the design and implementation of unplugged curricula or activities (e.g., tools, experiment duration, CT assessment), and characteristics of methodology (i.e., types of research, research methods). Two authors worked collaboratively to code the studies and analyze the data. The researchers dealt with their disagreements through constant dialogue and multiple analyses during the coding process.

Data analysis process used for meta-analysis

We synthesized the effect size and analyzed the moderator variables using Comprehensive Meta-Analysis software (CMA, version 3.0). The original data from different independent studies, including the number of samples (N), mean, and standard deviation (SD), were input into the CMA software.

Calculating effect sizes for each study

First, we calculated the effect size of each study using Hedges’s g, which corrects for small-sample bias better than Cohen’s d (Borenstein et al., 2010). Effect sizes were interpreted following Cohen’s guidelines, where g below 0.2 indicates a small effect, 0.2–0.8 a medium effect, and above 0.8 a large effect (Cohen, 1992). A 95% confidence interval (CI) for Hedges’s g was also calculated to test for significant differences.

$$d_{i}=\frac{\overline{X}_{1i}-\overline{X}_{2i}}{\sqrt{\dfrac{\left(n_{1}-1\right)S_{1i}^{2}+\left(n_{2}-1\right)S_{2i}^{2}}{n_{1}+n_{2}-2}}},\quad i=1,2,3,\ldots,k$$
$$g_{i}=d_{i}\left(1-\frac{3}{4\left(n_{1}+n_{2}\right)-9}\right)$$

where \(\overline{X}_{1i}\) and \(\overline{X}_{2i}\) are the mean scores, \(n_{1}\) and \(n_{2}\) are the sample sizes, and \(S_{1i}\) and \(S_{2i}\) are the standard deviations of the experimental and control groups, respectively.
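As an illustration of these formulas, the short Python sketch below computes Cohen’s d with a pooled standard deviation and applies the small-sample correction to obtain Hedges’s g; the group statistics used here are hypothetical and not taken from any reviewed study.

```python
import math

def hedges_g(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Hedges's g for an experimental (e) vs. control (c) group comparison."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_e - 1) * sd_e ** 2 + (n_c - 1) * sd_c ** 2) / (n_e + n_c - 2))
    d = (mean_e - mean_c) / sd_pooled
    # Small-sample bias correction: 1 - 3 / (4(n1 + n2) - 9)
    return d * (1 - 3 / (4 * (n_e + n_c) - 9))

# Hypothetical example: experimental M = 7.8, SD = 2.1, n = 32; control M = 6.5, SD = 2.3, n = 30
print(round(hedges_g(7.8, 2.1, 32, 6.5, 2.3, 30), 3))
```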

Testing for heterogeneity and calculating overall effect sizes

The overall effect size was estimated using a fixed-effects or random-effects model, depending on the heterogeneity among the studies. When there is significant heterogeneity, a random-effects model better addresses the differences between the studies’ effect sizes (Wang et al., 2020), whereas a fixed-effects model is appropriate only when there is strong evidence that the primary studies included in the meta-analysis are virtually identical (Schmidt et al., 2009). The fixed-effects model allows inferences about the studies involved in the meta-analysis. By contrast, the random-effects model permits generalizations beyond the included studies to studies that have been or may be carried out (Hu et al., 2021).

Heterogeneity was assessed with the Q statistic and the I² value, which evaluate how much of the variance between studies can be attributed to true between-study variance rather than sampling error (Borenstein et al., 2021). The larger the I² value, the greater the heterogeneity: 0–25% indicates low heterogeneity, 25–75% moderate heterogeneity, and 75–100% substantial heterogeneity (Higgins et al., 2003).
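The following sketch illustrates the inverse-variance logic behind these statistics (a simplified illustration, not a reproduction of CMA’s implementation); the effect sizes and variances are hypothetical.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and I-squared for a set of effect sizes with known variances."""
    weights = [1 / v for v in variances]                        # fixed-effect inverse-variance weights
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    q = sum(w * (g - pooled) ** 2 for w, g in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % of variance beyond sampling error
    return q, i_squared

# Hypothetical Hedges's g values and their variances
q, i2 = heterogeneity([0.45, 1.10, 0.80, 1.60], [0.04, 0.06, 0.05, 0.09])
print(round(q, 2), round(i2, 1))
```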

Examining publication bias

Since publication bias from multiple sources can affect the validity of the research (Borenstein et al., 2021), publication bias was examined before conducting the meta-analysis. A funnel plot was used to examine the validity of the meta-analysis, helping to distinguish publication bias from other sources of asymmetry (Peters et al., 2008). If there were publication bias in the meta-analysis, the funnel plot would be asymmetrical (Egger et al., 1997). We also employed Egger’s test for funnel plot asymmetry and the classic fail-safe N test as complementary procedures for investigating potential publication bias. Rosenthal’s (1979) fail-safe N (i.e., the classic fail-safe N) was used to estimate how many non-significant effect sizes (unpublished data) would be necessary to reduce the overall effect size to a non-significant level. If the fail-safe N is larger than 5n + 10, where n is the number of included studies, the estimated effect of unpublished research is unlikely to alter the effect size of the meta-analysis.
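To make the fail-safe N decision rule concrete, the sketch below implements Rosenthal’s classic fail-safe N from per-study z values and compares it against the 5n + 10 tolerance level; the z values are illustrative only and do not correspond to the included studies.

```python
import math

def classic_fail_safe_n(z_values, z_crit=1.645):
    """Rosenthal's fail-safe N: number of additional null (z = 0) studies needed to
    make the combined Stouffer z drop below the one-tailed critical value."""
    k = len(z_values)
    z_sum = sum(z_values)
    return max(0, math.ceil((z_sum / z_crit) ** 2 - k))

z_values = [2.1, 3.4, 1.9, 4.2, 2.8]          # hypothetical per-study z scores
n_fs = classic_fail_safe_n(z_values)
tolerance = 5 * len(z_values) + 10            # the 5n + 10 rule of thumb
verdict = "robust to publication bias" if n_fs > tolerance else "potentially fragile"
print(n_fs, "vs tolerance", tolerance, "->", verdict)
```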

Moderator analysis

The heterogeneity of the studies also indicated that further moderator analysis was needed to determine which moderators accounted for the variance among the studies (Higgins et al., 2003). Three statistical models are commonly applied to moderator analyses: the fixed-effects model, the random-effects model, and the mixed-effects model. The fixed-effects model reports the results of the included studies rather than generalizing to a larger population, while the random-effects model infers the results to a wider population (Borenstein et al., 2010). The mixed-effects model is an adjusted random-effects model in which effect sizes are combined within subgroups using a random-effects model and compared between subgroups using a fixed-effects model. In this study, we considered grade level, class size, length of intervention, and unplugged tools to explore which variables might moderate the influence of unplugged activities on CT skills, and we report the results based on the mixed-effects model.
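The subgroup contrast can be illustrated with the following simplified sketch (again, not CMA’s exact mixed-effects routine): effects are pooled within each moderator level by inverse-variance weighting, and the between-group statistic Q_b is computed from the pooled subgroup means; all numbers are hypothetical.

```python
def pool(effects, variances):
    """Inverse-variance pooled mean effect and its variance for one subgroup."""
    weights = [1 / v for v in variances]
    mean = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    return mean, 1 / sum(weights)

def q_between(subgroups):
    """Between-group heterogeneity Q_b over a list of (effects, variances) subgroups."""
    pooled = [pool(e, v) for e, v in subgroups]
    weights = [1 / var for _, var in pooled]
    grand = sum(w * m for w, (m, _) in zip(weights, pooled)) / sum(weights)
    return sum(w * (m - grand) ** 2 for w, (m, _) in zip(weights, pooled))

# Hypothetical moderator with two levels (e.g., primary vs. lower secondary school)
qb = q_between([([0.90, 1.00, 0.80], [0.05, 0.06, 0.04]),
                ([1.10, 1.30], [0.07, 0.08])])
print(round(qb, 3))   # compare against a chi-square distribution with (levels - 1) df
```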

Results

RQ1: What are the identified studies’ landscapes (publication type, publication year, country, participants’ profiles, and educational settings)?

Publication types and timelines

Regarding publication types, of the 49 papers reviewed in this research, 28 were journal articles (57.14%), 20 were conference papers (40.82%), and one was a book chapter (2.04%). The distribution of the published research by year is illustrated in Fig. 2. There is a clear upward trend in publishing empirical studies on unplugged activities for developing CT skills, indicating the growing popularity and importance of CT in recent years. The earliest literature on CT was published by Wing (2006), while the earliest empirical study on unplugged tools for fostering CT skills was published in 2008 (e.g., Nishida et al., 2008). Research on unplugged activities and CT skills grew steadily from 2017 to 2022, with publications peaking at 10 articles in 2019 and 2020.

Fig. 2 Distribution of the selected studies by publication year (until May 15, 2022)

Country and region

We used the country affiliation of the first author as an indicator of country and region. Overall, most of the studies were affiliated with European countries (N = 25; 51.02%), followed by Asia (N = 12; 24.49%) and North American countries (N = 10; 20.41%). In Europe, Spain led with six studies (12.25%) and Germany contributed five studies (10.21%). Turkey contributed four studies (8.16%), whereas Croatia, Italy, and the United Kingdom each contributed two articles (N = 2; 4.08%). Studies conducted in Asia were scattered, covering Japan, mainland China, Taiwan, Hong Kong, Singapore, and Thailand. The United States topped North American countries with 10 articles (20.41%). The rest of the studies were spread across Oceania (Australia) and South America (Brazil). More details can be found in Table 2.

Table 2 Distribution of publications by country and region

Educational setting

We analyzed the subjects, level of education, and sample size of the selected studies. As indicated in Table 3, the vast majority of the research was applied to computer science subjects (N = 31; 63.27%). Nine studies (18.37%) were conducted in the field of STEM. Two studies (4.08%) implemented unplugged activities in the social sciences. However, seven studies (14.29%) did not mention the subject. Overall, the results indicate that unplugged activities are primarily used to train CT skills in computer science classes and STEM subjects to cultivate problem-solving abilities.

Table 3 Distribution of publications by subject, level of education, and sample size

Regarding grade level, studies on unplugged activities for computational thinking mainly recruited primary school students (N = 24; 48.98%), while 13 studies (26.53%) involved participants from secondary schools. In addition, five identified articles (10.21%) included participants from multiple grades (e.g., both primary and lower secondary schools). Five studies failed to report the education level of the participants.

In general, the studies involved a wide range of participants and grades. The largest number of participants was recruited in a computer science study with 667 primary school students (Relkin et al., 2021), and the smallest number of participants was reported in a study with 11 participants in kindergarten (Saxena et al., 2020). More than one in four studies (N = 14; 28.57%) featured over 100 participants, with the largest sample size of 667 (Relkin et al., 2021). However, approximately half of the studies (N = 26; 53.06%) recruited fewer than 100 participants, and nine studies (18.37%) did not specify the sample size.

RQ2: What are the characteristics of the methodology among the identified studies?

Distribution of research types

We grouped research types into experimental, case study, and action research based on the classification of Johannesson and Perjons (2014). As illustrated in Table 4, nearly half of the studies (N = 23; 46.94%) implemented an experimental method, 16 featured case studies (32.65%), and 10 studies (20.41%) used action research. Most quasi-experimental studies (N = 15; 30.61%) were designed with experimental and control groups. The rest of the quasi-experimental studies applied a single-group design, employing pre-tests and post-tests to assess students’ learning achievement and CT skills. The case studies introduced unplugged tools or courses, providing examples without an experimental design. Moreover, 10 studies (20.41%) used action or design-based research; only four of these included comparison groups, but without comparative data analysis.

Table 4 Summary of characteristics of methodology in the literature

Distribution of research methods

In terms of research methods, quantitative methods (N = 18; 36.73%) were the most common in the included studies, followed by qualitative and mixed methods (see Table 4). Among the quantitative research designs, 12 studies (24.49%) used a pre-test and post-test design (e.g., del Olmo-Muñoz et al., 2020; Tsarava et al., 2019b), and one study used a post-test-only design (Csizmadia et al., 2019). In the qualitative studies, researchers tended to develop unplugged tools (e.g., Saxena et al., 2020) or teach the unplugged curriculum while gathering data through observations, interviews, videos, and worksheets (e.g., Busuttil & Formosa, 2020; Twigg et al., 2019). For instance, Chen and Chi (2020) used participant observations and interviews to explain students’ first-time learning experiences using board games.

Regarding studies that used mixed methods, most (N = 8; 16.33%) employed a pre-test and post-test design to generate quantitative data, complemented by student or teacher interviews, observations, and videos of CT experiences for qualitative analysis (e.g., Looi et al., 2018; Tsarava et al., 2018). Three studies (6.12%) combined post-tests with qualitative analysis in quasi-experimental designs that compared an unplugged CT activity group with a control group.

RQ3: How have unplugged activities been designed to support the learning of CT skills?

Learning tools

Numerous unplugged tools have been used to develop students’ CT skills (see Table 5). The most popular unplugged tools that teachers used for designing CT learning activities were board and card games (N = 22; 44.90%), followed by paper activities (e.g., Bebras, paper programming) (N = 17; 34.70%). For instance, Delal and Oner (2020) applied Bebras tasks to construct unplugged classroom activities of three difficulty levels, while another study used building construction board games to simultaneously promote students’ knowledge of CT and English (Hsu & Liang, 2021). Unplugged robots and blocks were adopted in four studies (8.16%) to increase students’ engineering interest and attitudes and their acquisition of computational thinking (e.g., Miller et al., 2018; Threekunprapa & Yasri, 2020). For example, Miller et al. (2018) used unplugged robotics to develop an introductory engineering lesson for secondary school students (43% of whom had no prior knowledge of engineering). Two studies (4.08%) used textbooks, such as children’s literature, to introduce computing principles and concepts (e.g., Kirçali & Özdener, 2022; Twigg et al., 2019). Finally, four studies (8.16%) used more than one tool as a research intervention (e.g., Gaio, 2017; Saxena et al., 2020; Storjak et al., 2020).

Table 5 Publications by learning tools and length of intervention

Length of intervention

We sorted the information on intervention length to further examine the learning designs for fostering computational thinking. As indicated in Table 5, the interventions ranged from a single session to more than 10 weeks. Whereas most of the studies lasted 6–10 weeks (N = 18; 36.73%), especially in quasi-experimental research, some studies applied a short intervention (1–5 weeks) design (N = 15; 30.61%) or a long intervention (> 10 weeks) design (N = 6; 12.25%). For example, a short, one-hour activity design was used by Chen and Chi (2020), who focused on primary school students’ first-time experience of playing the board game ‘coding ocean’. The research of Chibas et al. (2018) featured a 6-month longitudinal design, in which they followed how teachers used an unplugged approach to teach preschool children basic programming.

Assessment tools

We classified the various assessment methods and instruments in the reviewed studies. Tools were classified as CT diagnostic tools, CT summative tools, CT perceptions-attitudes tools, and CT formative–iterative tools, referring to the work of Román-González et al. (2019) (see Table 6).

Table 6 Classification of the reviewed studies for assessment tools

CT diagnostic tools were the most frequently used, appearing in 18 (36.73%) studies. For example, Brackmann et al. (2017), Merino-Armero et al. (2022), and Tsarava et al. (2019a) used the Computational Thinking test (CTt), which consists of 28 multiple-choice items formulated from images based on a journey through a maze, while Sun et al. (2022) and four other studies (e.g., del Olmo-Muñoz et al., 2020; Delal & Oner, 2020; Rodriguez et al., 2017; Sun et al., 2021d) used the ‘Bebras Computational Thinking Challenge’ to assess the extent to which students can transfer their CT skills to different types of problems and contexts. Six studies (12.24%) used self-designed instruments to assess the acquisition of CT skills (e.g., Hsu & Liang, 2021; Threekunprapa & Yasri, 2020), learning performance (e.g., Saxena et al., 2020), coded patterns (e.g., Léonard et al., 2019), debugging (e.g., Ahn et al., 2021), and algorithms (e.g., Gresse Von Wangenheim et al., 2019). Others used CT worksheets to measure students’ learning through unplugged activities (e.g., Busuttil & Formosa, 2020; Looi et al., 2018; Storjak et al., 2020).

The second most frequent category was CT perceptions-attitudes tools (N = 15; 30.61%), which aim at assessing the perceptions (e.g., self-efficacy) and attitudes of the subjects, not only about CT but also about related issues. Five studies (10.21%) used the Computational Thinking Scales (CTS) developed by Korkmaz et al. (2017) to investigate the impact of CT on learning competencies, including creativity, algorithmic thinking, critical thinking, problem-solving, and cooperativity. In addition, 10 studies (20.41%) used surveys to examine students’ attitudes (e.g., Leifheit et al., 2018; Miller et al., 2018; Minamide et al., 2020; Sun et al., 2021a; Vlahu-Gjorgievska et al., 2018), satisfaction (e.g., Gresse Von Wangenheim et al., 2019; Storjak et al., 2020), self-efficacy (e.g., Ahn et al., 2021; Threekunprapa & Yasri, 2020), and motivation (e.g., del Olmo-Muñoz et al., 2020) towards unplugged activities or courses.

Furthermore, 11 studies (22.45%) used instruments that fall under the CT summative tools category, including projects, observations, and problem-solving interviews. For example, Sun et al. (2021a) assessed the final programming projects created by learners and observed students’ behaviors through video recordings. Seven studies (14.29%) also used observations to understand students’ levels of engagement in class and their understanding of CT concepts (i.e., Busuttil & Formosa, 2020; Chen & Chi, 2020; Dwyer et al., 2014; Peel et al., 2021; Saxena et al., 2020; Tonbuloğlu & Tonbuloğlu, 2019; Torres-Torres et al., 2020). Moreover, Ahn et al. (2021) and Peel et al. (2019) conducted individual interviews to measure problem-solving skills and to determine the intervention’s effect on programming skills.

Lastly, only two studies (4.08%) were categorized under CT formative–iterative tools. For instance, Léonard et al. (2019) analyzed students’ activities to investigate their learning processes, while Csizmadia et al. (2019) coded their mapping tools to analyze computational thinking and constructionist learning.

RQ4: Do unplugged activities effectively enhance K–12 students’ CT skills?

Analyses of publication bias

To answer RQ4, 13 studies with 16 effect sizes and 1736 participants were examined using the meta-analysis approach. Before conducting the meta-analysis, we examined publication bias using a funnel plot, Egger’s test, and the classic fail-safe N test. The funnel plot, shown in Fig. 3, demonstrates slight asymmetry. Egger’s test and the fail-safe N test were conducted to determine whether this asymmetry was statistically significant. Egger’s regression intercept test (intercept = 2.658, 95% CI − 1.552 to 6.869, p = 0.197 > 0.05) revealed no evidence of publication bias. The classic fail-safe N was 1771, which was greater than the tolerance level of 70 (5*12 + 10), indicating that an additional 1771 missing studies would be required to make the overall effect size statistically insignificant. Therefore, based on these statistics, we conclude that publication bias did not exaggerate the observed effect.

Fig. 3 Funnel plot of standard error by Hedges’s g for unplugged activities on CT skills

Overall effect size and heterogeneity analyses

The results of the overall effect size and heterogeneity are reported in Table 7. For the 16 effect sizes among the studies, the fixed-effects model indicated a mean effect size of 0.751, with a 95% confidence interval of 0.681–0.821. The random-effects model revealed an overall effect size of 1.028, with a 95% confidence interval of 0.641–1.415. The statistics (Q = 348.667, I² = 95.698, p < 0.001) showed that the effect sizes in the meta-analysis were heterogeneous, indicating differences among the effect sizes attributable to sources other than sampling error. Therefore, we adopted the random-effects model for the overall effect size analysis and conducted moderator analyses to explore possible causes.

Table 7 Overall effect size and the heterogeneity test

As shown in Table 7, the random-effects model revealed that unplugged activities had a significant positive effect on students’ CT skills (Hedges’s g = 1.028, 95% CI [0.641, 1.415], p < 0.001). The forest plot of all the included effect sizes in the random-effects model is shown in Fig. 4.

Fig. 4 Forest plot of all included effect sizes in the random-effects model

RQ5: Do the research design factors (i.e., grade level, class size, length of intervention, and unplugged tools) moderate the effect of unplugged activities on the development of students’ CT skills?

To better understand the design factors of unplugged activities centered on developing students’ CT skills, this study explored possible moderator variables that affect their effectiveness. Grade level, class size, length of intervention, and unplugged learning tools may contribute to the heterogeneity of effect size differences. All moderator information for the included studies is provided in Table 8.

Table 8 All moderator information of the included studies

This study reports the results of the moderator variables based on a mixed-effects model. Table 9 shows the distribution of the moderator variables and their effect sizes. According to Cohen (1992), when g is less than 0.2, it indicates a small effect, 0.2–0.8 suggests a medium effect, and larger than 0.8 indicates a large effect.

Table 9 Results of moderator variable analysis of unplugged activities on CT

Grade level

The results suggest that unplugged activities had a large effect on students’ CT skills both in primary school (g = 0.914, p < 0.001) and in lower secondary school (g = 1.117, p < 0.05) (see Table 9). They appeared to work better for lower secondary school students than for their primary school counterparts. However, the moderator analysis demonstrated no statistically significant difference in effect across grade levels (Qb = 0.179, p > 0.05).

Class size

As indicated in Table 9, classes with larger samples (> 100) showed the highest impact on students’ CT skills (g = 1.228, p < 0.01). Unplugged activities with medium (g = 0.408, p < 0.05) and small (g = 1.006, p < 0.001) sample sizes also significantly enhanced students’ CT. Again, the differences in effect across class sizes were not significant (Qb = 5.591, p > 0.05).

Length of intervention

Regarding the length of intervention, the results in Table 9 suggest that medium-length interventions (6–10 weeks; g = 1.033, p < 0.001) and short interventions (1–5 weeks; g = 1.226, p < 0.05) both had a large effect on students’ CT skills. The effect size of long interventions (> 10 weeks; g = 0.576, p < 0.001) was smaller. However, the moderating effect of intervention length on the relationship between unplugged activities and the development of CT skills was not significant (Qb = 3.595, p > 0.05).

Learning tools

As shown in Table 9, paper activity tools such as Bebras (g = 1.090, p < 0.01) had a high impact on students’ CT skills, and board game tools (g = 0.922, p < 0.001) also showed a large effect. Coding worksheets, paper programming, and robots in unplugged activities had the greatest impact (g = 1.163, p < 0.05). The heterogeneity test results (Qb = 0.435, p > 0.05) demonstrated that the learning tool was not a significant moderator of CT skills in unplugged learning.

Discussion

Using a systematic literature review and a meta-analysis approach, this study comprehensively synthesized and examined evidence on the current state of unplugged activities for improving students’ CT skills. Through the lenses of research landscape, methodological characteristics, and the effectiveness of unplugged activities in fostering CT skills, we identified the state of the art and the potential of using unplugged activities to promote students’ CT skills in K–12 CT education.

Overview of studies on unplugged activities in CT education

The current state of published research on unplugged activities to promote the learning of CT skills is promising. Such research has risen substantially in recent years, highlighting its importance for teaching and learning future CT skills (Li et al., 2020). A similar trend was also reported by Hsu et al. (2018). In line with the prediction made by Huang and Looi (2021), we believe this trend will continue as interest in unplugged pedagogy and CT skills grows. Unplugged activities were mostly introduced in computer science and STEM subjects at the primary and lower secondary school levels. This is in line with the recent review by Tang et al. (2020), who reported that the majority of studies have focused on the development of CT in primary and middle schools, and it can be explained in light of the relationship between computer science and CT skills, in which unplugged activities were adopted to promote CT skills. Given that there is no evidence to suggest that primary and middle school are the only essential stages for children to learn CT, it is necessary to expand research on CT assessments suitable for high school (Buitrago Flórez et al., 2017) and college students (Dong et al., 2020). STEM subjects are the most common context in primary and secondary education. Thus, we suggest extending the subject scope to other school subjects (e.g., language) to identify implementation mechanisms and methods and to face the challenges of applying CT to other subjects (Denning, 2017).

The characteristics of the methodology among the identified studies

Assessment is crucial when introducing technology in K–12 classrooms (Grover & Pea, 2013). In the methodology section, we focused only on the types of research and research methods. Not surprisingly, the included studies used different methods to investigate the impact of unplugged activities on CT skills. The identified studies intensively applied quasi-experimental designs, primarily using the pre-test/post-test method to evaluate students’ CT outcomes; this is reasonable, as scholars need to measure how CT skills change in response to unplugged pedagogy. Besides quasi-experimental studies, case studies and action research were applied in many of the studies, mostly with qualitative methods (Peel et al., 2021; Tsortanidou et al., 2022). These studies tended to collect data via course observations, students’ notes, and semi-structured interviews. However, exploring how students develop CT skills during unplugged activities using multiple data sources also matters.

Regarding the types of CT measurements and instruments, quantitative approaches dominate the assessment of students’ CT skills. Recent reviews also support this finding (e.g., Kalelioglu et al., 2016; Zhang & Nouri, 2019). Zhang and Nouri (2019) found that observation and pre-test/post-test designs were the favored analytic instruments and methods in studies on Scratch use in CT education. Some studies employed more than one assessment instrument to collect multiple pieces of evidence (e.g., Hsu & Liang, 2021; Sun et al., 2021a; Threekunprapa & Yasri, 2020; Tonbuloğlu & Tonbuloğlu, 2019), given that triangulated evaluation of students’ CT learning can be achieved through the combination of different assessment tools (Tang et al., 2020). Qualitative approaches, such as interviews, have rarely been used in the current CT literature. This issue has also been reported by Tang et al. (2020). To investigate students’ in-depth thinking processes in unplugged activities, future studies could combine surveys with other methods, such as interviews or think-alouds, to collect additional data.

Unplugged activities designed to foster CT skills

Board games and card games were the most frequently applied learning tools, as reported in nearly half of the identified studies. This is probably because board and card games feature high interactivity and high-level thinking (Gresse Von Wangenheim et al., 2019; Kuo & Hsu, 2020), and correspond to structural programming, including sequential structure, conditional structure, repetitive structure, and the modeling concept of calling a procedure in programming languages (Kuo & Hsu, 2020). Further, the use of board games in education has a long history, and scholars have found it to be associated with significant teaching results and improved learner motivation (e.g., Berland & Lee, 2011; Hinebaugh, 2009).

We attempted to identify the study duration in the literature. However, the results revealed large variations: some studies used intervention designs of more than 10 weeks, while others lasted only several minutes. Some scholars argue that an overly short intervention period may introduce problems, such as the “memory effect” of using an identical set of items at both administrations, and therefore suggest increasing the sample size and avoiding very short intervention periods (e.g., several hours) to prevent undesirable effects on the results (Brackmann et al., 2017).

Regarding assessment tools, CT diagnostic instruments were the most used, while formative-iterative techniques were the least used. CT diagnostic instruments were often used to gauge subjects’ CT levels. Such tools work best when delivered in a pure pre-test condition (e.g., with participants who have no prior programming experience) or occasionally in a post-test condition (e.g., following an educational intervention) to determine how much CT skill was gained. Moreover, formative-iterative tools are implemented to evaluate students’ learning outcomes (often programming projects) rather than to assess the students themselves. As a result, these tools are mostly employed throughout the learning process and are specially built for a specific programming environment. Using only one of the aforementioned CT assessment instruments may create an incomplete portrait of pupils’ CT. Such an inadequate and biased evaluation approach might lead us to misunderstand children’s CT development and make poor instructional judgments. As a result, scholars have emphasized using different (or combined) assessment methods (Brennan & Resnick, 2012; Grover et al., 2015).

The effectiveness of unplugged activity on CT skills

According to the meta-analysis of the 16 independent effect sizes, we found that unplugged activities can play a positive role in cultivating students’ CT. As Kazimoglu et al. (2012) noted, being unplugged can enable students to connect and develop their CT skills with little or no programming knowledge. Based on the results of the moderator variable analysis (grade level, class size, intervention time, and learning tools), we discuss the effectiveness of unplugged activities on CT education below.

First, regarding moderator factors such as grade level and class size, unplugged activities had significant positive effects on students’ CT skills within each subgroup. Unplugged activities seemed more effective for lower secondary school students than for primary school students, and they worked better in larger classes (> 100), although these differences were not significant. This finding is contrary to the conclusions of del Olmo-Muñoz et al. (2020) and Li et al. (2022), who presented the feasibility of developing CT in students through unplugged activities at a younger age. Although grade level had no significant moderating influence, we posit two main reasons for this difference in effect size. On the one hand, secondary school students’ cognitive and operational skills have gradually matured, so they can easily understand the rules of an activity and master CT in a short time (Li et al., 2022). In addition, unplugged activity designs may lack richness and interactivity for lower secondary school students, so long-term interventions might reduce their enthusiasm (Sun et al., 2022). On the other hand, younger students do not fully understand a game’s rules and may easily be distracted by other things during the learning process (Sun et al., 2021b).

Second, regarding the length of the intervention, unplugged activities appear to develop students’ CT better over a short time. We found a trend toward a negative moderating effect of the length of learning: as time extended, the influence of unplugged activities on CT gradually declined. This indicates that an intervention time of more than one month might weaken the effectiveness of unplugged activities. This result is consistent with Chauhan’s (2017) finding that interventions for primary school students to learn technology for more than 6 months were not as effective as short-term interventions. Based on this finding, we assume that the duration effect may be related to the difficulty of the learning task. If the task is easy to complete in a short time, there is no need to extend the learning time, as the effect may decrease with time. Thus, we suggest that educators consider implementing effective approaches to maintain students’ learning motivation and cognition over a longer time to promote learning performance, for example, by increasing task difficulty as the duration extends and by integrating unplugged and plugged-in activities (Sun et al., 2022). We also encourage studies that identify the relationship between the difficulty level of the learning task and the impact of duration on different types of thinking skills.

Third, in terms of learning tools, the current studies used various unplugged tools, including board and card games, paper activities, unplugged robotics and blocks, and textbooks. Among the different tools, board games and paper activities showed the most prominent positive effects on students’ CT skills. Educators found that board games can substantially improve students’ CT skills and develop English proficiency in the target vocabulary and sentences (Hsu & Liang, 2021), as well as spatial abilities, reasoning, and problem-solving (Tsarava et al., 2019a). Gaming in unplugged activities, such as robotics and blocks, can give students an “intuitive” sense of accomplishment, which can better promote students’ interest and motivation, thereby developing CT skills (Sun et al., 2021b). Thus, educators are advised to conduct gamified activities appropriately, according to students’ characteristics and teaching conditions, to maximize the impact of unplugged pedagogy on students’ CT skills.

Implications

Unplugged approaches succeed in making CT education accessible to K–12 students in outreach settings. Such unplugged approaches have the potential to achieve the same outcomes in formal education (Huang & Looi, 2021). Along these lines, the current review has several important policy, research, and practice implications.

Policy

The last decade has seen heated discussion of CT as a crucial twenty-first-century competency. In this review, we highlighted the growing popularity and importance of CT in recent years and provided evidence of the potential of unplugged activities to promote students’ CT skills. It is therefore necessary for policymakers to promote unplugged pedagogy in CT education. Countries worldwide have started modifying their national curricula to introduce CT skills (Balanskat & Engelhardt, 2015; Brackmann et al., 2016), although less emphasis has been placed on unplugged activities. Scholars have expressed concern about over-reliance on plugged-in approaches to foster CT skills, stating that code-centric operationalization can deprive students of computationally rich learning experiences and limit the kinds of representation available to them (Basu et al., 2016; Kite et al., 2021). Unplugged pedagogy could be a strong alternative given its cost-effectiveness, independence from computers, minimal demands on teachers’ ICT skills, and ease of implementation (Busuttil & Formosa, 2020; Minamide et al., 2020); it can provide rich learning experiences using tools available in almost every classroom. Therefore, policymakers, educational agencies, and other stakeholders should hold national-level discussions to facilitate the integration of unplugged pedagogy into K–12 education and work together to promote policies and guidelines for integrating unplugged pedagogy into the school curriculum. Furthermore, ongoing exchanges with policymakers from other nations or regions, particularly those more advanced in the field, can produce feasible proposals and practical experience for implementing unplugged pedagogy effectively in the school curriculum.

In addition to supportive policies and a consolidated understanding, educational agencies should also aim to develop appropriate curricula and standards for unplugged pedagogy. Despite multiple policies pushing the development of international standards (e.g., CSTA, 2017; ISTE, 2018; K–12 Computer Science Framework, 2016), existing standards and curricula mostly fail to provide instructions for using unplugged pedagogy. For instance, the current International Society for Technology in Education (ISTE) standards offer no explicit descriptions of unplugged activities (ISTE, 2018). Given the potential of unplugged activities to promote inclusive education by enabling students of all abilities to participate in a course fundamental to future working life, regional and national educational agencies, governments, and other groups with policy influence should work together toward more explicit standards and curricula for unplugged pedagogy for CT skills.

Practice

First, our meta-analysis provides clear evidence of the effectiveness of unplugged activities in fostering students’ CT skills across K–12 educational levels, implying that unplugged pedagogy should be encouraged in educational practice. Globally, the use of ICT in education is still far from universal in most African and some Asian nations (Wallet, 2014, 2015), and rural areas lack resources even in most European countries (Brackmann et al., 2017). One advantage of unplugged activities is their cost-effectiveness, as the majority require only materials that are typically available in every classroom (e.g., paper, pencils, markers, and playing cards) (Busuttil & Formosa, 2020). Our review confirmed the effectiveness of unplugged tools such as board games and paper activities for promoting CT skills. Stakeholders such as teachers and instructional designers should work on creating solutions for integrating such unplugged pedagogy into the classroom (Delal & Oner, 2020). This requires teachers to be well trained in integrating such knowledge and instructional strategies. In practice, schools should work with stakeholders, such as educational agencies and non-governmental organizations (NGOs), to carry out specialized teacher development programs and to exchange good teaching practices and innovative teaching methods. This raises several questions regarding best practices for using unplugged activities in learning environments (Huang & Looi, 2021): What should effective teaching strategies and methods look like? What competencies do teachers need to provide practical experiences and successful outcomes? We therefore suggest investigating the best strategies for the effective design of these activities, considering teachers’ roles in facilitating students’ engagement. Furthermore, since the most popular unplugged tools that teachers used for designing CT learning activities were board and card games, teachers can discover innovative game elements by adopting gamification as an effective approach to increase motivation and engagement (Metwally et al., 2021) and by proposing diverse activities, including mixed and non-digital gamified activities (Qiao et al., 2022). Additionally, they can use tools beyond board and card games, such as paper activities and unplugged robotics and blocks, and integrate unplugged activities with plugged-in activities according to the learning environment, grade level, and intended learning outcomes.

Second, this study found that unplugged activities for promoting students’ CT skills were mostly applied in computer science and STEM. Despite the popularity of such topics in STEM and computer science, scholars argue that current understanding of how CT can be integrated into non-STEM subjects and informal educational contexts is insufficient (Dong et al., 2020). Moreover, both researchers and organizations have identified the need to emphasize the alignment between CT skills and domain knowledge so that CT can better serve the trend of integrating it into STEM and non-STEM subjects. CT has the potential to deepen students’ understanding of various domains, including STEM, non-STEM subjects, and everyday problem-solving (Hurt et al., 2023; Tang et al., 2020; Weintrop et al., 2016; Wing, 2008). In fact, unplugged activities have already shown promise in some non-STEM subjects; for example, a recent study by Hsu and Liang (2021) provided evidence of an unplugged approach promoting CT skills and foreign language learning outcomes. Therefore, we invite more teachers, educators, and grassroots initiatives to explore the possibilities of integrating unplugged approaches into subjects other than STEM and computer science.

Research

There are several implications for future research. First, although unplugged pedagogy is effective in fostering CT skills, whether it is comparable to the plugged-in approach is still unknown. For example, we still lack knowledge about how the two approaches should be applied (e.g., separately or combined, and in what sequence) to produce the best outcomes. Some researchers suggest that a combined approach works better (e.g., del Olmo-Muñoz et al., 2020; Sun et al., 2022), while others suggest using one over the other (e.g., Polat & Yilmaz, 2022; Tonbuloğlu & Tonbuloğlu, 2019), resulting in inconsistent conclusions. Scholars should explore whether encouraging the integration of unplugged activities into the curriculum at the primary level could promote students’ CT skills. Moreover, most of the studies in our review applied relatively short interventions of less than 10 weeks; how students’ CT skills develop over a longer timespan (e.g., from one term to a few years) is largely unknown. This raises questions for future research: What types of activities are most appropriate? How long can an intervention remain effective for lower secondary and primary school students? How can innovative hands-on activities be designed to promote twenty-first-century skills?

Second, gender and subject may affect the development of CT skills through unplugged activities. In our meta-analysis, we found that grade, class size, intervention time, and tools were all significant moderators. We had planned to include gender as a potential moderator of CT skills but found insufficient evidence to conduct this subgroup analysis, as the large majority of the identified studies did not report CT outcomes separately by gender. In science, girls have been found to show a decline in interest after interventions that include unplugged activities. On the other hand, there is still a lack of evidence on whether unplugged approaches increase underrepresented students’ participation (Huang & Looi, 2021), which needs further research. Given that unplugged CT activities can be carried out in different subjects, it would be interesting to explore whether unplugged activities affect knowledge in other disciplines. As scholars have suggested, research on issues such as the long-term retention of CT abilities and their applicability to different settings and domains (e.g., non-STEM subjects) is still in its infancy (Grover & Pea, 2018; Shute et al., 2017).

Limitations

Several limitations should be acknowledged. First, like most meta-analyses, this review is subject to the “file drawer problem.” Our meta-analysis included 13 studies and 16 effect sizes, and despite using the most relevant search terms, we may still have missed some studies. However, Dalton et al. (2012) suggested that this problem does not inflate or otherwise threaten meta-analytic results. Second, this study examined unplugged activities and CT skills only from a general perspective, whereas there are divergent definitions and dimensions in the CT field; for example, CT skills can be categorized as domain-general or domain-specific (Tsai et al., 2020). Future research could broaden the literature by exploring how unplugged activities could promote specific CT skills across contexts. Finally, our meta-analysis investigated four moderator variables (i.e., grade level, class size, intervention time, and tools). Other factors, such as socioeconomic status (SES), may also be potential moderators. Miller et al. (2018) found that integrating unplugged computing classes in low-SES classrooms may help students achieve the same engineering interests and attitudes as their high-SES counterparts. However, contemporary studies investigating the effect of SES on the acquisition of CT skills are insufficient (Grover & Pea, 2018). Thus, future research might compare the effect of unplugged activities on CT skills among students of various SES, ethnic, or socio-cultural backgrounds (Additional file 1).
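One common way to gauge sensitivity to the file drawer problem is Rosenthal’s fail-safe N: the number of unpublished null studies that would be needed to render the combined result non-significant. The sketch below uses hypothetical study-level z-values and is offered only as an illustration; it is not part of the analyses reported here.

```python
import math

def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N: the number of hypothetical null (z = 0) studies
    needed to push the combined Stouffer z below the one-tailed alpha threshold."""
    k = len(z_values)
    z_sum = sum(z_values)
    n_fs = (z_sum ** 2) / (z_alpha ** 2) - k
    return max(0, math.floor(n_fs))

# Hypothetical study-level z-scores for 13 included studies
z_scores = [2.8, 3.1, 1.9, 2.2, 4.0, 2.5, 1.7, 3.3, 2.0, 2.9, 1.8, 2.4, 3.6]
print("Fail-safe N:", fail_safe_n(z_scores))
```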

Conclusion

The proliferation of unplugged approaches in education has accelerated in the last few years. Using a systematic literature review and a meta-analysis, this study summarized the development of research on unplugged activities in CT education. The results show that interest in research on unplugged activities has increased substantially in recent decades. Most unplugged activities were applied in computer science and STEM at the primary and lower secondary school levels. Board and card games were the tools most frequently used to design unplugged activities for fostering CT skills, and CT diagnostic tools were frequently used for assessment. A follow-up meta-analysis demonstrated that grade level, class size, length of intervention, and tools all contributed to the development of CT skills. Taken together, the present work supports the view that unplugged activities are useful for developing students’ CT skills. This research can inform researchers, educators, and teachers about the state of the art of CT and unplugged activities, evaluate differences in the effectiveness of relevant teaching factors, and provide practical advice on implementing unplugged activities.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References


Acknowledgements

The authors want to thank the anonymous reviewers for their valuable comments and suggestions, which helped us improve the manuscript’s quality.

Funding

This study was supported by the R&D Program of Beijing Municipal Education Commission (KM202310028004).

Author information

Authors and Affiliations

Authors

Contributions

PC contributed to the design of the work; the acquisition, analysis, and interpretation of data; and drafting and substantively revising the work. DY contributed to the design of the work; the acquisition, analysis, and interpretation of data; and drafting and substantively revising the work. AHSM contributed to the interpretation of data and to writing, revising, and editing the manuscript. JL contributed to supervision and substantively revised the work. XW contributed to the acquisition, analysis, and interpretation of data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Dong Yang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

A full list of the reviewed articles.

Appendix

See Table 10.

Table 10 Information of included studies (N = 49)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.




Cite this article

Chen, P., Yang, D., Metwally, A.H.S. et al. Fostering computational thinking through unplugged activities: A systematic literature review and meta-analysis. IJ STEM Ed 10, 47 (2023). https://doi.org/10.1186/s40594-023-00434-7


Keywords