Fagerlund, Janne, et al. “Computational Thinking in Programming with Scratch in Primary Schools: A Systematic Review.” Wiley Online Library, John Wiley & Sons, Ltd, 8 May 2020, onlinelibrary.wiley.com/doi/10.1002/cae.22255.
Computer programming is being introduced in educational curricula, even at the primary school level. One goal of this implementation is to teach computational thinking (CT), which is potentially applicable in various computational problem-solving situations. However, the educational objective of CT in primary schools is somewhat unclear: curricula in various countries define learning objectives for topics such as computer science, computing, programming or digital literacy, but not for CT specifically. Additionally, there has been confusion in concretely and comprehensively defining and operationalising what to teach, learn and assess about CT in primary education, even with popular programming tools such as Scratch. In response to the growing demands of CT, this study conducts a literature review of studies utilising Scratch in K–9 to investigate what kind of CT has been assessed with Scratch at the primary education level. As a theoretical background for the review, we define a tangible educational objective for introducing CT comprehensively in primary education and concretise the fundamental skills and areas of understanding involved in CT as its “core educational principles”. The results of the review summarise the Scratch programming contents that students can manipulate and the activities in which they can engage that foster CT. Moreover, methods for formatively assessing CT via students’ Scratch projects and programming processes are explored. The results underline that the summarised “CT-fostering” programming contents and activities in Scratch are vast and multidimensional. The next steps for this study are to refine pedagogically meaningful ways to assess CT in students’ Scratch projects and programming processes.
The ubiquity of computing and computer science (CS) has expanded rapidly in modern society [1]. Meanwhile, countries such as Finland, England and Estonia have incorporated computer programming as a compulsory topic in primary education (K–9) [27, 39]. Programming with Scratch, a graphical, block-based programming language, is especially popular in this age group, thus providing a potentially impactful context for educational research. However, several scholars regard programming education not as an end in itself but essential—though nonexclusive—for fostering computational thinking (CT) (i.e., supporting the cognitive tasks involved in it) [23]. CT is an umbrella term that embodies an intellectual foundation necessary to understand the computational world and employ multidimensional problem-solving skills within and across disciplines [56, 61].
Despite its popularity, there have been shortcomings and uncertainty surrounding CT in terms of, for instance, teacher training needs concerning the aims and intents of CT education. In fact, curricula in different countries pose various educational objectives for such topics as CS, computing, programming or digital literacy, but not for CT specifically [27]. Relatedly, there have been shortcomings in concretising what to teach, learn and assess regarding CT in schools, although previous literature portrays particular concepts and practices (e.g., “Algorithms”, “Problem decomposition”) that can shape students’ skills and understanding in CT and contribute to its educational objective [8, 34]. However, the CT potentially learnt while programming with tools such as Scratch has typically been perceived as, for instance, the code constructs that students use in their projects, which arguably represents mere programming competence rather than higher-level CT. When using such tools as Scratch, the various programming contents that students manipulate and the programming activities in which they engage can foster the skills and areas of understanding involved with CT in different ways. Previous literature has not systematically and thoroughly investigated how the practical programmatic affordances in Scratch can represent and foster the manifold skills and areas of understanding associated with CT as described in its core concepts and practices.
The aims of this study are to contextualise CT comprehensively in the Scratch programming environment for teaching and learning in primary school classrooms and explore the assessment of CT through Scratch in this context. In practice, a literature review for studies involving assessments in Scratch in K–9 is conducted. As a theoretical background, we define a tangible educational objective for CT in the context of programming in primary education based on previous literature. Moreover, as a springboard for investigating the skills and areas of understanding included in CT in Scratch, we concretise CT’s core educational principles (CEPs)—fundamental computational facts, conceptual ideas, and techniques that students can learn—from CT concepts and practices presented in earlier research. The goals of the review are to gather Scratch programming contents and activities, use the CEPs as a lens to view them specifically as “CT-fostering” contents and activities, and explore ways in which they could be formatively assessed in classroom settings.
Wing [61, 62] originally defined CT as “the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can effectively be carried out by an information-processing agent”. Michaelson [43] underlined that CT is a way of understanding problems whereas CS provides concepts for CT in search of a praxis. Aho [1] revisited Wing’s original definition and emphasised that solutions pertinent to CT are namely algorithmic. However, CT still has no solid core definition [24]. It has been viewed as a competence [58], a thought process [1, 62], a set of skills [61] and a problem-solving process [54]. However, the consensus is that it draws on disciplinary concepts and models central to CS and utilises the power of computing [56].
The purpose of primary education is to learn about the world and to prepare for subsequent studies and working life. Although CT’s transferability across problem-solving contexts has been questioned [14], Wing [61] posited that CT as a collection of transversal skills and knowledge is necessary for everyone. Lonka et al [33] underlined that students, regardless of their future profession, should learn to identify the central principles and practices of programming and understand how they influence everyday life.
To include such essential characteristics and purposes of CT [33, 53, 56, 61] tangibly in primary education, we define the following educational objective for it: students learn to understand what computing can/cannot do, understand how computers do the things that they do and apply computational tools, models and ideas to solve problems in various contexts. According to recent reviews of curricula in various countries, such educational ideas are relevant in schools via CS education, programming or embedded within different subjects, but not for CT specifically [27, 39]. By exploring computing, students should also gain certain attitudes and perspectives, such as understanding computational ethics [33]. However, this study limits its scope by focusing on CT’s key concepts and practices, which have often been highlighted in previous literature to characterise fundamental areas of understanding in computing and skills in computational problem-solving.
Definitions for the key concepts and practices in CT have varied throughout previous literature. For instance, in the context of Scratch, Brennan and Resnick [9] presented a concrete CT framework that comprised concepts (e.g., loop, variable), practices (e.g., debugging, iteration) and perspectives (e.g., expressing, questioning). Although meaningful for CT, such context-specific frameworks may be unsuitable for framing CT across programming contexts and promoting deeper learning [24]. Therefore, based on prior research framing CT concepts and practices in a broader fashion, we concretise the fundamental skills and areas of understanding involved in CT as its core educational principles (CEPs) as a background.
Several studies have framed CT’s key concepts and practices more generally in programming, computing or CS in various ways. CT is an elusive term whose borders are still being drawn, and it involves areas that could be interpreted as lying in its “central” or “peripheral” zones. Concise views of CT can be rather programming-centric and omit potentially essential areas of general-level CT. In turn, generous views may overlap with other competence areas, such as mathematics. By framing our view of CT based on several previous works, we strive to adopt a relatively generous rather than a concise view. The motivation is that the more generous views have been adopted less often, and they can expand our understanding of the potentially meaningful borders of CT assessment through Scratch in K–9 and can feasibly be narrowed as needed.
Settle and Perkovic [51] developed a conceptual framework to implement CT across the curriculum in undergraduate education. In 2009, the International Society for Technology in Education and the Computer Science Teachers Association [3] devised an operational definition for CT concepts and capabilities to promote their incorporation in K–12 classrooms. In the aftermath of computing having been introduced in British schools in 2014, Czismadia et al [13] developed a framework for guiding teachers in teaching CT-related concepts, approaches and techniques in computing classrooms. Relatedly, Angeli et al [2] designed a K–6 CT curriculum comprising CT skills and implications for teacher knowledge. To demystify CT’s ill-structured nature, Shute et al [53] reviewed CT literature and showed examples of its definitions, interventions and assessments in K–12. Similarly, Hsu et al [28] reviewed prior literature and discussed how CT could be taught and learned in K–12. To further illuminate CT’s application in different contexts, Grover and Pea [24] elaborated what concepts and practices CT encompasses. To concretise the skills and areas of understanding associated with the CT concepts and practices in these works as atomic elements, enabling their systematic contextualisation in Scratch, the definitions of the concepts and practices can be summarised into CT’s CEPs for teaching and learning at the primary school level.
In practice, various programming tasks can foster skills and understanding in the ways of thinking and doing involved in CT as described in the CEPs. In Scratch, students manipulate programmatic contents, that is, the objects and logic structures that establish computational processes in their projects, and engage in certain programming activities while designing said contents [9]. Hence, it is meaningful to examine how various Scratch programming contents and activities contextualise the CEPs in practice.
Scratch is a free web-based programming tool that allows the creation of media projects, such as games, interactive stories and animations, connected to young peoples’ personal interests and experiences. Projects are designed by combining graphical blocks to produce behaviours for digital characters (“sprites”). Block-based languages typically have a “low floor”: students cannot make syntactic mistakes because only co-applicable blocks combine into algorithmic sets of instructions (“scripts”) [9, 38].
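The structures described above can be pictured textually: a sprite carries one or more scripts, each an ordered sequence of blocks. The sketch below is a minimal, hypothetical model (the opcode strings mirror Scratch 3’s internal block naming, but the `Sprite` class itself is our own illustration, not part of Scratch):

```python
# Minimal, illustrative model of a Scratch sprite and its scripts.
# Opcode strings mirror Scratch 3's internal naming (e.g. "event_whenflagclicked");
# the Sprite class is a hypothetical stand-in for illustration only.

class Sprite:
    def __init__(self, name):
        self.name = name
        self.scripts = []  # each script: an ordered list of block opcodes

    def add_script(self, blocks):
        self.scripts.append(list(blocks))

cat = Sprite("Cat")
# "When green flag clicked, repeat: move 10 steps, next costume"
cat.add_script([
    "event_whenflagclicked",
    "control_repeat",
    "motion_movesteps",
    "looks_nextcostume",
])

print(len(cat.scripts))   # number of scripts on this sprite -> 1
print(cat.scripts[0][0])  # the "hat" block that starts the script
```

In this representation, the “low floor” of block-based languages corresponds to the fact that only well-formed sequences of co-applicable blocks can be assembled into a script in the first place.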
Despite the affordances of graphical tools, programming is cognitively complex, and rich conceptual mental models may not emerge spontaneously [4, 40]. An “in time” pedagogy in which new knowledge is presented whenever necessary through various project-based activities is a popular approach; however, it requires the careful formulation of authentic problems and selection of projects (i.e., ways to introduce CT appropriately via programming contents and activities) [20, 34]. Moreover, learning can be supported with a formative assessment that determines “where the learner is going”, “where the learner is right now” and “how to get there”. In practice, instructors should clarify the intentions and criteria for success, elicit evidence of students’ understanding and provide appropriate feedback that moves learning forward [6]. Programming is a potentially fruitful platform for enabling these processes because it demonstrates students’ CT and provides a potential accommodation for timely and targeted learning support [23, 34].
Several previous empirical studies have shown in part how specific programming contents and activities in Scratch could be assessed. However, the contents and activities have been scarcely contextualised in CT. To examine how CT could be thoroughly introduced and respectively assessed in Scratch in K–9 (primary education), this study reviews prior literature focused on assessing Scratch contents and activities in K–9 and aligns them to CT concepts and practices according to the summarised CEPs (see Section 2.2). The purpose is to derive elementary CT-fostering learning contents and activities and to explore appropriate methods for their formative assessment in primary schools. Hence, the research questions are:
What Scratch programming contents and activities have been assessed in K–9?
How have Scratch programming contents and activities been assessed?
How do different Scratch programming contents and activities contextualise CT concepts and practices via the CEPs?
To begin answering the research questions, literature searches were performed for peer-reviewed studies focusing on the assessment of Scratch programming contents and activities in K–9 (Figure 1). First, searches were conducted with the terms “computational thinking” and “Scratch” in the ScienceDirect, ERIC, SCOPUS and ACM databases. Publications were sought as far back as 2007 when Scratch was released [9]. The searches resulted in 432 studies (98 in ScienceDirect, 27 in ERIC, 217 in SCOPUS and 90 in ACM) on November 27th, 2019. Duplicate and inaccessible publications were excluded from this collection.
The abstracts of the remaining studies were screened, and both empirical and nonempirical studies were included if they addressed assessment in Scratch (or highly similar programming languages) in K–9. Publications conceptualising generic assessment frameworks were included if Scratch and primary education were mentioned as potential application domains. Studies set in other or unclear educational levels were excluded to maintain a focus on primary schools. Studies written in languages other than English were excluded.
The remaining 50 studies were not presumed to cover all potentially relevant work. Further searches were conducted similarly with the terms “computational thinking” and “Scratch” on Google Scholar, which provides a running list of publications in decreasing order of relevance. These publications were examined individually until the results no longer provided relevant studies. Simultaneously, the reference lists of all included studies were examined to discover other potentially relevant publications.
Altogether, the 81 obtained studies were then screened for the assessment instruments that they employed. Studies analysing students’ Scratch project contents or their programming activities in Scratch were included. Studies analysing the learning of other subject domain contents or addressing other theoretical areas, such as motivation, attitudes and misconceptions, were excluded. Assessment instruments that were defined in insufficient detail or were adapted in an unaltered form from prior studies were excluded, since they provided no additional information for the RQs. For example, we found that several articles employed the assessment instrument called “Dr. Scratch” (see results). To obtain information regarding what Scratch programming contents and activities have been assessed in K–9 and how they have been assessed, we only included the paper that originally introduced said contents and activities, provided that the work was attainable. Finally, 30 publications were selected for review.
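Many of the included instruments (Dr. Scratch among them) perform artefact analysis by counting which code constructs appear in a student’s project. A minimal sketch of this presence/frequency approach, assuming the Scratch 3 project format in which each target (stage or sprite) stores its blocks in a dictionary keyed by block id, each entry carrying an `opcode` field; the tiny inline `project` dict stands in for a parsed `project.json`:

```python
from collections import Counter

def count_opcodes(project):
    """Count block opcodes across all targets of a Scratch-3-style project dict."""
    counts = Counter()
    for target in project.get("targets", []):
        for block in target.get("blocks", {}).values():
            counts[block["opcode"]] += 1
    return counts

# A tiny stand-in for a parsed project.json (real projects are far larger).
project = {
    "targets": [
        {"name": "Stage", "blocks": {}},
        {"name": "Cat", "blocks": {
            "a": {"opcode": "event_whenflagclicked"},
            "b": {"opcode": "control_repeat"},
            "c": {"opcode": "motion_movesteps"},
            "d": {"opcode": "motion_movesteps"},
        }},
    ]
}

counts = count_opcodes(project)
print(counts["motion_movesteps"])  # frequency of a construct -> 2
print("control_repeat" in counts)  # presence of a construct -> True
```

The counts support exactly the two judgements that recur in the reviewed instruments: whether a construct is present at all, and how often it occurs.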
The Scratch programming contents and activities assessed in the studies were described based on their type (RQ1) and the employed assessment method and taxonomy or rubric (RQ2). Simultaneously, by employing content analysis, the contents and activities were aligned to CT concepts and practices according to the CT’s CEPs (see Section 2.2) that they contextualised (RQ3) (indicated in results by CT concepts and practices highlighted in parentheses). The analysis was carried out by the first author.
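The alignment step can be pictured as a lookup from assessed contents to the CT concepts and practices they contextualise. The mapping below is purely illustrative: the pairings are hypothetical examples in the spirit of the CEPs, not the rubric produced by the analysis.

```python
# Hypothetical alignment table: Scratch content -> CT concept/practice.
# These pairings are illustrative examples only.
CEP_ALIGNMENT = {
    "control_repeat": "Loops / Algorithms",
    "control_if": "Conditionals / Logic",
    "data_setvariableto": "Variables / Data",
    "procedures_definition": "Problem decomposition / Abstraction",
}

def align(opcodes):
    """Tag each assessed opcode with the CT area it contextualises (if known)."""
    return {op: CEP_ALIGNMENT.get(op, "unaligned") for op in opcodes}

tags = align(["control_repeat", "motion_movesteps"])
print(tags["control_repeat"])    # -> Loops / Algorithms
print(tags["motion_movesteps"])  # -> unaligned
```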
Due to the complexity of CT, however, there is an immense level of detail to which the contextualisation in RQ3 could potentially reach. For instance, reducing unnecessary detail (Abstraction) can involve various broader programming tasks and detailed subtasks. However, Voogt et al [58] stated that it is important to discover “what matters” for CT. Therefore, as our first step, we settled on merely describing the assessed contents and activities that contextualised CT, instead of attempting to further analyse how they could foster CT in different ways.
The analysis resulted in rubrics for Scratch contents and activities that foster skills and understanding in CT concepts and practices. The discovered assessment methods were examined according to how they potentially enable the formative assessment processes presented by Black and Wiliam [6].
Potential limitations in reviews especially concern the definition of the RQs, search procedure, selection of articles, bias in the source material and its quality and the ways of presenting the results [26]. Therefore, we wish to make the following remarks concerning the repeatability, objectivity and transparency herein. By describing the procedure comprehensively and in detail, we aimed to reveal any bias (e.g., concerning the use of appropriate search strings in representative databases) [12, 26]. Additionally, we strived to describe the inferences made and the logic behind them clearly and give equal weight to all reviewed work, though spotlighting evidence that stands out in the process and potentially suggests subjectivity in the source material [26]. Furthermore, we aimed to reinforce consistency in the analysis by iteratively evaluating the contents of the articles, ensuring that we interpreted them the same way at different times [35]. By externally checking the research process and debriefing the results among the authors, we aimed to verify further that the meanings and interpretations resonated among different researchers [12].
Prior studies utilising Scratch in K–9 involved the assessment of various programming contents and activities with diverse assessment methods and taxonomies or rubrics (RQ1, RQ2) (Table 1). Four distinct programming substance categories were found and were named as “code constructs”, “coding patterns”, “programming activities” and “other programming contents”. Altogether, 20 studies assessed code constructs as the logic structures (e.g., sequence of blocks, “repeat” [44]) that programmers use to establish algorithmic sets of instructions in Scratch projects. Ten studies assessed coding patterns, combinations of code constructs that act as larger programmatic units for specific semantical purposes (e.g., “Animate Motion” [50]). Eleven studies examined students’ programming activities (e.g., “script analysis” [30]), whereas six studies examined other programming contents (e.g., “project genres” [19]). Only six studies considered the direct assessment of CT, and the remaining studies assessed the contents or activities with or without presenting CT as a motivational theme.

Table 1. A summary of studies involving the assessment of Scratch programming contents and activities in K–9
| # | Authors | Contents/activities | Method | Taxonomy/rubric |
|---|---------|---------------------|--------|-----------------|
| 1 | Benton et al [5] | Coding patterns (CT) | Self-evaluation | Difficulty rating |
| 2 | Blau et al [7] | Other programming contents | Artefact analysis | Presence/frequency |
| 3 | Brennan and Resnick [9] | Code constructs + programming activities (CT) | Artefact analysis | Presence/frequency |
|   |  |  | Performance evaluation | Skill description |
|   |  |  | Interview |  |
| 4 | Burke [10] | Code constructs | Artefact analysis | Presence/frequency |
|   |  | Programming activities | Observation; interview | Description, data-driven |
| 5 | Chang et al [11] | Code constructs (CT) | Artefact analysis | Presence/frequency |
| 6 | Ericson and McKlin [15] | Code constructs | Test | Correct answer |
|   |  | Coding patterns |  | Correct drawing |
| 7 | Franklin et al [16] | Coding patterns | Observation | Correctness level |
|   |  | Code constructs | Test | Correct answer |
|   |  | Programming activities | Observation | Behaviour type |
| 8 | Franklin et al [17] | Code constructs; coding patterns | Artefact analysis | Content completion (percentage) |
| 9 | Funke et al [19] | Coding patterns | Artefact analysis | Progression level |
|   |  | Code constructs; other programming contents |  | Presence/frequency |
| 10 | Funke and Geldreich [18] | Code constructs | Log data analysis | Description |
| 11 | Grover and Basu [21] | Code constructs | Test | Correct response |
|   |  | Coding patterns | Think-aloud |  |
| 12 | Gutierrez et al [25] | Other programming contents | Artefact analysis | Presence/frequency |
| 13 | Israel et al [29] | Programming activities | Observation + discourse analysis | Behaviour type |
| 14 | Ke [30] | Code constructs | Artefact analysis | Presence/frequency |
|   |  | Programming activities | Observation | Behaviour type |
| 15 | Lewis [31] | Code constructs | Test | Correct answer |
|   |  |  | Self-evaluation | Likert |
| 16 | Lewis and Shah [32] | Programming activities | Discourse analysis | Behaviour type; hypotheses, data-driven |
| 17 | Mako Hill et al [36] | Programming activities; other programming contents | Artefact analysis | Presence/frequency |
| 18 | Maloney et al [37] | Code constructs | Artefact analysis | Presence/frequency |
| 19 | Meerbaum-Salant et al [41] | Programming activities | Observation | Behaviour type |
| 20 | Meerbaum-Salant et al [42] | Code constructs; coding patterns | Test | Correct response |
| 21 | Moreno-León et al [44] | Code constructs (CT) | Artefact analysis | Presence |
| 22 | Ota et al [46] | Coding patterns; code constructs | Artefact analysis | Presence |
| 23 | Sáez-López et al [55] | Code constructs | Test | N/A |
|   |  | Programming activities + other programming contents | Self-evaluation; observation | Performance level |
| 24 | Seiter [49] | Coding patterns | Artefact analysis | Presence |
| 25 | Seiter and Foreman [50] | Code constructs + coding patterns (CT) | Artefact analysis | Presence |
| 26 | Shah et al [52] | Programming activities | Discourse analysis | Behaviour type |
| 27 | Tsan et al [57] | Programming activities | Discourse analysis; observation | Behaviour type |
| 28 | Wangenheim et al [59] | Code constructs (CT) | Artefact analysis | Presence |
| 29 | Wilson et al [ |  |  |  |