Making Classroom Writing Assessment More Visible, Equitable, and Portable through Digital Badging

Stephanie West-Puckett, College English, Volume 79, Number 2, November 2016

For writing teachers, the term assessment conjures a host of negative reactions, as assessment is often framed as a mandate from administrators or legislators, those with more institutional power but less direct contact with students and their writing. With good reason, teachers are skeptical of assessments as top-down mandates, what Madaus sardonically labeled “manna from above” (10), because they have seen the ways that top-down assessments enable classroom surveillance (Condon; Spellmeyer), undermine student and teacher agency (Perelman; Berlak; Gallagher), and pigeonhole curriculum by privileging that which is easy to measure over that which is collaboratively valued by students and teachers (Moss; Callahan). In this context, the push for “local control” (Huot; Gallagher; Gere) has been a welcome intervention in the politics and practice of writing assessment; it has given us a rhetoric of writing assessment praxis that is grounded in disciplinary expertise and responsive to local concerns.

As part of the movement to localize writing assessment, some writing teachers have found their places at programmatic or institutional assessment tables, working with administrators to codesign and implement locally responsive assessment. Bob Broad and his coauthors’ work with dynamic criteria mapping, for example, foregrounds teacher engagement with students in the work of assessment and also brings these classroom commitments to the programmatic level, norming and negotiating values and interpretations with other stakeholders. Still, local initiatives often continue to operate with a top-down logic, as decisions made on the programmatic level can have constraining effects when they trickle down into writing classrooms. For example, at my own university, as part of our Quality Enhancement Plan, composition instructors worked with administrators to develop rubrics for program-level assessment of our new Writing About the Disciplines course (Sharer et al.). These rubrics were designed to collect aggregated data and measure how well students’ writing samples met the new outcomes; however, the same rubrics were then required for classroom use by graduate teaching assistants and strongly encouraged for other faculty, constraining the kinds of projects students and faculty might take on for fear that the writing might score poorly on the predesigned rubric. In this case, teaching faculty members were asked to accommodate themselves to an assessment instrument as opposed to accommodating assessment instruments to the teaching and learning context. As my university’s case illustrates, a sense of instrumentality takes hold when classroom assessment is uncritically iterated and transacted with bits and pieces of leftover machinery from what started as rich democratic processes at the institutional or programmatic levels (Huot and Neal). Thus, “local control” often stops just short of classroom control and just short of engaging all teachers and all students in active, participatory, and critical negotiation of assessment paradigms.

To remove the stigma from the term “writing assessment” for classroom teachers, I argue that we should recognize the classroom as the primary site of writing assessment, making visible the ways students and teachers can and do actively negotiate writing assessment discourses, histories, values, and their own biases in this hyperlocal space. As such, this article offers a rich case study of these negotiations in my own first-year writing classroom, demonstrating the affordances and constraints of using digital badging to make visible, with visual modes, those discussions and decisions about what counts as “good” writing and who gets to decide. Borrowing from research in participatory literacies as well as social semiotics, I argue that social justice in classroom writing assessment is attainable only when students and teachers participate in assessment design, interpretation, and continuous inquiry into pre- and post-assessment conditions. In adopting digital badging platforms, I have found the technological means to engage students in this participatory process. This article demonstrates how we can, in our classrooms, pair this open-source technology with practices of critical validity inquiry (Perry) to consciously interrupt bias,1 promote a more diverse construct of writing, and better meet the needs of students of color and lower income students. While theoretical concerns about validity are most often taken up at the programmatic level, I argue that the theorizing and practice of critical validity inquiry must also occur in the writing classroom as it reorients our attention to the purposes of assessment as a practice of participatory social justice. Thus, I conclude this article with a discussion of how digital badging’s portability can amplify often unheard student and teacher voices in writing assessment through a “bubbling up” and “bubbling out” of hyperlocal practices designed to support writers as they move across contexts.

Assessment in Participatory Cultures

As Henry Jenkins argues in Confronting the Challenges of Participatory Culture, many learners today are vigorously engaged in networked digital environments where they connect with others to develop their interests and accomplish shared goals through deeply contextual literacy (xiv). Collectively referred to as “participatory literacies,” these ways of knowing, doing, being, communicating, and learning are necessary for social, civic, and economic participation in a digitally connected world. While young people from well-resourced families and communities often have both access and motivation to develop these literacies, engaging what Jenkins calls the “hidden curriculum” (xii) of popular culture and new media, many students of color and students from working-class backgrounds may not have these rich opportunities outside of school. These inequities, then, create not simply an access gap but a participation gap that threatens our vision of a more democratic future. They underscore the need for all learning institutions, both formal and informal, to develop curricular opportunities that build these participatory literacies, designing curriculum and assessment practices that can build learners’ agency and institutional capital and help young people translate academic success into social, civic, and economic capital.

Connected Learning, a research-based agenda forwarded by the MacArthur Foundation, is one such initiative meant to help educators at all levels to reimagine learning in a participatory culture. Based on educational theories and principles that privilege student interest and passion as well as peer networks for academic, economic, and social achievement, this framework positions students as active makers of products and knowledge who fully participate in their communities, with learning as a by-product of the collaborative negotiation of tools, discourses, values, and ideas. With commitments to equity, connectivity, and full participation, Connected Learning presents a new vision of education that holds promise for more democratic and socially just futures for all learners.

While many institutions are embracing the Connected Learning principles in their curriculum as a commitment to social justice, they are struggling to understand how to design assessments that support learners in gaining access to the “hidden curriculum,” negotiating the power discourses of popular media as well as institutions, academies, and disciplines. So while writing studies scholars have argued for two decades that we need to adopt new practices for digital assessment (Ball; Penrod; Takayoshi) and have since explored issues of privacy and data tracking in online environments (Crow), the use of formative assessment structures to support iterative practice (Reilly and Atkins), and the productive connections between assessment and usability for public digital writing (Zoetewey et al.), among other concerns, our practices, such as electronic portfolios, have largely remained tethered to the print-based logics early digital writing scholars argued against.2

In addition, because assessment is often contained within classrooms, programs, or institutions, schools have done little in assessment to help learners recognize rich and nuanced literacy practices and performances that thread through home, community, and academic spaces. Assessment practices should allow learners to leverage their achievements from one context in another, recognizing that, as Connie Yowell writes, “In the digital age, the fundamental operating and delivery systems are networks, not institutions such as schools, which are one node of many on a young person’s network of learning opportunities,” and grades, certificates, and credit hours are not the only metrics we can use to construct our learner identities. As such, our classroom assessment practices need to be designed with what Ridolfo and DeVoss call “rhetorical velocity,” developing a sense of how assessment instruments, artifacts, and decisions can be repurposed, reused, and recirculated by students beyond the duration of a semester as they move across learning and professional contexts. Projects like Mozilla’s Open Badging initiative may help us make these connections across learning contexts.

Digital Badging as Multimodal Assessment

Digital badging is an assessment technology that offers promise for providing a meaning-making system that operates on the principles and practices of open culture—a social movement that values collaboratively produced information and knowledge that is freely distributed and built on through accessible networks. Like traditional badges worn by scouts, civil servants, and military personnel, digital badges are graphics that symbolize achievement, experience, or affiliation in particular communities. In digital spaces, these web graphics are encoded with metadata that provide descriptive information about the badge issuer, the criteria for earning the badge, the date the badge was earned, and the evidence or artifacts that were submitted in consideration of the badge application. Digital badges are commonly circulated in video game environments and on e-commerce sites like eBay, but they made their big debut in academic communities in 2011 when Mozilla announced its Open Badging Initiative to create and share a free, open-source badging infrastructure. Mozilla’s announcement was soon followed by a call for proposals from the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), which provided significant funding for formal and informal learning institutions that would develop digital badging pathways to systematize their curricular offerings. Though often talked about as part of the gamification of education (deWinter and Moeller; Sierra and Steadman; Gee), these digital artifacts have rarely been considered as assessment technologies with the potential to reclaim and remake (writing) assessment at the collegiate level.
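
For readers curious about the mechanics, the sketch below (in Python, for illustration) models the kind of metadata such a badge might carry, loosely following the shape of Mozilla’s Open Badges assertion format; the field names, values, and URLs are illustrative assumptions rather than the exact specification.

```python
# A hypothetical open-badge record, loosely modeled on Mozilla's
# Open Badges assertion format. Field names, values, and URLs are
# illustrative assumptions, not the exact specification.
badge_class = {
    "name": "Rhetorical Knowledge",
    "description": "Recognizes writing that orients itself to audience, "
                   "context, purpose, arrangement, medium, and strategy.",
    "image": "https://example.edu/badges/rhetorical-knowledge.png",
    "criteria": "https://example.edu/badges/rhetorical-knowledge/criteria",
    "issuer": "ENGL 1100 badge-design group (hypothetical)",
}

assertion = {
    "recipient": "writer@example.edu",   # who earned the badge
    "badge": badge_class,                # what was earned and what it signifies
    "issuedOn": "2014-10-15",            # when the badge was awarded
    "evidence": [                        # artifacts submitted for review
        "https://example.edu/portfolios/critical-identity-narrative",
    ],
}

print(assertion["badge"]["criteria"])
```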

Only four institutions of higher education—Carnegie Mellon University, Colorado State, UC Davis, and Ohio State University—were among the HASTAC Badges for Lifelong Learning grant recipients. The other recipients, more than fifty in total, included K–12 institutions, but a majority were informal learning centers such as museums, after-school programs, libraries, and community youth partnerships. So despite a theoretical orientation toward semiotics in many of our writing studies departments and our disciplinary understanding of the ways images can communicate with immediacy (Kress and van Leeuwen), digital badging as an assessment technology has yet to be taken seriously by writing studies practitioners or by collegiate faculty and administrators more broadly.

As assessment artifacts, digital badges, unlike portfolio scores, function as “talkative boundary objects” (Rughiniș 2009), symbols of skill, experience, or achievement that are grounded in the values, epistemologies, and resources available in one particular context but attempt to move with the learner to new contexts. Unlike traditional assessment artifacts such as grades and scale scores that pretend at acontextuality, digital badges belong to learners and contain the messy baggage of what it means to know, to do, and to learn with particular people and resources in particular times and places. As these artifacts move across networks, they are meant to foster negotiation between learners and their learning sponsors or organizations, promoting articulation while resisting standardization. This kind of artifact-centered assessment “talk” is never easy or “efficient,” as I’ll demonstrate, but it works to map the opportunities and challenges particular learners and groups of learners face in different networks, making assessment a negotiated participatory practice of integrated judgment based on relevant theoretical, empirical, and consequential concerns for how learners and organizations are going to use assessment data.

Badging is certainly not a new practice, as societies from ancient to contemporary have used visual symbolism to mark merit, to indicate belonging or membership, and to signal honor or dishonor. While Halavais has written more thoroughly on the genealogy of badging across historical periods and world cultures, it will suffice here to say that badging’s longstanding history of use, reuse, and appropriation into the digital realm is rooted in the primacy of visual modes of communication and the accessibility and interpretability of visual rhetoric. On one level, Kress and van Leeuwen show us how visual images communicate with an immediacy, making initiatives like digital badging potentially attractive to those who wish to open assessment practices and make learners key stakeholders in assessment conversations. Visual modes have historical alliances with the meaning-making practices of the masses—proletariats, women, multilingual English speakers, and so on—those whose bodies were not invited into the inner sanctum of a numerical and linguistic academy (Arola and Wysocki). Thus, embracing the visual assessment artifact has potential for unsettling monolithic structures of assessment by focusing on meaning-making modes that are socially authorized across cultures and communities. Remixing visual modes into assessment discourses through the use of badges provides additional opportunities for students and teachers to claim agency as they select, manipulate, and combine modes and signs to make, unmake, and remake meaning about their learning, their achievements, and their identities as learners.

Invoking social semiotics to resituate practices of writing assessment should be a soft sell for writing studies, as the field continues to find value in socially constructed theories of knowing, doing, writing, and evaluating together with a diversity of signs and modes. It can also help us understand how both curriculum and assessment work are always already multimodal communication acts, working at the intersections of alphalinguistic, visual, aural, spatial, and embodied modes. What’s more, as Kress argues, learning any “curriculum” is the process of receiving, remixing, and remediating that curriculum’s message, using our culturally constructed semiotic repositories to produce responses to it. The learner’s choice of modes in responding tells us important information about the learner’s interest and engagement with that curriculum and shouldn’t be divorced from the assessment of what has been learned, particularly if we are interested in building participatory literacies that honor a diversity of culturally constructed meaning-making practices. Kress writes:

The maker of the sign has made the form of the sign to be an apt expression of the meaning to be represented. For the recipient of the sign, therefore, the shape, the form of the sign, is a means of forming a hypothesis about the maker’s interest and about the principles that they brought to their engagement with the prompt that led to the making of the sign . . . When the “recipient of a sign” is an assessor, the question is “What metric will be applied? Will it be a metric oriented to authority— a metric that indicates the distance from what ought to have been learned, whether in terms of modes used or in terms of conformity to the authority of the teacher/assessor; or will it be a metric oriented to the learner’s interest and that evaluates the principles the learner brought to the engagement of the curriculum?” (28)

If we take semiotic theories of learning seriously and use them to engage Kress’s questions, we can see that our most easily recognizable classroom writing assessment instruments such as rubrics, grades, and even instructor commentary tend to operate on a deficit model that seeks to measure the distance between assessors’ interests and expectations and students’ ability or willingness to reproduce those interests and expectations in their response. For example, Lillian Brannon and Cy Knoblauch pointed to this tendency over twenty years ago when they argued that instructor feedback to student writing often directed the writer to produce the ideal text the instructor had in mind as opposed to helping students pursue their own interests and leverage their own semiotic storehouses through acts of composition. Sadly, it seems, our most visible and easily recognizable classroom assessment practices—practices too often taken up by our graduate teaching assistants and new faculty in writing studies—have not moved us much beyond approaches that limit learners’ agency, authority, interests, and meaning-making capabilities.

This condition is peculiar considering that our field has a rich history of demonstrating how both students and teachers can and do participate in the negotiation of the meaning and value of writing inside the classroom. For example, Inoue’s work around “community-based” assessment in writing classrooms outlines how students and teachers can negotiate values through dialogic engagement about what matters in writing. While these discussions result in a rubric, it is the engagement of multiple interpretations and priorities in the classroom context that is foregrounded in Inoue’s work (“Community-Based”; Antiracist). Similarly, Broad argues for the collaborative creation of dynamic criteria maps, on both the classroom and programmatic levels, underscoring the need for community-building processes that engage stakeholders in collaboratively articulating their values and commitments about writing. While the map is the artifact of this social process, the process itself is the most important contribution Broad offers. These practices represent some of the most valuable work that writing assessment can do to foster social justice through participatory practice, yet they aren’t the images that readily come to mind when we, in the larger field of English studies, think of classroom assessment. What will it take to disrupt the instrumental fetish that privileges the artifact over the process of promoting participatory literacies that build agency and collaborative capital? How do we capture these ephemeral, ongoing negotiations in the classroom and make them more visible, accessible, and portable through digital badging without privileging the rubric, the map, or even the badge as an assessment instrument? What types of heuristics and interpretive processes can safeguard us from falling into the trap of fetishizing digital badges while exploiting their potential as opportunities for and markers of institutional and cultural capital for students?

Critical Validity Inquiry to Avoid Object Fetish

Over the last quarter century, as Huot and Neal note, writing assessment scholars have moved from defending the practices of direct writing assessment (as opposed to indirect measures such as multiple-choice testing about writing) to adopting more sophisticated and nuanced understandings of its use and impact, practices that fall under the broad umbrella of validity inquiry. As such, validity inquiry (Huot and Neal) has helped us understand how assessment instruments (direct placement exams, portfolios, standardized writing assessments such as the SAT or ACT) and the data gained from them impact writers, teachers, classrooms, and programs. From this work we’ve reached consensus on the notion that assessment instruments themselves are not “validated” and shouldn’t be fetishized in an assessment scene; instead, local—and, I argue, hyperlocal—stakeholders must work to make evidence-based decisions about the design, implementation, and impact of assessment instruments, making tentative interpretive arguments about “validity” in service to (hyper)local practices, values, and commitments.

Extending this commitment to validity praxis, Jeff Perry argues for critical validity inquiry (CVI), an approach to constructing validity (or invalidity) arguments that engages depth hermeneutics—the practice of using multiple lenses to vision and revision evidence derived from assessment instruments in order to interpret their impact. Borrowing from depth hermeneutics as deployed in critical discourse analysis, CVI works to uncover ways that assessments can pervert the teaching and learning context, coercing students and teachers into adversarial relationships with each other and with administrators and policymakers as the aims of assessment come into conflict with the aims of teaching and learning. Using critical validity inquiry, then, writing assessment scholars can interrupt bias through the application of particular critical theories to an assessment context including, but not limited to, Marxism, feminism, queer theory, decolonial theory, disability or crip theory, critical race theory, and a host of intersectional approaches like queer crip or decolonial feminist theory. CVI remixes the discourses of educational measurement with our critical understandings of how power operates. According to Perry:

By focusing on sites of educational exploitation like race, class, and gender, CVI allows researchers and assessors the capability of recognizing abuses of power that might be missed in a more generally focused inquiry. This misuse of educational assessments that fail to represent an equitable process for all and, instead, serve the purpose of reproducing the social relations necessary for the perpetuation of a vast disparity in the distribution of wealth, health services, and educational opportunity will be challenged. (199)

As he demonstrates through a critical discourse analysis of the ACT placement test, CVI enables us to trace out ways particular groups have been systematically disadvantaged by institutions of power, including educational institutions. In addition, CVI positions us to consider how we use assessment technologies to restrict or allow particular groups access to capital resources and opportunities, representation of interests and engagement, as well as ownership of assessment technologies and authorization of institutional identities. Since CVI directs us to consider a multiplicity of ways that validity evidence might be interpreted, we can put this tool to use in order to build a participatory culture of assessment in which all stakeholders perpetually interrogate the conditions and implications of assessment.

Perry’s work shows that CVI can be useful in understanding assessment’s impact on an institutional level; however, I argue that this tool, like Broad’s dynamic criteria mapping, is flexible and scalable, making it useful at the classroom level as well. By picking up CVI as a practice that engages students and teachers, we move beyond blanket understandings of equity and value the specificities of uneven classroom writing experiences and their varied expressions through multiple modes in a participatory culture. In what follows, our hyperlocal work with CVI threads throughout the classroom case study as the students and I wrestled with issues of access to resources, language, and consensus around issues of power and embodiment in writing, acknowledging that dissensus can be a welcome practice in a pluralistic classroom culture. As such, I will demonstrate how CVI can be an essential part of negotiating power in classroom writing assessment, disrupting an instrumental fetish with digital badges as mere objects of writing assessment.

Digital Badging with First-Year Writers

I teach writing at a mid-size, rural university in the South where nearly one-third of undergraduate students identify as students of color and nearly two-thirds meet need-based financial aid requirements (IPAR, Fact Book and Common Data Set). As nearly one-third of our undergraduates also identify as first-generation college students, many struggle with the practices and conventions of academic writing, and our foundations writing courses are designed to foster a rhetorical approach to composition, requiring significant practice negotiating multiple audiences and purposes to build evidence-based arguments.

Bothered by the disconnects I had observed between my own understanding of learning outcomes statements that guide the teaching of writing and my students’ sense of those outcomes—in other words, my own biases toward creativity, critical thinking, and writing-as-meaning-making that often clash with students’ biases toward convention, completeness, and correctness—I wanted to design classroom assessment practices that could put a diversity of perspectives into play as the students and teacher attempted to foster “hard agreements” (Inoue, “Community-Based” 216) about what constitutes “good writing” at the collegiate level, practicing assessment and critical validity inquiry as activities through which “agency, citizenship, and literacy intersect in the writing classroom” (Kalikoff 121).

One particularly interesting example of the disconnect between what students read in the outcomes and what I read occurred regularly at the intersection of meaning-making around East Carolina University’s first-year writing outcome, “Discover and address significant questions via writing.” While I comprehend this statement as an articulation of the importance of curiosity, focus, audience awareness, and significance in framing purposes for a particular communication, students largely read this outcome as a directive to ask professors questions about the parameters of an assignment, such as word counts, due dates, citations, and other logistical and stylistic concerns. Their interpretations of this outcome were linked to their particular lived experiences with writing, most of which had not included opportunities to wrestle with and define the parameters of rhetorical context, leveraging the kind of rhetorical decision-making that is essential to the freedom and responsibility of choice.

As such, I came to see the necessity of the claim that assessment must be a social practice negotiated in the classroom. Community-based assessment through open badging was a solution that refused deficit models of assessment, which seek to measure the distance between an assessor’s interest and a learner’s performance (Kress), and would, instead, create the conditions for the negotiation of diverse interpretations of value. In doing so, I sought to amplify student voice and teach discipline-based heuristics for making rhetorical evaluations, embracing Huot’s claim that “learning to assess is learning to write” (70). My affiliation with the National Writing Project had long given me access to a wide variety of digital writing practitioners and educational technologists as well as open learning organizations like MacArthur and Mozilla that were experimenting with digital badging as a means of creating more equitable outcomes for learners. Because my classroom was already a space where digital and multimodal writing were happening, digital badging seemed like a good solution for supporting the “bubbling up” and “bubbling out” of participatory assessment practice.

During the early weeks of my first-year composition class in Fall 2014, students wrote in various media and modes, creating popular culture “Who ‘X’ Thinks I Am/Who I Really Am” memes, metro maps of important “stops” on their academic, civic, professional, and self-sponsored writing journeys, and letters to the class about the identities they were bringing with them to the classroom. They shared their writings through a host of open digital tools such as Google Docs and Google+, using these prompts and spaces to develop the evaluation-free zone that Peter Elbow encourages (197) as a way of building familiarity with a host of writing technologies, experimenting with modes and genres, and creating community inside the classroom. I then introduced assessment through an open badging project that engaged students in exploring a rhetorical approach to composition, embedding assessment into the curriculum through critical validity inquiry and participatory design. The open badging project occupied the majority of our instructional time for approximately six weeks and then faded to the background as students used the assessment artifacts they had created to make judgments about their peers’ writing produced in response to other projects.

For the digital badging project, small groups of three or four students were tasked with selecting an outcome statement or habit of mind from our university’s revised student learning outcomes developed through our writing-focused Quality Enhancement Plan (QEP), from the Framework for Success in Postsecondary Writing (FSPW), or from the Writing Program Administrators Outcomes Statements for First-Year Writing (WPAOSFW). After selecting their focus, students conducted inquiry into the meaning and significance of the outcome or habit, produced a host of multimodal compositions that argued for why that statement should matter to other first-year students, and analyzed examples in practice from previous student compositions. Next, students designed digital badges using the Credly open badge platform and negotiated the parameters of classroom assessment, as I’ll describe next, using these badges as artifacts that make community-held values about good writing visible and apparent.

The badges consisted of visual symbols that students designed to represent what they found compelling and important in the outcome statement or habit of mind, a title that encapsulated that construct, a description of the badge that summarized the important points of their arguments, and a detailed explanation of the kinds of evidence that their peers should submit if they wished to earn that particular badge.

[Figure: Rhetorical Knowledge badge]

For example, one group chose to focus their assessment inquiry on the FSPW’s “rhetorical knowledge” and produced a number of communications—a collaborative, source-based digital essay, a website, an embodied tableau vivant, a digital badge, and an Ignite talk—all directed toward their peers and designed to open classroom critical validity discussions about “rhetorical knowledge” as a construct of writing that some student groups might struggle with based on previous experiences, culturally constructed value systems, and the types of writing instruction available to them. Through these discussions, we came to see how particular students or student groups might be impacted by assessments that privileged concepts such as rhetorical knowledge, especially groups from lower-income schools whose writing pedagogies encouraged “skill-and-drill” methods in an attempt to improve standardized test scores as opposed to offering students rich opportunities to engage in authentic writing contexts with multiple audiences and purposes. From these discussions, the group decided to make their badge in the likeness of a compass, using the metaphor to indicate to their peers that rhetorical knowledge, while sounding like scary, discipline-based jargon, is a useful tool for orienting oneself in a writing journey. In their badge description they write,

Rhetorical knowledge is . . . the compass; it is the guide to writing a superior paper . . . It is the ability to have a mind set [sic] for your writing, knowing where you are going while also having a three hundred and sixty degree understanding of how and why you are going there. Rhetorical knowledge is knowing your audience, the context, your purpose, and how you will choose arrangement, medium and strategy.

To claim the rhetorical knowledge badge from this group, writers in the class could submit evidence in the form of a hyperlink, a video, a text document, or an audio file, allowing the individual learner to choose the medium, the mode, and the kind of response that would demonstrate their understanding and performance of rhetorical knowledge. The rhetorical knowledge group would then review the evidence provided and make a decision to award the badge or to ask for additional evidence if necessary, basing their assessment on a collective interpretation of the artifacts that learners chose to submit. The class had agreed unanimously that evidence could be taken from multiple fields of production, including compositions produced in response to invitations in the first-year composition course as well as those produced for other courses and in out-of-school contexts such as professional and civic domains, as the assessment practices were inextricably linked, but not confined, to the present learning context.

Throughout this process of negotiating professional and academic discourse, students began to demystify the language in the framing documents, remediating what they referred to as “jargon” into visual, embodied, and alphalinguistic representations of the outcomes and habits of mind, working to address the inherent biases of disciplinary language. Through their research, they engaged the framing documents in the field of writing studies, bringing their own experiences and discourse practices to this “curriculum,” which they then negotiated and remixed through multimodal production-centered responses. The badges also came to serve as signposts that signaled particular pathways through this first-year composition course as the badges “made visible” the hidden curriculum of writing studies that operates for writing teachers but too often remains in the absent-present of the writing classroom.

What emerged in this practice, then, was a dialogic model of assessment that worked to lessen but not erase the distance between the assessors’ and the learners’ interests. Students actively challenged each other’s interpretations on the basis of fairness, conducting informal critical validity inquiry through the remainder of the semester as they considered the impact of their assessment instruments on particular groups of students, using critical discourse analysis to better understand how an instrument was wielding certain kinds of power and how that power was impacting their peers and the construct of writing in our classroom. For example, a lively discussion that stretched over two weeks of class ensued when a student of color applied to another student group for the Editing MVP Badge that they had designed to recognize writers’ performances of proofreading and editing. The student group denied the application inside Credly with the justification that the textual evidence contained too many grammatical errors. Interestingly, the description of this badge constructed editing as a solitary process, stating that a writer should “[p]roofread and edit your own writing—avoiding errors,” while the image of the badge featured two figures sitting across a desk, engaging in conversation over documents. This “failed response” and the disconnect between the image and the text sparked a vigorous conversation about the construct of proofreading and editing being forwarded by the badge, as students asked the badge designers questions about collaboration and shared agency in textual production: “If I go to the writing center for help, can I still argue that I’ve edited my own writing?” or “What if I submit a document that I wrote with my writing group, and another teammate did most of the line editing while I worked on organization, but we tracked changes in the document and I accepted them? Does that count as editing my own writing?” These questions point to the ways the construct of writing must be reframed in a participatory culture where “collective intelligence” (Jenkins xiv) means that students don’t have to be “autonomous problem-solvers” (259) but can instead develop deep engagement with parts of a curriculum, skimming others, when they are able to “plug-and-play” (Gee, Anti-education 153) in groups that value collective knowledge construction. This example also demonstrates how CVI, with its focus on discourse analysis and the teasing out of a student’s failed response, can help us consider the impacts of assessment instruments on the teaching and learning context; at the same time, our assessment process and the badge itself allowed us to critically interrogate our assumptions about editing and autonomy in writing and revising processes.

Soon after, questions about linguistic diversity came up as an African American student asked, “What if I want to submit my critical identity narrative, which has characters ‘speaking real’ to each other? I spent a lot of time editing and proofreading to make it sound right, but I didn’t want to ‘avoid error’?” In this question, we can see the writer wrestling with the cultural biases that exist in our local and national outcomes statements, which frame editing as paying close attention to dominant, white norms of language, punctuation, and grammar. To ground the discussion, we read the NCTE position statement “Students’ Right to Their Own Language” and talked about how standardization privileges some people’s ways of making meaning with language over others because those with power set the standards. From that document, we explored markers of race, class, gender, and sexuality that are coded into our language and literacy practices, and worked to understand how the Editing MVP Badge criteria were privileging some of these markers, and thus some bodies, over others. Here, we were working explicitly to interrupt the white, middle-class biases that exist in our frames about what constitutes “good writing,” but as this example will demonstrate, interruption is momentary and must be engaged systematically throughout an assessment system.

Interestingly, in this case, the badge designers decided to award the badge to the African American student who had proofread and edited but chose to leverage, as opposed to avoid, error in her critical identity narrative. The student group, composed of white students, did not, however, change the language of the badge description. Thus, the badge designers reinterpreted the evidence through a new lens afforded by our collaborative critical validity inquiry and reversed their decision regarding one particular student, but they were not willing to reframe the assessment instrument to avoid the systematic bias. The badge designers engaged in what might be viewed as an individual act of charity instead of a foundational effort to produce a fairer assessment instrument, ignoring bias at the structural level. In turn, the student chose not to share the badge in her public badging profile, but she did choose to include it in her course ePortfolio at the conclusion of the semester.

This student’s refusal points to the ways that assessment artifacts such as test scores, grades, certificates, diplomas, degrees, and digital badges construct learner identities. Thus, if we are interested in learner agency, identity issues, and representation issues, we should develop a techne of assessment that allows learners to embrace or refuse those identities across different contexts. While this particular Editing MVP Badge might mean something was gained in the context of a first-year writing class, it might also mean something was lost in another context since the wearing of that digital blazon could indicate that the learner has “sold out” to the raced, classed, gendered, and sexual values of the academy that are actively resisted in other contexts.

So while this case study of badging in one writing classroom does point to ways we can operationalize community-based assessment without reducing our interpretations to numbers on a rubric, it also underscores the need for CVI and attention to validity as the lived consequences of assessment. As an intervention in community-based assessment, CVI offers promise for helping writing communities better understand the ways normative assumptions and logics can shut out difference and standardize meaning-making practices. Gallagher’s invocation to “assess locally and validate globally” (“Assess Locally” 10) can work to reproduce cultural bias systemically through the looping circuit from the macro to the micro levels and back again unless interrupted with critical validity inquiry. The case study also demonstrates how digital badging’s multimediated discourses of visual and alphalinguistic modes can uncover disconnects in our constructs of writing, a gap between what we say we value as teachers and program directors and what we actually measure as classroom and program assessors (see also Broad et al.). Most importantly, as an instructor committed to the values and practices of both equity and teacher inquiry, I have found that this classroom assessment case study and my deep reflections on these classroom experiences have both informed and transformed my first-year composition learning environment. As Huot and many since in our field have argued, assessment should be inextricably linked to the teaching and learning environment (69); thus, in what follows, I’ll demonstrate the ways my early work in badging and critical validity inquiry has moved badging from a discrete class project to the primary assessment economy in my first-year writing courses.

Economies of Badging: Digital Badging 2.0

During the first instantiations of digital badging described in the case study, students were both badge issuers and badge receivers, designing and awarding as well as earning and collecting badges. In common badging parlance, my own role was relegated to that of a badge consumer, one who reviews and decides what the badges mean for classroom assessment. Admittedly, I was hesitant at first to invest these badges with any official currency, giving students the option to display their digital badges in their end-of-course reflections and allowing individual students to accept or refuse these blazons at will. This was partially due to the limited construct of good writing some students clung to, the Editing MVP group in particular, and partially due to my own inability to understand how to implement badging economies that asserted my own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers, sharing the authority and decision-making about assessment. As I looked to informal learning organizations like PEER 2 PEER University (P2PU), which was awarded one of those early HASTAC grants, and talked with other National Writing Project teachers experimenting with badging in their own classrooms, I was able to imagine a classroom badging infrastructure that could integrate expert assessment with peer and self-assessment. Thus, in Fall 2015, I implemented two different but complementary badging economies—the student-made and -awarded Habits of Mind Badges and the instructor-made and -awarded Project Badges. This move better supported multiple student interests, enabled a diversity of self-directed yet socially connected learning pathways without undermining a culture of community-based assessment, and encouraged first-year writers to increase their capacity—to “level up” as thinkers, writers, and rhetoricians.

Much like the initial project described in the previous case study, Habits of Mind Badges allow students to take ownership of, remix, and become experts on the FSPW, recognizing and building essential dispositions necessary for success at the collegiate level by designing and awarding those badges to each other as well as earning them themselves. The Project Badges, however, are under my purview, as I designed and parsed them as writing studies curriculum pathways.3

[Figure: Project Badges]

Similar to Linda B. Nilson’s specifications grading system, in which students’ course grades are calculated by demonstrating a number of outcomes in a class—the more demonstrated, the higher the grade—student evaluation in my course is now directly tied to badges. Students are required to earn all eight student-designed Habits of Mind Badges to pass the course and can choose to complete any combination of the Project Badges according to the grade they are seeking. Students who desire an A must earn the eight Habits of Mind Badges and four Project Badges, while those seeking a C would earn all eight Habits of Mind Badges and two Project Badges.
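
A brief sketch can make the arithmetic of this scheme explicit; since I specify only the A and C thresholds above, the B threshold in the sketch is an interpolated assumption, and cases the scheme does not describe are marked as unspecified.

```python
# A sketch of the badge-based grading scheme described above.
# Only the A (four Project Badges) and C (two Project Badges)
# thresholds are stated in the text; the B threshold is an
# assumption, and uncovered cases are left unspecified here.
HABITS_REQUIRED = 8

def course_grade(habits_earned: int, project_badges_earned: int) -> str:
    if habits_earned < HABITS_REQUIRED:
        return "not passing"  # all eight Habits of Mind Badges are required
    if project_badges_earned >= 4:
        return "A"
    if project_badges_earned == 3:
        return "B"  # assumption: interpolated between the stated A and C thresholds
    if project_badges_earned == 2:
        return "C"
    return "unspecified"  # the scheme as described does not cover this case

print(course_grade(8, 4))  # A
print(course_grade(8, 2))  # C
```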

To earn a Project Badge, students must complete the badging pathway and submit the collection of required composing activities in an organized Google Folder as a link to Credly. For example, a student might submit Level I activities such as exploratory memes and freewritings or drawings about a topic, Level II activities like analyses of mentor texts, article annotations, or mini-essays, and Level III activities like planning documents, peer reviews, and culminating projects such as composed digital essays, websites, and classroom teach-ins or hack jams. As I review the badge applications, I am looking for evidence of engagement as opposed to competency or mastery, taking a descriptive rather than evaluative stance (see Hicks), as I am most interested in the badge’s capacity to signal experience as opposed to achievement. Thus, an earned badge is a visual symbol of a student’s directed engagement across our “nomothetic span,” a concept that White and his collaborators describe as a “hypothetical taxonomy of the writing construct” (74), one that we’ve mapped out at the intersections of the FSPW, our university’s student learning outcomes, and my classroom’s embodied experiences with and understandings of writing. Unlike Nilson’s specifications grading, where a demonstration of proficiency (according to the teacher) is read for and by the teacher, when I review a particular badge application, I am reviewing a rich context of writing across a protracted time span, usually four or five weeks, that takes place in multiple modes, genres, tasks, and forums, sampled from a badging pathway designed to allow students robust and repeated opportunities to develop and demonstrate engagement across the span. Unlike mechanistic approaches to assessment that focus on a particular set of facets or traits, this approach looks broadly at the construct of writing environments, multimodalities, rhetorical knowledge, and cognitive and interpersonal domains.
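
As a rough illustration of how one such pathway might be represented for review, consider the following sketch; the badge name, the particular activities, and the helper function are hypothetical, and the function models the descriptive (rather than evaluative) stance by reporting engagement instead of scoring it.

```python
# An illustrative representation of one Project Badge pathway.
# The badge name and activities are hypothetical examples that
# mirror the Level I-III structure described above.
pathway = {
    "badge": "Digital Essay Project Badge",
    "levels": {
        "I": ["exploratory meme", "freewriting or drawing"],
        "II": ["mentor-text analysis", "article annotations", "mini-essay"],
        "III": ["planning document", "peer review", "composed digital essay"],
    },
}

def describe_engagement(submitted):
    """Report, descriptively rather than evaluatively, where the
    application shows evidence of engagement across the pathway."""
    notes = []
    for level, activities in pathway["levels"].items():
        present = [a for a in activities if a in submitted.get(level, [])]
        notes.append(f"Level {level}: engaged with {present or 'nothing yet'}")
    return notes

application = {"I": ["exploratory meme"], "III": ["composed digital essay"]}
for note in describe_engagement(application):
    print(note)
```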

Thus, the badge application materials are a distributed and distal construct sample and describe engagement across the span, noting where on the nomothetic span there is and is not evidence of engagement, creating a scatter plot of points of engagement from the sample. During this process, I share this assessment data with students, asking them to help me see evidence I might have missed and to better understand how my teaching might better help them engage other parts of the span that they are ignoring, struggling with, or resisting. This dialogic process of collaborative assessment moves us from a nomothetic and static representation of the writing construct into a dynamic validation loop, one that allows for the possibility of impacting the writing construct and allowing a diversity of student experiences to inform our theories of writing. In this way, the power of digital badging as an assessment marker is not in the technology itself but instead in the use of this particular technology to increase student and teacher agency in the practice of writing assessment.

Assessing Digital Badging

My commitment to writing assessment as social justice means I must continually threaten the equilibrium of the badging system by gathering data and using CVI to better understand the limitations and opportunities. Preliminary data from a survey administered at the conclusion of the Fall 2015 semester with responses from sixty-six students from three sections of English 1100 indicates that digital badging did help nearly half of all students surveyed to feel more in control of their course grade and to better understand the “Habits of Mind for Success in Postsecondary Writing.” In addition, 40 percent of students indicated that they had displayed or planned to display an earned badge outside the classroom context, suggesting that digital badges are being used as portable boundary objects that can spur conversations outside the classroom context about the construct of college-level writing. Finally, more than half of the students reported that digital badging helped them to see diverse and unique ways they could approach writing assignments, working to support writing-as-remix rather than writing-as-reproduction.

Yet despite evidence that digital badging is, at least in part, fostering agency, participation, portability, and writing diversity, overall, students didn’t enjoy digital badging as an assessment innovation. In fact, only 21.5 percent indicated that they would like to take another writing class that used digital badging as an assessment method. Survey comments indicate that this displeasure was rooted in three areas: concerns about workload, as multiple writing tasks, not just “final papers,” were required for each badging pathway; dissatisfaction with the lack of traditional grades that they felt could indicate competency or mastery; and consternation over the increased responsibility placed on students in a nonsequenced course. One student wrote, “I did not like this class very much, mainly because I’m not disciplined enough to do my own work and make my own working schedule.”

When the survey data were disaggregated to see how particular groups experienced digital badging, however, a different picture of satisfaction emerged. Of the 15 students who identified as students of color in the digital badging survey, 53 percent agreed or strongly agreed that they would enjoy taking another writing class that employed digital badging. With 33 percent responding as neutral, only 14 percent indicated that they would disagree or strongly disagree, compared with 53 percent of the total sample. In addition, of the 12 total students who identified as lower middle class (none identified as poverty level), 50 percent either agreed or strongly agreed that they would enjoy another writing class with digital badging, in contrast to the students who identified as upper middle class and wealthy. One Latina student wrote, “I would not change a thing, it [digital badging] is organized very well and any student can complete these tasks given if they set their mind to it,” showing how students of color felt empowered by digital badging because it supported their classroom success. Similarly, a student who identified as both African American and low-income reported,

I think that the badging system allows you to work for the grade you want. Each badge pathways takes hard work and dedication to complete the various task that allows you to obtain the badge. I would suggest more badges to apply for so you have a lot of variety to chose from,

indicating that, to minority students, digital badging was an accessible technology for scaffolding success. In addition, his response shows that choice and curriculum diversification matter, both of which were enabled by project badging.
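
For teachers who want to attempt this kind of disaggregation with their own course data, a minimal sketch follows; the file name and column names are hypothetical, and the percentages reported above come from my actual survey responses, not from this code.

```python
# A minimal sketch of disaggregating a Likert item by self-identified
# demographic groups, as described above. The CSV file and column
# names are hypothetical; this does not reproduce the survey data.
import pandas as pd

responses = pd.read_csv("badging_survey.csv")  # hypothetical survey export

# Item: "I would enjoy taking another writing class that used digital badging."
agree = responses["enjoy_badging_again"].isin(["Agree", "Strongly agree"])

print(agree.mean())                                         # overall agreement rate
print(agree.groupby(responses["race_ethnicity"]).mean())    # by race/ethnicity
print(agree.groupby(responses["class_background"]).mean())  # by class background
```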

So while we cannot equate enjoyment with equity because of the many other factors to consider, such as how the badging assessment paradigm impacted course grades and longitudinal success in the writing curriculum, factors that are beyond the scope of this essay, preliminary survey data does indicate that students of color and lower-middle-class students are far more interested in engaging with digital badging in the writing classroom than their white, affluent peers, findings that are consistent with research into alternative assessment models (Inoue and Poe, Race; Inoue, Antiracist).

The evidence I have presented is, of course, preliminary and tentative, but it demonstrates how we can begin the process of validating the course decisions (like grades) in a digital badging assessment paradigm. As White and colleagues note, the inquiry process is ongoing and dynamic, as we are not validating badges as instruments but validating their use in the context of a writing classroom or program at a particular time and place with particular groups of people. In keeping with critical validity inquiry, it is a process of amassing multiple points of evidence that speak to the ongoing ethical dilemma of how to assess the broad and dynamic construct of writing. And it is necessary if we are to avoid the trap of the sexy digital assessment object fetish and uncover the material impacts of badging by interrogating who is served by particular badge curricula, who earns which badges, who is motivated or alienated by these badges, and where and how badging might matter for particular groups as they move through the university and beyond.

Conclusion

In open, participatory cultures where learning happens in collective, distributed contexts across curricular and paracurricular spaces in both face-to-face and digital environments, as Jenkins outlines, it follows that our assessments should also be participatory and open, designed to support and not distort participatory learning practices. As Elyse Eidman-Aadahl, Executive Director of the National Writing Project, so aptly described in a 2012 Teachers Teaching Teachers Open Webcast, the early intensity and promotion of digital badging in education contexts meant that these technologies were “overpromised” as instruments that could solve our collective and persistent problems with performance-based assessment by both motivating learners and providing an efficient method of “seeing” learner engagement and performance while monitoring and signaling movement through curriculum. Open badging is but one technology whose structures have been coded to accomplish these goals, but as I’ve demonstrated, digital badges, when their use is attentive to validity, can help us move assessment beyond socially conservative practices of gatekeeping and support more democratic aims of learning and assessment.

In classroom spaces, this means that we can take up Joyce Locke Carter’s invocation, in her keynote address at the 2016 Conference on College Composition and Communication, to coconstruct (assessment) tools, technologies, and texts that do something meaningful in and beyond the walls of the classroom. So while CVI gives us a method for concretizing the ways power moves in and around assessment instruments, one of its limitations is that it doesn’t immediately call us to loop that knowing back into doing, or, in this case, into doing assessment differently (Caswell and West-Puckett). Thus CVI must be paired with new forms of assessment “making” that build participatory literacies and sustainable, accessible artifacts meant to usher in more equitable futures for individuals.

In writing programs more broadly, the uptake of digital badging can work to flatten institutional hierarchies, honoring and encouraging the on-the-ground assessment practices developed, negotiated, and circulated by the very people most affected by them: students and teachers. As these artifacts and the student evidence attached to them bubble up to programmatic and institutional levels, however, we’ll need a continued ethical commitment to the practice of CVI, asking which writers and what kinds of writing and writing instruction are privileged and supported by particular instruments and data interpretations. A very real limitation to the practice of digital badging is the time and space that must be carved out not just to deploy assessments but to build and make them together and to construct new validity arguments. These processes, as demonstrated in the classroom, are neither easy nor efficient, but as Gallagher reminds us, efficiency should never be the aim of assessment.

Thus, from my work in badging, I argue that participatory assessment can be used as a technology of social justice if we design with the following principles in mind:

  • The assessment scene is constructed as a rhetorical situation in which learners make decisions about audience, purpose, and context of their assessments.
  • The assessment technologies and decision-making protocols are designed and implemented through discourses and practices that are open and accessible to learners.
  • Assessment paradigms are continually validated for disaggregated populations through the use and expansion of validity inquiry at the classroom, program, and institutional levels.
  • The assessment technologies afford multimodal production as learners make choices about modes and media of assessment responses and metrics.
  • Assessment enhances the teaching and learning environment and makes learning pathways visible.
  • Assessment technologies allow learners to accept or refuse particular identities that are constructed through the assessment.
  • Assessment artifacts are designed to be portable, helping learners to leverage available resources to turn in- and out-of-school learning into institutional, social, and economic capital (see the sketch following this list).
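To ground the portability principle, the sketch below shows how a badge can travel as a small, self-describing data object, loosely modeled on Mozilla’s Open Badges assertion format. Every name, URL, and field value here is an invented placeholder rather than this article’s classroom implementation, and a production system would follow the full Open Badges specification, including its verification machinery.

    import json

    # A hypothetical badge assertion: the badge carries its own account of
    # what was accomplished (criteria), proof of the work (evidence), and
    # who vouches for it (issuer), so it stays legible beyond one course.
    badge_assertion = {
        "recipient": "student@example.edu",  # placeholder identity
        "badge": {
            "name": "Rhetorical Knowledge Project Badge",  # invented example
            "description": "Composed and circulated a multimodal project "
                           "for a self-selected public audience.",
            "criteria": "https://example.edu/fyw/badges/rhetorical-knowledge",
            "issuer": "https://example.edu/fyw",
        },
        "evidence": "https://example.edu/portfolios/student/project-one",
        "issuedOn": "2016-11-03",
    }

    # Because the assertion is plain JSON, it can be exported, embedded in a
    # portfolio, or pushed to a badge backpack; that is what makes it portable.
    print(json.dumps(badge_assertion, indent=2))

Because the artifact is structured data with public URLs for criteria and evidence, learners can carry it into contexts the issuing institution never anticipated.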

As writing studies scholars who actively pursue the practices, experiences, and implications of digital writing, we are well poised to take up the rhetorical velocity of our digital writing assessment instruments, working to better understand how technologies such as digital badges can support a bubbling up and bubbling out of the competencies and practices learners engage. With digital badging, students can begin to demystify what is meant by “good writing” and the practices and habits of mind that operate invisibly alongside those understandings, inviting more diverse interpretations and negotiations. Open badging is but one technology whose structures have been coded to accomplish the goals of providing greater access to learning resources across formal and informal learning experiences, but as I’ve demonstrated, it has potential for promoting social justice only when paired with critical ideologies and decision-making heuristics that guide its use toward more democratic futures for all writers.

Acknowledgments

Thanks to the students who permitted me to use their work and who have shown me what’s possible with digital badging. Also thanks to the two anonymous CE reviewers, my colleague William Banks, and the special issue editors Mya Poe and Asao B. Inoue for their generous feedback on this article. Portions of this research were developed in Nicole Caswell’s Writing Assessment graduate seminar at East Carolina University, and I’m grateful for her support, encouragement, and suggestions throughout the drafting of this manuscript.

Endnotes

  1. Gipps and Stobart construct fairness in educational assessment as the consideration of what comes before and after an assessment, such as access to resources and decisions made for learners based on assessment data. Poe’s argument about fairness is similar; however, she points to the idea that access (the availability of digital tools, technologies, and software) is not enough. Instead, assessments should be designed for accessibility, meaning that students should understand their choices in an assessment setting and also understand the purpose and goals of the assessment as well as the ways that assessment decisions are made and how these decisions will impact their access to resources and opportunities.

  2. White and his coauthors consider the ePortfolio as “part of the gold standard of writing assessment” (104); however, my experiences are closer to those that Michael Neal describes because the ePortfolio is executed as a print-based document container, failing to take advantage of the affordances of hypertext (78), hypermedia (91), and hyperattention (101) enabled and invoked by digital composing environments.

  3. In Badging 2.0, the Habits of Mind Project Badge becomes the badging pathway that guides students in creating their own student-circulated badges for creativity, persistence, curiosity, responsibility, metacognition, flexibility, openness, and engagement.

Works Cited

Arola, Kristin L., and Anne F. Wysocki. Composing(media): Composing(embodiment): Bodies, Technologies, Writing, the Teaching of Writing. Utah State UP, 2014.

Ball, Cheryl. “Designerly ≠ Readerly: Re-assessing Multimodal and New Media Rubrics for Use in Writing Studies.” Convergence, vol. 12, no. 4, Nov. 2006, pp. 393–412, doi:10.1177/1354856506068366

Berlak, Harold. “Toward the Development of a New Science of Educational Testing and Assessment.” Toward a New Science of Educational Testing and Assessment, edited by Harold Berlak et al., State U of New York P, 1992, pp. 181–206.

Brannon, Lil, and C. H. Knoblauch. “On Students’ Rights to Their Own Texts: A Model of Teacher Response.” College Composition and Communication, vol. 33, no. 2, May 1982, pp. 157–66.

Broad, Bob, et al. Organic Writing Assessment: Dynamic Criteria Mapping in Action. Utah State UP, 2009, digitalcommons.usu.edu/usupress_pubs/165

Callahan, Susan. “All Done with the Best Intentions: One Kentucky High School After Six Years of State Portfolio Tests.” Assessing Writing, vol. 6, no. 1, 2000, pp. 5–40, doi:10.1016/S1075-2935(99)00005-7

Carter, Joyce Locke. “Making, Disrupting, Innovating: Modified Notes from CCCC Keynote Address.” Sailing the Four Cs: My Year of Living Dangerously as CCCC Chair, 22 Apr. 2016, joycelockecarter.com/CCCC/

Caswell, Nicole, and Stephanie West-Puckett. “Assessment Killjoys: Queering the Return for a Writing Studies World-Making Methodology.” Re/Orienting Writing: Queer Methods, Queer Projects, edited by William Banks, forthcoming.

Condon, William. “The Future of Portfolio-Based Writing Assessment: A Cautionary Tale.” Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, edited by Norbert Elliot and Les Perelman, Hampton Press, 2012, part 1, chapter 13.

“Connected Learning Principles.” Connected Learning. MacArthur Foundation, connectedlearning.tv/connected-learning-principles

Crow, Angela. “Managing Datacloud Decisions and ‘Big Data’: Understanding Privacy Choices in Terms of Surveillant Assemblages.” Digital Writing Assessment & Evaluation, edited by Heidi A. McKee and Dànielle Nicole DeVoss, Computers and Composition Digital P/Utah State UP, 2013, ccdigitalpress.org/dwae/02_crow.html

DeWinter, Jennifer, and Ryan M. Moeller. Computer Games and Technical Communication: Critical Methods & Applications at the Intersection. Ashgate, 2014.

“Digital Media and Learning Competition 4.” HASTAC, 2014, www.hastac.org/competition/digital-media-learning-competition-4

Eidman-Aadahl, Elyse, et al. “Badges: Peril/Possibility.” Teachers Teaching Teachers, Ed Tech Talk #309, 11 Aug. 2012, edtechtalk.com/node/5121

Elbow, Peter. “Ranking, Evaluation, and Liking: Sorting Out Three Forms of Judgment.” College English, vol. 55, no. 2, 1993, pp. 187–201.

Gallagher, Chris. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration, vol. 34, no. 1, 2010, pp. 10–32.

———. “The Trouble with Outcomes: Pragmatic Inquiry and Educational Aims.” College English, vol. 75, no. 1, 2012, pp. 42–60.

Gee, James P. The Anti-Education Era: Creating Smarter Students Through Digital Learning. Palgrave Macmillan, 2013.

———. What Video Games Have to Teach Us About Learning and Literacy. Palgrave Macmillan, 2003.

Gere, Anne Ruggles, et al. “Local Assessment: Using Genre Analysis to Validate Directed Self-Placement.” College Composition and Communication, vol. 64, no. 4, 2013, pp. 605–33.

Gipps, Caroline, and Gordon Stobart. “Fairness in Assessment.” Educational Assessment in the 21st Century: Connecting Theory and Practice, edited by Claire Wyatt-Smith and J. Joy Cumming, Springer Netherlands, 2009, pp. 105–18.

Halavais, Alexander M. C. “A Genealogy of Badges: Inherited Meaning and Monstrous Moral Hybrids.” Information, Communication, and Society, vol. 15, no. 3, 2012, pp. 354–73.

Hillocks, George. “How State Assessments Lead to Vacuous Thinking and Writing.” Journal of Writing Assessment, vol. 1, no.1, 2003, pp. 5–21.

Huot, Brian A. (Re)articulating Writing Assessment for Teaching and Learning. Utah State UP, 2002.

Huot, Brian, and Michael Neal. “Writing Assessment: A Techno-History.” Handbook of Writing Research, edited by Charles A. MacArthur et al., Guilford Press, 2006, pp. 417–32.

Inoue, Asao B. Antiracist Writing Assessment Ecologies: Teaching and Assessing Writing for a Socially Just Future. WAC Clearing House/Parlor Press, 2015.

———. “Community-Based Assessment Pedagogy.” Assessing Writing, vol. 9, no. 3, 2005, pp. 208–38.

Inoue, Asao B., and Mya Poe. Race and Writing Assessment. Peter Lang Publishing, 2012.

———. “Racial Formations in Two Writing Assessments: Revisiting White and Thomas’ Findings on the English Placement Test After 30 Years.” Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, edited by Norbert Elliot and Les Perelman, Hampton Press, 2012.

Institutional Planning, Assessment, and Research (IPAR). ECU Fact Book. East Carolina University, Greenville, NC. 2015. www.ecu.edu/cs-acad/ipar/research/factbook.cfm

———. Common Data Set. East Carolina University, 2015, www.ecu.edu/cs-acad/ipar/research/ CommonDataSets.cfm

Jenkins, Henry. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. MIT Press, 2009.

Kalikoff, Beth. “Berlin, New York, Bagdad: Assessment as Democracy.” Journal of Writing Assessment, vol. 2, no. 2, 2005, pp. 109–24.

Kane, Michael. “Validation.” Educational Measurement, 4th ed., edited by Robert L. Brennan, Praeger, 2006, pp. 17–64.

Kress, Gunther. “Assessment in the Perspective of a Social Semiotic Theory of Multimodal Teaching and Learning.” Educational Assessment in the 21st Century: Connecting Theory and Practice, edited by Claire Wyatt-Smith and J. Joy Cumming, Springer Netherlands, 2009, pp. 19–41.

Kress, Gunther, and Theo Van Leeuwen. Reading Images: The Grammar of Visual Design. Routledge, 1996.

Madaus, George. “A National Testing System: Manna from Above.” Educational Assessment, vol. 1, no. 1, 1993, pp. 9–26.

Messick, Samuel. “Meaning and Values in Test Validation: The Science and Ethics of Assessment.” Educational Researcher, vol. 18, no. 2, 1989, pp. 5–11.

Moss, Pamela. “Reconstructing Validity.” Educational Researcher, vol. 36, no. 8, 2007, pp. 470–76.

Neal, Michael R. Writing Assessment and the Revolution in Digital Texts and Technologies. Teachers College Press, 2011.

Nilson, Linda B. Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time. Stylus Publishing, 2014.

“Open Badges.” Mozilla Foundation, 2013.

Parkes, Jay. “Reliability as Argument.” Educational Measurement: Issues and Practice, vol. 26, no. 4, Winter 2007, pp. 2–10.

Penrod, Diane. Composition in Convergence: The Impact of New Media on Writing Assessment. Erlbaum, 2005.

Perelman, Les. “Mass-Market Writing Assessments as Bullshit.” Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, edited by Norbert Elliot and Les Perelman, Hampton Press, 2012, pp. 425–36.

Perry, Jeff. “Critical Validity Theory.” Practicing Research in Writing Studies, edited by Katrina Powell and Pam Takayoshi, Hampton Press, 2012, pp. 187–211.

Poe, Mya. “Making Digital Writing Assessment Fair for Diverse Writers.” Digital Writing Assessment & Evaluation, edited by Heidi A. McKee and Dànielle Nicole DeVoss, Computers and Composition Digital P/Utah State UP, 2013, ccdigitalpress.org/dwae/01_poe.html

Reilly, Colleen, and Anthony Atkins. “Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects.” Digital Writing Assessment & Evaluation, edited by Heidi A. McKee and Dànielle Nicole DeVoss, Computers and Composition Digital P/Utah State UP, 2013, ccdigitalpress.org/dwae/04_reilly.html

“Resolution on the Students’ Right to Their Own Language.” NCTE/CCCC, 1974, www.ncte.org/positions/statements/righttoownlanguage

Ridolfo, Jim, and Dànielle Nicole DeVoss. “Composing for Recomposition: Rhetorical Velocity and Delivery.” Kairos: A Journal of Rhetoric, Technology, and Pedagogy, vol. 13, no. 2, 15 Jan. 2009, kairos.technorhetoric.net/13.2/topoi/ridolfo_devoss/intro.html

Rughiniș, Răzvan. “Talkative Objects in Need of Interpretation. Re-Thinking Digital Badges in Education.” Computer Human Interaction 2013 Extended Abstracts on Human Factors in Computing Systems, 27 Apr.–2 May 2013, Association for Computing Machinery (ACM), 2013, pp. 2099–108, doi:10.1145/2468356.2468729

Sharer, Wendy, et al., editors. Reclaiming Accountability: Using the Work of Re/Accreditation and Large-Scale Assessment to Improve Writing Instruction and Writing Programs. Utah State UP, 2016.

Sierra, Wendi, and Kyle D. Stedman. “Ode to Sparklepony: Gamification in Action.” Kairos: A Journal of Rhetoric, Technology, and Pedagogy, vol. 16, no. 2, 2012, kairos.technorhetoric.net/16.2/disputatio/sierra-stedman/

Spellmeyer, Kurt. “Response: Testing as Surveillance.” Assessment of Writing: Politics, Policies, Practices, edited by Edward White et al., MLA, 1996, pp. 52–57.

Takayoshi, Pamela. “The Shape of Electronic Writing: Evaluating and Assessing Computer-assisted Writing Processes and Products.” Computers and Composition, vol. 13, no. 2, December 1996, pp. 245–57, doi:10.1016/S8755-4615(96)90013-4

White, Edward M., et al. Very Like a Whale: The Assessment of Writing Programs. Utah State UP, 2015.

Williamson, Michael M. “The Worship of Efficiency: Untangling Theoretical and Practical Considerations in Writing Assessment.” Assessing Writing, vol. 1, no. 2, 1994, pp. 147–74.

Yowell, Connie. “Connected Learning: Reimagining the Experience of Education in the Information Age.” Huffington Post, 2 Mar. 2012, www.huffingtonpost.com/connie-yowell/connected-learning-reimag_b_1316100.html

Zoetewey, Meredith, et al. “Assessing Civic Engagement: Responding to Online Spaces for Public Deliberation.” Digital Writing Assessment & Evaluation, edited by Heidi A. McKee and Dànielle Nicole DeVoss, Computers and Composition Digital P/Utah State UP, 2013, ccdigitalpress.org/dwae/10_zoetewey.html


Stephanie West-Puckett is a non-tenure-track faculty member in the English department at East Carolina University, where she is currently finishing her PhD in Rhetoric, Writing, and Professional Communication. Her dissertation analyzes the knowledge-making practices of composers in both online and offline maker spaces, and her digital writing research has appeared in journals like Education Sciences and in the books The Next Digital Scholar: A Fresh Approach to the Common Core State Standards in Research and Writing and Assessing Students’ Digital Writing: Protocols for Looking Closely. West-Puckett has been a member of NCTE since 2008. To contact her about this article or the research, visit her website at https://stephwp.me/


Added November 06, 2016 at 11:26am by Paul Allison
Title: Wording change

The text below is the previous wording for paragraph 16.

The maker of the sign has made the form of the sign to be an apt expression of the meaning to be represented. For the recipient of the sign, therefore, the shape, the form of the sign, is a means of forming a hypothesis about the maker’s interest and about the principles that they brought to their engagement with the prompt that led to the making of the sign . . . When the “recipient of a sign” is an assessor, the question is “What metric will be applied? Will it be a metric oriented to authority— a metric that indicates the distance from what ought to have been learned, whether in terms of modes used or in terms of conformity to the authority of the teacher/assessor; or will it be a metric oriented to the learner’s interest and that evaluates the principles the learner brought to the engagement of the curriculum?” (28)


Added November 06, 2016 at 11:27am by Paul Allison
Title: Wording change

The text below is the previous wording for paragraph 15.

The maker of the sign has made the form of the sign to be an apt expression of the meaning to be represented. For the recipient of the sign, therefore, the shape, the form of the sign, is a means of forming a hypothesis about the maker’s interest and about the principles that they brought to their engagement with the prompt that led to the making of the sign . . . When the “recipient of a sign” is an assessor, the question is “What metric will be applied? Will it be a metric oriented to authority— a metric that indicates the distance from what ought to have been learned, whether in terms of modes used or in terms of conformity to the authority of the teacher/assessor; or will it be a metric oriented to the learner’s interest and that evaluates the principles the learner brought to the engagement of the curriculum?” (28)


Added November 06, 2016 at 6:47pm by Paul Allison
Title: Wording change

The text below is the previous wording for paragraph 21.

By focusing on sites of educational exploitation like race, class, and gender, CVI allows researchers and assessors the capability of recognizing abuses of power that might be missed in a more generally focused inquiry. This misuse of educational assessments that fail to represent an equitable process for all and, instead, serve the purpose of reproducing the social relations necessary for the perpetuation of a vast disparity in the distribution of wealth, health services, and educational opportunity will be challenged. (199)


Added November 08, 2016 at 6:13am by Terry Elliott
Title: Consider Writing Curriculum as Analogous to DRM

Reference for para 2, sentence 2
Van der Sar, Ernesto. “DRM Is Used to Lock In, Control and Spy on Users’.” Torrent Freak, 8 Nov. 2016, https://torrentfreak.com/drm-is-used-to-lock-in-control-and-spy-on-users-161108/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Torrentfreak+%28Torrentfreak%29.


Added November 08, 2016 at 6:29am by Terry Elliott
Title: DRM and Its Discontents

 

 

 

 

 

UNITED STATES COPYRIGHT OFFICE, LIBRARY OF CONGRESS

In the matter of: Section 1201 Study (Docket No. 2015-8), Notice and Request for Public Comment

Received: Public Information Office, Oct. 27, 2016

COMMENTS BY THE FREE SOFTWARE FOUNDATION

Donald Robertson, III
Free Software Foundation
51 Franklin St.
Fifth Floor
Boston, MA 02110

October 23rd, 2016

Background

On December 29th, 2015, the United States Copyright Office (Copyright Office) released a public call for comment, Section 1201 Study: Notice and Request for Public Comment (Docket No. 2015-8), in order to assess the operation of Section 1201, Title 17, including the triennial rulemaking process established under the Digital Millennium Copyright Act (DMCA) to adopt exemptions to the prohibition against circumvention of technological measures that control access to copyrighted works. On September 21st, 2016, the Copyright Office issued a request for additional comments regarding proposed new permanent exemptions, modifications to current permanent exemptions, and changes to anti-trafficking provisions. In response to this call, the Free Software Foundation submits the following comment.

 

 

 

 

About the Free Software Foundation

 

 

 

The Free Software Foundation (FSF) is a charitable 501(c)(3) corporation, founded in 1985, with the mission to expand and defend computer user freedom. The FSF is the largest single contributor to the GNU operating system (used widely today in its GNU/Linux variant), and the FSF's GNU General Public License (GPL) is the most widely used free software license, covering major components of the GNU operating system and tens of thousands of other computer programs used on hundreds of millions of computers around the world. The FSF has inspired and significantly influenced numerous other initiatives focused on creating free licenses and free works, including Creative Commons and Wikipedia.

 

The FSF's Licensing and Compliance Lab is the preeminent resource of free licensing information for developers and publishers of free software and free documentation. The Licensing and Compliance Lab provides numerous resources and public services including: no-cost licensing consultation for developers of free works; continuing legal education workshops; certification of devices that run exclusively on free software via the Respects Your Freedom certification program; maintaining a directory of over 15,000 works of free software; and myriad educational publications on choosing and making use of free licenses.

 

 

 

Comment

 

 

 

The DMCA's anti-circumvention provisions should be repealed, and the exemptions process ended. Technological protection measures and Digital Restrictions Management (DRM) play no legitimate role in protecting copyrighted works. Instead, they are a means of controlling users and creating "lock in." Companies use this control illegitimately with an eye toward extracting maximum revenue from users in ways that have little connection to actual copyright law. In fact, these restrictions are technological impediments to the rights users have under copyright law, such as fair use. DRM enables companies to spy on their users, and use that data for profit. DRM requires the use of proprietary software, which exposes users to security vulnerabilities, as was the case in 2005 when Sony infected users' computers with a rootkit as part of their music album DRM system, or last week, when users' locked-down Digital Video Recorder machines were hijacked and used to launch a giant Distributed Denial of Service attack on the Internet.

 

 

 

Companies seek to use DRM to lock down and control users, using potential copyright violations as an excuse. DRM is frequently used to spy on users by requiring that they maintain a connection to the Internet so that the program can send information back to the DRM provider about the user's actions. DRM is used to restrict the ability of users to switch to a competing piece of software on their devices, or to prevent them from switching to a competing digital store for the purchase of software, movies and music. Even if it were about enforcing copyright law, the power given to companies by DRM and the DMCA is unacceptably over broad. We should not give companies the authority to pursue criminal charges against users seeking simply to have full control over their own computer systems and security, just because some users might use that control to violate copyright law.

 

 

The FSF is the copyright holder for much of the software that comprises the GNU/Linux operating system, one of the most ubiquitous operating systems in the world. FSF-copyrighted software is found in thousands of devices, from wireless routers to the web servers that run much of the Internet. The FSF faces numerous copyright violations every day on the software to which it holds copyright, and has a long and successful track record of resolving these violations without resorting to the use of DRM. Thousands of other free software developers likewise do not utilize DRM, and are actually harmed by the DMCA's anti-circumvention rules, which grant anti-competitive advantages to developers who choose to harm their users. Free software has spread all around the world, enriching the lives of users and the bottom lines of developers who understand the value in respecting the rights of everyone by avoiding harmful DRM.

 

 

The exemptions process was supposed to address some of these concerns, but it does not. Exemptions are of no value if they cannot be practically enjoyed by average users. Sharing tools and asking third parties for assistance are always required for ordinary users to be able to exercise their rights. The 1201(b) restrictions on sharing tools necessary to exercise their rights should be rescinded, so that users can help each other break free of the control imposed by DRM. Users should be free to ask third parties to disable DRM on their behalf.

 

 

All DRM is a violation of the rights of users. The exemptions process as outlined by section 1201 is completely broken beyond repair. No amount of exemptions, except a permanent exemption for all uses, can rectify the situation. Requiring users to continually fight for exemptions in order to maintain them is inherently unfair. The ultimate solution to the problem is not to try and fix a broken process, but to end it. It is unethical and harmful for the law to treat all users as criminals -- which is exactly what DRM does. The DMCA's anti-circumvention provisions do too much harm and should be repealed, so that users may once again enjoy their rights under the law without interference.

 

 

Failing a full repeal of the anti-circumvention provisions, extending permanent exemptions to more uses can help alleviate the damage caused by those provisions. Users that rely on assistive technologies should not have the tools they need toyed with by a broken process. Granting a permanent exemption on these tools will ensure at least that these users are not locked out from their everyday lives by restrictive DRM. Additionally, DRM should not be permitted to lock users into abusive contracts by holding their devices hostage. Congress has already had to intervene previously in the DMCA's exemptions process to correct the failure to renew the right to unlock mobile devices. The exemption for unlocking devices should be made permanent, and should extend to all devices, including tablets. The ability to research or repair devices should likewise not be impaired by DRM. The simplest permanent exemption for ensuring the right to research or repair a device is simply to make all uses permanently exempt. Finally, the DMCA's anti-circumvention provisions should not lock away older technologies for all eternity. Users should be able to continue to have full control over their devices regardless of how long they possess them. DRM is malfunctioning software as it does not serve the interests of users; when this malfunctioning software further breaks down and ceases to allow access to the work, users have every right to disable this software to regain access. Exempting the ability to disable broken DRM from the DMCA's anti-circumvention provisions permanently will help to reduce the damage created by this unfair system.

 

 

 

 

 

 

The FSF's previous comment in this study calling for the end of the DMCA's anti-circumvention provisions received over 1200 co-signers. There is a great deal of interest in ending the broken system brought about by the DMCA, but failing a full repeal of the DMCA's anti-circumvention provisions, we encourage any efforts to limit the damage caused, and granting permanent exemptions would be beneficial to that cause.

 

Sincerely,

 

Donald Robertson, III

 

Copyright and Licensing Associate

 

Free Software Foundation


Added November 08, 2016 at 3:00pm by Paul Allison
Title: Wording change

The text below is the previous wording for paragraph 34.

Rhetorical knowledge is . . . the compass; it is the guide to writing a superior paper . . . It is the ability to have a mind set [sic] for your writing, knowing where you are going while also having a three hundred and sixty degree understanding of how and why you are going there. Rhetorical knowledge is knowing your audience, the context, your purpose, and how you will choose arrangement, medium and strategy.


Added November 08, 2016 at 3:02pm by Paul Allison
Title: Wording change

The text below is the previous wording for paragraph 34.

Rhetorical knowledge is . . . the compass; it is the guide to writing a superior paper . . . It is the ability to have a mind set [sic] for your writing, knowing where you are going while also having a three hundred and sixty degree understanding of how and why you are going there. Rhetorical knowledge is knowing your audience, the context, your purpose, and how you will choose arrangement, medium and strategy.





