
An Invitation to CALL

Unit 2: Finding and Evaluating CALL Resources

OVERVIEW

Goal 2, Standard 1 of the TESOL Technology Standards (2008, 2011) states: "Language teachers identify and evaluate technological resources and environments for suitability to their teaching context." In line with that standard, identifying and evaluating resources and environments is the general topic of Unit 2. Relevant resources can include CALL courseware, online materials for teachers, online materials for students, and resources for connecting teachers and students. The focus here is not on the resources themselves--later units will provide a lot of examples for English language learning and some for other languages as well. Rather, we discuss the process of identifying candidate resources and more importantly evaluating their suitability for your curriculum and students.

IDENTIFYING RESOURCES

Finding suitable resources is not an easy task despite the increasingly large amount available for English and other commonly taught languages. Dedicated resources (those designed specifically for language learning), both free and commercial, abound. To locate desired materials on the web, good searching skills are needed. Becoming familiar with Google's more advanced search techniques (http://www.google.com/advanced_search?hl=en) and trying a range of search terms rather than just the first one that comes to mind will usually yield more favorable results than a basic search using a broad category term like "ESL". Other sources include professional organizations. For example, the TESOL CALL Interest Section has a virtual library with hundreds of tagged resources: http://www.diigo.com/user/call_is_vsl. Although the focus is on ESL, some of the materials, tools, and activities there can be used for foreign language teaching and learning as well. As noted in the previous unit, for those interested in tutorial software, the TESOL CALL Interest Section Software List is maintained by Deborah Healey at http://www.eltexpert.com/softlist/index.html, and the CALICO Journal Software Reviews are available for English and other languages on the journal website: http://www.equinoxpub.com/journals/index.php/CALICO. The reviews can be found by issue in the archives, but many can also be located using the search term "software review" on the journal site. Book publishers' websites are another source of CALL activities and materials. A number of textbooks published in recent years include additional online resources for teachers and supplementary materials and exercises for students. Some of these can be useful even if you are not using the textbook. For examples from Pearson see http://www.pearsonelt.com/classroomresources.
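
To illustrate the search-technique point above, combining a quoted phrase with operators such as site: (to restrict results to a particular domain) or a leading minus sign (to exclude a term) will usually narrow results far more than a single broad term. The queries below are purely illustrative examples, not recommendations of particular sources:

  • "ESL listening activities" intermediate podcast
  • site:edu "academic writing exercises" ESL
  • "vocabulary quiz" ESL -games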

IDENTIFYING ENVIRONMENTS

The overall technology environment includes both the local environment and the online environment (the emerging mobile environment has elements of both). The local environment is that of the user--the teacher and students--and includes the institutional setting, home setting, mobile options such as mp3 players, smartphones and tablets, and other settings such as libraries or Internet cafes in proximity to the institution or student homes. The local environment consists of a number of factors.

  • The hardware resources available: computers, other digital devices such as mp3 players, audio and video recorders, peripherals like printers and scanners, and so on.
  • The software tools: applications such as word processors, communication applications, audio and video recording software, and media players. Increasingly, we see these in the online environment as well. In some cases users are not even aware whether the software resides on their own computer or somewhere else (consider the local Microsoft Office applications vs. the online Office 365 (http://www.office.com/online) or Google Docs (https://drive.google.com/)).
  • Openness to the Internet, including bandwidth: note that upload and download speed can vary depending on the number of users and, in some cases, on local connectivity; in some places students may also need to pay for access.
  • Accessibility: availability of institutional and online resources. For example, if there is a computer lab, how open is it to your students for class or drop-in use? What commercial software are you licensed to use?

The online environment has some of these same considerations plus some additional ones.

  • Delivery formats: are your local machines able to use the resources you find? For example, audio and video material may require players capable of handling Adobe Flash, Windows Media, Real Media, or Apple Quicktime. (Here and elsewhere, do a search for these terms on www.google.com or another search engine if you are unfamiliar with them.)
  • Free or fee: online materials and applications vary as to cost. Some are provided free and hosted by institutions or authors (like the present site), some are hosted by "open-source" communities (e.g., the popular Audacity sound recorder at http://audacity.sourceforge.net/), some are hosted by individuals or companies that use advertising to provide income, some have both free and fee versions (e.g., www.englishbaby.com), and some are only available for a fee, usually on a subscription basis.
  • Security: security and safety are issues at any time online for both students and teachers. The TESOL Technology Standards for Learners, Goal 1, Standard 3, states: "Language learners exercise appropriate caution when using online sources and when engaging in electronic communication." However, in addition to the normal precautions, there is a question of what information the sites may be collecting on students and how student privacy is being assured. On some free sites especially, personal information may be collected and a non-exclusive right to the student's work may be a condition for use. In the US, such practice can be seen as a violation of FERPA (the Family Educational Rights and Privacy Act), and teachers are advised to take care accordingly: http://www2.ed.gov/policy/gen/guid/fpco/ferpa. Teachers should also be aware of TESOL's Technology Standards for Teachers Goal 1, Standard 4, regarding responsibility for legal and ethical use of technology.
  • Uptime and downtime: it is important for an online site to be available when you need it. In some cases, sites may go offline for hours or days or even disappear altogether. It's a good idea to have an alternative ready.
  • Speed: although this seems to be changing for the better, sometimes popular free educational sites may be slow during the school day.

The mobile environment: apps, apps, and more apps...
The rise of smartphones and tablets like iPads and their Android and Windows competitors has opened up a new environment for learning that is "anytime, anywhere." Apps come in various types: some are akin to the disk-based programs of the past, allowing mobile learning and review of vocabulary, grammar, and pronunciation. Others, like Whatsapp (http://www.whatsapp.com/) and the many mobile versions of social networking sites, are primarily for taking communication and social interaction beyond the level of the phone call and text message. Although they are convenient, ubiquitous, and often free or very inexpensive, apps need to be treated no differently from other potential learning tools. First, you need to find them. A place to start is here: http://mastersinesl.com/essential-esl-app-guide/. Then, candidates must be evaluated judgmentally for their potential value as language learning supports for your teaching context. The section below provides guidelines for doing so.

Understanding any environment is a key step in determining what resources you will actually be able to use effectively. A 2013 paper I co-authored with Glenn Stockwell (openly available at the following URL) discusses general aspects of the mobile environment and offers a set of principles for teachers, developers, and learners to consider in implementing mobile language learning: http://www.tirfonline.org/wp-content/uploads/2013/11/TIRF_MALL_Papers_StockwellHubbard.pdf.

EVALUATING COURSEWARE (INCLUDING APPS)

The evaluation component of this unit begins by looking at the sub-field of tutorial CALL from the perspectives of both of the end users: teachers and students. It introduces the term courseware, which refers to software that is used to support formal language learning. In practice, courseware has been used to refer to everything from complete software packages that can be used without a teacher to software that is just a part of a language learning course, sometimes a minor or optional supplementary part. We will use the term interchangeably with that of tutorial software to include any software designed for language learning purposes. Although CALL courseware has arguably lost its dominant position in CALL over the past decade, it is still widely used and continues to be a significant part of the field. At the very least, it is worth exploring so that you can make an informed decision about whether to incorporate it in your own teaching or recommend it to your students for independent study. It is worth noting that more and more free courseware is showing up on the web on institutional sites or those supported by advertising. Also, there is educational, native-speaker courseware that can sometimes be adapted for language learning purposes.

I have been interested in evaluation for some time, and in a series of papers from 1987 to 1996, I attempted to develop a comprehensive methodological framework for CALL that integrated evaluation with development and implementation. The CALL world has turned out to be more complex than that original vision (it did not anticipate the rise of CMC (Unit 3), for example, and other uses centered on the computer as a tool), but it still serves a purpose in laying out areas of consideration for any software that has an identifiable teaching presence. As we will see, it can be adapted somewhat for use in evaluating a broader range of CALL tasks and activities. The framework expanded on an earlier one by Martin Phillips (1985) and used the Richards and Rodgers (1982) framework (Method: approach, design, and procedure) as an organizing scheme to characterize the apparent relationships between elements of language teaching and learning and the computer. The driving force behind it was the observation that existing approaches to instructional design and in particular evaluation did not pay sufficient attention to language learning or else limited themselves to specific teaching approaches. I introduce a simplified version of the framework here. Although the focus of this unit is evaluation, I discuss its relationship to development and implementation as well.

ORGANIZING PRINCIPLES

Development, evaluation, and implementation are part of a logical progression in any situation that has an end product. If a company produces a computer program for balancing your checkbook, for instance, they need to 1) design it with the needs of the end users in mind, 2) evaluate it in house and encourage outsiders to review it, and 3) have a mechanism to implement it, including figuring out how to make it available and training end users in its effective operation. Of course this can be and often is cyclic rather than linear, with the feedback from evaluation and implementation providing data for subsequent development.

CALL software is a bit different from a simple checkbook balancing program in that it typically involves a more diverse view of who the evaluators and end users are. Evaluation, for instance, may be connected to the developer and be used for improving the courseware prior to release, or it may be done by an outside reviewer for a professional journal. It may also be done by an individual teacher representing a school or institute, selecting materials for his or her own class, or even blogging for the wider language teaching community. It may even be by a student evaluating for possible use or purchase, or to communicate impressions to other users. As Chapelle (2001) notes (see http://llt.msu.edu/vol6num1/review1/default.html for a review), evaluation can be done judgmentally at the level of initial selection, based on how well-suited a piece of software appears to be, and it can also be done empirically, based on data collected from actual student use. We focus on the former here.

Development, evaluation, and implementation are thus simultaneously part of a logical progression of a courseware project and interacting manifestations of its reality. This is true whether the project is for CALL or for some other educational purpose. However, the specific domain of language teaching and learning imposes on these three a set of considerations that are not exactly the same as we would find in courseware for, say, history or chemistry or math. The framework that follows addresses those considerations. This is a revised and simplified form of the content in Hubbard (1996) and in the papers listed below (see references). The others go into more depth in language teaching approaches (1987), evaluation (1988), and development (1992). Note that an updated version for evaluation, which also covers Chapelle's (2001) framework and evaluation checklists, can be found in Hubbard (2006): www.stanford.edu/~efs/calleval.pdf.

Two final notes. First, Levy (1997), in an extensive critique of this framework (see http://llt.msu.edu/vol2num1/review/levy.html for a review of this work), argues that "Hubbard's framework for CALL materials development, which assumes that all CALL is tutorial in nature, is not generally applicable to the computer as a tool. Similarly, the Richards and Rodgers model...only has limited application for the computer as tool" (p. 211). I think there is more applicability than he suggests, but for the moment we will follow Levy's view and assume this is a framework for tutorial CALL only. We will return to a more expanded application of it below.

Second, like Richards and Rodgers' framework, but unlike most others for CALL, there is an attempt to be agnostic here with respect to what actually constitutes good language teaching and learning through computers. For the field as a whole, we need a framework which can be used equally by those whose language teaching approaches might be as diverse as those of grammar-translation, lexical, communicative, sociocultural, or interactionist proponents. Thus the framework is descriptive rather than prescriptive.

FRAMEWORK FUNDAMENTALS

The three modules (development, evaluation, implementation) share core components inspired by Richards & Rodgers (1982). In each case their original components are adapted, interpreted, and supplemented to include the reality of the computer as the interface between the teacher/developer and materials and the learner. (Realistically, in any tutorial program there IS a teacher (or at least a teaching presence) in addition to the materials themselves, just as there is a teaching presence in a textbook.) The development and evaluation modules are most closely related in terms of the elements considered. Implementation feeds on the output of evaluation. However, each module can impact the others over time, as when information from evaluation and implementation is returned to developers for updates, patches, or considerations in later versions of the product.

CALL Framework Interrelationships

THE EVALUATION MODULE

Evaluation involves three kinds of considerations. A crucial first step is to understand what the courseware does before attempting to judge it: not surprisingly, this is difficult to do, because as soon as we start interacting with a program we want to judge it. If an evaluator wants to approach the problem a little more objectively (and hence effectively), the first consideration is the operational description of the software, which essentially focuses on the procedure-level elements. The design elements can be subsumed under the label "learner fit." That is, based on the information from the operational description, you are looking to see how well the design elements (see Development Module, below) of language difficulty, program difficulty, program content, etc. fit the students for whom you are evaluating the software. The approach elements, in this case approach-based evaluation criteria, can be subsumed under the label "teacher fit"--broadly, what does the software appear to represent in terms of assumptions about what language is and how language is learned, and how compatible are such assumptions with those of the teacher doing the evaluation? More generally, what kind of "teaching" is the software likely to be doing? Ultimately, then, evaluation consists of getting a clear understanding of what the software actually has in the way of material and interaction, and then judging how closely it fits with the learners' needs as determined by their profiles and learning objectives (perhaps themselves determined by a course syllabus) and with your own language teaching approach. This relationship is sketched below.

CALL Evaluation Framework

It is worth noting that a modified version of this framework is still used by the CALICO Journal, http://www.equinoxpub.com/journals/index.php/CALICO for its courseware reviews. See the resources and references sections below for more details about this and alternative conceptions.

EXTENDING THE MODEL TO OTHER RESOURCES

In a more recent paper (Hubbard, 2011) I have extended the preceding model to the web more generally, that is, to resources beyond those that have a clearly tutorial component. While Levy's tutor/tool framework still has value in describing the applications themselves, in the case of tools and resources in particular (e.g., discussion boards, email programs, media players, social networking sites, learning management systems, repositories of authentic audio and video, etc.), the key is how they are used for language learning, that is, the activities and tasks that are built upon them. Admittedly, the methodological framework in its original form is not a perfect fit: operational description, for example, will not include input judgment and feedback in non-tutorial materials, but other components such as screen layout, types of input accepted, help options, and so on still need to be addressed. Teacher fit and learner fit similarly remain relevant--whatever resource is utilized, it should be used in a way consistent with your assumptions about how languages are learned, with curricular objectives and student characteristics taken into account as well.

ALTERNATIVES

The methodological approach I present here has proved useful over the years but there are at least two other approaches that deserve mention, especially once we begin to look beyond tutorial CALL. First, despite some of the limitations and biases in checklists, they have persisted over the years. In fact, the methodological framework above may be rather awkward to use in its raw form, and translating it into a checklist format for a specific combination of teacher fit and learner fit considerations (representative of a teacher's own language learning approach, course design, and student characteristics for example) provides a practical instantiation of its intent. Another general approach, that of building a framework on theoretical principles derived from SLA research, is seen in the work of Chapelle (2001): see http://llt.msu.edu/vol6num1/review1/default.html for a review. She identifies six general evaluation criteria, usable not only for software but more broadly for CALL tasks: language learning potential, meaning focus, learner fit, authenticity, impact, and practicality. It is important to note that these criteria are relevant for both judgmental purposes and for evaluating outcomes. In line with the latter, another TESOL Technology Standard, Goal 3, Standard 3, references the need to evaluate "specific student uses of technology" for effectiveness. An example application of Chapelle's framework can be seen at https://www.calico.org/html/article_133.pdf. Unlike the methodological framework, which was developed originally for courseware evaluation and requires some adaptation to accommodate other types of CALL activities, Chapelle's framework was designed for what she refers to more generally as "CALL tasks", encompassing a broader set of options. J. B. Son (2005) has offered criteria specifically for the evaluation of websites, in particular the notion of authority: http://eprints.usq.edu.au/820/1/Son_ch13_2005.pdf.

DEVELOPMENT AND IMPLEMENTATION CONSIDERATIONS

Development, evaluation, and implementation are part of an integrated process yielding supportable CALL materials, tasks and activities. Implementation considerations are relevant during the evaluation process, but they become crucial when deciding how best to use software that is available. Some of the key questions to address in implementation are the following.

  • What is the setting in which the students will be using the software (classroom, lab, home, etc.)?
  • What kinds of training or preparatory activities are warranted?
  • What kinds of follow-up activities, either in or out of class, will there be?
  • Given the options provided by the program, how much control will the teacher exert, and how much control will be left to the learner?

Whether they are done in class together, in a lab with individuals or pairs working on computers, or outside of class at a computer cluster, on the student's own computer, or even on a mobile device like a cell phone, computer exercises should be clearly linked to the rest of the course. This does not mean they have to be fully integrated. Arguably, activities with CALL courseware can be supplementary or complementary to the classroom part of the course (including the virtual classroom in an online setting), required or optional, and still be useful. However, the instructor needs to be sure that learners see the connections and that the computer work is compatible in terms of content, level, and approach with the rest of the course material and activities. For a more detailed description of the components to consider in implementation and their interrelationships, see Hubbard (1996).

RESOURCES FOR EVALUATION

Besides the evaluation framework presented here, it is common to see evaluation checklists or other procedures. Here are a few examples.

CALICO's Online Software Review Guidelines: http://media.equinoxpub.com/home/wp-content/uploads/2014/09/Learning-Technology-Resource-Evaluation-Rubric.pdf
Guide for Using Software (http://www.cal.org/caela/esl_resources/digests/SwareQA.html) in the Adult ESL Classroom by Susan Gaer
A Place to Start in Selecting Software (http://www.deborahhealey.com/cj_software_selection.html) by Deborah Healey & Norm Johnson
ICT4LT evaluation form: http://www.ict4lt.org/en/evalform.doc

SUGGESTED ACTIVITY. Visit the CALICO website at http://www.equinoxpub.com/journals/index.php/CALICO. The reviews can be found by issue in the archives, but many can also be located using the search term "software review" on the journal site. Find an interesting-looking piece of software and read the review, noting 1) what you can learn from it and 2) any questions that arise that might help inform your own evaluation process. If you feel energetic, try two or three. You should note the difference between a published review intended for a wide audience and your own evaluation, which should be situated with respect to your own approach, your students' abilities and needs, and the environment of your class.
