In the document below, I have selected key passages out of the following three articles, purposefully picked from three different decades and describing three different populations of students:
As you read, I want you to provide at least two comments on each article (six total) as well as two replies to classmates on each article (six total). For your two initial comments on each article, please address the following:
Then, of course, please reply to two of your classmates in a substantive manner.
In short, your goal for discussion this week is to analyze the research designs of each study as well as to engage in substantive dialogue about what "counts" as technology, teaching, and learning in each example, drawn across multiple decades.
ABSTRACT. Past research on cognition has demonstrated that cognitive learning strategies used to complement instruction can have beneficial effects on memory and subsequent achievement. The utilization of microcomputer technology to deliver instructional content to students provides an optimum environment to examine the instructional effectiveness of embedded instructional strategies. The purpose of this study was to examine the effect of an imagery cue and an attention directing strategy within the context of a microcomputer learning environment that provided both self-paced and externally paced instruction. Achievement was measured on five different tests designed to measure different educational objectives. One hundred eighty freshman students were randomly assigned to one of nine treatment groups. The results of the study indicate that embedding an imagery cue and an attention directing strategy in an instructional sequence increases student achievement. A combination of the two embedded strategies was also effective in improving students' achievement; however, combining the two strategies did not have a cumulative effect. It was also determined that the effectiveness of the embedded strategies was dependent on whether the instruction was self-paced or externally paced.
THE INTEGRATION of the microcomputer into the instructional environment has led to new interest in cognitive-oriented learning strategies. Consequently, considerable research is being conducted on microcomputer based instruction (MCBI) to determine the relative merits of self-paced versus externally paced delivery strategies (Belland et al., 1985). Although much of the early research on basic programmed instruction and computer-assisted instruction attempted to adjust for learner individual differences by using self-paced instructional models, there is indication that these self-paced models may not be appropriate for all learning conditions. For example, some researchers have noted problems with learner procrastination with completely self-paced instructional models (Reiser, 1985). Self-paced instruction has proved to be effective for some learners (Keller, 1974; Postlethwait, 1974); however, Carrier (1984) has questioned the validity of allowing students to exercise their own judgments about how much instruction they need and in what order.
Wittrock (1979) and Travers (1972) in citing the attentional and instructional research models contend that self-pacing may not be the most effective delivery strategy for all instructional and learning conditions because self-pacing may reduce the attention and motivation levels below those necessary for effective interaction with the content material. This conclusion was supported by Belland et al. (1985). Results from this study found that moderate levels of external pacing of microcomputer-based instruction were significantly more effective than completely self-paced microcomputer-based instruction in facilitating student achievement of complex concept learning and free recall of spatial problems.
Materials and Procedure
The instructional materials used in this study were developed originally by Dwyer (1972) and consisted of an 1,800-word instructional unit on the human heart describing its parts, part locations, and the internal functions during the diastolic and systolic phases.
This content was subsequently revised for this study.
This content was selected because it permitted evaluation of several types of learning objectives that are directly generalizable to those commonly taught in the classroom.
Students participating in the study were 180 first-term freshmen enrolled at Ohio State University.
Participation in the study was one of the available options for receiving extra credit in their psychology course.
After students signed up for the study, the names on the sign-up sheet were randomly assigned to the instructional treatment conditions, and each instructional treatment group and the control group had 20 students.
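The article notes that names were randomly assigned to conditions but does not describe the mechanics of the randomization. As a minimal sketch of one plausible procedure (not the authors' actual method; the roster names and function name here are hypothetical), a shuffled roster can simply be sliced into equal-sized groups:

```python
import random

def assign_to_groups(names, n_groups=9, seed=None):
    """Randomly split a roster into n_groups equal-sized groups.

    Hypothetical helper: the study does not specify its exact
    randomization procedure; this shows one common approach
    (shuffle, then slice).
    """
    rng = random.Random(seed)
    shuffled = list(names)
    rng.shuffle(shuffled)
    size = len(shuffled) // n_groups
    return [shuffled[i * size:(i + 1) * size] for i in range(n_groups)]

# 180 students split into 9 groups of 20, matching the study's design
roster = [f"student_{i}" for i in range(180)]
groups = assign_to_groups(roster, n_groups=9, seed=42)
```

Seeding the generator is only for reproducibility of the illustration; an actual randomization would not need a fixed seed.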
Treatments
Each of the nine microcomputer-based treatment groups was designed to teach students about the parts and operation of the human heart during systolic and diastolic functioning.
The instructional content was an adaptation of the Dwyer (1972, 1978, 1987) instructional stimulus materials.
The instructional content in each of the three instructional programs and sequence of content were identical.
During the MCBI instructional programs, students viewed and interacted with 57 different instructional segments that consisted of a visual with a verbal description and arrow, or arrows, pointing out the important information in that display.
The instructional display, in its basic form without strategies, consisted of visual information, verbal labels, and a verbal description.
The information in each instructional display was presented with the visual appearing first; then the part names or operation names appeared; and then three to eight lines of verbal text appeared under the visual.
There were three types of visuals used in the instructional programs.
Each instructional display consisted of some combination of one of the visuals and a verbal explanation (Figure 1).
...
Discussion
The results of this study indicate that an imagery cue strategy embedded in the instructional content increases the amount of information acquired and the students' ability to use that information.
Similar findings also resulted when the attention directing strategy was embedded into the instruction.
A combination of the two embedded strategies was also effective in improving the students' information acquisition; however, the combination of the two strategies did not have a cumulative effect. It was also determined that the effectiveness of the embedded strategies was dependent on the basic format design of the microcomputer-based instruction, i.e., self-paced versus externally paced.
The findings of this study may be explained in part by the fact that the imagery cue and attention directing strategies are different forms of rehearsal. Focusing attention allowed time for incoming information to remain in short-term memory long enough to be elaborated on and encoded for long-term memory (Anderson, 1980; Atkinson & Shiffrin, 1968; Dwyer, 1987; Lindsay & Norman, 1972; Murray & Mosberg, 1982).
...
Conclusion
The program embedded learning strategies of imagery cue and attention directing can be used individually or in combination.
Both strategies tend to increase learning in terms of amount and ability to use learned information.
If information to be learned is mostly spatial, combining the two strategies would be helpful for most learners.
However, if the basic MCBI program design is self-paced, the program embedded learning strategies may not be as effective.
The results of this study fit past research results on cognitive learning strategies using imagery cue and attention directing. Further work needs to be done on cognitive learning strategies that are practical and can be easily used by students. Such learning strategies should be examined within the context of innovative instructional methods using the new electronic technologies. In addition, learning strategies should be evaluated in terms of the amount of information processing and the effect that the different levels of information processing have on student achievement of different types of educational objectives.
Abstract
This study examines the effects of Interactive Multimedia instruction on the achievement and problem-solving skills of non-science majors in an Environmental Science course at a mid-western university.
The findings indicate that the Interactive Multimedia had a significant effect on both of the variables.
The findings are discussed in terms of the impact on self-study when students are learning outside of the classroom in a distance learning environment.
...
Introduction
During the 1990s, critics and supporters alike have been questioning the effectiveness of science education within the American Education System.
Pearlman (Gandolfo, 1993) states, "In the United States alone, education costs $450 billion a year.
It is a huge burden, yet almost everybody agrees that schools are failing."
Pearlman believes schools have not taken advantage of the teaching and learning enrichment that technology tools provide, and that this is one of the major reasons they are failing.
Perhaps schools are having some difficulty in catching onto many of the new opportunities that technology tools offer, but computers are widespread in the schools and some good things are happening.
This article examines how schools are now using interactive computer-based multimedia as a tool to develop thinking skills needed to assimilate and transform massive quantities of information into solutions for today's fast paced changing society.
Case study
One of the problem areas for schools today revolves around the lack of student interest in science.
Enrollment is down and performance is low.
However, today's generation shows an interest in things pertaining to the environment.
In this study, a package entitled Environmental Science: Computer Lab Simulations (Hirschbuhl, Bishop and Jackson, 1996), an interactive multimedia (IMM) program based on actual geological field studies, was used by undergraduate non-science majors at a mid-western university.
For this study these simulations were added to one section of an environmental studies course and compared to a section that used only the traditional method of instruction (classroom lecture).
According to Trollip and Alessi (1988) one of the purposes of adding computers to classroom instruction is to facilitate learning for students by improving the quality and quantity of what they know. Schwier and Misanchuk (1993) believe an advantage of interactive multimedia instruction is the creation of meaning developed by the learner's interaction with the new information in the program.
At the time of this investigation there was very little empirical evidence regarding the use of interactive multimedia instructional technologies in higher education (Sports and Bowman, 1995). Most articles were anecdotal, describing outstanding professors using the latest technological invention. The problem stems from the lack of research (Heller, 1990; Park, 1991; Park and Hannafin, 1993; Preece, 1993; Zachariah, 1995) regarding the effects of a self-paced interactive multimedia computer simulation on students' learning, motivation, and attitude. Reeves (1993) stated that for a worthwhile study of interactive multimedia, a diverse population spending several hours in purposeful study should be examined.
Purpose of the study
This research examined the impact on students' grades and higher level thinking skills when computers were added to the classroom.
Interactive multimedia simulations of "real world situations" (actual field trips of a geology professor with 22 years' experience) were incorporated into one section of an environmental geology course.
The interactive multimedia modules, which promoted participation and interaction, were designed for students to gain scientific knowledge and concepts, and develop problem-solving skills without the heavy use of math.
Research design
The research design was quasi-experimental because it combined the use of "naturally assembled" intact groups (Campbell and Stanley, 1963), pre-test and post-test, and the use of a control group.
The control group research design used is shown in Table 1.
Sample
One hundred and fifty-two students were involved in the study in the spring of 1996.
The control group (113 students) received the traditional lecture method of instruction, while interactive multimedia replaced part of the traditional instruction for the treatment group (39 students).
GALT instrument
The Group Assessment of Logical Thinking (GALT) (Roadrangka et al., 1982, 1983) was designed to measure student cognitive development.
This instrument has been used by many as a predictor of students' math and science achievement (Bitner, 1986, 1988, 1991).
Roadrangka et al. (1983) reported a coefficient alpha of 0.85 for the total test, and validity was reported with a strong correlation (0.80) between the GALT and Piagetian interview results.
The GALT consisted of 21 questions, of which 18 required the student not only to pick the most appropriate answer but also the reason the student chose that answer. For a question to be considered correct, the student had to choose both the answer and the reason correctly. Students are classified as concrete thinkers with a GALT score of 0 to 8, transitional with a GALT score of 9 to 15, and formal thinkers with a GALT score of 16 to 21.
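The scoring and classification rule described above is mechanical enough to sketch in code. This is only an illustration of the rule as the article states it; the function names are our own, not part of the GALT instrument:

```python
def galt_item_correct(answer_correct, reason_correct):
    """A two-part GALT item counts as correct only when both the
    chosen answer and the chosen reason are correct."""
    return answer_correct and reason_correct

def classify_galt(total_score):
    """Map a total GALT score (0-21) to the reasoning level
    thresholds reported in the article."""
    if not 0 <= total_score <= 21:
        raise ValueError("GALT scores range from 0 to 21")
    if total_score <= 8:
        return "concrete"
    if total_score <= 15:
        return "transitional"
    return "formal"
```

Note how the two-part items make partial knowledge count as zero: a student who picks the right answer for the wrong reason earns no credit on that item.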
Interactive multimedia instruction
Multimedia can be loosely defined as computer-based technology integrating some, but not necessarily all, of the following: text, graphics, animation, sound, and video (Barron and Orwig, 1995).
There are several definitions of interactive multimedia instruction (IMI; Galbreath, 1992) and, as the multimedia environment changes rapidly, so may the meaning of interactive multimedia instruction.
According to Schwier and Misanchuk (1993) IMI is "instructional, multiple-sourced (ie, multiple media sources are involved) intentionally designed, and coherent" (p. 4).
Interactive multimedia modules for environmental geology
The topics of the eight units of interactive multimedia are: 1) Introduction to Environmental Science, 2) Energy from Coal, 3) Geology of Homesite Selection, 4) Minerals for Society, 5) Legal Control of the Environment, 6) Stream Pollution, 7) Streams and Floods, and 8) Radiation in the Environment.
All of the modules are based on actual field studies, and the student assumes the role of investigator, with each module presenting a different environmental problem.
All modules have a consistent screen format and each consists of an introduction to the problem to be addressed, the role that the student must play in the investigation, a means for the student to collect data relevant to the problem, a modeling book where the student enters, reviews and draws conclusions based on the collected data, a multiple choice test and an essay test. The essay test puts the responsibility of solving the problem on the shoulders of the student, who takes on the identity of an official in charge, who must make critical decisions based on the problem and data collected. The student is always in control and may access additional data, recheck data collected, access a glossary of appropriate terms, listen to audio associated with the problem, and may elect to take or repeat the multiple choice test. From the menu page a student can access any one of the eight modules, the progress report, or send the instructor email. Students must make some data collections before they are allowed to take the test or write their essays; thus forcing active participation and preventing students from rapidly paging through the modules and declaring themselves finished.
...
Summary of Conclusions
The following conclusions were drawn from this study:
Discussion
Overall, this study validates the effectiveness of the IMM treatment in significantly increasing student achievement and problem solving skills in environmental science. The following statements support this claim:
First, this study appears to validate the use of the GALT as a predictor of student performance because the probability of those students with a GALT score of 11 or above receiving a passing grade (B or better) was significantly greater than those with GALT scores less than 11.
Next, both groups had post-test GALT score gains over the pretest. The treatment group showed a significant reasoning gain, while the gain for the control group was not significant. When the gains in GALT scores between groups were compared, the difference was not significant. Part of the increase in the GALT scores might be attributed to students taking the same GALT test for both the pre- and the post-test.
Finally, the proportion of students with a passing grade (B or better) was significantly higher for students in the treatment group when compared with those in the control group. This increase was so significant that it is hard to suggest that the uncontrollable variables normally invoked to explain such differences (classes meeting at different times of the day or on different days of the week, being taught by different instructors, grading standards, or student attendance) could account for all of the variance. This result is supported by the findings of Massaro's (1995) study.
According to Wills and McNaughton (1996) educational software using interactive multimedia must actively engage the student, which is exactly what we witnessed one morning when not a sound could be heard as students were intensely engrossed interacting with the computer program in the Multimedia Lab.
This study validates the effectiveness of the use of interactive multimedia as field trip simulations for an environmental geology course. However, future studies should be conducted using different research designs, methodologies, disciplines, and quality software to determine the long-term consequences of the use of interactive multimedia.
Abstract
An experimental study of the Technology Immersion model involved comparisons between 21 middle schools that received laptops for each teacher and student, instructional and learning resources, professional development, and technical and pedagogical support, and 21 control schools. Using hierarchical linear modeling to analyze longitudinal survey and achievement data, the authors found that Technology Immersion had a positive effect on students’ technology proficiency and the frequency of their technology-based class activities and small-group interactions. Disciplinary actions declined, but treatment students attended school somewhat less regularly than control students. There was no statistically significant immersion effect on students’ reading or mathematics achievement, but the direction of predicted effects was consistently positive and was replicated across student cohorts.
...
Introduction
The present vision for educational technology imagines technology's infusion into all aspects of the educational system. Many educators, policymakers, and business leaders recognize technology's pervasive presence in individuals’ daily lives and its ties to future opportunities for students who must compete in a global, knowledge-based economy (Friedman, 2005). Providing the technological, informational, and communication skills needed by 21st century learners, however, challenges schools to move beyond conventional modes of teaching and learning as well as the traditional boundaries of the school day and school walls.
Some researchers believe widespread technology use in society is moving schools inevitably toward more extensive and innovative applications of technology in curriculum and instruction (Dede, 2007; Smith & Broom, 2003). This view acknowledges that students who attend schools today are different from those of previous years because using technology in nonschool settings is altering their “learning styles, strengths, and preferences” (Dede, 2007). New technologies are reshaping how students access information, communicate, and learn within and outside of classrooms (Smolin & Lawless, 2007). Schools, accordingly, must capitalize on students’ natural inclinations as learners.
Emerging technologies are also supporting more innovative forms of teaching and learning. For example, lessons supported by technology can involve real-world problems, current and authentic informational resources, virtual tours of remote locations, simulations of concepts, or interactions with practicing experts and global communities. These kinds of experiences are important because research shows that students learn more when they are engaged in meaningful, relevant, and intellectually stimulating work (Bransford, Brown, & Cocking, 2003; Newmann, Bryk, & Nagoaka, 2001). Technology-enhanced learning experiences also can help students develop 21st century competencies, such as thinking and problem solving, interpersonal and self-directional skills, and digital literacy (Partnership for 21st Century Skills, 2006).
Texas, similar to other states, recognizes that students’ long-term success is tied to their preparation as lifelong learners, world-class communicators, competitive and creative knowledge workers, and contributing members of a global society. Yet, despite high aspirations for technology, the piecemeal way in which most schools have introduced technology into the educational process has been an obstacle to the effective use of technology for teaching and learning (Texas Education Agency [TEA], 2006).
Recognizing this limitation, the Texas Legislature in 2003 set forth a different vision for technology in Texas public schools. Senate Bill 396 called for the TEA to establish a Technology Immersion Pilot (TIP) that would immerse schools in technology by providing individual wireless mobile computing devices and technology-based learning resources along with teacher training and support for effective technology use. In response, the TEA has used more than $20 million in federal Title II, Part D monies to fund Technology Immersion projects for high-need middle schools. Concurrently, a research study, partially funded by a federal Evaluating State Educational Technology Programs grant, has investigated whether exposure to Technology Immersion improves student learning and achievement.
The Present Study
The present article reports third-year findings for students involved in a comprehensive experimental study of the effects of Technology Immersion on schools, teachers, and students. Specifically, we contrast outcomes for two cohorts of middle school students who attended Technology Immersion schools with students in control schools on measures of technology-related learning experiences and competencies and measures of academic achievement (reading and mathematics test scores). We present longitudinal outcomes for Cohort 1 students who attended schools across three project implementation years (Grades 6–8) and Cohort 2 students who attended schools during two implementation years (Grades 6–7).
Research Questions
The overarching purpose of the study was to investigate the effects of Technology Immersion on students’ academic achievement; however, we also examined the relationships among Technology Immersion and intervening factors at the school, teacher, and student levels. The research involved 42 middle schools assigned to either treatment or control conditions (21 schools in each group). In the present study we addressed two research questions:
Research Question 1: What is the effect of Technology Immersion on students’ learning opportunities (i.e., classroom activities, engagement)?
Research Question 2: Does Technology Immersion affect student achievement?
...
Effects of Technology Immersion on Academic Achievement
Given that changes in students and their learning experiences were expected to mediate academic performance, we next estimated treatment effects on students’ TAKS T scores. Our analyses concentrated on reading and mathematics scores because students completed TAKS tests for those subjects annually, whereas they completed TAKS tests for writing, science, and social studies at intermittent grade levels. We used three-level HLM growth models to examine how students’ TAKS reading and mathematics achievement varied across time (the point at which students completed TAKS assessments each spring), students, and schools. As Table 5 shows, we estimated school mean rates of change as well as the separate effects of student economic disadvantage and the school poverty concentration on TAKS reading and mathematics performance. Each HLM analysis included approximately 3,000–3,330 students divided nearly equally between the 21 treatment and 21 control schools. Comparable proportions of students were retained in analyses across years (58%–59% of treatment students, 58%–61% of control students).
Discussion
The study of Technology Immersion is distinguished from previous research on one-to-one computing environments by its experimental design and use of a theoretical framework to investigate causal mechanisms. The theory of change assumes that treatment students experience technology-rich school and classroom environments that foster more active and meaningful schoolwork, which in turn, enhance students’ personal competencies and engagement and ultimately increase academic achievement. Before discussing results, it is important to note that teachers and students in control schools typically had access to computers and digital resources in computer labs or media centers, as classroom stations (usually 1–3 computers), or on checkout laptop carts. Thus, control schools continued the traditional approach with technology integration resting largely on the motivation of individual teachers, whereas Technology Immersion schools committed to whole-school integration. In sections to follow, we discuss key findings relative to the study's research questions and the implications for one-to-one laptop programs at other schools.
Summary of Effects
Implications for Technology in Schools
The relationship between technology and student achievement continues to be an important topic and the focus of considerable research. Some recent and influential studies have raised concerns about the viability of financial investments in educational technology (e.g., Cuban, 2001; Dynarski et al., 2007). Likewise, if improved standardized test scores are the primary justification for investments in one-to-one laptop programs, then results probably will be disappointing. Evidence from this study suggests that large-scale one-to-one laptop programs are difficult to implement, and, as a result, programs may produce either very small or no improvements in test scores. Nonetheless, as the costs of laptops decline and the uses of wireless computers expand (e.g., digital textbooks and resources, online testing, school-to-home communication), interest in laptop programs is increasing (Zucker & Light, 2009; Zhao & Frank, 2003). This pilot study of the Technology Immersion model offers lessons for school leaders as well as policymakers who are considering laptop programs for their schools.
Foremost, effective technology use clearly involves more than just buying computers and software. This study and others suggest that laptop programs may be more effective when technology is part of comprehensive school reform initiatives (Ringstaff & Kelley, 2002; Zhao & Frank, 2003; Woodul, Vitale, & Scott, 2000). Successful Technology Immersion schools had highly committed administrative leaders who secured teacher buy-in for student laptops and provided the support components specified by the model. Particularly important were investments in technical support for school networks and timely laptop repairs, and the provision of ongoing professional development for teachers (Shapley, Maloney, Caranikas-Walker, & Sheehan, 2008). Consistent with other research, schools that served mainly economically disadvantaged student populations encountered numerous obstacles in trying to implement a complex school reform model (Desimone, 2002; Vernaz, Karam, Mariano, & DeMartini, 2006). Thus, those schools needed additional planning time to build capacity and secure adequate supports prior to implementing an immersion project.
Additionally, one-to-one laptop programs were more likely to be well implemented and sustained if laptops advanced overall goals for student learning and achievement. District and school leaders who embraced Technology Immersion believed that individual student laptops had benefits above and beyond simply raising standardized test scores. Financial investments in laptops were part of an overall migration toward digital school environments, including electronic textbooks, online assessments, and virtual coursework. These leaders believed laptops helped prepare their students for the 21st century, exposed them to worldwide cultures, expanded learning outside of school, and moved students toward product creation and away from drill and practice for tests. Technology Immersion supported their vision for learning opportunities that intellectually challenged and motivationally engaged students, inspired students to learn on their own, and prepared students for life, further education, and careers.
Canelos et al. used an experimental design in their study. I think that using this design would, overall, produce good results in terms of sampling the population. One of the disadvantages, I would think, might be related to differences in students’ level of knowledge of technology, which might have had a bearing on the results.
Good point, Julie, about the advantages and disadvantages. Given the time period — and the ways in which we perceived “what counts” as evidence in educational research — what else makes you think that these decisions were purposeful?
Looking back to the 80’s, I think college freshmen’s exposure to computers (technology) was limited to word processing, which makes me think this study’s sample population had similar levels of technology knowledge.
The study outlined that 180 freshman students were randomly assigned to one of nine treatment groups. Incentives in the form of extra credit were given for participation, so this may have biased the sample toward students with more positive attitudes toward learning and higher motivation.
That is a good point, Jennie, and the study mentions that the names on the list were randomly assigned to one of the treatment groups, but it doesn’t really describe the process by which they were randomly assigned.
I couldn’t agree more with this finding about self-paced instructional models. I think this also falls under the terms self-directed and personalized learning, and we need to be careful that it is not “individualized” learning to such an extent that students lose momentum and start to procrastinate. Learning needs to incorporate a variety of different strategies, as students need to be exposed to and experience different strategies.
Thanks, Jennie, for your thoughts on this.
Given that this research was conducted in 1989, to what extent do you believe that educational technology designers, as well as curriculum directors and teachers, heeded these concerns?
How do the findings of educational research affect (or, in many cases, not affect) the ways in which we adopt new technologies?
I agree with you, Jennie. I think the embedded instructional strategies used in this study still hold true today for self-directed learning.
The technology is described in greater detail below, but the comments about self-paced instruction are interesting in their positioning of students. One of the sources (Carrier) questions the validity of allowing students to exercise their own judgments about pacing and order. I think most 21st century readers would recoil from this kind of blunt authoritarian stance toward students. (However, in my professional experience, most who design online courses do believe that student pacing should be more structured than unstructured, even if they won’t admit it.)
I completely agree with you, Robert, in that we need more structured self-pacing if we are going to adopt technology in this way. I think the role of the teacher still has a place, providing not only motivation to students but also guidance, prompts, and questions to support the learning that is happening with the tech.
I believe there are studies out there showing that having at least two required deadlines per week in an online course is key to student success.
Wow that’s so good to know! I am running an online course and they have a whole list of things to complete every week! I need to rethink that!
I think in many ways it depends on the discipline, or the content of the course. For example, I have taught some writing courses that are not structured in weekly modules, but rather in units. Particularly in my Technical Writing course, that seems to work best. But then in other courses—particularly lower-level courses—I have noted that the weekly setup with two or so deadlines (and sometimes it helps to have an initial activity midweek) keeps students motivated. The textbook we had in 811 was really helpful in describing how the structure of online courses can foster student motivation.
And of course, a lot would depend on the student population you’re working with. Some have better success with smaller projects and more deadlines, some with larger projects and fewer deadlines.
That is a good point, Robert. The educational level, discipline, student population and many factors have to be considered. That is why I think it is difficult to have a one-size-fits-all approach.
Good points here, Robert, and certainly worth noting. Looking at it now, given what we can do with the WWW, it does seem like we would be worried about this approach (though, as Jennie noted, it still exists).
As you consider when this article was written (pre-WWW), to what extent does that change your perception of the way in which this intervention was designed?
Can it tell us something for the present day — both about what we assume good teaching and learning are, as well as what we might actually focus on as researchers?
To me, it shows that educational innovation is always searching for a way to spring forth, even when world wide networks have yet to give us opportunities for collaboration. With techniques like these, instructors are looking for the best way to pace a course, but the thinking is necessarily more local than global.
I think it has a lot to say about the present day, especially in online course design, which is one of my primary areas of concern. The issue of pacing as a component of course design is even more apparent now than it was during this time period, because online course designers have to make more deliberate, more public, more transparent decisions about sequencing and interaction, and it’s all part of the course record.
“Microcomputer-based instruction” is a great way to describe using computers in a classroom. The study, though, raises a great early question to ask of any new tech: does the type of pacing, internal or external, impact its use? I feel like this question can be repeated for any new technology.
In this experimental study, the sample population, in my opinion, does not lend itself to generalizability. The sample was from one particular course at a university, not representative of all majors, and only one class standing, freshmen, was included. In my view, freshmen may not have as much experience with microcomputer technology as a junior or senior in college.
This is always my question, Pam. As I think about this, I am wondering if the population for this study does not include all majors. The article never really identifies the population that the sample is meant to represent, does it? I cannot find it.
I completely agree; the study could have been repeated at multiple universities, or within other classes, no?
Having the three interventions strengthened the study, and having 20 students in each group gave the researchers a large enough sample to strengthen the impact of the findings.
Extra credit for the assignment definitely affects the type of student who would be involved in the study. This makes me question some of the validity of the study.
That little extra credit tidbit stood out for me too.
Yes, it should be disclosed as a limitation of the results.
Control Group, Treatment Groups, Random Assignment.
As you consider this RCT design, Coop, I wonder if you could elaborate more on the questions and ideas it raises for you. In what ways does it position students? Educators?
The word “treatment” tells me that this is an experimental study with a control group and a treatment group.
The advantage of an experimental study is that if the sample is random and the assignment is random, then the results are generalizable to a larger population. In education in particular, randomized experiments seem to carry more weight than, say, qualitative studies for initiatives such as No Child Left Behind.
Note however that students in this study signed up to receive extra credit; therefore, they are a self-selected population, not a randomly selected population. The assignment to control or treatment group seems to be random, however.
The disadvantage is that if you are exploring a new phenomenon that you don’t know much about, it’s not always apparent what variables you should be controlling for in an experiment. In this case, however, the experiment is a straight-ahead comparison where the independent variable seems to be the materials themselves, so I can’t find fault with the design.
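(Aside: the article does not describe its randomization procedure, so the following is only a minimal sketch of how 180 participants could be dealt into nine equal groups; the `student_…` names and the seed are hypothetical, not from the study.)

```python
import random

def assign_groups(participants, n_groups, seed=None):
    """Shuffle the participant list, then deal it into n_groups equal groups."""
    rng = random.Random(seed)
    shuffled = participants[:]  # copy so the input list is untouched
    rng.shuffle(shuffled)
    # Slice the shuffled list into n_groups consecutive, equally sized blocks.
    size = len(shuffled) // n_groups
    return [shuffled[i * size:(i + 1) * size] for i in range(n_groups)]

# 180 freshmen into nine treatment groups of 20, as in the study's design.
students = [f"student_{i}" for i in range(180)]
groups = assign_groups(students, 9, seed=42)
```

Shuffling once and slicing guarantees equal group sizes, which matches the 20-per-group design implied by the abstract.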
I found it very interesting that self-paced learning was less effective in this study. This helps point out the continued importance of the instructor in these early days of technology integration. I wonder if this holds true as time, technology, and pedagogy move along.
I think the reason (in part) for their finding that self-paced instruction may not be as effective is that they have an embedded learning strategy within the MCBI. That learning strategy is based in cognitivism (earlier in the article), so it is a step-by-step strategy in a logical sequence set by the course designer. Thus, self-pacing through a cognitive construct may not yield the best result.
If you substituted another learning strategy for the treatment group, how do you imagine the outcome might have been different here?
This helps to support the need for students to have a facilitator (ISTE 6b) to set the pace. How this might affect teaching/teacher has to do with educators supporting students with the technology, not just the technology replacing the educator.
Thanks, Julie, for offering this idea about the educator as the guide. Certainly, these ideas go back decades, if not centuries.
So, now, as we think about what it means for us (both as educators and researchers), how do you feel we need to think about “doing” research with or “on” educators?
The authors discussed innovative methods using new technologies, which align well with ISTE #5, the use of technology to create personalized learning experiences. This study examined self-paced learning, which could be characterized as personalized learning.
In these paragraphs (27-29), the technology discussion seems to focus on three aspects: increasing student interest, using computers to create meaning and interaction, and adding to knowledge in the field, given a lack of prior research. I think their focus is on generating interest and motivation through the use of interactive multimedia. In this use, they are looking to empower learners more so than teachers.
Purpose
How does it position teachers?
Pearlman (Gandolfo, 1993) stated that schools were failing because of their lack of using technology tools to enrich learning. Frear states students’ lack of interest in science is also a problem schools faced in 1999. Compared to schools today and the increased interest in science, specifically STEM initiatives, do you think technology tools have helped increase this interest in science and better position teachers and students?
I do think they have increased the interest. There is so much more to learn and amazing interaction and demonstration with tech in science specifically. No offense to Jennie, but interactive tech in science is WAY cooler than in math.
I feel like this is convenient, but when it comes to tech, small cases are very limiting; especially when variables and environmental factors are so vast, generalizability is particularly challenging.
Here, I see a lot of evidence of the “enhancement” stage as opposed to the “transforming” stage, which makes sense for the time period this was written in.
The phrase “improving the quantity and quality of what [students] know” positions students a bit more passively than most modern scholarship would. Instead of students being a receptor, modern ed tech usually envisions students as evaluators and creators.
I think that is an entirely accurate statement. Very good thoughts.
This makes me think that the multimedia is used as simulations to replace field trips with the professor. In that case, it would be much more relevant to the learning design than a comparison to just lecture content. The wording is not clear, as far as I can tell from these excerpts.
Good point here. It is hard to compare both interventions when they are so drastically different. You are on a limitation kick, aren’t you!
Despite the fact that the study states a quasi-experimental approach was adopted, we can also look for other elements that describe this type of research design. There is no random assignment of participants; they were a target population of “naturally assembled” intact groups. The causal impact of the intervention - incorporating real-world situations through interactive multimedia - was assessed for any differences in cognitive development.
We have to be cautious about internal validity here. How do we know all the students in both the experimental and control groups had the same foundational knowledge and baseline?
You are asking a profound question, Jennie… when we think about any sociological research, to what extent can we assume that any group of people (whether it is 2, 20, 200, or 2000) has “exactly” the same kinds of knowledge, skills, experiences, and dispositions?
When we layer in technology as part of the equation, how does the digital divide exacerbate this problem?
This is a concept I wrestle with when I think about types of research, populations, and samples sizes. I listened to one statistician who claimed that the right representative sample can be even more accurate than a complete census, because even a census has the weakness of missing particular hard-to-find populations that a sample size could account for by adjusting the model.
That is a very profound statement for researchers. I never thought about using a sample group like that before. That is a very interesting take on how to solve some of the issues we get with certain populations that we want to study.
You can certainly tell that this is so, as the groups are “naturally assembled.” They don’t state it, but one would suppose this is just based upon which class an individual selected. This could impact the study in a number of ways: maybe these were highly motivated folks who signed up before others for an optimum class time. We can’t tell from what is stated about the participants or this portion of the procedure. The “naturally assembled” groups, and the recognition that one group would be selected for the treatment, are strong indications that they have correctly labeled this as quasi-experimental.
The researchers indicate the design as quasi-experimental. The study’s design allowed the researchers to identify that, in the short term, the use of the interactive multimedia improved student outcomes, particularly related to improving critical thinking skills. I agree with the identified limitations of this design/study as not giving information about long-term consequences of using this tool.
This is a quasi-experimental study because it uses an intact population instead of a random sample. Elsewhere I’ve seen these called prospective observational studies. The advantage here is that there is no disruption to the participants, since the groups were self-forming. The disadvantage is that without random assignment, only association can be established, not causation.
This study discusses the defining characteristics of interactive multimedia instruction and suggests it should be intentionally designed. This positions teachers to be Analysts (ISTE #7), to use technology in different ways to help students learn and demonstrate a particular skill or competency.
It is crazy to think that this was an intervention. The definition they use has been adopted and accepted by every science program; I can’t think of one program that doesn’t use sound, video, or graphics. It seems less like an intervention and more like a typical teaching strategy. It makes me question whether that changes the definition of technology.
It appears that the treatment and control groups differed not only in terms of ed tech but also in instructional strategies, which were completely different and may account for the differences in learning. The control group was exposed to lectures, while the treatment group was given problem- and inquiry-based learning environments.
Do you think a better alternative would have been for the control group to also go through the investigative process, but without the multimedia? To me, that seems like it might be a more valuable comparison.
This is a good line to pursue — think about what variable, specifically, the researchers are looking at. What is it that they are really trying to discern? Is the focus on the student, or on the technology?
I think the focus is on the effectiveness of the technology. I feel like the researchers almost got distracted by trying to create a quasi-experimental approach and should have just focused on a good correlational study. Simply showing that the tech produced good learning results would have been a stronger course for me than their attempt to compare back to a “normal” classroom.
Again, the software described sounds like an excellent way for students to engage in scientific inquiry, but I can hear and feel the limitations of the software at the time. Here, students have the power to collect and view, and the power to be assessed, but it doesn’t sound like students can add anything to the existing structure of the software. The communication seems like a one-way affair.
I agree with you, Robert, about the communication being a one-way route. There was no mention of immediate feedback to the student, either from the multimedia or the teacher. Maybe a sign of the limitations of technology during the 90s.
I think the point of this study is really whether the tech is effective. The GALT scores and grade scores show that using the IMM produces success in both grades and pre/post tests. The comparison with the normal classroom was less important and honestly not as well supported, since the course design differences are not clear. But I feel the “point” of the study is well supported by the data collected.
This helps to exemplify the affordance of technology related to capturing students’ attention and thus, interest. Remembering back to 811 last semester, students’ interest is a driving factor in their motivation, particularly intrinsic motivation, so this is certainly an area where technology can fulfill a positive purpose related to students/learning.
This is a good point, Julie, but I wonder how you think Liz Kolb might reply to this… what does she say about “engagement” as a construct for examining the effectiveness of educational technology?
How have studies like this, perhaps, set us on a track for looking at the effectiveness of educational technology?
Yes, so maybe this is not a good example of engagement since there was not a sound in the room. Liz Kolb would look at engagement as related to collaboration and active learning, so maybe this is not a good example of actively engaging students because of the lack of activity (discussion, collaboration, etc.).
Julie, I agree; I think an important result of this study is that the use of multimedia has the potential to increase intrinsic motivation.
I thought the figure in the article (in its entirety) was very helpful in representing the theoretical model of technology immersion. The researchers broke up factors associated with achievement in four ways - technology use, technology proficiency, engagement, and academic achievement - to show the ways in which increased access to technology would affect these factors. Further, the researchers were careful to match the treatment and control schools on the variables chosen (enrollment, % economically disadvantaged, % minority, % ESL, % Special Ed, % student mobility, % passing 2003 TAKS, % passing 2004 TAKS), all of which aided the validity of the study. One disadvantage I can see in this research design is not really being able to get a good idea of the perceptions of the participants, either the students or the educators.
Do you think including something like a post-survey would have been a good idea to measure those perceptions, or would that just muddy the waters?
Based on the purpose and research questions, what is included is sufficient, I suppose. But, particularly with the finding that those who used the laptops and digital resources attended school less regularly, it might have shed some light on why that might have been an effect.
I do particularly like the studies that are primarily quantitative, but include some interview components as well to get some direct quotes from the participants. Sometimes those qualitative elements help shed some light on the issue that we might not get right away from the data.
This study adopted an experimental design in which 21 schools were given laptops and 21 schools represented the control group, without laptops. Something interesting is that treatment students attended school less regularly than control students. Was this because students with laptops thought they could learn everything without the teacher, and most of the learning was self-paced, self-directed, and individualized?
Regarding attending less frequently: kind of a learning point. It was important enough to mention but not to suggest a reason? C’mon.
Control groups, treatment groups, 21 x 21 with/without laptops. I liked that this was a cohort study and longitudinal; that model seems more interesting and useful than most. I found it interesting that there was no substantial difference in math and reading scores.
Comparing 21 schools addresses the generalizability issues raised about the previous study. The longitudinal aspect also adds validity to the results. I think this is the strongest of the study designs for these reasons.
This is an experimental study. The advantage here is that having a group of schools as a control group and a group of schools as a treatment group allows for the comparison of outcomes that the authors are looking for. The disadvantage might be that you don’t get the personal detail that you would with a case study that relied on interviews and observation.
What were the socioeconomic standings of the schools? This would play a major role in students’ ability to use the computer outside of school and possibly their comfort with the technology, both of which play a large role in the effectiveness of the tech.
The authors did mention that computers were accessible at school, but if students are not learning outside of the classroom using technology, they will be at a different tech level.
I’m having a difficult time wrapping my mind around the “traditional approach with technology”. Although I do not teach, I integrate technology daily in my profession. I struggle with identifying the factors that prevent teachers and students from being motivated to integrate technology into their teaching and learning.
I think many of the factors are related to not having the training, time, or resources to integrate the technology. Or having technology but not knowing how to integrate it successfully (TPACK).
Hi Pam,
I meet many educators in certain parts of the world who still write on the board, lecture, and “deliver” material to students. There is no technology in the classroom, as some schools have computer labs that need to be booked in advance. I know this is hard to believe, but many places still have very limited resources.
It’s interesting how the immersion of tech in the 1999 study involved what we would see in a PowerPoint, while this 2011 study shows technology related to connecting students; subsequently, the definitions of immersion and tech have drastically changed.
This study seems to imagine technology in a realistic way—not so much as an abstract savior, but as a mobile cart that contains tools that can potentially improve student engagement. I like the realist approach, because it keeps expectations within the realm of reality. I don’t see a lot of discussion about instructors or their role—the discussion is mainly focused on students and administrators.
I’m glad administrators were included in the discussion section, because many studies I read about the transformational potential of technology ignore the fact that administrative buy-in is absolutely key in making anything happen.
It was a strength to involve other stakeholders; I think they have a huge impact on how tech is immersed. Good catch with this.
This study reported no statistically significant difference between technology immersion and reading/math achievement. This all depends on how the technology was used; e.g., in the SAMR model, if the tech was used as a mere substitution with little learning enhancement, then this would account for little difference.
Great point Jennie!
I think no difference is powerful. If they are so different and have the same impact, isn’t that useful?
I am not seeing any major quantifiable benefits. One-to-one computers are expensive and difficult for school systems to manage. The payoff would have to be significant to justify the expense and effort, but this study shows negligible benefits, if any, to the switch. That would give me significant pause if I were a school system looking into doing this.
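(A side note on what “no statistically significant difference” means in practice: analyses like this typically compare group means relative to their variability, e.g. with a t statistic near zero. The sketch below uses made-up scores for illustration; it is not the study’s actual analysis.)

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    mean_diff = statistics.mean(a) - statistics.mean(b)
    return mean_diff / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical achievement scores for a treatment and a control school:
treatment = [72, 75, 78, 74, 76, 73]
control = [71, 76, 77, 73, 75, 74]
t = welch_t(treatment, control)  # small |t| suggests no meaningful difference
```

When the mean difference is small relative to the pooled spread, |t| stays well below conventional critical values, which is roughly what “no statistically significant difference” reports.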
A paradigm shift beyond the scope of preparing students for standardized tests is part of the school reform mentioned in this study. The shift includes how we prepare students for 21st-century digital literacy and whether that is worth the technology investment.
This seems to me to be key. Integrating technology for the sole purpose of integrating it seems to not produce desired results. As indicated in this study, just having laptops does not in and of itself guarantee good outcomes. The beliefs of educators (I am thinking back to our readings in week 4) in relation to the technology use have a direct impact on whether or not the technology will in fact affect outcomes.