Framing the Assessment of Educational Technology Professional Development in a Culture of Learning
Melissa Pierson
University of Houston
Arlene Borthwick
National-Louis University
(2010). Journal of Digital Learning in Teacher Education, 26(4), 126–131.
Abstract
Assessing the effectiveness of educational technology professional development (ETPD) must go beyond obtaining feedback from participants about their level of satisfaction with a workshop presenter. Effective and meaningful assessment of ETPD requires that we design inservice learning activities that can be measured using methods that are consistent with what we know about teaching and learning, recognize teacher and student change as it relates to the larger teaching and learning context, and view evaluation as an inseparable component of ongoing teacher action. We therefore offer for consideration an ETPD assessment model that merges three theoretical constructs through which professional development consumers might interpret research findings: (a) technological pedagogical content knowledge (TPACK), (b) organizational learning, and (c) participant research and inquiry.
(Keywords: professional development, assessment, TPACK, action research)
We’ve all been there.
As you gather your notes and belongings at the end of a long day of learning some trendy, new technology tips and tricks, someone hands you a page with a few questions to answer: What did you learn today?
How likely are you to use what you learned?
Was the presenter effective?
You’re eager to get home, eyes weary from peering at the computer screen all afternoon, so you do your best to circle the numbers that will be complimentary to the presenter—she tried hard to be funny and keep your attention, after all—but you skip past the large, open-ended comments box in your rush to get out the door.
Or maybe you are the professional developer charged with collecting data to evaluate the outcomes of a grant-funded project.
You plan ahead with teachers and other school staff.
You provide an introduction during large- and small-group activities.
Later you proceed to “collect data” in classrooms, but instead you find yourself helping students and teachers with new software features and collaborative student projects.
While in the thick of supporting successful classroom implementation, you might find yourself setting aside your observation protocol.
Soon the class period is over and students have no time to enter comments for the daily journal prompts you developed (those data you planned to use for the evaluation).
These scenarios are all too common, and yet these data—haphazardly solicited and carelessly offered—may represent the total information source we have for judging the effectiveness of our efforts in preparing teachers to teach with 21st-century skills and tools. Any conclusions we draw from these data are necessarily limited, skewed, and of questionable validity. In the first scenario, data are likely more representative of how personable the presenter was, or how tired the participants were, than how well facilitated, well designed, or well matched to learners’ needs the learning experience was. In the second scenario, the presenter frets over lost opportunities for meaningful data collection, acknowledging that the data set will be smaller than desirable.
Assessing the effectiveness of educational technology professional development must go far beyond obtaining feedback from participants about their level of satisfaction with a workshop presenter. The “impact the professional development activity had on pedagogical change or student learning” (Lawless & Pellegrino, 2007, p. 579) is of particular importance to our field’s efforts to prepare teachers at all levels to use technology effectively. However, basing our evaluation on incomplete data leads us to an incomplete understanding of, and incomplete assumptions about, the roles of professional developers, teacher educators, and teachers.
The genesis of our thoughts on the assessment of professional development in educational technology was our work in co-editing the text Transforming Classroom Practice: Professional Development Strategies in Educational Technology (TCP; Borthwick & Pierson, 2008). Through the solicitation of proposals and collaboration with chapter authors, what became clear to us in so many ways was the necessary connection between professional development models and the assessment strategies used in the evaluation of each model.
Effective Evaluation of Educational Technology Professional Development
The climate of accountability in the early 21st century has heightened the awareness of stakeholders at all levels of the education system to the need for data. These many audiences for the findings of educational technology professional development (referred to as ETPD throughout this article) demand more than descriptive studies reporting isolated anecdotal narratives, even if those narratives share compelling stories of success. Clearly, a planned evaluation strategy, in place from the inception of the project, could assist professional developers in understanding the extent to which ETPD is effective, rigorous, and systematic.
Unfortunately, the field has not yet arrived at a level of rigor and system that we can term effective. Instead, PD efforts are deemed effective based on teacher self-report and opinion—those data so easily collected on photocopied surveys and so unrealistically depended upon as meaningful facts. These means fall considerably short of demonstrating “whether professional development has changed practice and ultimately affected student learning” (Hirsch & Killion, 2009, p. 468) and have led to a literature base of well-intentioned descriptions of promising, yet isolated, implementations and informal lessons learned from individual programs. This “show-and-tell” line of publication falls short of rigor, lacking stated research questions, planned designs, and matched and multiple data collection. It hinders the flow of research into practice, leaving educators to wonder which PD workshops they should attend to improve their teaching or their students’ learning, as little or “virtually no information exists to help consumers of professional development” (Hill, 2009, p. 473). Further corroborating the troubling state of professional development literature, a 2009 review of published accounts of professional development with respect to student learning outcomes found that “only nine of the original list of 1,343 studies met the standards of credible evidence set by the What Works Clearinghouse” (Guskey & Yoon, 2009, p. 496).
Researchers have suggested frameworks to guide the assessment of ETPD. Desimone (2009) suggests that researchers seek a consistent framework to enable PD to be “based on a strong theoretical grounding” confirmed through multiple methods, including case study, correlational, and experimental approaches (p. 186). Lawless and Pellegrino (2007) have recommended an evaluation schema that addresses (a) types of professional development, (b) unit of analysis, and (c) designs and methods. Much earlier, professional developers had been advised to make concerted efforts to systematically collect data on professional development in terms of teacher and student outcomes (Guskey, 2000). The persistent challenge for professional developers as consumers of evaluation research reports is sifting out effects on teaching and learning that can be attributed to technology use from those attributable to other initiatives. Along these lines, Pianta and Hamre (2009) assert that the value of “observational assessment of teachers for leveraging improvements in educational outcomes is that they can be directly related to the investigation and experimentation of specific interventions aimed at improving teaching” (p. 115).
Effective and meaningful assessment of ETPD requires that we design inservice learning activities that can be measured using methods that are consistent with what we know about teaching and learning; recognize teacher and student change as it relates to the larger teaching and learning context; and view evaluation as an inseparable component of ongoing teacher action. We therefore offer for consideration an ETPD assessment model that merges three theoretical constructs currently enjoying much note and utility, through which professional development consumers might interpret research findings: (a) technological pedagogical content knowledge (TPACK); (b) organizational learning; and (c) participant research and inquiry.
Evaluating PD According to TPACK: The What
Professional development comes in all sizes and flavors, and to make an accurate assessment of the quality and impact of an activity, professional developers must consider the variety of ways teachers learn and the variety of variables that could affect teacher learning. However, simply recognizing that there are differences among professional development attributes and recognizing just how those attributes can be interrelated for effective technology use are two very different things. Layering any examination of ETPD findings with the TPACK model (Mishra & Koehler, 2006; Pierson, 2001) provides a helpful lens through which to view the process in light of current pedagogical thinking for 21st-century learners and teachers. The fields of educational technology and teacher education have come to agreement around the concept of TPACK to describe the meaningful use of technology in teaching and learning. Derived from Shulman’s (1986) notion of teaching as the intersection of content knowledge and pedagogical knowledge, the definition of 21st-century teaching also demands excellence in technological knowledge. True technology integration is said to be at the intersection of all three elements (Pierson, 2001). Further, the intersection of any two of the elements defines worthwhile knowledge sets: technological content knowledge, or technologies used for specific content applications; and technological pedagogical knowledge, or technology use for specific pedagogical purposes. Evaluating the effectiveness of professional development, then, must consider how well teachers are prepared to use technologies meaningfully in discipline-specific ways as well as in ways that are compatible with multiple teaching and learning strategies and roles.
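To make the framework’s intersection logic concrete for evaluation purposes, consider the following minimal sketch in Python. It is illustrative only, not an instrument from the article: the domain entries, the coded observation, and the function name are all hypothetical. An evaluator might tag coded lesson features by TPACK domain and flag whether evidence appears at the three-way intersection the model identifies with true integration.

```python
# Illustrative sketch only: coding one PD participant's observed lesson
# against the three TPACK knowledge domains. All entries are hypothetical.

CONTENT = {"fraction models", "proportional reasoning"}      # content knowledge (CK)
PEDAGOGY = {"guided inquiry", "formative questioning"}       # pedagogical knowledge (PK)
TECHNOLOGY = {"dynamic geometry software", "screencasting"}  # technological knowledge (TK)

def knowledge_profile(observed):
    """Tag observed lesson features by TPACK domain; flag three-way overlap."""
    profile = {
        "CK": observed & CONTENT,
        "PK": observed & PEDAGOGY,
        "TK": observed & TECHNOLOGY,
    }
    # "True technology integration" in the article's sense requires
    # evidence in all three domains at once.
    profile["TPACK"] = all(profile[d] for d in ("CK", "PK", "TK"))
    return profile

# One hypothetical coded observation: evidence appears in all three domains.
print(knowledge_profile({"fraction models", "guided inquiry",
                         "dynamic geometry software"}))
```

The design point is simply that an evaluation instrument built this way asks where the evidence falls across the three domains, not merely whether a tool was used.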
It stands to reason that if the elusive goal of effective technology-integrated teaching is found at the intersection of content, pedagogy, and technology, then effective assessment will be found at this same center (see Figure 1, p. 128). Assessment is an integral and inseparable part of the curriculum development and teaching process—one leading to the next and cycling back again. And if effective technology integration—and assessment of such—is the goal of educational technology professional development, then these elements should be prominent in any evaluation model. Lawless and Pellegrino (2007) hint at this importance when they say that “the most important impact a professional development activity can have on a teacher is that of pedagogical practice change ostensibly reflecting a deeper change in pedagogical content knowledge” (p. 597). Likewise, Guskey and Yoon (2009) found that PD projects are effective when they focus on enhancing teachers’ content knowledge and pedagogical content knowledge.
So, as the field continues to explore the usefulness of the TPACK construct to define teacher knowledge with technology, it must also push that exploration into how TPACK can shape evaluation of ETPD efforts.
Evaluating Professional Development within the Context of Organizational Learning: The Where
The problem with those descriptive studies of isolated pockets of successful professional development is that the authors are not always clear enough about—and readers therefore make incomplete assumptions regarding—the surrounding context of school, student, and administrative factors. Of course, a highly successful effort in one instance may not work as well, even if implemented to the letter, in a context that is less supported, less funded, or less engaged. In short, context matters, for both ETPD implementation and assessment. To assess the effectiveness of professional development in leading teachers to long-lasting gains in knowledge, attitudes, and instructional behaviors, we must examine supporting factors within the teaching and learning context.
Professional development has been characterized in the literature as a variety of leveled developmental models with fixed sequences of stages and levels of knowledge and skills acquisition (Dall’Alba & Sandberg, 2006). Early frameworks for understanding how teachers learned to use technology aimed to shift the focus from the technology tools themselves to the teachers and their developmental needs, and in doing so, they addressed the uniqueness of learners who participate in any professional development session. The role of the school organization, then, was to address these individual needs, moving teachers to more advanced stages.
However, although assessing teachers’ needs, understanding the types of assistance they might require, and anticipating the concerns they might have at different developmental levels are indeed necessary steps, a focus on a single teacher’s needs and progress cannot reveal all requirements for success. In fact, the implied linear nature of these staged models may conceal “more fundamental aspects of professional skill development” (Dall’Alba & Sandberg, 2006, p. 383). Rather, we need to examine the learning of an individual within the learning of the system as a whole; this implies a shift toward thinking about how individuals fit into the larger organization and the additional learning that must take place at the organizational level. Successful educational technology PD initiatives are characterized by an expanded, informed, and connected view of learning on both the individual and the organizational level.
A systems view suggests a focus not only on individual teacher and student growth, but also on changes in organizational policies and procedures, infrastructure, curriculum and instruction, expectations for stakeholders, and organizational climate (Newman, 2008). Teacher growth and change will not be sustained without organizational support and, in fact, may be sabotaged (Borthwick & Risberg, 2008). Harris (2007) recommends that a school or district make its own choices about particular professional development methods and strategies, selecting, combining, and sequencing aspects to craft an overall approach based on its unique needs as a learning organization. Trying out new technologies requires teachers to assume a level of risk taking; to allow for success, teachers will need to work in “a climate of trust, collaboration, and professionalism” where administrators “promote technology-related risk taking among teachers on behalf of students” (Borthwick & Risberg, 2008, p. 39). Desimone (2009) sees context as “an important mediator and moderator” for implementation of professional development models (p. 185), and, going even further, Zhao, Frank, and Ellefson (2006) suggest, in correlating classroom technology use with professional development activities, that “schools should develop a culture instead of a program of professional development” (p. 173). Evaluation of ETPD effectiveness must, then, include an assessment of the context in which that development is occurring. In that way, consumers of ETPD findings can determine in what ways they can scale a successful project, locally situated within the context of a school whose staff are committed to continuous learning, to different contexts.
Evaluating Professional Development through Practitioner Research: The How
Change in pedagogical practice is the ultimate goal of professional development. However, few studies use data on teachers’ technology use to inform their practice (Lawless & Pellegrino, 2007). So, even if professional developers ensure that assessment of ETPD focuses on teacher knowledge (TPACK) and occurs in a supportive context, how can they feed the results back into practice and allow that practice to inform continued research? The simple answer is that the two—research and practice—must in fact be one and the same.
The evaluation of such a complex educational endeavor as ETPD must employ multiple, flexible, systematic, and rigorous methods. It is the latter two on this list that are infrequently found in reports of ETPD. Yet, if approached from a rigorous stance, even such research methods as action research and case study can meet the “platinum standard” of research. Editors of journals in technology and teacher education propose: “The platinum standard requires rigorous research in authentic school settings that approaches idealized designs as nearly as possible, given the constraints of schools and real-world learning environments” (Schrum et al., 2005, p. 204). These editors recommended undertaking research based in authentic classroom settings as long as it is grounded in theory and builds upon the existing knowledge base. Operating within a platinum research standard opens up as “acceptable” those forms of experimental research that do not require a control group. This approach is a more reasonable methodological goal for classroom-based research than a true experimental model, which is often referred to as the “gold standard.”
In particular, action research, with its practical and indigenous classroom applications (Nolen & Putten, 2007, p. 403), can provide a framework to explicitly connect professional development to evaluation methodology. As Ham, Wenmoth, and Davey (2008) explain, a self-study approach might then not only be the professional development method but also serve as a form of assessment, leading to larger, common answers about student outcomes. Such a spiraled sequence of inquiry, data collection and analysis, and reflection uses the results of systematic inquiry to inform and lead into the next phase of questioning. The role of teachers as participant researchers is critical to the diagnosis of learning outcomes, the identification of subsequent instructional strategies, and input to policy development (Borthwick, 2007–2008). Participant research as a 21st-century professional development assessment model allows teachers as professionals to look at their practice in new ways (Linn, 2006); respects teacher knowledge and experience; and provides a long-term as well as immediate evaluation and feedback loop, with small findings continuously driving the next steps of instruction (Fullan, Hill, & Crévola, 2006).
In other words, instead of just outside evaluators coming in to assess teaching practice at the end of a prescribed period and some time later feeding those findings back into another professional development workshop, teachers as researchers constantly ask questions of their teaching, collect and analyze multiple forms of data, collaborate with one another, and feed ongoing findings into tomorrow’s teaching plans. This metacognitive approach to the evaluation of professional development enables teachers’ lifelong learning, thus extending the reach of every formal professional development effort.
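As a way of seeing the shape of that loop, the following minimal Python sketch renders the spiraled sequence as code. It is not the authors’ protocol: the function names are hypothetical and the bodies are placeholders for real classroom data work. The point is only that each cycle’s reflection seeds the next cycle’s question.

```python
# A minimal sketch of the spiraled inquiry cycle: inquiry -> data collection
# -> analysis -> reflection, with findings feeding the next round of questions.
# All function bodies are hypothetical placeholders.

def collect(question):
    # Placeholder for observations, student journals, and work samples.
    return [f"evidence related to: {question}"]

def analyze(data):
    # Placeholder for systematic analysis of the collected data.
    return f"finding drawn from {len(data)} data source(s)"

def reflect(finding):
    # Reflection turns this cycle's finding into the next cycle's question.
    return f"Given '{finding}', what should change next?"

def inquiry_spiral(question, cycles=3):
    for n in range(1, cycles + 1):
        finding = analyze(collect(question))
        print(f"Cycle {n}: {question} -> {finding}")  # shared with collaborators
        question = reflect(finding)                   # findings drive next steps

inquiry_spiral("Does screencast feedback improve student revision?")
```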
Unfortunately, in actual practice, classroom teachers are rarely supported or encouraged to engage in research or scholarly writing; thus, this metacognitive approach may never come to be. Ideal partners to facilitate this process are those in higher education—researchers in educational technology and related fields who are regularly engaged in such scholarly activity (Cunningham et al., 2008). Teachers who have had the opportunity to collaborate and co-teach with such outside technology partners were afforded chances to experiment with teaching with new technologies (Zhao, Frank, & Ellefson, 2006). Such school–university partnerships can create the framework for ongoing co-research habits that will continually inform classroom practice and research alike.
A Model for Effective, Rigorous, and Systematic Evaluation of Educational Technology Professional Development
Assessment of ETPD must recognize the interaction among the TPACK variables of content, pedagogy, and technology to understand the extent of meaningful technology-supported teaching. Assessment of ETPD must consider the teaching and learning context, both locally and broadly. And assessment of ETPD must close an action research loop joining research with practice.
We propose extending the TPACK model, which defines effective technology-enhanced teaching, to guide the assessment of ETPD when supported by the surrounding “frame” of context and participant research, specifically phases of Reflection, Inquiry, Collaboration, and Sharing (see Figure 2, p. 130). In this model, individual and organizational learning occur over and over again as educators and their research partners engage in the action research process in light of the organizational context, thus positioning assessment of ETPD as a culture of learning. This contextually situated and inquiry-framed model of ETPD positions TPACK not only as the center of a conceptual model for effective professional development, but also as the center of a comprehensive plan for assessment embedded in the work and culture of teaching.
Further, it implies that successful assessment of PD efforts must have at its center the basic definitional components of TPACK. It must also resemble the iterative action research cycle, encouraging PD facilitators to begin any staff development session by asking participants to think about what they can learn from it and how that learning is situated in the work they already do; to pose questions about how teaching and learning can improve; to collaborate with peers and more experienced colleagues to solve problems of practice; and to evaluate and share findings with one another as part of an ongoing effort at collective improvement. Such a cycle of inquiry exemplifies Fullan, Hill, and Crévola’s (2006) Breakthrough model, in which “teachers will operate as interactive expert learners all the time” (p. 95). Observing and documenting teacher behaviors “in the thick” of classroom practice, whether done by teachers as participant researchers or in collaboration with research partners, suggests the most direct route to evaluating the effects of both formally supported and personally initiated professional development.
The Utility of This Model
The potential power of ETPD to enhance teacher knowledge and skills, and thus improve student learning, means it is worth our time to understand what works and in what contexts. Although there are some good, reliable tools available, “they have not been subject to repeated use and validation and are not widely available” (Desimone, 2009, p. 192).

Collaborative research efforts such as the Distributed Collaborative Research Model (DCRM; Pierson, Shepherd, & Leneway, 2009) show promise in providing guidance about how assessment of ETPD might meet the “platinum standard” of educational technology research. The intentionally collaborative nature of DCRM embeds the capacity for rigorous and valid practice into the research design from the outset. Partnerships are at the heart of this planned collaborative research, both higher education partners working with school partners and partners collaborating across distant locations to build a larger database from which to speak more broadly about joint findings. The DCRM model further recommends consistent methods and data collection tools to facilitate interinstitutional collaborations. Assessment tools with which we are familiar, including observations, surveys, interviews, and text and video analysis, all have value.
The quest for rigor will not be easy. Chapter authors in our TCP text spent much more time describing the activities undertaken as part of the implementation process of their model than providing evidence of student achievement. There are several logical explanations for this, including minimal budgets for program evaluation and minimal time for participant observers to collect data while they were in the thick of preparing for and implementing professional development activities. But our expectations as consumers of professional development reports need to change. We must expect a reporting of evaluation measures, something that is now rarely done (Desimone, 2009). We must expect to see detail about “what works, for whom, and under what conditions” (Lawless & Pellegrino, 2007, p. 599). We must expect that the ETPD community will share findings among themselves, such as in a clearinghouse (Nolen & Putten, 2007) or in database format, such as the Action Research for Technology Integration (ARTI) database (http://etc.usf.edu/fde/arti.php) at the University of Florida (Dawson, Cavanaugh, & Ritzhaupt, 2008).
Instead of viewing the complexity of classroom implementation of new instructional approaches as a detriment, we must learn to base ETPD in the heart of where the action takes place. And in fact our proposal of a contextually situated and inquiry-framed TPACK model as a template for the design of ETPD capitalizes on the work of teachers and research partners who embrace an inquiry approach to teaching and learning, connecting systematic evaluation of their participation in professional development activities directly to their effectiveness in the classroom. No matter what format PD takes, through all types of partnered approaches—including mentoring, peer coaching, students as professional developers, and professional learning circles—a contextually situated and inquiry-framed model of ETPD assessment can scaffold the assessment process. If we seek validity in our work and reporting, we must commit ourselves to systematic study of our work and documentation of related outcomes. In this way, our expectations as consumers of—and participants in—professional development evaluation can change.
See the original article for references and additional information.
There are four questions in the reading task this week. Please answer each of my original questions individually (4 responses) and then reply to two of your classmates’ initial comments on any of the four elements in a substantive manner (2 responses).
4 initial posts + 2 responses to classmates = 6 total comments
Before delving into the article, consider the initial argument here: How well do typical end-of-session surveys measure the deeper impact of professional development?
What are the other measures — both quantitative and qualitative — that we would need to see, over time, in order to draw valid conclusions about the long-term effects of a professional development program?
I’m not disagreeing with the picture they painted about rushing out of PD, etc., but then they also discussed gathering data in a classroom and the many ways this gets disrupted as well. I THINK I get where they were going with this, but it seemed odd to include if their focus is PD rather than general data gathering. It made my brain begin to go in a different direction.
Anyway, I think one issue that hasn’t been brought up (yet; I’m not done reading) is the concern over job security and privacy. For example, we do a lot of in-house PD at my district and take online surveys afterward. I’ve been present when administration has reviewed those surveys, and they most certainly are NOT anonymous as (repeatedly, in writing) promised. If we do not complete them, we are emailed continually until they are complete, which should be an indication that they are not anonymous. Is this data really being used to measure learning, or for other purposes? Is it used to steer the direction of future PD? I can only offer my opinions because the results of those surveys are usually not shared.
I know this is not the case for all contexts, but I do know that many worry about how their responses will come back to bite them in the butt, so to speak, or if it will have any impact at all. And I wonder how many PD leaders know what to do with the feedback they receive or if they know whether or not it is even flawed data they’re given. In order to have valid data, I believe longitudinal data speaks VOLUMES (although I imagine this is difficult for researchers).
In reality, the post-PD surveys are just measuring whether the presenter sparked enough interest for us to explore more and attempt to use the new tool presented. Yes, we need the motivation that we can get from a good PD, but other measures need to be in place to determine whether we then actually put it into practice in a way that enhances learning.
Personally, I think these measures could be both quantitative and qualitative, but I’d really rather see observation used here to determine how the info from the PD is applied.
Have you ever had a presenter send a follow-up email months later to check in? You would think those surveys would really want to know how effective a session was in helping educators implement ____ into classroom teaching, so wouldn’t that be a good way to find out?
While I don’t expect that you would read this entire document, please do read pp. 7-8 and then return here: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_procedures_v3_0_standards_handbook.pdf
Now that you have a sense of how the WWC reviews studies, why do you think it would be particularly difficult to set up a study of effective professional development that meets the standards (notably, the criteria for randomized-controlled trials or quasi-experimental designs)?
In other words, if holding to a statistically viable understanding of “what works,” why is it that we are very unlikely to ever really know what works with technology implementation based on professional development?
This sentence sums up quite a bit!! How well teachers are prepared is the key, and this requires more than just one PD. It also requires time (that all-precious commodity we lack) to explore and work with the technology in order to integrate it with our content knowledge and pedagogical knowledge.
I also think having a structured support system or framework is important. It can feel overwhelming to overhaul your practice and try something new, especially without guidance, even with infinite time. However, too many districts don’t provide enough of either for schools/teachers to be successful :(
And this would lend itself to a great conversation…an open dialogue for all to discuss what it means to be effective. How can we use tech. in a learner-centered way? How can we do so when the learners are TEACHERS DURING PD? How can we model during PD? How can we measure the effectiveness of PD through long-term, ongoing observations, discussions with students about standards, goals, purposes of tech., and so on, coaching sessions, use of a framework…???
Basically, start treating effective integration as the end goal and decide how you will assess it. Realize that you will never assess it in a basic, old-school way, so you’ll have to provide quality, ongoing training and various methods to “check in” and see (formatively) how it’s going. It doesn’t have to wait until the end and not everyone needs the same amount of time.
“Why aren’t qualitative studies reviewed by the WWC? The goal of the WWC is to assess evidence of program effectiveness. Therefore, studies included in WWC reviews are based on research designs (randomized controlled trials, quasi-experimental designs, regression discontinuity designs, or single-case designs) that are widely believed to assess the impacts of programs on outcomes of interest through the identification of credible counterfactuals (what would have been observed in the absence of the intervention). Qualitative studies are useful for many things, such as understanding context, assessing the fidelity of implementation, or identifying relationships for further study. However, qualitative studies do not allow for an assessment of effectiveness resulting from a credible comparison of outcomes between program and counterfactual conditions. For this reason, they are not included in WWC reviews.”
What could be done – in this model of ETPD – to mitigate some of the (potentially negative) effects of qualitative elements of the research? In other words, how could qualitative data be used in a productive way to complement the quantitative data being gathered?
[…] it would undoubtedly impact teaching and learning. If one were so inclined to collect quantitative data after a bit of time (I imagine it would take some time to impact the integration of technology, or whatever was being measured), I suspect a solid mixed methods study could be put together showing a positive correlation between this new systematic approach to PD and student (and teacher) learning, teaching, and technology integration.
http://uso.edu/network/workforce/able/reference/development/PD_Eval_Framework_Report.pdf
I encourage you to look through the entire booklet, but focus in on the “Description” section for each level:
- Level 1: Satisfaction (p. 6)
- Level 2: Learning (p. 8)
- Level 3: Behavior (p. 10)
- Level 4: Impact (p. 12)
In order to measure ETPD at levels 2, 3, and/or 4, what would you — as a professional development leader and researcher — need to be able to do with/for participants? In what ways would you need to collect data from them, over time, and across contexts? How would this align with the what (TPACK), where, and how noted above?
It states that Learning needs to be measured before the “event” begins, but perhaps this could be self-assessment. Again, as I mentioned earlier, learning can be “assessed” throughout and formatively, not just in a culminating test at the end when everyone else takes it. I would assume, for a topic such as ETPD, this is ongoing and spans a great deal of time, so I would have the opportunity to have check-ins and collect formative data: “interviews” (chats), observation, lesson plans, using a framework, going through the act of choosing/evaluating a tech. tool and integrating it with the teacher, viewing student data, talking with students, surveys, and even keeping records of email correspondence. Perhaps even co-teaching would fit into this as well.
Honestly, I think these would apply to levels 3 and 4 as well.