Framing the Assessment of Educational Technology Professional Development in a Culture of Learning
Melissa Pierson
University of Houston
Arlene Borthwick
National-Louis University
2010. Journal of Digital Learning in Teacher Education, 26(4), 126–131.
Abstract
Assessing the effectiveness of educational technology professional development (ETPD) must go beyond obtaining feedback from participants about their level of satisfaction with a workshop presenter. Effective and meaningful assessment of ETPD requires that we design inservice learning activities that can be measured using methods that are consistent with what we know about teaching and learning, recognize teacher and student change as it relates to the larger teaching and learning context, and view evaluation as an inseparable component of ongoing teacher action. We therefore offer for consideration an ETPD assessment model that merges three theoretical constructs through which professional development consumers might interpret research findings: (a) technological pedagogical content knowledge (TPACK), (b) organizational learning, and (c) participant research and inquiry.
(Keywords: professional development, assessment, TPACK, action research)
We’ve all been there. As you gather your notes and belongings at the end of a long day of learning some trendy new technology tips and tricks, someone hands you a page with a few questions to answer: What did you learn today? How likely are you to use what you learned? Was the presenter effective? You’re anxious to get home, eyes weary from peering at the computer screen all afternoon, so you do your best to circle the numbers that would be complimentary to the presenter—she tried hard to be funny and keep your attention, after all—but you skip past the large, open-ended comments box in your rush to get out the door.
Or maybe you are the professional developer charged with collecting data to evaluate the outcomes of a grant-funded project. You plan ahead with teachers and other school staff. You provide an introduction during large- and small-group activities. Later you proceed to “collect data” in classrooms, but instead you find yourself helping students and teachers with new software features and collaborative student projects. In the thick of supporting successful classroom implementation, you might find yourself setting aside your observation protocol. Soon the class period is over, and students have no time to enter comments for the daily journal prompts you developed (those data you planned to use for the evaluation).
These scenarios are all too common, and yet these data—haphazardly solicited and carelessly offered—may be the only information we have for judging the effectiveness of our efforts in preparing teachers to teach with 21st-century skills and tools. Any conclusions we draw from these data are necessarily limited, skewed, and of questionable validity. In the first scenario, the data likely say more about how personable the presenter was, or how tired the participants were, than about how well facilitated, well designed, or well matched to learners’ needs the learning experience was. In the second scenario, the presenter frets over lost opportunities for meaningful data collection, acknowledging that the data set will be smaller than desirable.
Assessing the effectiveness of educational technology professional development must go far beyond obtaining feedback from participants about their level of satisfaction with a workshop presenter. The “impact the professional development activity had on pedagogical change or student learning” (Lawless & Pellegrino, 2007, p. 579) is of particular importance to our field’s efforts to prepare teachers at all levels to use technology effectively. However, basing our evaluation on incomplete data leads us to an incomplete understanding of, and incomplete assumptions about, the roles of professional developers, teacher educators, and teachers.
The genesis of our thoughts on the assessment of professional development in educational technology was our work in co-editing the text Transforming Classroom Practice: Professional Development Strategies in Educational Technology (TCP) (Borthwick & Pierson, 2008). Through the solicitation of proposals and collaboration with chapter authors, what became clear to us was the necessary connection between professional development models and the assessment strategies used to evaluate each model.
Effective Evaluation of Educational Technology Professional Development
The climate of accountability in the early 21st century has heightened the awareness of stakeholders at all levels of the education system to the need for data. These many audiences for the findings of educational technology professional development (referred to as ETPD throughout this article) demand more than descriptive studies reporting isolated anecdotal narratives, even if those narratives share compelling stories of success. Clearly, a planned evaluation strategy, in place from the inception of the project, could assist professional developers in understanding the extent to which ETPD is effective, rigorous, and systematic.
Unfortunately, the field has not yet arrived at evaluation that we can term effective, rigorous, and systematic. Instead, PD efforts are deemed effective based on teacher self-report and opinion—those data so easily collected on photocopied surveys and so unrealistically depended upon as meaningful facts. These means fall considerably short of demonstrating “whether professional development has changed practice and ultimately affected student learning” (Hirsch & Killion, 2009, p. 468) and have led to a literature base of well-intentioned descriptions of promising, yet isolated, implementations and informal lessons learned from individual programs. This “show-and-tell” line of publication falls short of rigor, lacking stated research questions, planned designs, and multiple, well-matched forms of data collection. It hinders the flow of research into practice, leaving educators to wonder what PD workshops they should attend to improve their teaching or their students’ learning, as little or “virtually no information exists to help consumers of professional development” (Hill, 2009, p. 473). Further corroborating the troubling state of professional development literature, a 2009 review of published accounts of professional development with respect to student learning outcomes found that “only nine of the original list of 1,343 studies met the standards of credible evidence set by the What Works Clearinghouse” (Guskey & Yoon, 2009, p. 496).
Researchers have suggested frameworks to guide the assessment of ETPD. Desimone (2009) suggests that researchers seek a consistent framework to enable PD to be “based on a strong theoretical grounding” confirmed through multiple methods, including case study, correlational, and experimental approaches (p. 186). Lawless and Pellegrino (2007) have recommended an evaluation schema that addresses (a) types of professional development, (b) unit of analysis, and (c) designs and methods. Much earlier, professional developers had been advised to make concerted efforts to systematically collect data on professional development in terms of teacher and student outcomes (Guskey, 2000). The persistent challenge for professional developers as consumers of evaluation research reports is sifting out effects on teaching and learning that can be attributed to technology use from those attributable to other initiatives. Along these lines, Pianta and Hamre (2009) assert that the value of “observational assessment of teachers for leveraging improvements in educational outcomes is that they can be directly related to the investigation and experimentation of specific interventions aimed at improving teaching” (p. 115).
Effective and meaningful assessment of ETPD requires that we design inservice learning activities that can be measured using methods that are consistent with what we know about teaching and learning; recognize teacher and student change as it relates to the larger teaching and learning context; and view evaluation as an inseparable component of ongoing teacher action. We therefore offer for consideration an ETPD assessment model that merges three theoretical constructs currently enjoying much note and utility, through which professional development consumers might interpret research findings: (a) technological pedagogical content knowledge (TPACK); (b) organizational learning; and (c) participant research and inquiry.
Evaluating PD According to TPACK: The What
Professional development comes in all sizes and flavors, and to make an accurate assessment of the quality and impact of an activity, professional developers must consider the variety of ways teachers learn and the range of variables that can affect teacher learning. However, simply recognizing that there are differences among professional development attributes, and recognizing just how those attributes can be interrelated for effective technology use, are two very different things. Layering any examination of ETPD findings with the TPACK model (Mishra & Koehler, 2006; Pierson, 2001) provides a helpful lens through which to view the process in light of current pedagogical thinking for 21st-century learners and teachers. The fields of educational technology and teacher education have come to agreement around the concept of TPACK to describe the meaningful use of technology in teaching and learning. Derived from Shulman’s (1986) notion of teaching as the intersection of content knowledge and pedagogical knowledge, the definition of 21st-century teaching also demands excellence in technological knowledge. True technology integration is said to be at the intersection of all three elements (Pierson, 2001). Further, the intersection of any two of the elements defines worthwhile knowledge sets: technological content knowledge, or technologies used for specific content applications; and technological pedagogical knowledge, or technology use for specific pedagogical purposes. Evaluating the effectiveness of professional development, then, must consider how well teachers are prepared to meaningfully use technologies in discipline-specific ways as well as in ways that are compatible with multiple teaching and learning strategies and roles.
If the elusive goal of effective technology-integrated teaching is found at the intersection of content, pedagogy, and technology, then it follows that effective assessment lies at this same center (see Figure 1, p. 128). Assessment is an integral and inseparable part of the curriculum development and teaching process—one leading to the next and cycling back again. And if effective technology integration—and assessment of such—is the goal of educational technology professional development, then these elements should be prominent in any evaluation model. Lawless and Pellegrino (2007) hint at this importance when they say that “the most important impact a professional development activity can have on a teacher is that of pedagogical practice change ostensibly reflecting a deeper change in pedagogical content knowledge” (p. 597). Likewise, Guskey and Yoon (2009) found that PD projects are effective when they focus on enhancing teachers’ content knowledge and pedagogical content knowledge.
So, as the field continues to explore the usefulness of the TPACK construct to define teacher knowledge with technology, it must also push that exploration into how TPACK can shape evaluation of ETPD efforts.
Evaluating Professional Development within the Context of Organizational Learning: The Where
The problem with those descriptive studies of isolated pockets of successful professional development is that the authors are not always clear enough about—and readers therefore make incomplete assumptions regarding—the surrounding context of school, student, and administrative factors. Of course, a highly successful effort in one instance may not work as well, even if implemented to the letter, in a context with less support, funding, or engagement. In short, context matters, for both ETPD implementation and assessment. To assess the effectiveness of professional development in leading teachers to long-lasting gains in knowledge, attitudes, and instructional behaviors, we must examine supporting factors within the teaching and learning context.
Professional development has been characterized in the literature as a variety of leveled developmental models with fixed sequences of stages and levels of knowledge and skills acquisition (Dall’Alba & Sandberg, 2006). Early frameworks for understanding how teachers learned to use technology aimed to shift the focus from the technology tools themselves to the teachers and their developmental needs, and in doing so, they addressed the uniqueness of learners who participate in any professional development session. The role of the school organization, then, was to address these individual needs, moving teachers to more advanced stages.
However, although assessing teachers’ needs, as well as understanding the types of assistance teachers might require and concerns they might have based on developmental levels, are indeed necessary steps, the focus on a single teacher’s needs and progress cannot reveal all requirements for success. In fact, the implied linear nature of these staged models may conceal “more fundamental aspects of professional skill development” (Dall’Alba & Sandberg, 2006, p. 383). Rather, we need to examine the learning of an individual within the learning of the system as a whole; this implies a shift toward thinking about how individuals fit into the larger organization and the additional learning that must take place on the organizational levels. Successful educational technology PD initiatives are characterized by an expanded, informed, and connected view of learning on both the individual and the organizational level.
A systems view suggests a focus not only on individual teacher and student growth, but also on changes in organization policies and procedures, infrastructure, curriculum and instruction, expectations for stakeholders, and organizational climate (Newman, 2008). Teacher growth and change will not be sustained without organizational support and, in fact, may be sabotaged (Borthwick & Risberg, 2008). Harris (2007) recommends that each school or district make deliberate choices about particular professional development methods and strategies—selecting, combining, and sequencing them to craft an overall approach based on its unique needs as a learning organization. Trying out new technologies requires teachers to assume a level of risk-taking; to allow for success, teachers will need to work in “a climate of trust, collaboration, and professionalism” where administrators “promote technology-related risk taking among teachers on behalf of students” (Borthwick & Risberg, 2008, p. 39). Desimone (2009) sees context as “an important mediator and moderator” for implementation of professional development models (p. 185), and Zhao, Frank, and Ellefson (2006) go even further, suggesting, in correlating classroom technology use with professional development activities, that “schools should develop a culture instead of a program of professional development” (p. 173). Evaluation of ETPD effectiveness must, then, include an assessment of the context in which that development occurs. In that way, consumers of ETPD findings can determine how a successful project—locally situated within a school whose staff are committed to continuous learning—might be scaled to different contexts.
Evaluating Professional Development through Practitioner Research: The How
Change in pedagogical practice is the ultimate goal of professional development. However, few studies use data on teachers’ classroom use to inform their practice (Lawless & Pellegrino, 2007). So, even if professional developers ensure that assessment of ETPD focuses on teacher knowledge (TPACK) and occurs in a supportive context, how can they feed the results back into practice and allow that practice to inform continued research? The simple answer is that the two—research and practice—must in fact be one and the same.
The evaluation of such a complex educational endeavor as ETPD must employ multiple, flexible, systematic, and rigorous methods. It is the latter two on this list that are infrequently found in reports of ETPD. Yet, if approached from a rigorous stance, even such research methods as action research and case study can meet the “platinum standard” of research. Editors of journals in technology and teacher education propose: “The platinum standard requires rigorous research in authentic school settings that approaches idealized designs as nearly as possible, given the constraints of schools and real-world learning environments” (Schrum et al., 2005, p. 204). These editors recommended undertaking research based in authentic classroom settings as long as it is grounded in theory and builds upon the existing knowledge base. Operating within a platinum research standard opens up as “acceptable” those forms of experimental research that do not require a control group. The approach is a more reasonable methodological goal for classroom-based research than a true experimental model, which is often referred to as the “gold standard.”
In particular, action research, with its practical and indigenous classroom applications (Nolen & Putten, 2007, p. 403), can provide a framework to explicitly connect professional development to evaluation methodology. As explained by Ham, Wenmoth, and Davey (2008), a self-study approach might not only be the professional development method but also serve as a form of assessment, leading to larger, common answers about student outcomes. Such a spiraled sequence of inquiry, data collection and analysis, and reflection uses the results of systematic inquiry to inform and lead into the next phase of questioning. The role of teachers as participant researchers is critical to the diagnosis of learning outcomes, identification of subsequent instructional strategies, and input to policy development (Borthwick, 2007–2008). Participant research as a 21st-century professional development assessment model allows teachers as professionals to look at their practice in new ways (Linn, 2006); respects teacher knowledge and experience; and provides a long-term as well as immediate evaluation and feedback loop, with small findings continuously driving the next steps of instruction (Fullan, Hill, & Crévola, 2006).
In other words, instead of just outside evaluators coming in to assess teaching practice at the end of a prescribed period and some time later feeding those findings back into another professional development workshop, teachers as researchers constantly ask questions of their teaching, collect and analyze multiple forms of data, collaborate with one another, and feed ongoing findings into tomorrow’s teaching plans. This metacognitive approach to the evaluation of professional development enables teachers’ lifelong learning, thus extending the reach of every formal professional development effort.
Unfortunately, in actual practice, classroom teachers are rarely supported or encouraged to engage in research or scholarly writing; thus, this metacognitive approach may never come to be. Ideal partners to facilitate this process are those in higher education—researchers in educational technology and related fields who are regularly engaged in such scholarly activity (Cunningham et al., 2008). Teachers who have had the opportunity to collaborate and co-teach with such outside technology partners were able to experiment with teaching with new technologies (Zhao, Frank, & Ellefson, 2006). Such school–university partnerships can create the framework for ongoing co-research habits that will continually inform classroom practice and research alike.
A Model for Effective, Rigorous, and Systematic Evaluation of Educational Technology Professional Development
Assessment of ETPD must recognize the interaction among the TPACK variables of content, pedagogy, and technology to understand the extent of meaningful technology-supported teaching. Assessment of ETPD must consider the teaching and learning context, both locally and broadly. And assessment of ETPD must close an action research loop joining research with practice. We propose extending the TPACK model, which defines effective technology-enhanced teaching, to guide the assessment of ETPD when supported by the surrounding “frame” of context and participant research, specifically phases of Reflection, Inquiry, Collaboration, and Sharing (see Figure 2, p. 130). In this model, individual and organizational learning occur again and again as educators and their research partners engage in the action research process in light of the organizational context, thus positioning assessment of ETPD as a culture of learning.
This contextually situated and inquiry-framed model of ETPD positions TPACK not only as the center of a conceptual model for effective professional development, but also as the center of a comprehensive plan for assessment embedded in the work and culture of teaching. Further, it implies that successful assessment of PD efforts must have at its center the basic definitional components of the TPACK elements. It must also resemble the iterative action research cycle: encouraging PD facilitators to begin any staff development session by asking participants to think about what they can learn from it and how what they will learn is situated in the work they already do, by posing questions about how teaching and learning can improve, by collaborating with peers and more experienced colleagues to solve problems of practice, and by evaluating and sharing findings with one another as part of an ongoing effort at collective improvement. Such a cycle of inquiry exemplifies Fullan, Hill, and Crévola’s (2006) Breakthrough model, in which “teachers will operate as interactive expert learners all the time” (p. 95). Observing and documenting teacher behaviors “in the thick” of classroom practice, whether done by teachers as participant researchers or in collaboration with research partners, suggests the most direct route to evaluating the effects of both formally supported and personally initiated professional development.
The Utility of This Model
The potential power of ETPD to enhance teacher knowledge and skills, and thus improve student learning, means it is worth our time to understand what works and in what contexts. Although there are some good, reliable tools available, “they have not been subject to repeated use and validation and are not widely available” (Desimone, 2009, p. 192). Collaborative research efforts such as the Distributed Collaborative Research Model (DCRM) (Pierson, Shepherd, & Leneway, 2009) show promise in providing guidance about how assessment of ETPD might meet the “platinum standard” of educational technology research. The intentionally collaborative nature of DCRM embeds the capacity for rigorous and valid practice into the research design from the outset. Partnerships are at the heart of this planned collaborative research, both higher education partners working with school partners and partners collaborating across distant locations to build a larger data set from which to speak more broadly about joint findings. The DCRM model further recommends consistent methods and data collection tools to facilitate interinstitutional collaborations. Assessment tools with which we are familiar, including observations, surveys, interviews, and text and video analysis, all have value.
The quest for rigor will not be easy. Chapter authors in our TCP text spent much more time describing the activities undertaken as part of the implementation process of their model than providing evidence of student achievement. There are several logical explanations for this, including minimal budgets for program evaluation and minimal time for participant observers to collect data while they are in the thick of preparing for and implementing professional development activities. But our expectations as consumers of professional development reports need to change. We must expect a reporting of evaluation measures, something that is now rarely done (Desimone, 2009). We must expect to see detail about “what works, for whom, and under what conditions” (Lawless & Pellegrino, 2007, p. 599). We must expect that the ETPD community will share findings among themselves, whether in a clearinghouse (Nolen & Putten, 2007) or in database format, such as the Action Research for Technology Integration (ARTI) database (http://etc.usf.edu/fde/arti.php) at the University of Florida (Dawson, Cavanaugh, & Ritzhaupt, 2008).
Instead of viewing the complexity of classroom implementation of new instructional approaches as a detriment, we must learn to base ETPD in the heart of where the action takes place. And in fact our proposal of a contextually situated and inquiry-framed TPACK model as a template for the design of ETPD capitalizes on the work of teachers and research partners who embrace an inquiry approach to teaching and learning, connecting systematic evaluation of their participation in professional development activities directly to their effectiveness in the classroom. No matter what format PD takes, through all types of partnered approaches—including mentoring, peer coaching, students as professional developers, and professional learning circles—a contextually situated and inquiry-framed model of ETPD assessment can scaffold the assessment process. If we seek validity in our work and reporting, we must commit ourselves to systematic study of our work and documentation of related outcomes. In this way, our expectations as consumers of—and participants in—professional development evaluation can change.
See references and additional information in the original article.
There are four questions in the reading task this week. Please answer each of my original questions individually (4 responses) and then reply to two of your classmates initial comments in any of the four elements in a substantive manner (2 responses).
4 initial posts + 2 responses to classmates = 6 total comments
Before delving into the article, consider the initial argument here… how well do typical end-of-session surveys do at measuring the deeper impact of professional development?
What are the other measures — both quantitative and qualitative — that we would need to see, over time, in order to draw valid conclusions about the long-term effects of a professional development program?
End-of-session surveys are unreliable. They make me think of end-of-course evals. Those that love and/or hate the session will tell you their thoughts on how they felt about it, not whether or not learning took place or if that learning was actually applied in the classroom setting. And while I know this is a generalization, it is a tad bit true. I think we have to really “begin with the end in mind” when it comes to teacher PD. So often one-and-done PD sessions are drop in, learn this shiny new thing, and go back and do it on your own. What do we value? What is the end goal? How can we personalize learning for individual teachers yet also all work toward a shared vision for the building/district? How are we going to know when we’ve arrived at said goal? How will we measure our success? Is it test scores? Student growth on defined outcomes? Teacher comfort? Is simply trying a new strategy a few times and reflecting on the experience enough? Overall, I think these items are harder to measure. They take time, effort, and forethought on the part of building/district leadership.
Kirkpatrick Levels are represented by stages of initial reaction, learning, behavior, and results. As a developer of instructional technology, it is important to determine the level to which people are engaging with your resources and the ways to demonstrate ongoing learning and use of the technology. Some of these steps can be accomplished through self-reported data, but more advanced steps require observation and quantitative data to supply the full picture of the process. It’s an involved process in the business world and, though it tends to garner success, I do not envision this measurement method migrating to schools because of the limited budget/tolerance for support of the professional development of teachers beyond the initial offering and the threats of “accountability” assessments.
I have not been to a PD session where the end survey was intended to measure anything deeper than the presenter’s approach and/or the thoughts I have about the topic/content right at that moment. I’m sure if this is a series of development opportunities that will be offered, it is beneficial to know if the presenter is engaging in order to continue with the next presentation. However, long-term measurement of my use of the tool once I leave the instruction room and return to my classroom is lacking. A follow-up survey or email a week or two, or even a few months, later to ask me a couple of quick questions about how I’m still using the tool would be much more meaningful in terms of actual data related to the PD program. Reaching out to offer support and help along the way would aid my use of a tool over time as well.
While I don’t expect that you would read this entire document, please do read pp. 7-8 and then return here: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_procedures_v3_0_standards_handbook.pdf
Now that you have a sense of how the WWC reviews studies, why do you think it would be particularly difficult to set up a study of effective professional development that meets the standards (notably, the criteria for randomized-controlled trials or quasi-experimental designs)?
In other words, if holding to a statistically viable understanding of “what works,” why is it that we are very unlikely to ever really know what works with technology implementation based on professional development?
PD is designed to help everyone improve. It doesn’t seem possible (or likely) that a building/district would purposefully provide a learning opportunity to one group of teachers and not provide that same opportunity to another group. If the goal is to improve as professionals, it is inequitable to only provide training and support to a select group of educators. So while the science major in me understands why we would conduct a randomized-controlled trial, I can’t wrap my head around how to do this when it comes to PD. I know that what is fair isn’t always equal, and what is equal isn’t always fair, but if the end goal is improving student learning, how can we justify not providing a service/experiences/opportunities to a group of students? Additionally, I don’t see how we can have a random sample and personalized learning experiences for teachers.
Just like beta-testing, pilot studies, and envoys, it’s fine to have a set of people who get a PD opportunity to which others are not invited. While it seems unjustified in the world of public education where we leave no one behind, we are not engraving that designation in stone or casting the rest into a leper colony.
If School A gets a PD session that School B does not, everyone will survive. After a study on the impact of the PD in comparison to the absence of it, a decision can be made whether it demonstrated enough value to provide for all schools in the district. On the other hand, results may suggest that the only teachers who engaged in the learning activity and incorporated it into their classrooms were teachers with 5-10 years of experience, theoretically because newer teachers were too overwhelmed and hanging on for dear life and older teachers weren’t open to the newer ideas. The district would be wise to manage its resources by providing the training in other schools to teachers at that experience level, though others might attend at will, until the session limit was reached. Of course, this type of approach is not a once-and-done activity. It is something that takes two or three years to effectively establish from beginning to end.
The biggest trouble I have with building-level or district-level (sometimes even regional-level) PD is that they provide exactly the same thing to everyone. It doesn’t matter which building you’re in or the makeup of your student population. It doesn’t matter what you teach and whether the PD they are providing works for your subject matter. It doesn’t matter if you already do this thing they are preaching about or if you’ve never heard of it before. It’s like the admins pat themselves on the back for offering the PD and never gauge whether it was useful to the majority of the teachers who attended. We cannot randomly assign teachers to PD – they need to meaningfully and purposefully choose what they learn about because then they are most likely to actually try it in the classroom. Isn’t that why when you go to MACUL or another conference, they show you the sessions available and you choose what works? If they randomly assigned you to sessions, they’d see a decline in the attendance. Random PD removes just a bit of teacher autonomy in the classroom and says that you should learn about X topic even if it doesn’t apply to what you do. We have this fight in our building every time the ISD comes in to teach us about something – our band and art teachers have a fit because they start talking math and ELA testing and those two feel their time is wasted.
A message has been sent and understood that professional development is needed to create improvement. This is likely based on a request from the teachers for support and resources, or it could be spurred by a district-level attempt to mandate levels of engagement which are not being witnessed.
The fact that a cursory “assessment” is done demonstrates the disrespect for the request and the limited level of actual support, whether it is on the basis of teacher needs or district goals. When the experience is driven by the teachers it is intended to serve and represent, the outcomes are more likely to improve. When the district becomes realistic about its expectations and the workload they represent, there can be better application of the concept.
“could be spurred by a district-level attempt to mandate levels of engagement which are not being witnessed.”
This is the hardest thing to do, isn’t it? Sit through PD that the district says you need, but that no one can make “fit” to what you do? The district says we educators need PD on X topic so they bring in someone. That someone knows how to make X topic fit in a math or ELA classroom because that’s what they used to teach. When Spanish asks how to make it fit, or art does, they have no idea and now those teachers are already put-off from the development opportunity in front of them. I would argue that the best PD always stems from the desires of the teachers rather than top-down.
While TPACK has value, its focus on knowledge rather than application reveals its shortcomings as a framework for assessing professional development. Although the next sentence acknowledges the use of technology, that use is not embedded in the TPACK model, which focuses on the acquisition of knowledge, not the demonstration of it or the assessment of that demonstration.
Ha ha! Wouldn’t it stand to reason that if you provide teachers with PD that is specifically related to what they do every day, it would be most effective? The challenge then becomes offering so many opportunities. In our building, we have 1 science teacher, 1 math teacher, 1 social studies teacher, etc. So either our admins have to collaborate with others around the region to pool teachers together in PD which ties to their content knowledge or pedagogical strategies for teaching their content OR… they just bring in something generic and present it to all building teachers with the hope that we can extrapolate what’s important and apply it ourselves to the subject we teach.
For me, it’s not a lack of support or encouragement that prevents me from doing some of my own research and data collection – it’s time. There are so many other things that are required of me that I have to prioritize my time, and gathering data I don’t officially need to report to anyone doesn’t rank very high. I’ve read a few articles about providing teachers with training about something and using them as action research participants to see how it continues, and it can work, but teachers must devote a lot of time to that.
“Why aren’t qualitative studies reviewed by the WWC? The goal of the WWC is to assess evidence of program effectiveness. Therefore, studies included in WWC reviews are based on research designs (randomized controlled trials, quasi-experimental designs, regression discontinuity designs, or single-case designs) that are widely believed to assess the impacts of programs on outcomes of interest through the identification of credible counterfactuals (what would have been observed in the absence of the intervention). Qualitative studies are useful for many things, such as understanding context, assessing the fidelity of implementation, or identifying relationships for further study. However, qualitative studies do not allow for an assessment of effectiveness resulting from a credible comparison of outcomes between program and counterfactual conditions. For this reason, they are not included in WWC reviews.”
What could be done – in this model of ETPD – to mitigate some of the (potentially negative) effects of qualitative elements of the research? In other words, how could qualitative data be used in a productive way to complement the quantitative data being gathered?
In response to the surveys in EDU 802, we gained numerical data points on a continuum expressed as a Likert scale. In my study, I included an open-ended question, which yielded some helpful information that I was able to use in the Discussion and Conclusions sections of my report. Any time that we proceed with a quantitative study, particularly one based on surveys, we presume to understand the parameters and nuances of the situation well enough to prescribe the exact options that would cover the most likely responses. However, we cannot always be assured of such clairvoyance.
I would posit that a qualitative study can demonstrate both rigor and validity if conducted under specific circumstances. The assessment of qualitative responses can be enlightening, as well as quantified and differentiated. In a very real sense, qualitative studies are likely to provide significant value as precursors to quantitative studies, developing the parameters and the targets of important research without a lot of the guesswork in designing the right study.
Interestingly, I read some studies which used qualitative observation just to verify what had been gathered in a quantitative survey – and they found several discrepancies. Surveys are good for data and numbers, but those who are reporting those numbers don’t always see their own bias or know if/when they do not represent their true selves. Qualitative observation can simply serve as a double check for that information – so it wouldn’t be the only way data is collected, but it would help to reaffirm the validity of the responses received rather than simply trusting them. Not only would observing a teacher in the classroom be helpful, but it would also be good to see how the teachers are working together to share findings and grow. That cannot be seen easily on a survey.
http://uso.edu/network/workforce/able/reference/development/PD_Eval_Framework_Report.pdf
I encourage you to look through the entire booklet, but focus in on the “Description” section for each level:
- Level 1: Satisfaction (p. 6)
- Level 2: Learning (p. 8)
- Level 3: Behavior (p. 10)
- Level 4: Impact (p. 12)
In order to measure ETPD at levels 2, 3, and/or 4, what would you — as a professional development leader and researcher — need to be able to do with/for participants? In what ways would you need to collect data from them, over time, and across contexts? How would this align with the what (TPACK), where, and how noted above?
I just had to get that on the table first because the academic integrity of what I read in the document above made me boil. They even credited reading the 2006 version of his book on the topic, took everything, relabeled it, and published it as their original plan. I growl….
On the point of the question, data collection, according to this method, involves a survey in the PD environment with specific language about the experience. A follow-up is done about a week later, but in a different environment, to determine whether perceptions and dedication to the model of learning have changed. The third level of behavioral evaluation is done by the supervisor and is based on the output of the individual following the PD experience. The final stage is done through a triangulation of self-reported progress since the PD, observations by a supervisor or an in-person independent evaluator, and statistical data. This provides a quantitative structure for evaluation, embedded with observation data (which is qualitative) and the survey data, which, when coupled with the preceding survey responses, establishes a pattern of behavior and change over time (impact).
In reading through their document, I feel like they provided enough credit to Kirkpatrick for the structure of their ABLE plan. Relabeling and remixing to create something useful to them seems to be a fair way to use his research and they mentioned in several places that their model was heavily based on his and cited him several other times for his contributions and original ideas. I think that this is done in research a lot… while, yes, we still go back to good ol’ Maslow, there are more updated versions of his work that can be more accessible in the language and structure that people prefer to use. Heck, even TPACK has already received a facelift and Mishra and Koehler generally have a 2006 date on it which isn’t that old.
Since our district is heavily invested in the Marzano method for teacher evaluation, my first thought related to evaluating learning is using a goal-and-scale type response as a pre/post assessment for the training. Gathering a numerical response about where participants are before it starts and then seeing if they improve in their understanding of the content once they’ve heard it all would be an easy way to see growth in their learning. A level 3 evaluation would be similar to something I suggested earlier about sending a survey to participants some time later – maybe 3-4 weeks – to ask about how they are using the information they gathered. I feel as if the level 3 data can be gathered through quantitative surveys and maybe even qualitative observations, but level 4 has to come from another source. In order to see the impact, student assessment scores could be tracked over time to see their learning. Or even the evaluation model of the teacher could be tracked for growth by an administrator. This pairs well with the concepts noted previously because about half of the responses to the levels would come in the moment of training at that site, while the latter half comes from an in-situation experience of the educator using the new concept.
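To make the pre/post and follow-up analysis described above concrete, here is a minimal sketch, assuming hypothetical data: invented participant records that pair a goal-and-scale self-rating (0–4) collected before and after the session with a follow-up question about classroom use a few weeks later. The field names and numbers are illustrative only, not drawn from the article, and Python is used simply for illustration.

```python
# Minimal sketch with hypothetical data (not from the article):
# each record pairs a pre/post goal-and-scale self-rating (0-4) from the
# PD session with a delayed follow-up report of classroom uses per week.
from statistics import mean

responses = [
    {"id": "t01", "pre": 1, "post": 3, "uses_per_week": 2},
    {"id": "t02", "pre": 2, "post": 2, "uses_per_week": 0},
    {"id": "t03", "pre": 0, "post": 3, "uses_per_week": 4},
]

# Level 2 (learning): growth on the goal-and-scale rating from pre to post.
growth = [r["post"] - r["pre"] for r in responses]

# Level 3 (behavior): share of participants reporting any classroom use
# on the delayed follow-up survey.
adopters = [r for r in responses if r["uses_per_week"] > 0]

print(f"Mean growth on the scale: {mean(growth):.2f}")
print(f"Reported classroom adoption: {len(adopters)} of {len(responses)} participants")
```

Level 4 (impact) data, such as student assessment scores or teacher evaluation ratings tracked over time, would have to come from separate sources, as noted above.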