What’s Involved in Evaluating a Professional Development Program?


Today’s article, focused on the five phases of program evaluation, is Part 2 of a seven-part series on evaluating professional development. In the rest of the series, you’ll learn how you can apply these phases to evaluate your own programs.


What is Program Evaluation?

In Part 1, we learned that program evaluation means applying systematic methods to collect, analyze, interpret, and communicate data about a program in order to understand its design, implementation, outcomes, or impacts. In the field, evaluators talk about evaluation as a way to determine the merit, worth, and significance of a program. Simply put, it’s about understanding the quality of our professional learning programs, how valuable they are to our schools or districts, and how important they are to what we are trying to accomplish with teachers and students.


Program Evaluation Has 5 Key Phases

Evaluating a program consists of five interdependent phases that don’t necessarily occur in a perfectly linear fashion.

[Graphic: the five phases of program evaluation, shown as a cycle]

Phase 1: Engagement and Understanding

Program evaluation is most successful when stakeholders are involved and work collaboratively. Who should be involved? Administrators and teachers connected to the program, along with instructors and participants, will likely be called upon to offer or collect data, and should be included in planning the evaluation.

Think of a professional learning program in your school or district. Is there a clearly articulated description of the program? Are there stated and measurable outcomes? Does everyone involved with the program know what participants are expected to learn, how they might change their practice, and what student outcomes are expected as a result? Does everyone agree on these? Don’t worry! It’s quite common to answer “no” to one or all of these questions.

In a future post, you will learn about the importance of program descriptions and logic models. I’ll share how these tools can be easily created to promote shared understanding of our professional learning programs and how this sets us up to conduct high-quality evaluation.

Phase 2: Questions

Developing evaluation questions is foundational to effective program evaluation. Evaluation questions form the basis for the evaluation plan and drive data collection.

Conducting evaluation is much like conducting a research study. Every research study starts with one or a few broad questions to investigate. These questions inform how and from whom we collect data. The following are examples of the types of questions we might pursue in evaluating our professional learning programs:

  • To what extent is the program changing teacher practice?
  • What evidence do we have (if any) of student learning that may be attributable to the program?
  • How might the program be improved?

Phase 3: Data Collection

We collect data on professional learning programs to answer our evaluation questions, and every decision about which data collection strategies to use rests squarely on those questions.

  • Most people are familiar with surveys (also called questionnaires; check out my book on designing surveys), interviews, or focus group interviews, but data collection can go far beyond asking people questions.
  • Observations of a professional learning program in action can yield important insights into how the program is going and whether or not it appears to be on track to achieving its objectives.
  • Classroom observations can help us understand if and how well teachers are implementing new practices, whether there are barriers to implementation, and what might be getting in the way.
  • Teachers journaling about their learning, or creating lesson plans or other artifacts, can also demonstrate whether a professional learning program is working well.
  • And of course, student data — achievement, attendance, discipline, work samples, etc. — can also serve to help answer the evaluation questions.

Later in this series, we’ll offer a more in-depth look at the advantages and disadvantages of specific data collection strategies, along with ideas for exploring more innovative data sources.

Phase 4: Data Analysis

This is the phase that scares people the most. People often think they need to understand statistics or have advanced spreadsheet skills to do data analysis. They worry that their datasets aren’t perfect or that they haven’t collected data in a “scientific” enough way. They are concerned about whether their data is reliable and valid, especially if it is qualitative and perceptual data, such as answers to open-ended questions from surveys or interviews.

These concerns are understandable, but in truth, there’s no need to get worked up. In a future post, we will put all of these fears to rest!

Given the types of data we use to evaluate professional learning programs, we rarely need statistics beyond simple frequencies and averages. And datasets are seldom perfect. When we conduct surveys, for example, we find that some people don’t answer some questions. Others misinterpret questions or clearly make mistakes answering them.
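To make that concrete, here is a minimal sketch (in Python, using only the standard library) of the kind of analysis most evaluations actually require: counting how often each response option was chosen and averaging the ratings for a single survey item, while skipping the questions people left blank. The item and the responses are hypothetical examples, not data from a real evaluation.

```python
from collections import Counter

# Hypothetical responses to one Likert-style survey item:
# 1 = Strongly disagree ... 5 = Strongly agree; None = question left blank
responses = [5, 4, 5, None, 3, 5, 4, None, 5, 2]

# Ignore blank responses rather than discarding the whole survey
answered = [r for r in responses if r is not None]

# Simple frequencies: how many people chose each rating
frequencies = Counter(answered)
for rating in sorted(frequencies):
    print(f"Rating {rating}: {frequencies[rating]} respondent(s)")

# Simple average of the ratings that were actually given
average = sum(answered) / len(answered)
print(f"Average: {average:.2f} (based on {len(answered)} of {len(responses)} responses)")
```

That is usually about as sophisticated as the arithmetic needs to get; the harder work is interpreting what the numbers mean for the program.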

On one feedback form after a very well-received workshop, a participant checked “Strongly disagree” for every statement when it was clear that “strongly agree” was the intended answer. How did I know this? Between the statements were comment boxes filled with glowing praise about how much the participant enjoyed the workshop, valued the materials and loved the instructor. It was clear the person mistook “Strongly disagree” for “Strongly agree” based on the location of those responses on the sheet.

Phase 5: Reporting and Use

Evaluation should be conducted with an emphasis on use. Once we interpret the data and glean insights that can inform future decisions about a program, we need to consider how to report new learning to key stakeholders. The formula for effective reporting includes:

  • identifying appropriate audiences for evaluation reports,
  • understanding their information needs, and
  • knowing how they consume information.

Are you reporting to a Board of Education? A superintendent? A group of administrators and teachers? Do they need all the details, just a few key data points, or a brief summary of results? Knowing our audience and how to engage them informs how we create reports, and reports can come in a wide variety of formats. Here are just a few examples:

  • Presentations
  • Documents
  • Infographics
  • Podcasts
  • Webpages

Evaluation as an Iterative Process

Earlier, I mentioned that these phases aren’t necessarily linear. In the graphic, you see them as a cycle where Reporting and Use points back to Engagement and Understanding. As we complete an evaluation for a program and make decisions about its future, we may enter another evaluation cycle. Also, as we collect data, we may analyze and report on it even as the evaluation work continues, thus revisiting Phases 3, 4 and 5 multiple times in one evaluation cycle.


Next up in the series

In Part 3, we go deep into ensuring that everyone has a shared understanding of how our professional learning programs are designed to influence change in teaching practice and student learning.

Sheila B. Robinson

Sheila B. Robinson, Ed.D., of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design. She blogs about education, professional learning, and program evaluation at www.sheilabrobinson.com. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active member of the American Evaluation Association, where she is Lead Curator and content writer for its daily blog on program evaluation and Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and on professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and her Program Evaluation Certificate. Her book, Designing Quality Survey Questions, was published by Sage Publications in 2018.