Evaluating Professional Development for Teachers? Evaluation Questions Are the Linchpin.

W. Edwards Deming, the famous American management consultant, once quipped, “If you do not know how to ask the right question, you discover nothing.”

Evaluation questions form the cornerstone of professional development program evaluation. They both frame and focus the work, pointing us in the right direction for data collection. After all, how would we know what data to collect, and from whom, if we haven’t settled on the questions we’re asking? Crafting the right questions for a particular evaluation project is critical to an effective evaluation effort.

What are evaluation questions?

Think of evaluation questions as research questions. They are broad questions that usually cannot be answered with a simple yes or no, and require collecting and analyzing data to answer. Most importantly, these are not the individual questions we would ask someone on a survey (we’ll get to those in the future!).

Evaluation questions are big-picture questions that get at program characteristics and are evaluative. That is, the answers to these questions help us understand the importance, quality, or value of our programs.

Imagine you are evaluating a professional development program. What do you need to investigate? To answer this, let’s take a quick step back. The previous article in this series showed how to engage stakeholders and generate shared understanding of how our professional learning programs work by creating program descriptions, logic models, and a program theory. Now, we’ll see what you can do with these products to focus your evaluation!

Identifying information needs and decisions

First, consider what you need to know about your professional learning program. This may depend on what decisions you (or others) have to make about it. Do you need to decide whether to continue offering the program? Offer it to an expanded audience or at multiple sites? Eliminate it altogether? Try a different program to address the problem at hand (e.g., improving middle school writing skills)?

Once you’ve identified decisions that need to be made, revisit those three products — program description, logic model and program theory. What are the implicit or explicit assumptions being made about the program? For example, does the program theory state that the professional learning will change teachers’ thinking? Encourage them to use new strategies or resources in their teaching practice? Does the logic model identify certain expected outcomes for students?

Determining the questions

You may be thinking at this point, “Well, we just need to know if our program is working.” To that I would ask, “What do you mean by working?”

“Well,” you might say, “We want to know if the program is effective.” And I would answer with another question: “What do you mean by effective?”

And so it would go until you can define and describe exactly what you are looking for.

Again, revisit your three documents. As you review the program description and program theory, what clues do you have about what it should look like if the program is working or effective? Try to describe this scenario in as much detail as possible. Here’s an example:

If our professional learning program on writing instruction for middle school teachers is working (or effective, or successful…) we would hear teachers saying that they’ve tried the new strategies they learned, and are now using them in their daily practice. They would be able to show us how they are now teaching writing using the new templates. They would be able to show us “before” and “after” examples of student writing, and be able to describe in specific ways how students’ writing has improved.

Once we have defined success, we can turn these ideas into evaluation questions. One of my favorite tricks for doing this is to ask “To what extent…” questions. For the example above, these questions might look like this:

  • To what extent are teachers using the new writing instruction strategies?
  • To what extent are teachers using the writing template?
  • To what extent are teachers able to identify and show specific improvements in students’ writing?

Posing these questions may also inspire some sub-questions:

  • To what extent are teachers using the new writing instruction strategies?
    • How many teachers have tried the strategies at least once?
    • How many teachers are using the strategies twice per week or more often?
  • To what extent are teachers using the writing template?
    • How many teachers are using the writing template at least once per week?
    • To what extent are teachers using the template as given, or modifying it for their classrooms?
  • To what extent are teachers able to identify and show specific improvements in students’ writing?
    • What specific improvements are teachers identifying?
    • To what extent do the improvements we are seeing match the writing deficits we identified when we started the program?

As you can see, your list of evaluation questions can grow quite long! In fact, you may be able to identify dozens of potential evaluation questions. To keep the evaluation feasible, prioritize these and settle on perhaps just 1-2 big questions, especially if each has a couple of sub-questions.

Need more inspiration?

Here are generic examples of the types of evaluation questions you may need to ask. Some questions might be formative in nature. That is, they may be used to inform potential changes in the program. Think of these as process questions or implementation questions.

  • Is the program reaching the right people? (In other words, are the people we want to be enrolling in the program doing so?)
  • Is the program being delivered (implemented) as intended?
  • In what ways (if any) does the implementation of the program differ from the program description?

Other questions might be summative in nature. These questions ask about outcomes, impacts, or changes that we believe can be attributed to the program.

  • To what extent is the program producing the expected outcomes (i.e., achieving its goals)?
  • How do the effects of the program vary across participants? (i.e., are different teachers experiencing different results?)
  • What evidence do we have that the program is effective in meeting its stated goals and objectives?
  • How can the program be improved?
  • Is the program worth the resources expended (i.e., the costs)?

Using a checklist may help you determine whether your questions are appropriate and likely to be effective.

Evaluation questions lead to data collection

As we have learned previously, we can conceive of program evaluation as occurring in five phases: Engagement and Understanding, Evaluation Questions, Data Collection, Data Analysis and Interpretation, and Reporting and Use.

As you can see from the above examples, evaluation questions point us to where and from whom to collect data. If our question is, “To what extent are teachers using the new resources?” then we know we need to collect data from teachers. If our question is, “Are students’ writing skills improving?” we know we will likely need student work samples as evidence.

In each case, we will have to determine the feasibility of using different data collection strategies such as surveys, interviews, focus groups, observations or artifact reviews (e.g., looking at lesson plans or student work samples). Each of these strategies features a set of distinct advantages and disadvantages and requires different resources.

Next Up in the Series:

In Part 5, we dive deeper into Data Collection.

Sheila B. Robinson

Sheila B. Robinson, Ed.D., of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design, and blogs about education, professional learning, and program evaluation at www.sheilabrobinson.com. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active member of the American Evaluation Association, where she serves as Lead Curator and content writer for its daily blog on program evaluation and as Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and on professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and her Program Evaluation Certificate. Her book, Designing Quality Survey Questions, was published by Sage Publications in 2018.