3 Surprising Ways to Engage Stakeholders in Evaluating Professional Development (and Ensuring a Quality Evaluation!)

At Greece Central School District in Rochester, NY, we hired certified trainers to facilitate the 8-day Cognitive Coaching® seminar for all of our teacher leaders. We followed up with monthly collegial circle meetings for them to reflect on their learning, share how coaching sessions were going, and practice scenarios with one another to refine their skills. We invited the trainers back for yearly refresher sessions. This program was designed to influence changes in teacher practice and ultimately impact student learning.

But ask each of our teacher leaders to describe what changes in practice might look like and how coaching would impact student learning, and you’ll get almost as many answers as participants.

How do we evaluate a professional learning program if everyone has a different idea of what the program does, who it serves or what success looks like? How might we generate the right questions to ask, identify appropriate expected outcomes and determine what to measure?

Part of evaluating a program is understanding the program and what we expect it to do. And part of a successful evaluation effort is getting stakeholders — teacher participants, principals, district office administrators, Board of Education members, etc. — on board to support the work.

In my previous article, I outlined five phases of program evaluation, the first being engagement and understanding. Here, I’ll describe three evaluation-related practices:

  1. developing a program description,
  2. creating a logic model, and
  3. articulating a program theory.

These can be used to engage stakeholders, build a common understanding of professional learning programs and set up for a successful program evaluation.

Why spend time crafting these elements? There’s nothing worse in program evaluation than collecting and analyzing data only to realize that the results aren’t useful: they don’t help you answer questions or inform the decisions you have to make about the program. Let’s take a look at these three elements and how they lay the foundation for successful program evaluation.

Program Description

Why is a program description so important to program evaluation? A program description promotes clarity and contributes to a shared and comprehensive understanding of what the program is, who it is intended to reach and what it expects to accomplish.

A thorough description also identifies why the program was developed. What is the need or problem the program addresses? It’s worth gathering a group of key people to craft a few brief paragraphs to answer these questions, even when the program has been developed by someone else.

It’s OK if your description isn’t the same as what another district might come up with. For example, maybe your district held the Cognitive Coaching® seminar for administrators, not teachers, and for a different reason than my district did. Our descriptions of need, target audiences and expected outcomes will look different, even when the program itself may be delivered identically in both places.

Logic Model

A logic model is a graphic representation — a concept map of sorts — of a program’s inputs, activities, outputs and outcomes.

  • Inputs are what we need to run the program. What resources do we need? Funding for professional learning facilitators, curriculum materials, space to hold courses?
  • Activities are what make up the program. Workshop sessions, follow-up coaching, action research and examination of student work are just a few of the possible professional learning formats.
  • Outputs are produced by the program activities: the number of sessions held, the number of participants who attended, and products such as action research findings, lesson or unit plans, or other curricular resources that were developed. Outputs are generally easy to report, but don’t speak to the quality or effectiveness of the program.
  • Outcomes describe the expected changes in program participants. With professional learning, we may expect that teachers learn and apply new content, use new resources or instructional strategies, or change their teaching practice. And of course, expected outcomes may include increases in student achievement (or other student-related metrics such as discipline or attendance) as a result of teacher learning and change in practice.

[Figure: Logic model]

Outputs and outcomes are easily confused. Just remember that outputs are program data, and outcomes are people data! The Tearless Logic Model describes an interactive, collaborative (even fun!) process for creating a logic model that is certain to appeal to educators.

Program Theory

When we create professional learning programs, purchase professional learning curricula or hire consultants to facilitate learning, we think we have high-quality professional development, but how do we really know? Programs may exhibit certain characteristics that make them likely to be high quality (e.g., ongoing, job-embedded, data-driven). But how can we connect the dots between what teachers are learning and how their students will benefit?

Recently, I led a collegial book study on Culturally Responsive Teaching and the Brain by Zaretta Hammond. We had teachers read the chapters and participate in online discussions. But how did we expect that teachers reading a book and writing about their thoughts would lead to improvement for students?

This is where program theory comes in. Program theory describes how the program is supposed to work. Some might call this “theory of change.” The program theory blends elements from the program description and information outlined in the logic model. Most importantly, a program theory articulates the linkages among the components of the logic model.

A key reflective question for articulating a program theory is this: What makes us think that this program, the way it is designed, and these particular program activities will lead to those expected outcomes we identified? A simple program theory for my book study might start like this:

  • If teachers read, reflect on and discuss with colleagues the material in Culturally Responsive Teaching and the Brain, they will deepen their understanding of culturally responsive teaching (CRT).
  • If teachers deepen their understanding of CRT, they will begin to think about how it connects to other work we do in the district around equity and social emotional learning.
  • If teachers deepen their understanding of CRT, they will also begin to recognize the cultural capital and capabilities students bring to the classroom, and will be able to use these tools to create engaging instruction.
  • If teachers learn about implicit bias, they will develop an awareness of how it can be a barrier to positive student-teacher relationships.

A few bullet points later, we might articulate how teachers will change their practice and, eventually, connect the chain to the specific areas of student learning and achievement we want to improve.

The idea is that we’re identifying how we expect our professional learning programs to work. Once we do this, we can identify where we want to ask questions for the evaluation. Do we want to know if teachers are, in fact, deepening their learning? Or do we want to investigate whether teacher learning is resulting in changes in practice? A program theory helps us know where to look and what to look for in a program evaluation.

Creating a program description, developing a logic model and articulating a program theory need not take a great deal of time. The investment, however, is sure to result in more clarity around our professional learning programs and a shared understanding of how they are expected to produce results. These elements lay the foundation for identifying relevant evaluation questions and set us up to collect the right data for our program evaluation.

Next Up In the Series:

In Part 4, we’ll learn about evaluation questions and how they point directly to the data we need to inform key decisions about professional learning.

Sheila B. Robinson

Sheila B. Robinson, Ed.D., of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design. She blogs about education, professional learning, and program evaluation at www.sheilabrobinson.com. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active member of the American Evaluation Association, where she is Lead Curator and content writer for its daily blog on program evaluation and Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and on professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and her Program Evaluation Certificate. Her book, Designing Quality Survey Questions, was published by Sage Publications in 2018.