
Field Trip: How to Tell If Your Professional Development Is Working

  

Schools spend billions of dollars and countless hours each year on professional learning. The big question: how do you tell whether that time and money have a demonstrable impact on teachers’ classroom practice?

Sheila B. Robinson, Ed.D., has spent decades in education, first as a classroom teacher and later working with adults in professional learning and program evaluation. Today, we ask Sheila about:

  • Frequent challenges in professional learning
  • How to collect and analyze data to understand if a professional learning program is meeting teachers’ needs and moving the needle on classroom practice
  • How to ask questions of teachers that yield actionable insight into your PD program
  • What to be careful of when analyzing the data

Effective Professional Learning Strategies (That Actually Work):

Best practices for a comprehensive, individualized professional development program, resources for getting started, and more.

Take a Deeper Dive:

Want to take a deeper look at program evaluation? Be sure to download Sheila B. Robinson’s eBook, “Professional Development Program Evaluation for the Win: How to Sleep Well at Night Knowing Your Professional Learning is Effective.” It’s a detailed overview of program evaluation, including how to engage stakeholders, craft quality evaluation questions, and collect, analyze, and report on data around your professional learning program.

Full Transcript  

It’s the million-dollar question in professional learning: is it working?

 

SHEILA ROBINSON: The thing is, professional development has always had somewhat of a bad name with some teachers, because over the years there’s been a lot of professional development programming that hasn’t been high quality and hasn’t been as relevant to teachers’ practices as they want it to be.

Are teachers able to take what they’ve learned and apply it in the classroom right away?

 

SHEILA ROBINSON: When teachers don’t feel respected by mandated professional learning, they often feel as if their knowledge and experience and their own expertise aren’t even being taken into account.

That is what we’re looking at today: how to evaluate a professional learning program to decide whether to continue it, or change it, or try something else.

 

SHEILA ROBINSON: So we ask questions, we collect data that help us answer those questions, and then we sit and talk about the results and how we use those results and maybe some other data to inform decisions. Should we keep the program, should we expand it, open it up to other teachers? Should we ditch this program and look to find a different one?

From Frontline Education, this is Field Trip.

 

* * *

 

Sheila B. Robinson is an educator. She’s spent years in the classroom, working with students with disabilities. She eventually moved on to teaching adults, and worked in professional development and program evaluation. Now she’s an independent consultant.

 

SHEILA ROBINSON: I’m teaching people what to do with their data and their results from program evaluation, data visualization and presentations and reports. And I’m having a fabulous time and I’m doing a little writing too.

 

RYAN ESTES: I want to mention also that Sheila is the author of an eBook for Frontline Education called “Professional Development Program Evaluation for the Win: How to Sleep Well at Night Knowing Your Professional Learning is Effective,” which you can download for free on our website at FrontlineEducation.com. Now, Sheila, your background is in the area of professional learning. And from your experience, what kinds of challenges are prevalent as schools work to provide professional growth opportunities for teachers?

 

SHEILA ROBINSON: Lots of challenges, and some of them are the same ones we’ve been struggling with for decades. I actually studied professional learning and professional development for my educational leadership doctorate, and I combined program evaluation and professional development, which is why I love talking about these two topics. But the thing is, professional development has always had somewhat of a bad name with some teachers, because over the years there’s been a lot of professional development programming that hasn’t been high quality and hasn’t been as relevant to teachers’ practices as they want it to be. In fact, Education Week recently had a report about this called “Blind Spots in Teacher Professional Development,” and they nailed it: the same things that have been going on for decades. When teachers don’t feel respected by mandated professional learning, they often feel as if their knowledge and experience and their own expertise aren’t even being taken into account. They want professional development that’s differentiated. And it’s ironic, too, because we talk so much in education about meeting kids where they are and designing differentiated lessons and instructional practices to make sure that we’re meeting them where they are and understanding their prior knowledge and what they bring to the classroom with them. But I think we don’t do a good job of that with professional development, meeting the teachers where they are and giving them what they really need to further their learning.

 

Another problem is that now we’re seeing a lot of professional development that is really relevant and important to teachers. Stuff around social-emotional learning and culturally responsive education and trauma-informed instruction. And these topics are really hot right now. Teachers are very interested in learning them, but the challenge there is that they require some time, they require some deep learning. And too often, we’re only getting these surface level courses, you know, a couple of hours of workshop after school, and that’s just not going to do it for having things like SEL and CRE really impact practice in the classroom.

 

RYAN ESTES: And of course, I believe one of the common challenges that many districts face is simply knowing, “Is all the work that we’re doing in professional learning working? Is it actually moving the needle on classroom practice?” Would you say?

 

SHEILA ROBINSON: Yes. That is also a challenge. And even as a program evaluator, I can tell you it is a challenge. People are doing a lot of research around this topic and have been again for decades. That was the topic of my own dissertation as well, trying to look at the impacts of a professional development program. But there have been relatively few studies that have really effectively linked teacher professional development with student learning. And that’s really what we always want to do. It’s what we want to get at.

 

It’s just really hard to do. There’s a pretty long road and lots of steps between teachers sitting in a professional development scenario and student learning that can be attributed to that professional development. It’s hard to do, especially on a local level with the constraints of time and money. Research takes a lot of time and a lot of money to do. And most of us don’t have the means to do that in local school districts.

 

RYAN ESTES: Yeah. And of course, anyone who is in education knows the challenge of limited budgets and especially limited time. So this question of, “How do we really know what is working in professional development?” is an interesting one, because the last thing we want is for people to be spending lots of time and lots of money on a program that doesn’t ultimately do what we want it to do.

 

SHEILA ROBINSON: We can find out though. I mean, it’s not all bad news. There’s a lot we can do in our school districts locally to figure out what’s working and what’s not working in professional development. It takes a little bit of learning about the basics of program evaluation, and there’s a lot of information out there, and just a little bit of maybe stepping up our data collection strategies. And it’s pretty easy to find out if teachers’ beliefs have changed, or if they feel that they’ve learned something new. We can assess whether they intend to change their practices, and if we follow up with them, we can learn from them whether they have changed their practices. We can get them to share evidence of student learning. “Here’s what my students were doing in September and now here’s what they’re doing in April,” and we can get them to tell us that, “This is because of what I learned.” So we can do it.

 

RYAN ESTES: Absolutely. And I’m glad you brought that up because I know that you have done a lot of work in the area of professional development program evaluation. Can you walk me through a 30,000 foot view of what program evaluation is? And in broad terms, what it looks like? What is the system here?

 

SHEILA ROBINSON: Sure. Program evaluation is essentially thinking through what it is we want to know about our programs, in this case, our professional learning programs, and fashioning what we want to know into questions, much like you would do for a research study. I shy away from the word “research” because that is a scary word for some people. But the idea is, if we can ask some really targeted questions about our programs, then those questions will lead us in a certain direction to collect some data, and we analyze some data, and we’ll get some results. It’s pretty straightforward. And we use those results to inform decisions that we want to make about the professional learning.

 

RYAN ESTES: Can you give me just a quick example of what that process might look like? If I’m working in a school district and I’m trying to determine whether or not the professional learning that we’re providing to teachers or making available to teachers is having that kind of impact, are there given steps that you follow? Is there a particular process, an order, that you make sure that you’re consistent with every time you’re doing this? How does that work?

 

SHEILA ROBINSON: Well, the 30,000 foot view is, let’s say we were working together and you said, “I want to know if our program is having an impact.” I would say, “Well Ryan, what do you mean by ‘impact’? What is it that you’re looking for? Describe to me, if the program is impactful, if it’s successful, however you want to describe it, tell me what that looks like in the classroom for teachers, for students.” And as you describe that, we would form those into more specific, targeted questions. “Oh, so you want to know if teachers are implementing this strategy at least three times a week, because we know it has to be that often to be effective?” Well then, we could go measure that. How often are teachers implementing the new strategy? So we ask questions, we collect data that help us answer those questions, and then we sit and talk about the results and how we use those results and maybe some other data to inform decisions. Should we keep the program, should we expand it, open it up to other teachers? Should we ditch this program and look to find a different one?
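To make that move from “is it having an impact?” to something measurable a little more concrete, here is a minimal Python sketch under the assumptions of Sheila’s example: the target of three uses per week comes from her scenario above, while the teacher names and self-reported counts are invented for illustration.

```python
# A minimal sketch of turning "is it working?" into something measurable:
# are teachers using the new strategy at least three times a week?
# The names and self-reported counts below are invented for illustration.
TARGET_USES_PER_WEEK = 3

# Hypothetical weekly self-reports: teacher -> times the strategy was used.
weekly_uses = {"Teacher A": 4, "Teacher B": 1, "Teacher C": 3, "Teacher D": 2}

meeting_target = [t for t, uses in weekly_uses.items() if uses >= TARGET_USES_PER_WEEK]
share = len(meeting_target) / len(weekly_uses) * 100

print(f"{share:.0f}% of teachers used the strategy at least "
      f"{TARGET_USES_PER_WEEK} times this week: {', '.join(meeting_target)}")
```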

 

RYAN ESTES: What you’re describing sounds very simple. It makes a lot of sense, right? Beginning with what we want to know and then trying to take steps to collect the data that we need in order to answer that question. Is program evaluation an enormous task or is it a relatively doable thing regardless of district size?

 

SHEILA ROBINSON: I say it’s definitely doable. It can be enormous. We can make it as big as we want. So you can collect mounds and mounds and mounds of data on one program or you can decide that you just need a minimum amount of data to answer one or two simple questions and that will be enough to guide your decisions. So again, this idea of data collection and analysis, think of that as the center of the sandwich. The two pieces of bread are the questions that we ask and the decisions that we think we’re going to make. So we really have to be clear on those two items. And then the data collection and analysis comes really easy. It’s not at all the scary part like people think it is.

 

RYAN ESTES: Everything seems to be data-driven these days, and rightly so. Data is very useful in making sure that we’re looking objectively at facts and not relying only on anecdotal evidence for whether or not a program is working. But how do you collect data, and specifically the right data? And how do you analyze it and then use it? I’m asking because I think most of us often picture people in white lab coats analyzing charts. So what about someone who says, “I’m not a data scientist. I work with teachers”?

 

SHEILA ROBINSON: Well, I’m not a data scientist either. I have never worn a white coat and don’t plan to. Yeah, a lot of people shy away from data. And I think it’s because the idea of statistics can be so intimidating. I tell people all the time, “I’m a program evaluator, I’m not a statistician. I get really easily intimidated by all those named statistical tests and models and the stuff that I don’t understand when I read journal articles, same as everyone else.” But the thing is, I’ve never had to use any of those statistics to analyze the data. Mostly, when we conduct surveys with closed-ended questions, we end up with frequencies. You know, 47% of people said this and 92% of people said that. Anyone can do that. We also have to know a little bit about measures of central tendency, you know, means and medians. But it’s really not complex at all and not intimidating once you get in there. Plus, there are tons of people who do know statistics, so if we ever need them, we just call on them.
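As a concrete illustration of the kind of analysis Sheila describes, here is a minimal Python sketch that computes response frequencies plus a mean and median for a closed-ended rating question. The question wording and the ratings are invented for the example.

```python
# A minimal sketch: frequencies and measures of central tendency for
# closed-ended survey responses. Question wording and ratings are invented.
from collections import Counter
from statistics import mean, median

# Hypothetical 1-5 ratings for "This workshop was relevant to my practice."
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 4, 5]

counts = Counter(ratings)
total = len(ratings)

for value in sorted(counts):
    pct = counts[value] / total * 100
    print(f"Rated {value}: {counts[value]} teachers ({pct:.0f}%)")

print(f"Mean rating:   {mean(ratings):.2f}")
print(f"Median rating: {median(ratings)}")
```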

 

RYAN ESTES: Help me understand what data collection might look like for the average person working in a school district. Are we talking about looking at charts? Are we talking about surveys? Are we talking about simply asking people questions one-on-one?

 

SHEILA ROBINSON: For program evaluation for professional development especially, we almost always use a feedback survey at the end of whatever professional development course, whether it’s a one day workshop or a multi-session course over the semester or the year. We typically ask teachers a bunch of questions on a feedback survey right when the course ends. And that’s fine. As long as we’re asking really good questions, we can get some really great data from them. But there are other things we can do too, and these are slightly more time-intensive but still very doable. We could schedule follow-up surveys three or six months or even a year later to see how they’re doing and what they’re doing with what they learned. We can also do some interviews and you don’t have to be a pro to conduct an interview and get some good data from teachers, in person or over the phone.

 

We can conduct focus groups. I’ve been writing a lot about focus groups lately because they’re poorly understood but fabulous opportunities for data collection. And it’s essentially just a group interview, get a few people in a room and you start having a conversation and you collect the conversation. You find a way to get some notes, find a note taker, and you get some really rich data from actually talking with teachers about what they’re doing in their classrooms with what they learned and how they’re thinking and rethinking instruction based on what they learned. I also want to mention that there are other ways of collecting data. So surveys, interviews and focus groups I call the big three, but we can conduct observations. A little trickier given politics and you really have to build trust and rapport to be able to go to a teacher’s classroom and say, “I want to conduct observations just to see how instructional strategies are being implemented, and it’s not going to be a reflection on you, it’s a reflection on the professional development program, whether or not these strategies are being implemented across classrooms.” But we can also ask teachers to share lesson plans and artifacts and samples of student work. I love it when teachers say, “Hey Sheila, look at this kid’s writing from September and now it’s April. Look how far this kid has come with writing.” So lots of stuff.

 

RYAN ESTES: You mentioned very early on in our conversation, the idea that asking the right questions is really important when doing this, and that makes sense, right? We’re trying to figure out first of all what we want to learn, and then ask questions designed to get us there. But what would you say are the wrong kinds of questions to ask during this process?

 

SHEILA ROBINSON: Well, it’s not so much the wrong kind of question as stopping short of an evaluative question. We tend to start with the question, “Is the program working?” or “Is it effective or successful?” or something like that. We just can’t stop there. I would call that the wrong question. We have to have some sort of working definition and a clear understanding of what we mean when we say, “Is the program working, or is it effective?” We have to ask really specific questions that can be answered by the data, such as, “To what extent are teachers implementing the strategy learned in the XYZ program in their classrooms? To what extent are they doing this or that?” And if it’s about changing instructional practice, which it often is, we ask equally specific questions about that.

 

RYAN ESTES: I’m thinking through, if teachers are coming out of a professional development learning activity and they’re filling out a feedback form, I am guessing that simply asking, “What did you learn?” is not going to give you the kinds of data that you’re looking for.

 

SHEILA ROBINSON: Oh, that’s such a good one. I actually have a story about that one. So for years we asked that question because it sounds so direct. What did you learn, right? You just sat through a learning experience. What did you learn? Well, it turns out, for reasons I’m not sure I know, people can’t really articulate that well. At least they can’t articulate it well in response to that question at that time. And so, what happened is, one year I looked at a year’s worth of data on that question. And what I found is that a huge percentage of teachers would just echo back the title of the course. So if they took a course on instructional coaching, they would answer the question, “What did you learn?” with, “I learned strategies for instructional coaching.” Well, that’s great, but it doesn’t really give us helpful and usable and actionable data.

 

So I started experimenting with a different way of asking. I thought, “Well, learning is so clearly connected to emotions and how you feel during the learning.” This is something that came up in my dissertation study as well. So I started asking, “How did you feel?” And then I gave them a bunch of choices: excited, renewed, frustrated, overwhelmed, a whole bunch of emotion kind of words. Let teachers check whatever they want, however many they want. And it’s the second part of the question that’s the real nuts and bolts: “Why did you select the ones that you did?” And then they have sentence starters. “Well, I felt energized because… I felt frustrated because…” and then they start actually writing about the stuff that’s going on in their minds and we get really good data that way.
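Here is a small sketch of what that two-part item might look like when laid out for a survey tool or a paper form. The emotion list and exact wording are illustrative, not Sheila’s actual instrument.

```python
# A sketch of a two-part feedback item: an emotion checklist followed by
# "why" sentence starters. Wording and choices are illustrative only.
feeling_item = {
    "prompt": "How did you feel during this professional learning experience? (Check all that apply.)",
    "choices": ["excited", "renewed", "energized", "frustrated", "overwhelmed"],
    "allow_multiple": True,
}

why_item = {
    "prompt": "Why did you select the feelings you did?",
    "sentence_starters": ["I felt energized because...", "I felt frustrated because..."],
    "response_type": "open_ended",
}

# Print a plain-text version of the two items, e.g. for a paper form.
print(feeling_item["prompt"])
for choice in feeling_item["choices"]:
    print(f"  [ ] {choice}")
print(why_item["prompt"])
for starter in why_item["sentence_starters"]:
    print(f"  {starter}")
```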

 

RYAN ESTES: Once you have that data, I’m imagining you walking out of the room with a stack of these forms. What do you do with that? How do you make sense of that data in a way that you can actually take action based on that?

 

SHEILA ROBINSON: Well, that does take a little doing. It does take a little bit of time. So it’s one thing to look at your quantitative data, what comes from closed-ended surveys, the multiple choice rating scale data, 72% — that’s easy. But when we do interviews or focus groups or we ask open ended questions on surveys, we end up with qualitative data. Those are the words, the text that people have given us, whether they’ve spoken it or written it. And so we have to apply some kind of systematic analysis to that. Typically people aren’t real familiar with that, but it’s something that can be learned. It’s basically looking through all of the responses and applying some codes, looking to see what people are saying. If people are saying over and over again, “I just don’t feel I have the time to implement this new strategy,” then time becomes a code.

 

And I look to see how prevalent that is. And then I look to see, “Well, what exactly are people saying about time?” Or people say, “I don’t feel like I know enough about this.” Then I might assign a code like “don’t know enough” or “lack of knowledge” or something like that. And I start to uncover these categories or themes in this qualitative coding exercise. These categories or themes sort of emerge from the data, but I have to spend time with the data. I have to really read it and think through what people have said. And to make it even better, I would share that with a colleague as well and have some other people look through: “Are these teachers saying what I think they’re saying? Are you getting the same messages?” And we do make sense of it.
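A minimal sketch of that coding pass might look like the following. In practice, codes emerge from reading the responses and talking them through with a colleague; the keyword lists below are just a stand-in for that judgment, and the responses are invented.

```python
# A minimal sketch of qualitative coding: tag each open-ended response with
# codes, then see which codes recur. Responses and keyword lists are invented.
from collections import Counter

responses = [
    "I just don't feel I have the time to implement this new strategy.",
    "I'd like to try it but I don't feel like I know enough about it yet.",
    "Finding time in my schedule to plan for this has been the hardest part.",
]

# Hypothetical codebook: code name -> keywords that suggest it.
codebook = {
    "time": ["time", "schedule"],
    "lack of knowledge": ["know enough", "don't understand", "unsure how"],
}

code_counts = Counter()
for response in responses:
    text = response.lower()
    for code, keywords in codebook.items():
        if any(keyword in text for keyword in keywords):
            code_counts[code] += 1

for code, count in code_counts.most_common():
    print(f"{code}: mentioned in {count} of {len(responses)} responses")
```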

It’s not enough just to look at the data, Sheila said. Numbers on their own won’t tell you much unless you have a sense of what they mean to you. That can be tricky, and it’s worth thinking about before you get into program evaluation.

 

SHEILA ROBINSON: If you find out that, say, 78% of the teachers were satisfied with their experience in professional learning, you have to be ready to place a value on that. Is that good or is that not good? When I teach groups about program evaluation, I often use this example. I ask my audiences, “If your kid brought home 78% on a test, how many of you would say, ‘Oh my gosh, that’s fabulous. Congratulations!’?” And some hands go up. “And how many of you would have that kid grounded? ‘That’s awful. That’s the worst score you could get.’” And more hands go up.

 

Numbers by themselves do not have value. We have to know what that value is. If I hit the lottery 78% of the times I played, I’d be thrilled. But I like my airline pilot to have a better than 78% record of successful landings. So that’s a really important part of what we do, and a lot harder than all of the other parts, I think.

 

RYAN ESTES: Is that number going to differ significantly from school district to school district?

 

SHEILA ROBINSON: Yes. And from program to program. And it’s not that we have to assign an exact number, we just sort of have to know at what point are we going to say, “Yay, this program is successful.” And at what point are we going to say, “Oh yeah, I don’t think it’s working and we need to do something different”? We just have to have an idea in our heads before we get the data.
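One lightweight way to make that concrete is to write the decision rule down before the data comes in, as in this sketch. The thresholds and the 78% satisfaction figure are invented examples; each district and each program would set its own.

```python
# A sketch of deciding in advance what will count as "working."
# Thresholds and the satisfaction figure below are invented examples.
SUCCESS_THRESHOLD = 0.85   # at or above: keep or expand the program
CONCERN_THRESHOLD = 0.70   # below this: rethink or replace the program

satisfied_share = 0.78     # e.g., 78% of teachers reported satisfaction

if satisfied_share >= SUCCESS_THRESHOLD:
    print("Meets our bar: consider continuing or expanding the program.")
elif satisfied_share >= CONCERN_THRESHOLD:
    print("Mixed result: look at other data before deciding.")
else:
    print("Below our bar: consider changing or replacing the program.")
```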

 

RYAN ESTES: Is that also tied to the amount of time and work that goes into a program? The amount of money that’s being invested? I can see if something’s free and doesn’t take a lot of time, then that number might be lower. But if you’re investing a ton of resources, that number might be higher.

 

SHEILA ROBINSON: Absolutely. So you always have to take into account all these other contextual factors. Think about a medical model. If your cholesterol is a little bit high, you’ll have a conversation with your doctor about whether or not to treat it. And a whole range of other factors about your health, maybe even the cost of the medication or the side effects, all of these things are going to bring something to bear on that conversation. Just that one number that came out of the blood test isn’t the only piece of data that’s used to make decisions.

 

RYAN ESTES: Once you’ve made sense of that data, what happens next? Do you get together with your team and say, “Okay, now we’re going to make changes to our program based upon what we’ve found out”? Are you sending it to other people in the district? How are you making sure that what you’re learning doesn’t stop with you simply knowing information? How do you make sure that you can then apply it?

 

SHEILA ROBINSON: That’s a really good question. Actually there is a set of program evaluation standards that most program evaluators are familiar with, and one of our standards is actually “utility.” We want to conduct program evaluation that is used. So this is going to look a little bit different in every district, and it’s something that should be discussed and planned out at the start. So when we sit down and say, “Hey, we’re going to evaluate this professional learning program,” we want to talk about who is going to receive a report that we generate, who’s going to receive the results? Are you, the professional development director, going to make all the decisions on your own? Do you have a committee? Do you report to your superintendent or your board of education? So depending on how information flows in a district, you’ll want to plan that out ahead of time, because different audiences will need different kinds of reports and data, and you might not report the answers to every single survey question to the board of education, but they may want a high-level overview of how well the professional development program worked, especially if you’re looking for funding to expand it.

I asked Sheila to talk about what she has seen this look like in practice.

 

SHEILA ROBINSON: Absolutely. Several programs, because I’ve been involved in program evaluation for a while. Cognitive coaching was a program that we started with teacher leaders maybe five or six years ago, and it was very well received by some, and it was not well received by others. And what happens with a professional learning program is that people will talk about it, and they’ll talk about it with administrators and they’ll talk about it with their colleagues, and then people will get a sense about this program. And what was interesting is there were people who just didn’t feel that it was the right program for them, and they were very vocal about it. So there got to be this perception that, “Oh, maybe this isn’t the right program for our district.” But when we actually sat down and looked at all the data that we collected through surveys, and at that time we were doing some learning logs, it turned out there were a lot of people who were really very positively impacted by it. We made the decision to continue offering it, and more and more people took advantage of the program. So it’s so important to collect data and not just rely on the word on the street, what’s happening through the grapevine sort of thing.

 

RYAN ESTES: You might’ve stopped doing a program that actually had great value if you hadn’t taken the time to collect that data, you’re saying?

 

SHEILA ROBINSON: Absolutely. There was another program that we actually did regionally, not just with teachers from my own district; we’re part of a statewide network of professional development teacher centers. This was a writing institute. We had some feedback data from right after the institute, right at the close, maybe even that afternoon when the institute closed. But that really didn’t give us a lot. And it wasn’t until a few months after that I went and conducted in-person focus groups with writing teachers. And this is where, in the course of conversation, things went really deep, because at this point they had implemented the practices, their beliefs had changed, and they had really sunk their teeth into this new learning. And they started pulling out those student work samples and talking about them and the aspects of the student writing that had changed over time. That was incredibly rich data that we could never have gotten from a simple feedback survey right at the end of the event.

 

RYAN ESTES: What would you say to a district or a school that wants to get started with program evaluation? Who would like to begin doing this work? What would you say are the first steps to take there?

 

SHEILA ROBINSON: Well, there are some pretty good articles on the Frontline blog, and the ones you mentioned at the start of the podcast we assembled into a free eBook. And that’s a really nice overview, I think, on evaluating professional learning. It’s sort of that 30,000-foot view with a few specifics along the way. I usually tell people, I take the Nike approach: “just do it.” It’s okay to experiment and learn at the same time. You try out different questions on your feedback surveys. You try conducting focus groups and see how it goes. Then maybe you do a little more reading on focus groups and you learn more and then you try it again. The thing people tend to worry about, they ask me questions like, “Is it scientific enough? Is it rigorous enough?” We’re not doing research on curing fatal diseases here, so it’s fairly low stakes if we collect data and the data really doesn’t help us. Yes, we’ve lost a little bit of time, but maybe we’ve learned how to do data collection a little bit better. So I tell people to just do this combination of “try it, experiment and do some reading, find some professional development on program evaluation if you can.” It’s out there. I’m out in school districts teaching people how to do this all the time.

 

RYAN ESTES: That’s great. Sheila B. Robinson is the author of a book called Designing Quality Survey Questions. As we mentioned, she has also written an eBook for Frontline entitled “Professional Development Program Evaluation for the Win.” You can visit the show notes or FrontlineEducation.com for a link to download that for free. Sheila, thank you once again for joining us today. This was really helpful.

 

SHEILA ROBINSON: Oh, you’re very welcome. My pleasure. I enjoyed it, too. 

If you enjoyed this episode, don’t forget to click that subscribe button. We release new stories every other week, and that way, you won’t miss a single one.

 

Field Trip is a podcast from Frontline Education. Frontline’s industry-leading software is designed exclusively for the K12 market. That includes Frontline Professional Growth, a holistic solution to help school and district leaders manage the entire educator growth cycle in one system, including professional learning and evaluations, and provide tools for educators to collaborate online. For more information, visit FrontlineEducation.com/FieldTripPodcast.

 

For Frontline Education, I’m Ryan Estes. Thanks for listening, and have a great day.