
Professional Development Program Evaluation: Using Surveys, Interviews, Focus Groups and Observations


In the previous article in this series on evaluating teacher professional development, I shared that evaluation questions drive data collection and asked, “How would we know what data to collect, and from whom, if we haven’t settled on the questions we’re asking?” Once we have settled on a small set (about 1-3) of evaluation questions, we set our sights on how to collect data to answer them.

There are a multitude of ways to collect data to answer evaluation questions. Surveys (aka questionnaires), interviews, focus groups and observations are the most commonly used, and each has distinct advantages and disadvantages.

You’ll choose data collection strategies based on these trade-offs, along with which strategies align best with your evaluation questions.

Let’s take a quick look at each strategy:

Surveys
A survey is “an instrument or tool used for data collection composed of a series of questions administered to a group of people either in person, through the mail, over the phone, or online” (Robinson & Leonard, 2019, p. xiii). Surveys tend to have mostly closed-ended items — questions that have a question stem or statement, and a set of pre-determined response options (answer choices) or a rating scale. However, many surveys also include one or more open-ended questions that allow respondents — the people taking the survey — the opportunity to write in their own answers.

Many surveys are still administered on paper, but they’re conducted more frequently now in online environments. Professional development management systems, such as Frontline Professional Growth, allow feedback forms to be attached to professional development courses, and also feature the ability to construct and administer follow-up surveys.

The main advantage of a survey is that it can reach a large number of respondents. With electronic platforms, one click can send a survey to hundreds or thousands of respondents. Survey data is also relatively easy to analyze, and allows for easy comparison of answers across groups (such as elementary vs high school teachers, or different cohorts of participants). The main disadvantage is that we lack the opportunity to ask respondents follow-up questions, and quantitative survey data often isn’t as rich and detailed as data that results from interviews and focus groups.
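To illustrate the kind of cross-group comparison described above, here is a minimal sketch in plain Python. The respondent groups, survey item, and 1-5 ratings are all made up for the example; real survey platforms export similar records as spreadsheets or CSV files.

```python
from collections import defaultdict

# Hypothetical closed-ended responses to one survey item:
# each record pairs a respondent group with a rating on a 1-5 scale.
responses = [
    ("elementary", 4), ("elementary", 5), ("elementary", 3),
    ("high school", 2), ("high school", 4), ("high school", 3),
]

def mean_rating_by_group(records):
    """Average the rating-scale answers within each respondent group."""
    by_group = defaultdict(list)
    for group, rating in records:
        by_group[group].append(rating)
    return {group: sum(r) / len(r) for group, r in by_group.items()}

print(mean_rating_by_group(responses))
# → {'elementary': 4.0, 'high school': 3.0}
```

The same grouping logic extends to cohorts, grade bands, or any other category you chose to collect; this is why closed-ended items are comparatively quick to analyze.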

Interviews
An interview is a set of questions asked in person or over the phone to one individual at a time. It’s essentially a conversation between interviewer and respondent. In contrast to surveys, interviews are largely composed of open-ended questions with the interviewer taking notes or recording respondents’ answers for later analysis. Interviewers can use “probes” to elicit more detailed information from respondents. Probes are specific follow-up questions based on how a respondent answers, or they can be more generic, such as, “Can you say more about that?”

An interview’s main advantage is that it allows us to deeply understand a respondent’s perspective and experience. An interview can give us a strong sense of how someone experienced new learning from professional development, and how that learning plays out in their teaching practice. The main disadvantage is that we usually don’t have time to interview more than a handful of people, unlike the hundreds of responses we can collect with surveys. Interview data is also qualitative, and thus a bit time-consuming to analyze.

Focus Groups

A focus group is simply a group interview. Typically a small group of people (ideally about 6-8) are brought together and asked a set of questions as a group. While one focus group member may answer a question first, others then chime in and offer their own answers, react to what others have said, agree, disagree, etc. The focus group functions like a discussion. It’s best to have both an interviewer and a notetaker, and to video-record the session for later review and analysis.

The main advantage of a focus group is that when people respond to questions in a group setting, they build off each other’s answers. Often, the conversation inspires respondents to think of something they may not have remembered otherwise. Focus groups also allow us to reach more people than individual interviews do. The main disadvantage is the same as with interviews — we can still reach only a small number of people, and since the resulting data is qualitative, it can take time to analyze.

Observations
Observing teachers and students in action can be one of the best ways to capture rich data about how teacher professional learning plays out in the classroom. Typically, observers use a protocol informed by the evaluation questions that outlines what the observer is looking for and what data to collect during the classroom visit.

The main advantage of observations is witnessing first-hand how curriculum is being implemented, how instructional strategies are being used and how students are responding. The main disadvantage is the potential for conflict, especially if positive relationships and trust aren’t a strong part of the school culture. While many teachers willingly invite observers into their classrooms, there can be tensions among colleagues and with unions who want to ensure that program evaluation does not influence teacher evaluation. It is critical to clearly communicate that data collected for professional development program evaluation is not to be used for teacher evaluation.

A Few Recommended Practices

Do engage stakeholders, such as teachers and administrators, in all phases of program evaluation. Be transparent in letting people know why they are participating in data collection, why program evaluation is important to the department or district and what potential decisions may rest on the outcome. If people understand why program evaluation is being conducted and the role it plays in the organization, they will be much more likely to participate.

Don’t collect data you don’t need. For example, if you don’t need to compare how the program worked for 3rd grade vs 4th grade teachers, don’t ask them to provide their grade level. If you don’t plan to compare male and female teachers, or newer vs veteran teachers, don’t ask these questions.

Do keep the data collection brief. Have you ever known a teacher or administrator with loads of extra time in their schedule? Whether it’s an interview, focus group or survey, keep it brief by asking only the questions you need answers to.

Do incentivize responses to maximize the number of responses you receive. If possible, have light refreshments available for focus groups (minding rules for spending grants or general funds). Offer survey respondents raffle tickets for a good education book, a bag of school supplies, a gift card, etc. There are ways to keep survey responses anonymous while knowing who completed them for these types of incentives. Offer an extra planning period (coverage for a class or release from an administrative assignment) to interviewees.

Do find ways of working data collection into professional learning program activities — e.g., participant journals, pre-post assessments, logs (teachers might log how often they use a strategy or resource, and comment on how it worked with students), etc. The less people have to do outside of the professional learning program, the better.
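A strategy-use log like the one suggested above can be captured as simple structured records and tallied with a few lines of code. The entries, dates, and strategy names below are hypothetical; the point is only that a log kept during program activities yields countable data with no extra burden on teachers afterward.

```python
from datetime import date

# Hypothetical teacher log: each entry records when a strategy from the
# professional learning program was tried and how it worked with students.
log = [
    {"date": date(2024, 3, 4), "strategy": "think-pair-share",
     "comment": "Good engagement in period 2"},
    {"date": date(2024, 3, 6), "strategy": "exit tickets",
     "comment": "Quick check showed confusion on key vocabulary"},
    {"date": date(2024, 3, 8), "strategy": "think-pair-share",
     "comment": "Quieter students participated more"},
]

def usage_counts(entries):
    """Count how often each strategy appears in the log."""
    counts = {}
    for entry in entries:
        counts[entry["strategy"]] = counts.get(entry["strategy"], 0) + 1
    return counts

print(usage_counts(log))
# → {'think-pair-share': 2, 'exit tickets': 1}
```

The free-text comments remain available for qualitative analysis, so one log serves both the counting question (how often was the strategy used?) and the experience question (how did it go?).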

Do think creatively about data collection. Student work samples, photos and videos are legitimate forms of data that can be analyzed to look for patterns that help to answer evaluation questions.

Next up in the series is what to do with all the data you collect: analysis and interpretation.

Robinson, S.B. & Leonard, K.F. (2019). Designing Quality Survey Questions. Thousand Oaks, CA: Sage.

Next Up In the Series:

In Part 6, we dive deeper into Effective Strategies for Analyzing Professional Development Data.

Sheila B. Robinson

Sheila B. Robinson, Ed.D of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design, and blogs about education, professional learning, and program evaluation. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active American Evaluation Association member where she is Lead Curator and content writer for their daily blog on program evaluation, and is Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and Program Evaluation Certificate. Her book, Designing Quality Survey Questions, was published by Sage Publications in 2018.