
Effective Strategies for Analyzing Professional Development Data: It’s Not As Hard As You Think!

Professional Growth

I remember struggling through my college statistics class. Just the term “data analysis” made me cringe. After all, I was going to be a teacher, not a scientist! Now that I’ve spent years collecting and analyzing data, I’ve learned I don’t need to be a statistician to evaluate professional development programs.

Whether you love or hate the idea of analyzing data, you probably don’t have loads of time on your hands — but you still need answers. You need actionable knowledge so you can report results that inform smart decisions about professional learning. Good news! In this article, I’ll share strategies for analyzing and interpreting data in a painless way that doesn’t require unlimited time or advanced skills.

What is Data Analysis?

Data analysis and interpretation is about taking raw data and turning it into something meaningful and useful, much in the same way you turn sugar, flour, eggs, oil and chocolate into a cake! Analyzing data in service to answering your evaluation questions will give you the actionable insights you need. It’s important to remember that these questions drive data collection in the first place.

Since you’re generally not running experiments with randomly sampled study participants and control groups, you don’t need advanced statistical calculations or models to learn from professional development data. You mainly need to analyze basic survey data, and to do that, you will look at descriptive statistics.

You may enjoy this hand-picked content:

Podcast: Building a Culture of Professional Learning — Mary Kathryn Moeller and her team facilitate professional learning at Jenks Public Schools in Oklahoma. In this interview, she discusses the big questions that shape their program, how they iterate and improve, and what it looks like to measure impact.

Summarizing the Data

Raw data rarely yields insights. It’s simply too overwhelming to scan rows and rows of numbers or lines and lines of text and make meaning of it without reducing it somehow. People analyze data in order to detect patterns and glean key insights from it.

First, it’s helpful to understand the proportion of professional learning participants who complete a survey. This is your survey response rate. Your response rate is simply the number of people who completed the survey divided by the total number eligible to take the survey.

Example:

[Image: sample response rate calculation]
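The calculation above can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers (42 completions out of 60 eligible participants), not figures from an actual survey:

```python
def response_rate(completed: int, eligible: int) -> float:
    """Return the survey response rate as a percentage."""
    if eligible <= 0:
        raise ValueError("eligible must be a positive number")
    return 100 * completed / eligible

# Hypothetical numbers: 42 of 60 workshop participants completed the survey
print(f"Response rate: {response_rate(42, 60):.0f}%")  # Response rate: 70%
```

The same arithmetic works in any spreadsheet: divide the count of completed surveys by the count of eligible participants.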

Next, use descriptive statistics including percentages, frequencies and measures of central tendency to summarize the data. Measures of central tendency — the mean, median, and mode — are summary statistics that describe the center of a given set of values, while the range describes how spread out those values are.

[Image: definitions of mean, median, mode, and range]
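If your data lives outside a survey tool, Python’s standard library can compute all of these summaries. Here’s a short sketch using hypothetical satisfaction ratings on a 1–5 scale (the dataset is invented for illustration):

```python
import statistics

# Hypothetical satisfaction ratings (1-5 scale) from ten participants
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

mean = statistics.mean(ratings)        # arithmetic average: 3.9
median = statistics.median(ratings)    # middle value: 4
mode = statistics.mode(ratings)        # most frequent rating: 4
spread = max(ratings) - min(ratings)   # range: 5 - 2 = 3
```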

Choosing the Right Statistic to Report

How do you know when to use mean vs. median? When you know you have outliers, use the median. Here’s an example of how these measures can differ greatly in the same dataset. Let’s say you want to describe a group of 16 professional learning participants in terms of how much teaching experience they have. Here are the values and measures of central tendency:

[Table: years of teaching experience for each of the 16 participants, with the resulting mean, median, mode, and range]

Which summary statistic best describes this population of participants? The mean can be very sensitive to outliers, while the median is not. The mean of this dataset is 9 years, but the median is only 3. This means that half of participants have 3 or fewer years’ experience. In this case, knowing that half of participants were novice teachers may give you greater insight and better inform future programmatic decisions than knowing the average number of years of teaching experience of the group.
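To see this effect concretely, here is an illustrative dataset constructed so its summary statistics match the ones described above (mean of 9, median of 3); these are not the article’s actual values, just one set of numbers that produces the same result:

```python
import statistics

# Illustrative years-of-experience values for 16 participants, chosen so
# the summary statistics match the example (mean = 9, median = 3)
years = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 5, 20, 25, 30, 39]

print(statistics.mean(years))    # mean is 9: pulled up by a few veterans
print(statistics.median(years))  # median is 3: half have 3 or fewer years
```

A handful of highly experienced teachers drags the mean far above what is typical for the group, while the median stays anchored to the middle of the distribution.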

Example Insight: Half of the participants in this professional learning activity were novice teachers, which can be used to inform future decisions about professional development.

Next, you may want to cross-tabulate results. Cross-tabulating means looking at your dataset by subgroup to compare how different groups answered the questions. For example:

  • Were high school teachers more satisfied than elementary teachers?
  • Did veteran teachers report greater learning gains than novice teachers?
  • Did more teachers from one school express an intent to try a new instructional strategy?

Most online survey programs make cross-tabulation easy with built-in features, but you can also use pivot tables if your dataset is in a spreadsheet.
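If you prefer to work outside a survey tool or spreadsheet, a cross-tab is just a count of each subgroup-and-answer combination. This sketch uses a tiny invented dataset of satisfaction responses by school level:

```python
from collections import defaultdict

# Hypothetical responses: (school level, satisfaction answer)
responses = [
    ("High School", "Satisfied"), ("High School", "Satisfied"),
    ("High School", "Not satisfied"),
    ("Elementary", "Satisfied"), ("Elementary", "Not satisfied"),
    ("Elementary", "Not satisfied"), ("Elementary", "Satisfied"),
]

# Cross-tabulate: count each (subgroup, answer) combination
crosstab = defaultdict(int)
for level, answer in responses:
    crosstab[(level, answer)] += 1

for (level, answer), count in sorted(crosstab.items()):
    print(f"{level:12} {answer:15} {count}")
```

A spreadsheet pivot table produces the same grid: put the subgroup in rows, the answer in columns, and count of responses in the values.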

Descriptive Statistics Have Limits

Caution: when participants haven’t been randomly selected and aren’t required to respond to feedback surveys, these types of analyses cannot be generalized to all teachers who participated in the professional learning. It’s always a possibility that more satisfied participants completed the survey and that more dissatisfied participants did not.

Descriptive statistics are helpful for telling what happened, but they can’t determine causality. They can’t tell you why something happened. You may know that 87% of participants feel they learned a great deal from participating in professional learning, but you won’t know what caused them to learn. That’s where qualitative data can help fill in the blanks.

Qualitative Data Analysis

Surveys may include some open-ended questions, or you may have conducted individual interviews or focus groups as part of professional development program evaluation. Crafting these questions carefully can help you understand why people experienced professional learning the way they did.

But what do you do with all of these answers, the words people write or say in response to these open-ended questions? Rigorous qualitative data analysis involves significant study to develop the needed skills, but you can still take a few easy steps to make sense of qualitative data in a credible way that will give you insight into participants’ experiences in professional learning.

Step 1: Begin by becoming very familiar with the data – just reading and rereading survey responses, interview transcripts or focus group notes. Try not to get caught up in the very positive or the very negative attention-grabbing comments at this stage.

Step 2: Revisit your evaluation questions to refresh what you need to know for the program evaluation.

Step 3: Start looking for patterns as you read and reread. Assign “codes” to chunks of text. If a participant talks about wishing there was more time to learn and practice what was learned, the code might be “time.” If another comment has to do with concern about administrative support, the code might be “support.” As you progress through the data, attempt to reduce as much as you can to codes that you generate from your reading. Write each code next to the data as you go along.

Step 4: Once you’ve coded all data, look for patterns in the codes. Are there sets of codes that are related and could become categories?
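Steps 3 and 4 can be supported with a simple tally once you’ve assigned codes. The comments and code labels below are invented examples, not real survey responses:

```python
from collections import Counter

# Hypothetical (comment, code) pairs produced in Step 3
coded = [
    ("Wish there was more time to practice the strategies", "time"),
    ("Not sure my principal will support this", "support"),
    ("We need planning time to try what we learned", "time"),
    ("The handouts and examples were excellent", "materials"),
    ("More time for collaboration would help", "time"),
]

# Step 4: tally the codes to surface patterns and candidate categories
counts = Counter(code for _, code in coded)
for code, n in counts.most_common():
    print(f"{code}: {n}")
```

Seeing that “time” dominates the tally is the kind of pattern that suggests a category worth exploring — but the counts are a starting point for interpretation, not a substitute for rereading the comments themselves.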

Interpreting the Data

Interpreting data is attaching meaning to it. For example, let’s say that 37% of professional learning participants indicated they learned something new. At first glance, that doesn’t sound like a particularly good outcome, does it? Too often, people view raw data like this in either a positive or negative light without taking the time to fully understand what’s really going on. What if I told you this was a refresher course for people who had already learned the material? In that case you might then interpret it as a positive outcome that more than one third picked up new learning.

Numbers don’t have inherent meaning. It’s up to us to put them in context to make sense of them.

What About Statistical Significance?

People like to ask about this, and most likely, what they’re really asking is, “Are the results you’re reporting on important? Are the differences we are seeing meaningful to us in any way?” Statistical significance is a technical term that has to do with whether the results of an experiment reflect a real effect or are more likely due to chance. In evaluating professional learning programs, you are not likely to use the statistical analyses that produce tests of significance.

What If I Have a Small Sample?

You may be wondering, “I only surveyed 20 people — is that really enough data to give me an accurate picture of what’s really going on?” Absolutely! Remember, it’s about answering your evaluation questions to inform future professional learning programs. Even with what might seem like low response rates, you can still gain valuable insights, and make smart decisions for your school or district.

Next in the series, we’ll turn analyses and interpretation into a usable report!

Sheila B. Robinson

Sheila B. Robinson, Ed.D., of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design. She blogs about education, professional learning, and program evaluation at www.sheilabrobinson.com. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active American Evaluation Association member, where she is Lead Curator and content writer for its daily blog on program evaluation and Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and a Program Evaluation Certificate. Her book, Designing Quality Survey Questions, was published by Sage Publications in 2018.