
What’s Involved in Evaluating a Professional Development Program?

Today’s article, focused on the five phases of program evaluation, is Part 2 of a seven-part series on evaluating professional development. In the rest of the series, you’ll learn how you can apply these phases to evaluate your own programs.

 

What is Program Evaluation?

In Part 1, we learned that program evaluation is applying systematic methods to collect, analyze, interpret and communicate data about a program to understand its design, implementation, outcomes, or impacts. In the field, evaluators talk about evaluation as a way to determine the merit, worth and significance of a program. Simply put, it’s about understanding the quality of our professional learning programs, how valuable they are to our schools or districts and how important they are to what we are trying to accomplish with teachers and students.

“PD program evaluation: using data to understand the quality of our programs, value to our schools and importance to our goals.”

 

Program Evaluation Has 5 Key Phases

Evaluating a program consists of five interdependent phases that don’t necessarily occur in a perfectly linear fashion.

[Graphic: the five phases of program evaluation, shown as a cycle]

Phase 1: Engagement and Understanding

Program evaluation is most successful when stakeholders are involved and work collaboratively. Who should be involved? Administrators and teachers connected to the program, along with instructors and participants, will likely be called upon to offer or collect data, and should be included in planning the evaluation.

Think of a professional learning program in your school or district. Is there a clearly articulated description of the program? Are there stated and measurable outcomes? Does everyone involved with the program know what participants are expected to learn, how they might change their practice, and what student outcomes are expected as a result? Does everyone agree on these? Don’t worry! It’s quite common to answer “no” to one or all of these questions.

In a future post, you will learn about the importance of program descriptions and logic models. I’ll share how these tools can be easily created to promote shared understanding of our professional learning programs and how this sets us up to conduct high quality evaluation.

Phase 2: Questions

Developing evaluation questions is foundational to effective program evaluation. Evaluation questions form the basis for the evaluation plan and drive data collection.

Conducting evaluation is much like conducting a research study. Every research study starts with one or a few broad questions to investigate. These questions inform how and from whom we collect data. The following are examples of the types of questions we might pursue in evaluating our professional learning programs:

  • To what extent is the program changing teacher practice?
  • What evidence do we have (if any) of student learning that may be attributable to the program?
  • How might the program be improved?

Phase 3: Data Collection

We collect data on professional learning programs to answer our evaluation questions, and every decision about which data collection strategies to use rests squarely on those questions.

  • Most people are familiar with surveys (also called questionnaires; check out my book on designing surveys), interviews, or focus group interviews, but data collection can go far beyond asking people questions.
  • Observations of a professional learning program in action can yield important insights into how the program is going and whether or not it appears to be on track to achieving its objectives.
  • Classroom observations can help us understand if and how well teachers are implementing new practices, whether there are barriers to implementation, and what might be getting in the way.
  • Teachers journaling about their learning, or creating lesson plans or other artifacts, can also demonstrate whether a professional learning program is working well.
  • And of course, student data — achievement, attendance, discipline, work samples, etc. — can also serve to help answer the evaluation questions.

Later on in this series we’ll offer a more in-depth look at the advantages and disadvantages of specific data collection strategies, along with ideas for exploring more innovative data sources.

Phase 4: Data Analysis

This is the phase that scares people the most. People often think they need to understand statistics or have advanced spreadsheet skills to do data analysis. They worry that their datasets aren’t perfect or that they haven’t collected data in a “scientific” enough way. They are concerned about whether their data is reliable and valid, especially when it is qualitative and perceptual data, such as answers to open-ended questions from surveys or interviews.

“You don’t need a Ph.D. in statistics to understand and analyze data around your professional learning program.”

These concerns are understandable, but in truth, there’s no need to get worked up. In a future post, we will put all of these fears to rest!

Given the types of data we use to evaluate professional learning programs, we rarely need statistics beyond simple frequencies and averages. And datasets are seldom perfect. When we conduct surveys, for example, we find that some people don’t answer some questions. Others misinterpret questions or clearly make mistakes answering them.
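
To make that concrete, here is a minimal sketch (in Python) of the kind of analysis most professional learning evaluations call for: frequencies, an average and a response rate, with skipped questions simply set aside. The 1-5 rating scale and the use of None for unanswered items are assumptions for illustration.

```python
# A sketch of "simple frequencies and averages" for one survey item.
# Hypothetical data: ratings on a 1-5 scale; None marks a skipped question.
from collections import Counter

responses = [5, 4, 4, None, 5, 3, 4, None, 5, 4]

answered = [r for r in responses if r is not None]

frequencies = Counter(answered)              # how many people chose each rating
average = sum(answered) / len(answered)      # mean of the answered items only
response_rate = len(answered) / len(responses)

print(f"Frequencies: {dict(sorted(frequencies.items()))}")
print(f"Average rating: {average:.2f}")
print(f"Response rate: {response_rate:.0%}")  # imperfect datasets are normal
```

Note that the skipped answers don’t derail anything; we simply report the response rate alongside the results.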

On one feedback form after a very well-received workshop, a participant checked “Strongly disagree” for every statement when it was clear that “strongly agree” was the intended answer. How did I know this? Between the statements were comment boxes filled with glowing praise about how much the participant enjoyed the workshop, valued the materials and loved the instructor. It was clear the person mistook “Strongly disagree” for “Strongly agree” based on the location of those responses on the sheet.

Phase 5: Reporting and Use

Evaluation should be conducted with an emphasis on use. Once we interpret the data and glean insights that can inform future decisions about a program, we need to consider how to report new learning to key stakeholders. The formula for effective reporting includes:

  • identifying appropriate audiences for evaluation reports,
  • understanding their information needs, and
  • knowing how they consume information.

Are you reporting to a Board of Education? A superintendent? A group of administrators and teachers? Do they need all the details, just a few key data points, or a brief summary of results? Knowing our audience and how to engage them informs how we create reports, and reports can come in a wide variety of formats. Here are just a few examples:

  • Presentations
  • Documents
  • Infographics
  • Podcasts
  • Webpages

Evaluation as an Iterative Process

Earlier, I mentioned that these phases aren’t necessarily linear. In the graphic, you see them as a cycle where Reporting and Use points back to Engagement and Understanding. As we complete an evaluation for a program and make decisions about its future, we may enter another evaluation cycle. Also, as we collect data, we may analyze and report on it even as the evaluation work continues, thus revisiting Phases 3, 4 and 5 multiple times in one evaluation cycle.


Next up in the series

In Part 3, we go deep into ensuring that everyone has a shared understanding of how our professional learning programs are designed to influence change in teaching practice and student learning.

Professional Development for Principals

How Franklin Public Schools Used In-District Expertise to Promote Effective Observations

 

When you talk about employee evaluations in K-12, the first thing most people think is “teachers.” That’s true at Franklin Public Schools in Wisconsin, too. But as they work to build the capacity of teachers in the district, they also place a strong emphasis on developing principals — especially on equipping them to be strong instructional leaders.

This spring, we spoke with Christopher Reuter, Director of Teaching & Learning, and Erin King, principal at Forest Park Middle School, to find out what that looks like.

Here’s what Chris said about the role of the principal:

“It’s the principal who has to connect the ever-moving parts together to make sure everyone’s moving forward.” – @chris_reuter

Erin has put a lot of work and thought into the kinds of feedback she provides her teachers. For starters, she said, feedback needs to be provided soon after the observation.

How does she make sure those are more than just one-way conversations? How does she build trust?

Chris asked Erin if she’d be willing to conduct a post-observation conversation with a teacher in a fishbowl setting for other principals to observe.

“As administrators we don’t see each other engage in post-observation conversations, so we assume that we’re probably all doing it in the same way.” – @erinkking

Since then, Chris and the other directors at the district have continued to observe principals as they conduct these conversations.

Chris and Erin both emphasized that these open conversations, rooted in trust between principals and teachers, or directors and principals, are vital to their growth efforts — and ultimately, to student achievement.

You can listen to the entire interview above, or — better yet! — subscribe to Field Trip and get new episodes every other Friday.

RTI/MTSS and End of School Year: 7 Tips to Reflect and Recalibrate

School leaders who implement RTI/MTSS have a big responsibility ― to deploy a school’s full array of intervention resources to find and help struggling students. To meet this goal, periodic checkups are needed to ensure that schools align their current practices with RTI/MTSS best practices. The close of the school year offers staff the ideal time for an RTI/MTSS checkup ― now is your chance to tidy up loose ends in record-keeping, use data to improve classroom instruction, identify gaps between intended and actual service delivery and look ahead to the next phase in RTI/MTSS program roll-out.

As summer approaches, here are 7 steps to firm up your procedures, ensure they are carried out with integrity and prepare staff for the 2018-19 school year.

Steps to Making the Most of Spring RTI/MTSS Data

1. Archive RTI/MTSS information

Schools should remind all staff responsible for keeping track of RTI/MTSS information that they should complete their records for the current school year before summer break. Staff should also be given a deadline for finishing record entries. Having a district or school-wide RTI/MTSS program management system that all stakeholders can access helps keep data organized and archived for future use. Once that deadline is past, school staff can spot-check student entries in the RTI/MTSS system to verify that records are indeed complete.

2. Evaluate Effectiveness of Core Instruction

RTI/MTSS schools typically collect building-wide academic screening data at fall, winter and spring checkpoints. These datasets are invaluable, as they allow a school to judge the effectiveness of its core instruction and, if needed, provide guidance to teachers on strengthening their instructional practices.

It is a widely accepted rule of thumb that classroom instruction across a school can be considered adequate if at least 80% of students meet or exceed a screener’s performance cut-points. The close of the school year is an ideal time for administrators to meet with grade-level teams to review screening data and brainstorm future instructional ideas to boost students’ collective academic performance.
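
As a quick illustration, here is a minimal sketch of that rule of thumb in Python. The scores and the cut-point are hypothetical; a real screener publishes its own benchmarks by grade and season.

```python
# Check the 80% rule of thumb: is core (Tier 1) instruction adequate?
screening_scores = [112, 95, 130, 88, 141, 76, 118, 102, 125, 99]  # hypothetical
cut_point = 90  # hypothetical benchmark for this grade and season

meeting = sum(1 for score in screening_scores if score >= cut_point)
percent_meeting = meeting / len(screening_scores) * 100

print(f"{percent_meeting:.0f}% of students meet or exceed the cut-point")
if percent_meeting >= 80:
    print("Core instruction looks adequate by this rule of thumb.")
else:
    print("Consider guidance to strengthen core instructional practices.")
```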

For real-world examples of how to use this best practice, download the free eBook RTI/MTSS and End of School Year.

3. Analyze Data to Uncover Performance ‘Pockets’

As schools build a strong RTI/MTSS model, they collect troves of data monitoring student performance. If this data is reliably archived, school leadership can analyze it to identify pockets of student performance that either exceed or lag behind expectations. For example, a school might compare the relative outcomes of two Tier 2 reading groups using the same program to see if there are significant differences across instructors.

This type of advanced RTI/MTSS ‘data mining’ requires that a school first standardize its procedures to ensure that data sources are valid and reliable and that student data is uniformly stored in electronic format for easy retrieval.
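
To illustrate the kind of comparison described above, here is a minimal sketch in Python. The records, field names and the words-correct-per-minute measure are all hypothetical; the point is that uniformly stored data makes this grouping straightforward.

```python
# Compare outcomes of two Tier 2 reading groups using the same program
# but different instructors. All records below are hypothetical.
from statistics import mean

progress_records = [
    {"instructor": "A", "wcpm_gain": 14},  # gain in words correct per minute
    {"instructor": "A", "wcpm_gain": 11},
    {"instructor": "A", "wcpm_gain": 16},
    {"instructor": "B", "wcpm_gain": 6},
    {"instructor": "B", "wcpm_gain": 9},
    {"instructor": "B", "wcpm_gain": 7},
]

for instructor in ("A", "B"):
    gains = [r["wcpm_gain"] for r in progress_records
             if r["instructor"] == instructor]
    print(f"Instructor {instructor}: mean gain {mean(gains):.1f} wcpm (n={len(gains)})")
```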

4. ‘Recalibrate’ RTI/MTSS Procedures

Every school that follows an RTI/MTSS model has its own procedures to identify students for services, document intervention plans, collect data, move students up and down the tiers of intervention and so on. The end of the school year is the perfect time to review the school’s actual RTI/MTSS practices, identify any gaps in implementation and ‘recalibrate’ to align those day-to-day practices with the expected RTI/MTSS procedures.

Data can help school staff uncover discrepancies in procedures. It is an expectation, for example, that in a ‘typical’ school, 1-5% of students might be referred to the Tier 3 RTI/MTSS Problem-Solving Team in a given school year. If, as summer approaches, fully 10% of a school’s students have been brought to Tier 3 during the current year, the school can follow up by reexamining its criteria for accepting a Tier 3 referral and how those criteria are being enforced by staff.
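
Here is a minimal sketch of that checkup in Python, using hypothetical enrollment and referral counts.

```python
# Compare this year's Tier 3 referral rate against the expected 1-5% band.
enrollment = 600        # hypothetical building enrollment
tier3_referrals = 60    # hypothetical referrals to the Problem-Solving Team

referral_rate = tier3_referrals / enrollment * 100
print(f"Tier 3 referral rate: {referral_rate:.1f}%")

if referral_rate > 5:
    print("Above the typical band: reexamine Tier 3 referral criteria.")
elif referral_rate < 1:
    print("Below the typical band: verify at-risk students are being identified.")
else:
    print("Within the typical 1-5% band.")
```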

For a real-world example of how to use this best practice for Tier 1, download the free eBook RTI/MTSS and End of School Year.

5. Recruit Fall Groups Using End-of-Year Screeners

To identify students at academic risk, most schools screen the entire building population 3 times per year (fall/winter/spring). Those data are then used to recruit students whose risk profile indicates they require Tier 2/Tier 3 academic-intervention services. While fall screening data would seem to be the logical data source to recruit fall academic-intervention groups, it presents 2 limitations:

  1. Tier 2/3 interventionists cannot begin work with students until the school has conducted the fall screening and identified groups, resulting in several weeks of dead-time when at-risk learners are not receiving intervention services.
  2. In an effort to speed formation of fall intervention groups, the school may be tempted to screen immediately after the start of school. However, students often experience a ‘summer slide’ — a predictable and temporary drop in reading or math skills over the summer. For most students, the summer-slide effect disappears after 4-5 weeks of school. Therefore, schools that screen within the first 2-3 weeks of school are likely to ‘lock in’ temporary academic deficits and falsely identify at least some students for Tier 2/3 services whose skills would have rebounded on their own.

A solution is to use the end-of-year (spring) academic screening results for 2 purposes: (1) to enter or exit students for current spring Tier 2/3 services and also (2) to identify fall Tier 2/3 intervention groups before the summer break. This approach allows academic-intervention groups to meet immediately when school resumes in the fall and encourages the school to schedule the fall screening when student skills have fully recovered from the summer regression.

6. Update the RTI/MTSS Roll-out Plan

It can take 3 to 5 years to fully implement the RTI/MTSS academic model. Schools in the midst of rolling out RTI/MTSS will find that the final months of the current school year offer a good vantage point from which to firm up plans for the next phase of implementation slated to start in the fall.

While advanced RTI/MTSS planning is always a good idea, some elements of RTI/MTSS require it. Schools seeking to overhaul their system of Tier 2 (supplemental/small-group) interventions, for instance, may need to alter multiple elements, e.g. changing the schedule for those services and training Tier 2 providers to deliver new research-based intervention programs.

For a real-world example of how to update your RTI/MTSS roll-out plan, download the free eBook RTI/MTSS and End of School Year.

7. Prepare RTI/MTSS Professional Development

While schools often do a good job of outlining and implementing a comprehensive RTI/MTSS plan, they sometimes overlook the need to provide ongoing professional development to prepare their staff to understand, accept and work effectively within the plan. As school leaders use the close of the school year to reflect on the quality of RTI/MTSS implementation and proposed next steps, they should also consider what additional training teachers and support staff require to help improve delivery of RTI/MTSS services. This professional development plan should include both the essential RTI/MTSS content to be delivered to teachers and a training calendar extending into the coming school year with opportunities in large- and small-group settings to provide that professional development.

Read examples of RTI/MTSS professional development planning.

A Final Thought on Optimizing RTI/MTSS at End of Year

A key quality for success in implementing an RTI/MTSS model is simply that schools pay attention to the details, verify that records are complete and archived, close gaps between current and best practices and look forward to the next steps in the unfolding RTI/MTSS roll-out plan. The end of the school year is a strategic time for schools to focus their attention — make productive use of this pivotal moment between the recently elapsed and coming school years!


As you reflect on your RTI/MTSS program this spring, consider how Frontline RTI & MTSS Program Management can help you collect and analyze the data you’ll need to make next year even better for staff and students.

How School Districts Can Provide A Great Applicant Experience

Would you want to go through your school district’s hiring process?

A poor applicant experience is a problem even in the best of times. But with the teacher shortage set to worsen as fewer new educators enter the profession, it’s more important than ever that candidates have a delightful experience from the moment they apply to your district.

After all, an exceptional applicant experience means three things for your district:

  • Fewer barriers to application means a larger applicant pool
  • Candidates with a good experience are more likely to encourage other educators to apply
  • New hires are more engaged and prepared to succeed from day one

So, what can you do to ensure that your hiring process is a positive experience?

1.    Walk a mile in the job-seeker’s shoes

Look at your hiring process from the perspective of the job-seeker at every stage. Try to find where applicants might hit snags, or what steps might be unnecessarily time-consuming. Remember that great teachers may already be working full-time in another district and may not have hours to spend on your application process.

Are open positions easy to find online, or are they hidden away on your district website? Is it clear to applicants which materials they need to submit, and how to submit them? Do they have to send printed materials through the mail, or can everything required be uploaded electronically? Make sure to look beyond the application itself. Does it take several rounds of phone tag to schedule interviews? Will references be contacted several times by different people from the same district? An applicant tracking system can help streamline the hiring process for both you and applicants, making it a more pleasant experience all around.

2.    Communicate

According to Forbes, over 70 percent of online applicants never receive even a generic reply from would-be employers. Look for ways that you can be more communicative with applicants from the beginning, even if you don’t have time to write to each applicant individually. If you’re one of the few school districts that makes a point to acknowledge each and every teacher application — even if it’s an automated form response that isn’t personalized in any way — you’re already ahead of the game. It’s okay to have a template response, especially if you can make it both informative and interesting. It’ll set your school district apart and put the “Human” back in “Human Resources.” Plus, it’s just good etiquette.

Beyond the initial application, the more transparent you can be with job-seekers, the better. You may have a qualified, talented teacher candidate who is a great cultural fit, who never knows they’re one of your top picks. If they don’t hear anything from you for weeks while you work through bureaucratic internal processes, they may assume they won’t be hired and accept another district’s offer — even if working for your school district was their dream job.

3.    Set reasonable timeframes

Similarly, keep in mind that teachers need to have a plan. If a great teacher applies for a position in early May, but you aren’t able to make an offer until the end of August, that’s too late! It’s not fair to expect an exceptional candidate to wait months to receive an offer from your district, especially if other, faster districts have already made their hiring decisions. Like anyone else, teachers need to know where they will be working and what they will be doing ahead of time. This is especially true if they would need to relocate from another area in order to work in your school.

Be upfront about how long applicants can expect to wait to hear back. Hiring teachers will always have a degree of uncertainty — current teachers may decide not to return for the next year, or funding may not come through — but try to stick to the promised timeline as much as possible. If something does come up, make sure to communicate with candidates so they know where they stand.

4.    Seek feedback

You don’t need a “secret shopper” to get the inside scoop on your applicants’ experience with your hiring process. Gather feedback on an ongoing basis from both new employees and candidates who were not hired, so you can continually improve your hiring process. Remember: this isn’t a “once and done” thing. Regularly implementing changes based on honest feedback will help your hiring process evolve.


When Cabot Public Schools decided to move to a new applicant tracking system, providing a great applicant experience was one of their top priorities. That’s why they chose Frontline Recruiting & Hiring.

Read the Case Study  

Your Section 504 Eligibility Questions Answered

Understanding Section 504’s eligibility criteria is crucial to compliant Section 504 accommodation delivery and implementation. Unlike special education, Section 504 does not expand rights or change the educational experience. Rather, Section 504 protects the general education experience and ensures that students are not discriminated against on account of disability.

Due to the lack of federal and state regulations regarding Section 504, determining eligibility can be a daunting process involving logistical and legal questions. As an education law attorney representing schools, I see these as among the most common questions educators face.

What are the Section 504 eligibility criteria? How should they be used?

The Section 504 eligibility criteria involve two questions:

  1. Does a student present with a physical and/or mental impairment?
  2. If so, does that physical and/or mental impairment substantially limit one or more major life activities?

How Are the Criteria Defined?

Let’s define those terms:

  • “Physical impairment” ― means a diagnosis affecting one or more of the body’s physical systems, such as the neurological, musculoskeletal, respiratory or cardiovascular systems, or the special sense organs. Section 504’s broad protections cover all of the body systems. Virtually any diagnosis, affecting any system, constitutes a physical impairment.

  • “Mental impairment” ― means a diagnosis involving virtually any mental disorder listed in the DSM-5, including anxiety, cognitive impairment, brain syndrome, dystonia, oppositional defiant disorder, obsessive compulsive disorder, attentional difficulties or somatoform disorder.

Given the broad nature of these definitions and how many different diagnoses committees may face, the diagnosis itself typically isn’t the tough issue in determining eligibility. More important, committees need to identify data that shows the student’s need.

What are “Major Life Activities?”

After establishing a diagnosis that affects a body system or the mind, Section 504 committees need to identify the impact of that difficulty on a major life activity. Major life activities are also construed broadly, involving everyday actions like walking, talking, seeing, breathing, hearing, caring for oneself, working, eating, processing and learning – any activity one engages in regularly.

Is Section 504 a “Consolation Prize?”

Applying a real understanding of the eligibility criteria safeguards against the “consolation prize” phenomenon – offering Section 504 accommodations instead of special education. Unlike special education, Section 504 doesn’t provide clear rules or regulations that define its decision makers or the eligibility decision-making process itself. In the absence of such rules, getting Section 504 right means understanding and applying the right eligibility criteria, not simply issuing plans to all former special education students.

Are Any Students “Presumptively Eligible” for 504 Plans?

There is no presumptive eligibility under 504 – simply presenting a diagnosis does not “get you” a plan. In order to effect a compliant process, schools need to consistently implement the right eligibility criteria.

Educators need to recognize that the eligibility process requires consideration of the impact of disability. That means, in order to be eligible for accommodation under a Section 504 plan, students must show symptoms.

When determining Section 504 eligibility in a school, should we only look at activities that impact “learning?”

Section 504 eligibility is broad and involves consideration of all major life activities, not just “learning.” As we continue to discuss, Section 504 protects against disability discrimination so that all students, both disabled and non-disabled, may access the same education. The major life activities that may be considered through the criteria are broad. Learning is certainly involved, but so is walking up the steps to get on a school bus, sitting on that school bus, developing appropriate peer relations so the student may ride that bus successfully to school, climbing off that school bus and being able to walk down the sidewalk to get into the school, and walking through the hallways to the classroom where the student will engage in learning.

To Sum Things Up

So, where are we? Remember that the eligibility criteria for Section 504 are different from the eligibility criteria for IDEA. Remember there are criteria. Remember the eligibility criteria involve two primary questions about physical and/or mental impairments, and that any such impairment must substantially limit one or more major life activities. Without data that satisfies the eligibility criteria, you should not find students eligible or provide plans. However, at all times, and with all students, we may never treat any student differently.


Do you have the data you need to make legally sound Section 504 eligibility determinations? Consider how Frontline 504 Program Management can help you efficiently collect, use and securely archive your student data.

Measuring the Impact of Professional Learning

Investing in professional learning for educators comes with the expectation that you’ll be able to evaluate the gains from that investment. But all too often, rolling out a learning plan without an evaluation method means that the second part never happens.

That’s why the Frontline Research & Learning Institute, along with Learning Forward, worked with six districts to examine the best way to measure the impact of professional learning.

Those districts were:

  • Boston Public Schools, Massachusetts
  • Greece Central School District, New York
  • Jenks Public Schools, Oklahoma
  • Metro Nashville Public Schools, Tennessee
  • Prior Lake Savage Area Schools, Minnesota
  • Shaker Heights City Schools, Ohio

Each district brought an existing professional learning program to this small-scale study with the intention of collaboratively determining which changes to the programs would most likely benefit educators at their school. Their five essential findings might surprise you.

1. Plan evaluation as a holistic part of the program

To evaluate the effectiveness of professional learning programs on both educators and their students, the program needs to be designed with clear, targeted, measurable outcomes and indicators of success. In wrestling with this fact, the districts fell into three buckets of program development:

  • Existing programs with evaluable outcomes in place.
  • Existing programs that required reworking for more clarity of outcomes.
  • In-development programs that didn’t yet have targeted outcomes.

Similar to how educators develop units of learning by planning their final assessments first, these district leaders found it was necessary to do the same. While a “one-off” professional learning event may seem like a good idea, the bigger question is: how does it all fit together?

2. Develop targeted outcomes with existing data sources in mind

In developing their target outcomes, Boston Public School leaders looked to the multiple data sources they already had at their fingertips in the district. They considered which might be useful as indicators of the impact of their new professional learning program. Repurposing data sources in this way can help you more easily and quickly evaluate the program without creating a new data burden (Killion, 2018).

The downside: data sources already in place are only approximate measures of the targeted outcomes of the professional learning program. So, district leaders worked to analyze and interpret the results, then form conclusions about the impact of the program.

Weigh the importance of exacting, tailor-made data for your program against the functional ease of using existing data sources. Consider a combination of the two, if necessary.

3. Consider a systems approach to better evaluate effectiveness from its inception

When professional learning within a district or school lacks coherence — that is, when each event or initiative feels ad hoc or separate — it’s pretty difficult to measure its effects. That’s why Jenks Public Schools took the opportunity to revise their program using a systems approach. They reworked the planning process to ensure that professional learning met their criteria for quality.

Using this planning model, the district and school leaders aligned professional learning with identified needs, provided adequate implementation support, and monitored implementation to increase the likelihood of results.

4. Continually evaluate both new and existing programs

The reasons for evaluating a new program are obvious:

  • Determine if it’s worth the investment to continue into a second year
  • See how to refine the program
  • Incorporate feedback from participants

But what about after the first year or so, when you feel it’s going well?

The districts in the study found it helpful to continue to evaluate existing programs in the following ways:

  • Run annual data collection from multiple sources about the program to inform continuous upgrades, even after it’s been refined for a year or so.
  • Go a step further to measure the impact of the program on student achievement, connecting the dots between program outcomes and changes in student learning.

Metro Nashville Schools collected data using the Collaborative Inquiry Process in partnership with REL Appalachia, a system for collecting, analyzing, and using a variety of data to improve programs.

These modes of evaluation helped the districts stay freshly engaged with programs, even if they had been running for more than a year or so. Continuous data collection meant they could go a step further in guiding teachers who were implementing learnings, too.

5. Use reliable and flexible systems for data collection and evaluation

Useful evaluations can take time, resources, and effort. Districts with data systems that allow them to gather, track, analyze and access data quickly are able to monitor the program’s success or needed adjustments more easily.

Data systems that generate analytics using multiple types of educator and student data allow district leaders to see the best way to adjust a program more clearly. This ease shifts the focus from collecting data to analyzing it — a much more effective use of time.

In this small-scale study, the six districts looked at their professional learning programs, however established or nascent, to collaboratively examine methods of evaluating those programs. The most important takeaways for running an impactful professional learning program: roll evaluation into the holistic design process, begin with targeted outcomes, take a systems approach, continually evaluate the program and use a solid data system.

5 Reasons You Should Be Evaluating Your Professional Development Programs

Wouldn’t it be great if we knew when our professional learning programs were successful? What if we knew more than just the fact that teachers liked the presenter, were comfortable in the room or learned something new? Wouldn’t it be better to know that teachers made meaningful changes in teaching practice that resulted in increased student learning?

We can ascertain all of this and more by conducting program evaluation.

Every day we engage in random acts of evaluation – multiple times per day, in fact. When we get dressed in the morning, we implicitly ask ourselves a set of questions and gather data to answer them.

  • Will it be warm or cold?
  • Will the temperature change throughout the day?
  • Will there be precipitation?
  • Which clothes do I have that are clean?
  • What do I have on my schedule? How should I dress for that?

Of course, getting dressed is pretty low stakes. At worst, we might find ourselves too warm or cold, or under- or overdressed for an occasion. Buying a new car, however, is a higher stakes proposition. We could end up with a lemon that costs us a lot of money, or even worse, is unsafe. When we evaluate, we are more or less systematic about it depending on the context. For the car, we may create a spreadsheet and collect data on different models, their price, performance, safety features and gas mileage. Or, at the very least, we would read up on this information and note it in our heads.

But what about our professional learning programs? What does it mean to evaluate a program?

What is Program Evaluation?

Program evaluation is applying systematic methods to collect, analyze, interpret and communicate data about a program to understand its design, implementation, outcomes or impacts. Simply put, program evaluation is gathering data to understand what’s going on with a program, and then using what we learn to make good decisions.

Program evaluation gives us key insights into important questions we have about our professional learning programs that help inform decisions about them. For example, we may want to know:

  • How well does the program work? Is it changing teacher practice?
  • Is the program meeting the needs of the participants?
  • To what extent has there been progress toward the program’s stated objectives?
  • Do we have evidence of student learning attributable to the program?
  • How can the program be improved?

Part of the innate beauty of program evaluation lies in its abundant flexibility. First, there are numerous forms and approaches, and second, evaluation can be conducted both before and during a program, as well as after the program ends.

Systematic Methods

What do we mean by systematic methods? Much like a high-quality lesson or unit plan, program evaluation is the result of good thinking and good planning. It’s knowing what we want our programs to accomplish and what types of assessment will help us determine if we are successful. Being systematic means:

  • Ensuring that we understand what our programs do and what they are expected to do for both educators and students
  • Identifying which questions need to be answered
  • Knowing what data we need to collect to answer those questions
  • Identifying the primary users of our evaluation results – those who rely on the answers to be able to make good decisions

There are myriad strategies for collecting data. Surveys, interviews or focus group interviews, and observations or walkthroughs are common methods. We can also look at student achievement data, student work samples, lesson plans, teacher journals, logs, video clips, photographs or other artifacts of learning. The data we collect will depend on the questions we ask.

5 Reasons to Evaluate Professional Learning Programs

Learning Forward offers a set of standards, elements essential to educator learning that lead to improved practice and better results for students. The Data Standard in particular calls for professional learning programs to be evaluated using multiple sources of data. While adhering to a set of standards offers justification for action, there are specific advantages of program evaluation that substantiate its need:

  1. Evaluating professional learning programs allows leaders to make data-informed decisions about them. When leaders have evaluation results in hand they can determine the best course of action for program improvement. Will the program be expanded, discontinued, or changed?
  2. Evaluating professional learning programs allows all stakeholders to know how the program is going. How well is it being implemented? Who is participating? Is it meeting participants’ learning needs? How well is the program aligned to the Every Student Succeeds Act (ESSA) definition of professional learning?
  3. Evaluation serves as an early warning system. It allows leaders to peek inside and determine the degree of progress toward expected outcomes. Does it appear that program goals will be achieved? What’s going well? What’s going poorly? Evaluation uncovers problems early on so that they can be corrected before the program ends.
  4. Program evaluation helps us understand not only if the program has been successful (however “success” is defined) but also why the program is or is not successful. It allows us to know what factors influence the success of the program.
  5. Program evaluation allows us to demonstrate a program’s success to key stakeholders such as boards of education and community members, or potential grant funders. Evaluation results allow us to document accomplishments and help substantiate the need for current or increased levels of funding.

All Evaluation is NOT the Same

The word “evaluation” can strike fear into the hearts of teachers and administrators alike. People naturally squirm when they think they are being evaluated. Although personnel or employee evaluation shares some characteristics with program evaluation — such as collecting and analyzing data, using rubrics, assigning a value or score and making recommendations — they serve entirely different purposes.

Program evaluation focuses on program data, not on an individual’s personal performance. The focus of the evaluation is on how the program performs. In education, we take great care not to let program evaluation results influence personnel evaluation. And remember the example about buying a car? That’s product evaluation, and it too shares traits with program evaluation but serves a different purpose.

Are you convinced that program evaluation will help you generate insights that inspire action to improve professional learning in your school or district?

Next up in the series

In Part 2, we’ll take a deeper dive into program evaluation and understand the big picture of how evaluation is conducted, the forms it can take, and how it relates to research.

3 Non-traditional Professional Learning Ideas for Teachers

Tips to Boost Teacher Agency

Twenty miles east of Portland, Oregon sits Gresham-Barlow School District. With 18 schools, “We’re too small to be big, and too big to be small,” says Sarah Hayden, an instructional coach at the district. Sarah and her colleagues work one-on-one with teachers, but also work at the district level to provide support where needed.

They wear many hats, and just like many districts, they’re asked to do a lot with limited resources. In response, her team has come up with some creative ways to provide educator-driven, make-an-honest-to-goodness-difference-in-the-classroom professional learning opportunities for teachers.

 [Note: this interview has been edited for brevity and clarity.]

Collaboration Walks

FRONTLINE EDUCATION: We’re here today to talk about something you’re doing at Gresham-Barlow called “Collaboration Walks.” Tell me about that — what are they?

SARAH HAYDEN: Collaboration walks are something that we started in our district about three years ago to promote teacher voice, teacher agency and teacher professional growth. On a given day, we get about ten teachers together and explore different classrooms around the elementary schools in our district. Then, teachers sit together to talk and collaborate with each other about what they’ve seen in the classrooms — how they can take what they’ve learned and internalize it.

FRONTLINE: What led to you starting these? 

SARAH: We wanted our rubric (Charlotte Danielson’s Framework for Teaching) to be a model for professional growth and not just evaluation. We took all the numbers away from the rubric, and we used it to talk about instruction in a way that was meaningful, safe and promoted growth for teachers. We even rebranded [the walks]. Instead of ‘Learning Walks’, we call them ‘Collaboration Walks.’ We talk about how the collective voice of the teachers in the room is what is needed for everyone to grow — this is not something that’s top-down. It’s a collaborative way to talk about teaching.

Want more details? Listen to our full interview with Sarah Hayden about Collaboration Walks at Gresham-Barlow School District.

 

FRONTLINE: Describe the process — who’s there? Where do you go? What do you do?

SARAH: There are usually about ten teachers who sign up, usually within a day and a half. We meet at one of the schools in the morning, and we talk about our goals for the day: What do we want to get out of today? How are we going to be reflective? How are we going to move forward collectively?

We focus on two or three of the different standards in our rubric and we ask, “What does best practice look and sound like in the classroom? What does best practice surrounding discussion and question techniques look and sound like in the classroom? What does setting purposeful intentions for students look like and sound like in the classroom?” And in a collaborative way, we come up with, “What does best practice with these indicators, these standards, really mean?”

Then we go into the classrooms with this lens in mind. Teachers bring their cameras, they talk with students, they work alongside teachers. We’re in a classroom for anywhere from 15 to 20 minutes, and we observe. Then comes the most exciting part of what we do: we sit and we talk, and we talk, and we talk about teaching — what I do in my classroom, what you do in your classroom, what we observed in the teacher’s classroom.

We keep it really safe and non-evaluative. We use sentence stems that just say, “This is what I observed… This is what I saw… This is what I wonder….” Through this process of collaborative discussion come amazing points about how teachers are going to move their practice forward.

FRONTLINE: What is it about the structure of what you’re doing that makes these effective?

SARAH: It is 100% teacher-driven, teacher-centered, and the entire goal is to elevate teacher voice across our system. The caliber of the teachers in our district is amazing, and if you get some like-minded individuals in a room, we can solve the problems of the world. That’s why I think it’s been so successful, because it’s about meeting the teachers where they’re at, and helping them continue on their personal journey.

Whether they’re a first-year teacher or a veteran of 20+ years in our district, every single person in that room can support their colleagues through collaborative conversations.

Reflective Conversations

FRONTLINE: Collaboration walks aren’t the only thing that you’re doing in professional learning, of course. How else are you working to make professional learning more teacher-centered?

SARAH: We’ve taken the idea of collaboration walks and we’re doing what we call “reflective conversations,” where teachers videotape themselves conducting a lesson in their classroom, and then they come to our professional development session with a trusted peer from their school or their grade level, who has also videotaped themselves.


We had one teacher comment, “In all of my 20 years of teaching, I’ve never had professional development as meaningful as what I experienced with my colleague at this professional learning.” — Sarah Hayden


Then we spend some time talking about what a reflective conversation is, and how to support your colleague in a way that promotes their professional growth. The teachers watch the videos alongside each other and use these reflective conversation skills to discuss their practice. We had one teacher comment, “In all of my 20 years of teaching, I’ve never had professional development as meaningful as what I experienced with my colleague at this professional learning.” 

FRONTLINE: What is it about the use of video for these reflective conversations that makes it important?

SARAH: Videotaping yourself as a teacher is absolutely terrifying, which is why we bring that trusted peer in. When you watch yourself teach, you are your own worst critic. And you notice everything you do on a day-to-day basis without realizing it. Once you get past, “Oh my gosh, I really sound like that?” you see exactly what you’re doing and how your students respond to you. Things that aren’t usually visible in the classroom are very visible when you watch yourself on video.

It’s a chance to go deep into what you’re doing every day, and see how the things you do affect student outcomes. It’s completely and totally career-altering. And the teachers communicated that to us, even after one video.

FRONTLINE: Can you talk more about the training you provide for these reflective conversations?

SARAH: The reflective conversations training is a day long. In the morning, we don’t watch any videos — the trusted peer and the teacher sit with us, and we talk about what a reflective conversation is. “What are ways that you can pose an open question that invites inquiry from your partner?” Because as a teacher, it’s very easy to watch a video alongside a colleague and say, “Well, in my classroom, I…” or “Have you ever tried…?” — which can stifle what that teacher needs to understand about their own practice. So we ask questions in an invitational way, where the trusted peer is asking questions so that the teacher can develop their own understanding.

The trusted peer is never telling, never answering. The trusted peer is facilitating and prompting their peer to think about their teaching in a deeper, different way. The trusted peer is the one who can help you draw out what you need to investigate about your own practice. So it’s way more meaningful when you have someone to support you in that and to ask those questions that you didn’t even know to ask yourself.

You may enjoy this hand-picked content:

White Paper: 10 Strategies to Improve Teaching with Video 

FRONTLINE: I would imagine receiving that kind of feedback is both helpful and scary.

SARAH: Exactly. As we’re setting up the day, we talk about what a reflective conversation is, and we say, “It’s rigorous. It’s not mere support group talk.” It’s talking about teaching and giving meaningful feedback. We’ve heard from our teachers that so often in our profession, this is missed. Deep, rich, meaningful feedback is missed by teachers, and they crave it. That’s why they feel so good at the end of the day — they’re finally getting something that’s going to help them, that’s going to take them to the next level.

You’re the trusted peer for your colleague, and then they are the trusted peer for you. So not only are you getting, you’re also giving. Additionally, the teachers have found that being the trusted peer and watching their colleague’s video and asking questions allows them to be reflective of their practice as well.

Inquiry Teams

SARAH: I think one of my favorite things that we’re doing in our district is what we call “inquiry teams.” Inquiry teams are a way for us to allow teachers to experience professional learning completely and totally on their own terms.

In teams, teachers put together a proposal about something that they want to learn about. It could be anything from mindfulness in the classroom to new math strategies for STEM to exploring questions of equity within our school. Then, we put them together with a facilitator and give them time and space to inquire about what they want to learn about in a meaningful way, and then share what they’ve learned with their colleagues across our district.

FRONTLINE: How did they do that? How did they share out those practices?

SARAH: At the end of the inquiry team process, the teams and the facilitators put together a ten-minute presentation about what they learned, and then we have an inquiry celebration where teachers can go and learn from their colleagues in a forum. There’s cake involved, which always is exciting, and then they share out their project and what they’ve learned.

This is our second year of inquiry teams, and last year, some of the presenters said, “You know what? We learned a lot about what doesn’t work through inquiry. We hit some roadblocks, which was completely meaningful for our way of learning. Investigating those things and finding out what doesn’t work was the most beneficial type of learning that we could have done.”

It’s just…it’s amazing. Going over just the inquiry proposals this year was inspiring. What teachers are grappling with, what they want to learn, how they feel that they can move their practice forward, and then seeing how they bring their knowledge through their inquiry team back to their buildings, back to the school district as a whole, is so exciting.

Why Substitutes Work in Your District (or Not) – and What You Can Do About It

Ever feel the pinch of not having enough substitutes to fill in for absent teachers? You’re not alone.

Substitute shortages continue to be a top concern for school districts across the country, and there are plenty of theories why:

  • Teacher shortages make it easier for new educators to find full-time jobs
  • Wages are too low to make substitute teaching an attractive choice
  • People move into other careers in times of economic prosperity

But data from the Frontline Research & Learning Institute suggests that the issue might not be simply a shortage of substitute teachers in the pool — instead, perhaps our current substitute teachers just aren’t working enough. According to data from our annual report on national employee and substitute absences, 46% of substitutes didn’t work at all during the 2016-17 school year. Those who did worked an average of 33.3 days. Fill rates that year averaged 84.3%, indicating that there’s still more work to be done in finding enough substitutes to cover employee absences.
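
For readers who track these numbers in their own districts, here is a minimal sketch of how the two metrics cited above can be computed. All counts and substitute IDs below are hypothetical.

```python
# Fill rate: share of absences needing a substitute that were actually filled.
# Workload: average days worked among substitutes who worked at all.
absences_needing_sub = 1000   # hypothetical annual count
absences_filled = 843

days_worked = {"sub_01": 45, "sub_02": 0, "sub_03": 22, "sub_04": 0, "sub_05": 33}

fill_rate = absences_filled / absences_needing_sub * 100
active = [d for d in days_worked.values() if d > 0]
never_worked = (len(days_worked) - len(active)) / len(days_worked) * 100

print(f"Fill rate: {fill_rate:.1f}%")
print(f"Substitutes who never worked: {never_worked:.0f}%")
print(f"Average days worked among active substitutes: {sum(active) / len(active):.1f}")
```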

So, what influences substitutes’ decision-making process when taking jobs? To find out, the Center for Research and Reform in Education at Johns Hopkins University used Frontline Education’s data to explore substitute teacher preparation and working patterns in greater depth.

The study’s data sources included:

  • Human resources data from 2014 to 2017, including over 1.5 million substitutes from over 6,000 K-12 organizations
  • A survey of over 5,000 substitutes from over 2,000 organizations and fifty administrators

What affects how often substitutes work?

First, let’s look at what correlates with taking more substitute teaching jobs:

  • Shorter commute times
  • Availability of work
  • A desire to work in their community or be near their children

It makes sense that having more available jobs means substitutes accept more work, and that those with shorter commute times take more jobs — after all, most people would prefer not to have a very long commute.

We also found that substitutes who teach as a primary source of income work more than those who teach to earn extra money on the side, and that substitutes with strong preferences for specific subjects were more selective and accepted fewer positions.

How do substitutes choose where to work? (Hint: it’s not what you might think!)

At the District Level:

With many substitutes working in multiple school districts, you might be wondering what compels them to choose one district over another. When we asked substitutes and administrators what made substitutes want to work for a particular district, both groups cited availability of work as a leading reason.

[Chart: Reasons for absences]

But that’s the only top reason they agreed on.

In addition to availability of work, substitutes reported that district culture (48%), pay (33%), and administrative support (29%) were important in selecting a district. In contrast, a much higher percentage of administrators (62%) believed that pay was a top influencing factor for substitutes in selecting a district, and a lower percentage of administrators believed that district culture (38%) and administrative support (12%) were top factors for substitutes.

That raises the question: does every district leader fully recognize the importance of school culture and administrative support when it comes to recruiting and retaining substitutes?

Maybe not. Only 8% of organizations surveyed provided substitutes with ongoing feedback, and just 10% offered professional development. Meanwhile, 36% of organizations reported providing no support to substitutes at all.

At the school level:

If you have an employee absence management system that gives you access to your district’s absence data, you may have noticed that some schools have higher fill rates than others. That might not only be due to lower absence rates: substitutes tend to prefer specific schools based on certain characteristics.

As we’ve seen before, short commute times and locality are a major influence — substitutes prefer to work locally and in their own communities. But school culture also plays a role, coming in third on the list of reasons substitutes prefer certain schools. Welcoming substitutes into the school community can help encourage more substitutes to work in schools with low fill rates.

What about providing training for substitute teachers?

While substitutes and administrators alike agreed that training for substitutes is important, our findings suggest that substitute training and preparation have not been fully utilized in the vast majority of districts.

According to the survey, 45% of substitute teachers reported receiving no training at all, and only 7% reported participating in district orientation training. Moreover, the majority of administrators reported that the amount of job skills training was inadequate and that they were dissatisfied with the current format of the training.

A district that invests in its substitutes through professional development opportunities is likely one with a positive, supportive culture, which we have identified as a strong influence on substitute decision-making.

What does this mean for school administrators?

Administrators can encourage substitutes to take more jobs in order to raise fill rates in several ways.

Take stock of your district and school culture — how are substitutes welcomed into the community? Fostering a supportive culture across every level of the organization will attract more substitutes and inspire them to take jobs in your classrooms.

Provide more administrative support to substitute teachers. Chances are, you don’t need to hire more staff — just look for areas where you can streamline inefficient processes and reclaim time.

Invest in providing support to substitute teachers, and consider providing ongoing feedback, professional development or other substitute training opportunities.

Encourage teachers to report absences as far in advance as possible. Longer lead times have a positive effect on position acceptance, and advance notice also gives teachers time to prepare the thorough lesson plans substitutes appreciate.

Target your recruiting efforts to be as local as possible, and draw a connection between substitute teaching and working with the community.

Finally, if your budget allows, consider your district’s substitute wages. Pay isn’t the top reason why substitutes choose to work in a particular district, but higher wages are still an effective incentive — especially if neighboring school districts already pay more.

6 School Districts Challenging How We Attract and Retain Educators

The hard work of attracting, engaging, growing and retaining quality teachers has got to be one of the greatest challenges facing K-12 right now.

Many teachers don’t feel engaged, equipped or empowered. They’re leaving the field at alarming rates, and new graduates are not taking their place fast enough, causing a widespread teacher shortage. Meanwhile, administrators are often over-burdened with paperwork and compliance. Budget cuts and limited resources compound the problem.

So what’s the solution? To be clear, it’s not dependent on the HR department to have all the answers. That’s just one piece of the puzzle.

Strategic, forward-thinking school districts are collaborating across all teams to take a holistic look at human capital management. What exactly does that look like?

  • Using data to drive strategies, from recruiting to professional learning
  • Setting goals and benchmarking progress
  • Collaborating across departments to find solutions
  • Empowering teachers with the tools to be more efficient

These districts are K-12 innovators, and we can all learn from their stories.

Read their stories  

4 Ways to Support Staff Working with English Language Learners

Guest post by Sara Smith-Frings, Former Director of Language Programs

Being a director of programs supporting English language learners (ELLs) is truly a balancing act. Just like a juggler with spinning plates, we always have our eye on the prize of student success. We know, however, that unless each staff member involved is fully committed to supporting the ELLs, one of those plates might come crashing to the ground. So, how can we keep those plates spinning? And just who are the individuals represented by these “plates”? Who are those staff members that contribute to ELL success?

1. Support Classroom Teachers Working with ELLs

This is the front line, so to speak. These are the individuals who are closest to the academic needs of ELLs. Teachers face a multitude of challenges in today’s schools: in an average classroom, there are students with differing needs ― and having ELLs in class adds to the instructional challenge.

  • Ensure teachers have information about each student that would assist with instruction, such as:
    • Home language
    • Number of years in the U.S.
    • Level of English proficiency
    • Academic history

I recall a campus where teachers were provided essential and pertinent information regarding their ELLs. Of course, instruction and learning still differed from classroom to classroom, even though teachers had the same information.

When the information was used correctly, the students at this campus felt supported academically and emotionally. The latter was as important as the former. Students who are stressed about their academic situation find it difficult to learn.

One language-arts teacher grouped students by language level, provided carefully selected academic material, and used thoughtfully implemented teaching strategies and accommodations, such as word walls and materials modified according to language level. The teacher was friendly and kind, and provided the necessary structure for learning to take place. Students were familiar with the teacher’s classroom procedures and expectations for learning, alleviating the stress that comes from the unknown. Do you think the teacher’s management style affected student performance?

In a room right down the hall, the science teacher didn’t review or utilize the information provided for his ELLs. He used the state-provided textbook and lectured without making any language accommodations, and his demeanor was stern. How do you think this was reflected in the academic performance of the ELLs?

In the above scenarios, students whose stress is lessened ― in this case by a caring teacher making appropriate accommodations ― tend to have more positive academic experiences and outcomes.[1]

How could the science teacher be better equipped to support ELLs? What if he had been provided with a mentor to help implement teaching methodologies to meet the needs of his ELLs? What if he had the opportunity to observe teachers who successfully taught ELLs? Were there science teachers with the same positive results for their ELLs as the language-arts teacher? If not, could he have observed the language-arts teacher? This leads to the next point.

2. Provide Meaningful Professional Development

Teachers working with ELLs need access to meaningful professional development. There are many theories of adult learning — too many to detail here — but suffice it to say, PD should be targeted to the teacher’s immediate needs and job-embedded, if possible.

This could include opportunities for teachers to collaborate around the needs of their students. Had the science teacher in the previous example been provided with the opportunity to collaborate with other staff members, the students would have benefited as their teacher learned new strategies for supporting ELLs.

Collaborative cultures create a positive work environment for teachers and benefit students. Unfortunately, opportunities for collaboration are not always the norm. Teachers have hectic schedules and collaboration can feel like “one more thing” if time isn’t built into the school day. It is easy for a teacher with many responsibilities and demands to simply stay in his/her classroom and not seek out others. Consider building development time into the schedule, either through early/late release once a week or professional development days during the school year.

3. Think Strategically About ELL Paperwork

Take into consideration any paperwork associated with your ELL program. Who does the paperwork? How much time is involved? It is common for ELL-program paperwork to be complex due to state and federal regulations.

In my former district, a teacher at each campus was responsible for completing and maintaining the ELL paperwork. When a classroom teacher was designated for this role, s/he was provided extra time, on top of classroom duties, to complete the paperwork.

I remember the Tale of Two Middle Schools: one had a teacher who was given two class periods to keep up with the paperwork for the ELL program. On days that she had no paperwork, she would reach out to the content teachers of the ELLs, checking on the progress of the students and ensuring classroom teachers had what they needed to fully support students. At the other middle school, the ELL teacher responsible for paperwork did not have enough time during the school day for paperwork, except for the usual conference period or lunchtime. Needless to say, paperwork was not in the best order at the second campus, and student performance suffered.

4. Support Campus Administrators

Without the support of campus administrators, ensuring the academic success of ELLs can be an uphill battle. These are the individuals who make sure the best personnel, processes and procedures to support ELLs are in place. How do we support administrators so that those plates keep spinning?

Ensure principals are informed of their ELL population and how student performance might affect their campus academic ratings. Why is this important?

  • They are ultimately the ones who provide the support mechanisms for their staff:
    • How the information about ELLs is communicated and by whom
    • Time for their staff to complete paperwork
    • Opportunities for professional development
    • Selecting the staff to instruct the ELLs

Summing Up the Importance of Role-Based Support

Those who go into education do so to promote student learning and to make a difference in the lives of children. Providing appropriate support to teachers raises the level of instruction for ELLs. Providing purposeful support to all staff, including teachers and administrators, will ultimately lead to the desired academic outcomes for students.

[1] Abukhattala, I. (2013). Krashen’s five proposals on language learning: Are they valid in Libyan EFL classes? English Language Teaching, 6(1). Retrieved from https://files.eric.ed.gov/fulltext/EJ1076806.pdf

Learn more about Frontline English Learner Program Management

5 Tips for Understanding and Avoiding Bias in Teacher Performance Evaluations

Bias is normal and universal. We all perceive the world differently, and interpret what we observe differently. Our experiences shape our views and vice versa. Although we cannot be free of bias, if we can acknowledge and understand our biases, we will be better able to overcome their effects.

However unintentional, our biases as evaluators of teachers’ performance can interfere with the accuracy and reliability of evaluations. Teachers won’t trust and use evaluation results for improvement unless they are convinced the observations are accurate, truthful and justifiable.

Error in evaluation, or in any measurement, is inevitable. Human performance – including teaching – is especially elusive to measure, and the validity of evaluators’ ratings is compromised by a number of factors (Bejar, 2012).

Ideal:

The scores evaluators assign to teachers only reflect:
  • The true quality of performance

Reality:

The scores evaluators assign to teachers also depend on:
  • The quality of evaluators’ understanding of the performance rubric
  • The quality of the evaluators’ interpretation of teachers’ performance
  • Fatigue and other factors that can influence evaluators
  • Environmental conditions
  • The nature of performance previously scored

Here are 5 common bias issues in teacher observation and evaluation, and proposed solutions to overcome them:

Issue 1: Rater Personal Bias

This bias occurs when evaluators apply idiosyncratic criteria that are irrelevant to actual teacher performance. Often without realizing it, evaluators give higher ratings to teachers who resemble them or have characteristics in common with them — for instance, certain beliefs or ways of getting things done which are not essential to an educator’s effectiveness.

Likewise, the evaluator might give a lower rating because the teacher has different preferences for instruction, even though the instructional delivery is effective. If the teacher is rated too high or too low based on a rater’s personal bias, she will not know how to improve her teaching because she will not understand where the rating really came from.

Examples:

  • “This teacher reminds me of myself when I started teaching, so I’ll give him a higher rating.”
  • “The teacher moves around a lot. I prefer to be more stationary when I teach.  I’ll give her a lower rating.”

Solution:

Training evaluators on objective ways to collect evidence from multiple sources, against uniform, research-based performance standards, will help overcome this bias. When evaluators let their own judgments get in the way of accurately evaluating teachers, training can help them be more objective.

Issue 2: Halo and Pitchfork Effect

The halo and pitchfork effect can arise when early impressions of the educator being evaluated influence subsequent ratings. In the halo effect, this impression tends to be one that is too favorable. For example, let’s assume a principal has a positive impression of a teacher who is professionally dressed. Even if the actual observation of the teacher suggests deficiencies in the teacher’s performance, the evaluator might use more leniency than with other teachers who may not be dressed as professionally.

On the other hand, if the evaluator has a negative impression of a classroom where students scramble around the room and chat noisily minutes before the lesson starts, the evaluator might then have less tolerance for that teacher, even when students are on task and engaged when the lesson begins. This would be an example of the pitchfork effect.

It may seem that the halo effect could help teachers, but if their ratings are inflated, they will not know how to improve and develop their instructional skills. And if ratings are unjustly deflated, teachers may grow disheartened because they don’t feel they are getting a fair assessment of their true abilities.

Examples:

  • “You were very professionally dressed and well-spoken, so I’ll give you the benefit of the doubt if I see deficiencies in your classroom.” (Halo)
  • “The kids were rowdy and noisy as the lesson started, so now I look for flaws when I observe you.” (Pitchfork)

Solution:

To help counter this issue, evaluators should be trained on objective ways to collect evidence on uniform, research-based criteria. Multiple evaluators also should be used, so that various perspectives are included. These solutions will help prevent an evaluator from rating a teacher inaccurately based on his or her own impressions.

Issue 3: Error of Central Tendency

Central tendency is a bias in which evaluators tend to rate all teachers near the middle of the scale and avoid extreme scores, even when such scores are warranted. This is a very common issue and can happen for various reasons: a desire to avoid hurting anyone’s feelings, for example, or the worry that teachers will be upset if they realize their ratings are different.

If everyone receives the same rating, improvement is difficult. It discourages those teachers who are performing at a highly effective level, while giving false confidence to those who need significant improvement because their current performance is not meeting student needs. Essentially, the rating can perpetuate ineffective teaching practices.

Examples:

  • “We’re all the same…and we’re all acceptable!”
  • “I don’t want to upset anybody, so I am not going to differentiate and am going to rate everyone in the middle.”

Solution:

One solution is to train evaluators to distinguish between the various ratings on the scale. Evaluators also should be trained on using precise feedback based on data-generated evidence. This is done formatively so the teacher can continually improve. These solutions help teachers receive accurate, helpful ratings rather than always being rated in the middle.

Issue 4: Error of Leniency

Leniency error occurs when evaluators assign unearned high ratings to a large proportion of teachers. For instance, they might rate all or most of their teachers as highly effective, even when teaching performance or student growth and achievement measures do not justify these ratings.

While the reasons for this particular error are often well-intentioned, it causes problems similar to the error of central tendency. Leniency can frustrate high-performing teachers and keep lower-performing teachers from receiving the support they need to improve.

Examples:

  • “Everyone is superior…or better!”
  • “We are living in the fictional town of Lake Wobegon, where everyone is above average!”

Solution:

Train evaluators on distinguishing between the various rating levels so they can score teacher performance based on pre-defined criteria and the actual evidence collected. Evaluators likely rate teachers too highly when they do not clearly understand the differences between the ratings. Extra training will help them see the difference between effective and highly effective.

Issue 5: Rater Drift

With rater drift, evaluators begin with a level of agreement on observations and ratings, but then gradually drift apart as they apply their own spin to various criteria. Rater drift can happen at a collective level. For example, all evaluators might initially agree on what “student engagement” means, but over time come to define it differently: one evaluator might start to base it on how many students are looking at the teacher, while another looks at how many questions students ask and answer, and yet another focuses on student work from the lesson.

Rater drift also can happen to evaluators individually. A 2015 study by Casabianca and colleagues examined the ratings evaluators gave teachers based on classroom observations. In the beginning, raters gave high scores; as time went on, however, they issued lower scores for the same quality of teaching, ultimately dropping from about the 84th percentile to the 43rd even though teaching quality had not changed.
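
As a rough illustration of how a district might watch for this pattern, here is a minimal sketch in Python that compares each evaluator’s average ratings early and late in the year and flags large shifts. The raters, months, scores and the 0.5-point threshold below are hypothetical examples, not values or methods from the study.

  from statistics import mean

  # Hypothetical observation records: (rater, month, score on a 1-4 rubric).
  observations = [
      ("Rater A", 9, 3.4), ("Rater A", 10, 3.3), ("Rater A", 4, 2.6),
      ("Rater B", 9, 3.1), ("Rater B", 10, 3.2), ("Rater B", 4, 3.1),
  ]

  for rater in sorted({r for r, _, _ in observations}):
      # Fall (months 9-10) vs. spring (months 3-5) averages for this rater.
      early = [s for r, m, s in observations if r == rater and m in (9, 10)]
      late = [s for r, m, s in observations if r == rater and m in (3, 4, 5)]
      if early and late:
          shift = mean(late) - mean(early)
          flag = "  <- possible drift" if abs(shift) >= 0.5 else ""
          print(f"{rater}: {mean(early):.2f} -> {mean(late):.2f} ({shift:+.2f}){flag}")

A simple check like this won’t distinguish drift from genuine changes in teaching quality, but it can tell you which evaluators’ scoring deserves a calibration conversation.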

Examples:

  • “Although my co-evaluators and I were trained and calibrated at the beginning of the school year, I am going to add my own personal twists down the road!”
  • “I just read an interesting article about classroom management, and that changed my view of what productive classroom environment should look like. I will redefine the evaluation criteria!”

Solution:

This bias can be addressed by providing refresher training for evaluators and by using tandem reviews to ensure that evaluators are seeing things in the same way, making them less likely to drift away from each other in their ratings.

Bias and errors crop up when evaluators accidentally or habitually overlook, misinterpret, or distort what they observe. They confound the quality of evaluation, and that is why research-based calibration training is essential – training that prepares evaluators to know:

  1. What effectiveness truly looks like and what to look for,
  2. How to document teacher performance with objective evidence, and
  3. How to synthesize evidence and apply the rubrics to provide ratings.

A solid training plan involves more than a one-shot calibration dose at the beginning of the school year. It also needs refresher sessions on a recurring basis to make sure that evaluators consistently follow the prescribed criteria.

How can you ensure your evaluators and observers are trained and calibrated to provide reliable and defensible evaluations? Learn about the Stronge Master-Coded Simulations and the Stronge Effectiveness Performance Evaluation System, powered by Frontline Professional Growth.

 

References:

Bejar, I. I. (2012). Rater cognition: Implications for validity. Educational Measurement: Issues and Practice, 31(3), 2-9.

Casabianca, J. M., Lockwood, J. R., & McCaffrey, D. F. (2015). Trends in classroom observation scores. Educational and Psychological Measurement, 75(2), 311-337.

This post was collaboratively authored by Xianxuan Xu, Ph.D., and James Stronge, Ph.D., President of Stronge and Associates Educational Consulting, LLC.