In recent decades, state and local leaders, encouraged by federal incentives, devoted considerable time, energy, and hope to reforming educator evaluations.
But as the reforms were implemented, it became clear that their success rested largely on trust in their ability to help educators improve, and few educators felt that trust. The fear was that evaluation systems would prove toothless: evaluators would overestimate improvement, and scores would climb without true growth. Whether or not that skepticism was ever warranted, it has certainly lingered.
Wondering whether that skepticism was well-founded, the Frontline Research & Learning Institute examined dataⁱ from schools using a formal electronic evaluation management system over a five-year period. What it found suggests that if evaluations are failing to produce improvement, the problem may lie in post-evaluation follow-through rather than in the evaluations themselves.
Despite the concern that evaluators would inflate educators' skills, summative evaluation scores have actually trended downward over time. With the passage of the Every Student Succeeds Act in 2015 and the removal of some of the high stakes attached to evaluation, it appears that evaluators felt able to give more accurate ratings because the process became growth-centered rather than punitive.
Additionally, schools that implemented their evaluation systems more recently tended to have lower scores from the outset, which suggests that over time, evaluation systems are being refined toward accuracy and are enforcing a higher standard.
It might be easy to conclude that lower initial scores reflect a less-skilled teaching workforce, perhaps owing to the teacher shortage, teacher turnover, and more novice teachers entering the field. But that doesn't appear to be the case, for two reasons:
These shifts indicate that, although their history has been imperfect, educator evaluations hold real potential to drive educator growth. But if the trends are not showing educator improvement, where are things falling apart?
Data can only go so far on its own, and it may be that evaluations are proving ineffective simply because the data they yield is being used ineffectively. To turn that around, here are four recommendations for district leaders:
Growth Metric for K-12 HCM: Evaluations – This report zeroes in on the role evaluations can play in fostering an engaged, effective educator workforce.
Continuous Improvement in Professional Learning – Thinking about how to connect evaluations to professional learning? Take a look at how one district is striving for — and achieving — dynamic, effective professional learning.
i Silverman, S. (n.d.). Bending Toward Accuracy: How Teacher Evaluations Are Evolving. Retrieved October 14, 2019, from https://www.frontlineinstitute.com/reports/evolution-of-teacher-evaluations/.