Telling the Student’s Story: 7 Steps to Monitor Tier 1/Classroom Interventions
When I visit schools as an RTI/MTSS consultant and talk with teachers about Tier 1/classroom academic interventions, I often hear frustration over the difficulty of collecting and interpreting data to monitor student progress. Yet data are critically important because they ‘tell the story’ of the academic or behavioral intervention, revealing the answers to such central questions as:

- What specific skill or behavior is the target of the intervention?
- Where did the student start (baseline)?
- What level of performance would count as success, and by when?
- Is the student actually making progress toward that goal?
If the information required to answer any of these questions is missing, the data story becomes garbled and teachers can find themselves unsure about the purpose and/or outcome of the intervention.
While following a guide does not eliminate all difficulties in tracking Tier 1/classroom interventions, these 7 steps will help the educators you work with ask the right questions, collect useful data and arrive at meaningful answers at Tier 1.
Step 1: Define the Problem

The first step in setting up a plan to monitor a student is to choose the specific skill or behavior to measure. Your ‘problem-identification’ statement should define that skill or behavior in clear, specific terms.
Keep in mind that a clear problem definition is a necessary starting point for developing a monitoring plan: “If you can’t name the problem, you can’t measure it.”
Step 2: Select a Data-Collection Method

Next, select a valid, reliable and manageable way to collect data on the skill or behavior the instructor has targeted for intervention. Data sources used to track student progress on classroom interventions should be brief, valid measures of the target skill, and sensitive to short-term student gains.
There are a range of teacher-friendly data-collection tools to choose from, such as rubrics, checklists, Daily Behavior Report Cards (DBRC), Curriculum-based Measures (CBMs), teacher logs and student work products.
Step 3: Set an Intervention Timespan

When planning a classroom intervention, the teacher should choose an end date by which to review the progress-monitoring data and decide whether the intervention has been successful.
A good practice is to run an academic intervention for at least 6-8 instructional weeks before evaluating its effectiveness. Student data can vary significantly from day to day: Allowing 6-8 weeks for data collection permits the teacher to collect sufficient data points to have greater confidence when judging the intervention’s impact.
Step 4: Collect Baseline Data

Before launching the intervention, the teacher will use the selected data-collection tool to record baseline data reflecting the student’s current performance. Baseline data represent a starting point that allows the teacher to calculate precisely any progress the student makes during the intervention.
Because student data can be variable, the instructor should strive to collect at least 3 data points before starting the intervention and average them to calculate baseline.
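For readers who keep these numbers in a spreadsheet or script, the baseline calculation described above can be sketched in a few lines of Python. The function name and sample scores are illustrative, not from the article:

```python
def baseline(data_points):
    """Average at least three pre-intervention observations.

    Averaging smooths the day-to-day variability in student data.
    """
    if len(data_points) < 3:
        raise ValueError("Collect at least 3 baseline data points.")
    return sum(data_points) / len(data_points)

# Example: three pre-intervention scores (e.g., correctly read words per minute)
print(baseline([42, 38, 46]))  # 42.0
```

The guard clause simply enforces the three-data-point minimum recommended above.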
Step 5: Set an Outcome Goal

Next, the teacher sets a post-intervention outcome goal that defines the student’s expected performance on the target skill or behavior if the intervention is successful (e.g., after 6-8 weeks). Setting a specific outcome goal for the student is a critical step, as it allows educators to judge the intervention’s effectiveness.
A teacher with a student who frequently writes incomplete sentences might collect writing samples from a small group of ‘typical’ student writers in the class, analyze those samples to calculate percentage of complete sentences, and use this peer norm (e.g., 90 percent complete sentences) to set a sentence-writing outcome goal for that struggling writer.
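The peer-norm calculation in this sentence-writing example can be sketched as follows; the sample counts for the ‘typical’ writers are invented for illustration:

```python
def percent_complete(complete, total):
    """Percentage of sentences in a writing sample that are complete."""
    return 100.0 * complete / total

# (complete sentences, total sentences) from three 'typical' peer writers
peer_samples = [(18, 20), (17, 19), (19, 21)]

# Average the peers' percentages to form the norm, then use it as the goal
peer_norm = sum(percent_complete(c, t) for c, t in peer_samples) / len(peer_samples)
goal = round(peer_norm)
print(goal)  # 90, matching the '90 percent complete sentences' norm above
```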
A math instructor wishes to teach a student to follow a 7-step procedural checklist when solving math word problems. The data source in this example is the checklist, and the teacher sets as the outcome goal that — when given a word problem — the student will independently follow all steps in the teacher-supplied checklist in the correct order.
TIP: For a student with a large academic deficit, the teacher may not be able to close that skill gap entirely within one 6-8-week intervention cycle. In this instance, the instructor should instead set an ambitious ‘intermediate goal’ that, if accomplished, will demonstrate that the student is clearly closing the academic gap with peers. It is not unusual for students with substantial academic delays to require several successive intervention cycles, each with its own intermediate goal, before they close the skill gap sufficiently to catch up with grade-level peers.
Step 6: Decide How Frequently to Collect Data

The more frequently the teacher collects data, the more quickly he or she will be able to judge whether an intervention is effective: more data points make improvement easier to spot and increase the instructor’s confidence in the overall direction, or ‘trend’, of the data.
Ideally, teachers should strive to collect data at least weekly for the duration of the intervention period. If that is not feasible, student progress should be monitored no less than twice per month.
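One way to see why more frequent data collection helps: with a series of weekly scores in hand, a simple least-squares slope estimates the overall direction of the data. This is a minimal sketch of that idea, not a tool from the article:

```python
def trend_slope(scores):
    """Slope of the best-fit line through (week index, score) pairs.

    A positive slope suggests improvement; near zero, a flat trend.
    """
    n = len(scores)
    mean_x = (n - 1) / 2  # mean of week indices 0..n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Six weekly observations: the slope is positive, so the trend is upward
print(trend_slope([40, 42, 41, 45, 47, 50]))
```

With only two or three points, a single bad day can flip this slope; six to eight weekly points give it far more stability.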
Step 7: Review the Data and Make a Decision

Once the teacher has created a progress-monitoring plan for the student, he or she puts that plan into action. At the end of the pre-determined intervention period (e.g., in 6 weeks), the teacher reviews the student’s cumulative progress-monitoring data, compares it to the outcome goal and judges the effectiveness of the intervention. Here are the decision rules:

- Goal met: If the student’s final data meet or exceed the outcome goal, the intervention was successful; the teacher can fade or discontinue the extra support.
- Progress, but goal not met: If the data show clear improvement over baseline that still falls short of the goal, continue the intervention (with adjustments as needed) for another cycle with an updated goal.
- Little or no progress: If the data show minimal growth over baseline, the current plan is not working; the teacher should substantially change the intervention or seek additional support for the student.
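The end-of-cycle decision can be sketched as a simple comparison of the final data points to the outcome goal and baseline. The thresholds and labels here are illustrative assumptions, not prescribed rules:

```python
def judge_intervention(recent_scores, goal, baseline):
    """Classify the outcome from the last few data points (illustrative)."""
    final_level = sum(recent_scores) / len(recent_scores)
    if final_level >= goal:
        return "successful"            # goal met: fade or discontinue support
    if final_level > baseline:
        return "partially successful"  # progress made, but goal not yet met
    return "not successful"            # little or no growth: change the plan

# Final three weekly scores vs. a goal of 90 and a baseline of 60
print(judge_intervention([90, 92, 91], goal=90, baseline=60))  # successful
```

Averaging the last few points, rather than using the single final score, echoes the earlier caution that student data vary from day to day.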
The goal in monitoring any classroom intervention is to let the data guide you in understanding a learner’s unique story. When teachers can clearly define a student’s specific academic or behavioral challenge, collect data that accurately tracks progress, and calculate baseline level and outcome goal as points of reference to judge intervention success, the student’s story will be truly told.