From Ed Week:
Student feedback, test-score growth calculations, and observations of practice appear to pick up different but complementary information that, combined, can provide a balanced and accurate picture of teacher performance, according to research released today by the Bill & Melinda Gates Foundation.
The $45 million study, in progress since 2009, is one of the largest and most extensive research projects ever undertaken on the question of how to identify and measure high-quality teaching. It involved some 3,000 teachers in six districts: Charlotte-Mecklenburg, N.C.; Dallas; Denver; Hillsborough County, Fla.; Memphis, Tenn.; and New York City.
To the amazement of all:
Basing more than half a teacher’s evaluation on test-score-based measures of student achievement seemed to compromise it, the researchers also found.
Another piece suggests that teachers should be observed by more than one person to ensure that observations are reliable.
I'm just gobsmacked.
From the Times via the AP:
Several districts involved in the research acknowledged that student surveys were the most controversial part of the process, and some, like Hillsborough County Public Schools in Florida, have opted to leave them out of the mix when scoring teachers.
Jean Clements, president of the Hillsborough Classroom Teachers Association, said her district decided the results of student surveys, which ask questions like "do you feel challenged to do your best work," may not be trusted by teachers.
The researchers found, however, that student surveys help teachers improve their practice because those results evoke the most emotions.
You mean that teachers hearing from students about what works and what doesn't, what motivates and moves them, might help their teaching? Good to know.