Seattle Schools' Student Survey of Teachers
Update: I have received some new info - via the City - and either we have a case of Who's on First or SPS is not being forthcoming with all it knows about this student survey of teachers.
I have word that one school has been working on this - with Teachers United and possibly school staff as well as the Seattle Youth Commission - for almost two years.
I'll let you know all the details when I get them sorted out but I have to wonder whose survey it truly is going to be.
End of update.
I had a long conversation with Dr. Eric Anderson, head of Research, Evaluation and Assessment, who is in charge of the initiative for a separate student survey of teachers.
So let's get some basics out of the way:
- The survey has not been written but they will likely pivot off other student surveys out there (more on this below).
- Who will be part of this "work group" that will decide on the survey, which schools will pilot it and which students (grade level) will take it? Vaguely, some staff, some teachers (SEA has been invited but has not responded as of yesterday), some students and maybe one parent-at-large. (Oddly, the district is probably going with the Garfield parent who has been something of a facilitator for Teachers United. I think they really should look more broadly and also have more than one parent in this work group.)
- The district itself is paying for this work and no other group is giving funding towards it.
- Dr. Anderson will, of course, run the final work past the Superintendent, but Anderson himself is the final voice on it.
- He said there would be a separate survey on principals (but I need to double-check if he meant by teachers or students).
- This survey is not part of the CBA; teachers can (and do) give their own student surveys, and it will not be used in teacher evaluations at this time. State law does allow student surveys.
- They don't know what schools will pilot it nor what grade levels (he thought 6th-12th).
- I asked if teachers would have to give the survey (given it's not in their CBA) and he said they are often asked to do surveys that aren't in the CBA. (I'm sure SEA would not see this particular survey as just another one.)
- I asked if students had to take it and he seemed taken aback as if it had not occurred to him that some students might say no.
- He said that they may try to combine surveys into one but they could continue to be separate.
He said that teachers would not see the survey results individually but receive those results as a whole (I would think in graphs or charts). He said the results would not be used for teacher evaluation but to help teachers and principals in guiding teaching and learning. (I would think it would either reinforce what teachers believe works in their classroom or give them feedback on tweaks and/or changes they might consider.)
This pilot survey is to be done in May.
Analysis
I believe that Dr. Anderson is sincere about what SPS is doing.
One, when I asked him about the involvement of Teachers United and the Seattle Youth Commission, he could not really say. (He knew nothing about the Seattle Youth Commission being involved.) I asked what TU would be doing and he said a TU rep might be on the work group but that this work was the district's initiative. I pointed out that the blurb that was sent out to the PTAs said that TU, the Seattle Youth Commission and SPS were working together on this. He said that was not written by SPS.
My take is that the blurb sent out by the Garfield parent to various PTAs' presidents was written by TU. I think TU wants to be some kind of player in the public education community in the district and took the aggressive action of writing this blurb and including the district as if they were all in on it together.
They are not.
Two, the blurb also had the curious statement that
As you may know, student surveys are the best overall indicator of long term teacher effectiveness.
I'll get to where that point may have come from but Dr. Anderson said he was not aware of that statement being true and that their interest in a student survey was from the Road Map project and the metric in the Strategic Plan around student perceptions of teaching. Again, SPS did not write this blurb so someone else wanted PTA leadership to believe that statement on student surveys.
Three, the blurb wanted the PTAs to believe that "parents" plural were going to be involved in this work group, and that appears not to be the case. Again, SPS is running this show, not whoever is sending out this information.
The parent who wrote the blurb said "I have agreed to solicit other parents to become members of the task force to advance this work."
It may be that TU is putting together a taskforce with parents but SPS is not.
So it's an SPS-driven pilot survey to see if asking students questions about teachers and their teaching effectiveness is a worthy/helpful proposition. Indeed, a couple of teachers weighed in on the previous thread and said they do their own surveys. A simple, home-grown, under-10-question survey about teachers by their students sounds good to me.
If only it were going to end up that way.
BUT, there were two red flags to me (three, if you count the meddling of TU in this issue).
One was the Road Map project, which is very data-driven (the more the better, it seems). It is also funded by the Gates Foundation (among others).
Two was this study by the Gates Foundation, dated September 2012, called "Asking Students About Teaching - Student Perception Surveys and Their Implementation," which troubles me in reference to this issue of student surveys of teachers in SPS.
I asked Dr. Anderson if he had read it and he simply said yes but he also referenced one survey mentioned in the report, Tripod.
Several issues jumped out at me in this report which led me to believe this "pilot" will wind up with several outcomes.
1) the pilot will become a full-out survey, probably down into the elementary level
2) the early low cost will balloon, both in dollars and in classroom teaching time
3) in the end, it probably will end up in teacher evaluations (it already does in at least one state, where it counts for 5% of a teacher's evaluation score).
What's in the Gates study?
- I think that sentence about "student surveys being the best overall indicator" may come from these sentences:
Analysis by the Measures of Effective Teaching (MET) project finds that teachers’ student survey results are predictive of student achievement gains.
Further, the MET project finds student surveys produce more consistent results than classroom observations or achievement gain measures (see the MET project’s Gathering Feedback policy and practitioner brief).
- I will point out this sentence about using one test to predict teaching ability:
Whereas annual measures of student achievement gains provide little information for improvement (and generally too late to do much about it), student surveys can be administered early enough in the year to tell teachers where they need to focus so that their current students may benefit.
- The study talks about the challenges of using surveys: asking the right questions with the right wording, implementation, and the high stakes involved for teachers.
Some of these relate to the survey instrument itself. Not every survey will produce meaningful information on teaching. Not to be confused with popularity contests, well-designed student perception surveys capture important aspects of instruction and the classroom environment.
Any instrument, when stakes are attached, could distort behavior in unwanted ways, or produce a less accurate picture of typical practice. One can imagine a teacher who, consciously or not, acts more lenient if student surveys are factored into evaluation, even though well-designed surveys stress a balance of challenge and support. This is further reason for multiple measures. It lets teachers stay focused on effective teaching and not on any one result.
Here's where the money comes in:
But even a good instrument, implemented poorly, will produce bad information. Attending to issues such as student confidentiality, sampling, and accuracy of reporting takes on greater urgency as systems look toward including student surveys in their evaluation systems. The care with which systems must administer surveys in such contexts is akin to that required in the formal administration of standardized student assessments. Smooth administration and data integrity depend on piloting, clear protocols, trained coordinators, and quality-control checks.
And guess what's mentioned in the study? Lots of surveys and the companies who make them. Convenient, no? One of them was referenced by Dr. Anderson. It's Tripod, which the study describes as one of the "oldest and most widely used off-the-shelf survey instruments."
How complex does it get?
Consider the Tripod survey employed in the MET project study. Tripod is designed to measure teaching, student engagement, school norms, and student demographics. To measure teaching, the survey groups items under seven constructs, called the “7 Cs”: Care, Control, Challenge, Clarify, Confer, Captivate, and Consolidate. For each, the survey poses a series of statements, asking students’ level of agreement on a five-point scale.
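To make concrete what a construct-grouped survey looks like under the hood, here is a minimal Python sketch. The item wordings below are my own placeholders, not Tripod's actual (proprietary) questions; the point is just the shape of the thing - items bucketed under named constructs, a five-point agreement scale coded 1-5, and per-construct averages reported back.

```python
from collections import defaultdict
from statistics import mean

# Placeholder items grouped under four of the "7 Cs" -- illustrative
# wording only, not Tripod's actual (proprietary) questions.
ITEMS = {
    "q1": ("Care", "My teacher notices if something is bothering me."),
    "q2": ("Control", "Students in this class behave the way the teacher wants."),
    "q3": ("Challenge", "My teacher asks me to explain my answers."),
    "q4": ("Clarify", "My teacher explains things another way if we're confused."),
}

# The five-point agreement scale, coded 1-5.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def construct_averages(responses):
    """Average the 1-5 codes per construct across all (anonymous) respondents.

    responses: one dict per student, mapping item id -> scale label.
    """
    by_construct = defaultdict(list)
    for student in responses:
        for item_id, answer in student.items():
            construct, _wording = ITEMS[item_id]
            by_construct[construct].append(SCALE[answer])
    return {c: round(mean(codes), 2) for c, codes in by_construct.items()}

# Three students' (made-up) answers:
sample = [
    {"q1": "agree", "q2": "neutral", "q3": "strongly agree", "q4": "agree"},
    {"q1": "strongly agree", "q2": "agree", "q3": "agree", "q4": "neutral"},
    {"q1": "neutral", "q2": "disagree", "q3": "agree", "q4": "agree"},
]
print(construct_averages(sample))
# {'Care': 4.0, 'Control': 3.0, 'Challenge': 4.33, 'Clarify': 3.67}
```

And that's with only four placeholder items; the real thing is much longer.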
Tripod is "full versions included 80+ items, shorter forms available." I would guess the shorter ones are for the K-2nd students, no?
The shortest one they mention has 20 questions. So we want to take classroom time and district dollars for a single survey that would likely have more than 20 questions?
They have a whole section on confidentiality which is quite important - for everyone involved - in this idea (they are speaking in terms of whether the feedback is part of the teacher's evaluation).
Confidentiality for students is a non-negotiable if surveys are part of formal feedback and evaluation. If students believe their responses will negatively influence how their teachers treat them, feel about them, or grade them, then they’ll respond so as to avoid that happening. More fundamentally, students shouldn’t be made to feel uncomfortable. They should be told, in words and actions, that their teachers will not know what individual students say about their classrooms.
Consistently applied protocols are essential for providing students such assurance. Although in many situations teachers will distribute student perception surveys in their own classrooms, no teacher should receive back a completed survey form that would allow the teacher to identify who filled it out. In the MET project, following procedures generally employed in administering Tripod, paper surveys were distributed to students with their names on peel-off labels that they removed before completing them. All that remained on the form when they finished were unique bar codes to let researchers link their responses to other data collected for the study but which no school personnel could use to identify respondents. Students also placed their completed forms in opaque envelopes and sealed them.
Confidentiality also requires setting a minimum number of respondents for providing teachers with results. If results for a class are based on surveys from only three or four students, then a teacher could conjecture that a highly favorable or unfavorable result for an item reflects how all of those students responded individually. (Although less likely, this still could happen if all students in a class give the same highly unfavorable or favorable response to a teacher.)
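Here is a small sketch of what that minimum-respondent rule looks like in practice. The threshold of 10 is my placeholder (the report argues for a minimum without fixing a number here), and the 80 percent figure anticipates the classroom response-rate target quoted further below.

```python
# Sketch of confidentiality-driven reporting rules. MIN_RESPONDENTS = 10 is
# my placeholder; the report calls for *a* minimum without fixing a number.
MIN_RESPONDENTS = 10
TARGET_RESPONSE_RATE = 0.80  # the classroom target quoted further below

def class_report(enrolled, responses):
    """responses: one dict per student, mapping item id -> 1-5 code."""
    n = len(responses)
    if n < MIN_RESPONDENTS:
        # Releasing results from a handful of surveys could let a teacher
        # guess how individual students answered, so suppress the class.
        return {"released": False,
                "reason": f"only {n} respondents (minimum {MIN_RESPONDENTS})"}
    items = sorted({item for r in responses for item in r})
    averages = {item: round(sum(r[item] for r in responses if item in r) /
                            sum(1 for r in responses if item in r), 2)
                for item in items}
    rate = round(n / enrolled, 2)
    return {"released": True, "response_rate": rate,
            "met_target": rate >= TARGET_RESPONSE_RATE,
            "item_averages": averages}

print(class_report(28, [{"q1": 4, "q2": 5}] * 6))
# {'released': False, 'reason': 'only 6 respondents (minimum 10)'}
print(class_report(28, [{"q1": 4, "q2": 5}] * 24)["met_target"])  # True
```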
Well, that sounds kind of complicated. They explain that in Denver the teachers need to read a script about the survey and that - very funny - the teachers assign a student "the task of collecting all the completed surveys, sealing them in an envelope and taking them to the school's survey coordinator."
Yes, let's have students collect the surveys - nothing could go wrong there.
(One good thing in Denver was the feedback that the survey was too long; they pared it down to just 21 questions covering those Tripod 7 Cs. But they note that the district's PreK-2 version has just 9 items. Well, I'm relieved. We wouldn't want to miss out on pre-K kids joining in.)
Teachers are told to strive for classroom response rates of at least 80 percent.
So let's not put too much pressure on either the teacher or the student to get this done. After all, it's only someone's education and that teacher's job.
More issues:
A significant challenge is posed by linking each student’s individual responses to a particular teacher and class. Doing so while ensuring confidentiality generally requires that each survey form be assigned a unique student identifier before a student completes it. While systems can administer surveys without this feature and still attribute a classroom’s group of surveys to the right teacher, they wouldn’t be able to compare how the same students respond in different classrooms or at different points in time. Nor could they connect survey results with other data for the same students. Such connections can help in assessing the effectiveness of interventions and the reliability of the survey itself. They also allow for more data integrity checks.
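As a rough illustration of that linkage idea - my own sketch of the bar-code approach, not the MET project's actual system - the trick is to print an opaque code on each form that the research office can re-derive from its roster but that school staff cannot trace back to a student:

```python
import hashlib
import hmac

# Held only by the district research office, never by schools. A real key
# would be long and random; this placeholder just makes the sketch run.
SECRET_KEY = b"research-office-only"

def survey_code(student_id: str) -> str:
    """Derive the opaque code printed (say, as a bar code) on a survey form.

    Deterministic: the same student gets the same code in every class and
    every year, which is what lets researchers link responses over time.
    Without the key, the code cannot be traced back to the student.
    """
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:12]

# The form itself carries only the code plus class labels (all made up here):
form = {"code": survey_code("1069990"),
        "teacher": "T-204", "section": "Period 3 Math"}
print(form)  # a teacher seeing this learns nothing about who filled it out
```

The code being deterministic is exactly what lets researchers compare how the same students respond in different classrooms or at different points in time - and exactly how so much more data ends up linked to each student.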
So there are a large number of steps, both for keeping responses confidential and for getting MORE data identified with a teacher and his/her classroom.
They also have a section on Sampling for Teachers Who Teach Multiple Sections (page 17), which is another complicated issue. Consider the high school or middle school student with multiple teachers.
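For flavor, here is one hypothetical way a district might handle that sampling problem - capping how many sections per teacher get surveyed. The report discusses the issue on page 17; this particular scheme is my illustration, not necessarily its recommendation.

```python
import random

# If every student rated every teacher, a secondary student could face six
# or more surveys. One hypothetical way to cap the burden: survey at most
# a couple of sections per teacher, chosen at random.

def sample_sections(teacher_sections, per_teacher=2, seed=0):
    """teacher_sections: dict mapping teacher -> list of their sections."""
    rng = random.Random(seed)  # fixed seed so the plan is reproducible
    return {teacher: rng.sample(sections, min(per_teacher, len(sections)))
            for teacher, sections in teacher_sections.items()}

schedule = {"Ms. A": ["P1 Alg", "P2 Alg", "P5 Geo"],
            "Mr. B": ["P3 Bio"],
            "Ms. C": ["P1 Eng", "P4 Eng", "P6 Eng", "P7 Eng"]}
print(sample_sections(schedule))
# e.g. {'Ms. A': ['P2 Alg', 'P1 Alg'], 'Mr. B': ['P3 Bio'], 'Ms. C': [...]}
```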
I had written to the Board about this, mostly to put it on their radar (as I suspect they didn't know anything about it).
In summary, our district is moving in an ed reform direction. I see it more every single day. If I thought a student survey was being developed solely by our district, with short, direct questions and results seen only by teachers and principals, fine by me.
But I suspect that is not what's going to end up happening.
Comments
peonypower is right on the money--and I'm not talking about Gates' money.
Kate Martin, I hope that in the future you'll be more careful what you wish for. Affiliating yourself with Teachers United and its sponsorship is not a good track record to run on. Trying to resolve your issues as a parent by potentially selling teachers out is not cool.
--enough already
So they will get your student's state identifier, coursework schedule, and their responses to these surveys.
So, yeah, this survey won't be used to evaluate teachers. Until it will.
The current work & time burden for administrators in this evaluation system is overwhelming and untenable. Anything like this, that can be used to collect numbers and vomit them out of a computer, will soon be seen as a null-labor-cost lifesaver. Universities all do it. (As if university teaching is some kind of model beacon for effective, engaging instruction. You lecture, I'll text.)
Ask your friendly neighborhood professor if there's any downside to student evaluations. Like, everybody gets an "A". So you'll like me.
The student evaluation idea is at least partially predicated on the premise that students and teachers want the same thing: engaging and time efficient learning. Any parent knows the reality is often rather different.
Another false premise is that teachers need these evaluations because otherwise, they're working in some kind of vacuum, and don't have any means to discern what's happening with their students. To that one, I just have to say, "Wow."
HP
I am not surprised no one contacted you back. The transition from McGinn to Murray, as far as the kids on the Commission are concerned, has not been smooth. There is no meeting this week due to midwinter break.
HP
Again, the blurb from the Garfield parent said the Seattle Youth Commission is involved and yet the district knows nothing about it.
And even though Dr. Anderson said that some students would be on the work group, would it be students from the Seattle Youth Commission?
Again, I suspect the clumsy hand of Teachers United, trying to insert themselves into district initiatives without being invited or asked.
Frankly, I want them to leave my kid alone, and allow time for teachers to teach and enrich my child's life.
Don't forget, as this project moves forward, the district wants to take dollars out of our schools.
"Adults at my school care about me"
and then there are all those ridiculous sappy-happy choices to bubble in, from strongly agree to strongly-might-not-be-less-than-agreeable, maybe.
Suppose the survey is at a high school. Shouldn't we know how long the student has been at that school? Shouldn't we know how many adults the student has had regular contact with (at least once a month) 1 to 5, 6 to 10, 11 to 15 ...?
NOW we can ask how many adults don't give a rip about you!
BTW - what if the student hates math and therefore hates all 3 math teachers, out of 22 adults in 3 years? ummmm...
BTW - where are the surveys on the PowerPoint jockeys? Remember when Charlie tried to keep tabs on 20-something Grand Initiatives during the Reign Of MGJ?
Dear Teacher -
How many NEW initiatives have you had dumped on your head this year?
How many initiatives were realistically funded?
Were the initiatives prioritized?
What was the priority of this year's initiatives relative to last year's?
Who is directly responsible for creating the initiative?
I suppose that PSED - Gate$ - Roadmap woman would NOT want to have those boss questions answered, cuz it might show that their work is just
HeadHuntingOnTeachers