Contract Update
Here's what I believe is the latest on the teacher contract from SPS Communications:
"We’re very pleased to let you know that SEA and SPS believe that we have reached tentative agreement. We will meet again tomorrow, Tuesday, to confirm final details and will provide information about the tentative agreement at 4:30 p.m. Tuesday."
However, there is nothing current at either the SEA website or the SPS website (either in News or the Labor Relations link). I looked at the Seattle Times, nothing.
We will have to wait for details but I believe SERVE may not have survived.
I have to wonder what may have happened, given that the district said information would come at 4:30 p.m. I'll try to listen to the news.
Comments
"The subcommittee just concluded for the night at 11:00 p.m. and they are even closer to a tentative agreement, but must continue in the morning. As soon as a tentative agreement is reached, details will be shared with members."
So it continues.
It seems obvious, but there is still research being done, much of the extant research seems to question VAM, and that raises the question:
Why rush ahead with an unproven idea that, in my opinion, will HURT student learning as curriculum is narrowed, as teachers teach to the test, and as the famed "collaboration" is replaced with the infamous "competition" amongst teachers?
I've heard that teachers had a four-hour training, complete with streaming video, on the PG+E part of their new evaluation (this part is assumed to be in place, as it is legislated). This PG+E stresses collaboration, as did the training, and the obvious question is: what about collaboration outside the building, up the chain of command? There WAS collaboration for two years on a new evaluation tool, but that collaboration was destroyed when the district slammed SERVE onto the table in late July.
We have some really solid research now that says using testing to evaluate teachers is junk science.
And we've made a little progress here in Seattle in terms of organizing teachers, parents, and community members against this. We'll have to organize across the state.
That statement there, SC, brings up questions I've had all along, but in a different direction.
When there is "research" in education to determine if some strategy, curriculum, process, etc. will work, why is it that folks (I don't mean just on this blog, but in general) jump up and down against being the first to do it? I mean how is it actually going to be researched if somebody doesn't try it? It's similar to one of my main complaints about much of professional development--it's never handled in the classroom with kids, it's all theory.
The other thing is about teaching to the test. So why would everyone who defends teachers as these brilliant folks who care so much about kids, then turn around and say things like SERVE will make them teach to the test and get into competition with each other. I would think the opposite: they would really collaborate more and use more best practices so everyone is successful. For example, if I'm a 5th grade teacher I'm certainly hoping the 4th grade teacher does their job well because I'm going to get those kids the next year. Instead of hoping I'd be more inclined to actually collaborate and put some of my own skin in the game.
Just thinking out loud.
Then, all of a sudden, when the research reaches a conclusion we like, we are quick to adopt it without any of those same concerns.
Humans ain't rational, but we are brilliant at making rationalizations. So let me go back and provide logical-sounding reasons for supporting the new conclusions I like.
First, we are being consistent in questioning both the validity of the experiments and the interpretation of the data. Most of the new conclusions now appearing that discredit the use of student test scores as a measure of teacher effectiveness are the result of having more data and making a closer examination of the data that originally supported it.
Second, the new conclusions are consistent with what we already knew - the relatively small role that incremental differences in teacher effectiveness plays in student achievement. The primary determinants of student achievement are - far and away - home-based, not school-based, so no single school-based factor can have a significant role.
Experience says that high-stakes testing does not lead to collaboration. It leads to gaming the test. That includes gaming your teaching colleagues. One of Dr. Goodloe-Johnson's principal proteges in Charleston, South Carolina became a superstar for improvements in test scores at her elementary school. Turns out.... well, yes, the answer sheets were gamed after they were collected from students.
No one actually working in education is against trying new things. We do it ALL the TIME. In fact, that's a major problem in public education: rather than pick one BIG idea and work it, work it, work it, we're always jumping to the next thing, year after year. So we're doing this NEW thing, while STILL doing the thing from last year. The older the great new thing is, the more it fades, never really dying. No institutional memory.
This is more or less true for all human organizations, I suspect. It's a HUGE problem with this school district. Ask Melissa or Charlie or Meg Diaz or the State Auditor.
Do we really want to push an entire large urban district all at once into an experimental program like SERVE (never mind the conflict-of-interest problems)?
Teachers "research" in classrooms all the time, try out new curricula, etc...But when the GOAL of the research appears (to some of us) to be transparently anti-learning, there is a certain hesitancy.
In some instances it seems like almost a no-brainer: "Try this new system that evaluates you based on student learning" raises all sorts of questions, the first one being the obvious, "Student learning is a factor of many, many elements, is very difficult to measure, would seem to narrow curriculum, so how on earth would we even begin to attempt to tie student learning to teacher eval?"
IF the underlying desire of the researchers appeared to be purely academic, perhaps teachers could put aside this concern for their students (not fear of evaluation, though that is certainly valid for a professional, but fear that it will narrow the curriculum) and give it a try. BUT: it's apparently NOT driven by academic interest, but by players in the edu-industry who have predetermined goals in mind and are busy conducting "research" to provide a rationale for those goals.
Eli Broad and Arne Duncan didn't ask themselves, "how can we make education better?" and start their research; they started with the presumption that "teacher quality" is a big problem, that public schools under one set of policies were a big problem (hence the need, supposedly, for charters) and jumped ahead from there. They're all about charters and merit pay and breaking seniority BEFORE there is evidence that these are helpful, and their whole thrust is towards those beliefs.
Teachers MIGHT participate in such a prima facie fallacy as VAM if it were conducted fairly and by a neutral body. But it has been apparent for quite some time that it is not being researched neutrally by some, that they have a preordained agenda.
In your scenario, F4K, the 5th grade teacher would be wanting the 4th grade teacher to JUST teach those things that are measured, would want 4th grade to prepare the students JUST on the measured items, and would want ALL students to be similarly educated, have the same learning. Students aren't and don't. Teachers already teach a variety of students with a variety of levels (adolescent development itself tells us students learn things at different rates; it's organic).
So the collaboration in your scenario would be towards...the test. I think teachers recognize the variety of students and would be hesitant to so narrow their expectations.
Besides, there's nothing to prevent collaboration on all sorts of things without making a teacher's evaluation dependent on kicking that 4th grade teacher's butt so the 5th grade teacher doesn't get demerits or lose money or even get fired.
Lastly, there are millions of children every year who move around. A 5th grade teacher would have to collaborate with EVERY 4th grade teacher in the district, maybe even the state, to help with the students he/she might see.
Good collaboration works with individual students: "Johnny has this issue, what can we do NOW to support him?" but not so well with groups, as they are malleable and changing.
@LA Teacher Warehouse: Didn't know legislature could be involved. Please explain.
For teachers to score well, they wouldn't have to get their students to score well, but to score better than other students.
That's competition.
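If the growth rating really is peer-relative, as the "compared to their academic peers" language in the contract suggests, then scoring is zero-sum by construction. A toy sketch (this is NOT the district's methodology, which the contract explicitly leaves "to be agreed upon"; everything here is invented for illustration):

```python
# Toy illustration only: norm-referenced (peer-relative) scoring is zero-sum.
# The actual SEA/SPS rating methodology is left to future agreement.

def percentile_ranks(growth_scores):
    """Percent of the OTHER scores each score strictly beats."""
    n = len(growth_scores)
    return [100.0 * sum(t < s for j, t in enumerate(growth_scores) if j != i) / (n - 1)
            for i, s in enumerate(growth_scores)]

# With distinct scores the ranks always average 50: for one classroom to
# rank higher, another must rank lower, regardless of absolute learning.
print(percentile_ranks([50, 60, 70, 80]))
```

Raise every score by the same amount and the ranks don't move at all; that is the sense in which "score better than other students" is competition rather than improvement.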
There is a huge difference between researching something and implementing something. SPS was not proposing a research project; instead, they wanted to fully implement VAMs based on little more than their intuitive appeal.
I could have been on board with a research project or a pilot project. I can even imagine ways that it could have been done, but this type of research would take years to do well. And the big thing we always hear is that "we don't have time to wait to fix education." But I disagree - isn't it better to take the needed time to do something right, so that everyone who comes after us benefits, rather than rushing into and institutionalizing something that is wrong and harms everyone who comes after us?
I for one am thankful that our society generally embraces the scientific method when it comes to things that can affect large swaths of the population. We have the FDA to make sure drugs are safe and effective, for example. We don't just let pharma market drugs because they think they would work in a given situation. Indeed, there would be a huge conflict of interest to let them market without first testing - and, hey, now that I write this, the parallel to NWEA/MAP/SPS is actually quite strong. If our Superintendent wants MAP to be used for VAMs and teacher evaluations, she should step back and make sure they are proven "safe and effective" for that purpose first. Unfortunately, that step was missing.
Lori's approach is too rational and reasonable for too many of the Reformistas. They want their way, right now(!), and to hell with empiricism.
If any shred of the philosophy of competition survives, we're in trouble.
There is no "I" in TEAM.
That's the usual MO for the District... hope teachers have the guts to say we're not voting (assuming an agreed document comes out) until we've had time to read and discuss fully....
some SEA teachers think their union and the WEA have sold them out and are working with the education deformists against their interests, rather than for them...
As do many teachers in the AFT - they're really unhappy about what Randi Weingarten's been doing lately, cosying up to Arne Duncan....
There are legitimate political issues involved in this situation and a lot of strategising and manipulation, so get off my back, anothermom...if you don't like what I post, feel free to scroll right on by my name.... there's no need to get personal with your comments....
http://www.facebook.com/group.php?gid=148970815124891
http://www.facebook.com/group.php?gid=150209878337501#!/group.php?gid=150209878337501
http://www.facebook.com/group.php?gid=123352677681840
Dear Teachers,
Thank you for the wonderful job you do in teaching our children. You work on academic subjects, sure - reading, math, and science. But you know that teaching is so much more - it's about raising the whole child to be a responsible community member. It's about using the arts, music, and P.E. to help children mature intellectually, socially, and emotionally. It's about helping children to follow rules in the classroom and manage conflicts effectively. It's about giving inclusive support to all kinds of students, including students of color, ESL students, low income students, special education students, and APP students. And it's about raising creative, critical thinkers to participate in democracy.
None of that is on the MAP test. The district's SERVE proposal will force teachers to "teach to the test" at the expense of real learning, and use the MAP test to punish teachers instead of helping students. It gives the superintendent broad powers to fire teachers - even though she's never set foot in your classroom.
It's not fair. As parents, we won't stand for this scapegoating of our teachers.
All this focus on teacher accountability is a distraction from the real problems: a crisis in funding, lack of accountability from the school board and superintendent, and unethical financial ties between the Gates and Broad Foundations and the district.
We support teachers in your negotiations with the district. We're organizing parent support. If you have interested parents, please refer them to the Facebook group “Parents Across America Seattle” to help coordinate their efforts.
PAAS is standing with you as you stand up for quality teaching.
Yours truly,
Parents Across America Seattle.
"Special Representative Assembly TODAY!
If you're an AR for please plan to attend the Special Representative Assembly Wednesday, Sept 1st to go over the Tentative Agreement and Vote of No Confidence.
Pathfinder @ Cooper
1901 SW Genesee St.
Seattle, Wa 98106"
For me, the interesting part here is the intent to move forward on a vote of no confidence even while accepting a tentative agreement on the contract.
So in the course of two posts in 35 minutes you flip flop -- what was an SPS-motivated strategy that caused you concern becomes a problem some teachers have with their union.
I thought as a person who describes herself on her blogger profile as a Communications Specialist you'd appreciate the feedback. Bottom line, when you post one unfounded conspiracy theory and then a few moments later present a theory that actually contradicts what you said at first, it can be a little hard to follow you.
You missed the part where I said that some teachers think their union is in cahoots with SPS/the ed deformers and is not working in their best interests...
So I didn't flip....
And besides... two (either completely inept or calculatingly manipulative) organisations can have the same modus operandi....
Either I'm completely cynical or you're completely naive.... take your pick - doesn't bother me which...
BTW the "Stand for Children" toadies are movin' it to the legislature. Vote for their endorsee's opponent!
XI:G Definition and Use of Student Growth Standards:
When common district-wide assessments are available, the results of those assessments will be used
to determine a student growth rating. These results will be calculated as set forth below.
1. Teachers of tested subjects and grade levels are those for whom two or more common state or
district assessments are available.
2. Teachers of tested subjects and grade levels will receive a rating on their student academic
growth of either low, typical, or high based on the assessments available to that teacher.
Students will be compared to their academic peers – e.g. students in the same grade who
performed at a similar level in the subject in previous years.
3. Student growth ratings will be based on a two-year rolling average.
4. Students must be enrolled 80% of the time and must be in attendance 80% of that time to have
their assessments counted in the teacher’s growth rating.
5. SPS will calculate each teacher’s rating by using a valid, reliable and transparent methodology as
agreed upon by SEA and SPS.
6. To ensure that teachers of challenging student populations are evaluated fairly, aggregate
student growth results will factor in the student composition of the teacher’s classroom(s),
including the proportion of English learners, students who qualify for free/reduced lunches, and
students with disabilities.
7. For teachers of subjects that are assessed by the state, the final rating will be contingent on the
receipt of state assessment data; a written report will be issued to each teacher within 30 days of
the district’s receipt of the final assessment report from the state.
8. Performance Expectations
a. The performance of a teacher’s students on each assessment will be rated according to a
100-point scale signifying the following:
i. Low growth: less than 35
ii. Typical growth: 35-70
iii. High growth: more than 70
b. A teacher’s overall rating will equal the average of his/her ratings on all common
assessments over two years.
c. Teachers will be eligible to apply for career ladder positions if they are “Innovative” in at
least two of the following domains: Classroom Environment; Planning and Preparation;
or Instruction. They must be at least “Proficient” in the other domains. They must also
demonstrate high student growth averaged over two years.
d. Teachers who demonstrated low growth averaged over two years will return to a
comprehensive evaluation and receive additional observations and support as outlined
below in number 9.
9. If an employee returns to the Comprehensive Evaluation because of low growth averaged over
two years, the principal shall:
a. Within two (2) months of receiving the student growth data or the beginning of school,
whichever is later, conduct two (2) thirty-minute observations;
b. Schedule monthly conferences with the teacher to discuss goals, progress, and best
practices;
c. Consult with the employee to plan how to use up to $500 of the improvement fund as
detailed in Article II, Section C, 21, below.
d. If requested by the teacher, develop a Support Plan (SP).
By December 15, the principal shall take one of the following actions:
a. Remove the teacher from the SP, if one was requested and developed; or
b. Continue the teacher on the SP, if one was requested and developed; or
c. Re-evaluate the teacher as set forth in Section D and recommend placing the teacher on a
Performance Improvement Plan and/or probation.
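Taken at face value, the arithmetic in items 3, 4, and 8 above fits in a few lines. This is a hypothetical reading only: item 5 says the actual methodology is still to be agreed upon by SEA and SPS, and every name below is invented.

```python
# Hypothetical sketch of the Section G rating arithmetic quoted above.
# The real methodology is explicitly left to SEA/SPS agreement (item 5).

def counts_toward_rating(enrolled_frac, attendance_frac):
    """Item 4: enrolled at least 80% of the time AND in attendance
    at least 80% of that time."""
    return enrolled_frac >= 0.80 and attendance_frac >= 0.80

def growth_band(score):
    """Item 8a: 100-point scale -> low / typical / high."""
    if score < 35:
        return "low"
    if score <= 70:
        return "typical"
    return "high"

def overall_rating(year1_scores, year2_scores):
    """Items 3 and 8b: average all assessment scores over two years."""
    scores = list(year1_scores) + list(year2_scores)
    avg = sum(scores) / len(scores)
    return avg, growth_band(avg)

# e.g. overall_rating([40, 55], [62, 71]) -> (57.0, "typical")
```

Note what the contract does not specify: how a raw assessment result becomes a 0-100 growth score in the first place, which is exactly the part item 5 defers.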
ARTICLE XII: LAYOFF AND RECALL
SECTION C: DISPLACEMENT AND LAYOFF GUIDELINES
Guidelines for displacement and layoff shall be as follows:
1. Displacement of staff from buildings, layoff, and recall shall be by seniority, within
categories, subject matter areas, or departments....
At any rate, read it and weep (for education): Evals tied to "student growth" as measured by at least district-wide tests.
Strangely, during the superintendent's report at the board meeting just now, she is doing a review of "excellence for all," and posits, as one of the successes, the implementation of MAP as a "formative" test that gives teachers "real-time feedback" on student understanding. She does NOT say a word about MAP being used to eval teachers.
They rolled out MAP as formative, trained formatively last year, and here again we hear it's formative. So how, exactly, is this test designed to tie student growth to teaching, if a test like that even could?
Bah.
I know that the first part of the eval part of the tentative contract, PG+E, is the work of the joint task force, using the Danielson model. One of the parts of SERVE that got ADDED to PG+E is that when evaluating under PG+E, "student growth" would be measured by some district test (MAP, unless they have some other district test, which they don't) and the teacher would be held "accountable" for growth or lack thereof. Read section XI.G. It indicates that the district will have some "transparent" way to take into account a couple of potential variables...but, to me, that's a load of hooey, because students are much, much more complicated than the categories they check on the registration form.
So teachers have been SERVED with this tentative contract. I hope teachers don't accept it, even though the "community" members of "Our Schools Coalition" (hah!) all wore their red shirts to the board meeting.
Once proficient in all four domains, a certificated employee will be evaluated under a general
evaluation (as outlined in Section E) unless one of the following occurs:
a. Based upon formal or informal observations and conferences, the evaluator deems the
employee’s overall performance to be no longer proficient in one or more of the criteria
and domains; or
b. The employee’s students do not meet typical growth, as defined in Article XI, Section G.
c. The employee voluntarily moves to a comprehensive evaluation in order to be considered
for a career ladder position.
Satisfactory performance is defined as maintaining proficiency or above in all four domains.
a. An employee whose performance is Unsatisfactory shall be placed on a Performance
Improvement Plan (PIP) and receive the support of a Human Resource Consulting Teacher.
He/she may be subsequently placed on probation.
Sorry I do not trust SEA leaders much further than TEAM MGJ....
Research evidence is lacking for most everything coming from MGJ.
And, Ian at 8:13 - I'm feeling, right now, that there is a very easy decision to make tomorrow about this contract DUMPED in our laps at the last minute - NO.
Check out page 18 of the SEA bylaws
+++++
6.41a. Quorum for Ratification: Twenty percent of each bargaining unit must be present for ratification. If a quorum of each bargaining unit is present, that body determines if a vote by voice or secret ballot will be taken.
6.41b. A simple majority of the votes cast will ratify the Tentative Collective Bargaining Agreement(s).
6.41c. If the contract is not ratified the General Assembly will determine what other actions will be taken.
+++++
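The quoted 6.41a-b reduce to a short check. A sketch with invented names (the bylaws don't say how units or votes get tallied in practice):

```python
# Sketch of SEA bylaws 6.41a-b as quoted above; unit names and numbers
# in any example are invented.

def can_ratify(unit_members, unit_present, yes_votes, votes_cast):
    """6.41a: at least 20% of EACH bargaining unit must be present.
    6.41b: then a simple majority of votes cast ratifies."""
    quorum = all(5 * unit_present[u] >= unit_members[u]   # 20% = 1/5, per unit
                 for u in unit_members)
    return quorum and 2 * yes_votes > votes_cast          # strict majority of votes cast
```

Notice 6.41b counts a majority of votes *cast*, not of members present, and 6.41c leaves "other actions" after a failed ratification entirely open.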
I know what "other actions" should be taken - come back in 2 weeks and vote for it?
How come the WEA doesn't have clearly outlined alternatives - a nice 1 page flow chart of what can happen? We're NOT trying to model the distribution of 80,000 barrels of oil at 5000 feet underwater - these rules and these processes are MAN made, and they're dumb, and they're inexcusable.
WHY are the processes from the people running the union such an indecipherable mess? The union pooh-bahs are too ensconced in their insular world of inside-the-union to know that practically no one knows what the options are? They like the Mushroom Theory of Management? They're so freaking incompetent at what they do that they don't even know what I'm talking about?
Vote against the stupid thing, and leave, and let them pick up the pieces. I'll be in class on Wed. to teach the kids from the community.
BM
- stijockey
Very, very low test scores, over a two-year period, trigger two observations on the part of a supervisor. The supervisor is also supposed to meet monthly with the teacher. And the teacher gets up to $500 for training or whatnot. The teacher has the option of going on a support plan. By December 15, the principal decides to remove the teacher from the support plan; continue the teacher on the support plan; or re-evaluate the teacher and place him or her on a Performance Improvement Plan (PIP) and/or probation.
What it boils down to is that the supervisor is forced to take a closer look at a teacher if he or she has low test scores. If the supervisor is satisfied with what he or she observes, then nothing happens to the teacher. However, if the supervisor, according to the provisions of PG&E, deems the teacher as less than proficient, then the consequences are a PIP and/or probation.
The thing is: according to the provisions of PG&E, a supervisor could deem any teacher as less than proficient based on observations spread throughout the year. The low test scores effectively speed up the observations of the supervisor.
Do I like this? No. Did I vote for the TA? No. Do I still think the TA is mostly a victory for teachers? Yes, mostly.
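One way to read the low-growth process described above (Section G item 9 plus the December 15 step) is as a small decision tree. Function and argument names here are invented, and which December 15 branch applies is the principal's judgment, not a formula:

```python
# One reading of Section G, item 9, as quoted earlier. Names are invented;
# the December 15 choice is the principal's judgment, not a computation.

def low_growth_response(avg_low_two_years, support_plan_requested,
                        principal_satisfied_by_dec_15):
    if not avg_low_two_years:
        return ["continue current evaluation track"]
    steps = [
        "two 30-minute observations within 2 months",   # 9a
        "monthly conferences on goals and progress",    # 9b
        "plan use of up to $500 improvement fund",      # 9c
    ]
    if support_plan_requested:
        steps.append("develop a Support Plan (SP)")     # 9d
    # By December 15, one of three actions (simplified to two outcomes here):
    if principal_satisfied_by_dec_15 and support_plan_requested:
        steps.append("remove teacher from the SP")
    elif not principal_satisfied_by_dec_15:
        steps.append("re-evaluate; recommend PIP and/or probation")
    return steps
```

The key structural point stijockey makes survives the sketch: low scores speed up scrutiny, but the PIP/probation outcome still hinges on the supervisor's PG&E judgment, not on the scores themselves.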
"Low student growth" is not the same as "low student scores" and the District FAQs seem to be referencing "low student growth." A student can have low scores, yet still show growth, and a student can have high scores, yet show little to no growth. So is the evaluation based on raw scores or student growth?
The section on "Student Growth Rating" makes little sense to me. It states "educators that receive a low student growth rating in September will enter an immediate cycle of monitoring and support." If growth is measured from September to April (or whenever they take the Spring test), how can there be measured growth in September?
If this begins in the 2011-2012 school year, the two-year rolling average is based on last year's MAP scores (the first year of use in the District) and the 2010-2011 scores. Would you want the first year's data (of either the MAP or MSP) used in your evaluation?
The other issues revolve around the validity of student growth data. There seem to be too many issues unaccounted for. For example, for K and 1st grade, the MAP test results are highly variable. For high performing students, the growth can be negative. There are just so many variables.
And how are teachers expected to set student growth? The expected growth is calculated by MAP based on initial student scores and varies accordingly by student. Must teachers understand the statistical intricacies of the test to set a reasonable growth rate? How is reasonable growth defined?
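NWEA's actual growth norms are proprietary, so the numbers below are pure invention; the point is only that "expected growth" is conditioned on where a student starts, which is why a single growth bar for a whole class doesn't work:

```python
# Invented toy model -- NOT NWEA's real norms. It only illustrates that
# expected growth depends on a student's starting score.

def expected_growth(initial_rit):
    """Made-up rule of thumb: lower starting scores -> larger expected gains."""
    return max(2.0, 12.0 - initial_rit / 25.0)

def met_expected_growth(initial_rit, spring_rit):
    return spring_rit - initial_rit >= expected_growth(initial_rit)

# Under this toy rule, a student starting at RIT 150 is expected to gain
# about 6 points, while one starting at 230 is expected to gain under 3 --
# a high scorer can "grow less" in absolute terms and still meet expectations.
```

This is also why the K-1 variability and negative growth for high performers mentioned above matter: the thinner the margin of expected growth, the more measurement noise dominates the rating.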
Also, they state that no adjustment will be made to the schedule and there will be no additional early release days, yet student learning time can potentially be reduced by voting to extend PCP time. In order to give teachers more planning time, student learning time in core subjects may be reduced????
As a reminder, the tentative agreement is here: http://www.seattlewea.org/static_content/certta.pdf
One other question I have - this contract is for three years. Is that typical? I am concerned about teachers having only one day to decide something that will be in place for three years.
If, in a subsequent year, a teacher's test scores are still low, the evaluator would have to go through the process all over again.
I agree with you about Section G. It's unclear, and much is to be determined.
But to answer your question, I believe that the student growth rating is based on, uh, student growth rather than raw scores.
And think about this: only teachers of tested subjects will be subject to Section G. The definition of teachers of tested subjects is "those for whom two or more common state or district assessments are available." Okay. What does that mean? Two or more different assessments, such as MSP and MAP? Or does it mean two assessments of one type for each student? For example, would a fourth grade teacher be held accountable for MAP scores and the change in MSP scores between third and fourth grade? And if the district develops its own common end-of-course evaluations, how will that work?
Note the caveat below: "SPS will calculate each teacher's rating by using a valid, reliable and transparent methodology as agreed upon by SEA and SPS." In other words, how this is actually implemented is still to be determined.
This section is so vague that we won't know the full practical implications of it for about three years. Anyone who tells you otherwise isn't, in my opinion, telling you the truth.
I don't understand your interpretation of the TA. I see the part about grades 3-5 and grades 7-10, but that is only for schools that are entirely on PG&E, and the provision only refers to those who will receive a comprehensive evaluation in the first year that the system is used. By the second year, everyone at such schools are supposed to receive a comprehensive evaluation. I don't see how this provision limits which teachers are accountable for test scores.
The only limits are that teachers have to teach a subject for which two or more common state or district assessments are available. Whatever that means.
Confusing? It is to me.
This aspect of the bargain is dissatisfying. However, if you step back for a moment and compare what happened in Chicago, San Diego and Washington D.C., we teachers won a victory against the education reformers, who are really education reactionaries.
And keep in mind that the Goodloe-Johnson no-confidence vote at the RA meeting was about 95% in favor. The press is paying attention. How it gets spun is the big question.
And as I've said before, I fear that Goodloe-Johnson might go to the legislature to try to get some aspects of SERVE legislated. For example, she might try to have the seniority system for RIFs combined with performance criteria. Perhaps some of you are in favor of that. She also might try to have test data tied to teacher evaluations as in SERVE. I'm told that the WEA had to lobby like crazy to keep that out of SB 6696.
I'm also very concerned that categories (race, F/RL, etc.) will be used to supposedly mitigate low "scores" when evaluating teachers. It would sound like this:
"We understand that you scored somewhat low, but we have quantified the impact of being Black or poor, and we have factored in this data. Since we know, statistically, that Blacks and poor children don't perform as well, we will raise your effectiveness percentage accordingly."
Categories MIGHT help us get a general, formative idea of where students struggle and which ones, but the categories themselves are relatively meaningless (what does it mean to be "Black"?), and tying them to evals perpetuates and further solidifies the idea that "Black" dysfunction can be measured and expected.
The rollout section looks to me (and I could be mistaken, so please clarify if you can) to include schools designated in improvement (Tier 2). How many, and which, schools would that be?
MSP is only 3-8. HSPE is grade 10 only. Grade nine gets MAP-tested, but not state-tested.
If your interpretation is correct, the only teachers who would be subject to section G are those teaching 3-8. As I understand it, the district does not intend to use the HSPE. So high school teachers are entirely off the hook (for the time being)? Is that what this means?
To answer your question, the roll out is different for Level One and Level Two schools, as described in Section H.
The union is claiming this as a victory and saying that the whole section devoted to student test scores is the compromise they had to make. They say that it's really not much to be concerned about, since the test scores would have to be truly abysmal (clearly I'm paraphrasing here, but that was what it amounted to) for that part of the evaluation to kick in. They could not guarantee that test scores wouldn't someday be published.
They also say that value added measures would be put in place:
"To ensure that teachers of challenging student populations are evaluated fairly, aggregate student growth results will factor in the student composition of the teacher’s classroom(s), including the proportion of English learners, students who qualify for free/reduced lunches, and students with disabilities"
The tests to be used were not specified. They claim that it would have to be several different tests, but the language states:
"Teachers of tested subjects and grade levels will receive a rating on their student academic growth of either low, typical, or high based on the assessments available to that teacher. Students will be compared to their academic peers – e.g. students in the same grade who performed at a similar level in the subject in previous years."
Presently, the only test where you could compare to other students who performed at a similar level is the MAP, because the MAP tests are given at the beginning (middle) and end of the year, and can show the gain made by students who were in such and such a range at the beginning of the year. That would be the "typical" gain.
This whole process of "factoring in the student composition" of each classroom and coincidentally holding onto the MAP to do so looks like a huge time and money sinkhole to me. And if student scores really must be hugely atypically abysmal (as claimed) for the student score portion of the evaluation part to kick in, why have it in at all? Surely such a terrible teacher would have already jumped out at the principal during observations?
The union says the TA was "strongly recommended" at the reps meeting. Was anyone there who can verify this? I know there was a lot of opposition to the student testing part.
Also, that part is dependent on the levy passing.
If you refer to NWEA's norm data on the MAP, you would see that the highest performing students have the lowest expected growth. The students with the highest expected growth are the low performing students (there's more room to grow).
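For anyone trying to picture the mechanics of the peer-comparison growth rating described in the TA, here is a minimal sketch. To be clear: the norm table, the score bands, the cutoff, and all function names below are invented for illustration. They are NOT NWEA's actual MAP norms, and the district's actual methodology is, per the TA's own language, still to be agreed upon by SEA and SPS. The made-up numbers do encode the pattern noted above: higher-performing students have lower expected growth.

```python
# Hypothetical sketch of a "low / typical / high" growth rating.
# All norm values are invented; real MAP norms differ.
HYPOTHETICAL_NORMS = [
    # (band low, band high, expected growth, std dev)
    (140, 179, 15.0, 5.0),   # low performers: most room to grow
    (180, 209, 10.0, 4.0),
    (210, 260, 5.0, 3.0),    # high performers: least expected growth
]

def expected_growth(start_score):
    """Look up the (invented) expected growth for a starting score band."""
    for lo, hi, mean, sd in HYPOTHETICAL_NORMS:
        if lo <= start_score <= hi:
            return mean, sd
    raise ValueError(f"score {start_score} outside norm table")

def growth_rating(start_score, end_score, z_cutoff=1.0):
    """Rate observed growth relative to academic peers, i.e. students
    who started in the same score band (per the TA's description)."""
    mean, sd = expected_growth(start_score)
    z = ((end_score - start_score) - mean) / sd
    if z < -z_cutoff:
        return "low"
    if z > z_cutoff:
        return "high"
    return "typical"
```

Under these made-up norms, a student starting at 150 who gains 15 points rates "typical," while a student starting at 220 who gains the same 15 points rates "high," because only 5 points of growth are expected in the top band. Everything about a teacher's aggregate rating, and how classroom composition would "factor in," is left undefined by the contract language.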
The contract language around student growth just seems too vague and may lead to some unintended consequences.
(and a clarification of an earlier post - the contract says PCP time can be increased on an individual day, but total instructional time would not be changed)
I am asking all teachers not to accept an evaluation system that depends on testing, particularly the MAP test.
As parents, we do not want to see our students’ teachers focusing on the minutia of test questions when the focus should be on the broader subject and include an opportunity for the student to develop creative and critical thinking skills.
And why the MAP test in particular? Because it is unproven, and because Brad Bernatek, a former Broad resident and now head of REA (Research, Evaluation and Assessment), the department that has been heading the implementation of the MAP test, told several parents in a meeting that I was a part of that the MAP test was not designed as an evaluation tool to assess a teacher's performance.
I sat across the table from him and looked him straight in the eye while he said that.
Three weeks ago I sent an e-mail to him and to Jessica DeBarros, the other REA representative who attended the meeting and also a Broad resident, asking them to confirm that Brad had said this. Neither Jessica nor Brad returned my e-mail.
Using student testing to evaluate a teacher should not be accepted on any level. Period.
Please do the right thing tonight and vote against high stakes testing and teaching to the test.
It’s not fair to our students.
Dora
Note:
From the SPS website, this is the description of REA.
“The Department of Education Technology REA is primarily responsible for official student statistics for the Seattle School District. This includes statistics on enrollment, student demographics, evaluation, standardized testing and surveys.”
I understand the rollout sequence is different for Level 1 and Level 2 schools. However, how a school is designated a 1 or a 2 is not mentioned. Where does the rating come from? There seem to be building-wide evaluation implications if a school is designated either.
Again, if I am incorrect, please clarify. My SEA rep doesn't know the answers to my questions. The part about 3-8 makes the equity issue even more important when considering how to vote.
Have teachers analyzed the student growth data from the first year of MAP testing? And if not, it doesn't seem wise to commit to three years of a system based on unknowns. How can one make an informed choice?
There is LOTS of evidence out there that suggests that this is a BAD outcome for students, and very bad educational practice.