Superintendent to Create Time to Talk with Parents and Community
Dr. Enfield sent out a letter to parents about the coming days for Seattle Public Schools. Here's some of what she says:
In the coming weeks I will be in the community listening to your comments, questions and concerns. Additionally, I will continue to have an open door policy so you can offer input directly on how we can continue to improve. Beginning on Thursday, March 24th, I will be holding open office hours from 4:00 p.m. - 5:30 p.m. at the John Stanford Center, 2445 3rd Avenue South. I encourage you to make an appointment to come and meet with me. Please contact Venetia Harmon, vlharmon@seattleschools.org to schedule a time.
I applaud the Superintendent for her efforts to reach out and connect with parents and the community. It is a good first step.
Comments
Also, it's great to have an "open door policy" and an "open office," but to me this implies just dropping on by, walking on in. Must we contact Ms. Harmon for an appointment? That kind of defeats the idea of "open door."
Parent of Two and Susan, I think I'll start a separate thread tomorrow on this subject to get input.
"proctors aren't paid to proctor the test--parents and volunteers are proctoring MAP"
Uh, no. Maybe in some cases, but I don't believe this is generally true. If it IS, that's wrong: are parents trained to administer MAP? It's a computer program that requires booting up, student IDs, error overrides...
MAP takes FTE time (staff paid to proctor) and also gobbles up tech resources (almost every computer in the district is unavailable for a few days), plus the actual FTE for the tech people who watch over the whole system and troubleshoot. Not to mention teacher time spent looking at the "data" to see if it matches up with their on-the-ground understanding of the student, whether the data is reliable, etc. I mean, how many teachers are fluent in the use of RIT scores? Do YOU know what a RIT score is? How about Descartes? Do teachers use it? Or just spend time looking at it, wondering...
MAP Useful
And what, pray tell, was SPS using for district-wide assessments before MAP? Were you a fan of the WASL?? Cause afaik that was it. And a very sad test that was. There was nothing that even purported to assess kids who were working far outside their current grade level, either above or below. You may not like any assessment, but as "MAP Useful" pointed out, there are benefits that can only come from these kinds of adaptive tests. Our family saw similar benefits, and I've heard the same thing from others.
Are there downsides? Sure. But it's more useful to at least attempt to be objective than to jump on the "It's evil, evil, evil!" bandwagon.
I have yet to meet a person who has seen the test who takes it seriously. It is such a bad test. ... You have no idea what an idiotic test it is. I encourage you to look at the test when your child takes it in June.
I have no idea what you're talking about. I was able to sit down and look through many questions on the MAP, almost all of which were quite reasonable for the grade level that I saw. So you can no longer honestly repeat your first statement. In fairness, this was only the math portion, not the reading portion, which I've heard is not as good.
Perhaps a bigger problem is the lack of alignment between state standards, what is actually taught in the classrooms, and the assessment expectations. In math that's less of a problem because there's a (mostly) natural logical progression through the topics, from basic arithmetic through calculus. And an adaptive test like MAP can do a decent job (with a margin of error, of course) of homing in on a student's skill level in the four different strands.
But in reading it's harder. What year (or semester) are the various grammatical or poetic structures taught? What order are these presented? There is no definitive natural order for many of the topics.
Now, if anyone thinks I'm a huge MAP fan, you're wrong. I think 3x/year is too much. I think it's inappropriate for K-2. I'm frustrated that it's reducing access to many schools' libraries as much as it is, and it is expensive (so was WASL!). But I'm so sick of people ranting on and on about the first and only assessment we've ever had that even attempts to give useful information about our kids' achievement levels outside of their narrow current grade level expectations.
One last comment--Seattle's MSP scores actually dropped once the entire district started using MAP. It appears that MAP has actually hurt students academically when it is meant to help!
This is so ridiculous I don't honestly believe you're serious, but you might consider looking at plausible reasons for those declines, like maybe the change from WASL to MSP? Or crappy math textbooks, etc.
I am curious- am I correct in assuming MAP is a computer administered test? Are there accommodations for students who need a paper & pencil version? Or does the test at least allow the student to go back and make corrections to their work or to skip ahead to answer problems that they can finish quickly & then go back?
The concerns about the MAP aren't mostly about the test itself (although Parent of two clearly has some). The concerns about the MAP fall into three basic categories:
1) Is the test too expensive?
This includes not only the payment to NWEA but also the cost to administer the test - the time it blocks out in our school libraries and computer labs, the cost (if any) to proctor the test, and the professional development around interpreting the results.
2) Are the test results useful?
Do the results tell us anything we don't already know? Are the results in a format that teachers can translate into differentiated instruction for students? Do teachers even have the time and resources necessary to differentiate instruction?
3) Are there alternative means to this end which are less expensive or easier to apply?
Surely there are classroom based assessments that teachers have been using as formative assessments before MAP. What was wrong with them? Why couldn't we adopt a set of these to use across the district?
These are not unreasonable questions. And it could be that the proponents of the MAP test - they are out there - can provide satisfactory answers to these questions. The problem is that we rushed forward with the MAP before these questions were asked or answered to anyone's satisfaction.
Yes, I'm sure some of you parents are so proud that your child is being placed at a high-school level while in fifth grade. But put your pride away. Yes, you have an exceptional child, but get real. Do you really think your child can do high-school work? Come on...
If your child can pass the MAP and not the MSP, then you should be concerned. The MSP is the test they are required to pass in high school, not the MAP. The MAP is worth beans--no state agency or college is going to look at it.
You should be concerned, because teachers are being evaluated on the MAP, not the MSP, so your child is being prepped for the MAP, not the MSP.
MAP is diluting the help our children need to do well on the MSP.
Again, I add, the MSP scores dropped the year Seattle adopted MAP.
BTW, I'm also a teacher, and yes, I pretend to take the MAP seriously in front of parents who are so proud their child's scores are above grade level (who can take that pride away? I have children I find exceptional and above grade level, too), but secretly (since I've seen the idiotic questions) I do not take the test seriously.
It's a poorly written test and one that hurts a child's understanding of the state standards. It needs to go.
This new computer-adaptive assessment is now becoming the norm for other standardized tests. GREs are now computer-adaptive tests. Will SATs be far behind?
Anyone else with corroborating stories?
As for the paradigm of computer adaptive exams, I am not saying that they are intrinsically bad. They are, however, very different from what we are used to with paper and pencil exams, and the tactics for taking them are very different.
Is the MSP computer-based or pencil and paper? Fascinating if it is pencil and paper, because the strategies we used -- doing the questions out of order, pacing oneself by skipping questions -- would be quite different, something students would be less familiar with.
Was your child's classroom education modified to match the higher grade-level work that MAP indicated he/she was ready for?
I saved the text before posting, so I'll post it again now, but what's going on?
Again this year the District has funded that position while cutting classroom teaching positions.
---------------------
Furthermore, the district can continue to use the assessments they were already using before MAP and Gates and MGJ.
Surely there are classroom based assessments that teachers have been using as formative assessments before MAP. What was wrong with them? Why couldn't we adopt a set of these to use across the district?
Individual classroom-based assessments have been used forever. But like the WASL or MSP, by nature they are going to be criterion-referenced, not norm-referenced. That was a big gripe among many of us about the WASL, yourself included. They are not useful at distinguishing between a student who is one step ahead of the game vs. a student who is 3 or 4 years ahead of the class. Neither do they distinguish between a student who is 1 year behind vs. a student who is 3-4 years behind. There's a huge difference in how educators (more than just their classroom teacher(s)) should be responding to those different needs.
Without adaptive tests, it's very difficult to get that information. Sure, you could force a kid to take multiple tests at successively higher or lower out-of-band levels, but ouch. And that still wouldn't provide good differentiation among strands. Certainly there are no perfect solutions, but there is valuable information available now, for the teachers who actually care enough to dig into the results. That seems to be the exception, rather than the norm so far, but I'm hearing of situations where MAP scores are being used as part of an overall assessment that can help determine class placement. Stuff like that doesn't happen without adaptive test results to trigger a deeper look.
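For anyone curious how an adaptive test actually narrows in on a level, here's a toy sketch in Python (my own illustration, not NWEA's real algorithm -- actual adaptive tests select items using item response theory, not a plain binary search):

```python
# Toy model of an adaptive test: each question has a difficulty value on
# the score scale; answer correctly and the next question is harder,
# miss and it gets easier, until the range brackets the student's level.

def adaptive_estimate(student_level, low=140, high=260, rounds=12):
    """Binary-search-style estimate of a student's scale score."""
    for _ in range(rounds):
        question_difficulty = (low + high) / 2
        answered_correctly = student_level >= question_difficulty
        if answered_correctly:
            low = question_difficulty   # level is at least this high
        else:
            high = question_difficulty  # level is below this
    return round((low + high) / 2)

print(adaptive_estimate(212))  # homes in on 212 in about a dozen questions
```

The point is that roughly a dozen well-chosen questions can locate a student anywhere on a wide scale, which is exactly what a fixed grade-level test cannot do.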
Many people -- teachers included -- find the MAP useful as a formative assessment. Teachers get fairly specific information on where students are, and many use it for grouping purposes, and also to understand where students do and do not need additional support. The district trains teachers on how to interpret the data, and I know for a fact that some look carefully at the results and use them. I suspect that more will if the test sticks around. As a parent of children who might fade into the background a bit in large and boisterous classes, I feel the more information I (and my child's teacher) has about them, the better.
While I like MAP, I question some of SPS's decisions on USE of the test (e.g. as a summative assessment, as a tool for evaluating teachers, as a screening tool for advanced learning). These are all "off label" uses, as I understand it.
@Parent of Two: NWEA publishes sample MAP questions on their website -- no need to file a FOIA request. Also, I think it is a stretch to attribute a dip in MSP scores to the introduction of the MAP. Not sure how that would even work. You are correct, though, that parent volunteers proctor the test at some schools (mine included).
But that's not the bulk of what I'm hearing. There's way too much covering of the ears, singing "La-La-La-La I can't hear you!", we don't like standardized tests, this data isn't useful, blah blah blah. Guess what? If we want to hold ALL our kids and staff (not just teachers) accountable for meeting standards, we need district-wide (and state-wide) assessments. And if we want to ensure that ALL kids have at least a chance of an appropriate education, these tests should be adaptive, or somehow able to distinguish among a wide range of achievement levels.
There are teachers who are looking at this data and making changes based on the results, although I don't think it's common. I know if I was a (professional) teacher, I would be looking very carefully at the results to see if I had any obvious weaknesses in my curriculum or methods. But that's just my nature, and I'm not afraid of change or of finding areas in which I can improve my skills. In fact, I welcome it.
I'd love to hear what alternatives there might be -- I can't see any primary/secondary achievement tests except MAP and STAR (I think the district was already using STAR for reading and found it unsatisfactory) in the list of "Operational CAT Testing Programs" at http://www.psych.umn.edu/psylabs/catcentral/.
Helen Schinske
Is anyone hearing me? The test itself is flawed. Those who are posting: have you actually seen the MAP?
I know what it purports to do and I know it is exciting to find a test that claims to place your advanced child at her or his level. But MAP is grossly inaccurate.
Instead of getting on my grand-stand, just answer me honestly, those posting--have you actually SEEN the test?
I have no idea how good NWEA's response to such reports is, of course -- that email could be a black hole for all I know.
Helen Schinske
Apparently you didn't see my post above.
Yes, I've looked at a LOT of MAP questions (math only), and have no idea what you're talking about. The questions were reasonable and appropriate for the level I saw (middle school). I agree that most people with the knee-jerk responses haven't sat down and looked with their own eyes.
I know what it purports to do and I know it is exciting to find a test that claims to place your advanced child at her or his level. But MAP is grossly inaccurate.
Your attitude here and earlier toward parents of advanced learners is both arrogant and condescending. I'm glad my kids' teachers don't have this attitude. This data can be helpful at both ends of the achievement spectrum.
As for the claim of "grossly inaccurate", exactly what kind of inaccuracies are you talking about? The RIT scores are merely numbers, and are valuable as relative benchmarks, both among students in a particular grade and over time for individuals. The only place I've seen anything that could be construed as inaccurate would be where they suggest readiness for a given topic/class in math, i.e. 245 -> Geometry. That is complete BS, and I haven't heard anyone suggest that they would make class placements based solely on that "one-liner".
However, if you have MAP scores across multiple years and grade levels, you will see for your own population(s) what your normal distributions are. This gives useful and hopefully actionable data to use, based on comparisons with your own classes, school, district, state. Both by grade level as well as year-to-year (in your own classrooms). While I don't agree that this should be any kind of major component to teacher evaluation, if I was a (professional) teacher I would be eagerly digging into this data to look for clues as to where I could improve, or to where I might help my peers. But as I said earlier, I'm not afraid of change, nor of improving myself.
SPS doesn't care about an individual child's stellar performance. Believe it or not, the teacher still has 27+ other kids to teach. That child's score gets lumped in with every other child's score to grade teachers and schools. That was the whole point, if that isn't clear by now. That is the whole purpose of weeks of lost learning, labor and $$$'s. Because Boston Consulting Group, the Gates Foundation, the Stuart Foundation, etc., etc. want to know their money's well spent.
Consider the State of Delaware: they spent a year and hundreds of thousands of dollars soliciting and selecting NWEA, then (when called on it) started over and picked a superior product. NWEA was unable to use their inside connection through NWEA Director/School Superintendent Joe Wise to subvert that procurement. Our district was much easier to bamboozle. Just appeal to the vanity of a certain former supt and...
But, hey, I'm trying to put all that behind me. Breathe...
WV: refibi
Perhaps a bigger problem is the lack of alignment between state standards, what is actually taught in the classrooms and the assessment expectations ...What year (or semester) are the various grammatical or poetic structures taught? What order are these presented? There is no definitive natural order for many of the topics.
I see this as a core problem with the MAP. I am by no means an expert, but it seems to me that any test that might be used to evaluate teaching quality simply must be aligned with what those teachers are being required to teach. Personally, I like the way the MAP adapts to students who exceed grade level expectations, but I don't see how a test that does so can be used to evaluate teachers, who cannot be expected to teach to standards several grades above their own grade level. Even if they are differentiating instruction for advanced learners, there is no reason to expect them to (for example) specifically teach iambic pentameter to one 4th grader (assuming that is one of the 9th grade standards).
I strongly object to the results being used to grade teachers and schools, especially when the test is not aligned with WA state standards. It’s simply measuring exposure to grade level topics, but not necessarily topics that are taught in SPS.
And when a 5th grader tests at a 9th grade level, it means that they are scoring as well as half of the 9th graders who also took the test, not that they can handle 9th grade material (makes one wonder about the knowledge of an average 9th grader).
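To make that concrete, here's a small sketch of what a "grade equivalent" actually computes (the norm numbers are invented for illustration, not NWEA's published norms):

```python
# Hypothetical median scores by grade on a RIT-like scale (made up).
MEDIAN_BY_GRADE = {3: 198, 4: 205, 5: 211, 6: 216, 7: 220, 8: 224, 9: 227}

def grade_equivalent(score):
    """Grade whose median score is closest to the student's score."""
    return min(MEDIAN_BY_GRADE, key=lambda g: abs(MEDIAN_BY_GRADE[g] - score))

# A 5th grader scoring 227 "tests at a 9th grade level" only in the sense
# that 227 is the 9th grade median -- half of 9th graders scored below it.
# It says nothing about mastery of 9th grade material.
print(grade_equivalent(227))  # -> 9
```

So a grade-equivalent score is a statement about percentiles, not about mastery.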
We had a teacher that was attempting to prep the class for the test. The teacher must have taken the test, because he/she had some preconceived ideas about what they should know for the test (topics that weren’t even part of this year’s curriculum). They spent class time on random worksheets, rather than the prescribed curriculum, with the end result being confusion and falling further behind in class work. All for the MAP test.
I haven’t yet seen the benefits that justify its many costs.
and Union and Proud, if by some chance you were referring to me, know that I totally agree with you. NWEA was crammed down our throats based on a chimera that with a RIT score each teacher will instantaneously have the holy grail to each child's intellectual promise. They need only use that RIT score and pull out their handy dandy Descartes and differentiate that lesson plan for every child, or small group. Except that the margin for error in results is so broad (missed breakfast, distracted by a bug on the windowsill, stayed up too late playing the DS) as to risk being completely off the mark. A teacher's day to day observation is superior, by far.
The level of MAP she would have taken is MAP 2-5. Quite apart from the actual accuracy of the test, it doesn't really test at a high-school level, despite having RIT scores on the same scale as the 6+. As with almost any test, the grade-equivalent scores are the least meaningful. That's not to say that it's impossible for a third-grader to be reading or doing math at a high-school level, only that the 2-5 MAP couldn't really show it.
Out-of-level testing can be a great tool, but you need decent tests and you need decent ways to interpret them.
Helen Schinske
My understanding of "alignment" is that NWEA correlates the RIT score with pass rates on the State test. RIT scores can then be used to identify students at risk of not passing the State test.
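In other words, the "alignment" is statistical rather than topic-by-topic. A minimal sketch of how that kind of flagging works (both the historical data and the cut score of 210 are invented for illustration):

```python
# Hypothetical history: (RIT-style score, passed the state test?) pairs.
HISTORY = [(195, False), (201, False), (206, False), (209, True),
           (211, False), (214, True), (218, True), (223, True),
           (228, True), (233, True)]

def pass_rate_at_or_above(cut):
    """Observed state-test pass rate among students scoring >= cut."""
    outcomes = [passed for rit, passed in HISTORY if rit >= cut]
    return sum(outcomes) / len(outcomes)

def flag_at_risk(scores, cut):
    """Students below the cut score get flagged for extra support."""
    return [s for s in scores if s < cut]

print(pass_rate_at_or_above(210))          # pass rate above the cut
print(flag_at_risk([204, 216, 231], 210))  # -> [204]
```

Note that a correlation like this tells you which students are at risk, not which topics they are missing.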
The information about posting classroom MAP scores for each teacher is disturbing. How was growth calculated? Based on the last year and a half of data, or just the growth from Fall to Winter for this year? And are individual student names and scores posted?
Really not a fan
The problem you're exposing here is not the test itself, but the invalid interpretation. Your daughter's test score did not place her at an 11th grade math level. A RIT score of 240 in math merely says she scored similarly to typical 11th grade students, not that she has mastered the material in grades 4-10. And just perusing the NWEA site, I found this quote next to the RIT #s: "These data should be used as one of many data points for instructional decisions rather than as the only single placement guide.", which is only common sense!
Presuming this score wasn't an anomaly and her scores are reasonably consistent, what you and her teacher should be taking from that score is simply the fact that your daughter is very advanced in math for her age and should probably be getting some kind of differentiated material and instruction. Also, if she's not in an advanced learning program, that's a good indicator that she should be. Unless you have an absolutely stellar teacher, they probably knew that she's strong in math, but without an adaptive test like this it's very unlikely that they had any clue just how advanced she really is.
Something else: test results like this help prevent teachers from hiding this kind of student from advanced learning programs because of personal philosophical issues, which is a load of BS. Also, I've heard of low-achieving classrooms (and entire buildings) in past years that wanted to keep their high achievers, so they didn't refer ANYONE for advanced learning tests or programs. Having a norm-referenced test that ALL kids take cuts out this kind of misbehavior.
I'm not sure why other parents (none1111) so strongly are supporting MAP
Why on earth would you say I'm "strongly supporting MAP"?!?! Just because I'm taking a balanced view and not parroting the "it's totally worthless and could never be useful for anything" mantra? I've written above that I am NOT a big supporter, and I've written about the aspects of MAP that I don't like at all. I'm just sick of people constantly spouting off completely one-sided arguments without even attempting to consider any aspects other than the ones that support their own arguments.
and this: unless its pride (they want their child to be a mensa genius)?
Yeah, here we go with the condescending stuff again. My earlier post may eventually reappear, but like Po2's comments above, this comment is arrogant and insulting. Whether my kid is struggling to keep up or a "mensa genius", I want the best and most appropriate instruction possible. Not something that's way beyond or way behind their abilities. We should ALL be striving for this.
Regardless of the merits (or not) of the MAP, if teachers don't like it, and are either afraid of it or actively fighting against it, that's demoralizing and likely sucking energy out of them and their classrooms.
I don't know how much of the poor morale I'm hearing about is due to use of the test itself vs. the push for its use in teacher evaluation, but the net effect is real. I've heard from a number of teachers in different schools, that morale is down, and that's really not good for a classroom. It's not just the MAP, but that's a piece of the frustration.
Growth was calculated from Fall to Winter, which is only 3 months. No, individual students scores were not posted, just each teacher's MAP growth from Fall to Winter.
Having our class MAP scores posted is definitely a major reason I feel demoralized. I can't speak for other teachers. But the negative attention nationally that we are getting, how we are being portrayed in the press, is also a major reason.
Last night I went to my niece's middle school jazz concert and the music teacher was so polished, and the music he got out of the kids just phenomenal. I thought, if only the nation could see how professional he is, the majority of teachers are, wouldn't they feel awful about all of this mud-slinging?
Trying to get rid of our seniority in the State Senate, trying to tie our pay to test scores, claiming that older teachers need to retire, trying to claim that the drop-out rate in low-income schools is entirely because of poor teachers. I've taught in high-income schools and had almost 100 percent pass the WASL, and taught in low-income schools where only 60 percent passed. My teaching wasn't any worse in the low-income schools. In fact, I worked harder and was more innovative in the low-income school.
I would accept MAP and even find it useful if it were not used to evaluate my teaching, not printed for the entire school to see, and not put in my permanent record.
BTW, thank you all for posting. I feel so much better this morning reading your posts. This MAP issue has been really bothering me and it helps to "talk" it out with all of you. It also helps to hear the other side--those who like MAP. It keeps me balanced.
I was referring to you, but not negatively. I think your posts are brilliant and I really appreciate the information you are posting. I don't know how it is that so many are so informed on this blog, but I learn so much from it. I actually hooted when I read your post and agreed: SPS doesn't care about a child's stellar performance. I'm glad you posted that, so that parents can realize that MAP isn't about recognizing their children and meeting bright children's needs. It is all about evaluating teachers. The district is still in the process of phasing out Spectrum, after all. How many Spectrum teachers would $5 million buy?
Is ALO the new Spectrum?
Or is APP morphing into Spectrum?
What is the District's vision for Advanced Learning?
Mom of 2
Now will my child be pushed out of classrooms? Because teacher evaluations are tied to a score on a crappy test? Will discrimination take on yet another dimension?
I'm curious how the results are actually shown. Is it some sort of percent, like the change in average score relative to baseline? Do they post just one number for each teacher, or does the data point come with a standard deviation or confidence interval to assist with interpretation? And, is the result compared to a national reference point, a local reference point, or is each teacher's "expected growth" determined based on the individual class roster that teacher has so that "measured growth" is thus compared to an individualized "expected growth"? As you can see, I have a lot of questions about the methodology here.
I think teachers who want to fight this kind of nonsense need to approach it from multiple angles. Yes it's important to point out when subjective measures of a teacher's performance don't match the MAP results, but it's also important for you to ask questions about the methods and point out potentially fatal flaws, which adds an objective angle to your complaints as well.
http://www.nwea.org/support/article/543/growth-measure
http://community.nwea.org/node/228
FYI
If you want to end the use of the MAP in teacher evaluations, I can't think of a quicker way to get it done than to publish the data, not just to other staff, but to parents as well.
My own experience, so far, has been that my kid's MAP scores have not at all been reflective of the quality of the teacher or the amount of academic growth I've seen in my child.
Publish the data and a lot of other parents will notice the same. Keep the data secret and it will do its damage silently, without alerting the public to how whacko the whole program is.
I am not a MAP fan (cost, time, and failure to use test for intended purposes), but am fairly ignorant, as my child has never taken it. Now that we have two years under our belts, I think it is time for the SSD to have a full, informed discussion, with ALL stakeholders (teachers, parents, etc.) about what we are doing, what we are getting from it, how much it costs, and -- if we aren't happy -- what other alternatives there are (this last being something that was glaringly NOT done when we signed up for this thing).
Don't ask them what it is. It's like the Colonel's secret recipe.
Wow, now there's a different take on it. My thoughts after reading about scores being posted publicly (what school is this?!) is that it's appalling. This is Seattle, not Los Angeles! Didn't we learn anything from their fiasco? Still mulling your comment though.
Lori said: They are actually drawing conclusions about teachers' effectiveness based on data from 30 or so kids that have been under their tutelage for 3.5 months?!
I totally agree with this. Individual student scores are going to have a great deal of error, and even a full classroom can have a significant margin of error. You need to have quite a bit of data before you can even pretend to know how to use it. My personal opinion is that the district won't really even have a baseline to work from until at least the end of this school year, and that won't even be especially robust yet.
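A back-of-the-envelope sketch shows why (all numbers here are hypothetical, and a plain t-style interval is surely cruder than whatever NWEA actually uses):

```python
import math
import random

random.seed(1)

# Simulate one class of 28 students whose "true" average fall-to-winter
# growth is 4 points, with student-to-student standard deviation of 7.
growth = [random.gauss(4, 7) for _ in range(28)]

n = len(growth)
mean = sum(growth) / n
sd = math.sqrt(sum((g - mean) ** 2 for g in growth) / (n - 1))
sem = sd / math.sqrt(n)

# Rough 95% interval (t is about 2.05 for 27 degrees of freedom).
low, high = mean - 2.05 * sem, mean + 2.05 * sem
print(f"measured growth {mean:.1f}, 95% interval ({low:.1f}, {high:.1f})")
```

With plausible numbers the interval comes out several points wide -- wide enough that two teachers posted side by side could trade places on noise alone.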
But as with any large-scale system, patterns will emerge over time. Eventually, it will become hard to hide the fact that certain elementary teachers (virtually) ignore math because they prefer to teach writing or social studies, or vice versa. Teachers are human, and they have preferences. And anyone who's spent time in their kids' schools for a while knows which teachers do a great job in which subjects, and which are not so good (or worse) in other subjects. But because it's all hearsay and innuendo, it rarely gets more than a little lip service in practice. And that's not fair to the kids, some of whom may take years to recover (if ever) if they get a couple of bad years in a row.
I'm also fully aware of the dangers that tools like this pose when put in the hands of bullies or incompetents. And I'm very sympathetic to teachers and teaching positions where there are extenuating circumstances. I can think of many ways the district could easily abuse this data (not going to write about them and give them any ideas!), and that is truly a big worry. But here's something else to consider: as far as I can tell, the district already abuses their power. Will MAP data really make it that much worse? MAP data didn't affect the ability of the district to bring in TfAers, did it? Or flinging principals around like rag dolls playing musical chairs. At least having some kind of uniform district-wide norm-referenced test scores might provide some benefit, where many of the district's maneuvers don't seem to provide any benefit whatsoever as far as I can tell.
I did dig into some of the research early last year, and saw 2 other tests (I don't remember the names anymore) that looked like they might have been better for SPS's stated purposes.
But here's a question to ask yourselves: would it matter? How many of you would welcome ANY (or reject ALL) computer-based adaptive testing? The feeling I get from so many people here is (cue robot voice:) "standardized..tests..bad -- only..teachers..know..what..is..good". But there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession.
How can that be managed in a fair way? In a way that's fair to our kids as much as it's fair to the teachers? I don't claim to have any pat answers, but I know there is a lot of frustration across the country about this right now. If we bury our collective heads in the sand and don't at least recognize the fact that there are real issues, we run the risk of the (very powerful) reform crowd pulling a lot of fence-sitters to their camp. I don't want to see more crap like TfA here in Seattle.
Too much writing by me over the past couple days, back to work! But I'll still keep reading.
Union, why do you say this?
When someone says their post disappeared, I try to get to the spam holder as fast as I can. We have no control over Blogger's filter (we are considering moving to another more flexible platform).
This has less to do with tests than with the purpose of education: Is it to put in front of students a systematic process of standards delivery, or to have general expectations of knowledge to be passed on, with the caveat that educators will, perforce, vary from the script?
Standardized tests (if I can use that general term) seem to suggest that there is a set list of things that MUST be taught at each...age? (I say age, because schools operate on grade levels, rather than a college system of readiness.) The standardized test only tests (if it does even this) certain identified...standards. I guess my question about education is whether "the standards" are all it's about. Is there more that goes on in the classroom? Is there fluidity? Are things taught extemporaneously that aren't tested on standardized tests? Should there be? Should all students be at the same place at the same time on the same standards?
The question isn't the test, so much as what drives it: Do we want schools to be only about "the standards"? Do we want educators to only teach "the standards"?
I'm not saying there shouldn't be standards, but that there's more going on, and the tests always seem to be the focus: HSPE, MAP...we hardly ever hear about the incredible teaching and learning that isn't on these things (indeed, we mostly hear only hugely generalized "scores" that are lately used to slam educators (all teachers' MAP scores posted...eek!)).
So standardized tests, I'd posit, kill the spirit of education, the fluidity and passion: They turn it into a formula, and then measure educators and students based merely on that simplistic formula. Show me where this isn't so, and I'll take a look, but even MAP, which was sold as a formative device, is now apparently a teacher-tester. Ack.
"But there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession. How can that be managed in a fair way? In a way that's fair to our kids as much as it's fair to the teachers? I know there is a lot of frustration across the country about this right now. If we bury our collective heads in the sand and don't at least recognize the fact that there are real issues, we run the risk of the (very powerful) reform crowd pulling a lot of fence-sitters to their camp"
Would you care to answer this Seattle Citizen?
The teachers posting here are very aware of the academic profiles of their students. But my experience was that 6 years of mostly excellent teachers did not pick up on my son's learning disabilities. They said 'he is a boy', 'he is lazy', 'he's just not that bright'. His low WASL scores and classroom assessments backed that up, because he has learning disabilities around writing. He was placed in remedial reading groups and told not to try to read books that he would bring to school, as they were too far above his reading level based on the teacher's evaluation of his handwritten assessments. When he started taking MAP they were quite shocked at his high scores. We had the whole battery of psych-ed testing done, including multiple achievement and IQ tests. The MAP results are very consistent with those results. (So my one objectively measured data point does not show the MAP to be a loopy, inaccurate measure.) All of the other tests he has been given, including WASL, MSP, CBAs, etc., are not consistent with those achievement tests.
I know a number of other children with the same experience of having learning disabilities missed for years by well-intentioned classroom teachers.
So I would be happy if I thought all teachers would be able to correctly assess my child in the classroom, but it has not been my experience.
-I really wanted to trust the teachers.
This would substantially cut down the costs, and the lost library/class time, while still giving parents and teachers a timely assessment.
So MAP in the Fall, MSP in the Spring, and in-class assessments throughout the year. Shouldn't that be enough?
Another mom
I can relate to your experience. However, I would say for every child missed, there are two or more found. Just this year another student in my child's school was identified (at fifth grade) and placed in the inclusion program.
Some disabilities are very hard to discern. For example, it was only through outside psychological testing that I was able to learn that my special-needs child struggled with inferential reasoning. To a teacher, it would just seem that his comprehension was lousy, but in fact it's more complex than that. Fortunately, through his IEP I've been able to get him help.
NWEA itself does not market MAP as a method to identify special learners. The district's own "assessment expert" said MAP should not be used for this in isolation, and that two other data points should be collected before a decision (such as failing the Spectrum cut-off) is made.
They don't.
The MAP results are intended to prompt questions.
The assessment suggests...
a) that the student is working below grade level - check that.
b) that the student is working above grade level - check that.
c) that the class as a whole doesn't understand fractions as well as they should - check that.
The results from the assessment are supposed to be indicators to be investigated, not gospel facts to be blindly accepted.
Anyone who suggests that the MAP results prove anything misunderstands the assessment and the results.
Sigh.
The "real issues" none1111 suggests we address to keep people from jumping off the fence onto the "reformers'" side are issues the reformers themselves have propped up, using various propaganda techniques. I have no interest in granting those issues credence.
There is already a way to evaluate educators. There's no need to include standardized tests. Just because the reformers have said it's a big, big issue doesn't make it so.
Rather than address the "real issues" raised by the reformers, I would rather up the level of truth-telling, up the level of daylighting the reformers' agendas. This is already working: People around the nation are calling BS on many of the reformers' "issues," and this is a good thing.
During these tough economic times, the reformers are using more and more anti-union and anti-teacher rhetoric and hyperbole to try to sway the fence-sitters. I have no interest in granting them validity; rather, I would like to see citizens stand together even closer, united against the corporate takeover of public schools.
NWEA, TFA, OSC, and A4E are working hard to break public education by demonizing educators. People are waking up to this, and it is a good thing.
"there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession."
Thank goodness for a range of abilities! Can this range be identified by a standardized test?
As to the range of expectations, I'm not sure which expectations this refers to: Students, parent/guardians, citizens, board, educators, business....
Does a standardized test identify success in meeting all those expectations in students or educators? Or parent/guardians? Or citizens?
To me, that wouldn't save anything like enough money to be worth tossing out one of the test's supposed great advantages: the ability to monitor progress (including above- and below-level progress) during the year, so that you're comparing students to their own work and judging their own progress, rather than the nonsensical business of comparing this year's fourth-grade class to last year's.
If it's not doing the job properly (and again, I'm getting the sense it isn't), toss it completely. It makes no sense to give it once a year and get even less usable data. For one thing, that form of administration would mask any problems with the test and we'd be stuck with the dang thing much longer.
The only reason I can see for cutting MAP administrations down to once a year is if it is part of a planned political process for eventual elimination: in short, a face-saving measure. But we'd be paying a heck of a lot, both in dollars and kid/teacher hours, for that supposed savings in "face". We heave-ho'ed the ITBS in one year without a blink. Why not the MAP?
Personally I think the whole licensing fee structure stinks to high heaven. For the amount we've paid NWEA already, we should just OWN the test and use it as and how we like, just as we do with typing tutor software or whatever.
Helen Schinske
There is then an assumption that teachers target instruction for each child, based on MAP results, to ensure measurable growth on the MAP test. The MAP content would then be driving instructional content.
Is that the intent?
Yet, there is also a push to standardize content and delivery, which seems to limit the ability to differentiate.
So which way is it?
I disagree that the once-a-year test would eliminate its use to measure "progress." You just measure progress on a year-to-year basis, which seems more reliable than measures only two months apart (Fall to Winter).
To me, the MAP helps identify those needing extra support, either above or below level, but the District needs to decide what that support involves - out of class support, in class groupings, gifted classes, etc.
Signed - Less testing, more learning
You know what test is loopy, though? The CogAT. That one has been off by 20-30 percentile points twice now.