
Wednesday, March 16, 2011

Superintendent to Create Time to Talk with Parents and Community

Dr. Enfield sent out a letter to parents about the coming days for Seattle Public Schools.  Here's some of what she says:

In the coming weeks I will be in the community listening to your comments, questions and concerns. Additionally, I will continue to have an open door policy so you can offer input directly on how we can continue to improve. Beginning on Thursday, March 24th, I will be holding open office hours from 4:00 p.m. - 5:30 p.m. at the John Stanford Center, 2445 3rd Avenue South. I encourage you to make an appointment to come and meet with me. Please contact Venetia Harmon, vlharmon@seattleschools.org to schedule a time.


I applaud the Superintendent for her efforts to reach out and connect with parents and the community.  It is a good first step.

79 comments:

seattle citizen said...

I also applaud the superintendent for these open office hours, though I'm a bit unclear whether they are every Thursday or every day; the message doesn't say.

Also, it's great to have an "open door policy" and an "open office," but to me this implies just dropping on by, walking on in. Must we contact Ms. Harmon for an appointment? That kind of defeats the idea of "open door."

Parent of two said...

Anybody going to the Board meeting tonight? I would like to know how the SEA presentation about phasing out MAP goes over. Thanks.

Susan said...

Hi, I'd love to know more about the implications of ditching the MAP test. Can someone explain? I know that some folks don't like it, and maybe it was purchased unethically (?), but don't we have so, so much money invested in the computers and infrastructure? Are we going to need to replace this testing system with a different testing system with a whole new set-up cost? In which case we win the battle but lose the war (losing even more money from the budget)? Not challenging anyone here, I just really want to understand. Thanks!

Melissa Westbrook said...

It's every Thursday; she said this at the Board meeting.

Parent of Two and Susan, I think I'll start a separate thread tomorrow on this subject to get input.

Parent of Two said...

What money?! The computers were already there, and the $5 million a year paid to NWEA for their computer test (proctors aren't paid to proctor the test--parents and volunteers are proctoring MAP) was covered by grant money from Bill Gates. The money that will be paying for MAP is coming from the current levy that passed. Only Gates would be out the money, and I think he can stand to lose a little.

Parent of Two said...

Furthermore, the district can continue to use the assessments they were already using before MAP and Gates and MGJ. This will cost the district nothing.

Parent of Two said...

The test will cost Seattle property owners $5 million a year if we keep it. And have you seen the test? I have yet to meet a person who has seen the test who takes it seriously. It is such a bad test. I think the Times needs to publish some of the questions this test actually asks. That would be the best way to get the district to ditch the test the way they ditched MGJ. You have no idea what an idiotic test it is. I encourage you to look at the test when your child takes it in June. You are allowed to do this under the Freedom of Information Act.

Parent of Two said...

One last comment--Seattle's MSP scores actually dropped once the entire district started using MAP. It appears that MAP has actually hurt students academically when it is meant to help!

seattle citizen said...

Parent of 2, you wrote that
"proctors aren't paid to proctor the test--parents and volunteers are proctoring MAP)"

Uh, no. Maybe in some cases, but I don't believe this is generally true. If it IS, that's wrong: are parents trained to administer MAP? It's a computer program that requires booting up, student IDs, error overrides...

MAP takes FTE time (staff paid to proctor) and also gobbles up tech resources: almost every computer in the district is unavailable for a few days, plus the actual FTE for the tech people who watch over the whole system and troubleshoot. Not to mention teacher time spent looking at the "data" to see if it matches up with their on-the-ground understanding of the student, whether the data is reliable, etc. I mean, how many teachers are fluent in the use of RIT scores? Do YOU know what a RIT score is? How about Descartes? Do teachers use it? Or just spend time looking at it, wondering...

seattle citizen said...

oh, sorry Po2, I thought you were pro-MAP with your comment about parents proctoring. I think I misread your comment. My bad.

Anonymous said...

While I have concerns about the rollout and purchase of MAP, and might like to see it 2 times a year rather than 3, I have to say personally it's been a huge plus. Turns out my child was hiding quite a bit of knowledge and skill, and tested really high! Remember, it's not like the MSP, which tests for specific grade-level-oriented knowledge. Because MAP has no ceiling, my child was able to perform work at a level considered grades higher. I like the fact that MAP can do something no other current test in SPS can, and no teacher can take the time to. In the same vein, while I was volunteering, a first grade teacher asked me to work with a child who had blown her away on MAP; she'd had no idea. Many teachers do like MAP. They are not all crazy or tools of the machine. I am hoping people will consider that not everyone will agree on MAP, and that reasonable people may come down on different sides. Thanks.

MAP Useful

none1111 said...

Furthermore, the district can continue to use the assessments they were already using before MAP and Gates and MGJ.

And what, pray tell, was SPS using for district-wide assessments before MAP? Were you a fan of the WASL?? Cause afaik that was it. And a very sad test that was. There was nothing that even purported to assess kids who were working far outside their current grade level, either above or below. You may not like any assessment, but as "MAP Useful" pointed out, there are benefits that can only come from these kinds of adaptive tests. Our family saw similar benefits, and I've heard the same thing from others.

Are there downsides? Sure. But it's more useful to at least attempt to be objective than to jump on the "It's evil, evil, evil!" bandwagon.


I have yet to meet a person who has seen the test who takes it seriously. It is such a bad test. ... You have no idea what an idiotic test it is. I encourage you to look at the test when your child takes it in June.

I have no idea what you're talking about. I was able to sit down and look through many questions on the MAP, almost all of which were quite reasonable for the grade level that I saw. So you can no longer honestly repeat your first statement. In fairness, this was only the math portion, not the reading portion, which I've heard is not as good.

Perhaps a bigger problem is the lack of alignment between state standards, what is actually taught in the classrooms and the assessment expectations. In math that's less of a problem because there's a (mostly) natural logical progression through the topics, from basic arithmetic through calculus. And an adaptive test like MAP can do a decent job (with a margin of error, of course) of honing in on a student's skill level in the 4 different strands.

But in reading it's harder. What year (or semester) are the various grammatical or poetic structures taught? What order are these presented? There is no definitive natural order for many of the topics.

Now, if anyone thinks I'm a huge MAP fan, you're wrong. I think 3x/year is too much. I think it's inappropriate for K-2. I'm frustrated that it's reducing access to many schools' libraries as much as it is, and it is expensive (so was WASL!). But I'm so sick of people ranting on and on about the first and only assessment we've ever had that even attempts to give useful information about our kids' achievement levels outside of their narrow current grade level expectations.


One last comment--Seattle's MSP scores actually dropped once the entire district started using MAP. It appears that MAP has actually hurt students academically when it is meant to help!

This is so ridiculous I don't honestly believe you're serious, but you might consider looking at plausible reasons for those declines, like maybe the change from WASL to MSP? Or crappy math text books, etc.

Jet City mom said...

That's great to hear, seattle citizen.

I am curious- am I correct in assuming MAP is a computer administered test? Are there accommodations for students who need a paper & pencil version? Or does the test at least allow the student to go back and make corrections to their work or to skip ahead to answer problems that they can finish quickly & then go back?

Charlie Mas said...

I like the idea of an adaptive formative assessment. I even like the idea of the whole district using the same formative assessments. That's what the MAP is supposed to be.

The concerns about the MAP aren't mostly about the test itself (although Parent of two clearly has some). The concerns about the MAP fall into three basic categories:

1) Is the test too expensive?
This includes not only the payment to NWEA but also the cost to administer the test - the time it blocks out in our school libraries and computer labs, the cost (if any) to proctor the test, and the professional development around interpreting the results.

2) Are the test results useful?
Do the results tell us anything we don't already know? Are the results in a format that teachers can translate into differentiated instruction for students? Do teachers even have the time and resources necessary to differentiate instruction?

3) Are there alternative means to this end which are less expensive or easier to apply?
Surely there are classroom based assessments that teachers have been using as formative assessments before MAP. What was wrong with them? Why couldn't we adopt a set of these to use across the district?

These are not unreasonable questions. And it could be that the proponents of the MAP test - they are out there - can provide satisfactory answers to these questions. The problem is that we rushed forward with the MAP before these questions were asked or answered to anyone's satisfaction.

Po2 said...

Good point. It is a computer test and students are not allowed to go back and check their work, and (believe me) several students accidentally push the answer button by mistake while reading the test; then they can't go back and redo the problem. It's done.

Yes, I'm sure some of you parents are so proud that your child is being placed at a high-school level while in fifth grade. But put your pride away. Yes, you have an exceptional child, but get real. Do you really think your child can do high-school work? Come on...

If your child can pass the MAP and not the MSP, then you should be concerned. The MSP is the test they are required to pass in high school, not the MAP. The MAP is worth beans--no state agency or college is going to look at it.

You should be concerned, because teachers are being evaluated on the MAP not the MSP so your child is being prepped for the MAP not the MSP.

MAP is diluting the help our children need to do well on the MSP.

Again, I add, the MSP scores dropped the year Seattle adopted MAP.

BTW, I'm also a teacher, and yes, I pretend to take the MAP seriously in front of parents who are so proud their child's scores are above grade level (who can take that pride away? I have children I find exceptional and above grade level, too), but secretly (since I've seen the idiotic questions) I do not take the test seriously.

It's a poorly written test and one that hurts a child's understanding of the state standards. It needs to go.

Po2 said...

Good points, Charlie Mas, also. (:

Dorothy Neville said...

Emeraldkity, times have changed. Testing on the computer is a completely different paradigm. Such tests are adaptive, so your answers to each question inform which question you get next. There is no such thing as going back to change an answer, or skipping a question that you want to work on later. Very different from when we took assessments. It changes strategy, for sure.

This new computer-adaptive assessment is now becoming the norm for other standardized tests. GREs are now computer-adaptive tests. Will SATs be far behind?
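For the curious, the adaptive mechanics described above can be sketched in a few lines of Python. This is a toy illustration only -- NWEA's actual item-selection algorithm is proprietary and based on item response theory, not this simple step rule -- but it shows why there is no "going back": each answer determines which question comes next.

```python
# Toy sketch of a computer-adaptive test (NOT NWEA's real algorithm).
# Each response moves the difficulty estimate up or down, and the next
# question is drawn at the current estimate, so answers can't be revisited.

def run_adaptive_test(answer, start=200, step=10, num_items=5):
    """Return a final difficulty estimate after num_items questions.
    `answer(level)` returns True if the student answers an item
    at the given difficulty correctly."""
    level = start
    for _ in range(num_items):
        correct = answer(level)          # student answers one item
        level += step if correct else -step
        step = max(1, step // 2)         # smaller steps as we converge
    return level

# Example: a student who can handle anything up to difficulty 215
score = run_adaptive_test(lambda level: level <= 215)
print(score)  # converges near 215
```

A hypothetical example, but it captures the point made above: strategies like skipping ahead or revising earlier answers simply don't exist in this paradigm.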

Dorothy Neville said...

Last night at the board meeting, one of the public testimonies was about MAP. She brought up something I had not heard before: the MAP is supposed to display in one font, but shows up on the SPS computers in another, an archaic, unfamiliar one. This leads to all sorts of errors, inability to read whole passages, and other glitches. Evidently NWEA sent out a service pack of bug fixes, but it has not fixed the font issue. She said this was a widespread issue and one of many reasons why test scores can be unreliable.

Anyone else with corroborating stories?

As for the paradigm of computer-adaptive exams, I am not saying that they are intrinsically bad. They are, however, very different from what we are used to with paper-and-pencil exams, and the tactics for taking them are very different.

Is the MSP computer or pencil and paper? Fascinating if it is pencil and paper, because the strategies we used -- doing the questions out of order, pacing oneself by skipping questions -- would be quite different, something students would be less familiar with.

Po3 said...

MAP Useful, you said, "Because MAP has no ceiling, my child was able to perform work at a level considered grades higher."

Was your child's classroom education modified to match the higher-level grade work that MAP indicated he/she was ready for?

Salander said...

At the high school level one entire full time certificated position is staffed for no other purpose than MAP testing.

Again this year the District has funded that position while cutting classroom teaching positions.

none1111 said...

Charlie: I agree with all of your points (including the fact that the district rushed into this without having a good plan in place), except this:

Surely there are classroom based assessments that teachers have been using as formative assessments before MAP. What was wrong with them? Why couldn't we adopt a set of these to use across the district?

Individual classroom-based assessments have been used forever. But like the WASL or MSP, by nature they are going to be criterion-referenced, not norm-referenced. That was a big gripe among many of us about the WASL, yourself included. They are not useful at distinguishing between a student who is one step ahead of the game vs. a student who is 3 or 4 years ahead of the class. Neither do they distinguish between a student who is 1 year behind vs. a student who is 3-4 years behind. There's a huge difference in how educators (more than just their classroom teacher(s)) should be responding to those different needs.

Without adaptive tests, it's very difficult to get that information. Sure, you could force a kid to take multiple tests at successively higher or lower out-of-band levels, but ouch. And that still wouldn't provide good differentiation among strands. Certainly there are no perfect solutions, but there is valuable information available now, for the teachers who actually care enough to dig into the results. That seems to be the exception, rather than the norm so far, but I'm hearing of situations where MAP scores are being used as part of an overall assessment that can help determine class placement. Stuff like that doesn't happen without adaptive test results to trigger a deeper look.

SPS Alumna and Mom said...

Thank you Charlie for the clarifications! I have been at a loss to understand the extreme opposition to the MAP in this group.

Many people -- teachers included -- find the MAP useful as a formative assessment. Teachers get fairly specific information on where students are, and many use it for grouping purposes, and also to understand where students do and do not need additional support. The district trains teachers on how to interpret the data, and I know for a fact that some look carefully at the results and use them. I suspect that more will if the test sticks around. As a parent of children who might fade into the background a bit in large and boisterous classes, I feel the more information I (and my child's teacher) have about them, the better.

While I like MAP, I question some of SPS's decisions on USE of the test (e.g. as a summative assessment, as a tool for evaluating teachers, as a screening tool for advanced learning). These are all "off label" uses, as I understand it.

@Parent of Two: NWEA publishes sample MAP questions on their website -- no need to file a FOIA request. Also, I think it is a stretch to attribute a dip in MSP scores to the introduction of the MAP. Not sure how that would even work. You are correct, though, that parent volunteers proctor the test at some schools (mine included).

none1111 said...

FWIW, I totally agree with many people that the district rushed into the MAP with questionable (or worse) motivations. There are a couple other adaptive tests in use that should have been given more attention. And the district did NOT have a good plan in place in advance to help support teachers in making use of the new data.

But that's not the bulk of what I'm hearing. There's way too much covering of the ears, singing "La-La-La-La I can't hear you!", we don't like standardized tests, this data isn't useful, blah blah blah. Guess what? If we want to hold ALL our kids and staff (not just teachers) accountable for meeting standards, we need district-wide (and state-wide) assessments. And if we want to ensure that ALL kids have at least a chance of an appropriate education, these tests should be adaptive, or somehow able to distinguish among a wide range of achievement levels.

There are teachers who are looking at this data and making changes based on the results, although I don't think it's common. I know if I was a (professional) teacher, I would be looking very carefully at the results to see if I had any obvious weaknesses in my curriculum or methods. But that's just my nature, and I'm not afraid of change or of finding areas in which I can improve my skills. In fact, I welcome it.

hschinske said...

Dorothy, there *was* a computerized adaptive version of the SAT piloted years and years ago (mid to late nineties, IIRC). It seems to have gotten dropped and I haven't heard about it in a long time. I don't know why that should be, as the SAT is really very similar to the GRE.

I'd love to hear what alternatives there might be -- I can't see any primary/secondary achievement tests except MAP and STAR (I think the district was already using STAR for reading and found it unsatisfactory) in the list of "Operational CAT Testing Programs" at http://www.psych.umn.edu/psylabs/catcentral/.

Helen Schinske

po2 said...

For $5 million a year, maybe the district can make its own adaptive test.

Is anyone hearing me? The test itself is flawed. Those who are posting, have you actually seen the MAP?

I know what it purports to do and I know it is exciting to find a test that claims to place your advanced child at her or his level. But MAP is grossly inaccurate.

Instead of getting on my grandstand, just answer me honestly, those posting--have you actually SEEN the test?

hschinske said...

One thing I've been wondering about is whether the teachers who see inaccurate or ambiguous questions on the MAP are reporting them appropriately. There is a process for that (see http://www.nwea.org/support/article/966/problem-item-report-form).

I have no idea how good NWEA's response to such reports is, of course -- that email could be a black hole for all I know.

Helen Schinske

none1111 said...

Is anyone hearing me? The test itself is flawed. Those who are posting, have you actually seen the MAP?

Apparently you didn't see my post above.

Yes, I've looked at a LOT of MAP questions (math only), and have no idea what you're talking about. The questions were reasonable and appropriate for the level I saw (middle school). I agree that most people with the knee-jerk responses haven't sat down and looked with their own eyes.

I know what it purports to do and I know it is exciting to find a test that claims to place your advanced child at her or his level. But MAP is grossly inaccurate.

Your attitude here and earlier toward parents of advanced learners is both arrogant and condescending. I'm glad my kids' teachers don't have this attitude. This data can be helpful at both ends of the achievement spectrum.

As for the claim of "grossly inaccurate", exactly what kind of inaccuracies are you talking about? The RIT scores are merely numbers, and are valuable as relative benchmarks, both among students in a particular grade and over time for individuals. The only place I've seen anything that could be construed as inaccurate would be where they suggest readiness for a given topic/class in math, i.e. 245 -> Geometry. That is complete BS, and I haven't heard anyone suggest that they would make class placements based solely on that "one-liner".

However, if you have MAP scores across multiple years and grade levels, you will see for your own population(s) what your normal distributions are. This gives useful and hopefully actionable data to use, based on comparisons with your own classes, school, district, state. Both by grade level as well as year-to-year (in your own classrooms). While I don't agree that this should be any kind of major component to teacher evaluation, if I was a (professional) teacher I would be eagerly digging into this data to look for clues as to where I could improve, or to where I might help my peers. But as I said earlier, I'm not afraid of change, nor of improving myself.

Emily's Mom said...

I have seen the MAP and I agree with PO2; the questions for both math and reading are poorly designed. And I also question a test that places my child (who is in third grade) at a high-school level. I asked Emily questions from my high-school algebra text after I was told she placed at 11th grade in math, and she was clueless. I know that Emily is bright, but she is not at a high-school level in math. I question the test's validity. Do teachers take the test seriously? I doubt it. I like the assessments the teacher has given Emily. They are much more accurate in telling me what she knows in math and what she doesn't. I'm not sure why other parents (none1111) are so strongly supporting MAP, unless it's pride (they want their child to be a Mensa genius) or perhaps these are the teachers at school who are getting a stipend for proctoring? Emily's teacher is the one responsible for setting up MAP at our school, and she gets a small stipend to do this. I understand those who are against MAP because it doesn't seem like a valid test. I don't understand why there are those supporting it. Fill me in. Thanks

A Teacher said...

I am a second grade teacher and I (also) have seen the MAP test and agree that the questions are odd. I also question the validity of the test. Before the MAP, the district used CBAs and WASL released tests. I think it would be nice to have a test that measured growth, and I was sincerely hoping that the MAP would be that test. But it is not. I agree, it is not a very good test. It's not as if we teachers are clueless as to where children are. It's not as if we don't collect their work every day and look at it. I also give tests in class daily to help me decide what needs to be taught the next day. I am constantly monitoring and adjusting my lessons to meet the needs of every child. We are with these kids for six hours; it becomes quite apparent where each child is, day to day. Anyway, if we need a test to hold teachers accountable, why not a better one? You would think $5 million a year would buy a better test.

Parent Proctor said...

I am a parent who proctors the test for my school. I have seen both the math and reading tests for various grade levels. I agree that it is a crappy test. No, none1111 is wrong; the questions are so not appropriate. Ditch it.

Matt's Dad said...

I am so glad that parents and teachers are finally speaking out against MAP. I asked to look at the questions on both the reading and math tests after my son came home bragging that his teacher told him he placed at 8th grade for reading and 9th grade for math. He is only in fourth grade. I said to myself, I have to check this out. And yes, I am appalled at the questions. My main complaint is that the test is not rigorous enough. Do we really need this dumbing down of the curriculum? I'm perfectly happy with the MSP, and the WASL was fine with me. I am deeply perturbed to be paying for a test that is so grossly inaccurate. What a headache. Maybe we could put up an initiative on the November ballot to have MAP phased out?

PO2 said...

Just because a child can figure out that two balls balance three triangles does not mean they are proficient in algebra. Just because a child knows what congruent and parallel mean does not mean they have mastered high-school geometry. What is wrong with this district? Does anyone choosing these tests truly take the education of our children seriously? At least I can understand MGJ's take on MAP, because she was on the take. But what is up with the bloggers on this site? Please tell me, those who support MAP: why? Why?!

MAPsucks said...

Gee, on the font problem... where's our refund? C'mon, this is year three, with many FTEs and costly district resources tied up on this dam test, and the font goes whack?!

MAPsucks said...

none1111,

SPS doesn't care about an individual child's stellar performance. Believe it or not, the teacher still has 27+ other kids to teach. That child's score gets massed with every other child's score to grade teachers and schools. That was the whole point, if that isn't clear by now. That is the whole purpose of the weeks of lost learning, labor, and $$$s. Because Boston Consulting Group, the Gates Foundation, the Stuart Foundation, etc., etc., want to know their money's well spent.

Ditto on Map Sucks said...

I guess parents who support MAP feel they are getting what they pay for. I guess they feel that $5 million of Bill's money ought to buy a pretty d-mned good test. Well, guess again. Why, why, why can't we get our money's worth on this test? I could design an adaptive test that charts growth, in a summer, that would be so much more valid than what they are giving us. I just don't get why there is so much incompetence in this regard.

MAPsucks said...

The $64K question: is there something better out there? After months of research and disclosure, I believe there are numerous alternatives. The crime is that no one knows, because we did not ask. We did not compare (DeBarros' lame analysis included).

Consider the State of Delaware; they spent a year and hundreds of thousands of dollars soliciting and selecting NWEA, then (when called on it) restarted and picked a superior product. NWEA was unable to use their inside connection through NWEA Director/School Superintendent Joe Wise to subvert that procurement. Our district was much easier to bamboozle. Just appeal to the vanity of a certain former supt and...

But, hey, I'm trying to put all that behind me. Breathe...


Union and Proud said...

I care about a child's stellar performance. I don't need MAP to tell me when a child is quick to grasp the fifth grade standards and can learn them along with the sixth and seventh grade standards during math. I also can see when a child is passionate about learning math and doesn't just want to learn the basic skills at each grade level but wants something more challenging. I give them Math Olympiad problems and Mensa brain teasers. I don't need MAP to tell me, because the child, during math, tells me. They finish the work immediately, they go on to Advanced work, they go on to the Enrichment beyond Advanced. I don't need a $5 million test to tell me. I also don't need a $5 million test to tell me about the boy who can't add, can't subtract, doesn't know division from multiplication. He also gives himself away immediately. There is no quiet, shy child who blends in and goes unnoticed like the parent in the earlier post blogged. Where would she get that idea? I am aware of the abilities of all of the children in my class.

Why do we get treated as if we are morons? We are trained in graduate school in classroom assessment and use it daily. What kind of teacher would I be if I taught blindly until December to find out how my students performed on MAP? Are there really teachers who would wait four months, not assessing and not aware of where their students are, until they take the MAP? What teachers are these bloggers referring to who would teach blind until they get the MAP in December and then change their teaching? How dumb is that?

No, the sole purpose of this Bill Gates test is to try to discredit our profession. And do you wonder why Bill Gates is out to get public school teachers? Because we, like those in Wisconsin, are among the last of the union workers that he can't touch. The private sector has done away with private-sector unions (Bill Gates doesn't allow unions), and now they are going after the public-sector union jobs.
The MAP is ENTIRELY created to evaluate teachers. The reason it is such a poorly designed test is that it is not intended to assess students but to assess teachers.

Replace MAP with a better test said...

Well, this all makes sense, then. I kept wondering why NWEA would have such bizarre questions on their math and reading tests. I, too, have seen the test. We are actually not allowed to disclose the questions we saw. I have seen other bloggers posting the questions, but I do believe they weren't supposed to. So I can just say, I agree with all of these other posts: I have seen the reading and math MAP, and the questions are indeed poorly written. In fact, I counted 3 math problems that were actually incorrect and 1 reading problem that had 2 answers that would have both been right. Who do you think comes up with these problems, anyway?

Maureen said...

none1111 said:

Perhaps a bigger problem is the lack of alignment between state standards, what is actually taught in the classrooms and the assessment expectations ...What year (or semester) are the various grammatical or poetic structures taught? What order are these presented? There is no definitive natural order for many of the topics.

I see this as a core problem with the MAP. I am by no means an expert, but it seems to me that any test that might be used to evaluate teaching quality simply must be aligned with what those teachers are being required to teach. Personally, I like the way the MAP adapts to students who exceed grade level expectations, but I don't see how a test that does so can be used to evaluate teachers, who cannot be expected to teach to standards several grades above grade level. Even if they are differentiating instruction for advanced learners, there is no reason to expect them to (for example) specifically teach iambic pentameter to one 4th grader (assuming that is one of the 9th grade standards).

Not a fan said...

First, it’s a cost vs. benefit issue – do the costs ($5M?, computers, lost library time, lost class time, proctor time, etc.) justify the benefits (more timely assessment feedback, and ??). Then it’s a question of how the results are being used or misused.

I strongly object to the results being used to grade teachers and schools, especially when the test is not aligned with WA state standards. It’s simply measuring exposure to grade level topics, but not necessarily topics that are taught in SPS.

And when a 5th grader tests at a 9th grade level, it means that they are scoring as well as half of the 9th graders who also took the test, not that they can handle 9th grade material (makes one wonder about the knowledge of an average 9th grader).

We had a teacher that was attempting to prep the class for the test. The teacher must have taken the test, because he/she had some preconceived ideas about what they should know for the test (topics that weren’t even part of this year’s curriculum). They spent class time on random worksheets, rather than the prescribed curriculum, with the end result being confusion and falling further behind in class work. All for the MAP test.

I haven’t yet seen the benefits that justify its many costs.

"Average" to "Strong" Teacher said...

At our school not only are our MAP scores used to evaluate us but every teacher's MAP scores are printed for the entire staff to see. So, now I know that the other fourth grade teacher has "poor growth" and that the second grade teacher has "strong growth". The bizarre thing is, the fourth grade teacher is so much more passionate about teaching, designs much more interesting projects, has a higher pass rate on the MSP, has more experience, is always there after hours, obviously cares very much about her class, and the students and parents LOVE her. You see how happy the students are going to her class and being in her class. The work she puts up in the hallway is always wonderful. The second grade teacher is never there, always on sick leave, always on the phone to her mother when I go into her room or on her computer checking her Facebook, teaches the class from behind her desk, never monitors their work, never puts their work up in the hallway. How is it she is being rated a "strong" teacher and the fourth grade teacher is being rated "poor"? I, if you are wondering, am rated "average" in Reading and "strong" in math. Yes, we are being rated by MAP, and yes, that is the sole purpose of MAP. Now, I also like the idea of an adaptive test and actually started to like MAP until the day I went to my box in the school office to find that our principal had published all of our test scores and ratings as a teacher. How frustrating.

A.T.S. Teacher said...

I think I should add that my son is in the fourth grade teacher's class and my daughter in the second grade teacher's class. I hate MAP because my son is having the best year ever with this teacher and yet she scores "poor growth", and my daughter is having the worst year ever (I have seen her reading and math skills drop this year) and yet her teacher scores "strong" growth. It's like the MAP is promoting poor teaching. I can only speak for the happiness of my children. My son is happy with the teacher who has "poor growth" and my daughter I have to drag, crying, to school every morning to the "strong growth" teacher. Is this the trend: out with the passionate, creative, spontaneous, innovative, fun teachers, in with the teachers who do math prep all day?

MAPsucks said...

Just to clarify. NWEA did go through an exercise to align MAP 2-6 with WA EALRs, but MAP does not necessarily align with what your child has been taught in any particular order the previous few months. The questions are drawn from a bank of questions written by teachers all over the US. REA used that as a selling point: "see, we don't even have to use our (considerable) in-house talent to lift a finger. We're gonna let NWEA pick the questions."

and Union and Proud, if by some chance you were referring to me, know that I totally agree with you. NWEA was crammed down our throats based on a chimera that with a RIT score each teacher will instantaneously have the holy grail to each child's intellectual promise. They need only use that RIT score and pull out their handy dandy Descartes and differentiate that lesson plan for every child, or small group. Except that the margin for error in results is so broad (missed breakfast, distracted by a bug on the windowsill, stayed up too late playing the DS) as to risk being completely off the mark. A teacher's day to day observation is superior, by far.

hschinske said...

And I also question a test that places my child (who is in third grade) at a high-school level.

The level of MAP she would have taken is MAP 2-5. Quite apart from the actual accuracy of the test, it doesn't really test at a high-school level, despite having RIT scores on the same scale as the 6+. As with almost any test, the grade-equivalent scores are the least meaningful. That's not to say that it's impossible for a third-grader to be reading or doing math at a high-school level, only that the 2-5 MAP couldn't really show it.

Out-of-level testing can be a great tool, but you need decent tests and you need decent ways to interpret them.

Helen Schinske

Anonymous said...

If NWEA limited/adjusted the test questions for Seattle in order to align with WA standards, wouldn't that negate the value of the norm data?

My understanding of "alignment" is that NWEA correlates the RIT score with pass rates on the State test. RIT scores can then be used to identify students at risk of not passing the State test.

The information about posting classroom MAP scores for each teacher is disturbing. How was growth calculated? Based on the last year and a half of data, or just the growth from Fall to Winter for this year? And are individual student names and scores posted?

Really not a fan

Maureen said...

Thank you for posting ATS Teacher!

none1111 said...

Looks like another one of my posts from earlier today was flagged as spam. Is this happening to others? I wonder what's triggering it, because it looks like it's happening automatically. Maybe it will reappear when Mel or Charlie find it.

none1111 said...

Emily's Mom said: And I also question a test that places my child (who is in third grade) at a high-school level. I asked Emily questions from my high-school algebra text after I was told placed at 11th grade in math and she was clueless.

The problem you're exposing here is not the test itself, but the invalid interpretation. Your daughter's test score did not place her at an 11th grade math level. A RIT score of 240 in math merely says she scored similarly to typical 11th grade students, not that she has mastered the material in grades 4-10. And just perusing the NWEA site, I found this quote next to the RIT #s: "These data should be used as one of many data points for instructional decisions rather than as the only single placement guide," which is only common sense!

Presuming this score wasn't an anomaly and her scores are reasonably consistent, what you and her teacher should be taking from that score is simply the fact that your daughter is very advanced in math for her age and should probably be getting some kind of differentiated material and instruction. Also, if she's not in an advanced learning program, that's a good indicator that she should be. Unless you have an absolutely stellar teacher, they probably knew that she's strong in math, but without an adaptive test like this it's very unlikely that they had any clue just how advanced she really is.

Something else: test results like this help prevent teachers from hiding this kind of student from advanced learning programs because of personal philosophical issues, which is a load of BS. Also, I've heard of low-achieving classrooms (and entire buildings) in past years that wanted to keep their high achievers, so they didn't refer ANYONE for advanced learning tests or programs. Having a norm-referenced test that ALL kids take cuts out this kind of misbehavior.


I'm not sure why other parents (none1111) so strongly are supporting MAP

Why on earth would you say I'm "strongly supporting MAP"?!?! Just because I'm taking a balanced view and not parroting the "it's totally worthless and could never be useful for anything" mantra? I've written above that I am NOT a big supporter, and I've written about the aspects of MAP that I don't like at all. I'm just sick of people constantly spouting off completely one-sided arguments without even attempting to consider any aspects other than the ones that support their own arguments.

and this: unless its pride (they want their child to be a mensa genius)?

Yeah, here we go with the condescending stuff again. My earlier post may eventually reappear, but like Po2's comments above, this comment is arrogant and insulting. Whether my kid is struggling to keep up or a "mensa genius", I want the best and most appropriate instruction possible. Not something that's way beyond or way behind their abilities. We should ALL be striving for this.

none1111 said...

Here's another issue to consider. I know it's been mentioned elsewhere, but not so much here.

Regardless of the merits (or not) of the MAP, if teachers don't like it, and are either afraid of it or actively fighting against it, that's demoralizing and likely sucking energy out of them and their classrooms.

I don't know how much of the poor morale I'm hearing about is due to use of the test itself vs. the push for its use in teacher evaluation, but the net effect is real. I've heard from a number of teachers in different schools, that morale is down, and that's really not good for a classroom. It's not just the MAP, but that's a piece of the frustration.

ATS Teacher said...

"How was growth calculated? Based on the last year and a half of data, or just the growth from Fall to Winter for this year? And are individual student names and scores posted?"

Growth was calculated from Fall to Winter, which is only 3 months. No, individual students' scores were not posted, just each teacher's MAP growth from Fall to Winter.

Having our class MAP scores posted is definitely a major reason I feel demoralized. I can't speak for other teachers. But the negative attention nationally that we are getting, how we are being portrayed in the press, is also a major reason.

Last night I went to my niece's middle school jazz concert and the music teacher was so polished, and the music he got out of the kids just phenomenal. I thought, if only the nation could see how professional he is, the majority of teachers are, wouldn't they feel awful about all of this mud-slinging?

Trying to get rid of our seniority in the State Senate, trying to tie our pay to test scores, claiming that older teachers need to retire, trying to claim that the drop-out rate in low-income schools is entirely because of poor teachers. I've taught in high-income schools and had almost 100 percent pass on the WASL, and taught in low-income schools where only 60 percent passed. My teaching wasn't any worse in the low-income schools. In fact, I worked harder and was more innovative in the low-income school.

I would accept MAP and even find it useful if it were not used to evaluate my teaching, not printed for the entire school to see, not put in my permanent record.

BTW, thank you all for posting. I feel so much better this morning reading your posts. This MAP issue has been really bothering me and it helps to "talk" it out with all of you. It also helps to hear the other side--those who like MAP. It keeps me balanced.

Union and Proud said...

Map Sucks,

I was referring to you, but not negatively. I think your posts are brilliant and really appreciate the information you are posting. I don't know how it is so many are so informed on this blog, but I learn so much from it. I actually hooted when I read your post and agreed: SPS doesn't care about a child's stellar performance. I'm glad you posted that, so that parents can realize that MAP isn't about recognizing their children and meeting the bright children's needs. It is all about evaluating teachers. The district is still in the process of phasing out Spectrum, after all. How many Spectrum teachers would $5 million buy?

Anonymous said...

Union and Proud said The district is still in the process of phasing out Spectrum, after-all.

Is ALO the new Spectrum?

Or is APP morphing into Spectrum?

What is the District's vision for Advanced Learning?

Mom of 2

SeattleSped said...

Thank you, ATS Teacher! What does the posting of teacher scores bode for special needs children in inclusive settings? Many can succeed given the right supports and specially-designed instruction. But will teachers dare give them a chance if it may mean getting a "Poor" growth measurement posted for all to see? I would bet you a million bucks some of our "poorest" teachers are, in fact, the greatest. They have been identified by leadership as best able to teach to ALL learners, and so have some disabled children in their classrooms. Those I know go the extra mile, collaborating with the special education teacher and aides, truly differentiating instruction (before NWEA made it fashionable) and advocating for their students. They make the "wunderkind" TFA interns look like the test prep instructors they are.

Now will my child be pushed out of classrooms? Because teacher evaluations are tied to a score on a crappy test? Will discrimination take on yet another dimension?

Lori said...

ATS Teacher, thank you for sharing this information. I am very concerned about how your school (and presumably others) are using this so-called data. They are actually drawing conclusions about teachers' effectiveness based on data from 30 or so kids that have been under their tutelage for 3.5 months?! Even if MAP were the end-all, be-all most awesome test for teacher effectiveness, I can't see how anyone could draw valid conclusions in this situation!

I'm curious how the results are actually shown. Is it some sort of percent, like the change in average score relative to baseline? Do they post just one number for each teacher, or does the data point come with a standard deviation or confidence interval to assist with interpretation? And, is the result compared to a national reference point, a local reference point, or is each teacher's "expected growth" determined based on the individual class roster that teacher has so that "measured growth" is thus compared to an individualized "expected growth"? As you can see, I have a lot of questions about the methodology here.

I think teachers who want to fight this kind of nonsense need to approach it from multiple angles. Yes it's important to point out when subjective measures of a teacher's performance don't match the MAP results, but it's also important for you to ask questions about the methods and point out potentially fatal flaws, which adds an objective angle to your complaints as well.
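Lori's sample-size objection can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not NWEA's published figures (a per-student standard error of roughly 3 RIT points per sitting is a commonly cited ballpark; check NWEA's technical reports for the real values). Even counting measurement error alone, and ignoring real student-to-student variation, the uncertainty on a 30-student class-average growth score is a sizable fraction of the few RIT points of growth expected between Fall and Winter:

```python
import math

# Assumed, for illustration only: standard error of measurement for one
# student on a single MAP sitting, in RIT points.
sem_single_test = 3.0

# A Fall-to-Winter growth score is the difference of two test scores,
# so its measurement error is sqrt(2) times larger.
sem_growth = math.sqrt(2) * sem_single_test

class_size = 30

# Standard error of the class-average growth, from measurement error alone.
se_class_mean = sem_growth / math.sqrt(class_size)

# Approximate 95% confidence half-width on the class-average growth.
half_width = 1.96 * se_class_mean
print(f"95% half-width: about ±{half_width:.1f} RIT points")
```

Under these assumptions the class average is only known to within about ±1.5 RIT points, before accounting for genuine differences among students, absences, or retest effects, which is exactly why a single Fall-to-Winter number per teacher says very little.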

Anonymous said...

Here's some info on NWEA growth norms:

http://www.nwea.org/support/article/543/growth-measure

http://community.nwea.org/node/228

FYI

Bird said...

...actually started to like MAP until the day I went to my box in the school office to find that our principal had published all of our test scores and ratings as a teacher.

If you want to end the use of the MAP in teacher evaluations, I can't think of a quicker way to get it done than to publish the data, not just to other staff, but to parents as well.

My own experience, so far, has been that my kid's MAP scores have not at all been reflective of the quality of the teacher or the amount of academic growth I've seen in my child.

Publish the data and a lot of other parents will notice the same. Keep the data secret and it will do its damage silently without alerting the public to how whacko the whole program is.

Jan said...

Melissa: this is one of the best MAP discussions on this blog (and for once, I wasn't the highjacker-- yay!). Part of what I find so interesting is that there are people here who see good things in MAP, and whose views I hadn't heard -- or noticed -- before (thanks none1111, Map Useful, SPS Alumna and others). Could you tag it for MAP along with the tags it already has -- so people can find the discussion?

I am not a MAP fan (cost, time, and failure to use test for intended purposes), but am fairly ignorant, as my child has never taken it. Now that we have two years under our belts, I think it is time for the SSD to have a full, informed discussion, with ALL stakeholders (teachers, parents, etc.) about what we are doing, what we are getting from it, how much it costs, and -- if we aren't happy -- what other alternatives there are (this last being something that was glaringly NOT done when we signed up for this thing).

MAPsucks said...

But Anonymous, Brad "17% w/ 83% going to hell in a handbasket" Bernatek came up with his own "homegrown" way to measure growth. Gee, do you think it's the genuine article?

Don't ask them what it is. It's like the Colonel's secret recipe.

none1111 said...

Publish the data and a lot of other parents will notice the same. Keep the data secret and it will do its damage silently without alerting the public to how whacko the whole program is.

Wow, now there's a different take on it. My thought after reading about scores being posted publicly (what school is this?!) is that it's appalling. This is Seattle, not Los Angeles! Didn't we learn anything from their fiasco? Still mulling your comment though.


Lori said: They are actually drawing conclusions about teachers' effectiveness based on data from 30 or so kids that have been under their tutelage for 3.5 months?!

I totally agree with this. Individual student scores are going to have a great deal of error, and even a full classroom can have a significant margin of error. You need to have quite a bit of data before you can even pretend to know how to use it. My personal opinion is that the district won't really even have a baseline to work from until at least the end of this school year, and that won't even be especially robust yet.

But as with any large-scale system, patterns will emerge over time. Eventually, it will become hard to hide the fact that certain elementary teachers (virtually) ignore math because they prefer to teach writing or social studies, or vice-versa. Teachers are human, and they have preferences. And anyone who's spent time in their kids' schools for a while knows which teachers do a great job in which subjects, and which are not so good (or worse) in other subjects. But because it's all hearsay and innuendo, it rarely gets more than a little lip service in practice. And that's not fair to the kids, some of whom may take years to recover (if ever) if they get a couple of bad years in a row.

I'm also fully aware of the dangers that tools like this pose when put in the hands of bullies or incompetents. And I'm very sympathetic to teachers and teaching positions where there are extenuating circumstances. I can think of many ways the district could easily abuse this data (not going to write about them and give them any ideas!), and that is truly a big worry. But here's something else to consider: as far as I can tell, the district already abuses their power. Will MAP data really make it that much worse? MAP data didn't affect the ability of the district to bring in TfAers, did it? Or flinging principals around like rag dolls playing musical chairs. At least having some kind of uniform district-wide norm-referenced test scores might provide some benefit, where many of the district's maneuvers don't seem to provide any benefit whatsoever as far as I can tell.

none1111 said...

Again, most of the "non-negative" things I'm bringing up aren't specific to MAP, but merely about having some kind of district-wide, adaptive, norm-referenced assessment. I agree with many here that the district's choice of MAP appears to have been a sweetheart deal, and that sucks.

I did dig into some of the research early last year, and saw 2 other tests (I don't remember the names anymore) that looked like they might have been better for SPS's stated purposes.

But here's a question to ask yourselves: would it matter? How many of you would welcome ANY (or reject ALL) computer-based adaptive testing? The feeling I get from so many people here is (cue robot voice:) "standardized..tests..bad -- only..teachers..know..what..is..good". But there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession.

How can that be managed in a fair way? In a way that's fair to our kids as much as it's fair to the teachers? I don't claim to have any pat answers, but I know there is a lot of frustration across the country about this right now. If we bury our collective heads in the sand and don't at least recognize the fact that there are real issues, we run the risk of the (very powerful) reform crowd pulling a lot of fence-sitters to their camp. I don't want to see more crap like TfA here in Seattle.

Too much writing by me over the past couple days, back to work! But I'll still keep reading.

none1111 said...

It looks like more of my comments disappeared. Sorry, but Mel, can you take a look? Are you unflagging them, or are they coming back by themselves?!

Melissa Westbrook said...

"The district is still in the process of phasing out Spectrum, after-all.'

Union, why do you say this?

When someone says their post disappeared, I try to get to the spam holder as fast as I can. We have no control over Blogger's filter (we are considering moving to another more flexible platform).

seattle citizen said...

None1111 wrote: "The feeling I get from so many people here is...'standardized..tests..bad -- only..teachers..know..what..is..good'. But there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession. How can that be managed in a fair way?"

This has less to do with tests than with the purpose of education: Is it to put in front of students a systematic process of standards delivery, or to have general expectations of knowledge to be passed on, with the caveat that educators will, perforce, be varying from the script?

Standardized tests (if I can use that general term) seem to suggest that there is a set list of things that MUST be taught at each...age? (I say age, because schools operate on a grade level, rather than a college system of readiness) The standardized test only tests (if it does this, even) certain identified...standards. I guess my question about education is whether "the standards" are all it's about? Is there more that goes on in the classroom? Is there fluidity? Are things taught extemporaneously that aren't tested on standardized tests? Should there be? Should all students be at the same place at the same time on the same standards?

The question isn't the test, so much as what drives it: Do we want schools to be only about "the standards"? Do we want educators to only teach "the standards"?

I'm not saying there shouldn't be standards, but that there's more going on and the tests seem always to be the focus: HSPE, MAP...we hardly ever hear about incredible teaching and learning that isn't on these things (indeed, we mostly hear only hugely generalized "scores" that are used, lately, to slam educators: all teachers' MAP scores posted...eek!)
So standardized tests, I'd posit, kill the spirit of education, the fluidity and passion: They turn it into a formula, and then measure educators and students based merely on that simplistic formula. Show me where this isn't so, and I'll take a look, but even MAP, which was sold as a formative device, is now apparently a teacher-tester. Ack.

anonymous said...

Seattle Citizen, you didn't answer none1111's question, and I think it is a very valid question to ask.

"But there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession. How can that be managed in a fair way? In a way that's fair to our kids as much as it's fair to the teachers? I know there is a lot of frustration across the country about this right now. If we bury our collective heads in the sand and don't at least recognize the fact that there are real issues, we run the risk of the (very powerful) reform crowd pulling a lot of fence-sitters to their camp"

Would you care to answer this Seattle Citizen?

Anonymous said...

I agree that MAP scores should not be used to evaluate teachers. I also agree that MAP costs more than we can afford to spend. But I think that MSP is a bigger waste of money. And I do think that some standardized testing is helpful.

The teachers posting here are very aware of the academic profiles of their students. But my experience was that 6 years of mostly excellent teachers did not pick up on my son’s learning disabilities. They said ‘he is a boy’, ‘he is lazy’, ‘he’s just not that bright’. His low WASL scores & classroom assessments backed that up, because he has learning disabilities around writing. He was placed in remedial reading groups and told not to try to read books that he would bring to school, as they were too far above his reading level based on the teacher’s evaluation of his handwritten assessments. When he started taking MAP they were quite shocked at his high scores. We had the whole battery of psych-ed testing done, including multiple achievement & IQ tests. The MAP results are very consistent with those results. (So my one objectively measured data point does not show the MAP to be a loopy, inaccurate measure.) All of the other tests he has been given, including WASL, MSP, CBAs, etc., are not consistent with those achievement tests.

I know a number of other children with the same experience of having learning disabilities missed for years by well intentioned classroom teachers.

So I would be happy if I thought all teachers would be able to correctly assess my child in the classroom, but it has not been my experience.

-I really wanted to trust the teachers.

Anonymous said...

It seems that a reasonable use of the MAP test would be as a once a year test at the beginning of the year.

This would substantially cut down the costs, and the lost library/class time, while still giving parents and teachers a timely assessment.

So MAP in the Fall, MSP in the Spring, and in-class assessments throughout the year. Shouldn't that be enough?

Another mom

SeattleSped said...

Trusting Anonymouse,

I can relate to your experience. However, I would say for every child missed, there are two or more found. Just this year another student in my child's school was identified (at fifth grade) and placed in the inclusion program.

Some disabilities are very hard to discern. For example, it was only through outside psychological testing that I was able to learn that my special-needs child struggled with inferential reasoning. To a teacher, it would just seem that his comprehension was lousy, but in fact it's more complex than that. Fortunately, through his IEP I've been able to get him help.

NWEA itself does not market MAP as a method to identify special learners. The district's own "assessment expert" said MAP should not be used for this in isolation, and that two other data points should be collected before a decision (such as failing the Spectrum cut-off) is made.

Charlie Mas said...

I think one of the most common mis-uses of the MAP results is to think that they supply answers.

They don't.

The MAP results are intended to prompt questions.

The assessment suggests...

a) that the student is working below grade level - check that.

b) that the student is working above grade level - check that.

c) that the class as a whole doesn't understand fractions as well as they should - check that.

The results from the assessment are supposed to be indicators to be investigated, not gospel facts to be blindly accepted.

Anyone who suggests that the MAP results prove anything misunderstands the assessment and the results.

ATS Teacher said...

Yes, MAP in the Fall only would be perfect. This way, it would not be used to assess teachers, and it would give those parents who have bright children with learning disabilities a test to prove that their children are indeed bright but just don't test well on CBAs and the MSP. The MAP in the Fall would also help teachers identify those students who need extra help immediately. This seems like the perfect solution. Does anyone posting on here have a problem with MAP just in the Fall? If so, please explain.

seattle citizen said...

peon, I would reply, as I just did, but googleblog ate my response. I have to remember to copy it before I hit "enter"
Sigh.

seattle citizen said...

I'll try again:

The "real issues" none1111 suggest we address to keep people from jumping off the fence onto the "reformers" side are issues the reformers themselves have propper up, using various propaganda techniques. I have no interested in granting those issues credence.

There is already a way to evaluate educators. There's no need to include standardized tests. Just because the reformers have said it's a big, big issue doesn't make it so.

Rather than address the "real issues" raised by the reformers, I would rather up the level of truth-telling, up the level of daylighting the reformers' agendas. This is already working: people around the nation are calling BS on many of the reformers' "issues," and this is a good thing.

During these tough economic times, the reformers are using more and more anti-union and anti-teacher rhetoric and hyperbole to try to sway the fence sitters. I have no interest in granting them validity; rather, I would like to see citizens stand together even closer, united against the corporate takeover of public schools.

NWEA, TFA, OSC, and A4E are working hard to break public education by demonizing educators. People are waking up to this, and it is a good thing.

seattle citizen said...

On this point:
"there is no doubt whatsoever that there is a HUGE range of expectations (and abilities) among teachers, as with any other profession."

Thank goodness for a range of abilities! Can this range be identified by a standardized test?

As to the range of expectations, I'm not sure whose expectations this refers to: Students', parent/guardians', citizens', the board's, educators', business's....

Does a standardized test identify success in meeting all those expectations in students or educators? Or parent/guardians? Or citizens?

hschinske said...

Does anyone posting on here have a problem with MAP just in the Fall? If so, please explain.

To me, that wouldn't save anything like enough money to be worth tossing out one of the test's supposed great advantages: the ability to monitor progress (including above- and below-level progress) during the year, so that you're comparing students to their own work and judging their own progress, rather than the nonsensical business of comparing this year's fourth-grade class to last year's.

If it's not doing the job properly (and again, I'm getting the sense it isn't), toss it completely. It makes no sense to give it once a year and get even less usable data. For one thing, that form of administration would mask any problems with the test and we'd be stuck with the dang thing much longer.

The only reason I can see for cutting MAP administrations down to once a year is if it is part of a planned political process for eventual elimination: in short, a face-saving measure. But we'd be paying a heck of a lot, both in dollars and kid/teacher hours, for that supposed savings in "face". We heave-ho'ed the ITBS in one year without a blink. Why not the MAP?

Personally I think the whole licensing fee structure stinks to high heaven. For the amount we've paid NWEA already, we should just OWN the test and use it as and how we like, just as we do with typing tutor software or whatever.

Helen Schinske

SeattleSped said...

Argh, sorry for typos. Good thing I'm not being tested on this...

Anonymous said...

To me, that wouldn't save anything like enough money to be worth tossing out one of the test's supposed great advantages: the ability to monitor progress (including above- and below-level progress) during the year, so that you're comparing students to their own work and judging their own progress, rather than the nonsensical business of comparing this year's fourth-grade class to last year's.

There is then an assumption that teachers target instruction for each child, based on MAP results, to ensure measurable growth on the MAP test. The MAP content would then be driving instructional content.

Is that the intent?

Yet, there is also a push to standardize content and delivery, which seems to limit the ability to differentiate.

So which way is it?

I disagree that the once-a-year test would eliminate its use to measure "progress." You just measure progress on a year-to-year basis, which seems more reliable than measures only two months apart (Fall to Winter).

To me, the MAP helps identify those needing extra support, either above or below level, but the District needs to decide what that support involves - out of class support, in class groupings, gifted classes, etc.

Signed - Less testing, more learning

twice gifted said...

We had a similar experience to "I really wanted to trust the teachers." My kid is twice exceptional, with a very high IQ and dyslexia. Private achievement testing was very much in line with MAP scores, except for one instance where he bombed the MAP (a 40-percentile-point drop in the winter last year).

You know what test is loopy, though? The CogAT. That one has been off by 20-30 percentile points twice now.

AParent said...

The real problem I have with MAP is that it tests breadth, not depth, of understanding. What I find is happening (in my son's and daughter's classes) is that teachers are indeed now teaching to the MAP (because they are evaluated on it) and are covering a huge breadth of material with no depth. My second-grade daughter is doing long division but has no real concept of what it is. That bothers me, and I know it will hurt our children's true understanding of math.