MAP

By request, a thread on MAP.

Heard that when everyone received their e-mail about MAP today, so many people logged on to The Source that it crashed.

Comments

SE Elementary Parent said…
The scores were updated and, as stated, the percentiles were re-calibrated; however, the Fall RIT scores changed too. My 2nd grader's went down by 20 points and now definitely looks like an anomaly compared to her other scores. From what we are hearing, teachers don't have confidence in this score either. I find this rather shocking. Re-calibrating based on new norms is one thing; I see no reason that the RIT score itself should change, however.

This makes MAP look like a joke. A really expensive joke.
Anonymous said…
(repost from earlier thread)

From a generic NWEA parent letter:

The most recent [norms] study occurred in July of this year [2011] and it benefits from a larger, more representative sample that reflects the diverse demographics and characteristics of students and schools in the United States. The larger, more representative sample allows us to make more informed estimates of student growth throughout the school year that we can use to adjust instruction in the classroom.

With the improved methodology behind the 2011 study, we are seeing some differences from the previous study, particularly in how student percentile rankings and growth projections are distributed. For the majority of students, the changes from the new norms have resulted in only minor differences. However, we are seeing more significant changes for certain groups of students, particularly students in the 1st and 2nd grade, but also for some students who are performing much higher or lower than others.

Your child’s learning, or demonstration of what they have learned, has obviously not changed as a result of the updated norms. Rather, the group of students that they are being compared with in the norm study has changed, and this will subsequently impact the average growth projections and percentile rankings relative to this new group of students. While these differences will be more apparent for some students, they represent significant improvements in how we are measuring and evaluating the academic growth of your child.


-parent
Anonymous said…
Ahhhh...statistics...technology...widgets....

Good thing the private sector is here to help us poor public school folks understand just what is going on with our kids.

Oompah
Anonymous said…
(granted I have little to no recollection of stats 101) The SPS and NWEA explanations make absolutely no sense to me. WHY the recalibration - what was compared to what prior, and what is being compared to what now? "Larger, more representative sample"? Does that mean in the past we were comparing apples to apples for percentiles (say, only SPS kids to SPS kids?), but now we've added in oranges too, just for kicks (SPS kids to ALL kids taking the test)?

so it seems this is NWEA dictated - all school districts that use MAP have changed percentiles?

-Diane
MAPsucks said…
Diane,

Thanks to NWEA's slimy sales strategy (flattering and kissing up to district superintendents and tech admins, offering them positions on NWEA's own board of directors) and thereby getting in through the back door, they have managed to a) avoid actually having to prove the validity of their product, and b) grab a monstrous market share without competitive evaluations.
Anonymous said…
This comment has been removed by a blog administrator.
Anonymous said…
So was MAP sold to lower-SES districts first, to quantify teacher evals? Maybe with government funding? Then more prosperous districts took it on and raised the mean (median, mode)...whatever it's called?

I'm trying to figure this out.

In any case, it is all about the marketing.
n...
Anonymous said…
In other words, we're going to keep adding more and more subjects to the overall "sample population" so that performance is always a moving target - we're never going to really be comparing anything meaningful and consistent, but we thought that you would appreciate our ongoing "improvement" of our product.

Thanks for your continued support!
NWEA
(well, Oompah actually)
Anonymous said…
This comment has been removed by a blog administrator.
Anonymous said…
My 2nd grader--who is probably a more proficient reader than me (certainly more prolific)--went from 99th percentile to 70th percentile in reading. Huh? And my 6th grader--who was in special ed from age 3 until this past June--is suddenly 92nd percentile in reading and math (he has always scored 90+ percentile in math--but his previous score in reading was 46th percentile). So, this cements my belief that the MAP=crap!!

WAmama2
whitney said…
Now, imagine you are your son's teacher, and your principal's evaluation of you is partially based on such faulty data. Have I just swung from being a fantastic teacher yesterday to a miserable teacher today because of this 20-point downward shift in your son's MAP scores?

Has your son plummeted in his abilities as this swing shows? And what does this do to his confidence as a student? The danger of test scores...

Parents -- you should be demanding better of your district. You should be demanding that the money obviously wasted on MAP, which yields such arbitrary scores, be put to better use - for instance, hiring an elementary school counselor.

But NWEA continues to collect $$$ from you.

The MAP emperor has no clothes, and I can only hope this situation opens everyone's eyes to the dangers of such "data for hire."

This should cause an absolute revolt among parents. How can you ever trust this company again?
Anonymous said…
My son's Fall RITs changed today, thanks to the recalibration. His winter scores were posted as well, but how am I to have any confidence in them when his fall scores suddenly changed almost 4 months later?

- D's mom
SP said…
A huge vote of NO confidence in the REA's constant readjustments (remember the "ready for 4-year college" rate: 17%, then 43%, and then 66%? And still they are not even using actual college entrance requirements, such as taking the required SAT/ACT test, or even including SPS HS graduation requirements, i.e. a 2.0 GPA for core classes - both of these would lower the current figure dramatically!).

My middle school/high school kid's MAP scores all went down (math & reading both) by an average of 3 points - a fairly consistent drop. It would be interesting to hear from as many families as possible, as the REA's letter was characteristically vague on how many were actually affected and by how much. Do they not dare release this information?

What a colossal waste of our time and money to administer the MAP and chase this moving and questionable target every year!
Wsparent said…
The district seems to be saying that RIT scores should not change as a result of recalibration, but my son's RIT scores for prior tests are different on the pre-calibration and post-calibration pages. His percentiles also jumped all over the place. The only thing this tells me is that MAP is a huge waste of time and resources.
StepJ said…
Taxpayers of Seattle.

You are very generous to fund this expensive BETA test for NWEA.
Anonymous said…
My 2nd grader went from a 98 (re-calibrated from 99) in math for Spring to a 27 in Winter. I'm assuming there was trouble of some kind with the test. Any thoughts on how to clear something like this up?

criminy
NLM said…
My first grader also dropped precipitously from Spring 11 to Winter 12. The fall "normed" data remained virtually unchanged in math but was 20 pts lower in reading, and the winter scores? Holy cow, how do you go from 93 to 63 in reading in just three months?!? Something is definitely not right here. I'm not sure if this is an NWEA thing or an SPS thing, but my confidence in the data, any of it, is totally shot.
Anonymous said…
My 7th grader has a slight dip in Math, but 20 point drop in Reading.

Another Parent.
Anonymous said…
As a teacher whose students do extremely well on the MSP, which is a "show your thinking" kind of test, I see little - no, very little - value in the MAP scores. We could go on and on about what the MAP does not do and how there is almost no field-tested data to support it, but that would not be a prudent use of my time right now.

I will tell parents that, unfortunately, the MAP is being used as it was NOT intended to be used: as a gatekeeper and a high-stakes test for individual students. It is used as one of the measures to get your student tested for advanced learning.

The variability in MAP scores for my students is truly astonishing. If you are interested in high stakes testing, please read up on it. There is some very good research now on what makes a valid test and what makes an invalid test.

-teacher
Anonymous said…
As was mentioned in an earlier post, the original Board policy on assessments was much more detailed in the requirements for validity, reliability, precision, etc. The MAP test probably wouldn't have met the original criteria. The policy was "updated" prior to adopting MAP as an assessment tool.

Many reports of high variability seem to be coming from the K-2 parents, which makes one wonder if this is an appropriate assessment for this age group. How much of the variation is simply a matter of the child's age?

The advanced learning tests were given verbally (as a group) for this age group in years past.

What a mess.
Anonymous said…
My kindergartner's fall math percentile jumped 26 points, but her verbal percentile didn't change. Her winter math percentile was pretty much the same as the adjusted fall one, but her verbal percentile dropped by 25 points. I am so confused and baffled by all of this. How in the heck is this supposed to be meaningful when percentiles are jumping around so much? How can percentiles be used to determine Advanced Learning eligibility when they change constantly? The whole thing seems like a joke to me as well. Not impressed!

WS Mom
Anonymous said…
Also wanted to add how frustrated and annoyed I am that my child spent 10 valuable hours of classroom time in the first 3 months of the school year wasting her time on this crap (she hated it and told me so). She would get much more out of actually learning something during that time! Down with testing!

WS Mom
suep. said…
Clearly it's time to can the MAP.
15 Reasons Why the Seattle School District Should Shelve the MAP® Test—ASAP

Re: K-2 scores, former SPS MAP administrators (Brad Bernatek and Jessica DeBarros) told a group of us parents who met with them in 2010 that MAP is not considered appropriate for grades K-2, which is why some other districts opt not to use it for those grades.

Also, I heard that the 2008-09 winter MAP scores took a nosedive district-wide (because of a post-vacation slump) rendering the data invalid and unusable. When I asked a knowledgeable teacher who was responsible for administering MAP in his school why the district didn't simply cancel the winter MAP, he said it was because NWEA (the test vendor) wanted three data points for its own research purposes. In other words, our kids are merely data fodder for a test vendor. And we pay the vendor for this dubious privilege.

Lastly, remember how we ended up with the MAP test. Broad Resident Jessica DeBarros was hired by Broad-trained Supt. Goodloe-Johnson to "research" which test the district should purchase. Lo and behold, DeBarros' report determined that the best choice was the MAP test, sold by Northwest Evaluation Association -- on whose board Supt. Goodloe-Johnson sat. (She was later cited by the state auditor for her failure to disclose this affiliation -- an ethics violation -- and forced to step down from the NWEA board.)

MAP has been a costly boondoggle from the beginning. It's time for it to go.

In the meantime, OPT YOUR KIDS OUT.
Anonymous said…
Which kids can accurately and consistently:

divide 3/4 and get .75?

tell me how many maples are on the block when they're told 25% of the 16 trees on the block are maples.

calculate 12 - 16 and 16 - 12.

tell me how many students bombed the test if 3/4 of 32 passed the test.

tell me how much ham to prepare if they need to cook 64 three-egg omelets and 25% of them will need 2 ounces of ham.

2 cubed =

3*2 =

3 squared =

2*3 =

5-11 =

PEMDAS

3*4/4*4 = what percent?
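
(For anyone checking themselves, here's a quick worked pass through that list, sketched in Python. The answers are mine, not the original commenter's, and the last item is deliberately readable two ways.)

```python
print(3 / 4)                                 # 0.75
print(0.25 * 16)                             # 4 maples
print(12 - 16, 16 - 12)                      # -4 and 4
passed = 3 * 32 // 4                         # 24 of 32 passed...
print(32 - passed)                           # ...so 8 bombed
print(int(0.25 * 64) * 2)                    # 16 omelets x 2 oz = 32 oz of ham
print(2 ** 3, 3 * 2, 3 ** 2, 2 * 3, 5 - 11)  # 8, 6, 9, 6, -6
# PEMDAS: evaluated strictly left to right, 3*4/4*4 is 12.0;
# read as (3*4)/(4*4), it is 12/16 = 75%.
print(3 * 4 / 4 * 4, (3 * 4) / (4 * 4))      # 12.0 and 0.75
```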

We have tons and tons of data about our kids; little of it is readily available, and therefore little of it is useful.

I suppose we should dump another pot of money into policy picked by the credentially clueless and career first charlatans ...

ColorMeComplacent
Anonymous said…
Please pardon my rant...but how can all of this be so dang hard? Why miscue after miscue? These are educated people making these decisions. This is all really becoming quite a joke...literally. Are we on the show Punk'd? RIDICULOUS!!!!!

-very tired of it all
Anonymous said…
FWIW, I'm seeing at most a one percent change in past percentile results and no change at all on previous RIT scores for my two elementary kids. Their school did not administer MAP this past fall. Winter scores just posted.

Not much change
Anonymous said…
The devil is in the details, and there aren't many details in NWEA's "improved methodology," or in what "benefits from a larger, more representative sample that reflects the diverse demographics and characteristics of students and schools in the US" means exactly. What changed? Why did it change? How did it change? How much did it change? Improved methodology..... do give specifics. How is the data NOW an improvement in how NWEA measures and evaluates the academic growth of our kids? So there are changes this year. How about next year? Conclusions? Why is this data more reliable, and how and why should we find it useful? Why should we trust it?

HELLO, we are not talking about quantum statistics here, or the space of closed three-geometries.

Null
Kate Martin said…
And don't forget that NWEA is a "non-profit". Yuk.
Josh Hayes said…
Well, in a previous life I was a consulting statistician, so let me guess at what's happened here.

The MAP producers generated a distribution of test results from a limited sample of test-takers. This allows new test-takers to be "placed" on the existing distribution: their new score, had it been in the calibration sample, would have fallen at the XXth percentile (fill in whatever number you want).

What they seem to be claiming is that they now have a much larger sample of test scores, so they can generate a new distribution of expected test results. I'm okay with that, from a statistical perspective, but from the anecdotes posted here, this seems to make no sense -- if this is really what happened, we should see a directional bias: every test percentile should slide in the same direction, if they're going to slide. But that's not what I'm seeing here: some kids went dramatically down, and some went dramatically up. I suppose this could happen if there are considerable between-grade variabilities, but this strikes me as unlikely to explain the dramatic differences.
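
To make that concrete, here's a minimal sketch of the mechanics of norm-referenced percentiles (all means, SDs, and sample sizes are invented for illustration; this is not NWEA's actual data or procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical calibration samples drawn from the SAME distribution:
# a smaller "old" one and a much larger "new" one.
old_norms = rng.normal(loc=212, scale=12, size=2_000)
new_norms = rng.normal(loc=212, scale=12, size=50_000)

def percentile(score, norms):
    """Percent of the calibration sample scoring below `score`."""
    return 100.0 * np.mean(norms < score)

for rit in (195, 212, 230):
    print(rit, round(percentile(rit, old_norms), 1),
               round(percentile(rit, new_norms), 1))
```

If the underlying population really is the same, the two columns agree to within a point or two of sampling noise. Merely enlarging the sample shouldn't move anyone 20 or 30 percentile points; swings that big mean the new sample differs systematically from the old one.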

In short, I think they don't know what the hell they're doing. I'd love to get my hands on the original data, revised data, and collection protocols, so I could give you a real assessment.

WV says this whole thing is "tanter"-mount to fraud.
WS Parent said…
I just really don't get it.

My 2nd grader's RIT percentiles in K and 1st grade were in the high 90s in both reading and math. These test scores determined her eligibility to test for Spectrum/APP, etc. So now, after the winter test, her reading score stays the same, but her math plummets from a 99 to a 67?? 32 points? So, is she an "Advanced Learner" or not?? From the looks of this, is she now one of those kids who is advanced in one area but not the other? Or was she REALLY never advanced to begin with?

Our school (won't mention which specifically, but we are in West Seattle) did not take the fall test. Not to mention, they came back after Winter Break and immediately started testing. Is that REALLY the best time to be testing kids, after a 2+ week break?

This data is meaningless to me, but when it is being used to determine Spectrum/APP eligibility, this is very concerning.
NLM said…
Not only that, but my kid's RIT scores also changed. SPS/NWEA claim those were not supposed to change. Part of the dramatic slide in percentiles comes from a marked decline in the RIT. My kid's actual performance didn't change, even if the sample size and grading curve did. It looks like one giant cluster to me, and it seems to primarily be affecting the K-2 crowd.
SPSLeaks said…
Here is the NWEA Complete Norms study for 2008. Now, if someone with a login to the NWEA Reports site would kindly download the 2011 study and send it to spsleaks@gmail.com, I would gladly post that as well...

Julian
Anonymous said…
My 6th grader's scores dropped by 1 - 3 points. My first grader's scores remained unchanged for the most part. Both kids score in the 96th - 99th percentile. Neither kid took MAP testing in the Fall (neither school did the Fall MAP testing).

Jane
Anonymous said…
I just want to add my kids' results to the discussion. My second grader's reading score went from 98th percentile last spring to 78th yesterday. Math went from 93rd to 70th. His fall scores are not showing up for some reason. Anyway, that is a shocking fall. I would really like an explanation of how that could happen. How many new students were added to the comparative data to create that kind of fall? Why did this not happen to my older elementary school child? Her scores remain roughly the same.
-KS
Anonymous said…
Both my first and third grader did similarly to before, and we have been happier with this test than others, fwiw.

-not unhappy
Lori said…
Josh, help me understand why you think the MAP percentile changes would be unidirectional.

My thought is that if scores are normally distributed, the first sample would have a relatively short, wide bell-shaped curve (hopefully centered somewhere near the true population mean) due to its relatively smaller sample size, and the updated distribution would be taller and narrower (again, centered near the true mean). If so, then some originally reported low scores would likely be bumped up in the new sample, while some high scores would be bumped down.

In order for the changes to be unidirectional, you'd have to see the entire curve shifted to the left, with a completely different estimate of the population mean.
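
A quick way to check that intuition with invented numbers (assuming normal norms; scipy's norm.cdf gives the percentile directly):

```python
from scipy.stats import norm

# Old norms assumed N(212, 12); a re-norming that only shifts the
# estimated mean up to 218. All parameters are made up for illustration.
for score in (195, 212, 230):
    old_pct = 100 * norm.cdf(score, loc=212, scale=12)
    new_pct = 100 * norm.cdf(score, loc=218, scale=12)
    print(score, round(old_pct, 1), round(new_pct, 1))
```

Every score's percentile drops (roughly 7.8 to 2.8, 50 to 30.9, and 93.3 to 84.1 here): a pure mean shift moves everyone in the same direction. Mixed 20-point swings both up AND down require something messier than that.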

And if that's the case, then the whole thing has been a big scam. That is, the first sample (and heck, maybe the current sample) was not generally representative of American kids as a whole and should never have been used to determine placements, such as Advanced Learning.

Seriously, I'm thinking that the district should consider suing NWEA! If we "overtested" kids with the CogAT based on faulty data, that's an expense that the district should try to recoup. They sold us a product that was supposed to be nationally normed; 30-point swings in a recalibration indicate that it was not.
Anonymous said…
My K child's fall scores were originally 47%ile and 51%ile. In November, the rankings were revised to the 61%ile and the 62%ile. (I don't remember the reason for that change.) Her fall rankings have changed again to 75%ile and 84%ile. Three different rankings for the same performance. Maybe they'll keep revising her from average into advanced learner.


Crazy!
Anonymous said…
I really don't understand all the uproar. If the sample size is bigger (for instance, now Seattle is in the data pool), then you will have more data points. Just think about your kid's height percentile: if you only take men in Indonesia into consideration, you will be considered tall at 5'7", but that wouldn't even reach the average height of a Danish man, which is 5'11".
From talking to some classmates in my kid's school (not APP), my kid reported that more than half of them were above the 90th percentile, with some scores over the top for the grade. The MAP scores in our case have been consistently in the 99th percentile for math and didn't change at all with the new scores; in reading, the new scores are one percentile lower.
I wonder, if only Seattle students were taken into consideration, whether those percentiles would be even harder to reach.

Bell Curve
Anonymous said…
Just another data point -- one of my 2nd grader's scores from last year moved one point lower. Otherwise, his scores are the same. He's in the upper 90s. His school didn't take MAP in the fall.
-- just another anecdote
Sahila said…
Easy to fix this problem.... OPT OUT...

Won't make any negative difference to anyone, now that the test results have been shown to be completely unreliable...

Vote with your feet, people...
Anonymous said…
My recollection of events - it was Advanced Learning that wanted to use MAP for AL identification, despite some reservations expressed by assessment folks at SPS.
North Seattle Mom said…
All our Spectrum student's percentiles went down by 1-2%, so it's not a big deal to us personally, but it is ridiculous that MAP percentiles were used as Advanced Learning eligibility criteria for the CogAT when things are clearly swinging all over the place now.

I'd be interested in hearing the District's explanation for some RIT scores changing here, when their letter states, "This percentile update does not change your child's RIT score."
Anonymous said…
I was just comparing my kindergartener's winter MAP score with her second grade sister's winter MAP score from two years ago. Older sister: reading RIT of 191, 94th percentile; math RIT of 184, 82nd percentile. Younger sister: reading RIT of 169, 95th percentile; math RIT of 173, 96th percentile. So my older daughter has higher RITs but lower percentiles? That doesn't make sense to me. Unless the data is analyzed year by year, not using a sample across the years? I'm not putting this very eloquently, but I hope someone more fluent in statistics can explain this to me.
--monkeypuzzled
Anonymous said…
Oh, and just to clarify: I am comparing both girls' MAP score as kindergarteners, mid-year.
--monkeypuzzled
SPSLeaks said…
Ask and ye shall receive...

NWEA 2011 RIT Norms Study

Julian
Anonymous said…
Our school test administrator reports that last Friday was some kind of deadline for Winter appeals, etc. But the scores were only published yesterday.

Why do teachers get the test info only the day after the deadline? To begin using the info? To notice red flags, like a 70-point percentile drop from Spring to Winter, say? What if your kid's teacher did not look at the scores? What recourse does a parent have for what looks to be a serious anomaly?

"don't worry about it, see how the child does in Spring" doesn't cut it. This is part of a permanent record and who knows how it may be used in the future.

Any thoughts?
Anonymous said…
sorry, Anonymous above is

-Criminy
Anonymous said…
I just wonder if teachers really have time to analyze each child's test scores. I feel like my children's teachers already have a very good sense of where my child needs help without looking at a useless bunch of numbers. My daughter, who scores in the 98th-99th percentile, wants to take the test because she says she doesn't want to be the only one not doing it.

WWmom
Kmom said…
My K son scored very low in both categories. He can use a mouse, but he is challenged to sit still for very long and is still learning how to concentrate. I'm biased, like any parent, but he is a very bright kid. For the testing days, he told us he got to "play on the computer". I have no intention of sharing these scores with him. He is only 5. I understand the desire to set benchmarks for growth, advance programs, etc., but testing at this level seems to be a highly questionable use of time and money. I would sincerely like to understand the argument FOR computerized testing at this level, when something as basic as an ill-timed need to use the restroom can make a score plummet.
Josh Hayes said…
Lori, thanks for making me think harder about this. If we've assumed an underlying normal distribution - and this is a technical term! I don't mean "typical" - then we should have originally had estimates of the mean and variance of that distribution (and in this case, of course, mean = median since the distribution is symmetrical). Adding more data to the sample distribution should fill in the same underlying distribution, which should mean that existing scores shouldn't move at all, or at least, only a teeny bit.

I could see scores moving with a central tendency -- that is, percentile scores above 50% could creep downward, and those below could creep upward, as the tails of the distribution are filled in compared to the relatively sparse original samples. But why would we see some scores originally in the top half of the distribution move down, while others originally in the same half move up? This should only happen if the real-world distribution is not as assumed.

And that's entirely possible: these tests seem to have something of an upper bound which is truncated, while the lower bound probably is less so. This means that the median will be greater than the mean, and lots of scores piled up near the upper end of the distribution could mean that a small rejiggering of the distribution could produce a substantial swing in percentile rank (but again, it should be on the whole in the same direction, unless a blob of additional scores were added both above high scores and below less-high scores - this would push scores between those two points together).
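
Here's a toy version of that ceiling effect, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(7)

# Normal scores hard-capped at a ceiling of 250 (left-skewed as a result).
scores = np.minimum(rng.normal(235, 15, 100_000), 250)
print(round(scores.mean(), 1), round(np.median(scores), 1))  # ~233.8 vs ~235.0

# ~16% of scores sit in a pile exactly at the ceiling. Rank that pile by
# "percent scoring below" and it's ~84th percentile; rank it "at or below"
# and it's 99th+. Small rejiggering, big percentile swing.
print(round(100 * np.mean(scores < 250), 1))
print(round(100 * np.mean(scores == 250), 1))
```

The mean lands below the median, as predicted, and anyone in the pile at the top can plausibly be assigned wildly different percentile ranks depending on how the ties and the tail are modeled.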

I think my original point, though, is mostly this: WTF? And I think that still applies. :-)
Josh Hayes said…
Oh, and another thing: it's impossible to countenance releasing such a test without a robust calibration sample in the first place. They should NEVER have used this test without a good understanding of the real distribution of expected test scores; if they've moved this much from the original, they did a horrible job to begin with. If they did a good job to begin with, this recalibration is screwed up. Either way, it looks bad for the utility of this test.
MAPsucks said…
The argument is to establish a teacher evaluation system based upon student test scores. Period. Doesn't matter if the tests are unreliable, lack validity, and result in wildly fluctuating scores; the system is "scientific" and will separate the wheat from the chaff, teacher-wise. Arne and Gates like it, so that's all that matters.
Anonymous said…
As a parent, I found MAP info somewhat useful (though less so now), but I have not seen how the district, the school, or the teacher uses the scores to guide teaching or effect changes at the classroom or individual level. I find teachers use what goes on in the classroom, the quiz and test results, informal Q & A, the HW and projects, and the state MSP to guide their teaching. I don't think MAP is being used to direct an individual's learning much, except when it's used as a gatekeeper. Thus far, I haven't seen MAP's purported benefit trickling down to the individual child. So for me, cost/benefit-wise, MAP is not worth continuing while we're having budget cuts.

something to think about
MAPsucks said…
reposting from earlier thread:

Kennewick School District Citizens blog has run a series of investigative reports on the MAP. It can be found here:

KSDblog with great links to series
Anonymous said…
My first grader's fall reading RIT scores went down significantly from the original fall scores, and her math RIT scores went up slightly. My third grader's RIT scores did not change at all.

I don't have much confidence in either set of numbers now.

-yumpears
Anonymous said…
Interesting link to the MAP analysis.

The irony is that the same link was referenced in the original discussions around MAP in SPS and it was suggested that you can't rely on random links found on-line...sigh. If you can follow the analysis, it's worth the read.
Anonymous said…
NWEA has expanded the scope of what MAP results mean. Perhaps, as they expand the uses of MAP, the fluctuations parents are seeing now are a reflection of a work in progress. From the NWEA 2011 RIT scale norms:

****"Common school realities and policies can, and often do, require scores to be used for different purposes,effectively changing their intended meanings and uses. Issues such as those involving accountability,estimating school effectiveness, estimating student progress, informing instructional content decisions,and using test results to select students into programs or make promotion or graduation decisions areamong the more common uses of test scores. In these and other common situations, test scores take onnot only different meanings, but carry varying consequences as well. While it is unlikely that any set of achievement test norms can fully accommodate such a broad range of uses and score meanings, it is a reasonable challenge to develop norms that can optimize the information that is used in these contexts.

Critical to taking up this challenge is the availability of references to student achievement status and change that can accommodate some variation in when tests are administered, differences in the precision of achievement estimates, and how their results are to be used. The results from this norming study provide a clear move in this direction."****

They want to be able to provide the data to guide present and future educational policies. This is a good business model, as today's Ed Reform movement emphasizes DATA-DRIVEN reform. Standardized testing will guide the direction, implementation, budget, and evaluation. NWEA is making an effort to reconcile MAP data with state and national standards. Given the variability there, lots of tinkering is going on. NWEA wants to use MAP to predict college preparedness and readiness, for teacher eval, school eval, student eval, program entrance/exit eval, and on it goes....

NWEA's ambition is vast, and so is the MAP payout potential. How does this translate down to a benefit for the individual child? Well, that's harder to see. There are good reasons for that: a test MAY identify problems, but it can't fix them if teachers/schools don't have resources or effective support, and it certainly can't fix whatever factors outside of school affect a student's life.

But we will have good data to judge from, make budget priorities from, and set educational standards from, won't we? Numbers don't lie, right?

something to think about
MAPsucks said…
reposting Anonymous:

"Interesting link to the MAP analysis.

The irony is that the same link was referenced in the original discussions around MAP in SPS and it was suggested that you can't rely on random links found on-line...sigh. If you can follow the analysis, it's worth the read."

Of course they said that when we were fighting the utter lack of independent, expert analysis of assessment tools (DeBarros' report? *snort*). But then what do they offer in return? PR-written emails and little to no context for what these changes mean to a student or family. Because MAP was acquired for downtown's Performance Management system and School Performance Framework. It is the club with which to "manage" the effectiveness of principals and teachers, and the worth of your neighborhood school.

WV: Hey, don't cheadoff my test!
Anonymous said…
Can anyone explain how a child can be admitted into AL based upon MAP scores that would now not be accepted? Maybe I have this wrong.

WS Dad.
Anonymous said…
I wonder why Gates likes MAP - would he run Microsoft that way? Oh wait, I forgot all the Windows bugs...

Opt out option now.

-JC.
WS Parent said…
I have the same question, WS Dad. Using the MAP to evaluate K-2 is ridiculous in the first place. Using it to assess Advanced Learning? I just don't even know where to go with this.
Anonymous said…
I'm seriously considering opting out our kids for the Spring test - we have our Winter data points for whatever, and I don't feel compelled to have them spend another three hours on it in the Spring when they also have MSP testing.

If they are using this data for teachers to have immediate feedback and see year over year growth, why isn't it just administered in the Fall of each year?

not a MAP fan
Anonymous said…
Something to Think About,

Here are a just few examples of how schools and teachers I know in SPS use MAP.

1. At the beginning of the year, I can use the results from last spring to determine which students might need further diagnostic testing. I can also focus on doing one-on-one reading tests with struggling readers first and begin reading groups and tutoring sooner in the fall.

2. The reading data gives a good idea of which students need differentiated text to access content area reading or extra support when alternative text is not available.

3. Before MAP, we administered paper-and-pencil math tests covering grade-level content at the beginning of the year. I had students who scored 100 percent and some who scored 0. In order to tell where those kids were, I would have needed to administer off-grade-level tests, which took even more time and coordination. MAP is much more efficient in giving me a general sense of how low or how high individuals are performing.

4. At mid-year I am able to analyze growth. I look for trends in my data and look for patterns. For instance, if I noticed that my lowest-performing students were making accelerated progress and my highest students were not moving, it would cause me to rethink my instruction to better meet the needs of these students.

5. MAP data is useful in helping to inform course placement in secondary schools, especially for those in need of remediation.

6. When my MAP data does not match my classroom-based assessments, it causes me to ask why and to investigate further. It does not mean that the test is useless. In some cases, I found that different tests were measuring different skills and that multiple data points provided a more complete picture of the student. Sometimes I found the student took very little time on the MAP, and then I don't rely on that data point to inform my instruction.

7. When a student moves to a different school within the district, we have immediate data to determine if interventions are necessary and at least a general sense of the level of performance from day 1. With struggling students we don't have a minute to waste in getting started with interventions.

Teacher
Anonymous said…
We opt our son out. How is the MAP test different from a report card? It seems like the only difference is that MAP scores compare your kid to other kids. It's an ego trip.

RA
Kathy said…
Something to think about,

Grant funding for teacher ed. has run out. Many teachers do not know how to use test results for instructional purposes.

Worse yet, I believe the Families and Education Levy will be tied to MAP. Folks promised they would "measure" the results of the F&E Levy. I think this was a huge mistake that will come back to bite them.
Anonymous said…
@teacher - those are all great reasons for a computerized testing tool.

So what do you do now with the child in your classroom who used to be at the 90th percentile but is now only at the 60th in math? Or the one you thought was at the 75th in reading, "not quite a Spectrum kid," but who is really at the 95th and should have been tested last fall? That's not "accelerated" progress, that's re-calibration...

I have no problem with the premise of using MAP to guide classroom teaching, or even to serve as a gatekeeper - but only if I believed MAP was testing the RIGHT things, with ACCURATE results.

-diane
MAPsucks said…
Teacher, that's all well and good, but I am surprised all those things are possible when teachers can't tell which questions (drawn from a national database) were asked, answered, or missed. How can one tell what areas need remediation?

You seem to be in a distinct minority. Here are 80 pages of teacher feedback on MAP efficacy: MAP teacher survey. The vast majority of these teachers are as dedicated as you are, but find MAP to be of limited utility.
Anonymous said…
Teacher, thanks for your input on how you use MAP. It sounds like your class takes MAP 3x a year and that it's particularly useful in helping you ID struggling students. I hope I am understanding that right? If there's no time to waste with struggling students, and if a pencil-and-paper test given within the first 2 weeks of school shows students scoring low or at zero, wouldn't that be a faster indicator than waiting on MAP results to tell you the same thing? Also, I would hope that for those struggling kids (especially if they are not new to the school), there would be some hand-off among teachers regarding kids with high needs. At least that is what I see being done at our school. It does take a bit of collaboration and time for teachers to do this.

As to using MAP for course placement in MS, especially for those in need of remediation, would the state MSP tests not do the job? Perhaps MAP can provide more strata and Lexile levels, so I don't doubt you can use MAP to help fine-tune. But with 150+ kids, large class sizes, and less instructional support in classrooms, wouldn't direct intervention with classwork and constant ongoing evaluation, from formal tests and quizzes to informal HW and Q & A checks, be more accurate for gauging what each child is or isn't learning (especially as it correlates directly to what is being taught in the classroom)? I don't doubt that MAP can supply data. That's its purpose.

I guess for me, it comes down to money: where should we spend it, and what benefits do we get from it? Perhaps there are many teachers here who find MAP useful and would prefer to spend the money on it. If so, then I bow to their wishes. I just haven't heard from many teachers that MAP has been beneficial in relation to the cost, the time, and the effort it takes to obtain such results.

something to think about
Anonymous said…
This winter is our first run-in with MAP scores, and they're *significantly* different (higher, actually) than my student's CogAT scores. Is large variation between these two tests normal for K students?
-confused
Anonymous said…
Confused, our first grader also had MAP percentiles higher than her CogAT scores. She also did better in math than reading on the MAP, while she actually is much better at reading. And, while we didn't see a big percentile shift with the recalibration, our child only went up one reading RIT point (whatever that is) from spring to winter 2011 - the same period in which she went from beginning readers to confidently reading chapter books.

I think we are going to opt out in the future. Trying to make sense of the numbers is a big waste of time.

nw parent
Anonymous said…
Remember, this is an anonymous posting. "Teacher" could mean anything -- including a classroom short-timer or no-timer.

Most teachers do not want their job evaluations linked to MAP. Even NWEA has stated that MAP should not be used as a teacher evaluation tool.

How many readers would like to have their job performance linked to MAP? Teachers are not afraid of accountability. Using MAP for such a purpose is a sick joke.

By the way, I do not have my job performance judged by MAP (thank God), so I am not writing out of self interest.

--enough already
Anonymous said…
The linked document of teacher responses about MAP is very informative. Some comments:

"MAP is a waste of time"

"The alignment of skills to scores is fuzzy"

"advanced learners didn't show growth"

"Our curriculum does not dovetail with the concepts being evaluated on the MAP. We need more intentional instruction of grammar, Latin roots, literary elements..."

"I'm not a fan of the MAP at this point"

"It is very difficult for students to read [long passages] on computer screens"

"Way too much overall testing for kids"

"the data collected does not align with the curricula or the state standards"

Thank you teachers!

not a MAP fan
SPSLeaks said…
This comment has been removed by the author.
SPSLeaks said…
In the interests of transparency, I would note that public records or "oh no she didn't" have additional NWEA/SPS records...
NLM said…
Just logged onto The Source, and SPS has an additional letter posted for parents of K-2 kids saying that the fall scores were incorrectly tabulated, hence the changed RIT scores. The recalibration was on top of that.
Anonymous said…
Two things:
1) Last year my son's score dropped dramatically at winter testing (below his score from the fall a year before). The staff happened to notice that he was just clicking through without really processing the questions (he was on their radar as a kiddo on an IEP). He now gets 1-on-1 support during MAP testing (making sure he's not just clicking through). But how many other kids might be doing the same thing, resulting in totally unhelpful data...?

2) This year my son's class had to take the MAP test 5 minutes into the first Monday back from a snow day--the teacher felt it was horrible timing. I've heard other staff say they just expect winter scores to drop (or not increase much) due to winter break. It's hard when parents are being told this is a measure of how much growth your child has made halfway through the school year...

Elementary Mom
seattle citizen said…
"The staff happened to notice that he was him just clicking though without really processing the questions (he was on their radar as a kiddo on and IEP). He now gets 1-on-1 support during MAP testing (making sure he's not just clicking through)."

Do you know what sort of support the staff gives your son to help him not just "click through"? I ask because if he is "coached" somehow, doesn't that further render the scores somewhat meaningless (and, statistically, skew other kids' percentiles as it raises his RIT)?

How does one "support" a student taking a test on a computer, and is that support factored into the resultant RIT score?
seattle citizen said…
I think what I am also getting at in the last comment is that if a staffer was sitting with the kid, saying things like "look carefully at all choices" or "have you carefully read the text," and all other students aren't getting the same support, who's to say all the other students looked at the choices and read the text? Doesn't this skew results? Do the students nearer to the kid, hearing the support, also adjust THEIR attention? What about the students further away?

Seems to further erode reliability. Not that your student, your wonderful kiddo, shouldn't get support; I'm always in favor of one-on-one support! (And an IEP would dictate some sort of support, perhaps.) But I'm looking at what it does to test results.
End the Idiocy said…
Finally, parents are waking up to what an idiotic test MAP is. I remember writing on here about this a year ago, and I was one of the few who saw that the "Emperor wore no clothes." Now, I am a teacher, and no teacher in my building (nor the principal) takes MAP seriously. It is one big, colossal, expensive joke in our building. But we have had the luxury of actually seeing the test questions, and believe me, they are stupid beyond belief. I had a father who teaches math at the U.W. look at the math MAP and leave the computer lab in anger because he was so upset with how ridiculous the questions are.

Isn't there somebody on here who can do something, for the love of sanity and our children, to get rid of this expensive test?
whitney said…
Where are the School Board members???

If anything needs to be addressed by them, and quickly, it is MAP, as it wraps up so many district shortcomings in one: wasted money, suspicious alliances/influence peddling, waste of valuable class time, faulty data and therefore faulty School Reports, botched Advanced Learning placements.

School Board members -- are you out there? Are you hearing the anger, pain, and confusion parents are voicing on this blog on behalf of their kids? Are you listening?? Show us!!!
Anonymous said…
Most of the board, the super, and the corporate types (aka the Alliance 4 Ed) will put up a stinking fight over getting rid of MAP. The argument will be: how else to evaluate teachers and bring more kids of color into APP? As if both could not be done otherwise. But lack of will and imagination will no doubt be on display.

DistrictWatcher
MAPsucks said…
The previous Board was content with the bill of goods hawked by staff. Oh, APP this, and MS pathways that. Then Teoh and the "oopses" came along. Can someone with an ounce of sense take a critical eye to what these $$$ give us? Already the only real data guy, Eric Anderson, has said MAP is crap and that we should use a combo of other free or cheap models. Oh! But would Gates and its $M for our "data warehouse" get mad and stomp off? Well, as is typical with these "philanthropissers," they have left the building. We are left with the dysfunctional, questionable wreckage. Meanwhile, NWEA can continue its juggernaut as the tool of choice for misguided districts nationwide. Did I mention that Broad loves MAP for its Ed Reform "Broad Prize" for sucking up?
suep. said…
Yes, there is something we all can do:

Boycott the spring MAP -- better still, demand the district cancel it, in light of this "recalibration" debacle.

Opt our kids out of the MAP.

Write to the school board and tell them to cancel MAP asap.

kay.smith-blum@seattleschools.org,
betty.patu@seattleschools.org,
sharon.peaslee@seattleschools.org,
martha.mclaren@seattleschools.org,
michael.debell@seattleschools.org,
harium.martin-morris@seattleschools.org,
sherry.carr@seattleschools.org

This is a lingering vestige of Goodloe-Johnson's failed "Strategic Plan." Four of the current seven school board members had nothing to do with its purchase. During this time of fiscal crisis and school time lost to snow days, we cannot afford to waste any more resources or classtime on MAP.
Anonymous said…
I am a teacher and a parent in the district. While I don't believe that MAP should be used for anything other than helping teachers develop appropriate lessons and making sure that kids are not "falling through the cracks", I do find it useful in those capacities, both for the kids I teach and my own children. If MAP were not a part of what we do, my then-kindergartener would have been stuck with no real reading instruction last year. Instead, after the first go-round of MAP, which showed she was off the charts for reading, she went off to read with the first grade every day and got her needs better met. If teachers use it appropriately, I have no problem with it. If a student performs well, then bombs the MAP, I don't know any teacher who changes their opinion of the student's skills based on the MAP. However, if a student isn't performing well in class but scores well on the MAP, that is something the teacher needs to investigate, to figure out how to better engage the student. I don't get how that is wrong.

To those of you whose children took "10 hours" to complete the fall MAP: this sounds like an issue with the school and how they administer it. In our building the children spend between 30 and 45 minutes per test, twice a year. I can tell you that MUCH more time than that is wasted in a classroom on a weekly basis.

I now have a 1st and a 5th grader. Both were "recalibrated." The 1st grader stayed the same (99th), while the 5th grader went down one percentile point in each area, to 90. No biggie.

Take it for what it's worth...it's just another piece of information on your child. Just my opinion...

ParentTeacher
suep. said…
ParentTeacher, you write:

If MAP were not a part of what we do, my then-kindergartener would have been stuck with no real reading instruction last year. Instead, after the first go-round of MAP, which showed she was off the charts for reading, she went off to read with the first grade every day and got her needs better met.

You mean to say that, without the MAP, your child's teacher would have been clueless about your child's reading abilities and left her to languish all year? If a teacher is unable to see that a child is "off the charts" in her reading skills without a standardized test to show her/him that, then that teacher is lacking some basic skills and insight.

In our building the children spend between 30 and 45 min per test, twice a year.

In my children's schools, I'm hearing that they easily spend an hour or more on each MAP test session. And schools with larger populations will spend/waste that much more time, space, and money administering the test to that many more kids.

I can tell you that MUCh more time than that is wasted in a classroom on a weekly basis.

Really? Then that reflects poorly on your school.

For my children, I value classtime, using the library for reading (not testing), and even recess more than I value having them sit in front of a computer three times a year to slog through the inaccurate, unnecessary (and irrelevant to their curriculum) MAP test.
Anonymous said…
I will speak to the 10 hour comment since I made it. At my daughter's school, they have one week of computer lab on a rotating basis. In the fall, the school administered the test during this time including the practice test leading up to the real thing. The kids did only this for the entire week of computer lab. I believe the same thing happened with the winter test, although they did shift the time to make it more kindergarten friendly. So, yes, they did spend this much time testing. My daughter hated it. Boooooorrrrriiiinnnngggg. I would much rather she had more classroom instruction time, or heck, more time to actually eat her lunch or play at recess than take this test. I know that the strong readers in my daughter's class were identified very early and are getting their needs met. We will be opting out in the future. I can tell my daughter is learning and I don't need a test to tell me that.

WS Mom
Anonymous said…
This comment has been removed by a blog administrator.
Anonymous said…
Anonymous, I'm going to repost as your unsigned comment will get deleted.

"The time spent on the test is also a function of the student's level and perseverence.

A 6th grader may spend close to two hours on single math test because they are getting algebra level questions, and actually trying to solve them. 52 questions is lot when you're used to class tests consisting of 20 questions.

Then you have to ask, was that time worth it? Will those scores be used for anything?

2/3/12 7:31 AM"

--spring roll
Anonymous said…
Having proctored many of these tests, I see great variation in time spent taking the test. There are definitely kids who blow the tests off. It's tough for some kids to sit still and focus for that long a period.
Some of these kids are actually quite capable of tackling very hard stuff and can do well on class and standardized tests; they are just not consistent at it. This is my beef, because these kids are more vulnerable to being overlooked academically while their behaviors are the main intervention focus. With limited resources and large class sizes, these kids often don't get what they need. MAP isn't going to fix that, or ID their needs faster or better. You need human interaction for that.

Other kids may require help from an IA, spec ed staff, or a proctor to manipulate the mouse and computer. There are those kids who will not move on from a problem if they don't get it at first, and yes, they can spend a LOT of time on the MAP. Many times, teachers have to send these kids back to finish the tests during make-up test times. These kids do quite well. Though I am beginning to see some testing fatigue, as we are now in the 3rd year of this.

Overall, I just don't think MAP tests are cost-effective or time-efficient for the individual child, mainly because teachers are evaluating kids in the classroom anyway. If you must have MAP, do it once a year. Better yet, save the money and spend it on more targeted evaluation for the kids who need it (dyslexia, behavioral/learning issues, etc.).

spring roll
Anonymous said…
ParentTeacher: I hear what you're saying, but wouldn't that apply to almost any computerized testing program, many of which are available at far less cost than the MAP? I don't have an issue with proper, helpful testing. But I do have an issue with a test that requires so much time and so many resources, proves to be of sub-standard quality, is very likely to be used inappropriately to "hold teachers accountable," is deployed in haphazard and inconsistent fashion throughout the district, and is not embraced by a majority of teachers.

It's not testing, or computerized testing in general. It's the MAP that stinks.

So couldn't we get better results from a quality testing program, instead of the MAP?

WSDWG
Anonymous said…
re: 10 hours wasted...

At least for our school, don't forget the "lost time" when kids are typically in computer class but are instead on "4th recess" because another class is using the computer lab (or perhaps the library, at your school) for testing. MAP testing lasts weeks in our school. If you're fortunate enough to have those resource classes 2x a week, that's a lot of extra time on the playground instead (but hey, they get fresh air!).

-diane
Anonymous said…
From a neighborhood email list--

"If your child's Seattle Public Schools MAP scores changed dramatically when they were recently recalibrated, KUOW Public Radio wants to hear from you! Contact KUOW Education Reporter Ann Dornfeld at adornfeld@kuow.org or 206-221-7082."

Signed,
MadronaMom
Anonymous said…
My child had a large change in reading, and I seem to be seeing anecdotal evidence on the blog of downward changes, especially in reading. My 8th grader went from a 249 RIT (98th percentile) to a 239 (83rd percentile). So the RIT score does change.

Confused
MAP (not Love) Stinks said…
The diameter of a sphere is 25 cm. Three of these spheres are packed in a cylinder so that they fill the vessel from top to bottom and side to side (diagram included). Identify the expression which represents the volume within the cylinder not occupied by the spheres. Choose from three expressions.

a. (3.14 X r^2 x h) - (3(4/3 X 3.14 X r^3)) =
b. (3.14 X r^2 x h) - (22/7(4/3 X 3.14 X r^2))
c. (3.14 X r^2 x h) - (3(4/3 X 3.14 X r^3)) X 3

5th grade MAP question...huh? True!

This has got to stop...NOW!
dw said…
MapStinks said: 5th grade MAP question...huh? True!

There is no such thing as a "5th grade MAP question". It's an adaptive test that adjusts to a student's ability level, no matter what age or grade. And it does do an okay job of that, in general.

and This has got to stop...NOW!

Mostly, what I'm wondering is what you find so offensive about this particular question (I've seen far worse). It seems like a perfectly logical and well-written question, especially as a diagram was included, and assuming the caret notation was just so you could post it here. Yes, it would be beyond most 5th graders, but once the test has leveled your kid (first sitting), almost all the questions they'll see are in the general range of their ability. (I'm speaking specifically about the math section; I don't believe the reading section does nearly as well.)

There are serious problems with the way the test is being used, and the way it was brought into SPS was crappy. I've also had credible reports of "bad" math questions (no correct answer available). But complaining about a perfectly valid question doesn't help the cause.
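
For readers wondering what "adaptive" means mechanically, here's a toy staircase sketch (illustrative only; NWEA hasn't published MAP's exact item-selection rule, so the step rule and response curve below are assumptions):

```python
import random

def adaptive_test(prob_correct, start=200, n_items=30, step=2):
    """Serve items near the current level; step up on a correct answer,
    down on a miss. The final level is a rough ability estimate."""
    level = start
    for _ in range(n_items):
        if random.random() < prob_correct(level):
            level += step   # harder next item
        else:
            level -= step   # easier next item
    return level

# Simulated student with true ability 216 on a RIT-like scale,
# answering via a logistic (Rasch-style) response curve.
def student(true_ability=216):
    return lambda difficulty: 1 / (1 + 10 ** ((difficulty - true_ability) / 10))

random.seed(3)
print(adaptive_test(student()))  # hovers in the neighborhood of 216
```

Once the staircase converges, the student sees items they answer correctly only about half the time by design, which is why almost everyone comes away feeling like they missed a lot, regardless of ability.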
Anonymous said…
I've always been dubious of the MAP tests, but now even more so. The day before the re-calibration was announced, my daughter's 1st grade teacher told me that she was far above what was expected in both reading and math. The day after the re-calibration, it seems that my daughter's improvement in math is somewhat below what was expected and she needs to "knuckle down" and do some extra math. What happened to actually just teaching? I looked over her math homework, and it seems that they have been practicing the same concepts for the last 3 weeks. And this is with a teacher I think very highly of.
MapStinks said…
dw...if you really understood what an adaptive test was and saw a question like the one above, you would have asked, "What question number was it?" and "What was the student's Spring 2011 Math RIT score?"

I would have then responded with "Question #2 and RIT = 216." To which you would have responded, "That's not right, the test is adaptive, based on the 2011 National Normative Data. It is supposed to get harder as the student moves forward, not start hard and move the student backwards."

Then, as a MAP supporter you would express outrage at the lack of alignment early in the test to the student's previous RIT range which was Nationally Normed at (drumroll) 5th Grade. You would then (digging deeper) check the problem against the Common Core or current State Standards and see, lo and behold, that non-Euclidean Geometry is a high school subject in general (well-beyond a 216 RIT). [So, if you want the boring details dw, there are 5th grade problems based on the 2011 Normative data, cross-correlated to RIT scores for the winter testing period based on the 50th percentile mean RIT at any grade level. There are Washington State correlations as well, but they reference the MSP success probabilities which would mean cross-checking the State 5th Grade MSP Question Bank against the success rate for passing the MSP with a 216 RIT. Did you really want me to explain that here?]

Indubitably, you would have felt bad for the kid who sits down to an exam knowing it will be used to make placement decisions the following year, and gets a problem right out of the gate that is impossible to answer with anything but a guess.

So, here is a good question dw. What is the correct answer? No fair Googling.

Here is a better question: Can you give us an appropriate geometry question for a 216 RIT in the question #2 position? Keep in mind (you should know this) that the test "adapts" upwards after three correct answers at the student's RIT level. [Hint - It has to be Euclidean Geometry (not non-Euclidean)].
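
If it helps, that rule reads like this in toy form - a sketch of my own in Python, not NWEA's proprietary engine, just the "step up after three correct at the current level" behavior described above (the 3-point step size is my assumption):

def next_rit(current_rit, answers_at_level, step=3):
    # Toy model of the leveling rule described above: only move to
    # harder items after three correct answers at the student's
    # current RIT level. Illustrative only; NWEA's real
    # item-selection algorithm is proprietary.
    if sum(answers_at_level) >= 3:      # True counts as 1 in Python
        return current_rit + step       # three correct -> harder items
    return current_rit

print(next_rit(216, [True, False, True, True]))   # 219: stepped up
print(next_rit(216, [True, False, True, False]))  # 216: stays put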

Finally, knowing as you do that the test is designed so that a student will get 50% of the questions incorrect (the Canadians exposed this little ditty), you will reevaluate your last shred of support for this deeply flawed exam and quit making excuses for its failures.

To MapSucks: Thanks for the temporary loan of your pen name (albeit modified to suit my sensibilities).
Anonymous said…
Does anyone know how a parent can officially opt their child out of this testing? Is there a form I need to fill out?

Thanks.

WS mom
Anonymous said…
So the letter regarding the re-calibration of scores said specifically that this would not affect the RIT score, but actually it did. What does that mean?

SE Parent
TraceyS said…
WS Mom - there is no specific form. Just write a letter to the principal stating you wish to opt out of spring MAP testing. I suggest letting the teacher know as well, plus following up on test day to make sure your child sits out.

I took my child out of school for a dentist appointment during one test period, and sat with her in class during another, but that is not required. If more than one kid is out in the classroom, it might be nice to find parent volunteers to supervise the opted-out kids.
MAPsucks said…
I would add a statement of your expectation that your child's time will be used constructively to further her education. Put them on notice that anything with punitive overtones will Release the Kraken! (kidding!) When I first opted my child out, I was informed that she would sit in the office. Uh-uh.
Anonymous said…
SE Parent:

There was another letter saying that K-2 students would see their Fall RITs change due to a grading error.

- D's mom
dw said…
MapStinks commented: if you really understood what an adaptive test was, and saw a question like the one above, you would have asked, "What question number was it?" And, "What was the student's Spring 2011 Math RIT score?"

If I really understood what an adaptive test was? First, it's not a good idea to start off with an insult, especially when I've built adaptive systems (not in education), which I doubt you have. Second, I'm not the one who called this question a "5th grade MAP question", which is clearly wrong.

The questions you ask above are good, but you've stopped too soon. What about: What is the assumed RIT value for this question? Was it one of the experimental questions that are mixed in for leveling purposes? There are probably others, but I'm sure you see my point.

The problem with the MAP test isn't that the concept is poor. In theory, an adaptive assessment like this should be head and shoulders above grade level criterion-referenced tests. The problem with MAP appears to be one of quality control. Too many of the questions are poor (this does not appear to be one of them), and the number of questions I've been made aware of that have wrong answers does not speak well of NWEA's technical prowess. Wrong answers are inexcusable. My guess is that this question is either new or improperly leveled, but that's only a guess.

Back to the question itself, So, here is a good question dw. What is the correct answer? No fair Googling.

I seriously hope you're joking. This is a trivial problem for anyone who had Geometry in middle school or junior high. I guarantee you there are at least a couple APP kids in 5th grade most years that will get this problem correct without guessing. If you think this is a non-trivial question, I hope you're not teaching mathematics! And WTH are you talking about with the non-Euclidean Geo?!
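
For the record, here's the whole computation, assuming (per the diagram) the three spheres stack so the cylinder's height is three diameters and its radius matches the spheres':

import math

r = 25 / 2                  # sphere radius = cylinder radius, cm
h = 3 * 25                  # cylinder height: three stacked diameters, cm

cylinder = math.pi * r**2 * h              # pi x r^2 x h
spheres = 3 * (4 / 3) * math.pi * r**3     # three spheres at 4/3 pi r^3 each
print(round(cylinder - spheres, 1))        # 12271.8 cm^3 -> choice (a)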

Beyond that, how do you have details about a specific question, where it appeared in a particular student's sequence, and intimate knowledge about that particular student? Are you a MAP administrator? Were you peering over their shoulders? Did they ask for help? Something seems fishy.

Lastly, Finally, knowing as you do that the test is designed so that a student will get 50% of the questions incorrect (the Canadians exposed this little ditty), you will reevaluate your last shred of support for this deeply flawed exam and quit making excuses for its failures.

What do you mean by "exposed"? This is no secret. It's exactly how an adaptive test is supposed to work! If you get even 70% correct, the test is not working properly.
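
The arithmetic behind that is worth spelling out. RIT is a Rasch-based scale, and under a Rasch model the probability of a correct response is exactly 0.5 when item difficulty equals student ability, so a test that keeps serving well-targeted items hovers near 50% correct by construction. A minimal sketch (mine, purely illustrative, not NWEA's code):

import math

def p_correct(ability, difficulty):
    # Rasch model, both parameters in logits (the RIT scale is,
    # as I understand it, roughly a logit rescaled and shifted).
    return 1 / (1 + math.exp(-(ability - difficulty)))

print(p_correct(0.0, 0.0))    # 0.50 -- item matched to ability
print(p_correct(0.0, -1.0))   # ~0.73 -- item one logit too easy
print(p_correct(0.0, 1.0))    # ~0.27 -- item one logit too hard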

I have no love for this particular exam, but it pains me to hear people nitpick at the wrong things. It sullies the credibility of others who are dealing with legitimate complaints.
suep. said…
To WS Mom and others, here's an update of something I wrote about a year ago for the Seattle Education Blog.

How to opt your child out of the MAP test:

Parents and guardians have the right to opt their children out of the MAP test. To the best of my knowledge, there is no official form, but this is what you need to do:

1. Write a brief letter/e-mail at the beginning of the school year saying you are opting your child out of the MAP test.

2. Send the letter to: your school principal, librarian (many librarians are charged with administering the test) or whomever administers the test for your school, and your child’s class teacher. If your student is in middle or high school, also send the note to the homeroom teacher and, frankly, all his/her other teachers just to cover all bases because it’s hard to know during which period the kids will be sent off to take the test.

3. Tell your child ahead of time that s/he's not taking the test, and to tell the teacher to talk to you if there are any questions about this.

4. Request that your child read a book or do homework in the library or classroom during the test.

5. Send this letter again, before every MAP test session, as a reminder (in Sept., immediately after the December break and in May/Spring).
Anonymous said…
As posted on the open thread, in response to the (questionably) released MAP test item:

I'm not seeing why the example MAP question is so outrageous. It's pre-algebra and it's not beyond the capabilities of some 5th and 6th graders in the accelerated classes. It's a straightforward application of formulas.

-but, still not a MAP fan


Oddly enough, a teacher gave the same example during curriculum night and it made me shudder to think the teacher considered it so hard.

The test is operating how it's designed to operate. The question is: Is it measuring what we want it to measure? Does it measure it reliably or with any precision?

It may be appropriate as a screening tool (to identify students for further testing, support, or advancement), but is it appropriate for high stakes uses such as Advanced Learning qualification and teacher evaluations?

Seattle parent
NorthSeattleParentofTwo said…
Would it not be less expensive (AND add back valuable instruction time to the classroom) to have the district pay the $400 per student to have a psychologist administer the Wechsler IV test every two or three years?
Imagine: just issue coupons/vouchers for 'one free Wechsler IV' to all the families in August and give a deadline to have it done (Jan 31) - those that want to abstain, can. Those who participate can have scores applied to... well, whatever it is they are actually used for in SPS.

This test is arguably more accurate (age quartiles are used, and ten subtests give you a comprehensive observation including national percentiles SANS RECALIBRATION), there's no yearly licensing fee, no trainers to pay/get subs for, no computers required, a voucher system would encourage opt-out families to actually do so, AND it's the test that many local private schools use.
Total hard cost IF EVERY STUDENT opted in: $18M. And NO hidden labor/overhead/transport/materials/software upgrade costs. Administer it every three years and this would have a bottom line of $6.2M/yr.
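
The back-of-envelope, for anyone who wants to check it (assuming roughly 46,500 SPS students, the enrollment that makes the quoted $400 rate produce these figures):

students = 46_500    # assumed SPS enrollment (back-solved from the totals)
per_test = 400       # quoted cost of one psychologist-administered Wechsler
total = students * per_test
print(total)         # 18,600,000 -> the ~$18M total hard cost
print(total // 3)    # 6,200,000  -> $6.2M/yr on a three-year cycle

-NorthSeattleParentof2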
Mrs. Grumps said…
NSP of 2-

You're missing a huge distinction between cognitive testing and achievement testing. The Wechsler IV is an IQ test, which is similar in function to the CogAT. The point of the MAP is to identify the current achievements of students. You can have a brilliant score on an IQ test but never have been taught anything, or be well taught despite low cognitive ability. That's a huge distinction.
