The speaker list is up for tomorrow's Board meeting; it's not as packed as I thought, with just four people on the waitlist. The majority of the speakers are speaking on high school boundaries (with several wanting to talk about Ballard High). There are only three of us speaking about the Green Dot resolution asking the City not to grant the zoning departures that Green Dot has requested: me; long-time watchdog Chris Jackins; and the head of the Washington State Charter Schools Association, Patrick D'Amelio. (I knew Mr. D'Amelio when he headed the Alliance for Education and Big Brothers and Big Sisters; he's a stand-up guy.)
Comments
This makes MAP look like a joke. A really expensive joke.
From a generic NWEA parent letter:
The most recent [norms] study occurred in July of this year [2011] and it benefits from a larger, more representative sample that reflects the diverse demographics and characteristics of students and schools in the United States. The larger, more representative sample allows us to make more informed estimates of student growth throughout the school year that we can use to adjust instruction in the classroom.
With the improved methodology behind the 2011 study, we are seeing some differences from the previous study, particularly in how student percentile rankings and growth projections are distributed. For the majority of students, the changes from the new norms have resulted in only minor differences. However, we are seeing more significant changes for certain groups of students, particularly students in the 1st and 2nd grade, but also for some students who are performing much higher or lower than others.
Your child’s learning, or demonstration of what they have learned, has obviously not changed as a result of the updated norms. Rather, the group of students that they are being compared with in the norm study has changed, and this will subsequently impact the average growth projections and percentile rankings relative to this new group of students. While these differences will be more apparent for some students, they represent significant improvements in how we are measuring and evaluating the academic growth of your child.
-parent
Good thing the private sector is here to help us poor public school folks understand just what is going on with our kids.
Oompah
So it seems this is NWEA-dictated: all school districts that use MAP have changed percentiles?
-Diane
Thanks to NWEA's slimy sales strategy (flattering and kissing up to district superintendents and tech admins, offering positions on their board of directors), they have gotten in through the back door and managed to a) avoid actually having to prove the validity of their product; and b) grab a monstrous market share without competitive evaluations.
I'm trying to figure this out.
In any case, it is all about the marketing.
n...
Thanks for your continued support!
NWEA
(well, Oompah actually)
WAmama2
Has your son plummeted in his abilities, as this swing shows? And what does this do to his confidence as a student? The danger of test scores . . .
Parents -- you should be demanding better of your district. You should be demanding that the money obviously wasted on MAP, which yields such arbitrary scores, be put to better use, such as hiring an elementary school counselor.
But NWEA continues to collect $$$ from you.
The MAP emperor has no clothes, and I can only hope this situation opens everyone's eyes to the dangers of such "data for hire."
This should cause an absolute revolt among parents. How can you ever trust this company again?
- D's mom
My middle school/high school kid's MAP scores all went down (math & reading both) by an average 3 points. Fairly consistent drop. It would be interesting to hear from as many families as possible, as REA's letter was characteristically vague on how many were actually affected and by how much. Do they not dare release this information?
What a colossal waste of our time and money to administer the MAP and chase this moving and questionable target every year!
You are very generous to fund this expensive BETA test for NWEA.
criminy
Another Parent.
I will tell parents that, unfortunately, the MAP is being used as it was NOT intended to be used: as a gatekeeper and a high-stakes test for individual students. It is used as one of the measures to get your student tested for advanced learning.
The variability in MAP scores for my students is truly astonishing. If you are interested in high stakes testing, please read up on it. There is some very good research now on what makes a valid test and what makes an invalid test.
-teacher
Many reports of high variability seem to be coming from the K-2 parents, which makes one wonder if this is an appropriate assessment for this age group. How much of the variation is simply a matter of the child's age?
The advanced learning tests were given verbally (as a group) for this age group in years past.
What a mess.
WS Mom
15 Reasons Why the Seattle School District Should Shelve the MAP® Test—ASAP
Re: K-2 scores, former SPS MAP administrators (Brad Bernatek and Jessica DeBarros) told a group of us parents who met with them in 2010 that MAP is not considered appropriate for grades K-2, which is why some other districts opt not to use it for those grades.
Also, I heard that the 2008-09 winter MAP scores took a nosedive district-wide (because of a post-vacation slump) rendering the data invalid and unusable. When I asked a knowledgeable teacher who was responsible for administering MAP in his school why the district didn't simply cancel the winter MAP, he said it was because NWEA (the test vendor) wanted three data points for its own research purposes. In other words, our kids are merely data fodder for a test vendor. And we pay the vendor for this dubious privilege.
Lastly, remember how we ended up with the MAP test. Broad Resident Jessica DeBarros was hired by Broad-trained Supt. Goodloe-Johnson to "research" which test the district should purchase. Lo and behold, DeBarros' report determined that the best choice was the MAP test, sold by Northwest Evaluation Association -- on whose board Supt. Goodloe-Johnson sat. (She was later cited by the state auditor for her failure to disclose this relationship -- an ethics violation -- and forced to step down from the NWEA board.)
MAP has been a costly boondoggle from the beginning. It's time for it to go.
In the meantime, OPT YOUR KIDS OUT.
divide 3/4 and get .75?
tell me how many maples are on the block when they're told 25% of the 16 trees on the block are maples.
calculate 12 - 16 and 16 - 12.
tell me how students bombed the test if 3/4 of 32 passed the test.
tell me how much ham to prepare if they need to cook 64 3 egg omelets and 25% of them will need 2 ounces of ham.
2 cubed =
3*2 =
3 squared =
2*3 =
5-11 =
PEMDAS
3*4/4*4 = what percent?
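For the record, the arithmetic those examples call for, worked out in a few lines of Python (plain readings of each problem assumed):

# Worked answers, assuming the plain reading of each problem above.
print(3 / 4)                    # 0.75
print(0.25 * 16)                # 4 of the 16 trees are maples
print(12 - 16, 16 - 12)         # -4 and 4
print(32 - (3 * 32) // 4)       # 8 students bombed if 3/4 of 32 passed
print(0.25 * 64 * 2)            # 32 ounces of ham for 64 omelets
print(2**3, 3*2, 3**2, 2*3, 5 - 11)   # 8, 6, 9, 6, -6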
We have tons and tons of data about our kids, but little of it is readily available and therefore useful.
I suppose we should dump another pot of money into policy picked by the credentially-clueless and career-first charlatans ...
ColorMeComplacent
-very tired of it all
Not much change
HELLO, we are not talking about quantum statistics here, or the space of closed three-geometries.
Null
The MAP producers generated a distribution of test results from a limited sample of test-takers. This allows new test-takers to be "placed" on the existing distribution: their new score, had it been in the calibration sample, would have fallen at the XXth percentile (fill in whatever number you want).
What they seem to be claiming is that they now have a much larger sample of test scores, so they can generate a new distribution of expected test results. I'm okay with that, from a statistical perspective, but from the anecdotes posted here, this seems to make no sense -- if this is really what happened, we should see a directional bias: every test percentile should slide in the same direction, if they're going to slide. But that's not what I'm seeing here: some kids went dramatically down, and some went dramatically up. I suppose this could happen if there is considerable between-grade variability, but that strikes me as unlikely to explain the dramatic differences.
In short, I think they don't know what the hell they're doing. I'd love to get my hands on the original data, revised data, and collection protocols, so I could give you a real assessment.
WV says this whole thing is "tanter"-mount to fraud.
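To make the placement mechanics concrete, here's a minimal Python sketch with made-up numbers (the normal model, means, and sample sizes are illustrative assumptions, not NWEA's actual norms):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration samples of RIT-like scores.
old_norms = rng.normal(195, 18, size=5_000)    # smaller, older norm sample
new_norms = rng.normal(205, 12, size=50_000)   # larger 2011-style sample

def percentile_rank(score, norm_sample):
    # Percent of the norm sample scoring at or below this score.
    return 100 * np.mean(norm_sample <= score)

score = 216  # the child's RIT score itself is unchanged...
print(f"old norms: {percentile_rank(score, old_norms):.0f}th percentile")
print(f"new norms: {percentile_rank(score, new_norms):.0f}th percentile")

The score never changes; only the crowd it is ranked against does, which is what the NWEA letter describes.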
My 2nd grader's RIT percentiles in K and 1st grade were high 90s in both reading and math. These test scores determined her eligibility to test for Spectrum/APP, etc. So now, after the winter test, her reading score stays the same, but her math plummets from a 99 to a 67?? 32 points? So, is she an "Advanced Learner" or not?? From the looks of this, is she now one of those kids who's advanced in one area but not the other? Or was she REALLY never advanced to begin with?
Our school (I won't mention which, but we are in West Seattle) did not take the fall test. Not to mention, they came back after Winter Break and immediately started testing. Is that REALLY the best time to be testing kids, after a 2+ week break?
This data is meaningless to me, but when it is being used to determine Spectrum/APP eligibility, this is very concerning.
Julian
Jane
-KS
-not unhappy
My thought is that if scores are normally distributed, the first sample would have a relatively short, wide bell-shaped curve (hopefully centered somewhere near the true population mean) due to its relatively smaller sample size, and the updated distribution would be taller and narrower (again, centered near the true mean). If so, then some originally reported low scores would likely be bumped up in the new sample, while some high scores would be bumped down.
In order for the changes to be unidirectional, you'd have to see the entire curve shifted to the left, with a completely different estimate of the population mean.
And if that's the case, then the whole thing has been a big scam. That is, the first sample (and heck, maybe the current sample) was not generally representative of American kids as a whole and should never have been used to determine placements, such as Advanced Learning.
Seriously, I'm thinking that the district should consider suing NWEA! If we "overtested" kids with the CogAT based on faulty data, that's an expense that the district should try to recoup. They sold us a product that was supposed to be nationally normed; 30-point swings in a recalibration indicate that it was not.
Crazy!
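A toy illustration of that unidirectionality argument, using scipy's normal CDF with made-up numbers: if a re-norm only changes the estimated spread of the curve, percentiles in the two tails move in opposite directions; a uniform slide would require the estimated mean itself to shift.

from scipy.stats import norm

# Hypothetical norms: same mean, but the larger sample yields a
# different estimated spread for the bell curve.
old_mean, old_sd = 200, 16
new_mean, new_sd = 200, 13

for score in (180, 195, 205, 225):
    old_pct = 100 * norm.cdf(score, old_mean, old_sd)
    new_pct = 100 * norm.cdf(score, new_mean, new_sd)
    print(f"RIT {score}: {old_pct:.0f}th -> {new_pct:.0f}th percentile")

# Tail percentiles move in opposite directions (low scores fall, high
# scores rise here); everyone sliding the same way would instead point
# to a shift in the estimated mean.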
From talking to some classmates in my kid's school (not APP), my kid reported that more than half of them were above the 90th percentile, with some scores at the top for the grade. The MAP scores in our case have been consistently in the 99th percentile for math and didn't change at all with the new norms. In reading, the new scores are one percentile lower.
I wonder, if only Seattle students were taken into consideration, whether those percentiles would be even harder to reach.
Bell Curve
-- just another anecdote
Won't make any negative difference to anyone, now that the test results have been shown to be completely unreliable...
Vote with your feet, people...
I'd be interested in hearing the District's explanation of some RIT scores changing here, when their letter states "This percentile update does not change your child's RIT score."
--monkeypuzzled
NWEA 2011 RIT Norms Study
Julian
Why do teachers have test info the day after? To begin using the info? To notice red flags, like a 70% drop from Spring to Winter, say? What if your kid's teacher did not look at the scores? What recourse does a parent have for what looks to be a serious anomaly?
"don't worry about it, see how the child does in Spring" doesn't cut it. This is part of a permanent record and who knows how it may be used in the future.
Any thoughts?
-Criminy
WWmom
I could see scores moving with a central tendency -- that is, percentile scores above 50% could creep downward, and those below could creep upward, as the tails of the distribution are filled in compared to the relatively sparse original samples. But why would we see some scores originally in the top half of the distribution move down, while others originally in the same half move up? This should only happen if the real-world distribution is not as assumed.
And that's entirely possible: these tests seem to have something of an upper bound which is truncated, while the lower bound probably is less so. This means that the median will be greater than the mean, and lots of scores piled up near the upper end of the distribution could mean that a small rejiggering of the distribution could produce a substantial swing in percentile rank (but again, it should be on the whole in the same direction, unless a blob of additional scores were added both above high scores and below less-high scores - this would push scores between those two points together).
I think my original point, though, is mostly this: WTF? And I think that still applies. :-)
something to think about
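The ceiling-effect claim checks out numerically. A quick sketch with a hypothetical cap: truncating the top of the distribution leaves the median above the mean, and it piles scores at the cap, where tiny score differences become huge percentile swings.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical norm scores with a hard ceiling at RIT 240.
scores = np.minimum(rng.normal(225, 20, size=100_000), 240)

print(f"mean   = {scores.mean():.1f}")      # pulled down by the truncated top
print(f"median = {np.median(scores):.1f}")  # unaffected by the cap: median > mean

def pct_rank(score, sample):
    return 100 * np.mean(sample <= score)

# With scores piled up at the cap, percentile rank is hypersensitive there:
print(f"{pct_rank(239.9, scores):.0f}th")   # roughly the 77th percentile
print(f"{pct_rank(240.0, scores):.0f}th")   # the 100th: 0.1 points, ~23 percentiles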
Kennewick School District Citizens blog has run a series of investigative reports on the MAP. It can be found here:
KSDblog with great links to series
I don't have much confidence in either set of numbers now.
-yumpears
The irony is that the same link was referenced in the original discussions around MAP in SPS and it was suggested that you can't rely on random links found on-line...sigh. If you can follow the analysis, it's worth the read.
****"Common school realities and policies can, and often do, require scores to be used for different purposes,effectively changing their intended meanings and uses. Issues such as those involving accountability,estimating school effectiveness, estimating student progress, informing instructional content decisions,and using test results to select students into programs or make promotion or graduation decisions areamong the more common uses of test scores. In these and other common situations, test scores take onnot only different meanings, but carry varying consequences as well. While it is unlikely that any set of achievement test norms can fully accommodate such a broad range of uses and score meanings, it is a reasonable challenge to develop norms that can optimize the information that is used in these contexts.
Critical to taking up this challenge is the availability of references to student achievement status and change that can accommodate some variation in when tests are administered, differences in the precision of achievement estimates, and how their results are to be used. The results from this norming study provide a clear move in this direction."****
They want to be able to provide the data to guide present and future educational policies. This is a good business model, as today's Ed Reform movement emphasizes DATA DRIVEN reform. Standardized testing will guide the direction, implementation, budget, and evaluation. NWEA is making an effort to reconcile MAP data to state and national standards. Given the variability there, lots of tinkering is going on. NWEA wants to use MAP to predict college preparedness and readiness, for teacher eval, school eval, student eval, program entrance/exit eval, and on it goes....
NWEA's ambition is vast, and so is the MAP payout potential. How does this translate down to a benefit for the individual child? Well, that's harder to see. There are good reasons for that: a test MAY identify, but it can't fix anything if teachers/schools don't have resources or effective support, and it certainly can't fix whatever factors outside of school affect a student's life.
But we will have good data to judge from, make budget priorities from, and set educational standards from, won't we? Numbers don't lie, right?
something to think about
"Interesting link to the MAP analysis.
The irony is that the same link was referenced in the original discussions around MAP in SPS and it was suggested that you can't rely on random links found on-line...sigh. If you can follow the analysis, it's worth the read."
Of course they said that when we were fighting the utter lack of independent, expert analysis of assessment tools (DeBarros' report? *snort*). But then what do they offer in return? PR-written emails and little to no context for what these changes mean to a student or family. Because MAP was acquired for downtown's Performance Management system and School Performance Framework. It is the club with which to "manage" the effectiveness of principals and teachers, and the worth of your neighborhood school.
WV: Hey, don't cheadoff my test!
WS Dad.
Opt out option now.
-JC.
If they are using this data for teachers to have immediate feedback and see year over year growth, why isn't it just administered in the Fall of each year?
not a MAP fan
Here are just a few examples of how schools and teachers I know in SPS use MAP.
1. At the beginning of the year, I can use the results from last spring to determine which students might need further diagnostic testing. I can also focus on doing one-on-one reading tests with struggling readers first and begin reading groups and tutoring sooner in the fall.
2. The reading data gives a good idea of which students need differentiated text to access content area reading or extra support when alternative text is not available.
3. Before MAP, we administered paper-and-pencil math tests covering grade-level content at the beginning of the year. I had students who scored 100 percent and some who scored 0. In order to tell where these kids are, I would need to administer off-grade-level tests, which took even more time and coordination. MAP is much more efficient in giving me a general sense of how low or how high individuals are performing.
4. At mid-year I am able to analyze growth. I look for trends and patterns in my data. For instance, if I noticed that my lowest-performing students were making accelerated progress and my highest students were not moving, it would cause me to rethink my instruction to better meet the needs of these students.
5. MAP data is useful in helping to inform course placement in secondary schools, especially for those in need of remediation.
6. When my MAP data does not match my classroom-based assessments, it causes me to ask why and to investigate further. It does not mean that the test is useless. In some cases, I found that different tests were measuring different skills and multiple data points provided a more complete picture of the student. Sometimes I found the student took very little time on the MAP, in which case I don't rely on that data point to inform my instruction.
7. When a student moves to a different school within the district, we have immediate data to determine if interventions are necessary and at least a general sense of the level of performance from day 1. With struggling students we don't have a minute to waste in getting started with interventions.
Teacher
RA
Grant funding for teacher ed. has run out. Many teachers do not know how to use test results for instructional purposes.
Worse yet, I believe the Family and Education Levy will be tied into MAP. Folks promised they would "measure" the results of the F&E Levy. I think this was a huge mistake that will come back to bite them.
So what do you do now with the child in your classroom who used to be at the 90th percentile but now is only at the 60th in math? Or the one you thought was at the 75th in reading, "not quite a Spectrum kid," but oh, is really now at the 95th and should have been tested last fall? That's not "accelerated" progress, that's recalibration...
I have no problem with the premise of using MAP to guide classroom teaching, or even serve as a gatekeeper. But only if I believed MAP was testing the RIGHT things, with ACCURATE results.
-diane
You seem to be in a distinct minority. Here are 80 pages of teacher feedback on MAP efficacy: MAP teacher survey. The vast majority of these teachers are as dedicated as you are, but find MAP to be of limited utility.
As to using MAP for course placement in MS, especially for those in need of remediation, would the state MSP tests not do the job? Perhaps MAP can provide more strata and Lexile levels, so I don't doubt you can use MAP to help fine-tune. But with 150+ kids, large class sizes, and less instructional support in classrooms, wouldn't direct intervention with classwork and constant ongoing evaluation, from formal tests and quizzes to informal HW and Q & A checks, be more accurate for gauging what each child is learning or not (especially as it correlates directly to what is being taught in the classroom)? I don't doubt that MAP can supply data. That's its purpose.
I guess for me, it comes down to money and where should we spend it and what benefits do we get from it. Perhaps there are many teachers here who find MAP useful and would prefer to spend the money for MAP. If so, then I bow to their wishes. I just haven't heard from many teachers that MAP has been as beneficial in relation to the cost, the time, and the effort to obtain such results.
something to think about
-confused
I think we are going to opt out in the future. Trying to make sense of the numbers is a big waste of time.
nw parent
This is an anonymous posting.
"Teacher" could mean anything--
including a classroom short-timer
or no-timer.
Most teachers do not want their job evaluations linked to MAP. Even NWEA stated that MAP should not be used as a teacher evaluation tool.
How many readers would like to have their job performance linked to MAP? Teachers are not afraid of accountability. Using MAP for such a purpose is a sick joke.
By the way, I do not have my job performance judged by MAP (thank God), so I am not writing out of self interest.
--enough already
"MAP is a waste of time"
"The alignment of skills to scores is fuzzy"
"advanced learners didn't show growth"
"Our curriculum does not dovetail with the concepts being evaluated on the MAP. We need more intentional instruction of grammar, Latin roots, literary elements..."
"I'm not a fan of the MAP at this point"
"It is very difficult for students to read [long passages] on computer screens"
"Way too much overall testing for kids"
"the data collected does not align with the curricula or the state standards"
Thank you teachers!
not a MAP fan
1) Last year my son's score dropped dramatically at winter testing (below the score from the fall a year before). The staff happened to notice that he was just clicking through without really processing the questions (he was on their radar as a kiddo on an IEP). He now gets 1-on-1 support during MAP testing (making sure he's not just clicking through). But how many other kids might be doing the same thing, resulting in totally unhelpful data...?
2) This year my son's class had to take the MAP test 5 mins into the first Monday back from a snow day -- the teacher felt it was horrible timing. I've heard other staff say they just expect winter scores to drop (or not increase much) due to winter break. It's hard when parents are being told this is a measure of how much growth your child has had halfway through the school year...
Elementary Mom
Do you know what sort of support the staff gives your son to help him not just "click through"? I ask because if he is "coached" somehow, doesn't that further render the scores somewhat meaningless (and, statistically, skew other kids' percentiles as it raises his RIT)?
How does one "support" a student taking a test on a computer, and is that support factored into the resultant RIT score?
Seems to further erode reliability. Not that your student, your wonderful kiddo, shouldn't get support; I'm always in favor of one-on-one support! (And an IEP would dictate some sort of support, perhaps.) But I'm looking at what it does to the test results.
Isn't there somebody on here who can do something for the love of sanity and our children to get rid of this expensive test?
If anything needs to be addressed by them, and quickly, it is MAP, as it wraps up so many district shortcomings in one: wasted money, suspicious alliances/influence peddling, waste of valuable class time, faulty data and therefore faulty School Reports, botched Advanced Learning placements.
School Board members -- are you out there? Are you hearing the anger, pain, and confusion parents are voicing on this blog on behalf of their kids? Are you listening?? Show us!!!
DistrictWatcher
Boycott the spring MAP -- better still, demand the district cancel it, in light of this "recalibration" debacle.
Opt our kids out of the MAP.
Write to the school board and tell them to cancel MAP asap.
kay.smith-blum@seattleschools.org,
betty.patu@seattleschools.org,
sharon.peaslee@seattleschools.org,
martha.mclaren@seattleschools.org,
michael.debell@seattleschools.org,
harium.martin-morris@seattleschools.org, sherry.carr@seattleschools.org
This is a lingering vestige of Goodloe-Johnson's failed "Strategic Plan." Four of the current seven school board members had nothing to do with its purchase. During this time of fiscal crisis and school time lost to snow days, we cannot afford to waste any more resources or classtime on MAP.
To those of you whose children took "10 hours" to complete the fall MAP, this sounds like an issue with the school and how they administer it. In our building the children spend between 30 and 45 min per test, twice a year. I can tell you that MUCH more time than that is wasted in a classroom on a weekly basis.
I now have a 1st and a 5th grader. Both were "recalibrated." The 1st grader stayed the same (99th) while the 5th grader went down one percentile in each area, to the 90th. No biggie.
Take it for what it's worth...it's just another piece of information on your child. Just my opinion...
ParentTeacher
"If MAP were not a part of what we do, my then kindergartener would have been stuck with no real reading instruction last year. Instead, after the first go round of MAP which showed she was off the charts for reading, she went off to read with the first grade every day and got her needs better met."
You mean to say that, without the MAP, your child's teacher would have been clueless about your child's reading abilities and left her to languish all year? If a teacher is unable to see that a child is "off the charts" in her reading skills without a standardized test to show her/him that, then that teacher is lacking some basic skills and insight.
"In our building the children spend between 30 and 45 min per test, twice a year."
In my children's schools, I'm hearing that they are easily spending an hour or more on each MAP test session. And schools with larger populations will spend/waste that much more time, space and money administering the test to that many more kids.
"I can tell you that MUCH more time than that is wasted in a classroom on a weekly basis."
Really? Then that reflects poorly on your school.
For my children, I value classtime, using the library for reading (not testing), and even recess more than I value having them sit in front of a computer three times a year to slog through the inaccurate, unnecessary (and irrelevant to their curriculum) MAP test.
WS Mom
"The time spent on the test is also a function of the student's level and perseverence.
A 6th grader may spend close to two hours on single math test because they are getting algebra level questions, and actually trying to solve them. 52 questions is lot when you're used to class tests consisting of 20 questions.
Then you have to ask, was that time worth it? Will those scores be used for anything?
2/3/12 7:31 AM"
--spring roll
Some of these kids are actually quite capable of tackling very hard stuff and can do well on class and standardized tests. They are just not consistent at it. This is my beef, because these kids are more vulnerable to being overlooked academically while their behaviors are the main intervention focus. With limited resources and large class sizes, these kids often don't get what they need. MAP isn't going to fix that, or ID their needs faster or better. You need human interaction for that.
Other kids may require help from an IA, spec ed, or a proctor to help manipulate the mouse and computer. There are those kids who will not move on from a problem if they don't get it at first, and yes, they can spend a LOT of time on the MAP. Many times, teachers have to send these kids back to finish the tests during make-up test times. These kids do quite well. Though I am beginning to see some testing fatigue now that we are in our 3rd year of this.
Overall, I just don't think MAP tests are cost effective or time efficient for the individual child. Mainly because teachers are evaluating kids in the classroom anyway. If you must have MAP, do it once a year. Better yet, save the money and spend it on more targeted evaluation for kids who need it (dyslexia, behavioral/learning issues, etc.)
spring roll
It's not the testing, or computerized testing in general. It's the MAP that stinks.
So couldn't we get better results from a quality testing program, instead of the MAP? WSDWG
At least at our school, don't forget the "lost time" when kids are typically in computer class but are instead on "4th recess" because another class is using the computer lab (or perhaps the library at your school) for testing. MAP testing lasts weeks in our school. If you're fortunate enough to have those resource classes 2xs a week, that's a lot of extra time on the playground instead (but hey, they get fresh air!)
-diane
"If your child's Seattle Public Schools MAP scores changed dramatically when they were recently recalibrated, KUOW Public Radio wants to hear from you! Contact KUOW Education Reporter Ann Dornfeld at adornfeld@kuow.org or 206-221-7082."
Signed,
MadronaMom
Confused
a. (3.14 x r^2 x h) - (3(4/3 x 3.14 x r^3)) =
b. (3.14 x r^2 x h) - (22/7(4/3 x 3.14 x r^2))
c. (3.14 x r^2 x h) - (3(4/3 x 3.14 x r^3)) x 3
5th grade MAP question...huh? True!
This has got to stop...NOW!
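For the curious, here is one worked reading of the problem. It rests on an assumption on my part: the formulas suggest the diagram shows three spheres of radius r stacked inside a cylinder of height h, with the empty space asked for. Checked with sympy:

from sympy import Rational, pi, simplify, symbols

r, h = symbols("r h", positive=True)

# Choice (a): cylinder volume minus three sphere volumes.
empty = pi * r**2 * h - 3 * (Rational(4, 3) * pi * r**3)

# If the three spheres are stacked, the cylinder height is h = 6r:
print(simplify(empty.subs(h, 6 * r)))  # prints 2*pi*r**3

That matches choice (a); with the spheres stacked, h = 6r and the empty space comes to 2*pi*r^3.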
There is no such thing as a "5th grade MAP question". It's an adaptive test that adjusts to a student's ability level, no matter what age or grade. And it does do an okay job of that, in general.
and "This has got to stop...NOW!"
Mostly, what I'm wondering is what you find so offensive about this particular question (I've seen far worse). It seems like a perfectly logical and well-written question, especially as a diagram was included, and assuming the carat notation was just so you could post it here. Yes, it would be beyond most 5th graders, but once the test has leveled your kid (first sitting), almost all the questions they'll see are in the general range of their ability. (I'm speaking specifically about the math section; I don't believe the reading section does nearly as well on that.)
There are serious problems with the way the test is being used, and the way it was brought into SPS was crappy. I've also had credible reports of "bad" math questions (no correct answer available). But complaining about a perfectly valid question doesn't help the cause.
I would have then responded with "Question #2 and RIT = 216." To which you would have responded, "That's not right, the test is adaptive, based on the 2011 National Normative Data. It is supposed to get harder as the student moves forward, not start hard and move the student backwards."
Then, as a MAP supporter you would express outrage at the lack of alignment early in the test to the student's previous RIT range, which was Nationally Normed at (drumroll) 5th Grade. You would then (digging deeper) check the problem against the Common Core or current State Standards and see, lo and behold, that non-Euclidean Geometry is a high school subject in general (well beyond a 216 RIT). [So, if you want the boring details dw, there are 5th grade problems based on the 2011 Normative data, cross-correlated to RIT scores for the winter testing period based on the 50th percentile mean RIT at any grade level. There are Washington State correlations as well, but they reference the MSP success probabilities, which would mean cross-checking the State 5th Grade MSP Question Bank against the success rate for passing the MSP with a 216 RIT. Did you really want me to explain that here?]
Indubitably, you would have felt bad for the kid who sits down to an exam knowing it will be used to make placement decisions the following year, and gets a problem right out of the gate that is impossible to answer with anything but a guess.
So, here is a good question dw. What is the correct answer? No fair Googling.
Here is a better question: Can you give us an appropriate geometry question for a 216 RIT in the question #2 position? Keep in mind (you should know this) that the test "adapts" upwards after three (3) correct answers at the student's RIT level. [Hint - It has to be Euclidean Geometry (not non-Euclidean)].
Finally, knowing as you do that the test is designed so that a student will get 50% of the questions incorrect (the Canadians exposed this little ditty), you will reevaluate your last shred of support for this deeply flawed exam and quit making excuses for its failures.
To MapSucks: Thanks for the temporary loan of your pen name (albeit modified to suit my sensibilities).
Thanks.
WS mom
SE Parent
I took my child out of school for a dentist appointment during one test period, and sat with her in class during another, but that is not required. If more than one kid is out in the classroom, it might be nice to find parent volunteers to supervise the opted-out kids.
There was another letter saying that K-2 students would see their Fall RITs change due to a grading error.
- D's mom
"If I really understood what an adaptive test was"? First, it's not a good idea to start off with an insult, especially when I've built adaptive systems (not in education), but I doubt you have. Second, I'm not the one who called this question a "5th grade MAP question", which is clearly wrong.
The questions you ask above are good, but you've stopped too soon. What about: What is the assumed RIT value for this question? Was it one of the experimental questions that are mixed in for leveling purposes? There are probably others, but I'm sure you see my point.
The problem with the MAP test isn't that the concept is poor. In theory, an adaptive assessment like this should be head and shoulders above grade level criterion-referenced tests. The problem with MAP appears to be one of quality control. Too many of the questions are poor (this does not appear to be one of them), and the number of questions I've been made aware of that have wrong answers does not speak well of NWEA's technical prowess. Wrong answers are inexcusable. My guess is that this question is either new or improperly leveled, but that's only a guess.
Back to the question itself: "So, here is a good question dw. What is the correct answer? No fair Googling."
I seriously hope you're joking. This is a trivial problem for anyone who had Geometry in middle school or junior high. I guarantee you there are at least a couple APP kids in 5th grade most years that will get this problem correct without guessing. If you think this is a non-trivial question I hope you're not teaching mathematics! And WTH are you talking about with the non-Euclidean Geo?!
Beyond that, how do you have details about a specific question, where it appeared in a particular student's sequence, and intimate knowledge about that particular student? Are you a MAP administrator? Were you peering over their shoulders? Did they ask for help? Something seems fishy.
Lastly, "Finally, knowing as you do that the test is designed so that a student will get 50% of the questions incorrect (the Canadians exposed this little ditty), you will reevaluate your last shred of support for this deeply flawed exam and quit making excuses for its failures."
What do you mean by "exposed"? This is no secret. It's exactly how an adaptive test is supposed to work! If you get even 70% correct, the test is not working properly.
I have no love for this particular exam, but it pains me to hear people nitpick at the wrong things. It sullies the credibility of others who are dealing with legitimate complaints.
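On that 50% point, here's a minimal sketch of the idea (a toy update rule with hypothetical numbers, not NWEA's actual algorithm): the test moves item difficulty toward the student's level until right and wrong answers roughly balance.

import random

random.seed(42)

def simulate_adaptive_test(student_rit, n_items=52, step=3):
    """Toy adaptive run: raise difficulty after a hit, lower it after a miss.

    A hypothetical stand-in for an item-response model, not NWEA's algorithm.
    """
    difficulty = 200.0  # every student starts at the same entry level
    correct = 0
    for _ in range(n_items):
        # Chance of a correct answer falls as items exceed the student's level.
        p_correct = 1 / (1 + 10 ** ((difficulty - student_rit) / 10))
        if random.random() < p_correct:
            correct += 1
            difficulty += step
        else:
            difficulty -= step
    return correct / n_items, difficulty

pct, final = simulate_adaptive_test(student_rit=216)
print(f"answered {pct:.0%} correct, ended near difficulty {final:.0f}")
# Once the difficulty homes in on the student's level, p_correct is about 0.5,
# so roughly 50% correct is the design working, not the student failing.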
How to opt your child out of the MAP test:
Parents and guardians have the right to opt their children out of the MAP test. To the best of my knowledge, there is no official form, but this is what you need to do:
1. Write a brief letter/e-mail at the beginning of the school year saying you are opting your child out of the MAP test.
2. Send the letter to: your school principal, the librarian (many librarians are charged with administering the test) or whoever administers the test for your school, and your child's class teacher. If your student is in middle or high school, also send the note to the homeroom teacher and, frankly, all his/her other teachers, just to cover all bases, because it's hard to know during which period the kids will be sent off to take the test.
3. Tell your child ahead of time that s/he's not taking the test, and to tell the teacher to talk to you if there are any questions about this.
4. Request that your child read a book or do homework in the library or classroom during the test.
5. Send this letter again, before every MAP test session, as a reminder (in Sept., immediately after the December break and in May/Spring).
I'm not seeing why the example MAP question is so outrageous. It's pre-algebra and it's not beyond the capabilities of some 5th and 6th graders in the accelerated classes. It's a straightforward application of formulas.
-but, still not a MAP fan
Oddly enough, a teacher gave the same example during curriculum night and it made me shudder to think the teacher considered it so hard.
The test is operating how it's designed to operate. The question is: Is it measuring what we want it to measure? Does it measure it reliably or with any precision?
It may be appropriate as a screening tool (to identify students for further testing, support, or advancement), but is it appropriate for high stakes uses such as Advanced Learning qualification and teacher evaluations?
Seattle parent
Imagine: just issue coupons/vouchers for 'one free Wechsler IV' to all the families in August and give a deadline to have it done (Jan 31). Those that want to abstain, can. Those who participate can have scores applied to... well, whatever it is they are actually used for in SPS.
This test is arguably more accurate (age quartiles are used, and ten subtests give you a comprehensive observation including national percentiles SANS RECALIBRATION), there's no yearly licensing fee, no trainers to pay/get subs for, no computers required, a voucher system would encourage opt-out families to actually do so, AND it's the test that many local private schools use.
Total hard cost IF EVERY STUDENT opted in: $18M. And NO hidden labor/overhead/transport/materials/software-upgrade costs. Administer it every three years and this would have a bottom line of $6.2M/yr.
-NorthSeattleParentof2
You're missing a huge distinction between cognitive testing and achievement testing. The Wechsler IV is an IQ test, which is similar in function to the CogAT. The point of the MAP is to identify the current achievements of students. You can have a brilliant score on an IQ test but never have been taught anything, or be well taught despite low cognitive ability. That's a huge distinction.