Curriculum & Instruction Committee Mtg - MAP

I wanted to follow up on some notes I had taken from the C&I meeting about MAP testing.  (I'm going to do a separate thread on instructional material waivers as this one is quite long.)

The Committee is made up of Chair Martin-Morris, Peter Maier and Betty Patu, but in a somewhat unusual move, they were joined by Director DeBell and Director Sundquist for the discussion on MAP and instructional material waivers.

The meeting got off to a not-so-good start with the news that the district missed its deadline for the MAP contract for this year.  It just fell off Mark Teoh's radar.  It was supposed to be done by Sept. 1.  Interestingly, this news just sat there and everyone looked at their papers.  You might think that one Director might have said, "I wish I was not hearing this news," or even, "What kind of problems does this create?"  But no, no one said a thing.

Mr. Teoh continued by saying that he wanted an Action report to intro and act on in the same meeting, and he had to be corrected that a motion is Introduced and then, at the next Board meeting, Acted on.  (The Directors had to mention a few times that, from the academic side, the motions are not being written properly.)

After reviewing the contract, I wrote to Mr. Teoh and asked whether the district is in arrears or within the "net 30 days" in the contract.  I also asked if the district might have to pay any late fee, or whether the fees were paid but we don't have a signed contract.  I have not heard back from him yet.

This is the kind of thing that gets us in trouble.  As I recall, an issue in Pottergate was work starting without a contract in place.

Mr. Teoh explained the following about MAP:
  • He said that although many schools are forgoing the optional fall testing, they still have more students being tested because of rising enrollment.  He said the contract reflects that and is $20K more than last year.  
  • They estimate about a 95% completion rate.  He said they had negotiated with NWEA to get money back if students are not tested.  He said that in 2008-2009 the district got money back.  This is at odds with the contract, which says NWEA is not obliged to do so.  I asked Mr. Teoh to send me any documentation on this change.
  • Michael said that he had understood that the district had been seriously considering NOT testing K and 9th grade because of issues around that testing.  The issue around not testing in 9th is that they already take the new state end-of-course assessments.  He didn't expand on why K wouldn't be tested.
  • Mr. Teoh made an interesting reference to NWEA "sponsors."  Does anyone know what he might have meant?  He said that he was going to give data to Wendy Louden about the 9th graders and those EofC assessments.  He said the district was still going to test Kindergarten students and that they had talked to principals about it.  
  • Director Maier stated that he would have liked to hear the discussion about direction and pace and decision points.  He said he understood the need to get the contract done but wanted to have that discussion.
  • Kathy Thompson, head of Academics, said they had talked to principals and there was no single view.  She said that some principals were firm on the ability to opt out in the fall, others thought every school should be doing the same thing, and others thought 3 times a year was right.
  • Steve Sundquist said he thought they were narrowing the scope of MAP testing.  He referenced WaKids, which I had never heard of before.  Apparently this is a new state kindergarten assessment system where the teacher visits the family (either at home or at school) and does an assessment based on a checklist.  It actually sounds like a good idea but I'm not sure why it would be talked about in this context.  Steve said, "I feel like we are coming to a fork in the road and should we test Ks?"
  • Kathy Thompson said WaKids is being piloted throughout the state and in 18 schools in the district.  She said teachers need a lot of training in how to do it, and questioned whether the state will continue the funding.  She said she likes it but it takes skill and time.
  • Michael continued to state that he wanted real evidence and data on the value of assessments.  He asked about other districts and what grades they tested.  He said he was skeptical about keeping the span of K-10 and would be happier with 1-8.  He warned Mr. Teoh to be able to state his case when it comes to the full Board.
  • Mr. Teoh said the info from other districts "can be made available."  (Internally I thought - why did the Board even have to ask?  How can they make decisions if staff only gives data in bits?)  He also said that 20-30 schools have opted to do fall MAP testing. I have requested this list as well. 
  • There was some document listing the costs and yes, I have asked for that as well.  
  • Martin-Morris also referenced dropping K testing. 
  • Peter said his memory is that they are doing K testing for advanced learning purposes and to give the kids "practice" for when it is a high stakes test.  I had no idea that either of these were reasons to give MAP to kindergarteners.  
  • Ms. Thompson said it was to cast a wider net around advanced learning students  and referenced some "long discussion with constituents."  I have no idea who she was referring to.
  • Mr. Teoh said that MAP and WaKids were both being given at a few schools (but again, I don't know which ones).
  • Harium said there seemed to be a disparity there and asked what we are really testing Ks for.  He said, "What do we get out of it?  We need to own this."
  • Michael said he had expected to see options on MAP, not just 2x a year versus 3x a year.
  • Mr. Teoh then said that MAP might not be appropriate for some 9th graders working ahead in math but hey! NWEA has a test for those issues.
  • Michael said he wanted to see dropping K and/or 9th grade testing.  He said he had asked them to be prepared for that.
  • Mr. Teoh, during this entire time, kept talking about gathering more data and I think the directors were frustrated because they expected a different kind of report.  
  • Steve said the district appeared to be "all over the map on this issue."
  • Kathy said they did a survey of families (which I vaguely remember) and that most families wanted it 3x a year.  (I asked her for this result because I don't remember it that way at all.)
  • Betty, seemingly frustrated, said, "Well, what do we go with?"  Michael, also frustrated, said that he had heard from many high schools about the issues around testing both in time and costs and loss of use of the library.  He also said he wanted a cost benefit analysis.  
That's where that discussion ended.  As you can see, the directors clearly thought they were getting one kind of report, didn't get it, and left confused.

Was this a case of the Board not being clear on what they wanted?

Or did staff want one thing and only present the case they wanted?  This becomes more clear with the presentation about instructional materials waivers.

In the end, on the issue of MAP testing we know that more students are being tested than last year because of higher enrollment.  I would have thought that dropping the fall testing at most schools would easily outweigh any new enrollment, but staff says that's not so.  We also know that staff seems to want to keep K and 9th grade testing.

I'm sensing some real discomfort on the part of the Board.

Comments

dan dempsey said…
Hey folks ... interesting talk about disparity from directors....

Instead of talking about testing ... how about talking about results and the disparity between Auburn and Seattle.

Seattle is apparently clueless about how to meet the academic needs of educationally disadvantaged learners. More testing is hardly the solution.

Check Auburn - Seattle reading scores here

And for Auburn Seattle Math scores here.

The results indicate ==>The SPS Board and the SPS Central Administration are incompetent ...

Here are the High School End of Course Algebra scores for students that took algebra last year.

========
Fast facts:
(1) Discovery/Inquiry approach to math does not work.
(2) The current thrust for Readers workshop and Writers workshop ... does not seem to be doing the job at least in terms of reading results.
(3) The Board refuses to look at results and make evidence-based decisions..... apparently the Interim Superintendent and the Board are trying to cover for blowing millions on poor materials that fail to deliver.

======
To Improve a System requires the intelligent application of relevant data. .... => Toss out four incumbent directors ASAP.
======

Perhaps the failure to provide interventions for struggling students should have been addressed by the board ... rather than changing the policy that required effective interventions (yet was rarely if ever followed).
Linh-Co said…
Whitman has MAP testing this week and next. North Beach and Bryant opted out.

I have a hard time believing there are more tests given this year due to an increase in enrollment. The two schools that I know of account for about 800 fewer tests.
Anonymous said…
Don't forget that one of the biggest reasons MGJ and Susan Enfield have been such MAP advocates is because they use it for teacher evaluations. Giving up the test for K and 9th graders will cause district administrators to lose some of this leverage.

In terms of fall testing--
Some staffs may want to administer the fall test so that they can better show growth and make the results more favorable towards teachers and schools.

Too bad a test is being used for a purpose that even NWEA has warned was never intended--a teacher tracking tool. Too bad it doesn't align with the standards. Too bad five year olds (and the rest of the students) are lab rats.

--lots of money to waste in a budget crisis
MAPsucks said…
Or did staff want one thing and only presented the case they wanted?

Ding! You're the winner!

Mr. Teoh should not be in the position he's in. He was hired as a replacement for Carol Rava-Treat, as the Exec Director of Strategic Planning and Partnerships. After Brad Bernatek was run out on a rail, they put Teoh in as head of Research, Evaluation, and Assessment. He's not a numbers guy. Eric Anderson is the brains behind his boss.

The MAP fee is per student per year, no matter HOW many times you test. They have given incremental credits back for numbers not tested that make slight adjustments to the annual fee per each.

SPS HAS to keep MAP to preserve their School Performance Framework, Academic Data Warehouse and Report Card system; the deliverables they promised the Gates Foundation and other "constituents". Last I heard Anderson was not using it for his value-added formula that he's tweaking.

If the individual scores actually inform anyone on anything, that's incidental.
Anonymous said…
My 9th grader did not do MAP tests in math at all last year. I assumed it was because she was in a pre-calc class with no other freshmen, so the class did not do the MAP.

She tops out of the data range that is useful, and the district is not going to offer her any other math than the math pathway anyway, so it didn't really make sense to test her.

I am not sure if they tested the 9th graders in Geometry & Algebra 2.

-High school parent
Anonymous said…
Eckstein has opted out of fall MAP testing this year except for students new to the district.

Eckstein parent
dan dempsey said…
"Eric Anderson is the brains behind his boss."

Oh yes!!!!

But Eric Anderson's reports contain too much truth... and are either ignored or fraudulently distorted.

Ignored- was his report on the practices employed by High Performing urban schools

Fraudulently distorted was his letter to the school directors on the performance of New Tech Network schools. --- Used by CAO Susan Enfield in preparing the School Board Action report of March 12, 2010.

CAO Enfield filed the fraudulent version with the Superior Court rather than the original.

11-3-2011 =>Appeals court will take up the district's failure to certify transcripts of evidence used in making decisions as correct, as required by law RCW 28A 645.020
dan dempsey said…
About OSPI's End of Course math testing in Spring 2011 for HS students.

Students who were in an Algebra Course took the EoC#1

Students who were in a Geometry Course took the EoC#2

Students above geometry took the EoC#1.
=======
Students taking either an Algebra Course or Geometry Course in middle school took an EoC for those subjects.
whittier07 said…
Whittier staff also voted to opt out of the fall MAP testing ... so there's another 440 kiddos not taking the test. We have around 20 students new to the district that are taking the MAP test this week.

At the meeting last night Robert Vaughn said that any k-kid who applied for advanced learning testing would be given the cognitive testing and then their winter MAP scores would be used to determine Spectrum/APP eligibility. He said that schools were being encouraged to test the k-kids early in the "winter MAP window".
Charlie Mas said…
A question was raised at a recent board meeting about the security around this test and the safeguards against teachers who might artificially improve scores - especially if the MAP is going to be used in teacher evaluations.

I think they should worry less about teachers doing things to improve student scores on the spring assessment and should worry more about teachers doing things to sandbag the fall assessment.

It would be really hard for teachers to pass answers to students in an adaptive online test. It would be impossible for the teachers to alter the student answers after the fact. It would, however, be very easy for teachers to somehow encourage students to give wrong answers in the fall. Then, when the students actually perform their best in the spring, the teacher will get credit for the difference.

Seriously. If I were a teacher I would be tempted to subtly encourage students to sandbag the fall test. There are ways to handicap their scores. Then, in the Spring, I would, of course, exhort them to do their very best and I would prep them in any way I could. It would appear normal and good. I wouldn't have to alter their answers.
Anonymous said…
If they are opting 9th graders out of MAPS then they also need to opt out APP 7th and 8th graders slated for EoC math exams; and ANY 8th grader taking an EoC math class.

Also parents: you want to end this test and get the $$$ back - opt out and ask for the refunded $$$ from NWEA to be returned to the classroom!

Opting OUT!
Anonymous said…
does anyone else experience the variation i'm seeing in MAP scores with my 2nd (then 1st) grader? i cannot see how it can be used for advanced learning eligibility when the variations are so wide - 75, 55, 94 in reading last year (1st grade) - how is that possible? just a regular kid, doing regular work throughout the year, no concerns whatsoever, no breakthroughs, no sickness on day of testing, etc. I'M not concerned about my child, but wondering how such a high-stakes test can show such variation?? I've heard this from others as well.

- curious about MAP variation
Charlie Mas said…
Mark Teoh isn't really working out, is he?

He has kind of blown this MAP thing and he also blew the corrections to the School Reports. He did about three Strategic Plan Reset meetings that were all the same - no progress shown at all.

He isn't very subtle about only showing the Board what he wants them to see. Even they can tell that there is something else behind the curtain. Even they are asking about it.

The pointlessness of this MAP assessment is shining through. Remind me again... why are we doing this?

Oh! Right! It was supposed to provide the districtwide common formative assessment to help teachers tailor lessons for students and to help the district management determine the effectiveness of curricular alignment.

Only the teachers are not finding it useful as a formative assessment, and the cost of the assessment in capital, time, and opportunity cost is too high for the slim benefit the district gets from it.

The MAP is proving a bad idea all the way around the track. That's a lot of money that could be re-purposed to support students.
MAPsucks said…
OMG! I'm about to have a conniption! After writing to the school two weeks ago opting out my child from MAP, I hear today he was pulled out of class to take a MAP math test.

WTF! It is the principle of the thing! Do parents get a say in what their children do or don't do?! There was NO notice of MAP testing this week in the bulletin. What is this? Selective testing? Why? Is this Teoh's cunning plan?

I need a margarita, size LARGE.
Anonymous said…
To Anon at 1:31 pm.

Last year, my kindergartner scored 70 in the fall for math and then in the 90's for winter and spring. I put the 70 down to his lack of familiarity with using a mouse in the Fall.

My 5th grader's test scores were quite consistent. I would expect more variability with younger kids.

My kids are at John Hay - I haven't heard one way or the other if they will be doing Fall MAP testing. Jane
Mother of Mediocrity said…
--curious about MAP variation

I see the variations in my son's MAP scores, but I don't understand them and haven't been presented with an explanation that makes sense to me.

Reading scores - RIT (& percentile) for fall, winter and spring: 202 (78), 210 (85) and 203 (60). How does someone's score drop 7 Rasch units when attending school daily without a detected head injury or diagnosed cognitive malfunction? Only when the test is highly variable.

Math scores - RIT (percentile) for fall, winter and spring:
203 (84), 203 (68), 215 (86).

It is claimed that MAP is aligned to Washington State standards (MSP), but I find that hard to believe looking at my child's individual scores: his Reading and Math MSP performances did not correlate.

The MAP scores do not always correlate with MSP scores either, so together they don't show me much other than whether or not my child could benefit from outside tutoring - for example, scoring far below the proficiency range on an MSP strand in which he showed 12-point RIT growth from MAP test to MAP test last year.

If the school wants to use his scores to determine what level math he takes this year, they can do that without argument from me. I don't have much confidence in the SPS math curriculum for elementary school students anyway and tutor him using another math curriculum at home.

The reading tests' disparities I don't have much problem with: consistently he scores poorly in one strand on both tests, and that has nothing to do with his actual reading level. Plus the MSP reading test seems to have been much easier for SPS children in his grade than was the MSP math test, as many more children in his class met standards.
Maureen said…
Does anyone have a link to info on MAP standard errors? I think I remember someone (JoanNE?) at the Thornton Creek talk last year presenting a graph that showed s.e. and RIT scores by grade level. I'm wondering how Advanced Learning is dealing with kids whose scores are within the s.e. of the target.
dan dempsey said…
"The pointlessness of this MAP assessment is shining through. Remind me again... why are we doing this?"

Why are we doing this?

Apparently because the decision-makers wish to do it.
In the SPS no other justification is needed.

----
And the $500,000 four .. on the best Board that money can buy are all for it.
Anonymous said…
There had been talk of using a "Spring to Spring" sample for teacher evals instead of "Fall to Spring." Supposedly the growth measure is more accurate and corrects the drop in scores seen in the fall.
It would also serve to prevent sandbagging.
- glad it's not me any more
MAPsucks said…
Glad it's not me,

Spring to spring, WTH, why not use MSP then?
Anonymous said…
@Mother of Mediocrity thanks for sharing your child's (variable) scores too.

i understand there can be random variation, outliers, etc, but it seems that EVERYONE willing to share numbers, encounters these weird variations. the premise of MAP is amazing - i love the concept of adaptive testing, being able to access scores right away, see the sub category drill downs, but no teacher seems to be able to explain what the subcategories mean, and no one seems to trust the numbers anyway. i'd be fine with having my kids tested 3xs a year if i thought the numbers actually were worthwhile in determining growth and gaps.

-curious about map variation
Anonymous said…
oh yes, my other concern is that if MAP moves to a spring to spring type schedule, what if my child had a "bad" test that one spring, and is stuck with that score for an entire year?? at least now if i have a bad spring score, we can look back and find a test 20 points higher within the past year.

-curious about MAP variation
suep. said…
There had been talk of using a "Spring to Spring" sample for teacher evals instead of "Fall to Spring." Supposedly the growth measure is more accurate and corrects the drop in scores seen in the fall.
It would also serve to prevent sandbagging.
- glad it's not me any more


Glad -- has it been openly stated that MAP is being used to evaluate teachers? Or is it simply insinuated and understood? And who is saying this?

I don't believe the district has ever told us parents that's what MAP was purchased and intended for. And yet...
Useless said…
My kids' MAP scores can also be all over the place. My 4th grader scored the following on the math:
Fall 203 (84th), Winter 200 (57th), Spring 208 (69th).

My 7th grader, who scores between the 90th and 98th percentile in MAP reading (last 6 tests), scored a 413 on the MSP reading, which is at standard, but not much above it.

I don't think these tests give us a good picture of what our kids know, how much they are learning or how well the teachers are teaching.
seattle citizen said…
"'The pointlessness of this MAP assessment is shining through. Remind me again... why are we doing this?'

Why are we doing this?

Apparently because the decision-makers wish to do it.
In the SPS no other justification is needed...And the $500,000 four .. on the best Board that money can buy are all for it."

Dan, don't forget the other reason we are doing it: The ex-Superintendent, unbeknownst to the Board, sat on the board of NWEA; she was their lackey. Without disclosing this, she proceeded to "select" NWEA's MAP over a couple of other tests. She then sold this selection to the Board as a data-driven choice, and there you have it. Then later 'fessed up, resigned the NWEA board, and bemoaned all those pesky researchers on this blog.

THAT'S why we have MAP, for all its pros and cons. Our Foundation superintendent wanted it, just as the Foundations wanted their board.
seattle citizen said…
MAPsucks asks:
"Spring to spring, WTH, why not use MSP then?"

Because the new teacher contract requires TWO tests that are used district-wide as evaluative tools, ostensibly for correlation. MSP is one; what's the other? MAP.

Students are tested so teachers can be evaluated, see? And as we see in this thread, the tests are just sooo reliable!

And don't get me started on THIS use of MAP/MSP to evaluate:
Little Johnny is in an LA class and also a Reading class (he's behind). His parents might or might not read with him; it depends on the season and their mood. Lately he has a tutor.

Johnny's Reading scores go up on MSP. Who gets the "credit"? His scores go down? Who gets fired?
MAPsucks said…
If the scores are not helping teachers (and according to hundreds of teacher survey comments, they are not), and if they are not helping parents, what are they good for?

MAP is the tool to automate centralized control. Want to know how school X is doing compared to school Y so you can beat up the principals to beat up the teachers? Look on your computer dashboard at JSCEE and see the little graphs in pretty colors. Want to set up some kind of "objective" merit pay system to "motivate" teachers to work harder and longer? Use tests and big growth charts in the faculty lounge and make them place post-its all over the place. Make it into a horse race but not against China or Taiwan, against each other. It gives the illusion of scientific measurement and progress.

Rubbish! Just opt your kid out.
StepJ said…
If I read it correctly... NWEA is charging by how many students are being tested, not how often?

If that's the case, then schools opting out of Fall testing would not decrease the number of students tested; with increased enrollment there will still be more individual students tested in Winter and Spring.

That is an interesting distinction. So to decrease funds flowing to NWEA I better get going on opting my kids out of the test for Winter and Spring.

Our kids have also had wildly fluctuating scores. Very high in Winter. A plummet in the Spring. One Spring-plummet kid had a fever of 104 the day following the test. The other said they couldn't hear the test, so read it (and was younger and couldn't read that well at the time). Either way, not a fair way to hold a teacher accountable - IMO.
Jan said…
I loathe MAP. I loathe the dishonest way that it was brought in (by a superintendent who sat on the Board of the company that sells the test -- and didn't disclose the conflict of interest). I loathe its misuse in evaluating teachers and in qualifying kids for APP, when there is no evidence for either case. I loathe the cost. I loathe the missed class time, the wasted teacher time in trying to interpret a bunch of test scores on stuff that is not aligned to District learning goals. I absolutely despise the loss of libraries for a quarter of the year.

And yet -- wouldn't it be nice if we really DID have some way for teachers to quickly and inexpensively assess whether kids had mastered a concept or a unit and were ready to move on? If they could quickly determine that 5 kids in their class, frankly, had mastery of that concept BEFORE they started the unit -- and thus needed to be working on something else? The EOCs are a start -- but boy do I wish that we were spending our "testing" time and money on this -- rather than on the travesty that is MAP testing.

And on school comparisons -- what I wish is that the Executive Directors of the SE High Schools had shown up at RBHS and Garfield -- with the Franklin EOC scores in hand -- and demanded of Mr. Howard and the new RBHS principal that their math departments figure out what Franklin is doing right -- and what Garfield and RBHS need to do differently, so that low income and minority kids at GHS and RBHS start showing the kind of improvement that Franklin is seeing. (Of course, Franklin needs to do better too -- but they are clearly doing better than RBHS and GHS.)
Jan said…
A thousand thanks, Melissa, for this post. While I hope we have several new board members soon, I am encouraged by the increased interest existing board members are showing in an expensive, badly thought-out program.

So much of what you report from staff is so anecdotal, so "unattributed," and thus, so unconvincing. It is clear that they are all over the map (sorry) -- and that the program is "used" willy-nilly for any number of things (or not) with not a whole lot of thought given.

It is probably wishing too much to think they will just kill it. Here is what I wish they WOULD do -- I wish they would divide the district into halves. Let schools that like it keep MAP. Let those that don't ditch it altogether and come up with alternate "interim assessments" that teachers could use to "inform instruction." And then rigorously and diligently track:
1. Costs of each alternative (MAP will lose);
2. Time spent by students on each alternative (MAP SHOULD lose, but it might be a tie);
3. Value to teachers, parents, and kids, in terms of which one actually informs instruction;
4. Loss of space and class time (MAP will lose);
5. Effects on academic achievement of having MAP (versus less formalized, locally produced assessments that align with standards, etc.). If the assessments are done well, the non-MAP kids should do better than the MAP ones, as MAP testing is pretty useless for informing instruction on any kind of real-time basis.
SP said…
I really wish that there were audio recordings from these Committee meetings- Michelle Buetow is right- they should be posted online before Board meetings, but I seriously doubt that the District would do this as these meetings can be so dysfunctional!

At the C&I meeting there was a lot of confusion between what the Board members remembered and what the District staff said came out of a Board worksession last June. Obviously no one reviewed the worksession presentation & minutes, as the
Strategic Plan Update Board Workshop 6/15/11

PowerPoint, on page 16, includes the "Plan Adjustment for 2011-13" as reducing testing to winter & spring, with fall optional except required for all students new to the district (this would apparently include all incoming K's except at the 18 elementary schools piloting WaKids under the state grant).

The PowerPoint from June also mentions that a committee including parents, teachers, principals, REA, and the Advanced Learning & Early Learning departments was to gather input and "to consider reducing grade administrations." There was no mention of this committee at the C&I meeting, with only the district forming the "controlled choice" options- i.e., kids, do you want cooked broccoli or cauliflower for dinner?

Melissa- the survey that Cathy T. mentioned is also in this link. Cathy was very misleading when she said that "families were in favor of testing 3x/year at all levels." Really? The survey shows 30.5% supported 3x/year, 29.5% 2x/year, 11.6% one time, 16.5% 0 times and 11.9% no opinion.
Wow- does that sound like families support 3x/year at all levels?

There was more pushback than normal from not only DeBell, but also Maier, Patu and even M. Morris, that the Board was expecting to see some other options on the table than just "optional in fall" - i.e., dropping K and 9th specifically - by the time the Board Action comes up for a vote on Oct. 19th.

One side chuckle- Harium actually commented to Mark & Cathy T. that the (missing) MAP contract would be "helpful" to have, as well as any other attachments, at the C&I committee level and "not just for the first time at the Board Meeting Introduction when it's almost too late."  Gee...what a concept!
SP said…
Another reason to have audio recordings from all the Board Committee meetings and Workshops- although a bit improved from previous years, the minutes are very vague and do not reflect the various levels of discussions & specific concerns. From the June 15th workshop minutes:

"Develop Assessment tools to consistently track student progress and use data to drive improvements
FOLLOW – UP: Directors advised staff to be sure we do not lose testing of kindergartners for identification to accelerated programs.

It is assumed that kindergarten will be tested until a decision not to test is made. A decision on kindergarten testing will be made prior to the start of school.
Students new to the district will be tested and analysis needs to be done regarding grades tested. The number of students impact cost, but not the number of times the test is administered. The goal is to sign a testing contract by the end of July. Staff can adjust numbers of students and SPS will either be reimbursed if there are fewer students or pay more if there are more students."


...so... the goal was for the MAP contract to be signed by the end of July, and they haven't even written the contract at the end of September? The decision on K-testing "will be made prior to the start of school" (and now will not be an Action item until October 19th?).
At the C&I meeting the level of confusion on both the Board's and the District's sides continued and is a recurring pattern, while the district seems to just continue doing what it wants to do in the first place.
MAPsucks said…
Shoot, SP, Board action? What Board action? We no need no stinkin' Board action!

I can just see non-numbers guy Teoh figuring, oh, this is just a subscription renewal. No prob! Until the amount of over $400K arises. I would wager Kathie Technow said, uh, this ain't getting paid without Board approval.

The first NWEA contract was rushed through the Executive Committee where it was duly rubberstamped. Remember, that's the year the Board was told subscription fees would be paid with grant funds; then staff surreptitiously took that out of the Board Action report before the vote and NWEA got paid from the General Fund.

Same ole, same ole...
Joan NE said…
Charlie Mas: "Oh! Right! It was supposed to provide the districtwide common formative assessment to help teachers tailor lessons for students and to help the district management determine the effectiveness of curricular alignment."

The NWEA product is NOT a formative assessment. It is a BLACK BOX benchmark assessment. Teachers do not get to review, after the administration, the questions presented to the kids, and their answers. The purpose of formative assessment is to reveal misunderstandings that children are having DURING an instructional unit, and which are preventing the student from having success on the learning goals. Teachers must be able to see the questions asked and the student's answers if the test is to have formative assessment value. Access to questions/answers is necessary, but not sufficient, for an effective, useful, formative assessment system. There are a variety of additional features of MAP that render it useless as a formative assessment, including the lack of alignment to state standards and curricular materials. The description "Black Box Formative Assessment" is an OXYMORON, just as "Truthful Liar" is an oxymoron.

Despite NWEA claims, MAP is poorly aligned to Washington State Standards. The means of aligning, and the size of the DesCartes intervals relative to (a) the slope of expected student growth curves and (b) the uncertainty in student RIT scores, render the product basically useless for informing teachers about students' learning needs and "zone of proximal development."
MAPsucks said…
Joan NE,

You Rock! I wish all the research and knowledge you've acquired re: NWEA and assessment could be transferred via mind meld to every parent in this district.
Joan NE said…
Mother of Mediocrity @ 9/28/11 3:28 PM said... ...It is claimed that MAP is aligned to Washington State standards (MSP), but I find that hard to believe looking at my child's individual scores: his Reading and Math MSP performances did not correlate.


I have studied the means of alignment. I can report out how alignment is performed by NWEA, if someone wants the details.

My conclusion is that the alignment is poor. Because the alignment is so poor, there is no justification for using MAP scores and especially MAP subscore data for instructional planning and for "flexible grouping."

Using MAP data in this way is a misuse of the data.

Does NWEA recommend this use of MAP data? YES.

Does SPS provide data reports to principals and teachers that would help them, and even encourage them, to use data in this way? YES.

Does NWEA have any research that shows that using MAP data in this way causally leads to increased student achievement?

When I asked the last question of Eric Anderson and Mark Teoh at a meeting last spring, they were not able to cite any research that supports this use.

I asked them to seek out the best research on this question and send it to me. They have not yet supplied me with any research that validates this use of MAP data.
Melissa Westbrook said…
I would support audio recordings at committee meetings as well. The official minutes tend to be sketchy and, of course, I may view comments differently than others might hear them. (That said, I do try to be faithful to what is said.)

One problem, though, is that at these meetings, it is very intimate and there is a lot of low-talking going on. I was going nuts with Steve, Harium and Michael all going low and quiet. And it's not even a big room. Don't get me started on the BEX Oversight Committee meetings.
Joan NE said…
Anonymous said @ 9/28/11 4:01 PM: There had been talk of using a "Spring to Spring" sample for teacher evals instead of "Fall to Spring." Supposedly the growth measure is more accurate and corrects the drop in scores seen in the fall...MAPsucks responded....Spring to spring, WTH, why not use MSP then?

I have thought carefully about the District's rationale for dropping the fall test. I think the rationale is sound.

The statistical arguments against value-added models for teacher evaluation, whether using MAP data or any other data, seem to me to be quite strong. At a much larger N-size, it becomes more reasonable to make judgments based on test score data.

MAP data does have this advantage over MSP data: With the former, one can make direct year-over-year comparisons of student data.

I believe that when the MAP data is aggregated to a sample size much larger than ~30, the MAP data has much more value than does MSP data. For example, I believe MAP data, much more than MSP data, has value for evaluating whether District program and policy decisions have had a beneficial effect on student achievement.

The MAP data would have much more value than does state assessment data (especially when the latter is expressed as % meeting some threshold rather than as raw scores) as a metric for judging whether a pilot study (say of a curriculum, or a proposed policy) is successful.
Joan NE said…
Several parents shared their child's RIT scores, showing large variability in scores, and lack of any clear trend in the sequence. I have seen large variability in my kids' scores also, and even more randomness and variability in the subscore data (see Endnote #1).

My response to this question is too long for a single post.

I have MAP data from SPS that shows that such variability is typical. In fact, it would be odd if any student's data showed a relatively smooth trend over time, except perhaps for a student that is getting effective tutoring, and whose RIT score is well within the valid range. (It's not true that the MAP test does not have a floor or ceiling. See Endnote #2.)

What follows is an attempt to explain the nature of the variability in individual student's overall RIT scores.

The explication of score variability that follows is probably a difficult read...

You will see that what I write has implications for using MAP scores as an APP eligibility screen. The MAP data does provide some value that is not provided by state assessment data, but I think, at the aggregate, rather than at the individual, scale.

NWEA considers a MAP score to be valid if the SEM (standard error of measurement) for the test event is less than 4 RIT points. The 95% uncertainty interval for a MAP score is plus and minus two SEMs. SEM is typically around 3 RIT points, so the typical (95%) uncertainty interval for an individual child's MAP score is +/- six RIT points. This is a 12-point range.

Note that on grade reports, the District only reports the 68% confidence interval (+/- one SEM). About 1/3 of the time, the student's "true" score, if knowable, would be outside this range. About 5% of the time, a student's true RIT score, if knowable, would be MORE than two SEMs away from the reported test score.
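
For anyone who wants to check those "about 1/3" and "about 5%" figures, here is a minimal Python sketch (illustrative only; it assumes the measurement error is roughly normally distributed, which is the standard assumption behind an SEM):

    # Tail probabilities for normally distributed measurement error,
    # assuming a typical SEM of 3 RIT points.
    from math import erf, sqrt

    def prob_outside(k_sems):
        # P(|error| > k_sems standard errors) under a normal model
        return 1 - erf(k_sems / sqrt(2))

    sem = 3.0
    for k in (1, 2):
        print("P(true score more than %d SEM = %.0f RIT away): %.1f%%"
              % (k, k * sem, 100 * prob_outside(k)))
    # Prints about 31.7% for one SEM and 4.6% for two SEMs --
    # i.e., "about 1/3" and "about 5%" as stated above.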

[to be continued...]
Joan NE said…
[continued..and thank you for the compliment Cecilia. I also wish it would be so easy to share the knowledge..it's a lot of work to try to write it up concisely and clearly.]

To look at this another way:

--for about 3 out of every 10 students tested, the student's true RIT score, if knowable, would be MORE than one SEM away from any particular reported test score.

--for about 5 out of every 100 students tested, the student's true RIT score, if knowable, would be MORE than two SEMs away from a single reported test score.

(Yes, this fact ought to be taken into account in setting RIT score thresholds for APP eligibility: The higher the threshold, the greater the chance of false negatives. Also, the district should use the highest percentile score on record for determining eligibility in order to reduce the uncertainty added by score variability.)

Suppose we have some means of knowing that the TRUE academic growth of a particular child over a series of 12 MAP tests is ZERO, and that the TRUE score over this period is 220 RIT, and that the student's SEM on every test taken was exactly 3.0. It is not unreasonable to use this as an example, since beginning in late elementary or middle school (depending on which subject test we are talking about), the expected growth becomes rather small compared to the SEM. Due to the test-retest variability in MAP scores, it would be very strange indeed if the student got a score of 220 on every test.

This is what the parent will likely see in the score data:

a) 8 out of the 12 scores (give or take a couple) will fall WITHIN the range 217 to 223; 4 out of 12 scores (give or take a couple) will fall OUTSIDE the range 217-223. Depending on the student's grade level, this range could translate to a huge variation in percentile scores. (The uncertainty range for the percentile goes up as RIT score goes up.)

b) All or nearly all of the twelve scores will be within the range 214-226. Depending on the student's grade level, this range could translate to a huge variation in percentile scores.

c) There will be no trend evident in the scores: For each pair of successive tests, the parent should write down whether the score change was positive or negative. For the twelve tests, the parent will get a sequence of eleven signs ("drop, rise, rise, drop," etc.). The parent will see that score increases occur just about as often as score decreases, and that the order looks random. The sequence will look like the outcome of a coin-toss experiment: Totally unpredictable.
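
If anyone wants to see points (a)-(c) play out, here is a small simulation sketch (purely illustrative; it assumes normally distributed error with SEM = 3, per the discussion above):

    # Simulate 12 MAP administrations for a student whose TRUE score
    # stays at 220 RIT with SEM = 3.0, then tally the patterns in (a)-(c).
    import random

    random.seed(1)  # fixed seed so the run is reproducible
    true_rit, sem = 220.0, 3.0
    scores = [random.gauss(true_rit, sem) for _ in range(12)]

    within_1sem = sum(abs(s - true_rit) <= sem for s in scores)
    within_2sem = sum(abs(s - true_rit) <= 2 * sem for s in scores)
    signs = ["rise" if b > a else "drop" for a, b in zip(scores, scores[1:])]

    print([round(s) for s in scores])
    print("within 217-223: %d/12, within 214-226: %d/12"
          % (within_1sem, within_2sem))
    print(", ".join(signs))  # reads like a string of coin tosses

Run it a few times with different seeds and you will see roughly 8 of 12 scores inside the one-SEM band, nearly all inside the two-SEM band, and no trend in the rise/drop sequence.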
Joan NE said…
[part 3]

Now consider a child whose true RIT growth over a certain period is known, and whose SEM on every test is, again, exactly 3.0. If we subtract out the true growth from the sequence of scores, then the same results as given above will obtain.

Parents can estimate their kid's true RIT score profile by plotting out their kid's data (RIT score on the vertical axis, time on the horizontal axis), then fitting by eye a line that goes through the central tendency of the data, i.e., so that about 1/2 the data points are above the line (about 1/2 below), and so that about 1/3 of the data points are more than 3 RIT points away from the line, vertically.

I would try to fit the steepest line through the data, while meeting the stated requirements. The steepest line gives the most optimistic interpretation of a student's RIT score data.

Sorry this is so technical...I hope that if a person reads this several times, it will start to become clear what I am saying. I can write up a clearer explanation, with figures to help explain, if anyone says it would be helpful.
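
In the meantime, for anyone comfortable with a little code, an ordinary least-squares fit approximates the eyeball fit described above (a sketch only; the scores and dates below are made-up placeholders - substitute your own child's):

    # Fit a straight trend line through a RIT history by least squares,
    # as a stand-in for the "fit by eye" procedure described above.
    import numpy as np

    months = np.array([0, 4, 8, 12, 16, 20])          # time since first test
    rits = np.array([218, 223, 217, 221, 226, 220])   # hypothetical scores

    slope, intercept = np.polyfit(months, rits, 1)
    residuals = rits - (slope * months + intercept)

    print("estimated growth: %.2f RIT/month" % slope)
    print("points > 3 RIT from the line: %d of %d"
          % ((np.abs(residuals) > 3).sum(), len(rits)))
    # If the slope is small compared to the SEM (~3 RIT), the apparent
    # ups and downs are mostly measurement noise.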

End note #1. Sub-score variability is best discussed as a separate issue. Sub-score variability, it turns out, is a bigger problem for the district--and for NWEA--than is overall score variability.
The district (i.e., Eric Anderson) knows about this problem, but, as of my meeting with Eric and Mark T. last spring, they had not informed principals, teachers, and board members about this problem. Shame on them! NWEA suggests to districts to have teachers use subscore data for instructional (flexible) grouping of students. There is no justification for this practice. Because of individual score variability there is NO JUSTIFICATION for having students set RIT score growth goals, as NWEA suggests, and as our District is suggesting to our teachers and principals.

Any time spent using MAP data for instructional planning and for student goal setting is a WASTE OF TIME, and will not help to promote student achievement.


Endnote #2. I wrote that typical SEM is around 3 RIT points. This is true if the RIT score is in the valid range for the MAP test. NWEA defines that valid range as the range over which the SEM is 4 points or less. Thus the effective ceilings of the Reading and Math MAP tests are 245 and 270, respectively. The SEM of a test score increases rapidly as the score increases above these thresholds.

There are some fifth grade APP students that have such high RIT scores that their SEMs are 15 points! It is a waste of money to give the MAP test to kids whom we already know from prior testing to have scores on the boundary of or outside the "valid" range.
Joan NE said…
Here are some scribd documents that have to do with MAP scores and standard error.

Note that there is no trend in the expected standard error of measurement. The expected SEM is around 3, for both the Reading and MATH subtests of NWEA MAP, regardless of grade level. The first figure shows this.

If SEM is not about 4 or less, the student has a RIT score that is outside the range deemed to be valid (see 1st link).

NWEA effective ceiling: http://www.scribd.com/doc/66852523/

Expected RIT scores by grade:
http://www.scribd.com/doc/53044625

NWEA defines the "Expected RIT score" to be the RIT score corresponding to the 50th percentile in their national sample.

Plot your child's scores on here to see if the trend over time looks like the percentile score is staying the same, increasing, or decreasing. In SPS, the typical student tends to follow an NWEA national percentile norm.
Joan NE said…
"The uncertaintly [sic] range for the percentile goes up as RIT score goes up."

That this is true is evident in the second scribd doc that I linked to.
MAPsucks said…
Okay folks! Everything you wanted to know about MAP and then some!
none1111 said…
Great charts Joan. I've had images of those charts in my mind just from researching the data, but it's great to see it in black and white. Um, and red.

Something else to note is that while NWEA is transparent about MAP's standard error (good), and they invalidate tests where the std error is too large (good), they only do this for scores in the typical range; they ignore any measure of validity due to error for all scores above 240 (very bad!).

Why should parents care? Because if your kid had a score of above 240 and there was a large standard error associated with it, you're not likely going to know about it unless you really dig. In these cases, your kid's score is much less likely to be a good estimate of your child's abilities.
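
To put rough numbers on that, here is a quick sketch using the +/- 2 SEM rule of thumb from the earlier comments (the SEM values are illustrative, drawn from Joan NE's figures above):

    # Compare 95% intervals for the same reported score under a typical
    # SEM (3) and the large SEM (15) reported for some very high scorers.
    for sem in (3, 15):
        lo, hi = 250 - 2 * sem, 250 + 2 * sem
        print("reported 250, SEM %2d -> 95%% interval: %d-%d" % (sem, lo, hi))
    # SEM  3 -> 244-256; SEM 15 -> 220-280. The second interval is so
    # wide that the reported score tells you very little.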

Fine. So even if we assume MAP might be reasonably accurate for typically-developing students, we clearly see by these charts that the reliability goes out the window for advanced learning students. Not very good for Spectrum, and completely inappropriate for APP -- especially for selection into the program!

In reading, 240 is pretty high for most kids, but in math 240 is not at all uncommon for elementary kids in APP. For all you Spectrum and APP families, both current and prospective, understand that the advanced learning dept is using some highly unreliable scores to grant or deny your child admission to the program. This works both ways, false positives and false negatives: it's just as likely that a student would be denied when they deserved entry as granted when they don't necessarily need the service/designation.

This is very troubling! Advanced Learning needs to STOP using the MAP for anything to do with admission into their programs!
none1111 said…
Sorry, I should have left a link to the NWEA page describing the use (and non-use) of standard error to validate test scores. Here it is:

Why Do Particular Scores Appear Grayed Out?

See #6 - Standard error outside acceptable limits
seattle citizen said…
MAP HSPE RTI; PD PLC OSPI CCSS MSP/MAP ASAP.
Maureen said…
Joan NE and none1111, thank you for this! Anyone interested in entering the APP Accelerated IB program at Ingraham should know that they will only be allowed to test if their Spring of 7th grade MAP scores are at the 95th percentile.
