Reflections on Standardized Testing Forum

I attended the forum on standardized testing on Monday night at Thornton Creek Elementary.  There were about 40 people there including parents and teachers.  I sure wish more people had attended as it was a great discussion.  I wish someone from the district could have been there to see that this is how you have a public discussion.  Kudos and thanks to Chris Stewart for putting this on (and her speakers as well).

First up was Marian Wagner, a teacher from Salmon Bay, who spoke about some of the difficulties with MAP, both from a logistical perspective and a teacher's perspective.  She said that at one point the district math coaches had given out a useful bubble-sheet math test from Edu-Soft, but that has since gone away.

She did point out that high-achieving students don't get a lot out of MAP because their scores show no growth: the adaptive test keeps serving harder questions, but if she hasn't accelerated her teaching in class, the kids haven't seen the material and can't keep up.  That was how she phrased it; what I heard was that she isn't differentiating her teaching, but if she could/did, the high-achieving students would be able to push the MAP test farther (and they have the ability to do so). 

She also said that middle school students have realized that some of the answers can be figured out intuitively rather than answered from what they actually know.  

She said that teachers were shown how to talk with students about their scores and to make a goal sheet, so students try to build on their score each time they take the test.

She also got a laugh when she said that some kindergarteners, when instructed to put the mouse on their name (and the meaning was put your cursor on your name), picked up the mouse and held it to the screen.

When she took questions, one teacher spoke up and said it is a wide test but not a deep test and she felt that made for variable results and some confusion on the part of teachers as to how MAP should guide their teaching (if at all).

Several people asked why Dr. Enfield wants to keep MAP, and some teachers seemed to think it was because it is easy to use.  Eric Muhs said that at Ballard, in his discussions with peers, very few teachers seem to find it useful.

Sue Peters, co-editor at Seattle Education 2010, spoke next.  She went through her thread about the 15 reasons why SPS should drop MAP.  She pointed out that Jessica de Barros knew what some of the issues with MAP were before it was implemented (issues that probably came out during the pilot period).  Ms. de Barros knew that 40% of the libraries would have to be used for MAP testing (with the associated problems that brings).  Sue pointed out that her kindergartener's first school library experience was with MAP, which is a bit sad.

She said that MAP is not really good for K-2 (the younger age group struggles with the mechanics) or for high-achieving students, and that NWEA wants testing three times a year because it wants the data for its own uses.  I'm not sure that should be what drives how often SPS uses any test, and if that is a detail in the contract, maybe we shouldn't be signing the contract. 

There was also a parent, Rick Burke, who does like MAP.  He had a good PowerPoint explaining his position.  He said he felt its value offsets the questionable things about it.  He said that when kids know their scores (and they do with MAP), they want to do better. 

He gave an example of how his daughter seemed to be doing well in 7th grade math but her MAP scores during the year flat-lined.  He said it gave him and his wife notice about her math abilities that they wouldn't have received from her grades alone. 

We didn't discuss the HSPE much, and I suspect that is because (1) MAP dominates the discussion the way it dominates school time and (2) it is still evolving and I think parents are (rightly) confused about it.

I did point out to one dad who asked about opting out that you could opt out of both the MAP and the HSPE.  I said I had opted my sons out of the WASL at different points with no ill effects.  They both took it to get their diploma but the younger one had the opportunity to take it in 9th grade and had done so.  I don't know if that is still the case today.  Helen S. pointed out in another thread that you don't need a diploma to go to college (true) so you could opt your child out of the high school one as well.  (Although I suspect the district wouldn't allow a student to attend graduation if they didn't take it.) 

The final straw for me was when my older son took the WASL in 7th grade.  He did well in math but poorly in reading and writing.  At that time, because he was in Spectrum, he got to take the SAT.  He did very well across the board.  So I told the counselor that with his WASL scores he couldn't graduate from SPS but with his SAT scores he could get into UW.  He shrugged and said, "Just ignore that WASL score."  

I missed the last segment of the evening so if anyone else was in attendance, please let us know what was said.

At some point the issues around MAP will have to be addressed.  The district is spending millions on IT issues but that doesn't solve all the problems.  I believe the MAP contract is only for 3 years (help me out here, someone) so it definitely needs to be revisited. 

Comments

MAPsucks said…
I believe this is the comparative study cited by Marian...(?)

http://www.scribd.com/doc/52499544
Chris S. said…
We had a "voting exercise" at the end: people could choose 3 times a year, 2 times a year, 1 time a year, or Never for MAP at K-2, 3-5, and 6+.

Not everybody voted, and I haven't counted exactly, but in all grade categories it was ~15 "nevers" to 1 "3x a year."

This is a good reflection of what the audience was like. I'm curious as to what those who didn't come think. I think we will set up a doodle poll. Stay tuned.
MAPsucks said…
The NWEA contract has no specific term beyond the one year subscription. It allows for renewal ad infinitum. I've seen the contract "term" described by some as five years. This was a falsehood spread by those who really REALLY wanted it. Then, those of indolent persuasion (meaning the Board) who did not care to look into the matter simply accepted this without question.

One interesting bit of news that comes from the presentation is that, in fact, Gates pays nary a dime for MAP nor the significant hardware required (new computer labs in 23 schools, mobile laptops & carts, fiber optic backbone). It's on our dime. The "grant" monies cited early on were, in fact, state funding for the purchase and administration of diagnostic assessments like the Edusoft Math Benchmark assessment and Scholastic Reading Inventory. Stupid of us to assume it was a philanthropic grant. We all know what happens when you ASSume anything....
2E parent said…
I like the MAP. I like getting frequent objective data on how my child is doing, as a comparison to what the teacher thinks.

My child is twice exceptional, with a very high IQ and dyslexia. Teachers often have a hard time seeing his potential. His classroom performance, and MAP performance, are in the average range, and teachers don't see a need to push him to do the work he's capable of. Having regular feedback on progress helps me work with him at home on this. The MAP results have been consistent with private testing we've done, and I find having regular feedback on progress very useful.

I do wish that parents had access to the more detailed info that the teachers get. I think it's called DesCartes or something like that? I'd love to be able to use that data to build an at-home study plan that would help me get his achievement scores up to matching his IQ scores. I wish teachers would do this too, but it seems that 70th percentile is a-ok in public school, regardless of potential.
Dorothy Neville said…
There are valid reasons for having a regular and objective measure of kids' achievement levels. MAP may be the right choice for this, or it might be something else. But the 2E parent example is an important one. I have also been in the situation -- and know many parents who have as well -- where a kid was not appropriately identified as to achievement level, strengths, and weaknesses, and something like the MAP would have been useful to me. I know it has been useful for others.

So HOW do we work toward ensuring that we have the best data possible to help each child be appropriately challenged in the classroom?

I'd like to see a list of all the things people like about MAP, all the things people wish MAP could or would do. And what aspect of MAP is it? The exact test from NWEA? Something more than once a year? The computer adaptive aspect? The information it gives? Because I think it would be helpful to separate the discussion of MAP in particular and standardized tests, classroom assessments in general.

I thought that a parent could request the DesCartes information from the teacher. Is that not so?
dan dempsey said…
Well here is what I am going to assume.

On multiple occasions, it has been stated by various board members over the years, "We choose to trust our hired Professionals." On a multiplicity of occasions, the decision to trust the hired Professionals has meant ignoring large volumes of evidence contrary to what the Professionals proposed, and the Board did that ignoring in order to pass a Professionally proposed Action.

I am going to assume that the four directors elected in 2007 are going to go right on ignoring relevant data and other research so that they can continue passing actions recommended by their trusted hired professionals. Investigating or evaluating anything that would lead those four board members to do anything other than continue their blind trust in the hired Professionals will not occur.

I shall assume as well that:

The contract "term" described as five years was what the hired Professionals told the Board. Now remember that the Board failed to even read the $800,000 NTN contract on 2-3-10 before approving it. It is highly unlikely that such a board would have read the MAP contract, much less analyzed it. I shall assume that the four directors elected in 2007 will continue to look elsewhere rather than look for the truth. It makes it so much easier to trust their hired Professionals.
dan dempsey said…
I certainly agree with both Dorothy and 2E parent that a tool is needed for parents to receive objective data on what students have learned. I also believe it is of great importance for teachers to intervene in a timely fashion when students are struggling.

What I do not see is that the MAP provides teachers with any useful information about struggling students that can be easily applied to make intervention decisions.

I would also state I find it hard to believe that the MAP is either an accurate or cost effective tool for providing useful objective data about what most students are learning or not learning at each grade level.

The idea that the MAP is a cost-effective tool with which teachers make instructional decisions or intervention decisions appears to be completely without merit.

===========
It appears most probable that the MAP was purchased because MAP is the preferred tool of the Broad Foundation. .... and too many Board Directors choose to trust their hired professionals.
2E parent said…
What I like about the MAP:
* frequent
* nationally normed
* not limited to grade level
* results have been consistent with private tests administered one-on-one.

My particular child does better on the computer than with pencil and paper. I recognize that this isn't true for every kid.

I like the concept of an adaptive test, but I also know that a kid with dyslexia might miss an easy problem but still be able to do harder ones. This part concerns me. Another way to do out-of-level testing is to have a very long, hard test that few people finish. Private school admissions tests work like this. That also poses problems for kids with learning disabilities who already have confidence issues that might interfere with test performance. One-on-one administration doesn't seem practical in a public school setting. I'm not sure what the right answer is here.

I don't like that it eats up so much library time, but that is an issue with SPS implementation, not the test itself. Computer carts in classrooms seem like they would work better.
"...results have been consistent with private tests administered one-on-one."

Could you direct us to where you found this information?

On the issue of logistics, I, too, wondered why they didn't just have computer carts in the classroom with kids going and taking the test in class. (I realize it might be noisier but maybe that's the trade-off to not shutting down the library. Maybe NWEA says you have to do it in a group.) But just installing more wiring isn't going to help.

Interestingly, last night at the Board meeting, librarians were being honored and one of them said she was also the technology person at school.
2E parent said…
Sorry, I wasn't clear about that. My child's scores on tests administered one-on-one by a psychologist were within a few percentile points of his MAP scores. This has been true on all but one MAP test in the last 2 years. It's not scientific data, but it does give me confidence that I can use the MAP data to direct at-home learning, and do private testing less frequently.
Chris S. said…
2E parent, could you clarify? Are you talking about a K-2 child or a 3-5 child? It's an important distinction IMHO. Not only is it a different test, but there are different developmental issues as well.
Lori said…
I like the potential for MAP to help identify children for ALO, Spectrum, and APP. The district's testing deadlines are early in the school year, and with young children, the teacher/school may not know the child well enough to recommend or support testing. MAP can provide objective data to that end. This may be particularly important in schools that do not traditionally encourage families to test and transfer to another school when appropriate.

I don't think testing three times per year is necessary, particularly given that testing prevents kids in many schools from using their libraries. Test only in the fall and winter if the intent is to inform instruction.

I like the idea that the test is adaptive, and it's important for kids (and parents) to be told that they should expect to get questions that they can't answer and not to worry about it. However, from some of the example questions I've heard about, I'm not sure if the test is measuring what people think it's measuring.

We've probably all heard about the question about symbolism in the Scarlet Letter. Well, a kid might understand symbolism, but if they haven't read that book, they probably can't answer the question. Similarly, last year my daughter had been getting division questions indicated by the line with the dot above and below, but one test period, all the division questions were represented by the parenthesis/line combo, and she didn't know what that meant. She came home asking about that symbol and said, "Oh, I'd have gotten them right if I'd known that." So the question didn't test if she could divide but rather whether she knows the various mathematical operators. Not sure that's what we want.

Finally, I'd like to see the district do a better job explaining to parents how to read the reports. I still don't know what the reported score range is. Is that a confidence interval? If so, how wide is it? How should parents interpret it when the RIT score goes down a few points? Do they understand that it probably represents statistical variability rather than lack of progress? Or is it concerning to people? There's a motto in medicine that you don't order a test if you don't know what you're going to do with the results, and I feel like that applies to MAP in some ways. I don't know how to interpret the changes in scores, and the accompanying Lexile ranges we've been given don't seem based in reality as far as helping my child choose "just right" books. So as a single snapshot, MAP helped us decide to transfer to APP, but as an ongoing assessment tool, I'm not sure it's giving my family useful information.
Anonymous said…
I also have 2 children who are 2E.

Both of my children have had achievement testing with Kaufman & Woodcock Johnson. These results are similar to their MAP results and very dissimilar to their WASL, MSP, CBA & DRA results.

They are upper elementary & middle school.

-more 2E results
2E parent said…
K-2 and the private tests were Woodcock Johnson.
MAPsucks said…
From NWEA:

Is growth calculated differently for MAP for primary grades vs 3-5 and 6+?

Both MPG and MAP are based on the same continuum and RIT scale. That said, an MPG student performing at 180 may not, under some circumstances, perform equivalently on MAP. For example, assume the 180 student relies heavily on the audio support within MPG to achieve this score. Perhaps the student has strong phonemic awareness, but is not yet strong at decoding or comprehending text. That student may struggle when they move to MAP which does not offer this support. So the 180 MAP performer is likely to perform at 180 or better on MPG. The 180 MPG performer may not always perform at 180 on MAP if the student still requires audio support.
Anonymous said…
You can get the individual strand scores for your child's MAP test by looking up the scores at The Source. My kids are in K-5 so this was the first time I'd used the Source (I think it's used more in upper grades for homework). What I liked about this was for my kindergartner, when his math score wasn't as high as I would have expected, I could look at the 6 individual strand scores and found that there was one strand where he scored much lower. In this particular case since his lowest score was in his strongest area (computation), we put it down to his being 5 years old and making a careless mistake rather than it reflecting his abilities - but I liked that we could look at his score in this level of detail. There's more detail online at The Source than was included in the paper version we received from his teacher. Jane
Sahila said…
are we really worried about our kindergarteners' scores, at age 5????
Joan NE said…
Here are charts that parents can use to help them interpret their child's MATH MAP scores.

Go to http://www.scribd.com/
and enter "MAP STARR."

Else go to these links:

http://www.scribd.com/doc/52517469/MAP-STARR-Math

http://www.scribd.com/doc/52515625/MAP-STARR-Math-4-10

http://www.scribd.com/doc/52515509/MAP-STARR-Math-KG-6

Similar charts are being prepared for reading, and can be found with this same search term on scribd.com.

MAP-STARR: MAP-STate Assessment Results Relationship: Washington, MATH.

Black dashed lines: NWEA national norms.

Continuous gray lines: RIT scores corresponding to a given percent chance of passing Spring 2010 state assessments.

Data sources:

(1) NWEA 2011 Alignment Study for Washington State;

(2) NWEA 2008 document "RIT Scale Norms For Use with Measures of Academic Progress."
Joan NE said…
MAP has two features without which I would feel that MAP is totally pointless and even harmful: the lack of a ceiling and floor, and the national norming. The state assessments have ceilings and floors, and are not nationally normed.

Yes, NWEA markets their product for use as an instructional planning tool.

Yes, this District's website says MAP is a formative assessment tool.

In fact the latter is not true, and the former is a non-validated, inappropriate use of this product.

I believe the MAP is going to provide some very valuable data that we parents can use to press the District to improve its curriculum decisions, program decisions, etc.

Nevertheless, from having studied NWEA technical reports, I conclude that this District is causing harm to children in as much as it is able to coerce teachers to use MAP results for instructional planning.
Joan NE said…
About the individual strand scores:

For my fourth grader, I have a sequence of five scores for overall and for each of four strands.

The Source gives only ranges for the substrand scores. I calculated the center value on each range, and then plotted the sequences.

I found that the substrand scores were highly variable, flipping high and low in a random pattern.

I wonder if my child is unusual for having such unstable substrand scores.

What have others found?

If this is typical, then we can conclude that the substrand score data is generally meaningless.

If this is typical, then it is very inappropriate for teachers to use substrand scores as the basis for "flexible grouping."

This is an example of the inappropriate use of MAP data for instructional planning.
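
For anyone who wants to repeat this exercise, here is a minimal sketch of the midpoint-and-swing calculation in Python. The strand names, ranges, and the per-substrand error figure are made-up placeholders; only the method (take the center of each reported range, then look at the test-to-test swings) comes from the comment above.

```python
# Sketch of the midpoint method described above, assuming the substrand
# ranges have been transcribed by hand from The Source. All values below
# are made-up placeholders, not a real student's scores.

# One (low, high) RIT range per test administration, oldest first.
ranges = {
    "Number Sense": [(188, 196), (176, 184), (195, 203), (181, 189), (198, 206)],
    "Computation":  [(190, 198), (199, 207), (184, 192), (200, 208), (189, 197)],
}

TYPICAL_SEM = 6  # assumed per-substrand measurement error, in RIT points

for strand, spans in ranges.items():
    midpoints = [(lo + hi) / 2 for lo, hi in spans]
    swings = [b - a for a, b in zip(midpoints, midpoints[1:])]
    print(strand)
    print("  midpoints:", midpoints)
    # Swings much larger than the measurement error look like noise,
    # not real gains or losses in that strand.
    print("  swings:", swings, "| typical error:", TYPICAL_SEM)
```

If the swings routinely dwarf the assumed error, that supports the suspicion that the substrand scores are too unstable to use for grouping decisions.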
2E parent said…
Sahila,
With dyslexia, early diagnosis and constant monitoring are very important. There are many stories about teachers not 'getting' that a 2E kid is anything but average. Test data can help with convincing the teacher, or with building your own supplementation plan. So, yeah, we are worried about the scores. Not because it's the path to Harvard or something, but because it can help us figure out how to bring the kid's reading up to a level where the books are interesting.
SeattleSped said…
2EParent,

You have something better than MAP scores: you have an individualized education plan and (hopefully) a teacher with specialized expertise in your child's disability. It is the relationship between a student and his/her teacher, and your involvement, that will make the difference for your child. Not a thrice-yearly test.

I understand your sentiments regarding having additional data. I value the re-evaluation data I get for my child, and have had private testing multiple times. But, if the expense of data warehouses, hardware, data coaches, ITs and license fees means we actually have fewer resources for teachers, small class sizes and IAs, that's a no-brainer as far as I'm concerned.
MAPsucks said…
Here's a bit of news. Guess who else (besides SPS and Broad) contracts with NWEA. Why, Teach for America does!

http://www.scribd.com/doc/51175680/RedClayTFAagreement

And they get identifiable student data so that they can track the "effectiveness" of their "test-prep instructors".

This is wrong in so many ways. That's like the Seattle Times, a private company, saying, we want to administer tests to some classes at schools x, y, and z. You need to let us do that and let us use the data for our own purposes. Oh, and pay our fee of $10K per class, and daily fines if you don't let us.
Charlie Mas said…
I can't say whether I like MAP or not. Honestly I just don't know.

I can say that I like the product that we bought: the district-wide, adaptive, formative assessment that would assess students' knowledge and skills beyond grade level, facilitate differentiated instruction, and help teachers alter instruction to meet student needs.

What happened to that tool?

Ever since we bought MAP, I haven't heard anything about teachers using it to inform instruction or to personalize instruction or to identify gaps in instruction. Instead, I've only heard about it as a measure of school effectiveness or teacher effectiveness. What's up with that?
Jan said…
2E parent: I think your observations (as well as those of others who have perceived benefit from MAP testing) are very interesting. I too have a kid whose strengths, weaknesses, potential, etc. have required a lot of expensive private testing over the years. He is too old for MAP, but it would have been interesting.

It seems to me that parents who like MAP have mostly used it for things like identifying giftedness in kids who had not been identified by the schools (and maybe the parents) earlier, confirming parental analysis of where kids were strong or less strong in various subjects, etc. I have yet to see or hear of it being used, successfully or otherwise, to differentiate instruction in classes. And, of course, several parents have written in to say that it has not helped them at all -- but not all tests are helpful to all people, so that doesn't necessarily make MAP testing bad, or good.

Based on this observation, here is my question: if MAP is mostly useful in giving parents a way to obtain much less expensive testing of kids to determine areas of acceleration or delay, is that something that needs to be done 3 times a year? I would think identification of an APP kid otherwise missed by the system would be a one time thing. I am less clear on whether 2E parent finds value in 3X a year testing (as opposed to say, 1X per year) but would be curious as to his/her thoughts.

Along the same lines, if this is mostly an assessment of kids who are "outliers" -- gifted, twice-exceptional, severely behind, etc., so that parents can then use the data to advocate for specific placements, or figure out what to do at home to remediate problems that the schools can't solve (if there is a SPED parent anywhere who does not "shore up" academics at home -- or in some cases, take the leading oar at home -- I have yet to meet them) -- isn't that something that could happen in a very different way? Couldn't the District have those kids tested down at Stanford, on an appointment basis, or a rolling basis, where kids who are being tested are bussed down there? Or they could travel around to the schools and set up temp sites periodically?

In other words, -- in order to get the benefits we seem to see -- is it necessary or even advisable to test the entire school population 3 times per year? Because we are paying a heavy price -- in money, lost school time, lost school space, administrator time, etc. -- for the current delivery model. Is it worth it? Or should we change?

Now, of course if the "real" purpose of the test is to provide data for a teacher assessment system -- well, then, that would explain the testing population and the frequency. But that wasn't supposed to be the purpose when we got it, and it is unclear to me whether it works for that purpose (leaving aside the $1,000,000 question of whether we want to use test scores to evaluate teachers at all).
My dog is bigger than yours... said…
http://www.scribd.com/doc/52517469/MAP-STARR-Math

This is gobbledegook...it means nothing. I challenge anyone on this blog to interpret what this NWEA data set means to a classroom teacher with either the highest performers in the district or the lowest, let alone average performers.

BTW we asked for this data last year...now that it is here nobody can tell us what it means to the classroom. We asked for a RIT to Grade Level Equivalency the first year of MAP. What we got is this strange obfuscation.

And...National Norm Correlation takes into account states like Texas and Arkansas and Louisiana. Are we comparing WA to data that factors in some of the lowest performers in the US?

Also, look at the 50% Range for WA in the columns and note that students below 75% require interventions. Sounds like a great marketing tool. "Buy our product because your kids don't pass our test." Huh?

Please bring back Edu-Soft!
Anonymous said…
I have mixed feelings about the MAP. While my son was at an alternative elementary school last year, the school entirely disregarded the value of MAP scores--they didn't send me his scores for any of the 3 assessments until the very last day of the school year! (I didn't know about The Source at the time.)

Throughout the year his teacher never spoke to me about concerns with his academic development, and so we didn't have any academic goals on his IEP. When I got the MAP scores in June I was shocked at how far behind his peers he was in Math and Reading. Now at a new school, we have realized with his new IEP team that he has significant academic delays; knowing those scores last year could have really helped me understand his delays and better advocate for more support sooner.

That being said, in Jan of this year I logged onto the Source and saw that his MAP scores had dropped lower than his entry scores last year! Talking with the teachers about this concern, I found out he was just clicking through the screens without actually trying to take the test. Luckily they saw this (and are making adjustments for him to take the test under different conditions next time). This set of scores really means nothing since he didn't really take the test....

-IEP mom
suep. said…
Thank you, Chris, for organizing the event. We should definitely do more of this kind of parent-to-parent forum on the various issues we parents confront in SPS.

To Melissa, I'd just like to clarify a few points.

The inappropriateness & limited value of MAP for K-2 is what SPS MAP administrators Brad Bernatek and Jessica DeBarros told a group of us parents who met with them in spring of 2010. For this reason, they said, some districts don't use MAP for those grades.

That seems like an obvious option for SPS to take next year -- discontinue giving the MAP to K-2. That would save time, money and a lot of anguish for a lot of kids.

(Btw, the mouse on the screen anecdote was mine!)

Also, DeBarros did acknowledge some of the drawbacks of the test in a memo she wrote in 2009, before the full rollout of MAP. She knew back then that it would impact libraries, and that MAP would have costs. But it was only later that she actually came up with the figure: 40 percent of SPS schools lose their libraries to MAP during the school year. Pretty sad state of affairs.

Here's Bernatek and DeBarros' quote about the costs of MAP from a 4/20/09 memo:
“This is a major decision for SPS. There are substantial up-front and on-going costs [associated with the MAP® test].” At that time, DeBarros was a Broad Resident, hired to do a report on the various testing options. She then was hired by SPS to help administer MAP. Brad Bernatek's title at that time was Director, Research Evaluation & Assessment for Seattle Public Schools (SPS). He was also a former Broad Resident.

Overall, the main focus of my talk was the cost of MAP, of which the annual $400,000-500,000 subscription/licensing fee is one of the more minor expenses. Millions have been spent by the district on MAP-related costs like purchasing computers, hiring more data coaches to train teachers and principals in how to use and process MAP. And then there are the costs in terms of lost class time, and teachers and librarians who are basically being paid to proctor and interpret the test.

--Sue p.
Maureen said…
Here are my notes from the Forum (these weren't written for this blog, so the tone is a little casual!). Based on comments here, I may have some of my facts wrong (like who paid for computers). I really appreciated being able to be there:

It was interesting, but a little looser than I had hoped. I think a few of the speakers (e.g., Dora) were ill and couldn't make it, so other people covered their material. Chris did a good job of trying to find 'pro' speakers, but I thought all they did was support the point that parents like to see numbers attached to their kids. I don't think they had any argument that supported using MAP in particular or that justified the expense. I wish someone from SPS had come and given it their best shot. I assume Chris asked. Interesting that no one came.


Sue Peters addressed the issue of expense. I had hoped that that would be concrete (at least in part), but it seems difficult to track exactly what SPS has put into this. Someone did say that SPS is paying the subscription fees (as well as staff time, etc.); Gates only covered computers and training. We seemed to reach a consensus that the only way to save cash money would be to reduce the number of grades that take it (not # times/year).

One very interesting thing that came out (tin foil hat time!) is that the Broad Foundation is using MAP data to evaluate the effectiveness of their Superintendent training. That is why they are pushing to have MAP implemented wherever a Broad Supe is installed. That is how they figure out how to award the Broad Prize. So basically, they are using kids and librarians and IT people all over the country to give them data to evaluate their own program. Practically criminal. I'm sorry to say that I'm not sure who exactly raised this point and what their backup was. But it also related to something one of the parents involved in the math lawsuit said, which was that NWEA has never won out in a competitive bidding process for testing. All of their contracts have been established through back-door single-bid methods.

Virtually everyone there (about 40 people) voted to discontinue it K-10. So the crowd was a little skewed(!) I think most of the people there were Thornton Creek/Salmon Bay (a 4th grade SB teacher (Marian?) spoke first and was very effective). Both John Miner and Jodee Reed were there.....


A Salmon Bay parent (Damian?) was filming it so maybe it will be available to view? Chris will make all the backup links and documents available to people on the email list.
Patrick said…
On the whole, I'm against MAP, but it does a few good things that kids and parents don't get otherwise.

I'm not a big fan of the elementary school report card. It really just says "below grade level, at grade level, or above grade level", and I'd really like an idea of how far above or below grade level.

I'd really like to get the class work and unit tests back promptly, instead of at the end of the year or never. I want to see exactly what she's doing well or having trouble with. We supplement at home, and those tests could help guide our supplemental work. I feel like I'd be intruding on the teacher's prep time to ask for photocopies of the class work and tests all during the year, and I don't have enough time off of work to come in weekly and copy them myself.

So, MAP at least attempts to give some feedback during the year that's more specific than the report card.

I also like that the test adapts to test things the students haven't learned in class. That's bad from the point of view of using it as part of a teacher's evaluation, but very valuable as a test for APP or Spectrum, or for students from outside the district, or whose parents teach them things on their own.

However, I am not prepared to take on faith that MAP is a valid test. I am deeply skeptical of a proprietary, secret test. What statistical analysis we have does not boost confidence in MAP. Why should NWEA learn more about my child's test results than I do?

One parent at the meeting made the point near the end that people opposed to MAP need to have something else to propose. Relying entirely on the report card isn't going to do it, in the minds of many parents. Edusoft tests were before my time -- were they better? Could we give the MSP two or three times a year, with students who score above the 85th percentile getting tested on the following grade's test as well? Are there competitors to MAP that would do better?
MAPsucks said…
Maureen,

Put the tin foil hat away for another day...

http://www.broadprize.org/asset/1214-09summarydataanalysisprocedure.pdf

Broad uses MPR Associates (Tom Paysant's source) and NWEA data to rank and grade "urban" school districts for their Broad Prize.
MAPsucks said…
Patrick, there are a number of them but Broad and Teach For America don't use them so no-go.

To name a few alternatives: American Institutes for Research (AIR), Scholastic, Bookette, etc.
MAPsucks said…
This presentation is very interesting because it describes what a true formative assessment looks like, how it's developed by teachers, and its usefulness. It discusses these in the context of the WASL. Now, to this parent's untrained eye, this process makes sense. Too bad it does not resemble NWEA's product.

What Are Common Assessments?
 “Not standardized tests, but rather teacher-created, teacher-owned assessments that are collaboratively scored and that provide immediate feedback to students and teachers.”
MAPsucks said…
Hmm, spam block ate my post BUT:

If we as parents are truly interested in formative assessments, then we should rally behind the process spelled out so well in these two links. This process offers true collaboration and assessment of student progress.

Common Formative Assessments
– “An assessment typically created collaboratively by a team of teachers responsible for the same grade level or course. Common formative assessments are frequently administered throughout the year to identify (1) individual students who need additional time and support for learning, (2) the teaching strategies most effective in helping students acquire the intended knowledge and skills, (3) program concerns – areas in which students generally are having difficulty achieving the intended standard – and (4) improvement goals for individual teachers and the team."

http://www.rcs.k12.tn.us/rc/RCS_NEW/Instruction/plc/plc1.html

This document is VERY interesting because it is written in the context of WA state standards. Again, it describes something that this parent would love to see in place.

http://www.wera-web.org/pages/activities/WERA_Spring09/Ainsworth%20Keynote%20.pdf

Too bad NWEA's product doesn't resemble this in the least.
Joan NE said…
The NWEA norms on the MAP-STARR chart provide me, at least, with some very interesting insights into the MAP product.

The value of this chart increases if one adds the uncertainty interval that is typical of student RIT scores.

The uncertainty attached to student scores is typically +/- 6 RIT points regardless of grade level. The RIT score ranges on grade reports (typically +/- 3 RIT points) correspond to only about a two-thirds confidence interval, not the full 95% interval.

Parents need to draw in the uncertainty interval when they plot their child's data on the NWEA norms chart (or MAP-STARR); these intervals will help the parent avoid overinterpreting the test-to-test variation in scores.

If you plot these vertical bars (+/- 6 points) on one of the norm curves in the MAP-STARR chart, you can see that the expected growth between benchmarks becomes very small in the upper grades, compared to the size of the confidence interval.

What this means is that in the upper grades, a very high proportion (on the order of 40%) of students tested will see a score drop over any two benchmarks (fall-to-winter, winter-to-spring).

It is hard to understand what value 3x/yr MAP testing is providing in upper grades when the expected growth is so small compared to the uncertainty.

In the lower grades, the expected growth is much larger relative to the uncertainty. Consequently, the data from elementary MAP tests has far more information content than the data from the upper-grade tests.

I have located a copy of an NWEA report that is not generally available to the public, and which speaks volumes about the patterns of uncertainty in the NWEA test score database.

There are charts in this report that show the very high level of test-to-test score variability seen over the score trajectories of 300 randomly-selected individual students.

Viewed on a coarse scale, MAP data has some value for some parents, at least at the lower grades, but it seems to me to have about zero value in the upper grades.
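
A back-of-the-envelope sketch of the arithmetic behind the "on the order of 40%" figure, assuming normally distributed measurement error. The +/- 6 RIT (95%) uncertainty comes from the comment above; the 1-point expected benchmark-to-benchmark growth is an illustrative assumption, not an NWEA figure.

```python
# Back-of-the-envelope check of the "~40% see a score drop" claim,
# assuming normally distributed measurement error.
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

sem = 6 / 1.96              # +/- 6 RIT as a 95% interval -> per-score error ~3.1
sd_growth = sqrt(2) * sem   # error on the difference of two scores, ~4.3
expected_growth = 1.0       # assumed true benchmark-to-benchmark gain (illustrative)

# Chance the measured growth comes out negative despite real progress.
p_drop = normal_cdf(-expected_growth / sd_growth)
print(f"chance of an apparent score drop: {p_drop:.0%}")  # ~41%
```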
Joan NE said…
I loaded into scribd a one-page flyer called "Parent's guide to interpreting their child's MAP scores."

http://www.scribd.com/doc/52608936


This is for parents who worry that MAP scores indicate lagging achievement (percentiles are dropping) or low-level achievement (percentiles are low).

Please comment on this flyer - let me know if it is wrongheaded. If so, I will pull it.

One person complained that the NWEA norms chart has no value for educators.


If a teacher is inclined to accept the notion that kids working at grade level tend to plot between the 75% and 85% passage rate curves, then perhaps a teacher can use the chart to estimate the levels of standards mastery represented in his/her classroom.

I don't know for certain yet, but I suspect that the median of SPS students is close to the NWEA national median. IF correct, then 50% of SPS ninth graders are scoring at or below the NWEA national median.

IF correct, this chart suggests that about 1/2 of SPS 9th graders have RIT scores indicating that they should be studying Grade 5 standards or lower, rather than Algebra.

The preceding interpretation of the MAP-STARR is based on two assumptions.

1. Students will have a difficult time mastering math standards if they did not achieve at least a moderately high level (say 75% or better) mastery of the prior grade level standards.

2. Having a MATH RIT score at least above the 75% passage rate line is a good indication that the student is making good progress on grade-level appropriate standards. This does not mean that plotting below this line is clear evidence of poor mastery. (Students who have very good guessing strategies on the MAP or are studying above grade-level will tend to plot above the 85% line.)

In light of this interpretation we can see how wrongheaded SPS policy on Algebra enrollment is, i.e., all kids take algebra by ninth grade at the latest, and algebra teachers do not teach remedial math.

What happens is that science teachers in HS are using many weeks of their instructional time to teach remedial math, so that students can do the math required for the science course.
Chris S. said…
This comment has been removed by the author.
Chris S. said…
I would like to highlight this from Joan's post - it's worth reading twice:

"It is hard to understand what value 3x/yr MAP testing is providing in upper grades when the expected growth is so small compared to the uncertainty."

To say it another way, the change you see over a yearly testing interval might be large enough to be significant (i.e. believable, not attributable to chance), but over shorter intervals, it's almost certainly not.

And Joan can correct me if I'm wrong, but the published uncertainty measure refers to the 3-8 MAP test. I don't believe any such number has been provided for MAP for Primary Grades, NWEA's K-2 product. I am assuming it would be higher, if only for the greater confounding in younger kids by computer familiarity.
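
One way to put a number on this, again assuming normal measurement error: compute the smallest change distinguishable from noise at 95% confidence, using the +/- 6 RIT (95%) figure from Joan's comment above. The per-interval growth figures are illustrative, taken from the norms numbers quoted elsewhere in this thread.

```python
# Minimum detectable change (MDC) at 95% confidence, assuming normal
# measurement error. SEM is inferred from the +/- 6 RIT (95%) figure.
from math import sqrt

sem = 6 / 1.96              # per-score standard error, ~3.1 RIT
mdc = 1.96 * sqrt(2) * sem  # smallest believable change, ~8.5 RIT

cases = [
    ("grade 2 math, full year", 15.0),
    ("grade 2 math, one benchmark (~1/3 year)", 5.0),
    ("grade 10 math, full year", 3.3),
]
for label, gain in cases:
    verdict = "detectable" if gain > mdc else "lost in the noise"
    print(f"{label}: expected gain {gain} vs MDC {mdc:.1f} -> {verdict}")
```

On these assumptions, a full year of growth in the early grades clears the noise floor, but a single benchmark interval, or a year in the upper grades, does not.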
Chris S. said…
I'd also like to point out that Joan's charts visually support a correlation between MAP RIT scores and state standards (and indeed the NWEA source study was designed to show "alignment" of MAP with WA state standards).

HOWEVER, Maureen made the very good point that both the MSP (state) and MAP are highly correlated with SES (socioeconomic status), so you can wonder what kind of correlation would be left if you could adjust for that. Also, that observation is probably the key to how the MAP can be simultaneously aligned to many states' standards.
Joan NE said…
This comment has been removed by the author.
Joan NE said…
Chris - the MAP statistics are poor at upper grades at the one year interval also.

Table 4.3a of the NWEA complete norms report (2008) shows that the expected one-year growth (fall to fall) for 10th grade MATH is 3.3 RIT points, and the Standard Deviation (SD) for "individual test scores around growth trajectories" is 12 RIT points. Thus the 95% confidence interval for growth for a 10th grader is roughly [-21 ... +27] RIT points. Nearly 1/2 of the confidence interval for growth on MAP in 10th grade is in the negative growth realm.


I received SPS data recently that shows the expected pattern: an increasing proportion of students showing negative growth as grade level increases, over the Fall 2010-Winter 2011 benchmark interval, with roughly 40% of tested kids in Grades 6-9 showing negative growth.

The situation for reading at the upper grades is even worse.

The best statistics for MAP (2-11) are in Math Grade 2, where the fall-to-fall (one-year) EG is 15, and the SD is 9.

Grade 2 has the best statistics for Reading: mean fall-to-fall EG is 14, and the SD is 10.5.

Thus, less than 1/6 of 2nd grade kids should show negative growth on the Reading or Math MAP.
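
A minimal sketch of the tail-probability arithmetic behind these figures, assuming observed growth is normally distributed around the expected growth with the quoted SD:

```python
# Share of students expected to show negative measured growth, assuming
# observed growth ~ Normal(EG, SD). The EG/SD pairs are the NWEA 2008
# norms figures quoted above.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

for label, eg, sd in [
    ("grade 10 math, fall-to-fall", 3.3, 12.0),
    ("grade 2 math, fall-to-fall", 15.0, 9.0),
    ("grade 2 reading, fall-to-fall", 14.0, 10.5),
]:
    print(f"{label}: {normal_cdf(-eg / sd):.0%} expected to show negative growth")
# -> roughly 39%, 5%, and 9%: about 40% in the upper grades,
#    and well under 1/6 in grade 2.
```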
Anonymous said…
"I'd also like to point out that Joan's charts visually support a correlation between MAP RIT scores and state standards"

I'm going to split hairs and say this isn't exactly true. The correlation is between MAP RIT scores and passage rates on state tests. The skill set necessary to get comparable scores may be similar, but it is not necessarily true that the MAP is aligned to state standards. My understanding is that the "correlation" simply provides a predictor of MSP pass rate based on RIT score. Who knows what state standards have been mastered.
Joan NE said…
Correlation between RIT and State Standards? Anonymous' comment on this is correct.

I suggest very roughly that the band between the 75% and 85% level represents students working toward good or better mastery of state standards.

This is speculative; it is based on knowing the distribution of RIT scores of a class of tracked students who were studying 6th grade standards.

If one accepts this suggestion, then you can start using this chart to estimate how far behind grade level a kid is whose RIT scores are tracking at a much lower level than the 75% passage rate curve.

For example, a ninth grader at the 50% national norm perhaps has only mastered 4th or 5th grade standards. Should this student be forced to study algebra in 9th grade?

We must accelerate the kids that are at low percentiles in the earliest grade possible so that a much higher % of SPS kids are ready to study algebra by ninth grade and be successful in it.
chunga said…
Being nationally normed says nothing about the value or validity of the MAP test. As a mostly multiple-choice test, MAP provides only a limited and relatively superficial view of student learning. Since it has no value as a formative assessment or tool for instructional planning and is not suited for school or teacher assessment, the district really has no business spending millions of dollars on MAP, not to mention the time and facilities it takes up.

What is even more disturbing is that NWEA recommends teachers work with students to set target goals. Refer to this video to see how dystopian this can get: http://www.youtube.com/watch?v=MVLwu6uQK2I&feature=player_embedded.

If some parents really want to see such scores, then perhaps it can be given on an opt-in basis.
Joan NE said…
I agree with Chunga.

Due in part to national norming, MAP does provide some data value - for example it can be used to detect score inflation in independent standardized tests (I cannot now locate the published example of this that I saw recently).

Another example: Through nationally normed MAP data it is possible to compare the between-state rigor of state high stakes summative assessments. This is a proxy for comparing the rigor of different states' standards.

I agree with Chunga that MAP ABSOLUTELY should NOT be used for instructional planning.

1. The use of MAP data for this purpose has not been validated.

2. The technical characteristics of the MAP/DesCartes products and the statistical noise in individual student data do not support this use of MAP data.

Our children are harmed when MAP is used for instructional planning.
Joan NE said…
I made a reference to a comment by Chunga... here is that comment:

chunga has left a new comment on the post "Reflections on Standardized Testing Forum":

Being nationally normed says nothing about the value or validity of the MAP test. As a mostly multiple-choice test, MAP provides only a limited and relatively superficial view of student learning. Since it has no value as a formative assessment or tool for instructional planning and is not suited for school or teacher assessment, the district really has no business spending millions of dollars on MAP, not to mention the time and facilities it takes up.

What is even more disturbing is that NWEA recommends teachers work with students to set target goals. Refer to this video to see how dystopian this can get: http://www.youtube.com/watch?v=MVLwu6uQK2I&feature=player_embedded.

If some parents really want to see such scores, then perhaps it can be given on an opt-in basis.
chunga said…
Joan - it's not clear whether MAP is a good proxy for checking state standards. Without some assessment of the quality of MAP, isn't it equally likely that MAP will correlate better with a "bad" state standard as with a "good" one?

And, if our goal is to have some independent benchmark of state standards, don't we already have NAEP?
chunga said…
Anonymous - I don't think you're splitting hairs. It is hardly clear that a correlation with MSP pass rate means much of anything. The MSP is certainly derived from state standards, but is itself a limited, and I would argue superficial, sample of students' mastery of them.
Joan NE said…
Chunga, because NWEA has done what they call "alignment studies" for most of the states, it is possible to use the results of these studies to compare the difficulty of state assessments:

If we place on a single chart, for a single grade level and for multiple states, the 85% (say) passage rate curve, the relative position of these curves indicates the relative difficulty of the state assessments.

Incidentally, the MAP-STARR for Reading, which I haven't posted yet, shows the passage rate curves to be flat from grades 8-10. This means that the state's grade 8 and grade 10 reading assessments have similar difficulty. Just another example of what you can learn from having nationally normed data that is not subject to inflation and is not bounded.

Yes, we do have NAEP for detecting score inflation, so MAP is not needed for this, but MAP provides another means for detecting inflation. Are not far more students tested by MAP each year than by NAEP?

Aren't NAEP tests confined to some concept of grade level standards? That is, aren't NAEP tests bounded, unlike MAP?

If so, then this gives MAP an advantage.

I have mixed feelings about MAP.

I find high quality data can be very useful for us parents who are looking to prove to the district/state/feds that certain policies/curricula are not in the best interest of children.

For certain limited applications, MAP data can be considered high quality. But for these appropriate purposes we do not need 3x/yr scores.

I would rather see our District adopt a different benchmark system that provides high quality, validly actionable, formative assessment data, accessible to parents and teachers, and that provides the aggregate data value that MAP provides.

It would be good to have a product that, like MAP, does not have a floor or ceiling.

It would be good to have a product that is valid for use with special education and ELL students.

I agree that the District should make it easy for parents to opt out, or to have an opt in program.

Be forewarned - your child's MAP data is very probably going into a personally-identified (via SSN) state-wide longitudinal data system.
Melissa Westbrook said…
Joan, I thought the issue of student identification got taken care of during the contract debate. I recall them saying that they would take that out. Is it still there?
Joan NE said…
Melissa,

I don't know. If NWEA is not getting SSNs and names, that is very good. In general, does NWEA try to get contracts that call for the district to pass student names/SSNs? If so, that may well be a violation of FERPA.

Does SPS give MAP data to the State's CEDAR system? I wouldn't be surprised if it is doing so. Something I could check into. That is what I was suggesting might be happening.
Joan NE said…
On the question of the meaning of correlations of RIT scores to percent chance of passing MSP/HSPE:

The chart helps us to understand that the correlation isn't very good. NWEA publishes the correlations.

The fact that some kids with high RIT scores are failing the MSP/HSPE, and some kids with low RIT scores are able to pass these tests, tells us that the alignment isn't particularly good.

Take a bunch of kids with the same level of mastery of Math standards, and you will see a pretty large variation in RIT scores.

Take a bunch of kids with the same RIT score, and you will find a pretty large variation in level of mastery of state math standards.

You will also find a huge distribution in the scores of this group of students on the next benchmark.

This phenomenon is documented in the 2008 NWEA Complete Norms report (see Figure 6.1 therein), and is expected due to the statistics of the test.

This is why parents and educators need to be cautioned against overinterpreting RIT scores.

Once a child has a history of several MAP scores, and there is not too much variation, then the observed growth compared to expected growth becomes more meaningful than the absolute level of the scores.

Because RIT scores vary significantly for students having the same level of achievement, it is inappropriate to use MAP data for instructional planning and for flexible grouping!!!!
Joan NE said…
I just now checked to see if MAP scores may be going into our state's LDS (Longitudinal Data System).

I found this document.

http://www.erdc.wa.gov/datasharing/pdf/data_inventory.pdf

I do not see any mention of MAP scores, only state assessment results.

At least one state (Delaware) desires to have MAP scores be part of their LDS.

If you haven't heard of LDS, well, it's something to get educated about. There are profound privacy issues associated with it. Gates' money is a major impetus behind encouraging all states to adopt mutually compatible LDSs, to facilitate seamless national-scale integration of state LDS data.
Joan NE said…
here is the proper link...

http://www.erdc.wa.gov/datasharing/pdf/data_inventory.pdf
chunga said…
Joan - thanks for the clarification on the alignment studies that NWEA does (more info at http://www.nwea.org/sites/www.nwea.org/files/NWEA%20State%20Standards%20Alignment%20Study%20Methods%20_3_.pdf for those interested). I can see how MAP gives some clues about level of difficulty of state standards, but level of difficulty doesn't seem like the same thing as level of quality - that is, harder does not necessarily mean better (refer to http://www.alfiekohn.org/teaching/edweek/chwb.htm).
Po3 said…
The TFA contract I pulled up allows the release of student data to TFA, and allows TFA to disclose it to third parties, if I am reading the following correctly.

http://www.seattleschools.org/area/board/10-11agendas/110310agenda/tfacontract.pdf

Pursuant to its obligations under the Family Education Rights and Privacy Act (“FERPA”), Seattle Public Schools hereby acknowledges that in the course of providing on-going professional development services for the purposes of improving instruction, Seattle Public Schools may disclose to Teach For America student identifiable data from individual Teachers, pursuant to 34 CFR §99.31(a)(6)(i)(c).
iii. Teach For America shall use and maintain such data as provided in 34 CFR §99.31(a)(6). In accordance with 34 C.F.R. § 99.33(b), Teach For America may re-disclose student identifiable information on behalf of Seattle Public Schools as part of Teach For America’s service to Seattle Public Schools of providing on-going professional development services.
iv. In accordance with 34 CFR §99.31(a)(6), Teach For America may also disclose student identifiable information on behalf of Seattle Public Schools to additional parties, provided that Teach For America, in advance, provide to Seattle Public Schools the names of such parties and a brief description of such parties' legitimate interest in receiving such information.
StopTFA said…
OMG! We ARE Red Clay district!
Anonymous said…
This comment has been removed by a blog administrator.
Anonymous said…
An interesting development at NWEA: does this change your opinion?
NWEA.org/mpg-partners
TeacherTeacher
