Wednesday, October 06, 2010

MAP is definitely being used as a barrier to Advanced Learning

I now have folks reporting that they are getting explicit messages from the Advanced Learning department that only students who score in the 85th percentile or better on the MAP will be scheduled for the CogAT.

A family that submitted an application got this form letter response:
We received the parent permission form to have your child tested for Seattle Public Schools' Advanced Learning programs. Applicants' Fall MAP scores will be reviewed and applicants with both reading and math at the 85th percentile or higher will be scheduled for the Cognitive Abilities Test. Cognitive testing will be conducted from late-October to mid-December. Some schools will have Saturday test sessions. You will receive either a test date appointment letter or an initial review letter several weeks after the application deadline.
So if your kid didn't test well on the Fall MAP, you're on the appeal path from the start.

76 comments:

wsnorth said...

This doesn't sound like a barrier to me. In the past, the district didn't seem to do anything to encourage students to apply for Spectrum, did it? Looks like possibly a rare case of improved outreach to me.

dan dempsey said...

Received the following from a friend on the East Coast.


I am honestly fascinated by the idea of a lawsuit coming against a company for the insanity of its product. It makes me warm and fuzzy inside. I don't have anything organized, but here are some bullet points about my experience with that test and company:

- Everyone working for them seemed to be in sales. Nobody could speak to the academics of the test when they came for training.

- The training was ridiculous. It was the same surface-level junk over and over.

- The supposed connection between the Maine Learning Results, the SAT (Maine's NCLB test) and the NWEA test never came to fruition. We were testing students with no idea where it would lead us.

- The test itself was ridiculous, testing minutia and using poorly worded, misleading questions.

- The results of the test were so erratic that there was no possible conclusion other than the tests' being invalid and students not taking it seriously.

- The physical format of the test was awful, with tiny screens that don't show entire reading passages at once.

- Salesmen refused to acknowledge the possibility that students would do better on the test as a result of taking it multiple times, which has been shown with tests such as SAT and NY Regents.

- Scores were not transferable between math and ELA, making it very difficult to decide what a score actually meant.

- There was no writing on their "writing" test.

- A huge piece of the problem, of course, was my district's willingness to hand over control and have faith in a completely unproven product.

I wish I could get more specifics for you, but it's been almost 2 years now since I've seen the test. I got the shivers (a form of PTSD, perhaps), though, when I saw the NWEAs mentioned in conjunction with Irvington, NY. What our Board of Regents is doing to our schools is embarrassing enough. If you think I can be of any help, let me know. I am always on a crusade against wasting time while promoting insanity in our schools.

Eric M said...

If you seek to exempt your child from CRAP testing in 7th grade, you'll get a stern warning from counseling staff that they can't possibly qualify for any accelerated/advanced classes. Plus they promise to have no alternate activity available beyond sitting. The library, of course, is not available, since it's fully occupied for CRAP testing.

My son got so bored, he went and took the test anyway, after a couple of hours.

Holy crap, what kind of a world have we let fall on kids?

zb said...

I think most parents would read that text as a barrier. It might not be one technically and legally (I hope it gets challenged), but most parents would take the letter to mean that their kids won't get to take the CogAT unless they score at the 85th percentile on the MAP.

(One might think that's a reasonable barrier. But it is a barrier).

Anonymous said...

You mean, the "barrier" to the gifted class is that you need to actually test above the 85th percentile? You have to be ahead in math and reading? Oh, the injustice!!!! That's what private testing is for, isn't it?

Signed, Let them Eat Test

Laura said...

Will SPS automatically give the CogAT to any student who tests above the 85th percentile on MAP? That would be improved outreach, but I doubt that's what they're planning to do.

Once a student passes the CogAT IQ threshold, will they be given another achievement test, or is MAP the achievement test?

Keep in mind, parents will have no feedback on how their student did on MAP until after the Oct. 14th deadline for applying for advanced learning.

Anonymous said...

I wonder how they are going to handle FR/L families who appeal based purely on MAP scores, when they didn't get to take the COGAT the first time around. Perhaps their Kindergartener had never used a mouse before they started school and then in week 2 they had to take the MAP.

An SPS Parent

ParentofThree said...

Here is how I read what is going on this year.

Submit application.
(or be invited to apply based on MAPs scores, this is the outreach.)
AL Dept reviews.
If MAP meets bar, student will receive test date.
If MAP does not meet bar, student does not receive test date and it is over as you cannot appeal this decision. You can only appeal the decision that is mailed first week of February.

This is a HUGE change from last year, when they used WASL scores. Parents knew going into the CogAT whether their student met the bar and the likelihood of having to appeal once the CogAT test results were sent.

This year, students appear to be eliminated from the CogAT altogether if their MAP scores are not high enough - and no appeal is possible.

Private school kids are at a HUGE advantage this year. (Lovely.)

I believe this change is in place to reduce the number of students qualifying for the program.

I believe it is called Capacity Management!

Lori said...

You will receive either a test date appointment letter or an initial review letter several weeks after the application deadline.

I wonder what a review letter is? Something that says "Sorry, the MAP scores were not high enough, therefore, you are not eligible for district-funded testing"? I hope it describes how to appeal and provides a mechanism for testing for families without the means for private testing.

I will ask the Super about these changes at the coffee chat I plan to attend next week.

Melissa Westbrook said...

WSNorth, not true. Dr. Vaughn did DO a lot of outreach including calling people whose kids did well on the WASL to let them know about the program.

This reminds me, gotta nudge Dr. Enfield on opting out again.

emeraldkity said...

The MAP test is computer based?
IMO not a very effective way to test for giftedness for anything besides who is comfortable working on a computer.

This school district really irks me.
My kid qualified to take the SAT in 7th grade through CTY (through her 6th grade CAT scores) but didn't qualify for the SPS gifted program.

They need to have an alternate measure to a computer test.

Lori said...

They've updated the website about all of this. Go to the "Determining Eligibility" link.

If I were a parent with a child that I thought would benefit from Spectrum but whose fall MAP scores were below the 85th percentile, it looks like I'd have all of fall and winter to work on an appeal packet. I would do private testing for both a cognitive test like the Wechsler AND the Woodcock-Johnson for math/reading. Adding the Woodcock-Johnson is an additional expense for the family, but I think with "low" MAP scores it would make for a more solid appeal.

I suspect these changes will actually lessen the diversity in Spectrum (and potentially APP) because there will be a greater burden on families to provide private test results.

And, I still want to know if this change was data-driven or if the 85th percentile was an arbitrary choice. How many of the kids who tested into Spectrum last year (need to be 87th percentile or higher) scored at least in the 85th percentile on MAP last year? It better be all of them if they are using this as a screening tool.

anonymom said...
This comment has been removed by the author.
anonymom said...

Using MAP scores certainly can be considered a barrier, however, I think the district probably implemented it as a cost savings measure. It cost the district $90 per student for Cogat testing. Many families nominate their children to take the test even if they have no reason to believe that their child is advanced, and have no intention of enrolling in APP or Spectrum. Why? Because the Cogat measures where a kid is, academically, compared to their SPS peers. It provides a parent with data, and information, about their child.

Allowing only students with MAP scores of 85% or higher to test will reduce the number of kids testing, and save the district money.

I'm not saying that I agree with using MAP scores to determine who is eligible for testing, because I don't. I'm just saying that I think the district probably implemented this as a cost savings measure, rather than an intentional barrier to advanced learning. Of course, that is just my opinion.

Anonymous said...

I'm not sure why it's any less unfair to use than the cogat, but I'm biased because my very high achieving and MAP scoring kid was excluded based on his Cogat performance despite a clearly demonstrated need for higher level work. We left the district.

-Frustrated mom.

emeraldkity said...

"I would do private testing for both a cognitive test like Wechsler AND Woodcock Johnson for math/reading. Adding the Woodcock Johnson is an additional expense for the family, but I think with 'low' MAP scores it would make for a more solid appeal."

Unfortunately, I see that as even more divisive by economics than a computer test, because it is very expensive to pay for IQ testing. We only did so because our insurance paid for most of it, but coverage has dropped.

It provides a parent with data, and information, about their child.

If parents/teachers actually are getting information, then that is something.

With the WASL, of course, you didn't get anything but a score on the 1-4 scale, and while you could make an appointment to view the test booklet, you couldn't even take notes.

For someone like me, who has a memory like a sieve, that would have been a complete waste of time.

I understand that a computer test would be cheaper to administer than the WASL, and I applaud using an alternative; however, it also seems expensive to keep changing the evaluation methods.

My oldest child is currently getting her master's in teaching (in Oregon); I will be interested to find out what tests they will be covering for use in the classroom.

Melissa Westbrook said...

Anonymom, you are right. Parents who have their child take the test with no intention of moving them anywhere based on those results DO cost the district money. Testing isn't free and, according to Dr. Vaughn, is their office's biggest expense.

Of course, there is that "unintended consequences" stuff.

Charlie Mas said...

Frustrated mom represents a perspective that reflects both the need for Spectrum and the deficiencies in the District.

If a student who is ready and able to succeed with advanced work cannot get it in a general education classroom, then the District needs to provide Spectrum so students who need that advanced work can reliably access it.

Second, there is no legitimate excuse for a school failing or refusing to provide advanced work for EVERY student who is ready for it regardless of the program they are in. This represents a gross failure on the part of the school - and the District for failing to assure school quality.

jp70 said...

What I found funny was that when my son was in kindergarten, his teacher strongly recommended that he take the CogAT. At his parent-teacher conference the teacher said that he belonged in Advanced Learning and encouraged us to appeal if his test scores didn't qualify him for Spectrum. His CogAT scores did not qualify him, so he went on to private testing (due to a very convincing teacher) and tested high enough for APP. We put him in Spectrum. Then we got a letter this year from SPS congratulating us on having such a bright child who scored high enough on MAP, encouraging us to apply for APP.

I thought it was great that they sent the letter to those who scored high on MAP in the spring, but there is obviously something different among the CogAT, the MAP, and the test the private testers use (can't remember what it's called) to produce such a huge difference in scores.

Chris said...

The letter does say [to conserve resources please don't take the test unless you'd consider taking advantage of the program.] FWIW.

hschinske said...

The MAP test is computer based?
IMO not a very effective way to test for giftedness for anything besides who is comfortable working on a computer.


Okay, but how many kids are old hands with Scantron forms? I can't see how it's any worse. If anything, testing on the computer seems rather child-friendly to me.

Helen Schinske

Rose M said...

I think that the MAP is a better test of gifted achievement than the WASL.

Lori said...

Like Anonymom, I know there are families who have/had their kids tested by the district simply because it's free and they were curious, not because they suspected their child needed access to a more rigorous program. So I understand and even support the district wanting to only test the children who are most likely to qualify for Spectrum or APP.

But, given how important it is to have advanced learners appropriately challenged, I want to be sure that the new "screening" test (MAP) casts the net wide enough not to miss anyone. Now that the district want to make data-driven decisions, I want them to tell us how they chose the 85th percentile as the cutoff.

There is an incredibly easy way to compare this new system to the old system. Take the results from all of last year's testing when parents and/or teachers had to nominate and MAP was not used. Determine how many of those tested qualified for Spectrum or APP and how many did not. Now, cross-tabulate with their fall MAP scores from last year. Determine how many kids who tested in via CogAt were at or above 85th on MAP as well as how many tested in but had a MAP score below the threshold. These two numbers allow us to calculate the "false negative" rate, or the risk of missing a Spectrum/APP-qualified child due to reliance on MAP for screening.
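For what it's worth, the cross-tabulation described above is simple arithmetic once the counts are in hand. A minimal sketch, using invented counts (NOT actual SPS data) purely to illustrate the calculation:

```python
# Sketch of the proposed cross-tabulation. The counts below are made up
# for illustration only; they are not real district numbers.

# Of the students who tested into Spectrum/APP via the CogAT last year:
qualified_map_at_or_above_85 = 180  # also scored >= 85th percentile on fall MAP
qualified_map_below_85 = 20         # scored below the 85th percentile on fall MAP

total_qualified = qualified_map_at_or_above_85 + qualified_map_below_85

# False negative rate: the share of CogAT-qualified kids the MAP screen
# would have turned away before they ever saw the CogAT.
false_negative_rate = qualified_map_below_85 / total_qualified
print(f"False negative rate: {false_negative_rate:.1%}")  # prints "False negative rate: 10.0%"
```

Lowering the MAP cutoff shrinks this rate, at the cost of sending more students through the CogAT who ultimately won't qualify (false positives).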

As a parent, I don't want there to be any false negatives. In my scenario, the district would keep running the numbers until they found the MAP percentile that had no false negatives. Is it 85th? I don't know. That number feels arbitrary to me because it's so close to the 87th percentile required for Spectrum. My gut says the cutoff should be a little lower.

So, how do we find out if this was a data-driven decision or not, other than asking at an upcoming coffee chat?

hschinske said...

There will never be a scenario with *no* false negatives, because you can always score lower than your true ability, possibly much lower. That's why no one score should ever have an absolute veto.

Helen Schinske

Lori said...

I completely agree, Helen. That's why I said "as a parent" it's what I'd want. But as a realist and taxpayer, I know it's not possible.

In reality, they would have to balance the false negative rate with the false positive rate (the percent of MAP-identified kids who ultimately don't score high enough on CogAt) to find the right threshold. Every screening test needs to balance these two factors so that you identify as many as possible while not putting too many others through the test unnecessarily.

As you lower the MAP percentile for your screening, you likely increase the false positive rate. Maybe that's okay; it all depends on your priorities (saving money versus identifying advanced learners).

ttln said...

The information on NWEA wasn't super easy to find. But I got it.

From the NWEA website:
"The 95th percentile is a nationally accepted norm for identify students for placement in accelerated programs. NWEA strongly encourages districts to set TAG guidelines that meet the needs of their district. Districts can use percentile charts in the NWEA RIT Scale Norms publication to set any desired TAG guideline."
Looking at the RIT Scale Norms publication, a person would see that there are three levels indicated for "high achievement"; the lowest of the three is at the 87th percentile. So the district looks to have lowered the Spectrum invite bar to better "fit the needs of the district."

TechyMom said...

From the Advanced Learning Appeals page:

"Appeal Process for 2010 - 2011 Testing Cycle Is Under Construction

If your child participates in the 2010-11 testing cycle, and you are unsatisfied with the eligibility results, you may appeal the decision.

Deadline: Eligibility letters with test scores and eligibility results will be mailed by the first week in February. The appeals deadline dates will be included in eligibility results letters.
"
Ok, it does say under construction, but this really sounds to me like kids who don't get to take the CogAt because their MAP scores are too low can't appeal.

I say this because of

"If your child participates in the 2010-11 testing cycle"

and
"The appeals deadline dates will be included in eligibility results letters."

Not the initial review letters, but the February letter. Will students who get an initial review letter saying their MAP score is too low get a February letter too? Did students who applied but didn't get to take the CogAT participate in the testing cycle?

Not allowing appeal of MAP scores would be a very big deal indeed. You've always been able to appeal.

I'm actually fine with the district not paying for tests for kids who didn't hit the MAP threshold, but disallowing appeals would be very bad indeed. I do like that they sent letters to everyone who got high spring MAP scores asking them to apply.

Lori said...

It suggests that you may be able to appeal if MAP is low and you aren't invited for CogAt. This is from the grid on the eligibility page:

Specific information on how to appeal will be included in eligibility and/or initial review letters and is available on the Appeals web page. Private assessments will only be considered during the appeal process.

I agree with TechyMom, though, that it's really unclear. I think the district needs a good editor who can clean up their prose and make it consistent from page to page so that there isn't so much confusion and anxiety every time they try to announce something new.

southend girl said...

What about kids whose test scores vary over the year? I know kids who would have qualified with winter MAP scores, but just missed in the spring.

Anonymous said...

Does anyone know what the MAP cutoff is for APP? Do you have to score higher than the 95th percentile or the 99th percentile to be invited to apply?

The SOURCE does not have MAP scores, just DRA and WASL. We asked for the MAP results last year and were told by the teacher that it was such a new assessment that most teachers didn't know what to make of it yet. Because of this, the teacher declined to give out the results. Do you know how we can get this info?

Seeking Answers

ttln said...

Elementary schools are giving them out at conferences. Middle schools? Who knows. We have our first-quarter progress reports going out on Monday. It is too late to arrange it now, but it would have been a good idea to send them home at that time.

I will shoot out an email to my principal and see if he is game for trying to hurry up and get them ready for Monday - that is, if he is allowed to. I am sure a letter would have to go home explaining the results.
Who knows what kind of stir it would cause if one school sent them out and the others didn't.

Melissa Westbrook said...

I'm sorry but what? Yes, we should get to see the MAP scores. We paid for them.

emeraldkity said...

Scantron can be difficult because if you get off on the bubbles, it can throw the whole test off (that is, unless you have a kid who intentionally throws it off by making designs).

Many computer exams (I don't know about MAP) do not let you go back and review answers, or skip ahead when you are stuck.

For students who have processing challenges (as with executive function), this doesn't give an accurate measure of their thinking skills or of what they have learned.

wsnorth said...

"Dr. Vaughn did DO a lot of outreach including calling people whose kids did well on the WASL..."

Really, this is outreach? Dr. Anybody cold calling parents? That doesn't sound efficient or effective.

Spectrum and APP are heavily weighted towards non-FRL students whose parents know how to "work the system".

This approach, executed properly, should open these programs up to more high achievers who would otherwise not have known about them.

Rose M said...

I agree that the MAP scores should be posted on the source. They are tied to student ID number so it could be automatic.

However if you can't get your teacher or principal to tell you the scores, have your child write them down at the end of the test. They come up on the last screen.

I just have my kids bring them to me.

Charlie Mas said...

Dr. Vaughan and the District's Advanced Learning department have a long history of working to reach under-represented groups in Spectrum and APP. This has been done primarily through letters to the homes of high performing students inviting nomination for the programs.

Historically, that effort was often damaged by staff at the students' schools who discouraged participation in the programs. I don't know how true that continues to be.

For the past few years the District has taken much stronger steps to expand access to advanced learning programs.

1) They have tried (with mixed success) to expand ALOs. The idea was that students, exposed to more rigor, would train up to meet the eligibility requirements for Spectrum and APP. ALO as farm team. They called it talent development.

2) They allowed middle school students to qualify for Spectrum in either language arts or math without requiring them to qualify in both.

3) Even more important, the District created Spectrum Young Scholars which allowed students in grades K-2 to qualify for Spectrum based on their cognitive ability alone, without regard to their academic achievement. The thinking here was that students without privileged backgrounds wouldn't have had enough exposure to educational opportunities to be academically advanced even if they had the raw talent.

The district still sends out the invitations, but they have shut down Spectrum Young Scholars and the either/or option for qualifying for Spectrum in middle school. As for the expansion of ALOs, well, that's sketchy.

When Dr. Vaughan told the Board that the District was placing ALOs in three southeast Seattle schools, he promised them that these would not be "ALOs in name only." The problem, of course, is that there are a lot of ALO programs which are ALOs in name only. There are even Spectrum programs that are programs in name only. There is absolutely no quality assurance for these programs. There isn't even a report on their quality and efficacy - despite the fact that such reports are required by Board Policy.

An assessment of program quality is needed because people don't believe in the quality or efficacy of programs in low-income communities. As a result, they don't enroll their children in them.

So I'm not giving Dr. Vaughan any points for outreach when he decides to shut down either/or Spectrum in middle school, shut down Spectrum Young Scholars, and refuses to assess program quality.

Anonymous said...

I am not ready to say the sky is falling. Here's why: our daughter attends Kindergarten at our mandatory assignment school in the South End. Her teacher has referred her for advanced learning testing based on her MAP scores - the teacher emailed me to let me know this. I was very pleasantly surprised, because we have an older child in APP whose teachers (at a different South End school several years ago) did not refer kids for testing.

So, food for thought. I hope wsnorth is correct, and that more kids, not fewer, will be referred for testing. We will see.

Ruthie

Melissa Westbrook said...

WSNorth, I gave that as an example, not the only thing they did. I think he was trying to personalize the outreach.

anonymom said...
This comment has been removed by the author.
anonymom said...

wsnorth said "Really, this is outreach? Dr. Anybody cold calling parents? That doesn't sound efficient or effective."

Seriously, wsnorth? Receiving a personal call from a principal encouraging your family to apply and recruiting your child doesn't sound effective to you? It sounds to me like an administrator going above and beyond his call of duty.

Then wsnorth said, "Spectrum and APP are heavily weighted towards non-FRL students whose parents know how to 'work the system'."

And you think this is the fault of SPS? I received a letter from the office of advanced learning encouraging me to test my son for APP/Spectrum based on his high WASL scores. That was 7 years ago, and the office of advanced learning continues to send those letters out today. Do you think they discriminate and only send those letters to white, middle and upper middle class families?

What is your solution, wsnorth? What more should the district be doing? They post the test dates on their website. Schools are required to post the information in a place where families have access to it. The district offers the testing for free, so everyone, regardless of financial ability, can get their kids tested. They have testing at many sites all over the district so there isn't a transportation hardship for any family. They send out letters encouraging families whose kids did well on the MAP to get their kids tested for advanced learning. And on a personal level, in addition to a principal making personal phone calls to recruit families (which I'm sure isn't even in his job description), teachers are identifying kids who they think might need Spectrum/APP services and encouraging the families to get their child tested.

What more would you like to see SPS do? What do you see as effective outreach? And what, if any, responsibility do you see lying with the family?

wsnorth said...

What do I want them to do? I think sending out letters to high achievers on the MAP test is exactly what they should do, and what they are doing. I think it casts a much wider net than having Dr. Somebody calling people whenever they have time. Of course we knew about Spectrum, and have personal experience with the program, but there was never any outreach. The district used to treat Spectrum like an annoyance that it wished would go away; now they are actively trying to get the "high achiever" kids to try out for it! That's progress!!

Teachermom said...

Whether it is intended as a barrier or a larger net, using one test score to make big decisions is not reliable.

My son took the Cogat in Kindergarten and did not even come close to making the cut-off for Spectrum. He took it again in First grade and qualified for APP.

Had he not qualified for APP, I guess I would have been one of those parents who were abusing the system for free, frivolous testing.

"Because the Cogat measures where a kid is, academically, compared to their SPS peers."

Two misperceptions here: the CogAT is a cognitive abilities test, not an academic test, and it is nationally normed, not normed on SPS students.

Some of the MAP questions I have seen defy logic. I could easily see a student with high cognitive abilities missing questions if they approached them logically.

Also, if a student with high cognitive abilities but little exposure to educational experiences were to score low on the MAP, that would be a barrier, because the MAP is an academic skills test.

anonymom said...
This comment has been removed by the author.
anonymom said...

"Had he not qualified for APP, I guess I would have been one of those parents who were abusing the system for free, frivolous testing."

I don't know, teachermom. Did you have your child tested with the intent to move him into the Spectrum or APP program if he qualified (or were you at least seriously considering it)? If so, then you weren't abusing the system. But I know many, many parents who get their kids tested just to see where their kid stands academically. They have no intention whatsoever of having their child change schools to get Spectrum or APP services. I wouldn't necessarily say that those families are abusing the system, but it is costly for the district, at a time when there is no change to spare.

Anonymous said...

It is curious to read that parents are signing their children up for the CogAT tests to see how their kids are doing academically. Why are parents doing this? Do they not get sufficient feedback from the report cards, spelling, math, or writing tests? If not, then why not? (I do agree this use of the CogAT is a waste of resources.)

I ask these questions because one of our frustrations with the district is how we get feedback on our kids' performance. Only in one year, with one teacher, did my children get back their math and spelling tests in a timely fashion, so that I knew at home what we needed to work on. More often we get a bulk return of the kids' work mid-year and at the end of the year.

I really miss the old-fashioned letter grade, because I got an idea of how my children were doing based on their work and their peers'. I have a tough time with the current report card, as it plots the student's work on a continuum. So I get that the child is working toward grade level in the fall and by spring always meets or exceeds expectations. However, my kids' K-2 DRA scores showed they were reading above grade level to begin with, but their report cards didn't reflect that. I also know because I see what they are reading.

So what good is a report card? MAP is not reported in the Source. We have to ask for it. Now we know to have our kids write down their scores after they finish taking the test, just in case the teachers don't give the scores out. Does the MAP reflect the academic work my children are doing at school? Is the RIT score based on the national average or the district's? Will we get a published report on the district and school MAP scores? (I couldn't find a report on the district MAP scores on their website, so I don't know if it is published or not.)

We are spending a lot of money, time and resources on this test, so would like to get something out of this for our kids.

Still looking for answers

Anonymous said...

FWIW, last year I volunteered to set up computers and proctor the MAP testing. Some of the kids (esp K-2, poor and immigrant kids) were completely unfamiliar with computers. Obviously this would affect their test scores—think back to 20 years ago when you first sat down at a computer (personally, I was rather stupefied).
Filling in tiny circles on a page would have its own challenges for a young child. Fortunately, my child's an awesome test taker. I wonder what else he's decided about himself based on that skill.

seattle citizen said...

"Some of the kids (esp K-2, poor and immigrant kids) were completely unfamiliar with computers"

So the poor get shafted again, what a surprise. It's like a hundred years ago: "IQ tests show us that you poor people are dumb, probably inherently. As we know from HSPE, Black children aren't as smart as White children, and now we see the proof with MAP. We will do our best to make you good worker-bees by only teaching you Reading, Writing and Math, but we can't be expected to bring you to OUR level of ability, so forget the arts, history and all those other rich and deep things: Those are for rich people."

An Episcopal minister was travelling to the West in the 1870s to be a missionary. He is quoted as saying, "While we know we cannot expect to raise the Indians to OUR level of civilization, I will do my best to make good farmers of them."

Nothing has changed.

anonymom said...

So what is the solution SC? What is a more equitable way of assessing who is eligible for placement in APP?

As for Spectrum, my personal opinion is that Spectrum should be a self selected, opt in, program, available to any motivated student who wants to try it. I believe that a super motivated kid, even if not academically gifted, can do very well in advanced classes, and should have that opportunity.

Melissa Westbrook said...

Spectrum tends to not just go deeper but move faster. That would have to be a pretty super motivated child. I would leave that decision, if there is room in a Spectrum class, up to the teacher, not the parent.

seattle citizen said...

anonymom, students have been identified for advanced placement for years, long before there were these district-wide assessments. The CogAT seems to work - the problem I've heard about for years is that people aren't identifying who should take the CogAT.

There are ways to identify students to take the CogAT besides these nasty tests that seem to be pigeonholing students, even at a very young age.

Anonymous said...

Melissa, in our school, it is usually the parents who fill out the forms for their kids to take the CogAT tests, not the teachers. We were never told or advised by a teacher to have our kids tested. Other parents told us about the test and that is why our kids are in spectrum today.

It seems so much of what goes on day to day at the school depends on which school you go to, which teacher you have, etc. Kinda tiring to have to put in so much effort just to figure out the system to get your kids educated.

Tired in SPS

anonymom said...

That's a fundamental difference between your thinking and mine, Melissa. I think a family, not the district, should be able to decide whether their child is ready and capable of doing advanced level work.

A teacher could offer their opinion, and guidance, and I would hope that a family would weigh that heavily, but in the end I think the decision should lie with the parents.

This is not a new concept. Many districts offer students "opt in" honors and advanced class options, with no testing or barriers, and do so very successfully.

I have seen many instances of kids that were not "academically gifted", but who were motivated, do very well in advanced classes. In fact they often do better than some of their "academically gifted" peers, who are not always as motivated. I think any interested student should be able to at least give it a try.

Of course nobody wants to see a child struggle, and if it becomes apparent that a child is not able to keep up in an advanced class the school should have the right to move the child into a more appropriate class.

hschinske said...

Cogat seems to work - the problem I've heard for years is that people aren't identifying who should take the Cogat.

Then you haven't been paying attention. There are plenty of problems with the CogAT, and it's quite certainly developmentally inappropriate for kindergartners -- more so than the MAP, in my opinion (not that I'm saying the MAP is great). A group paper-and-pencil test is a simply insane way to test young children.

Helen Schinske

seattle citizen said...

Helen, I am apparently misinformed: I thought that the test (the CogAT) might be effective, but that the identification of who should take it was not. Evidently, you are asserting that the CogAT itself is not a very good test for this purpose. I don't know much about the test.

My concern is more about how tests (MAP, HSPE, etc.) are being used to group students below level, to make "data points" (241 RIT; 5 HSPE...) that become THE identifying markers for students. It seems that they can be damaging in this respect.
If the tests were merely formative or informative, helping teachers identify levels along with other metrics, I wouldn't be nearly as concerned.

Do they accurately predict who would be a good candidate for APP? I don't know, but by all accounts I think not, at least not without other data points to triangulate.

Melissa Westbrook said...

Tired, yes, the information just varies from school to school and it is troubling.

"I think a family, not the district, should be able to decide whether their child is ready and capable of doing advanced level work."

Actually, a family does set it in motion by applying and then, if their child tests in, making the decision that they are indeed ready and capable AND that it is the best decision to move them (should their school not offer APP or Spectrum). All schools can offer ALOs, which are supposed to provide more advanced work at every school.

So we have ALOs without any testing, we have honors and AP and IB without any testing. It's not like it's not available.

Do I think the ALO situation varies wildly from school to school? Absolutely but without parents holding the district to its promise, well, then it will be what it is - a crapshoot.

frustrated said...

hschinske:

I can't find the thread anymore where topping out on the MAP was being discussed, so I'll pick it up here.

The MAP data coaches are out making their rounds now, and one of them talked a bit about this last week. He has personally seen a MAP math score (by an adult) of 360. Now I doubt that the scores are going to be highly accurate in that range, because there just aren't going to be enough correct answers to rate those questions with tight accuracy. But just because some of the published #s you've seen show high percentiles as low as 269 doesn't mean our kids are topping out. It just means that most kids don't get that high even in 11th grade. But here in Seattle there are middle school kids getting over 290 already (anyone know of 300+?), and there are still plenty of questions for which they don't know the answers.

Some of the more interesting questions I heard were around the purported symmetry of the RIT scores no matter the grade/level. But typical growth of an average student starts off at something like 10-15 RIT points/year when they're young, and gradually shrinks until the typical growth is more like 1-3 points by high school (this is from memory, didn't look it up right now). The problem is that by the time the kids are getting older the error is going to be the dominant factor, more than any real gain or loss.
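The growth-versus-error point can be sketched numerically. A minimal sketch, assuming a hypothetical 3-point standard error per test (a stand-in for illustration, not NWEA's published figure) and the from-memory growth rates above:

```python
# Illustrative only: rough signal-to-noise ratio for annual RIT growth.
# The growth rates are the from-memory estimates in this thread; the
# 3-point standard error per test is a hypothetical, not NWEA's number.

def growth_to_noise(annual_growth, sem=3.0):
    """Ratio of true annual growth to the standard error of a
    fall-to-spring score difference (two noisy measurements, so the
    errors add in quadrature)."""
    diff_error = (sem**2 + sem**2) ** 0.5  # ~4.24 for sem = 3.0
    return annual_growth / diff_error

early_grades = growth_to_noise(12.0)  # ~10-15 RIT points/year when young
high_school = growth_to_noise(2.0)    # ~1-3 points/year by high school
```

With these stand-in numbers, a year of growth in the early grades is nearly three times the noise in a fall-to-spring difference, while by high school it is roughly half the noise, so apparent gains or losses there would be mostly measurement error.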

The MAP isn't a terrible tool. It's possible to glean helpful information from the results. But people have to be really, really careful about how they use it.

hschinske said...

He has personally seen a MAP math score (by an adult) of 360.

Huh? He doesn't KNOW what the top possible score is? Come on. Why on earth should it be any secret?

The closest I can find is a statement on the NWEA site that a RIT score outside the range 100 to 320 is invalid. http://www.nwea.org/support/article/900

"Invalidation reasons include the following:

2 – Overall RIT outside valid range: A MAP test will be invalidated if the RIT score does not fall between 100 and 320."

Helen Schinske

seattle citizen said...

Helen, does that mean that a score above 360 just doesn't make sense? That no one should be able to achieve it? Or that a score above 360 is not reliable for the purposes of placement in a K-12 level?

I'm not sure what that could mean.

hschinske said...

I can't see how a *high* score could possibly be invalid unless it just wasn't a possible score at all -- in which case, if 320 is the top possible score, 360 is *not* a possible score and the MAP data coach is talking through his hat.

Helen Schinske

hschinske said...

http://www.nwea.org/sites/www.nwea.org/files/resources/Parent Toolkit_0.pdf states "RIT scores range from about 100 to 280." Other sites mostly say "RIT scores range from about 150 to 300." http://www.pineisland.k12.mn.us/hschool/Parent%20Toolkit.pdf

Helen Schinske

hschinske said...

http://www.kingsburycenter.org/working-us/kingsbury-center-data-award-application/data-award-q states that "In reading, the test measures quite accurately through a college entrance level of achievement. Students in the upper five percent of our norms in ninth and ten percent of our norms in tenth grades might be underestimated some because of a ceiling. In mathematics, high performing students generally move from a general math assessment to end-of-course assessments in mathematics (Algebra I, Algebra II, Geometry, Integrated Math 1 and 2), usually in eighth grade. End-of-course tests are reported on the same scale as the general math assessment and have considerably more range. We would work with the researcher on ways to interpret data across these tests."

That sounds to me as though the general math assessment does NOT have enough ceiling to assess high school level students accurately.

Helen Schinske

Seattle-Ed2010 said...
This comment has been removed by the author.
Seattle-Ed2010 said...

Anonymous @12:09 said...

FWIW, last year I volunteered to set up computers and proctor the MAP testing. Some of the kids (esp K-2, poor and immigrant kids) were completely unfamiliar with computers. Obviously this would affect their test scores—think back to 20 years ago when you first sat down at a computer (personally, I was rather stupefied).
Filling in tiny circles on a page would have its own challenges for a young child. Fortunately, my child's an awesome test taker. I wonder what else he's decided about himself based on that skill
.

Incidentally, SPS's two MAP administrators, Brad Bernatek and Jessica DeBarros, told a group of parents who met with them earlier this year that the MAP test is not that reliable for K-2, so some districts don't use it for those grades for that reason.

The comment here and elsewhere about kindergarten kids who don't know how to read, kids who are unfamiliar with using a computer or a mouse, all support this, and lead to the question: Why is SPS using MAP for K-2 if it's not appropriate or useful for these age groups?

Also, MAP is clearly going to be problematic for English Language Learners as well.

All of these contribute to even more reasons why it is ridiculous for the district to try to tie teacher evaluations and pay to student MAP scores. But that is in the latest teachers' contract -- unless the levy fails.

If the levy fails on Nov. 2, that element of the teachers' contract may be in limbo, and I would be fine with that.

MAP is not designed to evaluate teachers.

And apparently it's pretty flawed in its measurement of many (all?) students as well.

It is also sapping a great deal of resources -- student, teacher, library time, and money ($4.3 million and counting).

Is MAP really the best way for our kids to spend their limited school time and SPS to spend its/our limited money? I don't think so.

Is MAP more trouble than it's worth? I believe the answer is yes.

--sue p.

somewhat frustrated said...

Huh? He doesn't KNOW what the top possible score is? Come on. Why on earth should it be any secret?

According to the data coach (forget his name now), it seems the test is somewhat dynamic, so I'm not sure it's a secret as much as they may not know the answer. Piecing together what I heard, I think they insert new "unrated" questions at random into kids' tests. Then, depending on the results of the answers, correlated with the kids' overall scores, that's how the questions get ranked. I suspect they can make a decent initial guess at the level of any particular question, but it sounds like some adjustments happen along the way. I don't know any more than this, and I'm a bit fuzzy on this part as well, so don't anyone go running off quoting this as factual; I'm just relaying my interpretation of what I heard.
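That item-seeding process, as described, resembles textbook item calibration. A minimal sketch under that assumption, using a simple Rasch-style calculation (the function and numbers are illustrative, not NWEA's actual procedure):

```python
import math

# Hypothetical sketch of how an embedded, unscored "field test" item
# might get an initial difficulty estimate: compare how often examinees
# of known ability answer it correctly. This is a generic Rasch-style
# calculation, NOT NWEA's actual calibration method.

def estimate_item_difficulty(abilities, responses):
    """Given examinee abilities (in logits) and their 0/1 responses to
    one new item, return a difficulty estimate: mean ability minus the
    log-odds of the observed proportion correct."""
    p = sum(responses) / len(responses)
    p = min(max(p, 0.01), 0.99)  # guard against 0% or 100% correct
    mean_ability = sum(abilities) / len(abilities)
    return mean_ability - math.log(p / (1 - p))

# Examinees averaging 1.0 logit of ability answer the new item
# correctly only a quarter of the time:
d = estimate_item_difficulty([1.0] * 8, [1, 0, 0, 1, 0, 0, 0, 0])
```

Here the item's difficulty lands at roughly 2.1 logits, i.e. well above the group's average ability, which matches the intuition that an item most strong examinees miss must be a hard one.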

One obvious question: do these questions affect the child's score?

The closest I can find is a statement on the NWEA site that a RIT score outside the range 100 to 320 is invalid. http://www.nwea.org/support/article/900

Nice find. I did poke around a bit looking for that info, but didn't find it at the time. It does make one wonder what a 360 means in that context, but he was adamant about that particular score, by a college math major. And that he saw it with his own eyes. But as I said above, I wouldn't really trust a score like that because of the nature of the test. Very, very few students will ever see those questions.

Interestingly, the referenced page says: "The maximum standard error is 5.5 unless the RIT score is equal to or greater than a score of 240, in which case a maximum standard error is not enforced." Does that strike anyone else as odd? If you're getting a good score (but not even remotely close to any likely ceilings), they're not considering any error factor for validation. That would almost bring me back to question the ceilings, except that I know the types of questions the top MS kids are seeing, and I know they're still good for discriminating at their level.
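The validation rules quoted in this thread can be restated as a short check. This is a sketch of one reading of them (the function name and structure are mine; only the 100-320 range, the 5.5 cap, and the 240 cutoff come from the NWEA support page):

```python
# Sketch of the invalidation rules as quoted: the RIT score must fall
# between 100 and 320, and a maximum standard error of 5.5 is enforced
# only when the RIT score is below 240.

def is_valid_score(rit, std_error):
    if not (100 <= rit <= 320):
        return False  # overall RIT outside valid range
    if rit < 240 and std_error > 5.5:
        return False  # SEM cap applies only below RIT 240
    return True
```

Under this reading, a 230 with a 6.0 standard error is invalidated, while a 250 with a 9.0 standard error passes, which is exactly the oddity noted above; and the reported 360 would be invalid outright.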

Another thing the data coach guy talked about was how the test had other less obvious ways of (in)validating a test, like cross correlation between different questions, etc. Statistically, sometimes they can make guesses in advance of whether you'll get a question right or not, based on how well it correlates with your previous answers. I'm not sure I believe this would work as well with reading as it would with math, but again, I'm just trying to share some of what I heard.

somewhat frustrated said...

A bit more, now that I'm back reading on the NWEA site.

Article found here: http://www.nwea.org/support/article/532

states: A ceiling effect exists when an assessment does not have sufficient range to accurately measure students at the highest performance levels. It has nothing to do with the actual numbers attached to the scale and everything to do with the position of students on it. For example, in reading, the RIT scale measures with relative accuracy up to about 245. This represents the 93rd percentile at grade 10, and the 95th percentile at grade 8. If a student scores above this, we know that student performed high but may not be able to accurately assess how high they performed. Relative to other tests, therefore, there is very little true ceiling effect in this assessment. Even most high performing 10th graders receive a technically accurate measure of their skill.

First, if they have this info readily available, why didn't they list the top end of math? 245 is not really that high for reading; there are middle schoolers above that. I'm finding myself a little more suspicious of the math #s, but mostly I'm suspicious about their ability to assess their own assessment.

somewhat frustrated said...

As long as we're talking about MAP, here's something that bothers me a bit.

Mathematics is a highly structured discipline. In general, you learn A, then B, then C, and there is a natural order to much of it. You're not going to be very successful in algebra if you can't do fractions, you can't do fractions without multiplication and division, and that's dependent on addition and subtraction. There are some slightly different sequences in some of the higher topics, but in general there are layers of scaffolding that are in place to move ahead.

This makes a single number result (actually a summary of results from 4 different strands) reasonably believable and not likely to bounce up and down a lot.

NWEA does not offer a social studies test because that type of structure really doesn't exist (at least not without a very strictly defined curriculum!). There really is no structural reason to teach state history before U.S. history or world history, etc. An assessment designed to test knowledge of Chinese culture wouldn't work very well in a district or building that focused on Japan. And there are a multitude of ancient civilizations that could be covered, but is there a required set or order? I'm not aware of any. So if there were a social studies MAP test, one could easily imagine kids' test scores bouncing up and down like yo-yos, depending on what they happened to cover in a particular year.

Now to the meat of my point: reading, as I think it is presented on the MAP, has to fall somewhere in between the two. It's not nearly as structured and easy to assess as mathematics, but it's not as loose as social studies. Some teachers might choose to emphasize grammar at an early age, or not at all. Some might emphasize certain poetry structures, cover many of them deeply, or perhaps not at all. And yet these topics are being covered on the MAP, and I'll bet if we got access to anonymized scores we'd find a lot more bouncing around on the reading test than on math. And it explains why there's a push for standardization.

Certainly this doesn't invalidate the entire test, but as I've said before, everyone needs to be really careful about how the results are both interpreted and used.

hschinske said...

But as I said above, I wouldn't really trust a score like that because of the nature of the test. Very, very few students will ever see those questions.

I've never yet seen any statement about what's covered by the MAP math test that went above second-year algebra. Check out the math vocabulary list at http://www.benton.k12.ms.us/DesCartes/Appendix_A_S.1.pdf. (This one includes more items than the one provided by Seattle Public Schools at http://www.seattleschools.org/area/mapassess/resources/MAP_Math_Vocab.pdf) The highest level (281-290) has the terms

• exponential
• identity
• inverse
• log
• reciprocal


http://www.wwgschools.org/Northwest%20Evaluation%20Association.htm gives just a couple of examples of problems over 280, under geometry and measurement respectively:

RIT scores between 281 and 290
Symmetry and Transformations
· Solve problems involving volume with rotational transformation
New Vocabulary in this Range: none
New Signs and Symbols: none

RIT scores between 281 and 300
Area, Perimeter, Circumference
* Solve problem using ratio of rectangular areas
New Vocabulary in this Range: none
New Signs and Symbols: none


The thing is that obviously you're not going to see the top questions unless you've gotten *all* the previous ones correct (because it will cut you off after 52 questions, wherever you are).

If the impression you have about the design of the test at the high end is correct, it sounds as though if the student gets some large number in a row correct, say 48 or 50 at a guess, they then MAY or MAY NOT get further questions that are way outside the usual range. So a raw score of 52/52 could mean different things depending on whether the student hit those extreme out-of-level questions, which doesn't sound to me like a very good way to distinguish, at the top end, between those who really could do college-level math and those who couldn't.

You'd also think that such a rare question would *have* to be an experimental one and not count into the score, as there wouldn't be enough students answering it (wrong or right) to get the question normed.

In any case, students are probably still hitting a ceiling of sorts at somewhat lower raw scores. The MAP is designed to establish a top level when it can document that a student is getting about half the questions correct at a certain level, and obviously a raw score of, say, 48 out of 52 is not going to have enough information to show that.

Helen Schinske
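Helen's half-correct criterion can be illustrated with standard item-response arithmetic. This is a generic Rasch-model sketch, not MAP's actual scoring: an item tells you the most about a student when the student has about a 50% chance of answering it correctly, so a string of near-certain correct answers pins ability down poorly.

```python
# Illustrative Rasch-model arithmetic for the point above: the Fisher
# information an item contributes is p*(1-p), which peaks at p = 0.5,
# i.e. when item difficulty matches the student's ability.

def item_information(p_correct):
    """Information contributed by one item a student answers correctly
    with probability p_correct."""
    return p_correct * (1 - p_correct)

matched = item_information(0.5)    # item difficulty matches ability
too_easy = item_information(0.95)  # student nearly always correct

def ability_standard_error(n_items, p_correct):
    """Approximate standard error of the ability estimate from n items
    all answered with the same probability of success."""
    return 1.0 / (n_items * item_information(p_correct)) ** 0.5
```

With 52 items all pitched at 50% odds, the ability estimate's standard error is about 0.28 logits; at 95% odds it balloons to about 0.64, which illustrates why a near-perfect raw score says little about how far above the ceiling a student actually sits.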

Anonymous said...

I'm a student, and MAP testing really sucks.
It hurts your eyes, sitting in front of a computer for so long. Since it's multiple choice, if you don't know an answer you can still get it right and move on to a higher question and get a score you don't deserve. Plus, if you do really badly, you get fewer questions, so people who are actually plenty smart end up getting horrible scores. I can see how it's good from a teaching perspective, but overall, it's a pain in the rear end to take.