New Method to Consider Progress

This article appeared in the NY Times. The premise is that a "growth model" tracks individual students' progress from grade to grade, rather than comparing last year's 4th graders to this year's 4th graders.
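
To make the distinction concrete, here is a minimal sketch with made-up scores (all numbers and names below are hypothetical, not from the article). The status model compares two different groups of children; the growth model follows the same children across grades.

```python
# Status model (the NCLB default): compare last year's 4th graders
# to this year's 4th graders -- two different groups of students.
last_year_4th = [410, 455, 390, 520]   # hypothetical scale scores
this_year_4th = [420, 440, 400, 530]

status_change = (sum(this_year_4th) / len(this_year_4th)
                 - sum(last_year_4th) / len(last_year_4th))

# Growth model: follow the SAME students from 3rd to 4th grade
# and measure how much each one improved.
scores = {
    "student_a": {"grade3": 395, "grade4": 430},
    "student_b": {"grade3": 450, "grade4": 470},
    "student_c": {"grade3": 510, "grade4": 515},
}
growth = {name: s["grade4"] - s["grade3"] for name, s in scores.items()}
avg_growth = sum(growth.values()) / len(growth)

print(f"status-model change: {status_change:+.1f}")
print(f"average per-student growth: {avg_growth:+.1f}")
```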

The federal government is allowing several states to pilot growth models under NCLB. "Adding growth models as a way to satisfy federal requirements to demonstrate “adequate yearly progress” could make it easier for some schools to avoid penalties because they would receive credit for students who improve performance but still fall below proficiency levels."

From the article:

"Many urban educators contend that growth models are a fairer measure because they recognize that poor and minority students often start out behind, and thus have more to learn to reach state standards. At the same time, many school officials in affluent suburbs favor growth models because they evaluate students at all levels rather than focusing on lifting those at the bottom, thereby helping to justify instruction costs to parents and school boards at a time of shrinking budgets."

There were some interesting points made:
"Cohoes school officials have spent more than $1 million on programs for their most struggling students in the past five years, and wanted to find out how much they had progressed. They learned that the lowest-level students were doing fine, while their high achievers were starting to fall behind."

This speaks to a point Charlie has made in the past:
“The fact is we serve all students, and not just the lower-end students,” said Mr. Dedrick, who travels across the state to speak about growth models to school superintendents. “If you’re just concentrating on one group of kids, it’s not fair because both sets of parents pay taxes.”

"In Ardsley, N.Y., a Westchester County suburb, administrators intend to place more special education students in regular classes after seeing their standardized test scores rise in the last year."

Teachers' views:
"But as growth models become more widespread, some teachers and parents have complained that they are hard to understand and place too much focus on test scores. Teachers’ unions, even while supporting the concept, have protested the use of growth models for performance reviews and merit pay."

Comments

Anonymous said…
Seattle got a big grant from some foundation to calculate value-added scores a few years ago. They are (were?) available on the SSD website and are interesting, but unfortunately limited in value because the tests used were different in different grades. How do you measure progress when one year they have norm-referenced scores and another year criterion-referenced scores? (One common workaround is sketched after this comment.) And now that the WASL is the only test, how would you use that for any meaningful purpose, when it only really differentiates kids at the lower levels?

And the Seattle scores were also of limited value because (or so someone from the district told me) the union was against having them. They did *not* allow value-added scores to be calculated at the teacher level. Why not? I would think that teachers would like this, as a more honest assessment of what kind of impact they were making in the classroom.

My sister used to live in Tennessee, and sure enough, one of the big issues was that value-added figures just validated that high-performing kids were not making adequate annual progress. It is my understanding that this phenomenon has been seen in every school district where value-added scores have been calculated (including Seattle).

Why is this called a "growth model" instead of value-added? Is it to get around that Tennessee statistician who has a proprietary formula for calculating value-added scores?
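
On the mixed-tests question raised above, one common workaround (an assumption for illustration, not necessarily what Seattle did) is to standardize each year's scores before computing growth, so each child is measured relative to peers on whatever test was given that year. A minimal sketch with invented numbers:

```python
from statistics import mean, stdev

# Hypothetical scores on two different tests with different scales:
# a norm-referenced test in grade 3, a criterion-referenced one in grade 4.
grade3 = {"a": 52, "b": 61, "c": 45, "d": 70}
grade4 = {"a": 410, "b": 455, "c": 380, "d": 500}

def standardize(scores):
    """Convert raw scores to z-scores within that year's test."""
    m, s = mean(scores.values()), stdev(scores.values())
    return {k: (v - m) / s for k, v in scores.items()}

z3, z4 = standardize(grade3), standardize(grade4)

# "Growth" here is change in relative standing, not absolute skill;
# in practice you would standardize against the whole district or
# state distribution, not a handful of kids.
relative_growth = {k: round(z4[k] - z3[k], 2) for k in grade3}
print(relative_growth)
```

The obvious limitation: this measures change in relative standing rather than absolute learning, and if the comparison group is the same set of kids both years, their average relative growth is zero by construction.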
Anonymous said…
We used value-added data for the CAC (closure advisory committee) analysis and found it particularly interesting - not so much for the phenomenon Dorothy's sister noted, but to see that in some schools, the highest performing children were making significantly greater gains than the lower. It definitely drove recommendations in the south end.

I see that the last year it's available for is 2005, the year we used.

Couldn't you still use the WASL to compare the school's 4th grade results against that cohort's 3rd grade results? Because they also disaggregate the growth by lowest, low avg, high avg, and highest, it seems it would be useful data, even if only to confirm that the high-performing children aren't making adequate gains. (A rough sketch of that calculation follows this comment.)

Here is the district site with test and survey data - by school, by year.
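
A rough sketch of the cohort comparison suggested above: match each 4th grader's score to their own 3rd-grade score, then average the gains within prior-performance bands. The records, field names, and band cutoffs are all invented for illustration:

```python
from statistics import mean

# Hypothetical matched records: each student's 3rd- and 4th-grade scores.
students = [
    {"g3": 355, "g4": 372}, {"g3": 380, "g4": 391},
    {"g3": 402, "g4": 405}, {"g3": 430, "g4": 428},
    {"g3": 365, "g4": 388}, {"g3": 445, "g4": 441},
]

def band(score):
    """Assign a prior-performance band from the 3rd-grade score."""
    if score < 370:
        return "lowest"
    if score < 400:
        return "low avg"
    if score < 430:
        return "high avg"
    return "highest"

# Group each student's gain (g4 - g3) by their starting band.
by_band = {}
for s in students:
    by_band.setdefault(band(s["g3"]), []).append(s["g4"] - s["g3"])

for b, gains in sorted(by_band.items()):
    print(f"{b:8s} n={len(gains)} avg gain={mean(gains):+.1f}")
```

As Helen notes below, student mobility shrinks the matched set, so the per-band counts can get small quickly.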
Anonymous said…
I'm a big fan of value-added testing in theory, but really, you'd have to use the SAME test twice to get proper data, and the kids who got high scores the first time wouldn't get measured adequately. (They might even make some careless errors and score *lower* the second time around ...)

Another problem is that some schools have so many kids moving in and out that you're really not testing the same kids anyway, even if you tested at the beginning and end of one school year.

I don't think the WASL is at all a good candidate for this kind of use -- it's not good enough at fine distinctions, the scoring is not consistent enough, and it costs far too much.

Helen Schinske
Anonymous said…
Mary Sullivan wrote:
"in some schools, the highest performing children were making significantly greater gains than the lower."

Fascinating. I didn't remember that they had the data broken out to determine this. Sure, in some schools that would be true, but look at Bryant: the top kids made the smallest gains. Eckstein shows a similar trend.

I do think that some of this is complacency: the kids are doing fine, they get all sorts of enrichment at home, so there's no perceived need to challenge them. However, no real conclusions can be drawn because, as Helen says, these tests are not reliable (or valid, or whatever the appropriate term is) for kids at the top levels.
