Update 2: So I have seen a message from President Liza Rankin on why she, Director Evan Briggs, and Director Michelle Sarju backed out of this meeting. In a nutshell:
- She says there was no organization to the meeting, which is just not true. They had a moderator lined up, and naturally the board members could have set parameters for what to discuss, the length of the meeting, etc. All that was fleshed out.
- She also claimed that if the meeting was PTA sponsored, the PTA needed to have liability insurance to use the school space. Hello? PTAs use school space all the time and know they have to have this insurance.
- She seems to be worried about the Open Public Meetings law. Look, if she has a meeting in a school building on a non-personnel topic, it should be an open meeting. It appears that Rankin is trying, over and over, to narrow the window of access that parents have to Board members. She even says in her message: "...with decisions made in public." Hmmm.
- She also says that th
Comments
ZB -- here's a brief description from http://seattle-ed.blogspot.com/:
[The MAP tests are] the new computerized, standardized tests the district is administering this year to all students, from as young as kindergarten to grade 9.
MAP stands for "Measures of Academic Progress™" (yes, it is a trademarked product) and will be administered to the kids three times during the school year. The test can take as much as two hours each session, according to the district's official announcement letter (see: http://bryantschool.org/index.php?option=com_content&task=view&id=487&Itemid=343).
Here are all my questions on the subject:
- Who decided to buy the MAP™ test for SPS, when, and why? Was there any public input on this district choice and expenditure?
- How much did it cost? Was this the best use of district money when there are so many other more immediate needs?
- Are standardized, computerized tests appropriate for kids as young as 5, in kindergarten, who aren't even reading yet?
- Have some kids indeed figured out how to outsmart the adaptive mechanism of the tests, thus skewing the results?
- Do we really want precious class time spent on even more testing?
- What are the tests really going to be used for: to evaluate kids or to evaluate teachers?
- What has the first round of tests shown so far? I heard from one teacher that no one seems to know how to make sense of the results. Is that true district-wide?
- I and others would also like to know: why, in Sept. 2008, did Supt. Goodloe-Johnson join the board of directors of the Northwest Evaluation Association, which manufactures and sells the MAP™ test? Was this before or after the district decided to buy NWEA's product?
- Were any other companies or products considered? Did the Superintendent's position on the NWEA board influence the school district's decision to purchase that company's product?
- Does Supt. Goodloe-Johnson stand to profit financially from this association? Isn't this a conflict of interest?
(see:
http://www.nwea.org/about-nwea/our-leadership
http://www.nwea.org/about-nwea/faq/General%20Information#faq-1043
http://www.seattleschools.org/area/news/sbnews/nwea)
They are supposedly also using MAP results as part of gifted testing. My daughter took the cognitive abilities test up at school this past Saturday, and rather than bringing her back for more testing in reading and math, the letters we got said they will use MAP results instead of the standardized Woodcock-Johnson Achievement tests.
The staff at our school seems to like it. That said, I did talk to one mom whose older child had figured out that if he gave wrong answers, he'd assure himself easy questions, and some kids were discussing the "benefit" of doing this amongst themselves. So in some cases, MAP may not reflect the child's ability. I don't know what educators can do other than stress that children answer honestly, and if results are out of whack with what is known about a child, perhaps disregard the MAP results?
"In 2006, $250,000 was provided for grants to school districts to purchase diagnostic assessments (according to the statutory definition). Most school districts used the grants to purchase Measures of Academic Progress (MAP) from the Northwest Evaluation Association. In the 2007-09 biennium, $4.9 million was appropriated and about 30% of the funds went for MAP during the 2007-08 school year. Funds for the 2008-09 school year were redirected toward development of a statewide system of diagnostic assessments. There is still $3 million in the 2007-09 budget for diagnostic assessments that has not been allocated."
Helen Schinske
Of course, as with any single indicator, one would need to do follow-up classroom assessments to determine accuracy.
But it might be helpful over time to determine placement (eek, tracking! but if a student is a "fourth-grade reader" in 10th grade, SHOULD they be in regular LA10? Or maybe in LA10, but with a reading class also?)
If you don't know this in August, how do you schedule the student appropriately?
The argument would be that grades tell us this, but there ARE 10th graders with 4th-grade skills... are the grades accurate?
Additionally, MAP has a function they call "Descartes" which essentially brings up the skills a student might be lagging in, given their score. This could help teachers "put a face" on the data, particularly if a teacher isn't a reading teacher and might not be regularly assessing comprehension, etc., or even math.
It's a snapshot, incomplete and perhaps too general (Descartes gives target skills for that particular score, not for that particular student, and its suggested areas of remediation could be off), but I think it might prove helpful.
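If it helps to picture what that lookup amounts to, here is a minimal sketch: a table keyed by score band rather than by student, so every child in a band gets the same target skills. The bands and skill lists below are invented placeholders, not NWEA's actual content.

```python
# A rough sketch of a score-band lookup like the "Descartes" function
# described above. Bands and skills are invented placeholders, not
# NWEA's actual content; the table is keyed by score range, not student.
DESCARTES_READING = {
    (161, 170): ["identify main idea", "use context clues"],
    (171, 180): ["summarize short passages", "infer word meaning"],
    (181, 190): ["compare texts", "analyze author's purpose"],
}

def target_skills(rit_score):
    """Return the suggested skills for whatever band the score falls in."""
    for (low, high), skills in DESCARTES_READING.items():
        if low <= rit_score <= high:
            return skills
    return []

# Every student scoring anywhere in 171-180 gets the identical list,
# which is exactly the "for that score, not that student" limitation.
print(target_skills(175))
```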
My son enjoyed it. He liked the lack of time pressure - he felt pressured in some tests (spelling) this year because of his handwriting - and he liked the computer format. He freaked out a bit when the questions got too hard and came home asking, "Should I know the square root of 225? Like, HUH!?? And what about questions with letters in them and not numbers?? HUH?" But that's more about joining this school from another one and being very sensitive about being left behind in what is being studied.
All in all I am keen to see what our teacher is doing with them in Third Grade and am optimistic that she is finding it useful data. It took her AGES to conduct reading evaluations on all 29 kids at the beginning of the year!!
Jessica de Barros was put in charge of MAP (she was in charge when I enquired about it last March). She is a Broad trainee (may be a Resident by now).
I saw it mentioned in the minutes of a recent Board meeting that the District has decided to drop the DRA (Developmental Reading Assessment) in favor of the MAP.
If you have seen an elementary school progress report for your student, then you know what a DRA assessment looks like. It is very informative. I strongly doubt that a computer-based test can be as good as a teacher-administered assessment. Does it save teacher time? I don't know.
The Principal at Lowell told us that they probably won't be sharing the MAP reading assessment results with parents, because they are not easy to interpret (at least, that is my recollection of what he said). I will miss the DRA. It is quite informative for parents.
At minimum, the District should have done a comparison of the results from the DRA and the MAP in the pilot test. That comparison would show whether the MAP results are highly correlated with, and as informative as, the DRA results. Did the Board ask whether this comparison was done? I didn't notice any mention of it in the Board minutes. Has the District done such a comparison? Who knows?
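For what it's worth, that comparison would not be hard to run in its simplest form: correlate the same students' pilot scores on both assessments. Here is a sketch; the numbers are invented placeholders, since the district's pilot data isn't public.

```python
# A sketch of the DRA/MAP comparison suggested above: correlate the
# same students' scores on both assessments. All numbers are invented
# placeholders; the district's actual pilot data is not public.
from scipy.stats import pearsonr

dra_levels = [18, 24, 28, 30, 34, 38, 40, 44]          # hypothetical DRA levels
map_rit    = [161, 170, 175, 178, 184, 190, 193, 199]  # hypothetical MAP RIT scores

r, p = pearsonr(dra_levels, map_rit)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
# A high r would mean the two tests rank students similarly; even then,
# correlation alone says nothing about which one is more informative.
```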
Will we find out Nov. 12 when PM is explained? Does anyone know what PM actually IS?
I find that amusing and interesting, and wonder how one deals with it. "Stair-casing" (i.e., making a task more difficult in order to find the threshold of performance) is a standard psychological/behavioral technique. Its benefit is that it allows a quicker assessment of the threshold (i.e., the point where one goes from being able to do the task to not being able to). But it's prone to exactly this kind of flaw (as well as the possibility that accidental errors on critical trials leave the test searching for the threshold in the wrong set of questions).
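To make the flaw concrete, here is a minimal sketch of a simple up-down staircase. This is the generic technique, not NWEA's actual (proprietary) algorithm, and all numbers are invented: an honest examinee's answers make the difficulty oscillate around their real threshold, while deliberately wrong answers drive the estimate straight to the floor.

```python
# A minimal up-down staircase: difficulty steps up after a correct
# answer and down after a wrong one, homing in on the examinee's
# threshold. Generic illustration only -- not NWEA's actual algorithm.

def staircase(true_ability, n_items=40, answer_honestly=True):
    difficulty = 5.0          # arbitrary starting difficulty
    history = []
    for _ in range(n_items):
        # An honest examinee gets an item right when it is at or below
        # their ability; a gaming examinee answers wrong on purpose.
        correct = answer_honestly and difficulty <= true_ability
        history.append(difficulty)
        difficulty += 1.0 if correct else -1.0  # step up or down
        difficulty = max(difficulty, 1.0)       # difficulty floor
    # The average difficulty over the last items estimates the threshold.
    return sum(history[-10:]) / 10

print(staircase(true_ability=8.0))                         # ~8.5: near true ability
print(staircase(true_ability=8.0, answer_honestly=False))  # 1.0: pinned at the floor
```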
One middle school teacher described how she reviews the feedback with each student and uses it to set individual goals for the next quarter, then reviews the results at the end of the quarter. The implication was that her whole school does that.
She said that it helps her target holes in individual student learning, especially below and above grade level, that she might miss when doing curriculum-based assessments.
As for the DRA, well, at Lowell it was a joke because many of the first graders ceilinged it the first time around. And then, even though they had gotten straight 4s on the highest-level assessment in first grade, the second-grade teacher HAD to spend the time to redo the assessment.
As for the validity of the DRA, I know of one teacher (not Lowell, a more challenged population) so dedicated and careful that she recorded each child and then re-listened at home so she could score more carefully. I know of a different school where parents (!) were given a little training and administered it themselves.
With the amount of daily homework that he has and his extra-curricular schedule - which is not as busy as most - I'm pretty ok with the practice. I'm not a huge believer in enormous amounts of homework anyway.
At parent meetings at my daughter's independent high school, the staff stressed repeatedly that they wanted us to err on the side of caution and keep our kids home when they had any flu symptoms whatsoever. Homework is posted online and students have emails for all of their teachers, so they can stay on top of things. I was allowed to go to my daughter's locker on curriculum night to get books she needed.
I don't think she had H1N1, but I was glad that kids would not be coming to school that might. Docking a student's grade for being out sick is too punitive IMHO.
Forgive me, however, for saying -- I am not exactly sitting here with bated breath waiting for the results.
I presume that the district prefers the MAP for at least two reasons in addition to the pedagogical benefits: it provides a business income opportunity for a public-private partner, and it supplies the quantitative score for each student that is needed for data-driven decision making. To my way of thinking, these are not good reasons for adopting the MAP. I hope the district had genuinely appropriate reasons for dropping the DRA in favor of the MAP.
I googled [K-12 "performance management" definition] and found this highly relevant doc from the Aspen Institute: http://www.tqsource.org/whatworks/WWC08buildingCapacity/resources/K-12_HCM_Framework.pdf [5 pp.]
I skimmed it very quickly. Here are a couple of quotes: "creating a performance management system that recognizes high performing teachers requires rethinking teacher evaluation, compensation and nonmonetary rewards for performance, the career development opportunities for exemplary teachers, and the creation of a professional culture that celebrates excellence and continuous improvement."
Another quote from the same doc: "In education, to the maximum extent technically and practically feasible, evidence of impact on student learning should be the primary criterion of performance. At issue is what measures of student learning should be counted (e.g. value added measures based on standardized test scores, other student performance measures), what in addition to student achievement results should be included in the definition and measure of good performance (e.g. observable teacher behaviors, contributions to school improvement), and what levels of reliability and validity are necessary for making consequential decisions."
Does this speak to Sea Citizen's questions?
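For readers wondering what the "value added measures" mentioned in that quote look like mechanically, here is a toy sketch: predict each student's spring score from their fall score using a district-wide line, then credit the classroom's average residual to the teacher. All numbers are invented, and real value-added models control for far more than a single prior score.

```python
# A toy sketch of a "value-added measure" as the quote uses the term:
# predict spring scores from fall scores with a district-wide line,
# then credit the classroom's average residual to the teacher. All
# numbers are invented; real models control for far more than this.
import statistics

# Hypothetical district-wide fall/spring scores used to fit the line.
district_fall   = [170, 180, 190, 200, 210]
district_spring = [178, 189, 200, 211, 222]
m_f = statistics.mean(district_fall)
m_s = statistics.mean(district_spring)
slope = sum((f - m_f) * (s - m_s)
            for f, s in zip(district_fall, district_spring)) \
        / sum((f - m_f) ** 2 for f in district_fall)
intercept = m_s - slope * m_f

# One (invented) classroom: actual spring scores vs. predicted ones.
class_fall   = [180, 185, 190, 195]
class_spring = [192, 195, 203, 206]
residuals = [s - (slope * f + intercept)
             for f, s in zip(class_fall, class_spring)]
print(f"classroom value-added estimate: {statistics.mean(residuals):+.1f} points")
```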