Open Thread
Here's an open thread for anything on your mind.
(Please note: if you want advice or have a concern to express, that's fine. We are all glad to help or guide to help if we can. But if you are going to write about something factual, please state where you found your information. It's important that this blog remain a place where we do base non-opinion statements on factual information and state our sources. Naturally, you can disagree with someone's take on what data means but I just want to make sure that non-opinion statements get backed up. Thanks.)
Comments
Frankly, the counselor involved in my son's kindergarten SIT was worse than useless; she was damaging. But some people are just like that.
Lowell didn't use to have a counselor. They also (in APP) didn't use to have many kids with IEPs. Twice exceptional? No such thing. I know several people who were simply told APP couldn't accommodate them and they needed to transfer. Some teachers accommodated some needs, but it was hit or miss.
They got a counselor the year my son was in fourth grade. Now, at the very end of third grade he had burst into tears and admitted that he was being picked on. This was the day before the last day in June. I spoke with his teacher, and her reply was, "Oh, that's been going on for months." She did say she would make sure all the kids involved ended up in different fourth grade classrooms. But had she intervened or spoken with me or with the principal? Nope.
Well, second day of fourth grade I met with the counselor because day one hadn't been good. She was in her office eating a late lunch while looking over a yearbook. Seems she walked the playground at lunch and witnessed/intervened in an incident and was trying to identify the kids involved. Guess what? Yup, it was an incident involving my kid. What a difference having a counselor made. She and the fourth grade teacher actively monitored things, intervened and changed the dynamic.
The next year was Julie B's first as principal. Between getting a counselor, a younger and more activist principal, some necessary retirements and younger teachers, I do believe having SITs and dealing with IEPs and twice exceptional kids in APP is more the norm.
I cannot understand losing either the elementary school counselors or high school career counselors. Not when central administration is still so big.
Helen Schinske
Yes, Dr. Goodloe-Johnson is on an NWEA board (non-paid) and she has publicly acknowledged this. The Board doesn't seem to think it's an issue. Her relationships with several institutions do seem problematic to many of us.
I think the elementary counselors do what you might think which is to try to help kids who have issues that teachers can't solve. I don't think all elementaries have them (it's probably a budget choice by principals) but many do because of the number of high needs kids they have.
I concur about the career center counselors for high schools. It is absolutely ridiculous to expect kids to apply to college, fulfill the community service requirement or find an internship or summer job totally on their own. Many kids are first-generation college applicants and absolutely need the guidance. But we can have 28 FTE in the BEX program and 100+ academic coaches with no discussion of whether that money is worth it.
How exactly does one "fail" the MAP? There are no passing grades, just RIT scores that can be converted into percentiles. By definition, about half the children in any grade should have a RIT score below the 50th percentile, assuming Seattle public school children are representative of the population upon which the RIT scores are normed.
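Just to illustrate the percentile logic, here's a minimal sketch in Python, assuming RIT scores within a grade are roughly normally distributed; the mean and SD below are made-up placeholders, not actual NWEA norms:

    from statistics import NormalDist

    NORM_MEAN = 174.0   # hypothetical norm mean for the grade (made up)
    NORM_SD = 12.0      # hypothetical standard deviation (made up)

    def rit_to_percentile(rit):
        """Approximate national percentile for a RIT score."""
        return round(100 * NormalDist(NORM_MEAN, NORM_SD).cdf(rit))

    print(rit_to_percentile(174))  # 50 -- a score at the norm mean is dead average
    print(rit_to_percentile(186))  # 84 -- one standard deviation above the mean

By construction, half the norm group falls below the 50th percentile, so a below-median RIT is not a "fail."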
I really can't get too alarmed about this without more information.
MJ, how do you know that the superintendent debated MAP results with principals at a cluster meeting?
Also, they started the MAP testing right after the return to school from winter break. That seems silly to me, to test the kids when they've just had 2 weeks off school and may still be riled up and not into their school routine yet. But for whatever reason, that's what they did.
MJ, I'd also like to know whether these supposed drops were statistically significant relative to baseline. My daughter's Math RIT went up 2 points while her Reading went down 1 point; however, both scores are right within the score range, which I believe is a confidence interval. All I can say is that my daughter's second test did not show significant improvement from baseline. I'm not at all concerned that she lost ground in Reading, even though the second RIT was indeed a lower number.
Our MAP report also shows that first graders in the district improved from a mean RIT of 164 in Math in the fall to 174 in winter. Reading went from a mean of 162 to 170. Those are positive trends overall. So while it's very difficult to interpret exactly what any one child's scores "mean," it seems pretty clear that the district as a whole made progress, in keeping with what is seen in the nation at large (i.e., norm group averages).
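Here's a rough sketch (Python) of how I think about whether a single child's fall-to-winter change is bigger than measurement noise. The ~3-point standard error of measurement is my assumption for illustration, not a published NWEA figure:

    import math

    SEM = 3.0  # assumed standard error of one MAP administration

    def change_is_significant(fall_rit, winter_rit, z=1.96):
        """True if the change exceeds a 95% band around zero.
        The SE of the difference of two independent scores is sqrt(2) * SEM."""
        se_diff = math.sqrt(2) * SEM
        return abs(winter_rit - fall_rit) > z * se_diff

    print(change_is_significant(164, 166))  # False: +2 points is within the noise
    print(change_is_significant(164, 174))  # True: +10 points exceeds the noise band

On those assumptions, my daughter's +2/-1 changes are exactly the kind of movement you'd expect from retesting alone.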
Source, please? The only association I can find between Dr. Merwin of Heritage University and the MAP test is that she supervised two master's projects that had to do with attempting to increase MAP scores.
Helen Schinske
We sent this letter last week...so far, a nibble by Kay Smith-Blum, nothing else.
March 4, 2010
Dear Superintendent Goodloe- Johnson and Seattle Public Schools School Board members,
With the loss of the career specialists in the high schools, and the diminished economy, we have seen a strong increase in military recruiter presence in the schools. This is often in disregard of SPS policy, and it is in disregard of the unofficial SPS policy that all students deserve an equal chance.
We would like to meet with you, in committee or individually, to discuss this.
Briefly, we’ll mention some of our concerns:
Military recruiters are not following regulations in the schools. We have reports from schools that military recruiters are coming at unscheduled times, out of uniform, and without signing in or out.
Furthermore, military recruiters are pushing against the spirit of the regulations. For example, one of the military recruiters for one high school is also a football coach there.
Schools are not following regulations, largely because the counselors do not know them. Recruiting rules are not posted, and counter-military recruiters are not invited. Without counter-military recruiters there, military recruiters are free to minimize the chances of deployment to Iraq or Afghanistan, to leave the 8-year military contract unexplained, or to exaggerate the educational benefits.
SPS has some of the best policies in the country, but those rules are being circumvented. No Child Left Behind (NCLB) says that schools must give military recruiters access to students in schools and must hand over student contact information every year, though students and parents can opt out of releasing home contact information.
SPS policies try to ensure that military access is no greater than that allowed to college and job recruiters, and they allow students and parents to opt out just once for the student's high school career. But it appears that the military keeps each year's list and uses it to contact even the students who have subsequently opted out. SPS policy needs to address this.
There are fewer college and job visits at the schools. Career centers are actually closed at some schools. With the continued and often increased presence of military recruiters, and the decreased exposure to other options, a segment of our students are being steered only to the military.
We hope that the College and Career Center Specialist can be funded. But the futures- no, the lives- of some of the students are at immediate risk, and we’d like to offer our help to curb aggressive military recruiting in the high schools.
Sincerely,
Kathy Barker
Parent of 3, 2 still in SPS
Garfield HS PTSA Board
Washington Truth in Recruiting
206 328-2804
Mike Dedrick
Veterans for Peace
Washington Truth in Recruiting
(206) 226-2435
I know it really varies from school to school as, obviously, the military knows which schools will likely get more recruits. It is wrong and unfair to allow the military (or any other group) to ignore SPS rules. I am surprised that a principal would allow this to happen.
Military service is a noble calling and I think we all respect those who serve. But there has to be total honesty about what life in the military is and isn't and any benefits (or lack thereof) involved.
FYI, you can't meet with the Board in total. That would make it a public meeting that they would have to open to everyone. The best thing to do is to set up separate meetings with each director or whichever ones you feel are key.
"We made cuts last year in the total Central Office staff and are in the process of cutting Central Office staff again this year to the tune of $6 million dollars. This will put our staff at the lowest level in has been since 1997 when our enrollment had more that 2,000 more students that we have now."
Evidently $6 million = 73 jobs in central administration. Not sure who/where these jobs are coming from. My only fear is that they will be counted as senior-level teachers who will displace some of the amazing new teachers in SPS.
(A bit off topic but, Melissa who is the best person to contact about building maintenance issues? There are some issues coming to light at my child's school that I would like to make sure get known. Thanks.)
My son is currently at a preschool which goes thru Kindergarten, and I am considering keeping him there for K in '10-'11, and then moving him to our attendance area elementary for 1st grade (which is Coe) in '11-'12. However, I am concerned that there will not be room for him and he will be placed somewhere else.
For building maintenance issues, I would contact Mark Pflueger, the head of Maintenance. I would cc whoever your Board member is just as a heads up. Board members should know when you make an attempt to contact the district with an issue. One, it makes the district staff a bit more accountable. Two, it allows Board members to track continuing issues about individual schools. (If they don't know what is happening on the ground at schools, they have no way to help.)
The parent who does the counter-military recruiting at Roosevelt is also a Veteran for Peace: Roosevelt has been lucky to have Dan, who, as a parent, a vet, and a smart and lovely guy, is there for the kids with every visit. (SPS policy of 2007 says that a counter-recruiter can be there whenever a military recruiter is.)
Not all of the schools are so lucky. There is a network of vets, parents, and peace activists who have struggled for years with district policy, and with its implementation in schools. The Vets for Peace have stepped up mightily as parent volunteers leave the schools with their graduating students...but some schools now have no one. (Ingraham, anyone?)
Every year, a very small group of volunteers gets the opt-out forms distributed at open houses and curriculum nights, often with the help of the PTA. It would be terrific if the PTA could take this on as a task. It does, at Garfield, where we have a military recruiting monitor (me!).
If a college or job recruiter had a quota and were as aggressive as the military recruiters, principals would rise up more. Some in SPS have been very careful. But people are often afraid to question the military, and war.
We will try to talk individually to Board members, thanks. It was easier a few years ago, when we had some supportive board members and a general activist feeling among students. Lots of beaten down people these days.... kb
What MJ is saying is that SPS schools made "below average" growth too frequently between MAP #1 and #2. In fact many went backwards. This is what (I believe) she means by "failing."
Using these subjective indicators of "growth," it is not difficult to surmise that SPS schools, in fact, performed dismally on MAP #2. The District refused to release individual school scores (other than a school's own) upon request by many teachers and principals. Thus, there is no way to compare a school like Alki against, say, Montlake. Without site scores across the district there is no way to compare schools of similar socio-economic status.
Having seen the MAP numbers personally for the District as a whole, it is clear that one of two problems exists. Either the test is terribly flawed, or the math curriculum K-12 is seriously misaligned with the State Standards (to which NWEA purports to align when designing Seattle's MAP).
The answer to your question is not known yet.
Until the plan is fully implemented (projected to be 2015) the guarantee portion of admission will be determined by the Transition Plan. The Transition Plan for 2011-2012 enrollment will not be introduced (per District stated schedule) until December of this year, and then voted upon in January of 2011.
If the current Transition Plan rules are in effect for 2011-2012 then you would have guaranteed entry to your Attendance Area school as you would be considered "new" to the District. New to the District gives you a guaranteed seat at a non-entry grade under the Transition Plan rules for 2010-2011.
There could be any number of factors as to why students would do worse, but neither parents nor the district can just say that schools are failing because of MAP scores.
Three interesting articles on education, timely all:
In Harlem, Epicenter for Charter Schools, a Senator Wars Against Them (NY State Senator Bill Perkins)
http://www.nytimes.com/2010/03/07/nyregion/07perkins.html
A Wholesale School Shake-up Is Embraced by the President, and Divisions Follow (Rhode Island mass firing of teachers)
http://www.nytimes.com/2010/03/07/education/07educ.html?scp=1&sq=wholesale%20school%20shake-up&st=cse
Building a Better Teacher (Doug Lemov, charter operator, explores what makes a "quality" teacher)
http://www.nytimes.com/2010/03/07/magazine/07Teachers-t.html?scp=1&sq=Building%20a%20Better%20teacher&st=cse
WV says that when you're finished reading those, it'll be time for resses.
And that makes perfect sense. Once you get past material that the child is currently being taught, you would expect to see some random variation from test to test, as the child gets different selections of out-of-level material that s/he may or may not know. You see that kind of thing happen with SAT scores all the time.
Helen Schinske
There are schools in SPS that show well-above-average growth. That means they progress faster than other schools and that they have higher-level skills. If your school is not one of those with "above" or "well-above average" growth, I would ask your PTA to look into funding an after-school program that supports above grade-level skills. For example: you have a 3rd grader. Multi-digit addition is a core skill for your child. An after-school program that focuses on multi-digit multiplication would put your child in a higher MAP category on the final exam (MAP #3).
The MAP test is actually a very simplistic test of basic algorithmic skills and mundane problem solving. It is nothing like the rigor a student might find in Saxon or Singapore Math.
As to the Supt.: I am hoping that the District is cross-referencing the high-performing MAP schools with their respective math programs. We all know what the District-adopted curricula are for K-12, but each school supplements with custom programs that build problem-solving skills. I understand that some schools have waivers and do their own programs. These schools may illuminate our decision makers regarding which programs are most efficacious. I sincerely hope that the data-crunchers are looking deeper at the high-performing schools and asking why they did so well.
The whole point of the MAP is that it isn't a fifth-grade test. It tests fifth-graders on factorials and trig functions if they get that far, and it regresses to subtraction without borrowing if they do worse and worse. I still don't see what's wrong with that. Seems to me exactly the kind of information that parents have been paying Sylvan and Johns Hopkins good money to get for years.
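A toy sketch (Python) of that adaptive idea, as I understand it: serve a harder item after a correct answer and an easier one after a miss, so the test converges on the level where the student is right about half the time. This is a simplified staircase with made-up step sizes, not NWEA's actual item-selection logic:

    import math
    import random

    def run_adaptive_test(true_ability, n_items=42, start_level=200.0):
        level = start_level
        for _ in range(n_items):
            # Chance of a correct answer falls as items get harder
            # relative to the student's true ability (logistic model).
            p_correct = 1 / (1 + math.exp((level - true_ability) / 10))
            if random.random() < p_correct:
                level += 3   # right: serve a harder item
            else:
                level -= 3   # wrong: serve an easier item
        return level  # the final level approximates the student's RIT

    random.seed(1)
    print(round(run_adaptive_test(true_ability=210)))  # lands near 210

Note that the walk drifts up past grade level for a strong student and down for a struggling one; the grade the student is enrolled in never caps the material served.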
Helen Schinske
The MAP is a "benchmark assessment," [I am quoting SPS leads] and thus tests students at the levels at which they are expected to achieve at grade level. I know this for a fact because one of the MAP Coordinators employed by the District said it to a group of teachers involved in training for MAP applications. You cannot test outside grade-level unless the exam is an aptitude test or IQ test. MAP is neither (according to the District). Remember, Edusoft did not test above grade level...ever. MAP cannot be one type of test to one group of children (aptitude), and another to a different group (benchmark assessment).
MAP is a "benchmark assessment," and therefore must test at grade level based on the Standards. If it is an aptitude test, then they must call it that and let parents know that their children are being evaluated by MAP based on aptitude to do higher level math than their children see in class.
Testing above grade-level with foreign material creates stresses in test takers which leads to panic, indifference, or futility. It is not what we teachers refer to as "best practice."
This is not at all what we've been told verbally or in writing at our elementary school. My understanding of MAP is the same as Helen's: it's an adaptive test that identifies an instructional level, independent of the actual grade. The test starts with questions appropriate for the grade level and gets harder or easier depending on whether the child gets the question right or wrong.
As evidence, I can tell you that my 6 year old got multiplication and division questions on the MAP in September; these are clearly not grade level expectations!! No child entering first grade is expected to be doing division. But she got them anyway, precisely because she was doing well with the grade level expectations, and the test wanted to see how far she could go (ie, what is her instructional level, independent of grade?)
I have had the test explained at our school and know it is not a benchmark assessment except to the extent that it is normed to the percentile. Thus, it measures how kids are doing relative to grade scores.
As mentioned, of course some of the very high scores come down (and some very low ones go up). That's just reflecting variability in the measurement and in kids' performance over time.
What is most disconcerting is that it seems different teachers have been given different information about what the test is.
Some say a "personalized" benchmark assessment, others profess it's aptitude, still others have been told by the MAP leads that it is a "benchmark assessment."
Some questions to ponder:
Why can't you see your child's test?
Why are the test results given in Rasch units (RIT) instead of "student meets/does not meet" standard?
Why, in an optimal assessment, does NWEA want students to answer half the questions incorrectly? (Yes, this is true!)
1) To determine individual student strengths and weaknesses to inform instruction for that individual student. For example, if a student was weak in some part of the curriculum the student could get extra support in that area.
2) To identify trends of strength and weakness in classrooms to inform classroom instruction. For example, if a significant portion of a class were weak in some part of the curriculum the class would review that area.
Second, for people to come around now and apologize for a drop in MAP scores as "naturally occurring variance" or "kids having a bad day" is to weaken the validity of the assessment. If the assessment is so imprecise that we expect the variance in any student's score to exceed the growth we are looking for, then the test is pretty useless for measuring that growth, isn't it?
Personally, I believe the MAP is an aptitude test, given that it is designed to test students on material they have never seen, nor will see for several years (in math anyways). However, even that moniker breaks down when elementary students start seeing trig functions during the test.
NWEA has designed this test such that students will get 50% of the math questions correct and 50% incorrect (optimally). Out of 42 questions, a student will face 21 questions that he/she cannot answer. That is both bad practice and cruel.
The Edusoft test was a much better method to determine if a student was meeting standard. It was dumped last year in favor of MAP. I recall there was a committee designing the Edusoft test at great expense. I remember that an outside consultant would come to District math meetings to observe and communicate with teachers as we discussed math issues. This consultant (wherever she was from) was really trying to create a quality product. When we started using her tests they were actually very good "snapshots" of student achievement. I hope Edusoft is resurrected!
"Why can't you see your child's test?"
Partly test confidentiality, and partly that every test is a different event. Parents were not allowed to see the ITBS, either, and I don't think they were allowed to see some of the other tests commonly given (DRA, Gray Oral Reading Test). Certainly not the CogAT or Woodcock-Johnson.
"Why are the test results given in Rausch Units (RIT) instead of 'student meets/does not meet' standard?"
Because it is a norm-referenced test, like the ITBS (which also had Rasch units, though they may have been called something else), not a criterion-referenced test like the WASL.
"Why, in an optimal assessment, does NWEA want students to answer half the questions incorrectly? (Yes, this is true!)"
Because it is a test that attempts to establish a ceiling as well as a floor on what the child knows -- in other words, it's just as useful to know which areas the child has NOT mastered as to know which ones they HAVE. After all, knowing what they HAVE mastered only tells you what NOT to teach them, rather than what TO teach them. Also, it's not true, as far as I can make out, that the child is likely to get half the questions in a testing session incorrect. At the level at which the MAP finally places the student, it's estimated that the student will get about half the questions correct, but at any level LOWER than that they answered most questions correctly.
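That half-right design falls straight out of the Rasch model these tests are built on (RIT stands for Rasch unIT): the probability of a correct answer depends on the gap between student ability and item difficulty. A minimal sketch (Python), with an illustrative scale factor rather than NWEA's actual parameterization:

    import math

    def p_correct(ability, difficulty, scale=10.0):
        """Rasch-style probability of answering an item correctly."""
        return 1 / (1 + math.exp((difficulty - ability) / scale))

    print(round(p_correct(200, 200), 2))  # 0.5 -- items AT the final placement level
    print(round(p_correct(200, 190), 2))  # 0.73 -- easier items below that level
    print(round(p_correct(200, 210), 2))  # 0.27 -- harder items above it

So "half right" only describes items at the student's final level; on the easier items encountered along the way, the student gets most questions correct.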
Again, I don't know whether the MAP is itself a well-designed test, but I think its stated aims are admirable.
mj said: "You should be very suspicious of a company that refuses to release test items, refuses to publish the specific skills they are testing at each grade level, and cannot even explain the bizarre logic behind how they come up with their grade level designations."
There are tons of sample questions at http://legacysupport.nwea.org/assessments/ritcharts.asp illustrating the skills they look for at each level. Again, this is more detail than I ever remember getting for the ITBS (which did have a fairly detailed report). I have seen no evidence of any bizarre logic -- their grade norming looks very like that of any other norm-referenced standardized test to me.
Helen Schinske
As I said above, I'm not sure that's true. Even if it were, though, it's not that unusual for a norm-referenced assessment test to have a lot of questions that children can't answer. Getting a 50th percentile result (dead average for one's grade) on the ITBS certainly meant missing an awful lot of questions, even if not half. The MAP is designed to be more efficient in zeroing in on the student's level, thus spending less of their time on questions that are far too easy and less of their time on questions that are far too hard.
I also think it's odd to suggest that it's always bad practice or cruel for children to be presented with too many questions they can't answer. Surely if they're actually learning things in class, any time a new topic is introduced, at first they know very few of the answers? And are teachers never to give pretests?
I know many, many parents who actively seek out testing that will show their child's above-level achievement (and also below-level, if they want to show that the child has an area of weakness or a learning disability). If the MAP really does provide such above- and below-level information inexpensively, that seems to me like a boon to teachers and parents.
Helen Schinske
The press release at http://www.nwea.org/about-nwea/news-and-events/nweas-measures-academic-progress-selected-state-approved-formative-assess states: "The term 'formative assessment' refers to interim assessments, benchmark assessments, or any other similar tools that are designed and used to gauge the academic progress of students throughout a school year."
Helen Schinske
As I have said before, what if a child DOES have a "bad day" because he or she is hungry or tired, or there are issues at home? Does the teacher get points off their evaluation for that?
And there is still the issue that, based on my discussions with my students and on other posts here, most students have figured out how to get through the test quickly if they want to: just answer two questions incorrectly and the test becomes easier and they are done.
From what I am reading here, this test should not be used to judge a teacher's "effectiveness" or that of the principal.
I believe that $4M was set aside in the levy to have this test implemented citywide.
I'd rather see the money go to hire more teachers and decrease classroom size, bring in some enrichment after-school programs to supplement what the teacher does not have time in class to emphasize because of class size, behavioral issues and varying levels of ability.
Below is a link to contact information for all of the board members. If you send an e-mail to them, copy Pamela Oakes. She will print it out and put it into the board members' boxes to ensure that they read it.
http://www.seattleschools.org/area/board/contact.xml
The whole article is well worth reading, but the following seems particularly relevant to how the MAP results may well be used (or rather not used) in Seattle schools:
"Benchmark assessments, either purchased by the district from commercial vendors or developed locally, are generally meant to measure progress toward state or district content standards and to predict future performance on large-scale summative tests. A common misconception is that this level of assessment is automatically formative. Although such assessments are sometimes intended for formative use—that is, to guide further instruction for groups or individual students—teachers' and administrators' lack of understanding of how to use the results can derail this intention. The assessments will produce no formative benefits if teachers administer them, report the results, and then continue with instruction as previously planned—as can easily happen when teachers are expected to cover a hefty amount of content in a given time."
Helen Schinske
Do you mean Brad Bernatek?
Helen Schinske
The Herald in Everett
"Low test scores mean Totem Middle School principal likely leaving"
http://www.heraldnet.com/article/20100307/NEWS01/703079904
This is the result of student test scores having an impact on teachers and principals.
Helen Schinske
Getting rid of the language or intent that testing can be part of evaluations isn't going to happen. But getting language in about indicators of appropriate assessments, and ensuring assessments pass standards of reliability and validity, perhaps could. Does anyone on the board have a sophisticated understanding of statistics?
The private test was the WJ-III-B broad reading and broad math, done one-on-one with a psychologist. My understanding is that this test uses a similar approach of adding easier and harder questions based on previous answers.
Does anyone else have other standardized achievement test scores to compare MAP scores to?
If the self-administered computer test is showing similar results, that tells me that it might be a pretty good measure of achievement. It's surely less expensive than having every kid tested one-on-one.
Met standard/didn't meet standard is a waste of time, IMHO. I want to know where she's ahead, where she's behind, and by how much.
I think KSB understands statistics. Wasn't there a 'scandal' about whether or not she had a college minor in statistics?
It's not the same time period, but it indicates he's where we expect.
Helen Schinske
This email came directly from a MAP lead to an elementary staff when the question was posed, "What type of test is the MAP?"
"1. MAP is a benchmark assessment measuring student skills in reading and math. The purposes of MAP are to inform instruction and monitor student progress."
I can see how some teachers would view MAP as formative, if they delve into the DesCartes tables that can be generated after each exam. But I know few teachers with the time to undertake that monumental task. Most teachers examine strand performance data and differentiate accordingly.
If the MAP has morphed into something other than a benchmark assessment since the aforementioned email of last month, we (classroom teachers) have not been advised of the change. So we work with the information we have: MAP is a benchmark assessment designed to measure performance against an established standard (GLEs and/or PEs).
"FERPA is a Federal law which affords parents the right to have access to their children's education records, the right to seek to have the records amended, and the right to have some control over the disclosure of information from the records. FERPA requires that a school comply with a parent's request for access to the student's records within 45 days of the receipt of a request." See www.parentempowermentnetwork.org
This should apply to district use: officials cannot use an assessment that is secret from parents.
We get strand data, which is tenuously tied to standards. I have found the statistical correlation between scores and 'the test formerly known as WASL' less than precise, whether as meeting standard (MS) or not.
Several other districts in the area use it. What do they think about it?
On another topic...This is from another nearby school district's parent/teacher information PowerPoint (I found this fascinating): "Computerized Adaptive Assessment [MAP] • In an optimal assessment, a student answers approximately half the items correctly and half incorrectly • The final score is an estimate of the student’s achievement level."
With the old EduSoft exams the goal was 100% on all questions. These questions were standards-based; thus, a 4th grade student was tested with 4th grade material. There is a monumental difference in testing strategy between these two exams.
"Eduventures Inc. ... predicted that by 2006, what it called “the formative-assessment market”—using a term sometimes treated as a synonym for benchmark assessment—would generate $323 million in annual revenues for vendors."
The paragraph you quote reinforces the fact that MAP is intended to serve formative purposes -- what else does "inform instruction" mean?
In addition, "benchmark" merely implies the use of SOME permanent standard as a comparison, which need not be GLE-specific at all.
Helen Schinske
MJ, even supposing for the sake of argument that you're right, wasn't all that far more true of the WASL? The best results in the world couldn't have informed instruction if you never *saw* them until the students under your hand were gone, and the MAP is vastly cheaper.
Helen Schinske
Grousefinder, I am curious about that Edusoft test you speak about. I've never heard of it before. Can I ask what grades were using it? Was this district wide or just some schools?
The goal was 100% mastery. How did that work in practice? How good was the test for informing instruction? Could you, did you, differentiate instruction to fit the particular student? Since it was grade level standards, did you see WASL scores improving after implementing the Edusoft test to inform instruction better? What would you do with a student who early on got 100% of the questions correct? Did you have the time and resources to teach them further material or test them on out of level material to know what the appropriate level of instruction really was?
What about kids who got closer to 0% correct? Were the results useful? Would there be some kids for whom it would be useful to give a test a couple grade levels lower, so you could get a better grip on exactly where they were in the standards -- given that all you know is that they don't know anything about grade level standards?
I don't know anything about the MAP in practice. The goals for MAP sound pretty good to me though and seem like a tool that was missing for my son. 100% mastery on grade level assessments would have been a no-brainer for him and would just be rubbing salt into the wound if it didn't mean being identified as having mastered grade level and having the opportunity to be challenged. Same thing for a kid who got close to 0%. Without actual targeted intervention to raise achievement, taking such a test is meaningless.
The Edusoft tool has distinctly different sounding goals, but it sounds pretty good as well. Seems like it could be used to foster similar goals as MAP, if below grade level or above grade level assessments were administered as indicated.
Helen Schinske
MJ, why such venom toward the other parents on this blog? We're all here because we care about kids and the state of our schools. Most of us try to have civil conversations and learn from each other, without resorting to denigration.
I'm sorry that your children hate doing the MAP test. If they feel like failures after it (which you wrote in an earlier post), perhaps no one has adequately explained the rationale for the test to them. My daughter enjoyed the test, even though she got questions wrong and saw things that she didn't understand. We explained to her the test would help the teacher know what to teach because it would identify things that she already knows, but just as importantly, things that she still needs to learn.
Our school is using the MAP data to inform instruction, exactly the point of the test. We've been told that there is a wealth of information available to the teachers, who can drill down on each student's results and learn more precisely about areas of strength and weakness. So although the report may just say "Number sense," a lot more information is available to the teacher.
In our case, my daughter scored about 2 grade levels ahead in the fall, a finding that was helpful to me because she was complaining about being bored and her homework appeared really easy, and I didn't know how seriously to take my concerns. So with the MAP results, we were able to better advocate for her and begin to explore whether her current school is a good fit. In the meantime, her teacher is using the information to inform instruction. For example, while the rest of the class continues with addition/subtraction exercises, my daughter and one other child are paired up to work on multiplication and division. Division was something she "learned about" thanks to the fall MAP test, and she wanted to know what it is and how to do it, and now she is learning it, at school, thanks to a skilled and dedicated teacher who wants to challenge each student in a heterogeneous class.
So, from my perspective as a parent, the MAP is a helpful tool. What you call "needless labeling," I call insight and actionable information that I can use to help my child have her unique needs met.
Regarding the point that, as a teacher, you find the RIT score and test results not actionable: my experience with my son's scores was the opposite.
The test does not identify what he did not know; it showed his learning threshold. The results included, and correlated with, various bits of subject matter he should learn next. It was very explicit: from the fall scores, he should learn long division. He did. His scores went up in winter. I am puzzled by your experience of the test and wonder if you understand it.
My child is not labelled by the test. He is who he is and scores from what he knows and can show he knows. He found the test inoffensive. The information helped us.
Helen Schinske
It seems like there is still a great deal of ignorance about the test. If you don't like it then don't use it, but don't let your clear prejudices and outright misunderstandings cause you to advocate for the removal of an assessment that can be used to influence instruction in a positive way. I don't think it is a good evaluation of either the entire student or instruction - it is one tool in a big toolkit that can be helpful to many.
I see a few reasons why....
So, if my classroom outscores your classroom, or our school outscores your school, or my kid outscores your kid, or your child appears to be three grades above their own, one must ask "compared to what?" The answer is: compared to the blob, that hodgepodge of nebulous data with no correlation to where your children should be in a standards-based school system during any given year.
As I said earlier, EduSoft exams told parents exactly where their children were based on what teachers are bound by contract to do: teach to the standards, differentiate for high performers, and remediate when required.
I am also curious about this reliance on tests to understand how your child is doing.
Between seeing how my daughter was doing with her homework, speaking to her teacher, and following her progress and her understanding of the subjects she was introduced to in school, I knew exactly how my child was doing.
Listening to some of you talk about these tests and the minutiae of parameters and percentages, I wonder exactly what you are talking about. Your children, the wonderment of learning, their curiosity and where it might take them, their ability to succeed in the way that they can, in a way that is unique to them? Or someone who is to be measured, tested and retested to your satisfaction, to help you feel that somehow and in some way they are average or better than the child next to them?
Is this where we’ve come to in terms of education?
Is this what it’s all about now?
My issue with the test is the emphasis on the test, particularly when it is equated to the performance or "effectiveness" of a teacher, a principal or a school.
A principal is losing her job in Everett because of low WASL scores over the last three years. The school is in a depressed area and it has been a struggle to get parents and the community involved in the education of these children.
So, instead of providing the school with funding to ensure that there is additional support for the students, the principal will be fired. It was either that or have half of the staff fired. What an awful decision to have to make as a principal.
Now, how will those children feel next time they take a WASL or a MAP test? Will they feel the weight of the world on their shoulders?
But, that's what the Race to the Top is all about: judging schools and educators based on student testing. If the students don't do well, someone loses their job or a school closes and is "transformed." And in a profit-based, corporatized and increasingly privatized society, that school will more than likely become a charter school.
http://susanohanian.org/cartoon_fetch.php?id=539
"My issue with the test is the emphasis on the test, particularly when it is equated to the performance or 'effectiveness' of a teacher, a principal or a school."
Again, it would be far worse to use the WASL for such purposes, as the WASL is not adapted for showing students' progress, and the only thing that seems to count is pass rates (which were never intended as data to judge individual students or teachers, only the broad effectiveness of schools and programs).
In theory (and we do NOT have enough data to know if this is a reasonable use, hence I echo Dora's concern to some extent), students who are being well taught can demonstrate their progress on the MAP, whether they're starting from remedial levels, average levels, or high levels. We would finally see teachers getting credit (again IN THEORY) for the students who start out well below level and make amazing strides in one year, even if they don't technically get up to grade level. In so far as students' test information is ever useful in evaluating teachers (and I know many people don't think it ever is), this is the kind that can be so used, in a way which could not be said of the WASL, the ITBS, or the Edusoft tests.
Will it work out well in practice? I doubt it, given the generally boneheaded way that this district has used test results in the past, and given the generally boneheaded way that NCLB has been playing out. But that doesn't mean that all tests are evil, or that they have no relevance to instruction. My own policy is going to be, as it always has been, to champion using test data intelligently or not at all.
I may mention that I have often posted on this blog about how useless average SAT scores are as a statement about a school's effectiveness. I happen to believe that the SAT is a very useful test for some applications, but I don't think that particular statistic has much meaning at all. Nor did I think Bob Vaughan was right in saying that the PSAT would be useful for tracking whether APP instruction was effective (ceiling effects, dude -- PSAT in 9th grade is too little, too late; incidentally the same appears to be true for the MAP in middle school from what I've seen). So it's not just that I'm all knee-jerk on recommending tests because my kids score high on them.
Helen Schinske
Apparently, if I find the MAP helpful, I am some sort of robot parent who wants her child "measured, tested and retested" until I'm satisfied. Or, I'm a smug parent who wants to prove to the world that my child is superior to everyone else's children. Or perhaps I'm just too stupid to understand and interpret nationally normed data.
I'll be frank. I love numbers. I love data. I studied biostatistics in graduate school, but I cannot understand what Grousefinder is trying to say here: "the median RIT range delineating each grade level (by school or district-wide) is really a composite of multiple test-taker scores. Thus, if all children are performing at a low proficiency level, then the median will be lower for the entire testing population. The MAP test obfuscates the dismal academic performance district wide, because most students perform poorly in reading and math, thus shifting the Bell-Curve leftwards."
First, all the MAP data that I have seen (and I'm just a parent, not a teacher or otherwise involved) has been mean RIT scores, not median. But even if the data were presented as median scores, what does it mean to say that they are a composite of multiple test takers? Are you saying that they didn't take the actual median of the group (ie, the score at which exactly 50% of the students scored lower and 50% higher)? How does the next sentence follow logically after that? Of course if all the test takers have low scores, the median score will also be "low." That's how medians work; they are a measure of central tendency. If the median score in SPS is truly as low as claimed, then how does the MAP obfuscate that finding? You just said that the median is low because everyone is doing terribly, but then say that somehow we are failing to see that fact. If the bell curve is truly shifted left relative to the nationally normed population, where is the obfuscation? That alone would tell us something.
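To make the point concrete, a quick sketch (Python; the scores and the norm figure are invented for illustration): a median is just the middle score, so if everyone scores low, the median comes out low. It reports a leftward shift; it doesn't hide one.

    from statistics import mean, median

    NATIONAL_NORM_MEDIAN = 174              # hypothetical norm for the grade
    district_scores = [160, 165, 168, 170, 171, 173, 176, 178, 181]

    print(median(district_scores))          # 171 -- the middle score
    print(round(mean(district_scores), 1))  # 171.3 -- the mean tells a similar story
    print(median(district_scores) - NATIONAL_NORM_MEDIAN)  # -3: the shift is visible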
Maybe folks are ready to move on from this conversation. Maybe we can't even have this conversation without access to the district's data. But I am certainly not willing to concede that my interest in and nascent support of the MAP is coming from ignorance or arrogance. The leadership at my school is excited about MAP, and I support them in attempting to find out whether and how it can be used to inform instruction.
I also don't understand this Edusoft thing. Evidently it was a benchmark assessment done district-wide in elementary and middle school and was only recently replaced with MAP. Grousefinder says that parents got the results. I have never seen any such results. I don't know if my son took the assessment. I am wondering if others here recall getting such results?
More info about Edusoft math tests here: http://www.k12.wa.us/RTI/AssessmentGuide/MathematicsTechnicalReport.pdf
which is linked from http://www.k12.wa.us/RTI/AssessmentGuide.aspx, which has some more interesting stuff.
"The Assess2Know Benchmark item bank lets educators construct tests that are aligned to the Washington Learning Standards.
The Assess2Know Benchmark item bank allows districts and schools to create interim assessment programs that follow their district pacing guide or curricula, while being able to assess student mastery of the State Standards tested on the state’s summative assessments. The Mathematics item bank contains items for grades 3–11 aligned to Washington Standards. Districts can use the Edusoft® Assessment Management System, an online test generator, with this item bank to select the standards they want to assess on a particular test. After the items have been selected and the order of the items determined, a PDF or Word document of the form is created so that the assessment can be printed. Alternatively, the assessment can be administered online."
According to http://www.seattleschools.org/area/board/08-09agendas/061709agenda/nweareport.pdf, "Another key consideration was the district’s experience building assessments from an item bank, which is the strategy used currently with Edusoft. This process is very time-consuming for instructional coaches as it requires annual review for validity and alignment to the curriculum."
Helen Schinske
Incidentally, according to http://technology.usd259.org/resources/NWEA/documents/Vocabulary.pdf, the term "sonnet" isn't used until RIT levels 221-230, which for a third grader is well above the 99th percentile (and even at that level, remember that only half are expected to answer any particular question correctly). If you're seeing it at some much lower level, you might want to find out if there's a bug.
Helen Schinske
Thank you for your question & I do see your concern about teacher evaluation.
Here is how the MAP test has helped my child. My child always struggled with writing. At every teacher conference I asked about the writing. Every year his teachers told me that he just needed to try harder, focus more, not be so lazy or sloppy. His test scores declined in every subject as the years went by. Then last year he almost failed the WASL. His teacher was not surprised. The school would not evaluate him because he was not actually failing. The teacher was surprised by MAP scores that showed my son in the 99th percentile in every category. The teacher thought the MAP scores were not accurate. But I was able to use the difference between the MAP scores & the WASL score to push for a SIT & push for testing for a learning disability. Further cognitive & achievement testing supported the MAP scores. A learning disability related to small motor skills was identified. I hope that next year’s teacher will use the MAP scores when determining instruction for my child, because I can demonstrate that they better represent my child’s academic abilities than the WASL. Without MAP he is just a dumb, lazy kid.
I'm on the record elsewhere on this blog as being against using standardized tests for teacher assessment. There are just too many variables that go into test results, many of which are outside the teacher's control. I'd like to see teacher performance somehow evaluated with process measures rather than outcome measures. Unfortunately, I don't know that the state of education research has made clear what classrooms processes define an effective teacher. I did find the article in NY Times linked in another thread to be very intriguing.
I also didn't think that using the MAP for teacher evaluations was a done deal. You said "As a teacher, I do care because my professional evaluation is tied to this test and I am expected to teach the elements of the sonnet to third graders when most of them are just starting to understand what a simile is." I'm sorry, but this is just not what I've been told by teachers at our school. They seem to think that the union will successfully fight against merit pay based on the MAP. They also are not at all worried about having to somehow "teach to the test" or in your example, teach sonnets to 3rd graders. What am I missing? Is this a done deal? Are they telling you at your school to teach to the test?
Finally, you also said "As a parent, I know just by asking my child questions and having them read to me where they are academically..." Well, maybe some of us aren't that astute. I'm not a professional educator and I've never worked with children. I have one child who is 6. I look at her work and think that's what all 6 year olds are doing. Through a combination of teacher conferences, spending some time in the classroom, and yes, the MAP results, I started to get a clearer picture of what her needs are and whether or not they are currently being met. I wish it were as simple as having her read to me, but without some frame of reference, I couldn't tell if she was on track, behind, or ahead.
Why is it that folks are so worried that the MAP test will become, or has become, the one and only thing that determines whether a teacher or a student is successful (whatever that means)? We never saw that with the WASL, and we all know that there are many, many ways that both the teachers and the students in SPS are evaluated. How is it harmful to have one more way of looking at things? Especially since this is the one thing that can be tracked from year to year.
1. Add more reading to the reading comprehension questions. Having a child read no more than 4 sentences at a time is NOT reading; it will not accurately assess how they will read chapter books and non-fiction text, and it will not give a parent accurate data on their child's reading ability for school and college. Please consider having them read at least a two-paragraph selection, if not more.
2. Look at the math questions and the state standards for each grade level and align them more closely.
Thank you.
Your point about how to do this in the current era of standardization is spot-on. MAP is a tool that treats students individually and gives us information on them as individuals. It is designed to be used to tailor instruction to individuals - the antithesis of the standardized curriculum. How does my daughter's teacher tailor the math instruction to the individual students when he has to be on a particular lesson in the Everyday Math curriculum? I don't know the answer to it, but as someone who believes (without evidence) that instruction and curricula should be tailored to student needs, I hope that the MAP test will allow appropriate discussion of the role of standardized curricula in classrooms that are demonstrably diverse. And yes, I am just a parent - a parent that spent about 4 hours on the NWEA website when the MAP test was introduced. Before that I had basically never heard of it.
Okay, I thought you might mean that you were seeing that kind of question pop up for a child who was more in the middle of third-grade level questions, and according to the NWEA materials, that shouldn't be happening. If such a thing did happen, it might be due to miscoding or something, so it should be reported as a bug.
So you've got a lot of students who are capable of above-level work, according to the MAP. When you judge them against grade-level standards, they naturally look terrific. This is all a Good Thing in itself. The question their MAP scores raise is whether they need curricular accommodations, and how practical it would be for you to provide above-level resources and instruction.
You've probably heard the jargon phrase "zone of proximal development" -- anyway, for those who haven't, it's about the fact that people learn best when they're learning something that's somewhat new, but not so new that they can't make sense of it or put it into context. The MAP scores are one piece of data indicating that some of your students' zones of proximal development may be further out than grade-level materials can handle.
First off, is that true, based on other things you know about them? Second, if it is true (and it's almost bound to be true for some of these kids), what can you reasonably be expected to do about it? Is there anything in the current curriculum that you can compact or have them pretest out of, to allow time to work on something higher? Can students do independent reading that would expose them to higher-level concepts?
I've heard of a lot of teachers who were being expected to differentiate while at the same time expected to teach a highly structured curriculum with no room for changes. You're right, it isn't a sane expectation. One of the worst examples of such district doublethink I remember was at Whittier, when the Spectrum classes for years were expected to teach to math standards one year up -- using grade-level textbooks. Yeah, neat trick. (That changed in our last year there: they do have one-grade-up math the last I heard.)
I did see some tools out there for providing enrichment for students who score at various levels. See for example
http://www.sowashco.k12.mn.us/ro/pages/studentlinks/map/reading.htm
http://www.sowashco.k12.mn.us/ro/pages/studentlinks/map/
Those are obviously kind of canned stuff, but they might be useful in some contexts, dunno.
I don't think you should be expected to tailor curriculum to the MAP any more than high school teachers tailor their curriculum to the SAT or the ACT (which isn't a lot, except that they cover the same broad areas). But I do think some differentiation is reasonable to expect, especially if your class is advanced in general and therefore doesn't need the usual amount of review.
Finally, I for one do not work for the district and never have. My first professional job was as a librarian, and I am now an editor who also does research and fact-checking. Though I've been interested in the MAP for years, a surprising amount of what I've posted on this thread is stuff I didn't even know before looking it up to post here.
The whole reason I can stand to be an advocate in the Seattle schools at all is that I'm a data wonk who gets enjoyment out of putting all this stuff together. Otherwise it would just be far too depressing.
Helen Schinske
Additionally, just because a child gets a question about sonnets doesn't mean you must be teaching them that. What it means for reading comprehension and thinking skills is that you could encourage more complexity in the stories they read and in the writing they do. And when 3rd graders are scoring that high, you don't expect positive growth on the MAP every three months; there's too much variability. You should be protected from worrying about that.
As for math growth: well, as a former math teacher (not in Washington State), I feel your pain regarding the scripted fidelity of implementation. But perhaps the MAP can help? You probably have interested parents who can make sense of the scores and help advocate here. If you have some kids who are ahead (or behind) and not making progress on MAP, then that is a further argument against the standardization of the math curriculum calendar. What I mean is that this is the sort of data that shows that you can be -- and should be -- teaching the kids, not teaching the pacing guide.