MTSS is the future (for now)

I attended the May meeting of the Board Curriculum and Instruction Policy Committee where I heard some talk about the MAP test that I couldn't reconcile on my own. I wrote then that I would try to ask Wendy London about it, and I saw her - and Mark Teoh - about it yesterday.

My confusion from the C & I meeting was quickly resolved. Ms London had said that the MAP test was nationally normed, but she had also said that it was keyed to the Washington State Standards. These can both be true because the questions our students see on the MAP test are chosen from the pool of available questions only if they are applicable to the Washington State Standards. That's how the test is geared to the state Standards. Meanwhile, the students' RIT scores can be compared to the scores of students across the nation, even if those students didn't get the exact same questions that ours got. That's how the test can be nationally normed. It's worth noting that the MAP test is the only assessment our students get - before the ACT and SAT - that can be compared to results from students outside the state.

I asked about the difference between how the test was sold and how it is now used. It was sold as a formative assessment, but not only isn't it being used that way (I don't think it could be used that way), but no one even talks about using it that way anymore. Instead, the test is being used for a lot of other things, mostly as a management tool. It's used to assess teacher performance, it's used to determine eligibility for advanced learning programs, and it may be used to compare school performance. It doesn't appear to produce any direct benefit for students or teachers.

Both Ms London and Mr. Teoh gave me blank looks. They weren't here when the District first bought MAP. They didn't make that decision, they didn't sell the test to the Board or the public, and they weren't going to take any responsibility for how that was done. They agreed that the test doesn't really work as a formative assessment except in the roughest of ways and not in any practicable way. The formative assessments that are actually used to inform instruction are the little tests that teachers give, known in the lexicon as Classroom Based Assessments (CBAs - not to be confused with Collective Bargaining Agreements). These more frequent checks by teachers are what they actually use (and have always used) to guide their instruction and to know which students are ahead or behind. Moreover, they both agreed that standardized data provides prompts for questions but does not provide answers. The test results suggest things so we can make a closer inquiry, but they do not prove them. In short, all three of us were in accord on what the test can't do.

At the C & I meeting, however, Ms London spoke about the value of the test, and she spoke of it with conviction. I told her that I envied her strong sense of the test's value and asked if she could share it with me. That's when Ms London spoke about the role that MAP plays in MTSS - the Multi-Tiered System of Support. This is the newspeak for Response to Intervention; they are completely synonymous. There have been changes in the Elementary and Secondary Education Act (ESEA), the federal law that governs education, to bring MTSS into the core. So if you don't know about MTSS, you had better learn about it now, because it is the new model for education in America, and Ms London is an adherent.

Here's how it works. The schools are all responsible for delivering the core instruction. You know: math, literacy, science, history, arts, etc. This is "Tier I" and everyone gets it. To check that everyone is getting it, we do an assessment. The MAP is that assessment. The MAP has two values that the CBAs can't provide. First, it is district-wide, so we can know where and how our curriculum is - or is not - aligned. The Standards and academic expectations should be the same in every 4th grade classroom, so we need a shared assessment for them all. Second, the MAP is national, so we can confirm that we are not setting the bar too high or too low but in alignment with national expectations. Ms London sees the MAP as an exceptionally good Tier I screening assessment, and that's how she intends to use it.

There is an expectation that this assessment, which is intended as a screening tool, will show that the core instruction is working for about 80% of our students. The screening tool will show that some students are working either above or below grade level. This discovery should lead to a conversation between the principal and the teacher that goes something like this:

PRINCIPAL: "Mrs. Teacher, the MAP data indicates that you have four students in your class who are working below grade level in math. What are you doing for these students?"

TEACHER: "Actually, Dr. Principal, there are six students in this class who are working below grade level in math. Three of the four who were identified by the MAP test and three others."

As a result of the MAP test results, six students - the three identified by both the MAP and the teacher, plus the three others identified by the teacher alone - are subjected to additional assessments which are more diagnostic in nature. The MAP is not a diagnostic test. It can suggest IF something isn't working well, but it cannot suggest WHAT isn't working well. These six students are now in Tier II.
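The screening logic in this exchange - a universal screener flags some students, the teacher flags others, and the union of the two groups moves on to diagnostics - can be sketched in a few lines. This is only a toy illustration: the names, scores, and grade-level cut score are all invented, and real eligibility decisions involve far more than a single threshold.

```python
# Toy sketch of MTSS Tier I screening: a universal screener (like MAP)
# flags some students, the teacher's classroom-based assessments (CBAs)
# flag others, and the union of the two sets moves on to Tier II
# diagnostics. All names and numbers are invented for illustration.

GRADE_LEVEL_RIT = 200  # hypothetical grade-level cut score

map_scores = {
    "Ana": 212, "Ben": 195, "Cai": 188, "Dee": 204,
    "Eli": 197, "Fay": 191, "Gus": 220, "Hal": 203,
}

# Students the screener flags as below grade level.
map_flagged = {name for name, rit in map_scores.items() if rit < GRADE_LEVEL_RIT}

# Students the teacher flags, including two the screener missed.
teacher_flagged = {"Ben", "Cai", "Eli", "Ira", "Joy"}

# Tier II candidates: anyone flagged by either source. The screener only
# suggests IF something is wrong; Tier II diagnostics ask WHAT.
tier_two = sorted(map_flagged | teacher_flagged)
print(tier_two)  # → ['Ben', 'Cai', 'Eli', 'Fay', 'Ira', 'Joy']
```

The point of the sketch is that the screener and the teacher each catch students the other misses, which is exactly why the principal-teacher conversation matters.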

Tier II begins with diagnostic assessments to discover the nature of the problem and continues with an intervention to address it. This intervention is delivered in the general education classroom by the classroom teacher. It could be additional, targeted skill practice. It could be a review of a concept that was missed. It could be the presentation of a lesson in an alternative way. The District is working on building a catalog of interventions to address the range of problems. At Mercer, one of the interventions which proved effective was the use of Saxon textbooks in lieu of the CMP II materials. There are further assessments to determine whether the intervention has been effective. If it is, then it is continued as needed.

If, however, the Tier II interventions are not effective, then we move on to Tier III. This represents a stronger effort at the school level and could include a referral for a Special Education needs assessment.

What is key about MTSS is that it happens at the individual student level. There is a determination of whether or not each individual student needs any intervention. There is a determination of what kind of intervention each individual student needs. There are no blanket statements about entire schools. The whole idea of "school segmentation" is exposed for the absurdity it always was. This led to more blank looks from Ms London and Mr. Teoh, who would accept no responsibility for the policies and practices of previous administrations. To this point, Mr. Teoh showed me a chart from a presentation he made to the Board on the Winter MAP results. Nearly all of our schools have students scoring all along the entire spectrum of results on the MAP. Nearly every school has students among the top scores and students among the bottom scores; every school has both high-performing and under-performing students. To deliver the interventions on a school-by-school basis instead of a student-by-student basis is indisputably asinine.

MTSS, of course, also means providing something outside the core instruction for students who are working beyond grade level. There should be an identical conversation between the principal and the teacher about students who are working beyond grade level and what is being done for them. Students working beyond grade level should also get a Tier II intervention and, if that is not sufficient, they, too, should be advanced to Tier III.

The MAP data could also reveal information about the class as a whole. That could lead to questions and a conversation about what the teacher is doing to get unusually strong student outcomes in some strands, or what would help the teacher improve instruction in strands that are getting weak outcomes across the class. This could not happen with CBAs. We need a tool that is the same from class to class.

The Executive Directors of Schools should be having similar conversations with principals about what they are doing for students who are working above and below grade level and what they are doing to share instructional practices that produce strong results and what they are doing to provide support when the instructional practices produce weak results. This could not happen with CBAs. We need a tool that is the same across the district.

Ms London spoke - passionately - about implementing MTSS at every school. She spoke about teachers doing 60 minutes of core instruction followed by 30 minutes of either intervention or enrichment. For every student. We are nowhere near that yet. There are some schools which have been early adopters of this practice - Maple and Mercer to name two - and they are deep in it. There are some schools that have waded in but not yet dived, and there are some that have a toe in the water. Most of them, I'm sorry to say, are still on the shore. Implementing MTSS appears to be Ms London's core focus. It is going to be a complex challenge. Not only will she have to build the catalog of interventions and enrichment, but she will have to convince people to adopt the practice.

This is a bottom-up discipline. It is student-centered. It seems kind of weird to make a top-down mandate to do things in a bottom-up fashion, but that's where we are. I think we can help. I think that MTSS sounds marvelous. Without any of the fancy nomenclature, it just sounds like good teaching to me - and good practice by principals as well. As with any great idea, the response from most lay people is "This makes so much sense. Isn't it obvious? Isn't this how they are doing it already? How else would they do it?" We can provide a little bottom-up pressure as well. We need to go into our children's schools and ask where our kids stand on MTSS. We need to ask what intervention or enrichment they are getting. We need to ask principals how they are implementing MTSS in our schools.

When people talk about innovation in education what they really mean is creative problem-solving. That's the promise of MTSS. Let's see if we can't get this to happen for our kids, our teachers, our schools, and our communities.

Comments

Anonymous said…
I worked in a district that implemented RTI very quickly. I was fortunate to be in a school that anticipated this and was totally prepared to put the system into place. My observations were:
1. The RTI time was in addition to the regularly scheduled math time, as opposed to instead of. This resulted in crazy scheduling issues, and this was only for Math RTI. I'm no longer there, but I can't imagine how they managed to fit in the Reading and Writing RTI time. That being said, at this school, the RTI time did not occur in the classroom, but instead students were put into groups with other students with similar difficulties, so the entire grade level had to do RTI at the same time.
2. The above-average kids received little to no benefit. Honestly, I'm not sure what benefit the average kids received, but I'm a music teacher, so I may have missed something.
3. At the middle/high school level, students are put into Tier II/III math/reading classes in addition to their regular math/reading classes. The first RTI class takes up their elective, followed by PE, followed by Social Studies. This is a problem as there are some kids that go to school for music/sports. Take that out of their day and they may lose all motivation whatsoever.
Rachel
SeattleSped said…
*Yawn* Yeah, nice concept, but we heard this in SEAAC (the heretofore moribund Special Education Advisory and Advocacy Committee) in 2009. At that time (and perhaps now?) it was all talk and no action (but different terminology).
Anonymous said…
My understanding of the MAP test being "keyed" to WA State Standards is that you can correlate the MSP pass rate to a RIT score, then focus on students with a RIT score close to the pass rate.

The MAP tests are only measuring Math and Reading, so what about social studies and science? How do you know the content is being covered?

The thought of an extra 30 minutes of math or reading instruction, on top of the 60 minutes of core instruction, means less time on social studies, science, etc. Won't taking time from other core subjects also weaken skills? As it is, my elementary child gets only about 1/2 hour of social studies a week.

not a MAP fan
jd said…
Can I speak up with a MAP success story?

One of my kids is in 2nd grade. She started off kindergarten pretty much near the middle of the bell curve -- a bit stronger than average, maybe, but perfectly typical.

Over the following three years, MAP allowed us to document her systematic slide relative to her peers, eventually hitting single digits. Her classroom teachers never noticed that there was a problem, because she's clever, and compensates well, but when it was just her having to focus on questions with no cues from her peers or other classroom materials, there was no hiding.

Prompted by the MAP results, we had her evaluated and it turns out she has classic ADHD. She started treatment, and between winter and spring her MAP scores shot up 50 percentage points. Her classroom performance improved tremendously as well, to the point where the teacher commented that she really had missed how much my daughter was struggling.

Because the teacher had not actually appreciated those struggles, MAP was the -only- tool that we had to alert us to there being a problem, and to convince her pediatrician that she was suffering from impairment of some kind at school (since "impairment in two different settings" is one of the main components of the diagnosis).

So, while I'm completely against MAP being used to evaluate teachers, my real world experience is that it was ABSOLUTELY ESSENTIAL in helping get needed interventions for my kid.
Charlie Mas said…
JD, I'm delighted by your story. This is exactly what Ms London was talking about - using the MAP as a screening tool to see if there could be a problem. That was followed by a diagnostic tool to determine what the problem was.
jd said…
Exactly, Charlie. I just think it's important to have -something- objective in place. I know plenty of well-meaning people (and probably myself a few years ago) who would have been all for just scrapping all standardized testing.

But, having just experienced the good that can come of having something independent of classroom assessment, I have seen the added value that can come from the test itself. I know the perils of having such a test in place for the -teachers-, but for the students it can actually serve a valuable purpose.
word said…
We very effectively used the MAP test to supplement math materials one year when we had an inexperienced teacher which manifested itself as a sudden 40 point MAP score drop. A little supplemental tutoring and we were back on track almost immediately and easily survived a year with a sub-par instructor. I do not think that the MAP should be generally used for teacher assessment but parents can make good use of the data to assess the semester by semester learning situation for their own kids.

We also find the MAP subject breakdowns instructive in targeting which math or reading concepts represent particular weaknesses.

I also think it is critical to be able to assess students using a national standard rather than something specific to Washington state.
jd said…
Charlie -- I also should add that the follow-up worked very much like Ms London envisioned. We had a site intervention team meeting with my daughter's teacher, the special ed leader, the hearing/speech pathologist, the school nurse, and the principal. We spent an hour talking about what steps we needed to take, and all of us agreed on an approach.

That said, this was at BF Day, which as Floor Pie has pointed out repeatedly, is AWESOME at dealing with their students' challenges in a supportive way. I was a bit choked up during the meeting, because I was surrounded by a group of people who clearly loved my daughter, knew her extremely well, and were all very invested in wanting her to succeed. I wouldn't give that up for all the language immersion or montessori in the world!
Linh-Co said…
We are fans of Ms. London as she seems to have a common sense approach to curriculum and instruction. This is a breath of fresh air coming from central office. I hope she sticks around and is not too dragged down by central office politics.

She would also like to have parents involved on the curriculum advisory committee and not just give lip service to "community engagement".
Anonymous said…
Charlie --
Any chance you asked Ms. London if she was looking into changing the elementary and middle school math curriculum?
SPS Mom
I didn't post on the Banda interview, but after his contract was approved, I asked him about the math curriculum and he said he felt sure they need to look at it.

I would keep up a steady drumbeat to him after he comes on-board in July.
Charlie Mas said…
SPS Mom, No, I didn't ask her about that.

The District would like to be on a seven-year adoption cycle. If we were sticking to that cycle it would be time for a middle school and elementary school math adoption. We have deferred the adoption for budgetary reasons.

Although we didn't discuss it specifically, we did talk around it a bit. Here's some reason for optimism:

1) One of the uses of the MAP, she said, was to determine if the core instruction was working. If it isn't working well for a significant portion of the students, then it would need revision.

2) Alternative materials are a part of nearly every intervention.

3) She seems to prefer bottom up solutions to top down solutions. She favors flexibility within a framework. She does not appear to have any interest in "fidelity of implementation".

In short, the top down materials mandate appears to be a dead issue.

She doesn't seem to be the sort of person who would make the mistake of substituting standardization for alignment.
Eric B said…
In your post you state that MAP doesn't work for formative assessment. I disagree. If one only uses the single RIT score it doesn't, but by drilling down into the data one can look at which areas ("strands") are weakest and strongest, and instruction can thus be targeted. In addition, it can be used to group students to work on areas of common weakness. If, as a teacher, I have 10 students working below grade level, 5 of them may not be getting ratios while another 5 may not be doing well with probability. The MAP strand scores would indicate this; the overall RIT score would not. By grouping them and targeting their individual needs, I can bring them up to grade level more efficiently and quickly.
Anonymous said…
MTSS, of course, also means providing something outside the core instruction for students who are working beyond grade level.

The federal government requires that schools provide a "floor of opportunity" to all students. It does not require that students maximize their potential, as much as we might wish for such a requirement. Courts have many rulings on just how low that floor can be. It's pretty low. So, there's no "of course it means blah, blah, blah..." for students above standard. RTI is simply not about that at all.

-parent
Anonymous said…
Without surveying parents about how much they supplement the [weak] math materials, the MAP scores give an incomplete picture of their level of effectiveness. I can tell you that among my children's classmates, there is a lot of supplementation. We give Math'n'Stuff quite a bit of business. Also, what about Mercer and Schmitz Park, don't MAP scores show how much better achievement could be with other materials? It's not just the teaching...

Does "Top down" mean new and possibly better materials, while "bottom up" means spending more class time to make up for the poor materials? Why not have the strongest core materials as a base, then allow supplementation with alternative materials?

Each year I debate about whether or not to have my children take the MAP test, given their scores are used as justification that the materials are working.

tired parent
Anonymous said…
Interesting discussion, and description of the best of what can happen with standardized impersonal assessments (thanks for the anecdote JD). I'm a believer in impersonal standardized testing/assessment, if it's used properly. So often, it's not for the kinds of reasons jd describes (an example being the ability of smart students to potentially "trick" assessors who want to believe that everything is OK) It's nice to hear of at least one story where it was.

I myself was blind as a bat from 2nd through 4th grade, and no one noticed (or at least noticed enough to do something), because I would memorize things on the board, sneak looks, hold the book closer, etc. to avoid glasses. I couldn't hide it anymore when I had to use a teleprompter on a TV show and everyone realized that I couldn't see text. Standardized assessment might have discovered the problem much earlier.

zb
Anonymous said…
Oops, my sentence got garbled up there. I think impersonal assessments are necessary because students/teachers can "trick" themselves. But, I think that impersonal/standardized assessments also are abused (to judge teachers and students, to trump teachers' personal assessments, ...). I wish there was a way to keep the good and not the bad.

(zb)
Maureen said…
Charlie, can you attach a "MAP" label to this blog post so we can find it later?

Do you know if the Winter MAP Results are available with the school names? I have always found it odd that MSP data is so available at the school level, but school level MAP data never seems to be posted anywhere. I am particularly interested in seeing the variation in score range across schools so we can see how difficult it might be for teachers to differentiate in different schools (actually the distribution by school and grade would be most interesting--the range could be very misleading.)
Anonymous said…
I agree that the plan outlined by Ms London seems great. However, doesn't it really require add'l aides to help students that are behind? Was there any discussion about add'l funds necessary to really intervene in an effective manner?
SPS Mom of 2
Charlie Mas said…
Lots to respond to.

@Eric B., If you can use winter MAP results to inform and differentiate instruction, that's marvelous. I've been led to believe that isn't happening very much around the district. That's what the folks in the JSCEE are hearing as well. I think that a lot of teachers are looking for more specific data refreshed more frequently.

MAP results can be used for grouping students for instruction and that is one use for it that was mentioned by Ms London.

@parent, the "of course" presumes good intent.

@tired parent, "top down" means that things are determined in the central administration and are more uniform while "bottom up" means that things are determined on a student-by-student basis.

Does home supplementation skew the outcomes when assessing the efficacy of elements of the curriculum? Yes, but it always has.

From Ms London I heard that she understands the right and wrong uses of MAP and intends to use it correctly. That said, she specifically would not offer any opinion on the use of the MAP to determine eligibility for advanced learning. That did seem contrary to her view of how the assessment should be used. We all agree that it could be used to suggest that a student might be a good candidate for an advanced learning program but she would not join me in saying that it should not be used to disqualify a student. She demurred.
Charlie Mas said…
@Maureen, I have added MAP and NWEA as keywords for this thread.

@SPS Mom of 2, Yeah. I did ask what this intervention would cost. Just so you know, the cost is not nearly as tough a barrier as the change this represents in instructional practice by teachers and instructional leadership by principals. There are schools (Maple and Mercer to name a couple) that are doing this now with their current budgets and lots of other schools that are doing it to some extent.

There is a cost since this work is labor intensive. Right now the introduction of less labor intensive methods is too hot a topic to raise. That's a second lap issue and we haven't really started running the first lap yet.

The cost will be met when our schools systems are re-designed a bit around MTSS.

Right now the District is in an awkward position. They want to move forward with this fairly bold change in practice, but they know that they lack the good will necessary to sell it and lack the authority to demand it.

It will begin as a pilot introduced by volunteer principals and schools who will get mostly moral support from the district. Look for these schools to be shown as models for other schools to follow as more schools adopt this practice. Eventually it will reach a tipping point and the story will be that this is how we work. Then the district will be able to sweep some stragglers forward. Some early success will go a long way to selling the program and widespread adoption will go a long way to the District making it the rule. It's like the state troopers. If everyone is doing 75 they can't pull everyone over so they can't pull anyone over, but if most people are driving the speed limit they can exercise authority over the few speeders.
Dorothy Neville said…
Eric B, I agree that in theory using strands for flexible grouping and intervention would be awesome. However, my understanding of the actual MAP is that its strand data is notoriously inaccurate. Joan Sias once explained to me the mechanism through which MAP adapts the test for each student, and it just about guarantees the strand data will be unreliable. I do not recall the details, but if I remember correctly, MAP bounces around from one strand to the next during the adaptive process, so the actual sampling of questions within one strand is extremely random and therefore the error measure is huge.

Also, will everyone please remember that the MAP is both reading and math? Some of the discussions pro or against MAP, both the actual test and the theoretical model, argue on just the basis of math. Does anyone find the reading MAP useful for anything except identifying outliers? How in the world can this be useful for identifying good teaching? While a math teacher might get some data about specific skills kids are lacking (although as I say above, it is limited), what do language arts and other course teachers get from MAP?

The issue of kids with hidden issues who are compensating and falling through the cracks is an eternal one. Some will be caught through the means of assessments such as MAP. Some won't. I think we do need to catch as many of those kids as early as possible, but MAP seems like such an expensive, time consuming device for the majority, there must be less costly, more responsive ways. Frankly, less emphasis on group work, more classroom based assessments that get shared with the parents in a timely manner would both be helpful (but not perfect).
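There is a simple statistical core to the strand-reliability concern above: the standard error of an estimate shrinks only with the square root of the number of items sampled, so a strand score built from a handful of questions carries a much wider error band than the overall score. A rough sketch (the item counts and the per-item spread are invented for illustration, not taken from NWEA's technical documentation):

```python
import math

# Rough illustration: the standard error of a mean scales as sd / sqrt(n),
# so a strand estimated from only a few items is far noisier than an
# overall score built from the whole test. All numbers are invented.
ITEM_SD = 15.0  # hypothetical spread contributed by a single item


def standard_error(n_items):
    """Approximate standard error when a score is based on n_items items."""
    return ITEM_SD / math.sqrt(n_items)


whole_test = standard_error(50)  # overall score from ~50 items
one_strand = standard_error(8)   # a strand sampled by only ~8 items
print(round(whole_test, 1), round(one_strand, 1))  # → 2.1 5.3
```

Under these invented numbers the strand's error band is more than twice as wide as the overall score's, which is the shape of the problem Dorothy describes, whatever the true figures are.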
Anonymous said…
MAP also has a Language usage test - grammar, punctuation and all that - but, yes, Reading and Math are the only assessments used by SPS and the ceiling of the reading test is being hit by some elementary students. It seems more a measure of how well read a student is, rather than how well they read.

Additionally, with the institution of Readers and Writers Workshop, we are not seeing novels read as a class. They are missing out on some of the teacher guided literary analysis (discussion is largely student driven) and not always covering common terms assessed on MAP (extended metaphor, personification, parallelism, oxymoron...). I'm guessing "text-to-self" is not a term evaluated on MAP.

-change the curriculum
jd said…
You know what would help effective use of the MAP results? Graphs. They make it trivial to see long-term trends for individual students, to spot individual strand scores that are anomalous, and to spot individual students in a classroom who have anomalous performance. The scores are in a table in a computer -- it can't be that hard!
Anonymous said…
jd- Our individual student reports do have graphs (middle school). You can see the trend, as well as the district average, all on one graph. There is another category called "norm group average," in addition to district average, but I haven't a clue what it means.

Having the comparison to a classroom or school median would be more meaningful, but that data is not shown to parents.

parent
Lori said…
Charlie said, "Does home supplementation skew the outcomes when assessing the efficacy of elements of the curriculum? Yes, but it always has."

True, but if the prevalence of supplementing is directly related to the materials being used, then it could be a significant confounding variable that makes future comparisons between different materials entirely meaningless.

Let's say that with EDM, X% of families supplement. And let's say that X is a large number because so many people have concerns about EDM.

Now, let's pretend it's 5 years later, and we are using Saxon or Singapore or whatever. Only now, the prevalence of parents providing supplementation has dropped because they have fewer concerns. Now Y% are supplementing, and Y is a small number.

Let's say that MAP scores stay flat despite the change in materials. People conclude that EDM is therefore equivalent to Saxon/Singapore/whatever.

*But* all they've shown is that (EDM + X% supplementation) is equivalent to (new materials + Y% supplementation). And, unfortunately, X and Y are numbers that the district does not/will not have and therefore can't control for.
Anonymous said…
We started supplementing after a year of EDM. How I wish we had started from day 1 of EDM. So little material was coming home, that we had no idea how inadequate the lessons had been all year (partially a teacher issue).

We taught multi-digit multiplication and long division before the lattice method and EDM's unique division methods were introduced. Our child could do math with the standard algorithms much faster than with the EDM methods. We then moved on to decimals and fractions (invert and multiply), order of operations, etc. Now in middle school, the efforts are paying off.

Thanks Math'n'Stuff (we like the Key to Fractions, Key to Decimals, Key to...series; they're simple, direct and seem to do the job).

tired parent
Anonymous said…
There's a fundamental problem with the use of MAP for assessments in support of MTSS: the district doesn't understand what it is or how to use it.

The vendors of MAP describe how each test result has an error bar, and how, because of these "SEM" error bars, the test should never be taken as a 100% reflection of the student's idealized 'true score'. MAP also has a ceiling (as all tests do), which makes it unsuitable for assessing individual advanced learners. Instead, it can be used with confidence for aggregate measures of larger student populations, and it can be used to 'suggest' (as you say) that the student is in need of different instruction.

But that's not how the district is using it. Instead, it is being used with a hard cutoff for eligibility into Advanced Learning offerings. Hard cutoffs may be used within AL offerings to gate access to higher math.

It may be simpler to use it this way, in the same way that it is simpler to ignore reality and walk across the street regardless of the traffic. The difference, of course, is that the test-obsessed administrators who love the precise numbers in this data (but who choose to ignore its limitations) aren't making decisions for themselves, but rather for other people, while choosing to ignore additional information from teachers or additional assessment with appropriate tests. Never thought I'd look back on the COGAT as the good old days.

- RollerCoasterFabio
Linh-Co said…
I do hope we will see some positive changes in C&I with Wendy London. Another thing to note: the Teaching and Learning department will now be responsible for professional development for the next school year. It seems like an obvious extension, but currently HR has been responsible for providing it.
Anonymous said…
The CogAT is still used for AL admissions - MAP is being used for the academic achievement portion (rather than ITBS?), and as a pre-screening tool (a good use of MAP, as long as it doesn't preclude additional testing). What I'd like to know is how MAP can be used for AL admissions, yet students can be denied access to higher level classes (Algebra I in 6th grade, for example) despite high MAP scores. Crazy, yes?

Using MAP for gifted and talented cutoffs is done by other districts, but they may be offering what is more like Spectrum, rather than APP. It's hard to say whether MAP scores are appropriate for the higher cutoffs of APP.

If you look at admissions for some place like Stuyvesant (high school, NY), they have their own specialized test and students are admitted by rank - available seats are filled by highest scores first.

http://www.stuy.edu/apps/pages/index.jsp?uREC_ID=126615&type=d

What is the best way to do AL testing and admissions?

parent
Anonymous said…
Wow! Great comments!

I do the same thing that Eric does: I printed out the identified group list and items to be taught, by strand, and tried to cover those items. Of course, Dorothy provided information I didn't know, which may undermine my attempts, as might the fact that students sometimes choose right answers to items they really don't know, which might move them wrongly into another category. This often happens with brighter kids, who can often - but not consistently - make good guesses.

I loved reading jd's comments because K-2 struggling students are not always easy to identify, and parents sometimes think teacher-identified "struggling" is really immature playfulness. And few K-1 teachers would broach ADHD as a possibility because most parents are not ready to hear that. When I perceive such a situation, I start with SIT, but we always go through endless interventions - usually lasting into the next grade level or even the one beyond - before everyone is willing to acknowledge the existence of ADHD. And there are kids who do mature, calm down, and become more successful. I'm so appreciative, jd, that you didn't blame your child's earlier teachers for missing something. By seven or even eight, such behaviors are more easily identified.

Also, we can get many dynamic reports from the data. There has been a learning curve on the part of teachers. This has been my school's second year, and I was one of those assigned the task of learning and then reteaching teachers. I'm still learning. This is time-consuming, and one thing that administration has done to the teaching corps over the last few years is take more and more time from teachers. We don't have enough time in a week to do all the teaching and cover all the concepts added to our curriculum, let alone keep up with the pedagogy and assessment data available to us.

Re reading: my math scores went way up this year. But reading scores went down a bit. This year I taught from several programs introduced by our literacy coach. Because time was spent on these programs, actual time reading by students and one-to-one or group instruction went down. That was a mistake. I will return to my original better practice. Reading begets reading!

I do love writers workshop. I get better writing than ever before in my long career and boys who previously disliked writing time are now pencil-to-pencil in league with the girls.

I like MAP because I know the expectations. MTSS seems like a no-brainer. Isn't that what we thought we were doing all along? MAP makes it possible with more certainty.

n...
Anonymous said…
If MAP is unreliable for those scoring above a certain level, why do they have to keep taking it over and over?

ELB
Anonymous said…
"Eliminating wrong answers" - yes, if that's what is happening. I've seen students stare at a screen for an awfully long time and finally just choose one. With K-1, I rarely see logical guesses. Often I encourage them to choose one, or I see a child simply give up and choose one. While proctoring, you see these things. Students who have the ability to go through the process of elimination - which usually narrows the choices down to two when they're unsure - are probably reasonably assessed. I wouldn't argue with that. That is not typical of K-1, but it probably occurs somewhat at grade 2 because there is more ability to use abstract reasoning skills by age seven and eight.

n...
Anonymous said…
Does anyone know the details of how MAP scores are used for teacher assessment? For example, do they look at average growth per student over the course of the year for kids in a teacher's class? Do they look at individual students' prior-year scores at all, to see if scores actually drop from one year to the next? For middle school, do they use certain scores for certain teachers, based on subject? Do they compare the trajectory of one teacher's students to that of another teaching a similar cohort of kids and the same curriculum/classes? Even if students tend to slide a bit over the summer, I could envision a situation in which a good teacher was able to bring those kids at least back to where they were, whereas an ineffective teacher might end up with scores that are still depressed.

There's been a lot of talk of weak teachers (e.g., HIMS APP math and LA/SS), but the sense is that since APP kids already score high, the teachers will automatically look good. If a teacher were to negatively influence learning (e.g., by teaching things that are incorrect), would MAP score analysis be able to identify any of this? Or are APP kids already scoring so high on MAP tests that the test really can't detect such subtle changes? And if that's true, then how can it be used as evidence that a teacher is good?

ELB
Jan said…
ELB: good questions. Supposedly, I think, this is why MAP scores are only supposed to be a minor part of an overall assessment. But the questions you raise really highlight how nuanced the whole thing is. And throw in the teachers who may be "just ok" in many ways -- but who have a gift for being able to work with and academically rescue hard-to-teach kids. Those kids are there. They aren't going away. They will grow up and be productive (or not) adults. The teachers who have a real gift for being able to see and nurture the potential in these kids are uncommon, and are valuable assets to their schools. Good principals know who they are, and use them effectively. Clueless principals, not so much.
Anonymous said…
In looking at how MAP was sold and what it can actually do..... I am reminded of the banks that were too big to fail. It seems Ed USA is too big ... to function effectively ... and too big for any rational thought to be applied to decision making.

There are so many special interests driving this bus ... it will never head in an effective direction.

-- Dan Dempsey
Anonymous said…
I agree with that, Dan. It goes back to our version of capitalism currently sold in America. It is all marketing all the time for all profit. It's killing us.

n...
mirmac1 said…
Why does Denny's Jeff Clark get $271K from the Gates Foundation for RTI at his school? What about the other schools? How is that working out?
Jan said…
Dan -- you are right. Moreover, since teaching is much more akin to "raising children" than "making widgets," the entire "big corporation" model is fundamentally flawed and will never succeed (based on faulty assumptions) anyway. We need to return to as small a local model as can get the job done.

There was a time I thought that was maybe a "school based" model. I no longer think there are enough good principals to push that far down. But clearly we should return to something no larger than a "district" model. The state is pretty worthless (look at OSPI) and the Feds are beyond worthless!
