
Tell Senators Cantwell and Murray to Limit Testing

How does once in elementary school, once in middle school, and once in high school sound for state testing for federal reports?  Because honestly, the overwhelming majority of teachers can already tell how a child is doing in school, so why spend the time and money on testing every year when it offers so little real help to teachers and parents?

FairTest has this link to send a letter in support of Senator Jon Tester's amendment (yes, that's his name) to limit testing to three times in a student's K-12 academic career.   (This would not preclude district assessments.)

Congress is taking up NCLB on July 7th. 

From Diane Ravitch, PARCC is Falling Apart with the Departure of Ohio:

Peter Greene brings us back to the halcyon days when central planners at the U.S. Department of Education dreamed of one big set of national standards–the Common Core–and two testing consortia, both dependent on the same set of standards. The Gates Foundation funded the Common Core and continues to fund various organizations to advocate for it and to “demand” annual testing mandates. The federal government funded the two testing groups–PARCC and Smarter Balanced Assessment–with $360 million of our taxpayer dollars.

It turns out not to have been a sound investment. PARCC started with 24 (or 25) states in its consortium, and more than half those states have abandoned the Pearson-made PARCC. With Ohio’s exit from PARCC, the number is down now to 10 states plus D.C. Some of those 10 are likely to drop PARCC.

 The technological problems have been extremely annoying, and the amount of time required for the testing (8 to 11 hours) is burdensome. Here is a question: Why is it that teachers can give a 45-minute test in reading and math and find out what their students know, but PARCC requires 8 to 11 hours to get the same information?

The market for PARCC has shrunk so dramatically that Peter Greene thinks it is only a matter of time until Pearson executives decide that the tests are not worth their time, the revenue stream is too small, and bye-bye PARCC.
 
Mercedes Schneider reports this about Pearson:
 
Now, cut to June 30, 2015, and an article in the UK Telegraph regarding financial advice from UK-based market research firm, Questor entitled, Questor Share Tips: Sell Pearson on US Education Weakness.
 
The shares don’t pass our test. Sell. 
 
Understand that this is not all happening because of unhappy teachers.  There are several reasons, but one huge one is unhappy parents who are willing to stand up, express their unhappiness, and opt their kids out.
 
As I am prone to saying, tick tock. 

Comments

n said…
It always seems to sound self-serving when we as teachers talk about less testing. I feel that instead of respecting my expertise in teaching, I'm looked at as someone who doesn't want to be held accountable. But really, we need less testing! Is there proof somewhere that all this testing has really helped? Perhaps I'd be convinced if someone could show me the evidence that maximum testing works. I know that our short school year makes demands that I currently do not have the time to meet. Something always gets rushed. That is not good teaching. I'm celebrating every state and school district that has the courage and the knowledgeable administrators to say enough! Seattle clearly does not.
Anonymous said…
I'm trying to figure out why the system is still graduating students who can't read or write beyond a 5th grade level. Perhaps this is the reason for government-enforced testing.
Maybe there needs to be a way to measure the system and provide corrective action in order to stop administrative promotion of unprepared students.

Some of the high school students I've worked with could not perform 6th grade math, yet their teachers did not perform or seek any type of intervention before they ended up in our church's program.

Where does the root of the problem begin? Is it a classroom problem, an administration problem, or something else?

Accountability matters

SE SPED
Lynn said…
Are you saying you think kids have been passing the HSPE without being able to read or write beyond a 5th grade level? That doesn't seem possible. As for repeating a grade, it has been shown to do more harm than good academically.

The problem begins when children arrive unprepared for kindergarten. It takes them longer to learn to read than their peers. They're still mastering reading when their classmates have moved on to reading to learn. They lose ground over summer breaks and it's practically impossible to catch up.
Anonymous said…
Sounds like it's not the school system you want educating students, but the parents. Is this correct?

SE SPED
Anonymous said…
SE SPED wrote:

Why the system is still graduating students who can't read or write beyond a 5th grade level. ......

Where does the root of the problem begin? Is it a classroom problem, administration problem or something else.
----

#1 Lack of effective interventions for struggling students.
Regular ed classrooms using ineffective instructional practices and materials cause even more students to struggle (the term is "instructionally disabled": students who do not have a real learning disability but are not learning efficiently because of lousy programs).

#2 Unfounded belief that differentiated instruction can meet the needs of students of vastly differing ability levels in the same classroom.

#3 Failure to institute comprehensive programs that have been shown to work.
In the last 15 years, only one state has shown statistically significant improvement on NAEP testing. Florida's improvement dates to when the state lowered class sizes and stopped promoting non-readers to grade 4 (other changes also happened). Florida elementary schools became focused on improving the teaching of reading in grades K-3 (not just grade 3). Parents became very interested in improving their children's reading skills as well.

#4 In math, the National Science Foundation has funded grants based largely on the political correctness of what college ed experts would like to see work. As a result, program grants are administered in a largely unscientific fashion and proven practices do not get funding. There is only financial accountability for NSF grants; results are not important. Innovative programs funded by the NSF, most of which produced poor results, cost $185 million and produced wealth for colleges and lousy results for students.

#5 Ignoring, at almost every opportunity, relevant data that could improve the system. Simple correlations do not necessarily reveal causation, and selectively picking favorable data to justify programs is widespread and absurd. (In a School Board Action Report for "New Tech" at Cleveland HS, SPS simply copied claims from the program's advertising and ignored the relevant data, because it never looked for the relevant data. Then it spent $800,000.)

Lots more problems as well. Currently, so many companies are interested in making a buck off of schools that real improvement seems irrelevant: MAP testing, CCSS, SBAC, Pearson, etc.

-- Dan Dempsey
n said…
Lynn's right. Not every child is ready or moves at the same pace. I'm not sure how you get every single student to grade level without a huge injection of funds and programs/interventions. You say the teachers "did not perform or seek" any intervention; how do you know? Perhaps they knew there was no intervention. This new notion of a multi-tiered system is still words on a page. Our staff has been introduced to the system, but so far no one has a clue who is going to intervene and how. Everyone has a plan. But getting the plan off the page seems to be the conundrum. And having a "meeting" doesn't get it done. It takes money, resources (those so-called interventions), and small groupings for those kids who need them. All we see year after year is lip service.

I keep bringing up Hamlin Robinson. Most kids who cannot read are in the range for dyslexia. Every school should have a highly trained reading specialist with experience in a broad range of interventions. Partnering with HR and even Berninger would be a beginning. I was a reading specialist once, and I got the position because the school didn't want to lose me as a successful teacher. But I was no reading specialist. I did my best to become informed. That's how it is done in Seattle. You have to spend the money and you have to demand really well-trained experts in each field. One size does not fit all. The same with sped teachers: many are not really qualified.

And a child's success isn't guaranteed by having good teachers. Family counts. I had a student two decades ago who never wanted to take her coat off and just sat staring into space. I bonded with her, I encouraged, I was physical in a comforting way. Finally, I got her to write a story for me. It was like waking her up and she started writing so fast - she just had to get it out. She railed against the injustice of something at home and how much she hated . . . I'm sorry, I can't remember it all. The writing was short and I saved it. It is somewhere. But she got it out, put down her pencil, and went back to staring into space. We had a family support worker at that time but still that child received no extra help. And we had many kids like her.

I think you have to teach, to be in the classroom, to really see and understand the needs of our at-risk children today. The legislature doesn't help. The District doesn't help. It is all on teachers and sometimes we have no options at all. Except to make our classrooms safe places for these children. And to provide whatever engaging activities and emotional/intellectual stimulation we can.

And now with alternative programs at risk, these children have even fewer opportunities to engage.
n said…
Dan, you always nail it down so well. Thanks. Perhaps you're the brain and I'm the heart...:)
Anonymous said…
n said

Why don't you call or email Nyland and ask him to accept Dr. Berninger's offer to partner with SPS?


SE SPED
Anonymous said…
To expand upon my answers in regard to SE SPED....

SE SPED wrote:

"Why the system is still graduating students who can't read or write beyond a 5th grade level. ......"
-----------------
Check out the following for the primary school grades...

Look at the Project Follow Through results, which showed a particular Direct Instruction model to be particularly effective, especially for educationally disadvantaged learners. Unfortunately, that model was largely ignored when it came to putting it into practice on a widespread scale.

Direct Instruction model. Developed by Siegfried Engelmann and Wesley Becker of the University of Oregon, Direct Instruction is scripted and specifies precisely what the teacher says and what the students' responses should be. Moreover, the program designers carefully sequenced the instruction so that students do not progress to higher-order skills unless they have mastered the prerequisite basic skills. There is a high degree of interaction between teachers and students, so the teacher receives continuous feedback about how well the students are doing and adjusts instruction accordingly. The program makes a specific distinction between on-task and off-task behavior: instruction is arranged so that students are fully engaged in learning (via frequent checking for understanding and praise from the teacher) the majority of the time.

----
National Institute for Direct Instruction

Siegfried Engelmann describes the Follow Through experiment, the results, and the aftermath in a chapter from his book, Teaching Needy Kids in Our Backward System.

continued
Anonymous said…
Excerpt from Chapter 5 page 227 of the above book (DI = Direct Instruction model):

DI was not expected to outperform the other models on “cognitive” skills, which require higher-order thinking, or on measures of “responsibility.”
Cognitive skills were assumed to be those that could not be presented as rote, but required some form of process or “scaffolding” of one skill on another to draw a conclusion or figure out the answer. In reading, children were tested on main ideas, word meaning based on context, and inferences. Math problem-solving and math concepts evaluated children’s higher-order skills in math.


Not only was the DI model number one on these cognitive skills; it was the only model that had positive scores for all three higher-order categories: reading, math concepts, and math problem-solving. DI had a higher average score on the cognitive skills (+354) than it did for the basic skills (+297). No other model had an average score in the positive numbers for cognitive skills. Cognitive Curriculum (High Scope) and Open Education performed in the negative numbers, at –333 and –450.

On the affective measures, which included a battery of tests that evaluated children’s sense of responsibility and self-esteem, our model was first, followed by Kansas. The models that stressed affective development performed even below the Title I average.

One of the affective tests described positive achievement experiences and negative experiences. DI children saw themselves as being more responsible for outcomes than children in any other model. On the test that assessed children’s feelings about how they think other people view them and how they feel about school, DI children had the highest scores.

Note that DI was over 250 points above the Title I norm and Open Education was over 200 points below the norm. The Abt Report observed that the high performance of children in our model was unexpected because we did not describe affective outcomes as an objective. The reason was that we assume that children are fundamentally logical. If we do our job of providing them with experiences that show they are smart, they will conclude that they are smart. If they experience success in school that can also be measured in the neighborhood, those experiences serve as fuel for the conclusion that students are competent. At the time of the evaluation, I had heard more than 100 stories of our children helping older siblings learn to read or do homework. The children knew that they could do things the average kid on the street could not do.


------------------------------

Ed school professors and publishers are interested in publishing and in innovation often just for the sake of innovation, while frequently ignoring what works, to the detriment of students.

-- Dan Dempsey
Anonymous said…
Testing … why do we have it?

One reason is manipulation of the general public. Another reason is to improve instruction. Other reasons include Arne Duncan, SB 6696, Race to the Top, NCLB, etc.

The testing during the WASL years was much more about manipulating the public than improving instruction. To believe SBAC testing will be much different is naive.

In reviewing the WASL results at grades 4, 7, and 10 in the early years (only these three grades were tested) and comparing them with the Iowa Tests of Basic Skills (ITBS), which were still in place in grades 3, 6, and 9, it was easy to see a large increase in WASL reading and math scores as the years passed, yet no corresponding increase in ITBS scores.

These positive WASL reading results were trumpeted by SPI Terry Bergeson as proof that the reforms she led were working (the ITBS was ignored and eventually phased out).

4th grade reading pass rates went from 48% in 1997 to 81% in 2006.
7th grade reading pass rates went from 38% in 1998 to 68% in 2007.

4th grade math pass rates went from 21% in 1997 to 59% in 2006.
7th grade math pass rates went from 20% in 1998 to 54% in 2007.

4th grade WASL state scores
7th grade WASL state scores

The ITBS results showed nothing like the above improvement happening in grades 3, 6, and 9 but we never heard much about those ITBS scores. (ITBS was discontinued around 2003).

Now we are transitioning to SBAC testing, which is apparently making students unpaid beta testers (if not alpha testers) for the SBAC.

We will learn little, if anything, in the first few years of this SBAC testing that will improve instruction, but you can count on manipulation to show that the CCSS is doing a great job as the years pass (just like WASL reading and math scores during the ramp-up years).
CCSS has been brought to you by the Gates Foundation and its lackey Arne Duncan. CCSS has the stamp of approval of Washington State legislators (and how many Seattle School directors???).

-- Dan Dempsey
"Maybe there needs to be a way to measure the system and provide corrective action in order to stop administrative promotion of unprepared students."

I don't think testing is the only way to do this. And, we've had testing for years and years and yet students still get pushed along.
Anonymous said…
Melissa wrote:

I don't think testing is the only way to do this. And, we've had testing for years and years and yet students still get pushed along.
---------
The crux of the difficulty lies in the failure to use evidence-based planning to correct this problem. Testing by itself does nothing.

Florida produced significant improvement but the requirement to pass a reading test to go to grade 4 was a very small part of the plan. The real success resulted from improved instructional practices but the testing was needed to make that happen.
----

At one time at least one SPS school was using DISTAR but it gave way to the next thing.

DISTAR is an acronym for Direct Instruction System for Teaching Arithmetic and Reading, a trademarked program of SRA/McGraw-Hill, a commercial publishing company.

The program is used particularly for historically disadvantaged and/or at-risk students.

DISTAR Reading has been extensively expanded and rebranded by SRA/McGraw-Hill as Reading Mastery while DISTAR Arithmetic I and II are still available (DISTAR Arithmetic III is out of print). DISTAR Language I and II have been updated and renamed Language for Learning and Language for Thinking. Direct instruction is one of several highly structured methodologies for teaching elementary, middle and high school students.

Siegfried Engelmann, a professor at the University of Oregon, created the Direct Instruction model. DISTAR was a direct result of Engelmann's success with Project Follow Through.

(Meanwhile, Carla Santorno bought Everyday Math and told us it would eliminate the achievement gaps (now renamed opportunity gaps) if fidelity of implementation occurred.
Check the data ... Carla was just spewing happy talk. The gaps did not narrow with EDM, even with fidelity of implementation.)


-- Dan Dempsey
