Saturday, August 14, 2010

I am desperately trying to catch up on a number of stories. (This one is long but I found it fascinating, so get something cool to drink.) One that caught my eye was this one from the NY Times, about a study that was asking this question:

How much do your kindergarten teacher and classmates affect the rest of your life?

The effect was thought to be short-term even for students who had great teachers.

By junior high and high school, children who had excellent early schooling do little better on tests than similar children who did not — which raises the demoralizing question of how much of a difference schools and teachers can make.

There has always been one major caveat, however, to the research on the fade-out effect. It was based mainly on test scores, not on a broader set of measures, like a child’s health or eventual earnings. As Raj Chetty, a Harvard economist, says: “We don’t really care about test scores. We care about adult outcomes.”

Early this year, Mr. Chetty and five other researchers set out to fill this void. They examined the life paths of almost 12,000 children who had been part of a well-known education experiment in Tennessee in the 1980s. The children are now about 30, well started on their adult lives.

On Tuesday, Mr. Chetty presented the findings — not yet peer-reviewed — at an academic conference in Cambridge, Mass. They’re fairly explosive.

The study found that, yes, some teachers were able to help students learn vastly more than other teachers, and yes, the effect largely disappeared by junior high (based on test scores). However, there was a kindergarten "legacy".

Students who had learned much more in kindergarten were more likely to go to college than students with otherwise similar backgrounds. Students who learned more were also less likely to become single parents. As adults, they were more likely to be saving for retirement. Perhaps most striking, they were earning more.

All else equal, they were making about an extra $100 a year at age 27 for every percentile they had moved up the test-score distribution over the course of kindergarten. A student who went from average to the 60th percentile — a typical jump for a 5-year-old with a good teacher — could expect to make about $1,000 more a year at age 27 than a student who remained at the average. Over time, the effect seems to grow, too.
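The arithmetic behind those figures is simple enough to sketch. This is just an illustration of the article's back-of-the-envelope numbers ($100 per year at age 27 per percentile gained), not the study's actual model:

```python
# Illustration of the earnings arithmetic described in the article:
# roughly $100/year of extra income at age 27 per percentile gained
# on kindergarten tests (the article's figure, not the study's model).
EXTRA_PER_PERCENTILE = 100  # dollars per year at age 27

def extra_annual_earnings(start_pct, end_pct):
    """Expected extra annual earnings at age 27 for a student who
    moved from start_pct to end_pct during kindergarten."""
    return (end_pct - start_pct) * EXTRA_PER_PERCENTILE

# A student who moves from the 50th to the 60th percentile:
print(extra_annual_earnings(50, 60))  # -> 1000
```

That reproduces the article's example of a roughly $1,000-a-year gain for a ten-percentile jump.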

So why is that? Some guesses include learning skills that follow you through life, such as patience, manners, discipline and perseverance.

The crucial problem the study had to solve was the old causation-correlation problem. Are children who do well on kindergarten tests destined to do better in life, based on who they are? Or are their teacher and classmates changing them?

So this study randomly assigned students to kindergarten classes (each with a similar socioeconomic mix), so all classes would be expected to do about the same. Nope, it didn't happen. Class size apparently did play a small role (hey, big surprise! the classes with 13-17 students did better than those with 22-25). And if there were slightly more kids from a higher-than-average socioeconomic status, all students tended to do better. But those factors weren't enough to explain the variance in test scores. So there was one thing left: the teacher.

Mr. Chetty and his colleagues — one of whom, Emmanuel Saez, recently won the prize for the top research economist under the age of 40 — estimate that a standout kindergarten teacher is worth about $320,000 a year. That’s the present value of the additional money that a full class of students can expect to earn over their careers. This estimate doesn’t take into account social gains, like better health and less crime.
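A present-value figure like that comes from discounting a stream of future extra earnings back to today. The sketch below is purely hypothetical: the class size, extra earnings, career length, and discount rate are my own illustrative assumptions, not the study's actual inputs, so it shows only how such a number is constructed, not how the researchers got $320,000:

```python
# Hypothetical present-value sketch -- NOT the study's actual model.
# Assumptions: 20 students each earn an extra $1,000/year over a
# 40-year career beginning ~22 years after kindergarten, discounted
# at 3% per year.
def present_value(extra_per_year, years, delay, rate):
    """Discount a stream of extra annual earnings back to the present."""
    return sum(extra_per_year / (1 + rate) ** (delay + t)
               for t in range(1, years + 1))

class_value = 20 * present_value(1_000, years=40, delay=22, rate=0.03)
print(round(class_value))  # on the order of a few hundred thousand dollars
```

Even with these made-up inputs, the result lands in the same ballpark as the study's estimate, which is the point: modest per-student gains add up when summed over a full class and a full career.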

Now happens to be a particularly good time for a study like this. With the economy still terribly weak, many people are understandably unsure about the value of education. They see that even college graduates have lost their jobs in the recession.

So this is a pretty important study and points to the effects of early childhood education (and probably even before kindergarten).

The corollary argument to this one can be found in Time magazine's article entitled, "The Case Against Summer Vacation".

Deprived of healthy stimulation, millions of low-income kids lose a significant amount of what they learn during the school year. Call it "summer learning loss," as the academics do, or "the summer slide," but by any name summer vacation is among the most pernicious, if least acknowledged causes of achievement gaps in America's schools. Children with access to high-quality experiences keep exercising their minds and bodies at sleepaway camp, on family vacations, in museums and libraries and enrichment classes. Meanwhile, children without resources languish on street corners or in front of glowing screens. By the time the bell rings on a new school year, the poorer kids have fallen weeks, if not months, behind.

Another major study, by a team at Johns Hopkins University, examined more than 20 years of data meticulously tracking the progress of students from kindergarten through high school. The conclusion: while students made similar progress during the school year, regardless of economic status, the better-off kids held steady or continued to make progress during the summer, but disadvantaged students fell back. By the end of grammar school, low-income students had fallen nearly three grade levels behind, and summer was the biggest culprit. By ninth grade, summer learning loss could be blamed for roughly two-thirds of the achievement gap separating income groups.

The article goes on to talk about different summer programs for low-income kids. They are NOT summer school but activities that provide enrichment (and sneak in some teaching). It's interesting that several of the comments on this story were that kids "need a break". I think the article certainly doesn't say outright "no summer for kids", but rather suggests less down time and more time having fun while keeping skills sharp or encouraging new interests.


ARB said...

so this study based teacher quality on the increase in testing percentile over the course of the year, if I understand it...? and it found that this improvement was important...? I know people are up in arms about teacher evaluations, but this doesn't sound like an unreasonable way to go for most kids.

kprugman said...

Education research has a large credibility gap with voters, especially anyone acquainted with Harvard. It's like hiring Goldman Sachs to explain why people should buy junk bonds.

If Obama had revealed his plans to reform education to the public during the election, he might not have gotten elected. I prefer Obama only because he speaks better. But I don't trust him any more than any other politician.

If the district won't negotiate a contract that is fair for all teachers, there will be arbitration, and eventually, hopefully, either the sup or the board or both will have resigned or gone to happier places.

kprugman said...

Education research is more suited to answering questions like whether or not pouring creamer in your coffee made your coffee smarter. Believe me, if research were relevant to classrooms, we'd know by now.

Maureen said...

kprugman says: Education research has a large credibility gap with voters, especially anyone acquainted with Harvard. .... Education research is more suited to answering questions like whether or not pouring creamer in your coffee made your coffee smarter. Believe me, if research were relevant to classrooms, we'd know by now.

Excuse me? So you advocate just doing random things and hoping it all turns out ok? Or doing nothing at all?

I would agree that bad research is bad. I will not agree that all research is bad. Is that what you are saying?

Unknown said...

Sorry if it's a little off-topic, but I wanted to alert you that the Seattle Times is at it again. They never give up. The article pretends to be balanced and show both sides of the issue, but the slant is pretty apparent.


Chris S. said...

One of my pet peeves is that much of what passes for "research" in the education arena does not meet the standards for research in other fields (peer review). I have not looked at much of the peer-reviewed ed. literature, but most of the stuff that I hear about is not even that -- little more than think-tank propaganda that nevertheless gets to call itself research.

So yes, bad research is bad. Most of the stuff that gets cited doesn't even warrant the name research.

kprugman said...

Most educational research is done by psychologists.

Curriculum research (the writing of curriculum) is more of an art than a skill.

We spend far more time creating curriculum, than evaluating it. The evaluation phase usually involves a group of student teachers with a mentor teacher at a local teacher college.

Furthermore, few adoptions of curriculum are ever implemented correctly. The only schools I saw a math adoption done properly were in Quebec and a public charter in San Diego.

If a district were going to tie MAP scores to performance pay and teacher evaluations, then why not eliminate social promotion and grades altogether? Promote students based on their MAP scores. It will accomplish the same thing and cost a lot less. Chaos is better than Big Brother.

Anonymous said...

Part of this study - the original Tennessee STAR research on class size - is peer-reviewed and is pretty well regarded in most academic circles as one of the few successful true experimental studies in education. This more recent study on the Kindergarten teachers and classmates used the dataset from the original STAR research and extrapolated other information from it; as of yet, it has not been peer-reviewed, but the NY Times published it anyway.


zb said...

"I know people are up in arms about teacher evaluations, but this doesn't sound like an unreasonable way to go for most kids."

The problem with this study -- with respect to teacher evaluations -- is that it tried only cursorily to identify anything about the teacher that produced the improvement in test scores that then resulted in increased lifetime earnings.

The study was done by economists, not psychologists, and the slides (which are publicly available) suggest that the methodology is decent (though that can, and presumably will, be established by peer review). It's the interpretation that's complicated. The authors show that being in a class that had a higher-than-average increase in testing performance over the course of a year was correlated with a lifetime-earnings benefit. This increase could not easily be explained by the economic/racial/ELL differences in the classroom. Lifetime-earnings benefits were "better" (in quotes, 'cause better was a statistical measurement) than other factors they could consider.

The problem with this analysis is that it can't escape the problem with correlational studies -- the correlation doesn't mean that being in the "higher testing" class produced the higher lifetime earnings. It's possible that other factors that differed between the classrooms were the real cause of the higher lifetime earnings.

And, more importantly, even if being in the class with greater test-score improvement actually did *cause* the increased lifetime earnings, we don't know which factors contributed (the class dynamics itself? the teacher? the absence of "bad apple" students or parents in the classroom? . . .).

As someone else points out, the STAR study is a decent one by many scientific standards -- decent by the standards of studies of large-scale complex systems. And this study seems like an interesting and potentially important extension. We do need to keep an eye on it, though, to make sure it does get peer reviewed.

Maureen said...

Thank you zb.

Are you familiar with any of the research SPS is claiming backs up their SERVE initiative? They posted a bibliography here.

I am wondering if any attempt has been made to evaluate teacher 'quality' independent of student test scores? From my limited reading, it seems that high 'quality' teachers are generally defined to be those whose students score well on tests.

A week ago I emailed Brad Bernatek to see if SPS has even done a basic sanity-check study to see if teachers who are fired for cause under the current system (so are presumably not 'quality' teachers) have students who show less progress over the year on the MAP. Crickets.

zb said...

Thanks for the link to the research cite. The manuscripts seem quite interesting.

They should really be linking to the NCEE paper Schochet & Chiang (2010) that Sahalia linked us to here, as well.

I think there are several interesting papers in this set of cites, ones certainly worth reading, and potentially ones that should inform our decision making about teacher hiring and evaluation.

One significant issue I see (without having read the papers carefully enough) is the implementation of solutions based on reading digests of the research, in a way that alters important variables in the research study but kind of looks the same. One such example is picking up the idea of extra pay for teaching in challenging schools, but picking the amount of extra pay randomly. That's a variable that could have significant impacts on outcome, but it gets twisted in the negotiation process, sometimes unrecognizably.

zb said...

One example of interesting research that might actually be useful can be found in Grossman et al. (2010). In it, the authors look at the attributes of teachers whose kids score highly on "value-added" measures of testing outcomes (where one tries to correct for known student factors). They look at factors that seem to differ between the "high value-added" and "middle value-added" teachers and identify the use of "Explicit Strategy Instruction" as a key difference between them.

An example of a teacher using "ESI" was the following: "For example, one high quartile teacher systematically broke down a newspaper article on 'skinny jeans' to help students understand the features of effective journalism. She instructed them on how to compose a list of '4 W's' (who, when, where, and what), how to use that list to create a focused lead, and then how to incorporate supporting details culled from graphic organizers. Students then wrote their own newspaper articles with an arsenal of specific strategies."

That's research you can use -- and yes, I know, the good/excellent teachers among you will say: of course I do that. But this paper says that not all teachers do this, and that the ones who don't seem less effective. Teaching teachers how to implement this strategy explicitly seems like something we could use.

I do believe that reading this research critically is important for informing policy decisions, and that dismissing it out of hand weakens our ability to see it used effectively (see, my complaint above, about policy makers skewing research so that important variables are randomly altered, making the research essentially useless).

(PS: not an education researcher or a teacher, just a parent who wants to know as much as possible about how to help teachers teach my -- and other people's -- kids well)

hschinske said...

I think the lifetime earnings analysis is actually the weakest part of the research, and probably thrown in so that they could have a soundbite about how much kindergarten teachers were worth. I'm far more impressed by the part about being more likely to go to college, less likely to be single parents (especially if this means less likely to be teen parents), and more likely to be saving for retirement (the last is, to me, a better measurement of financial health than income anyway).

Helen Schinske

zb said...

"But those weren't enough to explain the variance in test scores. So there was one thing left: the teacher."

I'll add that this was what the article said. But, that's not what the slides said. A "teacher" is not one thing, but a huge collection of things. And, the manuscript did not assess non-specific class effects.

Other studies (some with repeated measures of the same teacher, which hope to neutralize the effect of the class, but can only do so if classes are randomly constructed -- they usually aren't) could address those questions, but the study showing life-earnings effects did not, as far as I can tell from the slides.

zb said...

Helen -- all those factors were highly correlated with one another, weren't they? I think they used lifetime earnings because it's a nice economic number -- it allows one to monetize value. Economists live by monetized value, and once you've done that, you can more easily compare it with other expenditures. It's not just a sound bite to point out that a great kindergarten teacher may result in $320K of extra earnings; it's the way economists think about things and a good argument for making investments.

The most recent analysis of a preschool intervention study (I'm spacing on the name right now) does the same thing -- it looks at the cost value of things like reduced arrest rates.

I believe it's an important analysis to do for policy purposes, even when we don't believe that the value of education is the value of earnings.

Anonymous said...

In memory of Gerald Bracey, be wary of statistical correlations!
ken berry