Showing posts with label causal inference. Show all posts

Monday, January 11, 2010

Charter Schools Close the Achievement Gap

I have been happily working with a killer set of co-authors recently: Josh Angrist, Parag Pathak and Chris Walters of MIT, Tom Kane of Harvard and Atila Abdulkadiroglu of Duke. We are writing a series of papers that evaluate the effect of charter schools on test scores. We are focused on Massachusetts, where we have an excellent working relationship with its very capable office of education research, which has allowed us access to the statewide test score data needed to undertake an evaluation of this nature.

The key empirical challenge in understanding the effect of charter schools is selection bias: kids who go to charter schools are different in both observable and unobservable ways from kids who don't. You can try to control for the visible differences, as does this study from Stanford. But what we can measure in terms of student background is pretty scant in administrative datasets: sex, race, free/reduced lunch eligibility, special-education status, previous test scores. So there remains plenty of risk for omitted-variable bias. Are kids whose parents are highly educated or motivated concentrated at charters? Kids whose test scores were plummeting in the public schools? Kids who were not challenged in the public schools? All of these differences would contaminate any effort to compare the achievement of kids at charters and kids at public schools.

We solve this problem by exploiting the randomized lotteries conducted by over-subscribed charter schools. The lottery approach focuses on students who apply to charters, comparing outcomes for those who lose the lottery to those who win. A mere coin flip (or randomly generated number) separates the lottery winners and losers, so we can be confident that they are alike in every observable and unobservable way - except for their charter school attendance. This closely approximates the gold standard of a randomized, controlled trial.
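To see why the coin flip does the heavy lifting, here is a minimal simulation sketch (all numbers hypothetical, not from our data): winners and losers share the same distribution of unobserved ability, so the winner-loser gap isolates the charter effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical simulation: each applicant has an unobserved "ability" that
# would bias a naive charter-vs-public comparison. The lottery ignores it.
winners, losers = [], []
for _ in range(10000):
    ability = random.gauss(0, 1)        # unobserved, in SD units
    if random.random() < 0.5:           # the coin flip
        winners.append(ability + 0.15)  # assumed charter effect: 0.15 SD
    else:
        losers.append(ability)

# Because assignment is random, the winner-loser gap recovers the effect
# free of selection bias.
gap = statistics.mean(winners) - statistics.mean(losers)
print(round(gap, 2))
```

In the actual papers, this winner-loser contrast is the intent-to-treat effect; since not every winner enrolls, instrumental-variables methods then scale it by the share of winners who actually attend.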

We used this approach to study charter schools in Boston, and have just written a paper focused on a KIPP charter in Lynn, a working-class city north of Boston.* A policy report and two academic papers (on Boston and KIPP Lynn) have been born so far of this collaboration. All our results point to large and positive effects of charter schools on student achievement. The effects are larger for the kids who most need help: Blacks, Hispanics, those with limited English proficiency, special ed kids, and those with the lowest baseline scores. The effect sizes are huge - kids at charters gain 0.1-0.2 standard deviations each year on their peers at the traditional public schools. How "big" is this? Wicked big, as they say in my hometown of Somerville, a few miles south of Lynn. In Lynn, the Hispanic-White test score gap is 0.5 standard deviations, as is the Black-White gap. Interventions that come anywhere close to gains of that size are few and far between. By way of comparison, the STAR experiment found that smaller classes (sustained for four years, from kindergarten through third grade) increased test scores by about 0.2 standard deviations.
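One way to see how big: a back-of-the-envelope sketch (taking the numbers above at face value, and assuming the annual gain simply accumulates year over year):

```python
gap = 0.5                  # Black-White / Hispanic-White gap in Lynn, in SDs
annual_gains = (0.1, 0.2)  # estimated charter gain per year, in SDs

# Years of charter attendance needed to close the gap at each rate.
years_to_close = [gap / g for g in annual_gains]
print(years_to_close)  # [5.0, 2.5]
```

Under that (admittedly rough) linear assumption, two and a half to five years of attendance would close the gap entirely - which is why effects of this size are so rare.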

We are not the first to use the lottery approach; Caroline Hoxby, Jonah Rockoff and co-authors have looked at charter schools in Chicago and New York City using this methodology, and Roland Fryer has used this approach to examine charters in the Harlem Children's Zone. But the KIPP Lynn paper provides the first lottery evidence on a charter school that serves a large Hispanic population (the school is about half Hispanic): the New York City, Chicago and Boston charters are predominantly Black, and enroll fewer Hispanics than the surrounding schools. And it is the first randomized evaluation of a KIPP charter, a network that is spreading across the nation. The KIPP results are convincing evidence that charters can succeed with the students who have the greatest need. A recent UFT report raised the concern that charters are failing (or ignoring) these very populations. At least in Lynn, KIPP is achieving astounding results with these kids.

..................................................................................

*That's the Lynn in the chant sung to generations of children being bounced on their parents' laps: "Ride a horse to Boston, ride a horse to Lynn, better watch out or you're gonna fall IN!"

Wednesday, August 19, 2009

Mrs. Krabappel is not yet out of a job

This report on online education is being framed by the media as showing that online education beats face-to-face instruction in the classroom. So should we replace Mrs. Krabappel with a bevy of netbooks and a wireless router?

Not so fast. The research is far too weak to draw the conclusion that teachers can be replaced with online instruction. To their credit, the authors admit this up front:

"The most unexpected finding was that an extensive initial search of the published literature from 1996 through 2006 found no experimental or controlled quasi-experimental studies that both compared the learning effectiveness of online and face-to-face instruction for K–12 students and provided sufficient data for inclusion in a meta-analysis."

What were the limitations in the existing research that led the authors to this gloomy conclusion (which did not come across in the press reports)?

I) Internal validity

Out of the 1000+ studies the authors reviewed, 33 were randomized trials, and 13 were comparison-control studies with decent controls. The rest, ewww. The RCTs did show pretty big positive effects (0.2 SD), however, and we know that research is not democratic. So, what's the problem?

...which leads us to...

II) External Validity

a) Just one of the 33 randomized trials (and four of the 13 comparison-control studies) took place in a K-12 school. The rest were in colleges or training programs for medical professionals.

b) None of the (five) K-12 studies compared face-to-face instruction with online learning, which is the comparison that we all have in mind when we read the media reports. Rather, the studies compared 1) face-to-face instruction with 2) face-to-face instruction PLUS online learning. No teachers were taken out of the equation for either the treatment or the control group.

The bottom line (which did not come across in the press reports...) is that this research tells us nothing about whether online learning and face-to-face instruction in K-12 are substitutes in the learning process. It does, however, provide some evidence that they are complements.
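A footnote on those effect sizes: the 0.2 SD gains in the RCTs are standardized mean differences, the common currency of meta-analysis. Here is a minimal sketch of that statistic (Cohen's d), computed on made-up scores - not data from the report:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: raw gap divided by the pooled SD."""
    nt, nc = len(treatment), len(control)
    mt, mc = statistics.mean(treatment), statistics.mean(control)
    vt, vc = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((nt - 1) * vt + (nc - 1) * vc) / (nt + nc - 2)) ** 0.5
    return (mt - mc) / pooled_sd

# Hypothetical test scores: an "online" group and a "classroom" group.
online = [78, 82, 85, 74, 80, 83, 79, 81]
classroom = [75, 79, 77, 72, 78, 74, 76, 80]
print(round(cohens_d(online, classroom), 2))
```

Expressing every study's gap in pooled-SD units is what lets a meta-analysis average effects across studies that used different tests and scales.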