Can Sabermetrics Help Improve Education?

Last month, the New York City Department of Education released report cards for public high schools featuring a new statistic: college readiness. This measure captures the percentage of high school graduates who would qualify to bypass remedial coursework in city colleges. The New York Times published select findings, identifying schools with high graduation rates and high college readiness numbers and contrasting them with schools that delivered high graduation rates but conspicuously low college readiness.

Never mind the hair-splitting involved in this particular assessment of school performance. The whole graphic display of these findings isn’t all that different from something you’d find Bill James poring over, Moneyball-style. Billy Beane-counting baseball nerds would be searching for the meaning behind the numbers, trying to connect the dots between raw data and actual performance.

For example, two Brooklyn high schools have a 91% graduation rate. In college readiness, one school puts up a 91%, while the other has a woeful 9%. “What’s going on there?” they’d ask. Is it possible that one school is simply rubber-stamping diplomas? Or is it possible that the college readiness side of the measurement is askew, or biased, or inappropriately applied to the lower-performing school, which may have a specialized industrial or arts curriculum? Is the college readiness metric even reliable? Or relevant? Questions, followed by more questions, all designed to find meaning in the data.
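To make that gap-hunting concrete, here is a minimal sketch in Python of how an analyst might flag schools whose graduation rates and readiness numbers diverge sharply. The school names, figures, and threshold are invented for illustration; they are not actual NYCDOE data.

```python
# Hypothetical report-card rows: (school, graduation rate %, college readiness %).
# All figures below are illustrative, not actual NYCDOE data.
schools = [
    ("Brooklyn High A", 91, 91),
    ("Brooklyn High B", 91, 9),
    ("Queens High C", 78, 60),
]

# Flag any school whose readiness trails its graduation rate by a wide
# margin -- the "rubber-stamped diplomas?" question in numeric form.
GAP_THRESHOLD = 30  # percentage points; an arbitrary cutoff for illustration

for name, grad_rate, readiness in schools:
    gap = grad_rate - readiness
    if gap >= GAP_THRESHOLD:
        print(f"{name}: graduates {grad_rate}%, college-ready {readiness}% "
              f"(gap of {gap} points -- worth a closer look)")
```

A flagged gap is only the starting point, of course; the sabermetric move is to ask why the gap exists before trusting either number.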

More importantly, the sabermetricians would rigorously question the very value of the measurement itself. In this case, they might open with a discussion about whether the ability to skip remedial coursework is in any way a meaningful predictor of college success. Furthermore, they might question the value of measuring college preparedness at all. The sabermetrician’s stock in trade, after all, is questioning the unquestioned; unexamined data has no meaning.

In this case, they’d explore the association (if one even exists) between being required to take remedial algebra in college and eventual scholastic or professional achievement. It’s possible to imagine a data set that nullifies the advantage of college “readiness” as measured by the NYCDOE. Sabermetricians would follow those college freshmen who were tagged for remedial writing or math classes and might find it benefited their college performance, inasmuch as their undergraduate careers began with a reinforcement of the fundamentals. Beyond the existential conversation, they’d look to cold, hard data to establish an algorithm that might be a more reliable predictor of college success.
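For a sense of what such an algorithm might look like, here is a toy sketch of one common approach: a logistic regression from high-school-era features to a college outcome. Everything here is an assumption for illustration; the features (GPA, attendance, remedial placement), the simulated data, and the outcome definition are all invented, not anything the NYCDOE or a real study uses.

```python
# Toy sketch: fit a logistic regression predicting a college outcome from
# hypothetical high-school-era features. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: HS GPA, attendance rate, and whether the student
# was placed in remedial coursework as a college freshman.
gpa = rng.uniform(1.5, 4.0, n)
attendance = rng.uniform(0.6, 1.0, n)
remedial = rng.integers(0, 2, n)

# Simulated outcome: graduated college within six years (purely synthetic,
# generated so that GPA and attendance matter and remedial placement barely does).
logit = 2.0 * (gpa - 2.7) + 3.0 * (attendance - 0.8) + 0.1 * remedial
prob = 1 / (1 + np.exp(-logit))
graduated = rng.random(n) < prob

X = np.column_stack([gpa, attendance, remedial])
X_train, X_test, y_train, y_test = train_test_split(X, graduated, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("coefficients (gpa, attendance, remedial):", model.coef_[0])
# A near-zero (or even positive) coefficient on `remedial` would be the kind
# of evidence suggesting the "readiness" penalty is smaller than assumed.
```

The point of the sketch is the workflow, not the model: pick an outcome worth caring about, let the data say which inputs actually predict it, and be prepared to find that a headline statistic carries little weight.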

This notion of “college readiness” is just one drop in the educational statistics bucket. A casual visit to the National Center for Education Statistics confirms that collecting data has never been a problem. It’s identifying what the numbers mean and don’t mean that has been the elusive quest. Handed over to sabermetricians, the data would be rigorously examined and decoded, and might point society toward meaningful reforms. In the spirit of Bill James and his tribe’s relentless questioning of the old stats and pursuit of new stats, we need to ask:

What can sabermetrics do for education?

Is it time to draft the Moneyballers into educational think tanks?
