David Leonhardt’s sweeping assessment of the performance of our government over the last 200+ years is, well, sweeping, but it captures the heart of the problem of big government: Even if well-meaning programs help people in need, these efforts are often slow, inefficient and not very effective.
Progressives deem government programs worthy ipso facto, while the far right wants to eviscerate the government – the good and the bad alike.
A third way is to set ideological assumptions aside and put programs to the test via the gold standard of evaluation: the randomized controlled trial.
Leonhardt, in today’s New York Times, reports on a small but growing trend in testing government programs:
“Less than 1 percent of government spending is backed by even the most basic evidence of cost-effectiveness,” writes Peter Schuck, a Yale law professor, in his new book, “Why Government Fails So Often,” a sweeping history of policy disappointments. As Mr. Schuck puts it, “the government has largely ignored the ‘moneyball’ revolution in which private-sector decisions are increasingly based on hard data.”
A solution? "The explosion of available data has made evaluating success – in the government and the private sector – easier and less expensive than it used to be. At the same time, a generation of data-savvy policy makers and researchers has entered government and begun pushing it to do better. They have built on earlier efforts by the Bush and Clinton administrations."
While I strongly doubt that this sensible approach will be adopted quickly in our current political environment, the trend bodes well.