How Can We Know If Social Programs Will Work?

We would all agree that it is unwise to spend a ton of public or private money expanding a program or launching a new product unless we have a pretty good idea that it works. That's why, for instance, the federal Food and Drug Administration (FDA) requires drug companies to undertake extensive - and expensive - testing to determine whether a new medicine is safe and effective. The evaluation protects patients, of course, but also the taxpayers who help pay for medicines through Medicare and other health programs.

The "gold standard" for rigorous evaluations is the randomized control trial, or RCT, in which the impact on a group of program (or product) recipients is compared with a "control" group of non-recipients. When we don't insist on rigorous evaluations for major programs there is often waste and problems. For instance, the federal Head Start early childhood education program, launched in 1965, had spent tens of billions of dollars before it was properly evaluated in a study released, after long delay, in 2010. That evaluation found the program had no demonstrable impact on children after first grade - triggering new research and a long overdue debate about the best design for early childhood programs.

So does it make sense to insist on gold-standard RCT evaluations before the government or private philanthropy invests in any promising program or social policy venture? Not exactly. Here's why.

Most innovative ventures change significantly before they are ready for prime time. During this period of evolution, a fledgling venture needs good-enough outcomes data to provide feedback loops that help it tweak operations and test improvements. Even that type of data takes money and time to collect. But if potential government or philanthropic funders insist that the venture collect the extensive data needed for a thorough evaluation during this early stage, the effect can be to freeze innovation, forcing the organization into an expensive pause instead of letting it continue to refine its operations. As one expert puts it, it is "equivalent to putting auditors in charge of the R&D department."

I've seen this problem when looking at community-based "hubs" that are trying to find new ways to improve neighborhoods by integrating health care, education, and social services. For instance, Briya Public Charter School, in Washington, D.C., provides early education and classes for parents. Briya shares a building and a partnership with Mary's Center, a health clinic and social service organization. Working primarily with immigrants, the partnership delivers a wide range of services to local families, from parenting classes to a comprehensive set of health and other services.

But Briya/Mary's Center, like many similar hubs, struggles to get the support it needs to collect and analyze the data required to sustain and improve its daily operations. At the same time, it is under pressure to assemble the more elaborate and expensive data needed for the rigorous evaluation that a government agency or major private foundation demands before granting long-term support and funds for expansion. Even when small partnerships like this do try to collect data for a rigorous evaluation, they often face obstacles. Many local government agencies, for instance, are reluctant to share information, especially with a new organization. And when data is made available, it is often collected in such different ways by different agencies that it is difficult and costly for a small organization to use.

Do problems like these mean that rigorous evaluations have no place in deciding whether to spend money on expanding and replicating promising community-based strategies? No, but they do mean we should see such evaluations in context and take steps to help creative ventures actually reach the stage where they can show they deserve major funding.

How can we do that? First, timing is everything. Insisting on a rigorous evaluation too early in the evolution of a promising community-based venture can stymie its chances of ever perfecting a breakthrough approach. In the initial stages, private or public funders should focus on building performance- and data-tracking systems that help the venture measure progress and refine tactics, not on the requirements of a formal evaluation. Unfortunately, many ventures fail for lack of support in collecting the data needed during that early phase. Only after the initial trial-and-error phase, when the organization is established, should the emphasis switch to the more rigorous data collection and analysis needed to justify major funding for expansion or replication.

Second, we would see greater creativity if local, county, and state agencies were more inclined to share data. Beyond the costs and staff time needed to prepare data for release, especially in forms compatible with information from other agencies, government officials are understandably worried about client privacy. Fortunately, there is progress in tackling these concerns. For example, several major jurisdictions, such as Los Angeles County, California, and Allegheny County, Pennsylvania, are working with experts at the University of Pennsylvania to address data-sharing issues. Meanwhile, one private foundation, seeking to encourage evaluation-driven policymaking, is supporting "embedded" data experts as grantees within federal agencies to help cover the cost and provide the expertise needed to prepare and safely release data.

And third, state and local governments should explore ways of providing modest venture-capital-style funding so that organizations and partnerships can build the data and technical capacity they need during the early trial-and-error phase. Fortunately, some are taking this step. The state of Maryland, for instance, has created a fund to encourage community organizations and hospitals to develop novel partnerships for improving neighborhood health. Such funding enables promising approaches to get through the early experimental phase and prepare for a full evaluation.

There is a great deal of inefficiency in social programs, and many simply don't work. So yes, we should require a rigorous RCT before committing large amounts of public or private money to expanding a seemingly good idea. But we also need to encourage innovation, and that requires a more nuanced approach to collecting and analyzing data and to fostering early success.

Stuart Butler is a Senior Fellow in Economic Studies at the Brookings Institution.  
