Art Works Blog

Taking Note: Learning is the New Word for Evaluation

The comprehensive budget proposal that President Obama submitted to Congress in March includes a report titled "Analytical Perspectives." The document lists a series of "Social Indicators," data points meant to contextualize federal spending priorities. There, under the heading of "Civic and Cultural Indicators," appear two metrics:

- Percent of adults who attended arts events (including movies) in a 12-month period

- Percent of adults who did leisure reading in a 12-month period

Both indicators derive from the NEA's Survey of Public Participation in the Arts (SPPA). Their inclusion in a major policy document attests that the U.S. government views cultural engagement as an activity worth tracking in the general population.

Yet no one would claim these are the only indicators to watch, whether for monitoring civic and cultural engagement or for reviewing societal outcomes as a whole. An indicator at best is an external measurement of change in some value over time, but without context it is value-neutral. Which indicators (and, hence, which values) to emphasize over others should be the joint decision of researchers, policy-makers, and practitioners.

Even after such metrics have been chosen and promoted, however, the question persists: how do we know whether investments in any single program or portfolio have "moved the needle" on a given indicator?

To answer that question definitively, a full-on evaluation is required. And not just any evaluation study: a randomized controlled trial (RCT). Hailed as the gold standard in program evaluation, RCTs track a cohort of individuals or communities over time. Such studies attempt to isolate the unique effects of a program relative to results from a different type of program and/or relative to what happens when no program is offered.
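For the methodologically curious, here is a minimal sketch of the logic behind an RCT, in Python. Everything in it is invented for illustration: the "engagement score" outcome, the group size, and the built-in five-point effect are not drawn from any NEA study. The point is simply that random assignment makes the two groups comparable on average, so the difference in their mean outcomes estimates the program's effect.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical post-program outcome (e.g., an engagement score); in a real
# RCT these would be observed measurements, not simulated numbers.
def outcome(in_program: bool) -> float:
    baseline = random.gauss(50, 10)       # natural variation across participants
    effect = 5.0 if in_program else 0.0   # the "true" effect built into this toy example
    return baseline + effect

# Random assignment is the heart of the design: it makes the two groups
# comparable on average, so a systematic difference in outcomes can be
# attributed to the program rather than to who chose to participate.
assignments = [random.random() < 0.5 for _ in range(200)]
treated = [outcome(True) for flag in assignments if flag]
control = [outcome(False) for flag in assignments if not flag]

estimate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated program effect: {estimate:.2f} points")
```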

RCTs represent one extreme of the evaluation spectrum. They are not always practical to conduct, and their costs may rival or exceed the often modest allocation of resources for the project itself. Other study designs, and even secondary data sources, may be used to explore the relationship between a program and the types of outcomes that a funder and grantee organization seek to affect.

Given the variety of experimental tools and techniques that organizations can use to assess performance, evaluations need not be viewed as stultifying or reductive. Instead, we in the NEA's Office of Research & Analysis see them as enabling a continuous learning process for funders and their applicants/grantees. Without this vital conversation involving program staff, arts practitioners, and researchers, performance measurement and evaluation are doomed to irrelevance.

In recent years, under the leadership of former Senior Deputy Chairman and current Acting Chairman Joan Shigekawa, the NEA has set in motion three projects to support the kind of continuous learning described above. Two are evaluations of NEA grant portfolios; a third evaluates a national data resource for practitioners and policy-makers.

1)  Post-Grant Reviews of Artistic Excellence: The NEA funds all grants on the basis of two overarching review criteria: excellence and merit. In a pilot study involving three arts disciplines (dance, media arts, and arts presenting or multidisciplinary arts grants), the NEA worked with five external reviewers to assess the final work products (and other materials) produced by recipients of NEA awards to create art.

So far, the entire process has been a shared enterprise of researchers, program staff, reviewers, and grantee organizations. The results from this initial assessment, still in progress, will teach the NEA more about the measurable components of artistic excellence and how to transmit that learning to the field through agency guidance and/or policies.

2)  ArtBeat Survey of Audience Engagement: The NEA has completed a second pilot study in partnership with grantee organizations to evaluate the extent to which NEA-funded art exhibits, live performances, and film screenings have "captivated" audience members, made them lose track of time, or altered their perceptions of others. 

The research tools and methodologies evolved from a previous pilot study, itself rooted in literature about "intrinsic impact" measurement and the psychological constructs of "flow" and "affect." (The research literature was posted to the NEA website, to inform other funders and arts organizations.) Both pilots benefited from a sample of participating grantees across virtually all NEA discipline areas.

3)  Arts & Livability Indicators: As the NEA increasingly receives and funds grant applications proposing to improve the livability of places through the arts, the agency seeks to bring best practices to the doorstep of creative placemaking practitioners. One vehicle for sharing this information is an e-storybook called Exploring Our Town, which will feature case studies and lessons learned from 70 communities’ creative placemaking projects funded through the NEA’s Our Town grants. That resource will launch on the NEA's website later this year.

Another resource for creative placemaking practitioners is a publicly accessible, national set of arts-and-livability indicators that have been developed by the NEA. The Arts & Livability Indicators consist of outcomes-related data in four broad domains: Residential Attachment to Community; Quality of Life; Arts & Cultural Activity; and Economic Conditions.

The indicators and their corresponding domains were designed to reflect outcomes that creative placemaking practitioners tend to associate with successful projects. Depending on the indicator selected, the outcomes data can be reported at the county, zip code, or Census tract level.
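To make the geographic levels concrete, the short sketch below shows how a tract-level indicator value can be rolled up to the county level. The records, column names, and values are invented for illustration, not drawn from the actual indicator files; the one real convention it relies on is that the first five digits of an 11-digit Census tract GEOID identify the state and county.

```python
import pandas as pd

# Invented example: one row per Census tract, with an 11-digit tract GEOID
# and a made-up indicator value (these are not real NEA data).
tracts = pd.DataFrame({
    "tract_geoid": ["36061000100", "36061000200", "36047000100"],
    "arts_orgs_per_10k": [4.2, 6.8, 3.1],
})

# The first 5 digits of a tract GEOID are the state + county FIPS code,
# so tract-level values can be aggregated up to the county level.
tracts["county_fips"] = tracts["tract_geoid"].str[:5]
by_county = tracts.groupby("county_fips", as_index=False)["arts_orgs_per_10k"].mean()
print(by_county)
```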

As mentioned above, indicators do not replace program evaluations in determining the long-term effectiveness or impact of, in this case, creative placemaking in individual communities. Nor do these indicators alone prove a causal relationship between creative placemaking projects and the outcomes in question. Instead, they record changes in values that, for the most part, appear highly relevant to organizations aiming to transform their communities through art and design.

The indicators' utility was evaluated by the Urban Institute, which the NEA commissioned last year to conduct a validation study and to draft a user's guide explaining the indicators' appropriateness and limitations under various conditions.

That report is being released today. We hope it will embolden readers to ask questions of themselves, their colleagues, and their community members about which specific outcomes resonate with their creative placemaking projects. The report may spur communities to collect data not reported in the national indicators, and it will allow evaluation studies to enhance their information collection with secondary data sources. Above all, it is our belief that the new research report, and the other two initiatives described above, will help to advance public knowledge and understanding about the contributions of the arts.
