June 26, 2020 / News & Blog
By John W. Jacobsen with Laura Roberts, David Ellis, George Hein, and Lynn Baum
“The crucible of a crisis provides the opportunity to forge a better society, but the crisis itself does not do the work. Crises expose problems, but they do not supply alternatives, let alone political will. Change requires ideas and leadership.” (New York Times Editorial Board, April 9, 2020)
“I have been struck again and again by how important measurement is to improving the human condition. You can achieve amazing progress if you set a clear goal and find a measure that will drive progress toward that goal in a feedback loop.” (Bill Gates, 2013)
This article explores one of the ideas that may shape the new normal for children’s museums as we come out of the lock-down: the idea of using data to get where you want to go, or navigating with knowledge. We have heard that data is important, but how can a children’s museum actually use data to inform decisions when so much is uncertain and past data seems irrelevant? By carefully selecting the data that will track your desired impacts and benefits. This article describes how to do this using the PIID Sequence and reports the findings from the six museums that used the sequence to improve their impact.
Prior to the lock-down, many museums used data operationally. Annual budget objectives, attendance forecasts, and the number of grant proposals are examples of common tactical uses of data. The next step is to use data strategically in forward planning to evolve into the new normal and then to prove and improve value.
A museum aspires to impact its community, audiences, and supporters, who in turn receive benefits from the museum. Impacts are the effects desired by the museum; benefits are what matter to the beneficiaries. The distinction is important. Both are end results, or outcomes, of the museum’s activities. Both should be intentional, and both should be measured. In good times, a museum should maximize impacts; in times of trouble, it should maximize benefits.
Benefits can differ from impacts: A family visiting an aquarium receives the benefit of a quality family experience, while the aquarium’s desired impact on the family might be to heighten awareness of conserving biodiversity. Alternatively, the benefits and impacts can be aligned, which is an edge for children’s museums: New parents bring their toddler to a children’s museum to see her develop and learn with new kinds of challenges; the children’s museum’s mission is to be a resource for learning about child development. Studying the alignment between a museum’s benefits and impacts may illuminate inefficiencies. Some degree of misalignment may be desirable for strategic or advocacy reasons, but too much may be inefficient and unsustainable.
What steps and skills do museums in the new normal need in order to gather and analyze data that will support their operations, inspire confidence in funders, and help them make informed strategic decisions about the future of their institutions? This article explores those questions, as they all lead to becoming a stronger organization better able to withstand uncertainties, such as pandemics.
As museum administrators, we need measurements to prove our value and advocate for our institutions. More fundamentally, we need the right metrics to drive progress toward our goals so that we can improve the human condition and preserve the trust the public has in museums. In a nutshell, we need measurements to make museums healthy and effective again.
The theory is that a purpose or goal, if successfully achieved through the museum’s activities, should produce its planned results, which should be observable by tracking predetermined key performance indicators (KPIs) that are qualitatively and/or quantitatively measurable through data.
Museums that have multiple sources of revenue (earned and support) end up serving multiple purposes for their various masters. Such museums are “multi-missioned,” and those missions should ideally be prioritized. For instance, a science museum might say its top mission is science learning (50 percent), with community gathering (30 percent) and economic development (20 percent) as secondary and tertiary purposes.
Because we have so many outcomes, audiences, and supporters, because every museum is unique, and because each museum pursues its individual missions differently, the global field of museums has no commonly accepted metrics to measure impact and performance. Our richness and complexity challenge any simplistic assessment of a museum’s value and impact, such as attendance or collection size. As a result, one of the museum field’s most challenging needs is to find ways to articulate, measure and increase a museum’s desired outcomes.
The tool for applying this theory to practice is the “PIID Sequence” (Purposes → Impacts → Indicators → Data fields). The sequence starts with museum leadership articulating one or more of its intentional purposes, then stating what changes or impacts it aspires to achieve for each purpose, what real-world observations might indicate that the impact is happening, and finally what data fields might measure or document changes in each indicator. The PIID Sequence is illustrated here:
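To make the sequence concrete, one row of a PIID worksheet can be sketched as a simple data structure. The purpose, impact, indicator, and data-field names below are invented for illustration, not drawn from any participating museum:

```python
# A minimal sketch of one row of a PIID Sequence, using invented example values.
piid_row = {
    "purpose": "Science learning",
    "impact": "Visitors leave more curious about the natural world",
    "indicator": "Repeat visitation by school groups",
    "data_fields": ["school_group_bookings_per_year", "repeat_teacher_count"],
}

def describe(row):
    """Walk one row of the Purposes -> Impacts -> Indicators -> Data sequence."""
    return " -> ".join([row["purpose"], row["impact"], row["indicator"],
                        ", ".join(row["data_fields"])])

print(describe(piid_row))
```

Each purpose can carry several such rows; the point is that every data field a museum collects traces back, through an indicator and an impact, to a stated purpose.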
John Jacobsen, Laura Roberts, David Ellis, George Hein, and Lynn Baum ran the Assessing Museum Impact (AMI) research project from 2017 to 2019 to explore whether the strategic use of data could help museums improve their impact and performance. The four AMI volunteer advisors found that the theory is promising in practice and warrants wider, deeper, and longer-term research. Their report, Assessing Museum Impact: From Theory to Practice – A Summary Report, was published in October 2019. The report details changes that six participating museums made to their programs and operations, and the methods they found most effective in the collection and application of data. The advisory team coached the museums through the PIID Sequence to select meaningful and revealing data fields and then to use that data strategically to inform decisions about mission-related outcomes.
With the support and partnership of the New England Museum Association (NEMA), the participating museums included: Gore Place (Waltham, MA); Children’s Museum of New Hampshire; Paul Revere House (Paul Revere Memorial Association, MA); Rough Point (Newport Restoration Foundation, RI); Seacoast Science Center (NH), and the USS Constitution Museum (MA). The freely available database of Museum Indicators of Impact and Performance (MIIP 1.0.xls) lists 1,025 potential indicators that some participants found useful for guidance.
Participants took stock of what data they were already collecting and reviewed relevant historical data to serve as the basis for analyzing incremental change over time. Collectively, the participants used a range of on-site and online data collection methods.
All methods were relatively low cost, as the project offered no incremental funding.
Participants generally favored quantitative data collection over qualitative data, perhaps because it was more readily available and seemingly easier to gather and apply.
Participants encountered logistical and capacity challenges. Allocating staff time, particularly during busy seasons, was difficult. Some felt their limitations kept them from fully completing the job of systematically collecting consistent data or accomplishing as much as they hoped.
However, while acknowledging the limitations, participants observed that it “doesn’t have to be as overwhelming as it sounds.” They realized they were already collecting (but perhaps not sufficiently analyzing or using) data. Further, there are both low-tech and high-tech options (online and in-person surveys, interviews, comment cards) for improving data collection and analysis. Training staff, interns, and volunteers was essential, as comfort with using data in one part of the operation (like evaluation of school programs) can inform efforts in another part. Starting with easier data-collection tasks (e.g., using stickers to ‘vote’ for favorites on a large gallery map, counting attendance at events, asking a single question, etc.), rather than daunting undertakings like visitor surveys, can get the process going. Participants found technology could make a huge difference, such as employing new customer relationship management or point of sale software. Also, carefully placing data collection points in prominent locations, where people have time to complete surveys or other instruments, was useful.
Participants were excited about the potential of social media as a source of information and feedback but often found randomness, quirks, inconsistencies, and unpredictability in ratings on platforms like TripAdvisor and Google frustrating and “perplexing.” Often qualitative social media data—visitor comments—were more useful than ratings, although time-consuming to monitor. “The reviews remind us that even with our high attendance levels, the visitor experience can always be improved.” Participants also found that positive comments on these sites provided quotes for marketing, and that responding to reviews helped boost their rankings. They also watched for overall improvement and a decline in the proportion of negative reviews and comments. As with survey data, participants found information from reviews helpful in depersonalizing criticism and improving staff performance. Other sites, such as Facebook or Twitter, present a different source of insights. By developing metrics to measure engagement with learning and content pages on these sites, museums saw the potential for “a reasonable measure of mission impact as people elect to visit our institution’s content and thereby shape their digital personas.” Finally, participants were reminded of the usefulness of web analytics for improving the museum’s website.
Participants were clear that raw data alone was not enough, that the utility of data depended on clear analysis and routine reporting. Some had reliable data from multiple years or sources that could be analyzed to identify trends. One participant used geo-location software to analyze data collected over prior years for trends and areas of opportunity in participation and visitation by schools and groups. But others confessed they had yet to “institutionalize” analysis or generate regular reports from the data they had. One noted the need for iterations in data collection and analysis, refining processes over time.
Some were left asking “so what? What do we do with this information?” Even with data in hand, analysis remained a challenge, often due to a lack of skills and experience. Participants noted other frustrations such as collecting data about visitor diversity or different departments collecting data in different ways.
Data and its analysis are helpful only when information is shared clearly and routinely, but how do we report what we have? Pages of numbers make people’s eyes cross; visuals are better. For instance, bar graphs are good for comparing similar museums or staff, where your museum is a differently colored bar seen next to others showing the same metric, such as the percentage of repeat teachers (an indicator of educational impact), or the absentee levels of floor staff. Pie charts, on the other hand, excel at visualizing different shares of a whole, such as a museum’s various sources of revenue, or a staff member’s allocation of work time.
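Even without charting software, a side-by-side comparison of the kind described above can be rendered as a simple text bar chart. The museum names and percentages below are invented for illustration:

```python
# A hypothetical comparison of repeat-teacher percentages across museums,
# rendered as a text bar chart. All names and numbers are invented.
repeat_teacher_pct = {"Our Museum": 42, "Peer A": 55, "Peer B": 38}

def bar(pct, scale=5):
    """Render one bar, one mark per `scale` percentage points."""
    return "#" * (pct // scale)

for name, pct in repeat_teacher_pct.items():
    print(f"{name:>10} | {bar(pct)} {pct}%")
```

A spreadsheet or charting library would of course produce a polished graph; the point is that the comparison itself, one metric across several bars, is what makes the number legible at a glance.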
A careful selection of KPIs is like the many gauges in an airplane cockpit that pilots use to fly safely to their destination. Likewise, a museum needs to integrate and prioritize its KPIs to understand what they say collectively. These metrics will reveal the direction the museum is pointed in, how that relates to where it intends to go, and where it wants to make progress.
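A minimal sketch of such a “dashboard,” assuming for simplicity that every gauge is a higher-is-better metric with a current reading and a target (the KPI names and numbers below are invented):

```python
# Hypothetical KPI dashboard: each gauge compares a current reading to a target.
# All KPI names and values are invented for illustration.
kpis = {
    "repeat_teacher_share": {"current": 0.42, "target": 0.50},
    "member_renewal_rate":  {"current": 0.68, "target": 0.65},
}

def gauge_status(kpis):
    """Summarize each gauge, like a pilot scanning the cockpit instruments."""
    return {name: ("on target" if v["current"] >= v["target"] else "below target")
            for name, v in kpis.items()}

for name, status in gauge_status(kpis).items():
    print(f"{name}: {status}")
```

A real dashboard would also track direction of change over time and flag metrics where lower is better (such as the share of negative reviews), but the principle is the same: each gauge is read against an intended destination.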
As they became more comfortable with data and analysis of that data, participants envisioned how this capacity could inform organizational assessment and decision-making as an ongoing or regular practice. For example, a deeper understanding of the sources of revenue (events, admissions, rentals, annual appeals) could help museums make decisions about the allocation of resources and policies related to revenue generation.
Involvement of staff at multiple levels can help to change organizational culture, building and sustaining staff buy-in. There may still be resistance, but if staff at multiple levels and from all departments understand the large picture, the shift can take place. Participants recognized the importance of keeping a culture of evaluation going at their institutions.
Participating museums reported using data to inform decisions about functional issues like staffing, scheduling, and budgeting. They also saw utility for thinking and operating more strategically. They were better able to articulate intentions and goals and then become more rigorous about collecting evidence to support decision-making. One noted that data either “supports what you think is happening and/or exposes false assumptions.”
Understanding visitors—who they are, why they are visiting, and what they enjoy—was a focus for many participating museums. Museums also looked at the nature and quality of the experience and the ways that interventions like signage, amenities, and tours are, or are not, successful. One looked at data about returning visitors to fine-tune activities that would appeal to them. Another looked at improving the visitor’s experience of the museum’s grounds to support introducing a grounds-only ticket option. A third used data to look at the impact of a new visitors’ center and found that it met some of their objectives but fell short on others.
Museums were better able to understand patterns in visitation and organized program participation, confirming impressions or identifying opportunities in the market. Importantly, some were able to demonstrate their reach into previously under-served communities. One looked at changes in the distribution of the zip codes of members to gauge how well the museum was reaching new audiences.
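The zip-code approach described above can be sketched in a few lines: compare this year’s member zip codes against last year’s to see what share of members come from areas the museum was not previously reaching. The zip codes below are invented for illustration:

```python
# Hypothetical member zip codes from two membership years; values are invented.
members_2017 = ["01801", "01801", "02139", "02139", "02139", "01960"]
members_2018 = ["01801", "02139", "02155", "02155", "01960", "02121", "02121"]

def new_zip_share(old, new):
    """Fraction of this year's members living in zip codes not seen last year."""
    previously_seen = set(old)
    newcomers = [z for z in new if z not in previously_seen]
    return len(newcomers) / len(new)

share = new_zip_share(members_2017, members_2018)
print(f"{share:.0%} of members come from zip codes new to the museum")
```

Tracked year over year, a rising share of new zip codes is one simple indicator that the museum is reaching new audiences rather than renewing the same ones.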
One museum used data to more closely analyze how school and outside community groups made decisions about enrolling in various programs. It also identified potential new markets for programs. Another was able to demonstrate an increase in facilitated school visits over time, which they interpreted as an indicator of success.
Learning assessments completed by teachers, their students, and other visitors reported positive results, primarily around children’s development of new skills and growing confidence.
Participants noted that data can “substantiate your claims about your museum,” supporting the case made to funders for maintaining or increasing support. It can also strengthen discussions with potential grantors, sponsors, and donors and keep the museum accountable to all stakeholders.
The authors observe that expanding the use of data from tactical operations to strategic decisions may inform and support three broad areas of museum practice.
All these are essential in creating a stronger, more resilient museum able to withstand external pressures that come from economic downturns, pandemics and other crises.
John W. Jacobsen is president (ret) of White Oak Associates, Inc., a U.S.-based museum analysis and planning firm. Jacobsen has led analysis and planning projects for museums around the world for over four decades, including when he was associate director of the Museum of Science in Boston. He is the author of Measuring Museum Impact and Performance and the Museum Manager’s Compendium.
Laura Roberts is the principal of Roberts Consulting.
David W. Ellis is consultant and President Emeritus, Museum of Science, Boston.
George Hein is Professor Emeritus, Lesley University.
Lynn Baum is principal of Turtle Peak Consulting.
Jacobsen, J. W. (2016). Measuring Museum Impact and Performance: Theory and Practice. Lanham, MD: Rowman & Littlefield.