Ever since Carnegie Museum of Art launched the Hillman Photography Initiative earlier this year, I’ve been reflecting a lot on what it means for a museum to be truly experimental. When I began my research three years ago, the major premise of the Initiative was to create something totally new in the field of photography. On the other side of a successful launch, I now realize just how ambitious our goal was. But at the time, it felt more like an amorphous challenge, albeit one that had all my problem-solving neurons firing. As with any experiment, we didn’t have a clear understanding of how the Initiative would manifest or what form it would take. Now that the project is up and running, I find myself looking back at how the Initiative was realized and some of the things we’ve learned so far.
For the initial concept phase, the goal was to be nimble and flexible, and to see what would happen when that nimbleness and flexibility confronted the complex workflow of the museum. You can read about those first two years here, in a post I wrote when we first announced the Initiative as a living laboratory for exploring the rapidly changing field of photography and its impact on the world. In it, I provide a window into the innovative process that required me to think unconventionally on a daily basis, and which led me to create a spiderlike concept map that reflected the dozens of (often opposing) paths I followed to explore the expansive world of photographic production, distribution, and consumption.
When I wrote that post last year, we were preparing to embark upon an intensive four-month planning process that gathered five internationally known experts (aka “agents”) together in a far-ranging conversation about photography. The agents include Tina Kukielski (our internal CMOA agent and co-curator of the 2013 Carnegie International), Marvin Heiferman (independent curator and writer), Illah Nourbakhsh (professor of robotics and director of the CREATE Lab, Carnegie Mellon University), Alex Klein (the Dorothy and Stephen R. Weber Program Curator at the Institute of Contemporary Art, Philadelphia), and Arthur Ou (assistant professor of photography at Parsons The New School for Design). We asked them to consider the most exciting issues and questions in a field in which billions of images are shared globally every day. For teachers, what made their students sit up and listen? For curators, how did their research connect with the person on the street? For artists, how did the digital revolution affect their practice? What aspects of photography did they—the experts—discuss around the kitchen table with their partners, friends, and kids?
As a result of those incredibly stimulating conversations we came to the realization that the most interesting aspect of photography today is how it travels. From creation through transmission, distribution, circulation, appropriation, and (at times) even death, the photograph follows a lifecycle that can be physical or virtual (or both). The projects that emerged from these discussions—This Picture, The Invisible Photograph, The Sandbox, A People’s History of Pittsburgh, and Orphaned Images—all explore the concept of this lifecycle and speak to each other as much as they do to that central concept.
As the researcher who spent a year analyzing the state of the field, benchmarking other museums and photography centers, and reaching out to international experts in photography, I can confidently say that the process we followed to create the structure behind the Initiative was truly unique. And as the program manager who spent the following two years developing and implementing the process that gave our agents carte blanche to come up with the projects you see on our website today, I can just as confidently say that the Initiative’s engagement with photography’s various manifestations is similarly unique.
LEAP HEADFIRST (& RALLY THE TROOPS)
Since that first meeting of the agents last April, we’ve gone from a completely blank slate to an intricate set of online and onsite projects that explore complex issues. Reflecting back on the initial stages of our process, I’d say true experimentation in a museum setting requires a willingness to leap headfirst into the unknown (relevant research and benchmarking in tow, of course). It also requires some high-level buy-in to the idea that the outcome will most likely challenge some established museum processes. For example, most of our internal processes revolve around the development, approval, and implementation of exhibitions and events. Typically an exhibition is proposed by a curator and is then reviewed and approved by an internal group of departmental directors. However, the Initiative was developed and implemented outside of that normal workflow. The point was to ask outside voices (the agents) to propose the projects that the museum would then implement and build. Maybe true experimentation happens when you ask not only how to pull off an experiment in the face of established processes, but how that experiment can change those traditional systems and expectations, challenging a museum to reexamine its own assumptions, benchmarks, and even its metrics of success.
Caveat: the process we’ve gone through to launch the Initiative has been, on one hand, in-depth, well-researched, and methodical. But that’s no surprise. Museums do those three adjectives pretty well. On the other hand, it’s been totally new, without precedent and, at times, frankly terrifying in a make-it-up-as-you-go kind of way. I’ve had to pull strings and sweet-talk colleagues into doing things that weren’t even remotely noted in their job descriptions. The months leading up to the website launch were also the craziest of my professional life. Nothing we were doing was customary, usual, or practiced. Every path we were carving to make the Initiative happen was a new one that needed its own customized road crew, made up of exceptionally generous and hard-working colleagues—I couldn’t have been luckier to have had them as partners in this endeavor.
At some point during those crazy months, we realized that the process was so experimental that none of our standard benchmarking procedures would suffice as evaluation metrics. And then came the secondary epiphany: we honestly didn’t even know how to define success for the project, let alone measure it.
I remember the first time we convened the 15-person meeting, full of the department and division heads who had banded together to implement the Initiative, to collaboratively develop the Initiative’s metrics of success. I opened the meeting with this question: “So, how have we as a museum developed metrics of success in the past?” There was a moment of silence and then the answer: “We’ve never actually had to do that from scratch before.” Wait a minute. You mean to tell me that this program I’m managing is not only completely experimental, but the process of evaluating it is too? (This is when my problem-solving neurons got another jump start.)
So we dove in. Our director of education asked key questions like, “How does being interested in what our visitors think change the museum?” And: “Does the Initiative change the way we establish online engagement with audiences in other exhibition or collection areas?” Our web and digital media manager got us thinking when he told us he could track not only how people were navigating or clicking through the website, but also where they were coming from and how long they spent on any given page. Our director of publications ruminated on whether we could use the Initiative as a model for developing standards for online writing for all museum projects, not only for content but also for tone and approach. Our marketing team discussed extending audience engagement from the typical art scene to the sciences, social sciences, and technology. From a curatorial point of view, we’re just as interested in assessing the less tangible metrics of success, such as how the Initiative shapes ideas about photography locally and internationally. How great would it be if some future program manager of another burgeoning experimental project at some hypothetical institution reached out to benchmark us?
And thus was born the “Goals and Metrics of Success” document that I find myself referring to on a regular basis. Because, like any strategic plan (or democratic constitution), you never want to be policing a dead or irrelevant document. Within days of launching the Initiative, we began gathering statistics to figure out what was going on. Were people coming to our website? Were they accessing our content? No, more: were they engaging with our content? Did we have to shift our marketing strategies? The hierarchy of content on our website? The types of demographic content we were gathering at events?
Here are some findings from our first full month of evaluation:
- We surpassed our wildest dreams in terms of video views for Part 1 and Part 2 of The Invisible Photograph. In terms of geographical distribution, our top views outside the United States have come, in order, from the UK, Argentina, Germany, Canada, Spain, and Russia. Our videos have had truly global viewership, reaching six continents. Now, if only we could get those people in Antarctica…
- Our two 20-minute videos had over 60,000 complete views and over 300,000 loads. This runs counter to the popular consensus that says shorter videos perform better and shows that there is significant appetite for more substantive content online. This is also double the total number of views we had of all CMOA-produced videos in 2013.
- The Initiative’s web activity equaled the activity on all other museum sites combined, including the main site, blogs, and microsites. In terms of web campaigns, nowseethis.org is on par with other high-profile web campaigns such as the 2013 Carnegie International.
- The earned media value for the Initiative in the first month alone was approximately $4 million. To put that in perspective, in all of 2013 our earned media was $8 million, which was itself a record year for us thanks to the 2013 Carnegie International.
- From the beginning of our social media campaign on March 16, Initiative-related content more than tripled the reach of the museum’s Facebook posts through user sharing and liking, with May’s This Picture having the highest level of engagement of all posts and A People’s History of Pittsburgh coming in second. We tracked a significant upward trend in people “liking” CMOA that corresponded to the launch of the Initiative, with an average increase of over 1,000. On Twitter, of the top 15 posts from the museum’s account @cmoa, more than half were HPI-related. These posts saw increased reach that was sometimes three to five times greater than that of the average museum tweet.
A sobering statistic, however, was the relatively modest onsite attendance for the Initiative’s related programs. We think this is in large part due to the fact that, in the experimental spirit of the Initiative, we did not prioritize onsite attendance when asking the agents to propose projects. We have since realized the tension this has created with our institution’s larger mission to encourage onsite attendance. So, we’re trying to make some changes that might address this issue, such as softening the price structure so that people can pay what they can afford, and no one interested in deeply exploring our content is excluded. We’re also discovering that promoting an onsite–online connection, which is at the heart of the Initiative, is one of the harder goals to accomplish. One of the best suggestions from our last meeting, made by our associate editor, was to encourage online submissions by increasing onsite payoff. We could print submissions, post them in the gallery, and then announce the “featured submissions” on our website. We think that this onsite payoff is one of the main reasons that Oh Snap!: Your Take on Our Photographs, another experimental museum project and an important precedent for the Initiative, was such a success last year. I’ll have to keep you posted on the results of all of this self-evaluation.
So—what have we learned? For a museum to be truly experimental it has to approach the problem in an unconventional way, challenge established processes, and take some real risks. It needs to actively evaluate and reevaluate itself to help the project stay ahead of the curve. (And most importantly, it needs the tools, knowledgeable staff, and a willingness to openly evaluate itself in the first place.) It needs to foster communication and trust among the participants. It has to ask: How does what we’ve done transform the museum? How does it shift our processes and internal working strategies? What works about the experimental method we’ve chosen and what requires some further tinkering? In the wake of articles like “Museums… So What?” by Robert Stein, deputy director of the Dallas Museum of Art, or “Lessons from a Year of Pop-Up Museums,” a guest post on the Museums 2.0 blog by Nora Grant, I think it’s even more important for museums to consider alternative means of reaching offsite audiences and engaging onsite visitors.
What happens next is anyone’s guess. But one thing is for sure: the more experimental the process, the more progressive you need to be to evaluate the outcome. Because as the old saying goes, if you don’t evaluate, you’ve already failed (or something like that). For any project that’s even remotely experimental, the need for unconventional thinking never ends, not after process development, not after implementation, and not even after evaluation. But, I would argue, therein lies the fun. And I think your problem-solving neurons would agree.
This is an expanded version of a post published on July 8, 2014, on the blog of the Center for the Future of Museums.