Collective Impact is Hard, But We’re Not in the Dark: Advice on Measurement

Collective impact is hard. If you’ve ever done it, you know it beyond a doubt. If you’ve never done it, just read about the five conditions of collective impact and you’ll be able to guess how hard it would be to make it all work. Entire communities have formed around tackling the complexities that we face when trying to achieve collective impact. For this reason, I’ll repeat: collective impact is hard. But we all know the kinds of problems we’re facing today require everyone to pitch in if we’re going to solve them. The good news? We’re not completely in the dark here.

My own journey with collective impact started in Spokane, Washington in 2012. I was working in the Data Center of the Spokane Regional Health District, an incredibly innovative local health jurisdiction, when collective impact swept over the city. It seemed like one night I went to sleep with everyone thinking about coalitions, and I woke up with the whole city talking about collective impact. Soon enough I was serving as Data Manager for Excelerate Success, an education-focused collective impact partnership and a member of the StriveTogether network. And boy, was it hard. But I was hooked! I saw value in the network way of working and saw people and partnerships across the nation working tirelessly to figure out how to do it right. The complexity of it made me realize I needed to learn more, so I went back to school. Collective impact is what brought me to the University of Colorado Denver’s School of Public Affairs PhD program, where I study how organizations work across sectors to achieve common goals. Here’s some of the most helpful stuff I’ve learned so far:

Collective impact isn’t (exactly) a new way of working.

Collective impact is often framed as a new way of working, but this isn’t completely true. I can’t tell you how many times I attended a collective impact conference and heard something like “This is all new and unknown, but we’re in this together and we’re working hard to figure it out!” One thing I’ve learned so far is that this is only sort of true. The second part – about dedicated folks working incredibly hard to figure out how to achieve collective outcomes – is undoubtedly right. But the first part? Eh, that’s debatable, which is great news for us! It means we can learn from similar things that have already been tried, tested, and perfected. There are three areas of existing knowledge that I’ve found to be super helpful in how I think about collective impact: collaboration, measurement, and policy change. This blog post focuses on the second of these areas, measurement.

Make use of various resources and frameworks when measuring collective impact

Measuring collective impact is harder than other types of measurement because you’re not just focusing on a single project, program, or organization; you’re focusing on all these things in addition to how they’re interrelated, how they’re being implemented, and what kind of effect they’re having on people’s lives. There are often dozens and sometimes hundreds of organizations involved in collective impact networks, all trying to direct their efforts harmoniously toward collectively solving a big, unwieldy problem. It’s beautiful, really: like a symphony of organizations joined together to satisfy the wishes of the audience. But collective impact networks usually address problems that extend far beyond ‘wishes’ into the territory of ‘needs.’

Collective impact networks exist to get kids the education they deserve; keep neighborhoods alive and thriving; govern natural resources; and improve the health of communities. Since collective impact networks often try to measure broad population outcomes, network governance processes and performance, organizational outcomes, and programmatic outcomes, they’re being asked to track more information than most individual organizations or collaborative groups.

And they’re doing all of this while rapidly feeding what they learn back into a continuous improvement cycle! The shared nature of the work complicates measurement even further (at least until an effective data-sharing process has been established). If you’re wading through the measurement side of a collective impact initiative, you have my deepest sympathy. Here are some ideas and resources that may just help take some of that weight off your shoulders:

First, some ideas about measurement capacity. Let’s come straight out and say it: applied measurement is tricky business. This isn’t necessarily a bad thing, but it does mean you need access to people with some training and experience in the area. Many collective impact initiatives don’t have the funds to hire a full-time data manager or continuous improvement director, but they may not have to; it might be possible to partner with an organization that already has this capacity. If you’re lucky, that organization will be able to donate measurement services in-kind. But since they have salaries to pay too, it’s likely they’ll need some contribution. If you want the data side of your collective impact initiative done well but don’t need someone working on it full-time, consider contracting with research-savvy organizations such as foundations, think tanks, local health departments, workforce development councils, chambers of commerce, or school districts. If you need more than occasional support, you could explore shared staffing models in which an employee serves both organizations as part of her/his regular job. The two takeaways here: 1) measurement done well requires research and evaluation skills, and 2) you may be able to access those skills through partners in your network.

Second, I’ll review some useful measurement tools. We’ll start with how to measure process performance and work our way up to measuring population outcomes.

  • Performance measures. Performance measurement is vital for continuous improvement purposes. It allows you to see what’s going well and what needs improvement as you’re working toward collective outcomes. This kind of information lets you course-correct along the way. The literature on collaborative governance provides excellent guidance on ways to measure the performance of collaborative networks like collective impact networks. Leading collaborative governance scholars Kirk Emerson and Tina Nabatchi have four recommendations for measuring performance in these kinds of collaborative groups:
    • Use logic models
      • A logic model is a conceptual map of a program/project/initiative that describes its various parts and how they’re connected. Logic models usually specify program inputs, processes, outputs, and outcomes (see the sketch below this list for a simple, hypothetical example).
    • Link process and productivity performance
      • Clearly articulate how the everyday activities of your collective impact network lead to outputs (or products of work) that will lead to outcomes (or collective goals). Developing a logic model should help with this!
    • Use multiple units of analysis
      • As mentioned previously, measuring collective impact is particularly complicated since you’re dealing with so many different units of analysis. For this reason, Emerson and Nabatchi recommend using multiple units of analysis when assessing collaborative efforts. Possible units of analysis include: individual participants, organizational participants, the overall network, and the target population.
    • Be mindful that collaborative governance regimes (CGRs) aren’t static
      • Another thing that makes measuring the work of collective impact networks particularly challenging is that all those moving parts are ever-changing. For this reason, collaborative governance scholars recommend assessing networks at different stages of development:
        • Formation, where the focus is on getting participants together, agreeing on a common goal, deciding what to do, and building the relationships, trust, norms, and commitment needed to enable collaboration;
        • Stabilization, where participants work to gain external legitimacy for their efforts and develop and nurture the skills needed to sustain collaboration;
        • Routinization, where cooperation becomes the norm, and participants develop rules and guidelines for continued cooperation;
        • Extension, where the collaborative effort becomes seen as a viable operation;
        • Adaptation, where the CGR or its participants respond to the outcomes that arise from collaborative actions.

(Emerson, Kirk, and Tina Nabatchi. 2015. Collaborative Governance Regimes. Washington, DC: Georgetown University Press, p. 186.)
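
If it helps to keep your logic model in a form that can live alongside your data and measurement plan, here is a minimal sketch of one captured as a simple data structure. The program, components, and indicators are made up for illustration (a hypothetical book-access program); swap in your own.

```python
# A minimal, illustrative logic model for a hypothetical book-access program.
# Every element here is a placeholder; replace it with your own program's components.
logic_model = {
    "inputs": ["donated books", "volunteer hours", "partner school sites"],
    "processes": ["distribute books to K-3 classrooms", "host family reading nights"],
    "outputs": ["number of books distributed", "number of reading nights held"],
    "outcomes": {
        "short_term": ["students report more time spent reading at home"],
        "long_term": ["improved third-grade reading proficiency rates"],
    },
}

# A quick completeness check before the model goes into a measurement plan.
for stage, elements in logic_model.items():
    assert elements, f"Logic model stage '{stage}' is empty"
    print(f"{stage}: {elements}")
```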

Emerson and Nabatchi focus on three areas of performance measurement: establishing principled engagement among network participants, building shared motivation, and developing the capacity for joint action. I’ve provided just a few examples from their book below, but you may want to reference the entire book, Collaborative Governance Regimes, for the full overview on measuring performance of collaborative networks (see Chapter 9: Assessing the Performance of Collaborative Governance Regimes). You can also check out this article to learn more about the Integrative Framework for Collaborative Governance.

Principled Engagement
Extent to which participants:
• Recognize shared goals
• Arrive at a shared problem definition and theory of change
• Define concepts and terminology
• Are open and inclusive during communication
• Manage conflicts and disagreements

Shared Motivation
• Levels of perceived trust among participants
Extent to which participants:
• Are comfortable revealing information to others
• Deem the network to be useful and credible
• Are committed to the network

Capacity for Joint Action
• Extent to which arrangements enable effective administration and management of the network
• Number and types of leaders and leadership roles filled and unfilled (e.g., champion, sponsor, convener, facilitator/mediator, expert)
• Extent to which relevant knowledge was generated and developed
• Extent to which funding, administrative support, expertise, tools, and other resources were acquired

Source (adapted from): Emerson, Kirk and Tina Nabatchi. 2015. Collaborative Governance Regimes. Georgetown University Press: Washington, DC.
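
If you gather participant survey data against dimensions like those listed above, scoring can start out very simple: average the relevant items for each dimension and track how the averages move over time. Here is a minimal sketch, assuming a made-up survey file and column names; the items and workflow are illustrative only and are not Emerson and Nabatchi’s instrument.

```python
import pandas as pd

# Hypothetical survey export: one row per network participant, items scored 1-5.
# The file name and column names are assumptions for illustration.
responses = pd.read_csv("partner_survey.csv")

# Map survey items to the three performance dimensions described above.
dimensions = {
    "principled_engagement": ["shared_goals", "shared_problem_definition", "open_communication"],
    "shared_motivation": ["trust", "comfort_sharing_info", "commitment"],
    "capacity_for_joint_action": ["admin_effectiveness", "leadership_roles", "resources_acquired"],
}

# Average the items in each dimension for a simple per-participant score,
# then report the network-wide mean for each dimension.
for dimension, items in dimensions.items():
    responses[dimension] = responses[items].mean(axis=1)
    print(f"{dimension}: network mean = {responses[dimension].mean():.2f}")
```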

  • Program evaluation. In addition to measuring performance of your collective impact network, programs (or projects/initiatives) should be evaluated to see what kind of impact they made. Evaluation questions can be short-term (did students report spending more time reading after being given books?) or long-term (did the book access program contribute to improvements in students’ reading scores on standardized tests?). Since collective impact networks often involve numerous program-type activities and community-based projects, having someone contributing to your network who has a firm grasp on program evaluation will pay off in the long run. Being able to evaluate your programs will help you understand what’s working and what can be improved, with the added benefit of giving you solid measures to report back to existing or potential funders. (A simple analysis sketch for the short-term question above follows the framework list below.)
  • The six iterative stages of the CDC’s framework for program evaluation are:
    • Engage stakeholders
    • Describe the program
    • Focus the evaluation design
    • Gather credible evidence
    • Justify conclusions
    • Ensure use and share lessons learned
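
As promised, here is a minimal sketch of the short-term question from the book-access example: did students report reading more after receiving books? The file name, column names, and paired t-test are assumptions for illustration; a real evaluation design (comparison group, covariates, and so on) is something to work out with your evaluator.

```python
import pandas as pd
from scipy import stats

# Hypothetical evaluation data: one row per student, with self-reported weekly
# reading minutes before and after the book-access program. The file name and
# column names are assumptions for illustration.
students = pd.read_csv("book_program_survey.csv")

before = students["minutes_before"]
after = students["minutes_after"]

# Paired t-test: did the same students report reading more after the program?
t_stat, p_value = stats.ttest_rel(after, before)

print(f"Average change: {(after - before).mean():.1f} minutes per week")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```
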
  • Network evaluation. Just as it’s important to measure how your network is performing and to see whether your programs/projects are working as intended, it’s important to measure the collaboration side of your collective impact network. This allows you to see what the current landscape of relationships looks like, track how the network changes over time, and identify ways to improve how your partners work with one another – whether that means reducing duplication of effort, building new connections, or leveraging an organization’s strengths.
    • There are many ways to do this. A low-cost, accessible way to get started is to bring together a diverse team of participants to develop an actor map or causal loop diagram. On the other end of the spectrum is the option of developing a custom network study, which requires specific expertise in network science, research design, data collection, analysis, and visualization. But there are also options in between these two extremes, such as the PARTNER platform. The PARTNER platform was developed by Dr. Danielle Varda and her team at Visible Network Labs to make network analysis accessible to practitioners. It still takes a bit of tech savvy to use, but it’s accessible enough that most people who are comfortable with web-based applications and Excel spreadsheets should be able to use it without trouble. (For a taste of what basic whole-network measures look like under the hood, see the sketch below.)
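
Here is a minimal sketch of two whole-network measures – density and degree centrality – using the open-source networkx Python library and a made-up list of partner organizations. It is only meant to show the flavor of network measurement; the organizations and ties are placeholders, and this is not the PARTNER platform’s methodology.

```python
import networkx as nx

# Hypothetical edge list: each pair is two partner organizations that report
# working together. Organization names are placeholders.
partnerships = [
    ("Health District", "School District"),
    ("Health District", "United Way"),
    ("School District", "Library System"),
    ("United Way", "Library System"),
    ("Library System", "Community Foundation"),
]

network = nx.Graph()
network.add_edges_from(partnerships)

# Density: the share of possible partnerships that actually exist.
print(f"Network density: {nx.density(network):.2f}")

# Degree centrality: which organizations are most connected (potential hubs,
# or potential bottlenecks).
for org, centrality in sorted(nx.degree_centrality(network).items(),
                              key=lambda pair: pair[1], reverse=True):
    print(f"{org}: {centrality:.2f}")
```
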
  • Population outcomes. Performance, programs, and networks all build up to an end goal: usually, achieving population outcomes. Whether that’s reducing health disparities, improving education outcomes, or developing local economies, collective impact networks exist to accomplish the big, audacious goals that can’t be achieved by any one organization. These goals will likely take years – maybe even decades – to achieve, but it’s important to have a plan for how you’re going to see whether your network is making a collective impact. With the right resources, it’s possible to measure population outcomes yourself – but the ‘right’ resources are likely expensive and may be difficult to come by! But don’t fear; there are tons of amazing (and free) resources out there to help with measuring change at the population level. The tables below include a few of my favorites (and a small data-pull sketch follows them), but don’t forget to check with your state or local government offices; they oftentimes provide more detailed information on specific populations.

Health Data Sources

• America's Health Rankings – health rankings and indicators by state: https://www.americashealthrankings.org/
• County Health Rankings – health rankings and indicators by county: http://www.countyhealthrankings.org/
• Centers for Disease Control and Prevention (CDC) Youth Risk Behavior Surveillance System (YRBSS) – youth health behaviors: https://www.cdc.gov/healthyyouth/data/yrbs/index.htm
• Centers for Disease Control and Prevention (CDC) Behavioral Risk Factor Surveillance System (BRFSS) – adult health behaviors: www.cdc.gov/brfss
• Health Landscape – mapping of health, socioeconomic, and environmental indicators: https://www.healthlandscape.org/

Education Data Sources

• National Center for Education Statistics (NCES) – data tools to explore K-12 and postsecondary education statistics: http://nces.ed.gov/ipeds/datacenter/
• Office for Civil Rights – Civil Rights Data Collection: http://ocrdata.ed.gov/

General Data Sources

• US Census Bureau American Community Survey – population and housing: https://www.census.gov/
• Opportunity Index – opportunity indicators by state and county: http://opportunityindex.org/
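
If someone on your team is comfortable with a little scripting, several of these sources can also be queried programmatically. Here is a minimal sketch against the U.S. Census Bureau’s American Community Survey API; the dataset year and variable code below are assumptions to verify against the Census API documentation, and many of the other sources above offer plain CSV downloads if you’d rather skip the code.

```python
import requests

# Hypothetical query: county-level total population estimates from the ACS
# 5-year estimates. The year and variable code (B01003_001E) are assumptions
# to verify against the Census API documentation before relying on them.
url = "https://api.census.gov/data/2021/acs/acs5"
params = {
    "get": "NAME,B01003_001E",   # county name and total population estimate
    "for": "county:*",           # all counties...
    "in": "state:53",            # ...in Washington State (swap in your state FIPS code)
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# The API returns a list of rows; the first row is the header.
rows = response.json()
header, records = rows[0], rows[1:]

for record in records[:5]:
    name, population = record[0], record[1]
    print(f"{name}: {population}")
```
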
  • Sharing it out. What would be the point of measuring all your work if you didn’t share your findings with others? Whether you want to share internally with specific groups in your collective impact network or with the whole wide world, there are some great tools available to help. Here are just a few:
    • Start by checking out the CDC’s recommendations for ensuring use of evaluation findings through effective communication and dissemination.
    • Once you’ve developed your dissemination plan, you can use various web-based platforms to share your findings. Tableau is one tool for building interactive, web-based data visualizations of your descriptive data. The public version of Tableau is free, which is a great option if you don’t need to keep your findings confidential. Tableau can be tricky to learn (even for those with some data chops) but once you’re up and running you’ll have beautiful visualizations that can be combined into dashboards or embedded on web pages. If you’re using the PARTNER platform to evaluate your network, you can use the data visualizer and report builder.
    • You can also report your findings with fun, interactive infographics. Piktochart is an easy-to-use, free tool I’ve used to share findings with people who may not need to dig deep into the data or read through a whole report of technical findings. It lets you incorporate basic data visualizations, graphics, and narrative into an accessible and visually appealing format.
    • Now that you’ve developed these beautiful tools for sharing your findings, where will you host them? Well, if your collective impact network doesn’t yet have a website, you should probably fix that as soon as possible. Having a digital presence is an important way to let people know what your network is about, what you’re working on, and what you’ve accomplished. Building a basic website doesn’t have to be expensive; there are many free and low-cost options such as Jimdo, Squarespace, WordPress, and Wix (Jimdo is my favorite website builder). Ensuring people can learn more about your collective impact network’s work through a dedicated website or even just a page on partners’ websites is an important way to share your network’s knowledge.

Measuring collective impact takes time, skill, and creativity. It’s a necessary part of honoring the collective impact framework’s commitment to continuous improvement; it provides the information you’ll need to course-correct your work before it goes downhill; and it will feel pretty darned good to see when your work is making a measurable difference in people’s lives. Remember to value measurement as you would any other key component of your collective impact initiative, and don’t forget that there are loads of free and low-cost resources to guide your measurement adventures.

About the Author: Stephanie Bultema

As a current PhD candidate in the School of Public Affairs at the University of Colorado Denver and a researcher in the Center on Network Science, I spend most of my time learning about the connections between people, organizations, and policies. I’m currently putting this knowledge into action by providing research consultation services through Bultema Consulting LLC and serving as co-president of the School of Public Affairs PhD Student Association. I earned a B.A. in English Writing and an M.A. in Administrative Leadership from Whitworth University, with additional coursework completed through Washington State University’s Master of Health Policy and Administration program. 
