Growing Network Impact: How Nonprofit Networks are Raising the Bar on Results

This paper is the work of a team at Bridgespan. The authors want to acknowledge the important research, content creation, and writing help of Rhett Dornbach-Bender, Andrew Dunckelman, and Justin Pasquariello. We also benefited greatly from the editorial advice of Regina Maruca and Professor William Ryan. Additionally, we would like to thank Boys and Girls Clubs, the National Guard ChalleNGe program, the Land Trust Alliance, Public Education Network, Big Brothers Big Sisters, and the National Academy Foundation for sharing their experiences with us.

When people are asked to identify nonprofits, certain names jump to the foreground — the YMCA, Big Brothers Big Sisters, the American Red Cross, Boys and Girls Clubs, The Salvation Army, Habitat for Humanity. What these household names have in common, besides size and fame, is that they all work through a network structure, with multiple affiliates across the country striving for significant impact in the communities in which they operate. In fact, nine of the ten largest nonprofits in the United States are networks. [1]

For decades, the primary pressure facing networks was to build a bigger footprint — be in more places, serve more people. Now, that pressure is being equaled by another: to get better. Networks, with multiple sites often operating similar programs, are increasingly expected to provide donors and supporters with a higher level of evidence that their work is effective and delivered consistently across the board. While such an "outcomes" orientation isn't new, its effect on the sector has been magnified, in part because of the difficult economy.

In the course of our work, we have seen several networks take promising steps to deliver measurably better results in achieving their missions. At these organizations, staff members from the central office are working collaboratively with affiliate leaders to improve the way in which their network's high-level strategy translates into action across the entire organization. They're figuring out where their best work is being done, finding ways to become more effective, and learning how to ensure that all affiliates benefit from the experiences and know-how of their peers.

This article focuses primarily on two kinds of networks — federated and associated networks. Both are collections of independent 501(c)(3)s, whose affiliates focus on similar activities and services. But whereas federated networks (such as Big Brothers Big Sisters or the Boys and Girls Clubs of America) offer mostly standardized program models, associated networks (such as the Land Trust Alliance or Public Education Network) allow for a more varied set of program models.

Nonprofit networks go by many names and are defined in a number of different ways, but the key differences among them are the degree of central control, variations in the types of activities of affiliates, and degrees of standardization in affiliate work. The chart below shows our full definition of organizational network forms.

All of the networks we've studied are in the early stages of implementing their new approaches; however, there is visible consistency in what they're doing. Specifically, we've observed five critical, common elements in their work. These organizations all:

  • Use the network's unified strategy to drive decision making
  • Create a common language by defining the dimensions of effectiveness
  • Create paths for improvement for affiliates
  • Diagnose where the network is today and uncover pockets of strength
  • Capture knowledge that matters: Use diagnostic results to facilitate learning and improvement

The rest of this article describes each of these elements in turn, but the actual process of becoming more effective isn't linear. In fact, most if not all of the organizations we've studied move back and forth between the elements as they learn more about what drives effectiveness in their networks and put what they are learning into practice.

Our goal in this article is to outline what these nonprofits are doing to ensure that each affiliate has the support and direction it needs to do the best it can, and to improve the entire network's ability to make a difference for the people and causes it serves. Separately, with significant input from a few network leaders, we have created planning guidance materials to help others think about how to plan such a transformation. To access that guidance, please see "Preparing to Grow Your Network's Impact."

Use the network's unified strategy to drive decision making

The organizations we observed all devoted a considerable amount of time to honing a network-wide strategy before they began to try to improve effectiveness across the board. In other words, both the center and the affiliates had already reached a fairly solid consensus about what they were trying to accomplish and for whom or what. So although network members operate in distinct communities, serving populations or causes that may vary by geography — sometimes subtly, sometimes significantly — a common strategy was already keeping each site looking in the same direction. (If affiliates throughout your network do not share a common, well-articulated strategy, then the organization is not ready to launch this kind of effort. For more information on developing a network strategy, please see "National Networks: Planning Can Align a National Nonprofit Network for Full Impact.")

Given that foundation, it was a natural step for each network to look to its strategy to guide decisions about collective improvement as well. And in each case, the network strategy has helped them find common ground in deciding what an "effective" affiliate looks like, what the different aspects of "effectiveness" might be, and what data affiliates should collect and share to help one another improve.

Consider the experience of Boys and Girls Clubs of America (BGCA), a federated network whose mission is to "enable all young people, especially those who need us most, to reach their full potential as productive, caring, responsible citizens." From 1998 to 2008, under the leadership of CEO Roxanne Spillett, BGCA doubled in size, expanding from 1,800 to 4,000 sites and broadening its reach to four million youth, in part by entering traditionally underserved communities such as rural areas, military bases, Native American lands, and public housing. BGCA accomplished this growth both independently and through partnerships, and as the work was moving forward, Spillett and her colleagues began to set their sights on changing the way the Clubs thought about, and assessed, progress. Specifically, Spillett said, BGCA wanted to transform the clubs from being "outcome intended" to "outcome driven with measurable results."

As Spillett explained, "We had reached our goal for growth. That ten-year process was 'phase one' of our transformation. We have clubs where they are most needed. Next, we wanted to be as deep in our impact as we are broad in our reach."

To that end, a team of leaders representing the center and affiliates used BGCA's strategy to home in on three clear goals for beneficiaries: academic success, a healthy lifestyle, and growth into a responsible adult. Spillett says those goals gave affiliates the traction they needed to design a plan to help one another meaningfully improve results. The goals gave them a starting point, and a point of reference, as they articulated the specific results each affiliate could and should be striving for. They also informed conversations about how systematic measurement could help every network member.

By mid-2010, the organization had identified a range of key performance indicators tied to achieving the three goals. Importantly, the indicators apply to all clubs, regardless of their different sizes, programming, and the nuanced characteristics of the populations they serve. These include: frequency of client (beneficiary) visits, selected academic success measures, facility utilization, and financial health. Each activity, or measurement, can be traced directly back through the three goals to the common, overall strategy, which Spillett says is crucial to the network's ability to progress as a whole, rather than improve only in isolated pockets. "These measures help us assess where individual clubs stand. They also help us identify, and then prioritize, the things that individual affiliates need to do to improve their results. And, they show us where one club might benefit from the experiences of one of its peers," she said. "What we are talking about here is not incremental improvement; we are talking about transformation."

BGCA has just begun the long journey to act on the decisions it has made at the strategic level, but the first steps are encouraging. For example, over the last year, clubs have begun to track progression through school for every BGCA participant, bringing the standards of performance measurement at the local level to the mission goal of "Be Great: Graduate."

Advice from pioneering networks:

  • The effort to transform thinking, culture, and actions across a far-flung organization should be thought of as a marathon rather than a sprint. The organizations profiled were very deliberate in creating a unified plan that aligned the entire network. When starting this process, it's also best to think in terms of what can be accomplished over a year or more, and what elements of the process should take root to enable ongoing learning and improvement.
  • This work is time-consuming and resource intensive. It is critical for the network's leaders to be clear up front about how the effort will help the organization improve at the local as well as the network level. Staff throughout the network need to understand that they will all get something out of this process: the people and causes they're trying to help will benefit.

Create a common language by defining the dimensions of effectiveness

As the BGCA example illustrates, when all affiliates share an understanding of what high performance looks like, then it becomes easier to identify the specific dimensions, or factors, that are reliable indicators of effectiveness. What information will allow the network to set clear expectations and compare results in a constructive way? What will help the network see how all affiliates are performing against their common strategic goal, and understand why some affiliates may be achieving more than others? What support will be needed to transform the effectiveness of the individual organizations and the network?

The Land Trust Alliance, a national associated network of 1,700 land conservancies, provides a good example of how coming to a shared understanding about the dimensions of effectiveness can be of value to affiliates and network leadership alike. In 2003, CEO Rand Wentworth focused the work of the organization on increasing the pace, quality, and permanence of land conservation in America. As he put it, "Our new lens shifted everything from activities to outcomes. It meant a cultural shift for everyone involved."

Wentworth recognized that in order to make that shift, the network would need a set of externally credible — and internally relevant — standards that define what it means to be a high-performing land trust. To that end, the Alliance developed the Land Trust Standards and Practices through a network-wide, collaborative process. The problem was that many groups were far from actually implementing those standards. So the Alliance created a national accreditation program to publicly recognize groups that implemented the standards. That program, Wentworth said, has proven to be "a magnet to draw energy and attention towards organizational transformation." To date, 130 of the Alliance's larger conservancies have received accreditation, together representing 3.6 million acres. When the current applicants achieve accreditation, about 54 percent of conservation land in America will be held by an accredited land trust.

What's more, as Wentworth noted, the land trusts that are applying for accreditation are working differently. Local boards are more engaged; for example, they are paying greater attention to due diligence on the projects they pursue. Those land trusts also appear to be thinking more in terms of long-term results; they are allocating resources differently, with long-term stewardship and conservation in mind.

Importantly, each of the organizations that have received accreditation has made significant changes in its organization in order to become accredited. Some have become more purposeful about dedicating funds to support stewardship and defense of conservation land and easements. Others are fundraising to invest in the staff, training, and technology they need to document their work — a move that will help ensure that those lands can withstand legal challenges and be conserved in perpetuity.

Wentworth calls the network's new focus "an enormous driver of organizational change and development." Through a seemingly simple action — developing that common list of core dimensions of effectiveness — the quality of land management has improved in meaningful ways.

Interestingly, the Alliance's core dimensions, and those of the other networks we studied, largely divide into two major categories: program dimensions that measure results of what the affiliate achieves in delivering the programs, and organizational dimensions that track how strong the affiliate is in an operational sense. Program dimensions for the Alliance include the monitoring of compliance with the legal regulations and rules imposed by local, state, and federal governments, for example. Organizational dimensions include the strength of each affiliate's board, its fundraising ability, and its ability not only to meet legal requirements for protecting land, but also to go the extra step to ensure that there will be funds to continue to protect the land over the long term to ensure its biological value.

We find this two-category breakdown to be a useful way to capture the range of important dimensions that networks might want to consider when tracking effectiveness. In addition, while we saw significant differences in key program dimensions across networks, we saw a great deal of commonality in the dimensions chosen to assess organizational effectiveness.

Advice from pioneering networks:

  • It is important to keep the list of dimensions easily digestible, condensing them to a manageable set of things to track. Since most organizations have endless potential metrics to consider, it's worth the time and hard conversations (with center staff, affiliate leadership, funders) to home in on a core set of three to six program dimensions, and three to six organizational dimensions.

  • When deciding on the dimensions, keep orienting against the organization's big-picture goals, and keep effectiveness and improvement in mind. These dimensions shouldn't look like a checklist; instead, they should demand value. For example, a question monitoring an organizational dimension wouldn't ask: "Do you have a board?" but rather: "How strong and engaged is your board?"

Create paths for improvement for affiliates

No basketball coach would ever tell a player to "just play more like Michael Jordan." Professional athletes achieve their highly refined skills by progressing through a series of developmental milestones. They learn one skill, and then continue to practice it and build upon it, introducing other skills as they become ready to handle them.

The same is true of network affiliates trying to become more effective. Once the networks we studied had developed a clear sense of the dimensions that indicate high performance, they shifted their focus to defining clear, intermediate developmental stages on the way to reaching those goals. No network will have every affiliate modeling all successful practices, so this key step recognizes developmental milestones that indicate movement in the right direction. Affiliates can calibrate their own performance against those milestones, using them to evaluate their own strengths and weaknesses. The network as a whole can also use them to identify pockets of strength, and potential sources of learning.

The Public Education Network (PEN), an associated network of Local Education Funds (LEFs), provides a prime example. LEFs provide support and advocacy for public education; as PEN's mission states, the network's leaders and members work together to "build public demand and mobilize resources for quality public education for all children." In 2009, PEN's leadership developed a strategy that included building a stronger network and defining more clearly what it meant to be an LEF.

Working with a group of affiliates, PEN's leaders identified a short list of the dimensions that contributed most to any given LEF's results. There were five program indicators, including the ability of LEFs to effectively use data and research to measure and improve student outcomes, and four organizational indicators, which included leadership, governance, funding strength, and human resources.

Then, for each of these core dimensions, the group articulated three stages, or levels, of effectiveness: emerging LEF, growing LEF, and high-capacity LEF. So for example, a given LEF might be a "high-capacity" organization on its ability to use data and research to improve student outcomes. But that same LEF might be considered "emerging" on the strength of its governance.

Example of LEF's developmental stages for one program dimension and one organizational dimension

Over time, PEN's goal is to help all affiliates reach the high-capacity level on every dimension. The developmental stages give PEN's members a common standard around which to assess their current work, diagnose challenges, and gain a clear sense of which affiliates might prove to be valuable "go to" resources for advice and guidance in any particular area.

Wendy Nadel, who leads the Yonkers LEF (Yonkers Partners in Education), notes that having clarity about the dimensions of effectiveness, and also being able to assess those dimensions against three clear stages, has helped to guide her organization's work since its inception four years ago. "We don't just talk about getting better in a generic sense," she said. "We have a very clear road map of what it means to be a high-capacity LEF, with real specificity on our strengths and what we need to work on. We can now link local effectiveness with national impact."
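A staged rubric like PEN's can be sketched as a simple scoring lookup. The sketch below is purely illustrative: the dimension names, numeric scores, and thresholds are our own hypothetical assumptions, not PEN's actual assessment instrument.

```python
# Hypothetical sketch of a three-stage developmental rubric: each dimension
# carries score thresholds that map an affiliate's raw score (0-100) onto
# the stages described in the text.

STAGES = ["emerging", "growing", "high-capacity"]

# Hypothetical thresholds: scores below the first value are "emerging",
# below the second "growing", otherwise "high-capacity".
RUBRIC = {
    "data_and_research": (40, 75),
    "governance": (50, 80),
}

def stage(dimension, score):
    """Classify one affiliate score on one dimension into a stage."""
    low, high = RUBRIC[dimension]
    if score < low:
        return STAGES[0]
    if score < high:
        return STAGES[1]
    return STAGES[2]

# An affiliate can sit at different stages on different dimensions,
# as the LEF example in the text illustrates.
affiliate = {"data_and_research": 82, "governance": 45}
profile = {dim: stage(dim, s) for dim, s in affiliate.items()}
print(profile)  # → {'data_and_research': 'high-capacity', 'governance': 'emerging'}
```

The value of the structure is that it produces a per-dimension profile rather than a single grade, which is what lets a network spot an affiliate that is strong on one dimension and emerging on another.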

Advice from pioneering networks:

  • The goal of creating developmental stages is not to punish organizations that fall outside of a certain band, but rather to give clear indicators of "next steps" in terms of improvement. Even the best affiliate organizations are likely to have some developmental needs.
  • What's more, changing conditions can trigger a need for any given affiliate to revisit a dimension; for example, if an experienced leader leaves an organization. So, this approach works best when it is designed to foster shared learning and help all affiliates think in terms of improving their outcomes.

Diagnose where the network is today and uncover pockets of strength

Having figured out the developmental stages that characterize an affiliate's progress toward greater impact, the networks we have studied then move to diagnose their current state and give themselves a baseline against which to measure progress. They ask: Where do affiliates fall on the developmental continuum? Do any trends emerge? Are there pockets of strength, or weakness? How might the entire network strengthen performance, if affiliates across the board could improve on one key dimension of effectiveness?

Consider the experience of the National Guard Youth ChalleNGe Program, a federated network. ChalleNGe offers a quasi-military environment for teens who have dropped out of secondary school, with the aim of helping them become productive citizens. Each of the 33 affiliates offers essentially the same two-part program: a 22-week residential phase (during which "cadets" build job skills, prepare for the General Educational Development (GED) exam, participate in community service, and develop a "life plan"), followed by a year of mentoring in the youth's community.

In 2010, ChalleNGe's leaders re-evaluated the organization's strategy and committed to improving network impact. As part of that work, they identified four key programmatic dimensions of effectiveness: program utilization (application and acceptance to the program), success in the pre-residential phase, success in the residential phase, and post-residential placement. They also assessed affiliate performance against those dimensions and, studying the results, found wide variation. For example, at one ChalleNGe site, 90 percent of students successfully graduated from the residential program, while at another, only 58 percent did so.

The implications of keeping youth engaged throughout the residential phase of programming were striking. If all affiliates could match the top quartile of that baseline performance set, then 2,700 more young people would graduate from the residential program each year. The network would improve its overall impact by 35 percent without needing to add new locations.

Based on this very clear idea of how the affiliates performed on these four dimensions, ChalleNGe set an equally clear goal: Over the next year and a half, each site should commit to achieving at least the same results on each of the four dimensions as those affiliates at the baseline 75th percentile.
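The benchmarking arithmetic behind this kind of diagnosis can be sketched in a few lines of code. The per-site figures below are hypothetical illustrations (only the 90 percent and 58 percent graduation rates come from the text), and the helper names are our own, not ChalleNGe's:

```python
# Sketch of quartile benchmarking: given per-site enrollment and
# residential-phase graduation rates, find the 75th-percentile benchmark
# and estimate how many more youth would graduate if every site below
# the benchmark rose to match it.

def quantile(values, q):
    """Linear-interpolation quantile of a list, with q in [0, 1]."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

def potential_extra_graduates(sites):
    """sites: list of (enrollment, graduation_rate) pairs."""
    benchmark = quantile([rate for _, rate in sites], 0.75)
    # Sum the additional graduates each lagging site would produce
    # if its rate rose to the benchmark.
    extra = sum(
        enrollment * (benchmark - rate)
        for enrollment, rate in sites
        if rate < benchmark
    )
    return benchmark, extra

# Hypothetical affiliates: (cadets enrolled, residential graduation rate).
sites = [(200, 0.90), (180, 0.58), (150, 0.75), (220, 0.82), (160, 0.66)]
benchmark, extra = potential_extra_graduates(sites)
print(f"75th-percentile benchmark: {benchmark:.0%}")
print(f"Additional graduates if lagging sites match it: {extra:.0f}")
```

Run against a network's real enrollment and graduation data, the same calculation yields the kind of headline figure ChalleNGe produced: the number of additional graduates available without opening a single new site.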

National Guard ChalleNGe can deepen impact through network improvement

While not all networks will unearth such significant opportunities for untapped impact, part of the beauty of this approach is its ability to identify best practices that can translate directly to increased impact.

Diagnosing a network's current state in detail (as ChalleNGe did) ideally means having easy access to robust data. For large networks with hundreds of agencies, such access requires a strong technology platform. Consider the experience of Big Brothers Big Sisters (BBBS). BBBS, the nation's largest youth mentoring program, has worked since 1904 to provide children with positive adult role models through one-to-one matches. While revising its strategic plan in 2010, the organization's leaders determined that they wanted to refine the way the network measured and improved performance. Fortunately, several years before, with an eye towards gathering consistent and comparable information, the network had created a common data technology platform (AIM) and had begun a gradual phase-in. With the vast majority of locations using AIM by 2011, the agencies voted as a network to require that all agencies report on a common set of outcome measures in order to remain participants in the network. Since the goal is to raise the performance of the agencies, there is a two-year phase-in period to complete implementation of the system and outcome measures in the remaining affiliates.

Interestingly, in diagnosing the state of play at any given network, network leaders and members can often be surprised by which affiliates are able to offer leadership and guidance to their peers. The biggest aren't always the best sources of knowledge and expertise. Big Brothers Big Sisters of America (BBBSA) is the BBBS network center; its traditional method of supporting member agencies was organized by size. Excellence in individual affiliates was noted, but the groupings were driven by size, not capability levels. According to Cindy Mesko, BBBSA's vice president for agency development, the organization experienced an "'Ah Ha' moment" when it started thinking about linking agencies around levels of performance rather than just in terms of size. The data being gathered could be used to help assess the strengths and performance needs of individual agencies. This meant that some of the smaller sites with very strong track records of "long and strong matches" could play a valuable "practice leadership" role by teaching and supporting other sites on that front.

That idea was reinforced when Mesko encountered an executive director from a medium-sized, Midwestern agency at a national conference. The affiliate leader admitted she was "embarrassed about not serving more kids." Yet as the director shared more about how the agency used data to manage its performance quality, Mesko realized that the chapter's experience could serve as a model for others in the network, and was happy to inform the affiliate leader that she wanted to learn more from her work. BBBSA's experience is shared by many networks in our sample: getting clear on the true measures of success often reveals that members strong on a given dimension have been flying under the radar because size alone dominated the discussion.

Advice from pioneering networks:

  • Formally diagnosing the network's strengths and weaknesses is typically an improvement over evaluating based on anecdotal success or metrics like budget size and revenues. Many networks are surprised by which affiliates pop to the top of the analysis.
  • By diagnosing and documenting strengths and weaknesses along tailored dimensions, you will be able to institutionalize knowledge about the network's strengths and weaknesses, rather than having to depend solely on the knowledge of a few individuals. This formal knowledge can be used to target supports and bring affiliates together for peer learning.

Capture knowledge that matters: Use diagnostic results to facilitate learning and improvement

After completing the tough task of assessment, the networks we've studied face the challenge of leveraging affiliate knowledge to improve results across the board. This means figuring out what to do first and how affiliates can best learn from their colleagues throughout the network. At this point, what we've observed is that staff at the center are particularly well positioned to coordinate identification of leading affiliates on any given dimension, and also to ensure that their expertise is easily accessible to the rest of the network. The center can also play an important supporting role by being intentional in its application of resources, incentives, and supports to help affiliates move upward along the developmental trajectory.

Consider the National Academy Foundation (NAF), a federated network of more than 400 high school academies that offers students rigorous, career-themed curricula designed to prepare them for college and meaningful careers in fields as diverse as finance, hospitality, and engineering. Starting in 2009, and driven by a strategy that required network improvement, NAF's leadership worked closely with a steering group of its academies to develop a self-evaluation tool that tracks performance indicators related to fidelity to the national model. Academies report on four components of their capacity, including both programmatic and organizational dimensions. Like the other networks profiled in this article, NAF then mapped academies into developmental stages based on their performance against these dimensions.

According to Associate Vice President for Programs Bill Taylor, NAF has put real thought into the way its leadership and academies can support one another in moving "up the ranks" in line with these performance standards. The center has created a guidebook on the "cycle of improvement" as a resource for academies, and academies rely on centrally produced planning tools to help identify the next steps they should take to further their development. In addition, the center offers resources to help academies tailor their supports to their own needs, through offerings including a summer institute for professional development, curriculum coaching, and forums to share best practices across the network.

Early results are encouraging. As Taylor noted, "Success is no longer nebulous now that academies have both the tools and clarity that they need to get better." What's more, in time the center hopes to tailor financial incentives — the resources that it applies to its network — to better meet the developmental needs of academies. Through a rigorous application of resources, new attention to planning for improvement, and shared ownership of getting better, NAF has piloted a promising method for helping more students succeed in life through their participation in existing NAF academies.

While the national office is using the diagnostic findings to improve the entire network, individual academy board and staff members can use the same material for their own organizations to determine where they should showcase their strengths and where they need to focus on improvement. And although the circumstances of each academy vary from all others to some degree (NAF often uses the line, "When you've seen one local [academy], you've seen one local"), having a rigorously developed set of network-wide criteria for what it takes to be effective can provide a strong road map for each academy to direct its own development, leveraging the strengths and synergy of the network along the way.

Advice from pioneering networks:

  • Take a long-term view to improvement. Build a system for ongoing diagnosis and assessment that will last, and design plenty of feedback loops along the way. Be prepared to adjust your approach as changes are implemented and tested.
  • Not all affiliates may be willing or able to engage in a formal approach toward improving effectiveness. But over time, many will find that deepening results in their current footprint is an alternative path to getting "bigger," as well as being the most solid foundation for continued growth in the future.
  • Make an effort to communicate with current and potential funders about your efforts to improve effectiveness. Increasingly, we're finding that funders are interested in looking at growth of effectiveness, rather than just growth in size or scope. This approach to improvement presents opportunities not just for networks undergoing the process, but for the funders that support them. Few other processes provide as data-driven a look at the relative strengths and weaknesses of a network, especially as linked to how an organization seeks to have impact. Funders will find the learning that accompanies this process a valuable input into grant-making decisions; in addition, they may find that funding this work is a real opportunity to move beyond supporting scale only through expansion.

Moving ahead on network impact

Each of the networks discussed in this article has embarked on an important journey: growing impact and scale systematically in its work. Just as building and aligning a network is a multi-year process, the work to define, embrace, and attain excellence of impact across large networks is a marathon, not a sprint.

We look forward to following the progress of the networks in this article, and to learning from others about their journeys in raising the bar on network impact. Please share your thoughts, ideas, and stories with us: alan.tuck@bridgespan.org or mandy.taft-pearman@bridgespan.org.

[1] The nine networks are YMCA, Catholic Charities, United Way, Goodwill, American Red Cross, The Salvation Army, Boys and Girls Clubs, Habitat for Humanity, and Easter Seals. (The one non-network in the top ten is Memorial Sloan-Kettering Cancer Center.) Source: Nonprofit Times' 2010 list.

Copyright © 2011 The Bridgespan Group, Inc. All rights reserved. Bridgestar and Bridgespan are registered trademarks of The Bridgespan Group, Inc. All other marks are the property of their respective owners. This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Permissions beyond the scope of this license are available at Bridgespan's Terms of Use page.