The Power of Evaluation Partnerships: Learnings from a Partnership between Internal and External Evaluators

By Hassan Lubega, Kayla Benítez-Álvarez, Brisa Garcia, and Christine Patton


One of our continued learnings in our evaluation practice is the importance of leaning into the unique skills and assets that both internal and external evaluation teams bring. This is especially important as we are forced to make agile decisions in an unstable funding and policy environment. We recently put this learning into practice during a joint, three-month analysis sprint between the Data+Soul and Jumpstart evaluation teams, reviewing the implications of Jumpstart’s program data as the organization prepared to launch its FY26 programming.

Our evaluation goal remained the same: understanding the key program characteristics that support children served by Jumpstart. At the same time, direct collaboration between internal and external evaluation teams opened up different ways of working.

Here are three ways we were able to work in this unique evaluator-evaluator partnership:

Document Process, Not Just Outcomes

An important part of Data+Soul’s work was making sure that both the analysis and the sensemaking were replicable for the Jumpstart evaluation team. From the outset, processes were designed to be handed off directly to the Jumpstart team. For example, Data+Soul research associate Kayla Benítez-Álvarez led and documented the entire analysis sprint, including scripts in R and additional resources to enable reanalyzing this or similar data sets. The sensemaking process was similar. We designed a session that allowed our teams to extract insights from the data while building a template the Jumpstart evaluation team could use to facilitate sensemaking with other Jumpstart stakeholders. In essence, our deliverable was the standard “here’s an answer to this question” with the bonus of “and here’s how we can answer it next time.”

Analysis Transparency

Throughout our analysis sprint, our two teams met nearly weekly to review the analysis in progress. Importantly, even though Data+Soul led the analysis, we didn’t treat it as a black box that only Data+Soul was responsible for translating and simplifying for the Jumpstart evaluation team. Instead, we let iterative review of the analysis, however messy, be part of refining our approach. Keeping the analysis as transparent as possible allowed us to leverage the Jumpstart evaluation team’s expert eye and trust that they would question our assumptions in service of more rigorous analysis. We got to stay in the wide view longer before zooming in on conclusions. By the time we arrived at sensemaking, everyone had reviewed most of the data a few times; we had a shared trust in the methodology, so we could focus more on implications.

Peer Learning

We learned a lot from working with our peers and gaining an understanding of the types of questions being asked and answered in the evaluation sector. Our two teams complemented each other: the Data+Soul team brought unique insights and approaches from working across issue areas (movement building, environmental justice, and healthcare, among others), while the Jumpstart evaluation team brought deep expertise in evaluating early education programs. This combination opened space for us to experiment a bit more: the Data+Soul team could bring suggestions and curiosities, knowing that the Jumpstart team could speak to feasibility and relevance to their context. In turn, the Jumpstart team could design and address questions that benefited from additional capacity and external perspectives, including those beyond early education.


Since our partnership, the Jumpstart evaluation team has used the sensemaking framework to inform the facilitation of two data review sessions with all staff. The evaluation’s findings also allowed the team to pursue previously unexamined questions about the relationship between setting-level factors (e.g., the number of adults in the classroom) and children’s language environments. Furthermore, the team took inspiration from our shared analysis sprint model to explore innovative approaches to data visualization.


In reflecting on these achievements, Christine Patton, Jumpstart’s Managing Director of Impact and Evaluation, shared: “At Jumpstart, we've always believed that understanding what's working for children isn't a one-time exercise, it's a continuous commitment. This partnership with Data+Soul didn't just answer questions about our program data; it built our team's capacity to keep asking better questions on behalf of the children and families we serve. The rigor we applied here directly informs how we think about the classroom experiences that are at the heart of our programs.”

The Data+Soul team remains committed to working with both program and evaluation teams to support data-informed decision-making. As the social impact sector navigates the erosion of public investments and rethinks how it plans and shares resources, we believe that utilizing data is key to maintaining a stable foundation for positive change.
