Organizational Maturity Modelling in Evaluation

Scott Chaplowe

June 2021

Organizations deliver the interventions (e.g., programs, strategies, policies) that we evaluate. Therefore, the ability to assess, understand, and advise on organizational capacity development in relation to these interventions is an important skill set in the evaluator’s toolbox. I have been teaching a course on Organizational Capacity Assessment (OCA) with the Encompass Learning Center for three years now, and this blog shares one framework that I have found especially useful not only for OCA but also for other types of assessment and evaluation.


Organizational maturity modelling (OMM) is often used in the private sector, but it is likewise a valuable tool to assess an organization’s maturity in the public and civic sectors. Also called organizational lifecycle modelling (e.g., see Management Library 2022, DARSA 2019, and PAHO 2022), an OMM framework is typically based on four to six stages that organizations commonly encounter in their development journey: e.g., Initiate, Stabilize, Grow, and Amplify. Using these stages, the OMM can summarize the organization’s capacity level according to the degree to which systems, processes, culture, practice, or whatever dimension is chosen are present and support the given capacity.


I like to use OMM assessment because it helps me step back and take a bird’s-eye view of an organization’s individual or collective capacities. We often evaluate discrete programs and projects, but organizations deliver these, and I’ve found it useful to help an organization understand how a particular capacity area, such as gender equity or M&E, intersects with and affects the particular object of evaluation (e.g., a project).


I have found it useful to apply a rubrics framework to OMMs (a rubric is a framework that sets out criteria and standards for different levels of performance and describes what performance would look like at each level - BetterEvaluation). For example, the diagram below illustrates a four-level rubric for M&E capacity. I do not get too attached to the names of the levels, as they can be changed according to the organizational context (the four used in the example were adapted from the DARSA 2019 framework hyperlinked above).
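To give a sense of what such a rubric can look like, the four level descriptors below sketch how M&E capacity might be characterized at each stage. These descriptors are illustrative only, not the co-created rubric from the engagement; any actual descriptors should be developed with stakeholders at the given organization.

Level 1 (Initiate): M&E is ad hoc; there is no dedicated M&E plan, staffing, or budget, and data is collected irregularly, mainly to satisfy external reporting.

Level 2 (Stabilize): A basic M&E plan and indicators exist; data collection is routine, but analysis and use remain limited to compliance reporting.

Level 3 (Grow): Dedicated M&E roles, systems, and budget are in place; data is regularly analyzed and informs program management decisions.

Level 4 (Amplify): M&E is embedded in organizational culture and strategy; evidence and learning systematically drive adaptation across the organization.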


What is critical in using an OMM is the collective process of sensemaking with stakeholders. Meaningful stakeholder participation not only ensures the rubrics are accurate given the organizational realities, but also builds stakeholder understanding, capacity, ownership, and support for the organizational development informed by the exercise.


In this example, the rubric descriptions were co-created by a representative group of stakeholders at the given organization, and even changed as we worked with them over time in response to emergent, shared learning. We could have gone with more levels or an odd number (e.g., five levels), but the preference with this organization was an even number to avoid the inevitable gravitation to the ‘neutral’ middle level. There are pros and cons to even versus odd rating scales (e.g., see this AEA365 post), which can be discussed with the stakeholders who will be using the OMM rubrics framework.


I have found that when the rubrics are co-created and the ratings arrived at collectively, it not only supports accuracy, but also understanding, ownership, and support for, and ultimately use of, the findings from the OMM exercise. In other words, the process (collective sensemaking) is as important as the product (ratings). This can be facilitated through a workshop or series of workshops (in-person or virtual) and a physical or online whiteboard (e.g., Miro, Mural, or Jamboard) where people can tag comments.


If executed in a participatory manner, it is not uncommon for the organization to be able to revisit and use the OMM rubric exercise in the future – in essence, building their capacity to assess capacity. And remember, the rubrics need not be a locked blueprint; they can be adapted to the organization’s particular needs, to changes over time and in context, and to the participants assembled for the exercise. The key element is that people are meaningfully engaged.