The Capability Maturity Assessment: How to Conduct One That Actually Informs Investment

Most capability maturity assessments produce a report. A detailed, colour-coded, maturity-scored report that accurately describes the current state of the organization's capabilities across four or five dimensions, presented to leadership at a satisfying level of rigour, filed, and never referenced again.

The report is not the failure. The design of the assessment as a reporting exercise rather than a decision-making exercise is the failure. A capability maturity assessment that is designed to produce a comprehensive current-state picture has a different structure, a different stakeholder engagement model, and a different output than one designed to produce the investment decisions, the improvement priorities, and the transformation sequencing that the organization actually needs. Most assessments are designed for the former and measured against the latter, which is why most of them disappoint.

This post is about how to design a capability maturity assessment for investment decisions rather than documentation, what the methodology looks like when that distinction is explicit from the start, and which specific practices separate assessments that produce organizational action from those that produce organizational paperwork.

What a Capability Maturity Assessment Is Actually Measuring

A business capability is defined by the people, processes, technology, and information that combine to enable the organization to produce a specific outcome. The maturity of a capability is the degree to which those four components are performing well enough to produce the outcome reliably, at the scale required, to the standard the strategy requires. A capability can have excellent technology and poor processes. It can have experienced people and inadequate information systems. Each combination produces a different maturity profile and a different investment implication.

Bizzdesign's January 2026 analysis of capability assessment methodology identifies three dimensions that together produce an assessment with decision-making utility: strategic importance, capability maturity, and adaptability. Strategic importance answers whether the capability is worth investing in. Maturity answers where the investment needs to go. Adaptability answers how difficult improvement will be given the current state of the capability's components. An assessment that measures only maturity, the most common approach, produces a list of gaps without the strategic context to prioritize them or the feasibility context to sequence them. All three dimensions are required for the assessment output to generate investment decisions rather than observations.

The Five-Level Maturity Scale

The five-level maturity scale, derived from CMMI and adapted extensively for business capability assessment, is the most widely used and most interoperable scale across frameworks. Its specific level definitions vary by framework, but the structure is consistent enough to be applied across most business capability domains with minor adaptation.

Level 1 (Initial): The capability exists but is ad hoc. Outcomes depend on individual effort and are not reliably repeatable. No documented process, no consistent ownership, no performance measurement.

Level 2 (Repeatable): The capability is performed consistently enough to produce reliable outcomes in standard situations. Basic processes are documented. Ownership is assigned. Performance is partially measured.

Level 3 (Defined): The capability is fully documented, standardized across the organization, and integrated with adjacent capabilities. Performance is measured against defined targets. Improvement is managed.

Level 4 (Managed): The capability is quantitatively managed. Statistical and analytical techniques inform performance management. Quality and performance objectives are used as criteria in managing the capability.

Level 5 (Optimizing): The capability is continuously improved through innovation and optimization. Quantitative targets are set and achieved. The capability is a source of organizational learning and competitive advantage.

For most enterprise-wide capability assessments, the practical distinction that matters most for investment decisions is the gap between levels two and three. Level two capabilities are performing but fragile: they depend on specific individuals, are not consistently documented, and degrade when those individuals leave or when volume increases. Level three capabilities are institutionalized: they perform consistently regardless of which individuals are executing them, are documented in a way that enables onboarding and quality management, and can be measured and improved systematically.

For strategically important capabilities, the target maturity is almost always level three or above. For commodity capabilities, level two may be entirely sufficient and level three may represent over-investment. The maturity assessment's value is in identifying which capabilities are performing below their target level given their strategic importance, not in identifying all capabilities that are not at level five.
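The target-versus-actual logic above can be sketched in a few lines of Python. The tier-to-target mapping and the example capabilities below are illustrative assumptions, not part of any published framework; the point is only that the gap is measured against a tier-specific target, not against level five.

```python
# Illustrative sketch: flag capabilities performing below the target
# maturity implied by their strategic tier. The tier-to-target mapping
# is an assumption for illustration, not a framework-defined rule.

TARGET_BY_TIER = {"strategic": 3, "supporting": 2, "commodity": 2}

def maturity_gaps(capabilities):
    """Return (name, gap) pairs for capabilities below their target level."""
    gaps = []
    for name, tier, current_level in capabilities:
        target = TARGET_BY_TIER[tier]
        if current_level < target:
            gaps.append((name, target - current_level))
    return gaps

portfolio = [
    ("Demand Forecasting", "strategic", 2),   # fragile but important
    ("Invoice Processing", "commodity", 2),   # at target; no action needed
    ("Customer Onboarding", "strategic", 1),  # biggest gap
]

print(maturity_gaps(portfolio))
# → [('Demand Forecasting', 1), ('Customer Onboarding', 2)]
```

Note that Invoice Processing, a commodity capability at level two, generates no gap at all: pushing it to level three would be over-investment by this logic.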

The Four Assessment Dimensions

Within each capability, maturity is assessed across four standard dimensions. Each dimension can be assessed independently, and the combination of dimension scores produces the overall maturity rating. Assessing dimensions separately is more useful than assessing overall maturity directly because it identifies where the investment needs to go: a capability that scores poorly on technology but well on people and process has a different investment case than one that scores poorly on all four dimensions.

People

The people dimension assesses whether the organization has the human capability required to perform the capability at its target maturity level. This includes the skills and competencies of the people currently executing the capability, the availability of those people in sufficient numbers for the volume the capability needs to handle, the clarity of roles and accountability within the capability, and the organization's ability to develop and retain the talent required.

Assessment questions for the people dimension include: Is there a named owner accountable for capability performance? Are the roles within the capability clearly defined with appropriate competency profiles? Do the people executing the capability have the skills the capability requires at its current and target scale? Is there a development path for building capability-specific skills? How dependent is performance on specific individuals who are not readily replaceable?

Process

The process dimension assesses whether the procedures through which the capability is executed are documented, standardized, and integrated with the adjacent capabilities that depend on them. A capability whose processes are undocumented or inconsistently followed produces variable outcomes regardless of how skilled the people executing it are. A capability whose processes are well-documented but disconnected from the upstream and downstream capabilities it interacts with creates handoff failures that degrade the overall value stream performance even when each individual capability is technically well-run.

Assessment questions for the process dimension include: Are the processes for this capability documented at a level that allows a new team member to execute them without significant informal guidance? Are those processes consistently followed across the teams executing the capability? Do the processes include the integration points with adjacent capabilities? Are the processes actively managed and improved, or are they documentation artifacts that reflect how the capability used to work rather than how it currently works?

Technology

The technology dimension assesses whether the systems, tools, and platforms supporting the capability are fit for the purpose the capability requires. This includes both the technical quality of the systems and their alignment with how the capability needs to work. A system that is technically modern but requires extensive manual workarounds to serve the capability's actual needs is not fit for purpose. A legacy system that reliably supports the capability's requirements, even if it is technically aging, is performing its function.

Assessment questions for the technology dimension include: Do the systems supporting this capability provide the functionality required for the capability to perform at its target maturity level? Is the data available to the systems current, accurate, and in a form the systems can use? Are the integrations between the capability's systems and adjacent systems reliable? Is the technology limiting the capability's performance or scaling potential in ways that would require replacement rather than configuration to address?

Information

The information dimension assesses whether the data and knowledge required to execute the capability well are available to the people and systems performing it, at the quality and timeliness required. A capability whose people have excellent skills and whose processes are well-documented will still perform poorly if the information they need to make good decisions is unavailable, inaccurate, or late. The information dimension connects directly to the data quality and data governance work described in the data quality and data governance posts in this series.

Assessment questions for the information dimension include: Is the information required to execute the capability available when it is needed? Is that information accurate and current enough for the decisions the capability requires? Is there a single authoritative source for the information the capability depends on, or are multiple conflicting versions maintained in different systems? Are the information quality issues that affect this capability known, owned, and being actively addressed?
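There is more than one defensible way to roll four dimension scores into an overall rating. The sketch below assumes one common convention, that the weakest dimension caps the capability's overall maturity, because a capability cannot outperform its most constrained component; the dimension scores in the example are invented for illustration.

```python
# Minimal sketch, assuming the weakest dimension caps overall maturity.
# The convention and the example scores are assumptions for illustration.

DIMENSIONS = ("people", "process", "technology", "information")

def assess(scores):
    """Given 1-5 scores per dimension, return the overall maturity
    and the constraint dimension where investment should go first."""
    overall = min(scores[d] for d in DIMENSIONS)
    constraint = min(DIMENSIONS, key=lambda d: scores[d])
    return overall, constraint

order_management = {"people": 4, "process": 3, "technology": 2, "information": 3}
print(assess(order_management))
# → (2, 'technology')
```

The payoff of dimension-level scoring is visible here: the capability rates level two overall, but the rating alone would not tell you that technology, not people or process, is where the investment needs to go.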

The Three-Dimension Assessment Model

The four-dimension maturity assessment tells the organization how well each capability currently performs. Combining it with strategic importance and adaptability produces the three-dimension model that generates investment decisions rather than maturity scores.

Strategic importance is the degree to which the capability's performance determines the organization's ability to execute its strategy and deliver value to its customers and stakeholders. This is assessed through structured conversations with business leaders who own the strategy and understand what it requires, not through the architecture team's independent judgment. The questions are specific: if this capability's performance degraded significantly, how would that affect our ability to achieve our strategic objectives? If this capability performed at a world-class level, what strategic opportunities would that create that are currently unavailable to us?

Adaptability is the degree to which the capability can be improved, given its current state. A capability with weak people, weak processes, and weak technology has low adaptability because all three dimensions require simultaneous improvement. One with strong people and weak technology has higher adaptability because the human capability to execute the technology improvement already exists within the team. Adaptability informs sequencing: high-importance capabilities with high adaptability should be addressed before high-importance capabilities with low adaptability, because both the investment required and the time to value are shorter.

The three-dimension matrix produces the investment logic that the capability heat map described in the capability heat map post visualizes. Strategic importance and current maturity determine the heat map position. Adaptability determines the investment sequencing within the heat map quadrants.
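The sequencing logic described above can be sketched as a simple sort: strategic importance and maturity gap set the priority, and adaptability orders capabilities within a priority band so that easier improvements come first. All names and scores in the example are invented for illustration.

```python
# Illustrative sketch of the three-dimension sequencing logic.
# All scores are 1-5; the example portfolio is invented.

def prioritize(capabilities):
    """Sort by importance, then gap to target, then adaptability (all descending)."""
    return sorted(
        capabilities,
        key=lambda c: (
            c["importance"],
            c["target"] - c["maturity"],
            c["adaptability"],
        ),
        reverse=True,
    )

portfolio = [
    {"name": "Pricing",         "importance": 5, "maturity": 2, "target": 4, "adaptability": 2},
    {"name": "Claims Handling", "importance": 5, "maturity": 2, "target": 4, "adaptability": 4},
    {"name": "Facilities",      "importance": 2, "maturity": 2, "target": 2, "adaptability": 3},
]

print([c["name"] for c in prioritize(portfolio)])
# → ['Claims Handling', 'Pricing', 'Facilities']
```

Pricing and Claims Handling tie on importance and gap, but Claims Handling's stronger people and process base (higher adaptability) sequences it first: same return, shorter time to value.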

Who Should Be Involved and How

BPMInstitute's analysis of capability assessment practice makes the most important methodological point: conducting a comprehensive and effective assessment requires business leaders to become co-creators of the capability roadmap rather than passive recipients of the assessment results. This requires a specific facilitation design rather than a survey-and-scoring approach.

The assessment should involve three groups, each contributing different evidence.

Business leaders provide the strategic importance ratings and the qualitative evidence for how well each capability is actually serving their function's needs. They know where the capability's limitations are creating operational problems, where they have had to work around system or process constraints, and what improvement in the capability would change about their ability to deliver results.

Operational managers and practitioners provide the detailed maturity evidence across the four dimensions. They know which processes are documented and which are informal, which system limitations require workarounds, where the data quality issues are, and which performance problems are structural versus episodic.

The architecture and transformation team provides the analytical framework, facilitates the assessment conversations, synthesizes the evidence across groups, and connects the capability maturity findings to investment implications.

The assessment conversation that produces genuine co-creation rather than validation of pre-formed conclusions is structured around specific questions rather than abstract ratings. Rather than asking a business leader to rate a capability's maturity on a one-to-five scale, the facilitator asks: what are the three biggest limitations you experience from this capability in your current operations? When this capability fails or underperforms, what happens downstream? If you could improve one thing about how this capability serves your function, what would produce the highest value? The ratings emerge from the conversation rather than preceding it, which produces both more accurate ratings and more committed stakeholders because the assessment reflects their own experience rather than the architecture team's interpretation of it.

What to Do With the Results

The output of a capability maturity assessment that is designed for investment decisions has three components that together constitute an actionable capability improvement program rather than a maturity scorecard.

A prioritized list of capability gaps, ranked by the combination of strategic importance, current maturity gap against target, and adaptability. The capabilities at the top of this list are the investment priorities for the next planning cycle. They are the capabilities where the gap between current maturity and required maturity is most consequential for strategy execution, and where the investment required to close the gap is proportionate to the return.

A dimension-level improvement plan for each priority capability, specifying which of the four dimensions is the primary constraint on maturity improvement and what specific intervention addresses it. A capability that needs people investment requires a different intervention than one that needs process documentation, which requires a different intervention than one that needs technology replacement. The investment case for each priority capability should specify the dimension-level intervention and its expected maturity improvement, not just the overall capability investment.

A connection to the value streams that each priority capability serves, documenting how the maturity improvement will improve the performance of the value stream stages that depend on it. This connection, from the capability gap to the value stream performance to the customer or business outcome affected, is what makes the investment case defensible in a budget conversation. The value streams post in this series describes how to build this connection in the context of a broader business architecture program.

The Cadence That Keeps It Current

A capability maturity assessment conducted once produces a point-in-time picture that is accurate on the day it is completed and progressively less accurate as the organization changes. The assessment is most valuable as a recurring discipline integrated into the planning cycle rather than as a periodic project.

The cadence that works for most organizations is an annual full assessment of the capabilities in the strategic priority tier, timed to inform the annual investment planning cycle, and a quarterly light-touch update on the capabilities where active improvement programs are running. The quarterly update is not a full re-assessment. It is a tracking conversation with the capability owner about whether the improvement plan is progressing, whether the maturity is changing as expected, and whether any new constraints have emerged that affect the improvement trajectory.

Organizations that build this cadence into their operating rhythm, connecting the assessment to the planning cycle and the improvement tracking to the program governance process, create something genuinely valuable: a standing view of where the organization's capability performance stands relative to its strategy, updated continuously rather than reconstructed from scratch each time a transformation program needs a current-state basis. That standing view is what allows the business architecture function to operate as a strategic advisor to investment decisions rather than as a periodic assessment service that produces reports between transformation programs.

Talk to Us

ClarityArc's capability maturity assessment practice is designed from the start to produce investment decisions rather than documentation. If you are preparing for an investment planning cycle and need a current-state capability picture that will hold up under budget scrutiny, or if you want to build the standing assessment discipline that makes every planning cycle faster and better evidenced, we are ready to help.

Get in Touch