Control Design & Evidence

Controls reduce the risk of error, fraud, and non-compliance. Effective design places controls in the flow of work, keeps evidence where it is created, and tests performance on a cadence. Evidence must be complete, accurate, valid, and timely.

Overview

A control is a policy, rule, or activity that prevents or detects an undesired outcome. A sound framework starts with objectives and risks, then designs preventive and detective controls at the step where risk occurs. The same sources should support daily decisions and audit.

Principles

Place controls in-flow

Put checks where the risk appears. Avoid end-of-month “catch-all” reviews that miss root causes.

One source of truth

Daily operations and month-end reviews must read from the same facts. No parallel spreadsheets.

Test what matters

Design for testability. Keep evidence where it is produced with an owner and retention period.

Control types

By timing

  • Preventive — stops errors before they occur (validation, approvals).
  • Detective — finds issues after they occur (reconciliations, dashboards).

By execution

  • Automated — system-enforced; repeatable and traceable.
  • Manual — human-performed; requires training and sample-based testing.
  • Hybrid — system proposes; human confirms.

Common patterns

  • Authorizations and approvals with thresholds
  • Access control and SoD
  • Reconciliations (sub-ledger to ledger; count to system)
  • Exception handling with aging and cure time
  • Change management (request → test → approve → deploy → rollback)
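The threshold-based approval pattern can be sketched as a small rule table. This is a minimal illustration; the role names and amount limits are assumptions, not prescribed values.

```python
# Hypothetical approval thresholds: (upper limit, lowest role allowed to approve).
# Amounts and role names are illustrative only.
APPROVAL_THRESHOLDS = [
    (1_000, "team_lead"),
    (10_000, "department_head"),
    (float("inf"), "cfo"),
]

def required_approver(amount: float) -> str:
    """Return the lowest role permitted to approve the given amount."""
    for limit, role in APPROVAL_THRESHOLDS:
        if amount <= limit:
            return role
    raise ValueError("no threshold matched")
```

Because the table is ordered, the first matching limit wins, which keeps the pass/fail rule unambiguous.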

Design in the flow

Steps

  1. State the control objective and risk.
  2. Attach the control to the BPMN step where risk occurs.
  3. Define data, thresholds, and the owner.
  4. Specify the evidence and where it lives (system of record).
  5. Set the cadence and the test method.

Checks

  • Clear trigger → unambiguous pass/fail rule
  • Single owner; named backup
  • Evidence is generated automatically where possible
  • Control prevents duplicate processing and infinite loops

Evidence quality & retention

Quality (CAVT)

  • Completeness — covers the whole population or the intended sample.
  • Accuracy — correct values and calculations.
  • Validity — relates to the control objective and the period.
  • Timeliness — generated in time to act.
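The four CAVT criteria can be applied mechanically to an evidence record. A sketch, assuming a record with `items`, `total`, `control_id`, and `created` fields (these names are assumptions for illustration):

```python
from datetime import date

def check_evidence(record: dict, period_start: date, period_end: date,
                   expected_count: int) -> list[str]:
    """Return CAVT findings for one evidence record; an empty list means pass."""
    findings = []
    if len(record["items"]) < expected_count:                  # Completeness
        findings.append("incomplete: population not covered")
    if record["total"] != sum(record["items"]):                # Accuracy
        findings.append("inaccurate: total does not match items")
    if not record.get("control_id"):                           # Validity
        findings.append("invalid: no link to a control objective")
    if not (period_start <= record["created"] <= period_end):  # Timeliness
        findings.append("untimely: outside the review period")
    return findings
```

Returning findings rather than a boolean keeps the result usable as audit evidence itself.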

Retention

  • Keep records per policy and regulation (financial, safety, quality).
  • Link evidence to the step and the control ID; avoid shared drives without lineage.

Sampling

  • Test of design (TOD): is the control defined correctly?
  • Test of operating effectiveness (TOE): did it run as designed?
  • Use risk-based sampling; expand on exceptions.
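Risk-based sampling can be sketched as a sample size that scales with risk. The doubling factor below is an illustrative assumption; real sizes come from your audit methodology.

```python
import random

def sample_items(population: list, base_size: int, high_risk: bool,
                 seed: int = 0) -> list:
    """Draw a risk-based sample: double the size for high-risk controls.
    The seed makes the draw reproducible for audit re-performance."""
    size = min(len(population), base_size * (2 if high_risk else 1))
    return random.Random(seed).sample(population, size)
```

On exceptions, re-run with a larger `base_size` to expand the sample, per the expand-on-exceptions rule.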

Control matrix & traceability

Control matrix

Map risks to controls, owners, frequency, evidence, and tests. Keep IDs stable. One row per risk/control pair.

Traceability

Link matrix rows to BPMN steps, SOPs, and policy/DoA. Add system and data object where evidence is produced.

Minimum fields

  • Risk ID, Control ID, Objective
  • Step reference (BPMN), Owner, Frequency
  • Evidence location, Retention, Test method
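The minimum fields above map directly onto one CSV row per risk/control pair. A minimal serializer sketch, with column names taken from the list:

```python
import csv
import io

# One column per minimum field; one row per risk/control pair.
FIELDS = ["risk_id", "control_id", "objective", "bpmn_step", "owner",
          "frequency", "evidence_location", "retention", "test_method"]

def write_matrix(rows: list[dict]) -> str:
    """Serialize the control matrix to CSV with a stable header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Keeping the header order fixed preserves stable IDs and diffs cleanly under version control.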

Testing & monitoring

Periodic testing

Run TOD/TOE on a schedule. Escalate late tests. Track exceptions to closure.

Continuous monitoring

Automate detective checks (reconciliations, duplicate detection, threshold breaches) and alert owners.
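A duplicate-detection check, one of the detective checks named above, can be sketched in a few lines. Matching on vendor, amount, and date is a common heuristic, not a fixed standard:

```python
from collections import Counter

def find_duplicates(transactions: list[dict]) -> list[tuple]:
    """Detective check: flag transactions sharing vendor, amount, and date.
    Returns the duplicate keys for the owner to investigate."""
    keys = Counter((t["vendor"], t["amount"], t["date"]) for t in transactions)
    return [key for key, count in keys.items() if count > 1]
```

Wire the output into an alert to the control owner so exceptions enter the aging queue automatically.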

Remediation

Assign an action, an owner, and a due date. Verify the fix and update SOPs and models. Retire redundant controls.

Segregation of duties (SoD)

Concept

Separate initiation, approval, recording, and reconciliation. Where separation is not feasible, add detective controls and management review.

Patterns

  • Creator ≠ Approver
  • Operator ≠ Reconciler
  • Developer ≠ Deployer
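The three patterns above form a small conflict catalogue that can be checked against actual role assignments. A sketch; the duty names mirror the patterns and are otherwise assumptions:

```python
# Pairs of duties one person must not hold, per the SoD patterns.
SOD_CONFLICTS = [
    ("creator", "approver"),
    ("operator", "reconciler"),
    ("developer", "deployer"),
]

def sod_violations(assignments: dict[str, set[str]]) -> list[tuple[str, str, str]]:
    """Return (user, duty_a, duty_b) for every conflicting pair a user holds."""
    violations = []
    for user, duties in assignments.items():
        for a, b in SOD_CONFLICTS:
            if a in duties and b in duties:
                violations.append((user, a, b))
    return violations
```

Run this against exported access rights each quarter and retain the output as evidence of the review.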

SoD catalogue

Maintain conflict rules by system role and process step. Test access rights quarterly; keep evidence of review.

System & logging considerations

Logging

  • Who did what, when, to which record (user, timestamp, object, old/new values).
  • Immutable or tamper-evident logs preferred; protect time sources.
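One way to make a log tamper-evident is a hash chain: each entry hashes its predecessor, so editing any record breaks every later hash. A minimal sketch under that assumption:

```python
import hashlib
import json

def append_entry(log: list[dict], user: str, action: str, obj: str,
                 old: str, new: str, ts: str) -> None:
    """Append a tamper-evident entry that records who/what/when/old/new."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"user": user, "action": action, "object": obj,
            "old": old, "new": new, "ts": ts, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited record invalidates it."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

This protects against silent edits, not deletion of the tail; for that, anchor the latest hash externally and protect the time source as noted above.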

Access & change

  • Role-based access; least privilege; periodic review with evidence.
  • Change flow with approvals, testing artifacts, and rollback plans.

KPIs vs. KCIs

KPIs (performance)

Cycle time, first-pass yield, cost per case. These show whether the process is getting faster, cheaper, and better.

KCIs (control health)

Late approvals, failed reconciliations, access review exceptions, control test pass rate. These show whether the process is under control.

Board view

  • 3–5 KPIs + 3–5 KCIs per process
  • Owners and thresholds
  • Drill-through to evidence

90-day starter

Days 0–30

  • Pick one high-risk process. Write control objectives and risks.
  • Attach 3–5 controls to BPMN steps. Identify evidence and owners.

Days 31–60

  • Publish the control matrix with IDs and tests (TOD/TOE).
  • Run the first test cycle; fix design issues.

Days 61–90

  • Automate detective checks where feasible.
  • Install monthly monitoring and quarterly SoD/access reviews.

References

  • COSO Internal Control—Integrated Framework: coso.org
  • ISO 9001 documented information / process approach: iso.org
  • ISO/IEC 27001 (information security management): iso.org
  • NIST SP 800-53 Rev.5 controls catalog: nist.gov
  • ISACA COBIT (governance & management objectives): isaca.org
  • PCAOB auditing standards (SOX context): pcaobus.org

Design controls in the flow. Keep evidence where it is created.

If you want a control matrix template and an evidence checklist, ask for a copy.
