Overview
Not every task should be automated. Look for high-volume, rule-driven work with low variance and clean handoffs. Avoid automating broken designs. Fix the flow, then automate the stable parts.
Decision framework (suitability tests)
Work profile
- Volume: large and predictable
- Variance: low; limited paths
- Latency: faster response creates measurable value
Logic & data
- Rules clear; inputs structured
- Data quality sufficient (complete/accurate/timely)
- Systems expose stable APIs or screens
Risk & value
- Failure cost acceptable; rollback exists
- Control posture maintained or stronger
- Net value positive after run/maintain costs
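A minimal scorecard sketch in Python, tying the three test groups above together. The dimension names, the 0-2 scale, and the 0.7 threshold are illustrative assumptions, not a standard:

```python
# Minimal suitability scorecard (illustrative; dimensions and weights are
# assumptions). Score each test 0-2: 0 = fails, 1 = partial, 2 = clearly met.
# A candidate below the threshold goes back to redesign, not automation.

TESTS = [
    "volume_predictable",       # work profile
    "variance_low",
    "latency_valuable",
    "rules_clear",              # logic & data
    "data_quality_sufficient",
    "stable_interfaces",
    "failure_cost_acceptable",  # risk & value
    "controls_maintained",
    "net_value_positive",
]

def suitability(scores: dict[str, int], threshold: float = 0.7) -> tuple[float, bool]:
    """Return (normalized score, automate?) for one candidate task."""
    total = sum(scores.get(t, 0) for t in TESTS)
    ratio = total / (2 * len(TESTS))  # normalize to 0..1
    return ratio, ratio >= threshold

# Example: a hypothetical invoice-matching candidate
ratio, go = suitability({
    "volume_predictable": 2, "variance_low": 2, "latency_valuable": 1,
    "rules_clear": 2, "data_quality_sufficient": 1, "stable_interfaces": 2,
    "failure_cost_acceptable": 2, "controls_maintained": 2, "net_value_positive": 2,
})
print(f"score={ratio:.2f} automate={go}")
```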
Process & data readiness
Process checklist
- Current map (BPMN L2/L3) with variants and exceptions
- RACI and decision catalog with SLAs
- Controls placed in-flow; evidence locations defined
Data checklist
- Event timestamps defined; stable IDs
- Master/reference data owners named
- Input rules (format, range, lists) documented
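Input rules work best expressed as data, so the checklist is testable. A sketch assuming hypothetical invoice fields; real rule sets belong in the documented catalog above:

```python
import re

# Input rules as data (format, range, allowed lists). Field names and
# rule values are illustrative assumptions.
RULES = {
    "invoice_id": {"format": re.compile(r"^INV-\d{6}$")},
    "amount":     {"range": (0.01, 50_000.00)},
    "currency":   {"allowed": {"USD", "EUR", "GBP"}},
}

def validate(record: dict) -> list[str]:
    """Return a list of rule violations; empty means the record passes."""
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if value is None:
            errors.append(f"{field}: missing")
            continue
        if "format" in rule and not rule["format"].match(str(value)):
            errors.append(f"{field}: bad format {value!r}")
        if "range" in rule and not (rule["range"][0] <= value <= rule["range"][1]):
            errors.append(f"{field}: {value} outside {rule['range']}")
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field}: {value!r} not in allowed list")
    return errors

print(validate({"invoice_id": "INV-004217", "amount": 129.50, "currency": "USD"}))  # []
```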
Design before automate
Remove rework loops and unclear approvals first. Automation will lock defects in place if the design stays broken.
Pattern selection (fit-for-purpose)
Workflow / Case
- Route tasks, enforce steps, capture evidence
- Use for human-centric coordination
- Model with BPMN/CMMN
Integration (API/ETL)
- Move/transform data between systems
- Use APIs or ELT/ETL; prefer APIs over screen steps
- Document contracts and error handling
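A sketch of what "contract plus error handling" looks like in code, using the requests library; the endpoint URL, payload shape, and retry policy are assumptions:

```python
import time
import requests  # third-party; pip install requests

# Assumed contract, for illustration: GET /invoices/{id} returns JSON
# {"id": str, "status": str, "amount": float}; 404 means unknown id;
# 5xx and timeouts are retryable. The base URL is hypothetical.
BASE = "https://erp.example.com/api/v1"

def fetch_invoice(invoice_id: str, retries: int = 3) -> dict | None:
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(f"{BASE}/invoices/{invoice_id}", timeout=10)
            if resp.status_code == 404:
                return None              # contract: unknown id, do not retry
            resp.raise_for_status()      # raise on other 4xx/5xx
            return resp.json()
        except requests.RequestException:
            if attempt == retries:
                raise                    # surface to the error queue
            time.sleep(2 ** attempt)     # backoff before retrying
```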
RPA
- UI steps only when APIs absent
- Stable screens; limited UI change
- Strong logging; small surface area
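One way to get strong logging on a small surface area is to wrap every UI step. A generic sketch; the actual UI calls are library-specific and omitted, and failing loudly on an unknown screen state is the point:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rpa")

def logged_step(step_name: str):
    """Wrap each UI step so every action and failure leaves an audit trail."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.info("start %s args=%s", step_name, args)
            try:
                result = fn(*args, **kwargs)
                log.info("done %s", step_name)
                return result
            except Exception:
                log.exception("failed %s; stopping bot", step_name)
                raise  # never continue on an unknown screen state
        return wrapper
    return deco

@logged_step("enter_invoice")
def enter_invoice(invoice_id: str):
    ...  # UI automation call goes here (library-specific, omitted)
```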
AI (LLM/ML)
- Classify, extract, summarize, draft, or rank
- Guardrails: policy, prompts, retrieval, redaction, logging
- Prefer agent-assist over full autonomy for high-risk work
Pattern picker
- Rules + APIs → integration
- Rules + no APIs (stable UI) → RPA
- Human routing + evidence → workflow
- Unstructured text/images → AI with guardrails
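The picker as a small function; the rule order and labels mirror the list above, and anything else is an assumption:

```python
def pick_pattern(rules_clear: bool, has_api: bool, stable_ui: bool,
                 needs_human_routing: bool, unstructured_input: bool) -> str:
    """Map the decision rules above to a primary pattern (illustrative)."""
    if unstructured_input:
        return "AI with guardrails"
    if needs_human_routing:
        return "workflow / case management"
    if rules_clear and has_api:
        return "integration (API/ETL)"
    if rules_clear and stable_ui:
        return "RPA"
    return "redesign first; no pattern fits yet"

print(pick_pattern(rules_clear=True, has_api=False, stable_ui=True,
                   needs_human_routing=False, unstructured_input=False))  # RPA
```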
Human-in-the-loop & exceptions
Design
- Define thresholds for auto-approve vs. review
- Escalation path; time to cure; ownership
- Evidence captured at decision time
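A threshold-routing sketch covering all three design points; the 500 limit, 0.3 risk cutoff, and 24-hour SLA are placeholder assumptions:

```python
from datetime import datetime, timezone

# Illustrative thresholds: auto-approve small, low-risk items; route the
# rest to human review with an escalation deadline. Values are assumptions.
AUTO_APPROVE_LIMIT = 500.00   # currency units
REVIEW_SLA_HOURS = 24

def route(amount: float, risk_score: float, evidence: dict) -> dict:
    auto = amount <= AUTO_APPROVE_LIMIT and risk_score < 0.3
    return {
        "decision": "auto-approve" if auto else "human-review",
        "decided_at": datetime.now(timezone.utc).isoformat(),  # evidence at decision time
        "inputs": {"amount": amount, "risk_score": risk_score},
        "evidence": evidence,
        "sla_hours": None if auto else REVIEW_SLA_HOURS,       # time to cure
    }
```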
AI guardrails
- Usage policy and role scope
- Input redaction; retrieval from approved sources
- Logging for prompts, responses, overrides
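A redaction-and-logging sketch. The two regex patterns are deliberately naive placeholders (production redaction needs a vetted service), and model_call stands in for whatever approved client is in use:

```python
import json
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrails")

# Naive redaction patterns, for illustration only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for name, pat in PATTERNS.items():
        text = pat.sub(f"[{name.upper()}]", text)
    return text

def ask_model(prompt: str, model_call) -> str:
    """Redact input, call the model, and log both sides for audit."""
    safe = redact(prompt)
    response = model_call(safe)  # model_call: the approved client, injected
    log.info(json.dumps({"prompt": safe, "response": response}))
    return response
```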
Risk, controls & policy
- Control posture stays at least as strong as in the manual flow
- Named owner, kill switch, and rollback plan before go-live
- Usage policy and role scope set for any AI components
- Evidence captured in-flow and retrievable for audit
ROI & measurement
Value model
- Time saved × loaded rate (labor)
- Error reduction × cost of error (quality)
- Latency gains → revenue or service lift
- Run & maintain: bot/API/model ops, licensing, support
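A worked example of the value model; every figure below is a made-up assumption to show the arithmetic:

```python
# Labor: time saved x loaded rate
minutes_saved_per_item = 6
items_per_year         = 40_000
loaded_rate_per_hour   = 55.00
labor_value = minutes_saved_per_item / 60 * items_per_year * loaded_rate_per_hour

# Quality: error reduction x cost of error
errors_avoided_per_year = 300
cost_per_error          = 120.00
quality_value = errors_avoided_per_year * cost_per_error

# Run & maintain: bot/API/model ops, licensing, support (annual)
run_and_maintain = 45_000.00

net_value = labor_value + quality_value - run_and_maintain
print(f"labor={labor_value:,.0f} quality={quality_value:,.0f} net={net_value:,.0f}")
# labor=220,000 quality=36,000 net=211,000
```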
Proof
- Baseline 4–12 weeks before change
- Track lead time, first-pass yield (FPY), backlog, and exception rate
- Publish deltas and keep SPC (statistical process control) running on at least one KPI
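A minimal XmR (individuals) control-limit sketch for one KPI, using illustrative data; 2.66 is the standard XmR factor applied to the average moving range:

```python
from statistics import mean

# One KPI over ten periods, e.g. daily lead time in hours (made-up data).
kpi = [41.0, 39.5, 42.2, 40.1, 38.9, 43.0, 40.6, 39.8, 41.4, 40.2]

mr = [abs(a - b) for a, b in zip(kpi[1:], kpi)]  # moving ranges
center = mean(kpi)
ucl = center + 2.66 * mean(mr)                   # upper control limit
lcl = center - 2.66 * mean(mr)                   # lower control limit

for day, x in enumerate(kpi, 1):
    flag = " <-- investigate" if not (lcl <= x <= ucl) else ""
    print(f"day {day:2d}: {x:5.1f}{flag}")
print(f"center={center:.2f} UCL={ucl:.2f} LCL={lcl:.2f}")
```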
Pitfalls
Automating a bad design
Fix the process first. Automation will harden defects.
Screen scraping over stable APIs
Prefer APIs. Use RPA on the UI only when APIs do not exist and the UI is stable.
No owner, no rollback
Every automation needs an owner, a kill switch, and a rollback plan.
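A kill-switch sketch: one flag an operator can flip to halt the bot before any work starts. The environment-variable name is an assumption:

```python
import os

def automation_enabled(flag: str = "AUTOMATION_ENABLED") -> bool:
    """Kill switch: operators flip one env var to halt the automation."""
    return os.environ.get(flag, "false").lower() == "true"

if not automation_enabled():
    raise SystemExit("kill switch engaged; routing work back to the manual queue")
```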
90-day starter
Days 0–30
- Pick one flow; map L2/L3; list exceptions
- Score suitability (work, logic, data, risk, value)
Days 31–60
- Choose pattern (workflow/integration/RPA/AI)
- Define HITL thresholds and controls
- Baseline KPIs; draft ROI
Days 61–90
- Pilot; track cycle time, FPY, exception rate
- Install monitoring and change control
- Publish deltas; plan scale-out
Automate the stable parts. Guard the rest with clear rules and evidence.
If you want a suitability scorecard and pattern picker, ask for a copy.