Overview
Design first. Fix handoffs and approvals before you automate. Then pick a pattern that matches the work: human coordination → workflow, system-to-system data → integration, UI steps with no API → RPA. Keep one source of truth.
Decision framework
Work profile
- Human routing, SLAs, evidence → Workflow
- Structured data exchange → Integration
- No API, stable screens, narrow scope → RPA
Change risk
- Frequent screen changes → avoid RPA
- Unstable schemas → guard integration contracts
- New policy or approvals → workflow fits
Controls
- Approvals, logs, SoD → workflow
- AuthN/AuthZ, audit, data lineage → integration
- Runbooks, kill switch, replay → RPA
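The three lenses above can be turned into a rough scoring aid. A minimal sketch, assuming yes/no answers and equal weights; the questions and weights are illustrative, not a standard rubric:

```python
# Rough pattern-fit scorer: answer each question True/False for the process
# being assessed, then compare totals. Questions and weights are illustrative.
QUESTIONS = {
    "workflow": [
        "Humans route, approve, or escalate work",
        "SLAs, queues, or evidence must be tracked",
        "Policy requires approvals, logs, or segregation of duties",
    ],
    "integration": [
        "The work is structured data exchange between systems",
        "Stable APIs, events, or schemas exist or can be contracted",
        "AuthN/AuthZ, audit, and data lineage are the main controls",
    ],
    "rpa": [
        "No API exists and the UI is stable",
        "The task surface is narrow and well-bounded",
        "A runbook, kill switch, and replay plan are feasible",
    ],
}

def score(answers: dict[str, list[bool]]) -> dict[str, int]:
    """Sum True answers per pattern; the highest total suggests the starting fit."""
    return {pattern: sum(answers[pattern]) for pattern in QUESTIONS}

if __name__ == "__main__":
    example = {
        "workflow": [True, True, True],
        "integration": [True, False, False],
        "rpa": [False, False, True],
    }
    print(score(example))  # {'workflow': 3, 'integration': 1, 'rpa': 1}
```

Treat the totals as a conversation starter, not a verdict; the change-risk and controls checks still apply.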
Workflow
Use for
- Human tasks with SLAs, queues, and evidence
- Cases with ad-hoc steps (pair with CMMN)
- Approvals and escalations with audit trails
Design notes
- Model the flow in BPMN; keep ad-hoc case steps in CMMN and decision rules in DMN
- Store evidence at the step; keep the audit trail immutable
- Set SLAs per queue; define escalation paths and segregation of duties up front
Strengths / limits
- Strong for roles, evidence, and SoD
- Slower than direct system integration for pure data flow
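To make the approval-and-evidence pattern concrete, here is a minimal sketch of one approval step with a threshold rule, an SLA check, and an append-only audit trail. The names, threshold, SLA, and in-memory store are illustrative, not a specific workflow engine's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

APPROVAL_THRESHOLD = 10_000   # amounts at or above this need a second approver (illustrative)
SLA = timedelta(hours=24)     # escalate if the task waits longer than this (illustrative)

@dataclass
class ApprovalTask:
    request_id: str
    amount: float
    submitted_at: datetime
    audit_trail: list[dict] = field(default_factory=list)  # append-only evidence

    def record(self, event: str, actor: str) -> None:
        self.audit_trail.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "actor": actor,
        })

    def requires_second_approver(self) -> bool:
        return self.amount >= APPROVAL_THRESHOLD

    def breached_sla(self, now: datetime) -> bool:
        return now - self.submitted_at > SLA

task = ApprovalTask("REQ-001", 12_500, datetime.now(timezone.utc) - timedelta(hours=30))
task.record("submitted", actor="requester")
if task.breached_sla(datetime.now(timezone.utc)):
    task.record("escalated", actor="system")                  # SLA breach -> escalation, with evidence
if task.requires_second_approver():
    task.record("routed_to_second_approver", actor="system")  # SoD: requester cannot approve
print(task.audit_trail)
```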
Integration
Use for
- System-to-system data movement and sync
- APIs, events, ETL/ELT jobs, and data contracts
- Near-real-time updates (events/streams) or scheduled batches
Design notes
- Define data contracts (OpenAPI); version schemas and validate inputs at the boundary
- Secure with AuthN/AuthZ (e.g., OAuth 2.0); make consumers idempotent so replays are safe
- Use events/streams (AMQP, Kafka) for near-real-time needs, scheduled batches for bulk loads
Strengths / limits
- Fast, scalable, testable; clear lineage
- Needs stable schemas and API discipline
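As a sketch of the contract discipline this requires, here is an event consumer that checks the schema version and uses an idempotency key so replays are safe. The event shape, version rule, and in-memory dedupe store are assumptions for illustration; in practice the dedupe store would be durable:

```python
SUPPORTED_MAJOR = "1."               # accept any 1.x event (illustrative contract rule)
processed_keys: set[str] = set()     # stand-in for a durable idempotency store

def handle_event(event: dict) -> str:
    """Validate, dedupe, then apply an inbound event exactly once."""
    version = event.get("schema_version", "")
    if not version.startswith(SUPPORTED_MAJOR):
        return "rejected: unsupported schema version"

    key = event.get("idempotency_key")
    if not key:
        return "rejected: missing idempotency key"
    if key in processed_keys:
        return "skipped: duplicate delivery (replay-safe)"

    # ... apply the change to the target system here ...
    processed_keys.add(key)
    return "applied"

event = {"schema_version": "1.2", "idempotency_key": "ord-42-v1", "payload": {"order": 42}}
print(handle_event(event))   # applied
print(handle_event(event))   # skipped: duplicate delivery (replay-safe)
```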
RPA
Use for
- UI steps where no API exists and UI is stable
- Legacy or partner portals with narrow tasks
- Short-lived bridges while you build APIs
Design notes
- Follow IEEE RPA definitions for scope (IEEE 2755)
- Use reliable selectors; prefer accessibility trees / WebDriver (W3C WebDriver)
- Small surface area; version-locked; kill switch and rollback
Strengths / limits
- Fast to deploy; no changes to the underlying systems
- Fragile with UI change; higher run/maintain cost than APIs
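A minimal sketch of the selector and kill-switch guidance in the design notes above, using Selenium's WebDriver bindings (W3C WebDriver). The URL, selectors, and kill-switch file are hypothetical; the point is the shape: narrow scope, stable locators, and a hard stop you can flip without redeploying the bot:

```python
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.by import By

KILL_SWITCH = Path("/etc/bots/invoice_bot.stop")   # hypothetical flag file checked before each run

def run_once() -> None:
    if KILL_SWITCH.exists():
        raise SystemExit("kill switch set; bot will not run")

    driver = webdriver.Chrome()   # version-lock browser and driver in the runner image
    try:
        driver.get("https://portal.example.com/invoices")   # hypothetical partner portal
        # Prefer stable, accessibility-oriented locators over positional XPath.
        field = driver.find_element(By.CSS_SELECTOR, "[data-testid='invoice-number']")
        field.send_keys("INV-1001")
        driver.find_element(By.CSS_SELECTOR, "button[aria-label='Submit invoice']").click()
    finally:
        driver.quit()   # always release the session, even on failure

if __name__ == "__main__":
    run_once()
```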
Where AI fits
Good fits
- Classify, extract, summarize, rank, or draft from text/images
- Agent-assist in workflow; suggestions with human approval
- Document intelligence with confidence thresholds
Guardrails
- Policy, prompts, retrieval, redaction
- Logging for prompts/responses/overrides
- Risk frameworks: NIST AI RMF
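A minimal sketch of the confidence-threshold and logging guardrails. The model call is stubbed out, and the threshold, field names, and log format are assumptions; the pattern is that low-confidence extractions route to a human and every suggestion, decision, and override is logged:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("doc_intel")

CONFIDENCE_THRESHOLD = 0.85   # below this, route to human review (illustrative)

def extract_invoice_total(document_text: str) -> dict:
    """Stand-in for a model call; returns an extracted value and a confidence score."""
    return {"field": "invoice_total", "value": "1,250.00", "confidence": 0.72}

def process(document_text: str, reviewer_value: str | None = None) -> str:
    suggestion = extract_invoice_total(document_text)
    decision = ("auto_applied" if suggestion["confidence"] >= CONFIDENCE_THRESHOLD
                else "sent_to_review")
    final_value = suggestion["value"] if decision == "auto_applied" else reviewer_value

    # Log the suggestion, the routing decision, and any human override as evidence.
    log.info(json.dumps({
        "at": datetime.now(timezone.utc).isoformat(),
        "suggestion": suggestion,
        "decision": decision,
        "final_value": final_value,
    }))
    return decision

print(process("...scanned invoice text...", reviewer_value="1,250.00"))   # sent_to_review
```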
Controls & risk
Workflow
Approvals at thresholds, SoD, immutable audit trail, evidence stored at the step.
Integration
AuthN/AuthZ, input validation, idempotency, schema versioning, monitoring, replay.
RPA
Runbooks, credential vault, selector governance, change alerts, kill switch, rollback.
Cost & operating model
Cost drivers
- Workflow: user seats, forms, storage, approvals volume
- Integration: API traffic, queues, data ops, reliability targets
- RPA: bot minutes, VM/runner cost, maintenance from UI change
Run rules
- Owners, SLOs, dashboards, alerts
- Change windows and regression tests
- Quarterly value review; retire low-yield bots and jobs
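To make the quarterly value review actionable, here is a sketch that flags automations whose run cost exceeds the value of the hours they save. The portfolio figures and hourly rate are made up for illustration:

```python
HOURLY_RATE = 40.0   # hypothetical loaded cost of the manual work replaced, per hour

# Illustrative quarterly figures per automation: run/maintain cost vs. hours saved.
portfolio = [
    {"name": "invoice_bot",    "quarterly_cost": 6_000.0, "hours_saved": 90.0},
    {"name": "order_sync_api", "quarterly_cost": 2_500.0, "hours_saved": 400.0},
]

def review(items: list[dict]) -> list[str]:
    """Return automations whose quarterly cost exceeds the value of the time saved."""
    flagged = []
    for item in items:
        value = item["hours_saved"] * HOURLY_RATE
        if item["quarterly_cost"] > value:
            flagged.append(item["name"])
    return flagged

print(review(portfolio))   # ['invoice_bot'] -> candidate to retire or redesign
```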
90-day starter
Days 0–30
- Map the flow (BPMN); list exceptions
- Score fit: workflow vs. integration vs. RPA
- Baseline KPIs and KCIs
Days 31–60
- Pick the pattern; define contracts or selectors
- Add controls; write runbooks and rollback
- Draft ROI and success measures
Days 61–90
- Pilot; track cycle time, first-pass yield (FPY), and exception rate
- Install monitoring; publish deltas
- Plan scale; retire manual steps
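For the pilot metrics, a minimal sketch of how cycle time, first-pass yield, and exception rate could be computed from per-item records; the field names and sample data are illustrative:

```python
from statistics import mean

# One record per processed item during the pilot (illustrative fields).
records = [
    {"cycle_time_hours": 4.0, "passed_first_time": True,  "exception": False},
    {"cycle_time_hours": 9.5, "passed_first_time": False, "exception": True},
    {"cycle_time_hours": 3.2, "passed_first_time": True,  "exception": False},
]

def pilot_metrics(items: list[dict]) -> dict:
    total = len(items)
    return {
        "avg_cycle_time_hours": round(mean(r["cycle_time_hours"] for r in items), 2),
        "first_pass_yield": round(sum(r["passed_first_time"] for r in items) / total, 2),
        "exception_rate": round(sum(r["exception"] for r in items) / total, 2),
    }

print(pilot_metrics(records))
# {'avg_cycle_time_hours': 5.57, 'first_pass_yield': 0.67, 'exception_rate': 0.33}
```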
References
- OMG BPMN 2.0.2 — omg.org
- OMG CMMN and DMN — omg.org
- OpenAPI Initiative — openapis.org
- OAuth 2.0 (RFC 6749) — rfc-editor.org
- OASIS AMQP — oasis-open.org
- Apache Kafka — kafka.apache.org
- IEEE RPA vocabulary (2755) — standards.ieee.org
- W3C WebDriver — w3.org
- NIST AI Risk Management Framework — nist.gov
Pick the right pattern. Prove the value. Keep control tight.
If you want a suitability scorecard and pattern picker, ask for a copy.