Most AI projects fail because of organizational gaps, not technology. This 12-question audit covers data quality, workflow definition, success metrics, operational ownership, and change-management capacity. Score honestly: if you cannot answer at least 9 of the 12 affirmatively for a specific use case, you are not ready and the project will fail. Fix the gaps first.
How to use this audit
Pick one specific AI use case you are considering — not "AI in general." Examples: AI for invoice processing, AI for customer support triage, AI for sales call analysis. Score the 12 questions below honestly for that single use case. The audit is not transferable — readiness for one use case does not mean readiness for another.
Each question scores yes or no. There is no middle ground; "kind of" is no.
The 12 questions
Section 1: Workflow definition (3 questions)
- Can we describe the workflow as a clear sequence of steps?
  If the workflow is "everyone does it slightly differently," AI cannot automate it. Fix the process first.
- Do we know what the team currently does at each step?
  Including the unofficial steps. AI inserted into a workflow nobody can describe will produce output nobody can validate.
- Is there a single person accountable for the workflow today?
  Not a department. A named owner. AI rollouts without a single throat to choke fail.
Section 2: Data quality (3 questions)
- Is the input data digital and accessible?
  Not "the source of truth is a spreadsheet someone emails on Tuesdays." Digital, queryable, in a system you control.
- Is the data structured enough to be reliable?
  Even unstructured data (PDFs, emails) is acceptable if the format is consistent. Inconsistent formatting is what kills AI quality.
- Is the data volume large enough to validate AI performance?
  For most use cases, you need at least 100 historical examples to evaluate AI accuracy. Fewer than that and you cannot tell whether it is working; the sketch below shows why.
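Why 100? Because measured accuracy on a small sample is mostly noise. Here is a minimal sketch of the arithmetic, assuming you can grade the AI's output as right or wrong against historical ground truth (the function name and sample counts are illustrative, not from any specific tool):

```python
import math

def accuracy_margin(correct: int, total: int, z: float = 1.96) -> float:
    """Rough 95% margin of error on a measured accuracy rate
    (normal approximation to the binomial)."""
    p = correct / total
    return z * math.sqrt(p * (1 - p) / total)

# 90 correct out of 100 examples: measured 90%, roughly +/- 6 points.
print(f"{accuracy_margin(90, 100):.3f}")  # ~0.059
# 18 correct out of 20: the same measured 90%, but +/- 13 points.
print(f"{accuracy_margin(18, 20):.3f}")   # ~0.131
```

At 100 examples the error bar is tight enough to tell "works" from "almost works"; at 20 it is not.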
Section 3: Success metric (2 questions)
- Is there a numerical success metric the AI is supposed to move?
  Time per task, error rate, throughput, cost per unit, response time. "Productivity" is not a metric.
- Do we know the current baseline for that metric?
  If you do not know what it is now, you will not know if AI moved it. Measure the baseline before kickoff.
Section 4: Ownership and change management (4 questions)
- Is there an operational (not IT) owner whose performance review depends on adoption?
  IT-led AI projects fail more often than operations-led ones. The team that lives with the workflow has to own the rollout.
- Has the affected team been told about the project before procurement?
  Surprises produce resistance. Pre-procurement conversations produce buy-in.
- Is there budget for change management (workshops, training, iteration)?
  Typically 30–40% of total project budget. Skipping it is the #1 reason adoption fails.
- Is there a feedback mechanism for the team to flag AI errors?
  AI quality improves with feedback. Without a structured feedback channel, errors compound silently.
“The 12 questions are not a gating exercise to delay projects. They are a diagnostic to find out which gap to fix first. Most failed AI projects had the gaps before kickoff and nobody named them.”
Scoring
| Score | Status | What to do |
|---|---|---|
| 11–12 / 12 | Ready to start | Run pilot. Move fast. Most teams here under-execute by waiting for permission. |
| 9–10 / 12 | Mostly ready | Start, but explicitly fix the 2–3 gaps in parallel. Do not pretend they are not there. |
| 7–8 / 12 | Not ready yet | Pause AI procurement. Spend 4–8 weeks fixing data, workflow definition, or ownership gaps. Re-score. |
| Below 7 / 12 | Stop | AI is the wrong project. The gaps are operational, not technical. Fix the foundation. |
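If you are scoring several candidate use cases, the bands in the table reduce to a few lines of code. A minimal sketch in Python; the question wording and function name here are illustrative, not a tool we ship:

```python
QUESTIONS = [
    # Section 1: Workflow definition
    "Can we describe the workflow as a clear sequence of steps?",
    "Do we know what the team currently does at each step?",
    "Is there a single person accountable for the workflow today?",
    # Section 2: Data quality
    "Is the input data digital and accessible?",
    "Is the data structured enough to be reliable?",
    "Is the data volume large enough to validate AI performance?",
    # Section 3: Success metric
    "Is there a numerical success metric the AI is supposed to move?",
    "Do we know the current baseline for that metric?",
    # Section 4: Ownership and change management
    "Is there an operational owner whose review depends on adoption?",
    "Has the affected team been told before procurement?",
    "Is there budget for change management?",
    "Is there a feedback mechanism to flag AI errors?",
]

def audit_status(answers: list[bool]) -> str:
    """Map 12 yes/no answers to the scoring bands above. 'Kind of' counts as no."""
    assert len(answers) == len(QUESTIONS) == 12
    score = sum(answers)
    if score >= 11:
        return "Ready to start: run the pilot."
    if score >= 9:
        return "Mostly ready: start, but fix the gaps in parallel."
    if score >= 7:
        return "Not ready yet: pause procurement, fix the gaps, re-score."
    return "Stop: the gaps are operational, not technical."
```

The point of writing it down this way is not automation; it is that the same yes/no answers, recorded per use case, make the gap visible when you re-score after remediation.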
What to do with a failing score
A failing score is more valuable than a passing one — it tells you exactly which gap blocks the project. The remediation tracks:
- Workflow gaps: Run a workflow-mapping exercise (1–2 days with the team). Document the SOP. Designate an owner. Re-score.
- Data gaps: Digitize the source data, structure it, build the data pipeline before the AI pipeline. Often a 4–8 week prerequisite project.
- Metric gaps: Define the metric, instrument the baseline. Cannot be skipped — a metric defined after rollout is a fiction.
- Ownership gaps: Find the operational owner before procurement. If you cannot, the project does not have an internal sponsor and will fail regardless of tool.
Common reasons businesses fail this audit
- Tool was picked first. Now the project is "implement Tool X," not "solve problem Y." The audit fails because the questions were never asked.
- IT owns the project. The operational team is downstream and uninvested. Adoption will not happen.
- The use case is too broad. "AI for customer service" is not a use case; "AI for triaging inbound support tickets and drafting first-response replies" is.
- Data is treated as someone else's problem. "We will figure out the data once the AI tool is configured." This is backwards.
Why this matters
The audit takes an hour to score. The information it produces is the difference between a 6-month rollout that works and an 18-month one that gets abandoned. We use this exact audit at the start of every AI engagement we run. Run it on yourself first.
Once you have a score, see the AI adoption playbook for the next steps, or build vs buy for the tool-selection framework.
Run this audit with us
If you'd rather have an outside read on your readiness, we'll run this audit with your operational and IT leads in a 2-hour session and produce a written remediation plan. Faster than internal scoring, especially when politics gets in the way.