AI & Automation · 8 min read · April 2, 2026

The AI Readiness Assessment — Is Your Business Actually Ready?

Vendors want to sell software. Consultants want to sell strategy. Here are the six conditions that have to be true before either of them can work.

Most businesses ask: "Which AI tools should we buy?"

The question that actually matters: "Are we ready for any of them?"

After working with dozens of businesses on AI adoption — most of them mid-size operators between 50 and 500 people — I have seen this story play out in many different contexts. Companies that already ran the pilot. Already bought the software. Already told their boards they were "on the AI journey." And quietly — they got very little out of it.

The failure was not the tool. The failure was what came before it.

Research suggests that roughly 9 in 10 AI projects fail to meet initial expectations — and in most of those cases, the failure had nothing to do with the technology.

The AI market has a structural incentive problem. Vendors want to sell software. Consultants want to sell strategy. Neither has a strong incentive to tell you the hard thing: you might not be ready yet.

There are six conditions that have to be true before any AI initiative has a real chance of delivering value. Most companies can honestly answer two or three of them. The ones who can answer five or six are the ones who operationalize AI. Everyone else runs expensive pilots.

Here is the audit. Be honest with yourself.


Pillar 1: Strategy

Diagnostic question: Can you name the specific business outcome this AI initiative is meant to improve — with a number attached to it?

Not "improve efficiency." Not "reduce overhead." A statement like: "We want to cut invoice processing time from 4 days to 1 day" or "We want to reduce inbound support tickets by 30% without increasing headcount."

If you cannot name the metric, you are not ready to invest in the tool. You are buying a solution before you have defined the problem.

Warning sign: "We want to be more innovative" or "We do not want to fall behind competitors."

Quick win: Write one sentence in this format — *We will know this worked if [metric] moves from [current state] to [target] within [timeframe].* That sentence is your north star. If you cannot write it, that is the first thing to build.


Pillar 2: Infrastructure

Diagnostic question: Where does your operational data live — and can it be accessed programmatically?

AI systems process information. If your information lives in spreadsheets, PDFs, disconnected legacy systems, or in someone's email inbox — there is work to do before AI adds value.

Warning sign: "Our data is spread across a few different systems" is a solvable problem. "Our most important data exists only in paper records or people's heads" is a blocker.

Quick win: Pick one process you want to automate. List every data input that process requires. Then ask: is that data in a system that has an API, or is it manual entry? That question identifies your infrastructure gap immediately.
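The inventory exercise above can be sketched in a few lines. The process and its data inputs here are hypothetical examples, not a real audit:

```python
# A minimal sketch of the data-input inventory described above.
# Every input name and source below is a hypothetical example.
inventory = {
    "customer PO":       "email attachment (PDF)",  # manual
    "pricing table":     "ERP system (REST API)",   # programmatic
    "delivery schedule": "shared spreadsheet",      # manual
    "stock levels":      "warehouse system (API)",  # programmatic
}

# Anything without an API is an infrastructure gap to close first.
gaps = [name for name, source in inventory.items() if "API" not in source]

print(f"{len(gaps)} of {len(inventory)} inputs need manual handling:")
for name in gaps:
    print(f"  - {name}: {inventory[name]}")
```

Even at this toy scale, the output is the point: the list of manual inputs is your infrastructure backlog, ranked before you evaluate a single tool.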


Pillar 3: Data Quality

Diagnostic question: If you pulled your customer, operational, or product data right now — what percentage of it would you trust?

AI makes decisions based on data. Poor data quality does not just limit what AI can do — it produces confident, fast, wrong answers.

Data quality issues are the most common and least acknowledged blocker in AI projects. Companies overestimate how clean their data is because nobody has ever had a reason to audit it before.

Warning sign: "Our data is probably fine." That is one of the most common phrases I hear before an AI project stalls.

Quick win: Run a one-week audit on a single data set that matters to your target use case. Count duplicates, missing fields, and inconsistencies. Whatever percentage you find will surprise you — and it is better to know now than after deployment.
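The three counts in that audit can be sketched as a short script. The records and field names are hypothetical; a real audit would run over your exported data set:

```python
# A minimal sketch of the one-week audit: count duplicates, missing
# fields, and inconsistencies. All records below are hypothetical.
records = [
    {"email": "ann@example.com",  "phone": "555-0101", "region": "North"},
    {"email": "ann@example.com",  "phone": "555-0101", "region": "North"},  # duplicate
    {"email": "bob@example.com",  "phone": "",         "region": "South"},  # missing phone
    {"email": "cara@example.com", "phone": "555-0103", "region": "south"},  # casing drift
]

# Duplicates: identical records appearing more than once.
seen = set()
duplicates = 0
for rec in records:
    key = tuple(sorted(rec.items()))
    if key in seen:
        duplicates += 1
    seen.add(key)

# Missing fields: any empty value in any record.
missing = sum(1 for rec in records for value in rec.values() if not value)

# Inconsistencies: one example check — region labels that differ only by case.
regions = {rec["region"] for rec in records}
inconsistent = len(regions) - len({r.lower() for r in regions})

total = len(records)
print(f"{duplicates + missing + inconsistent} issues across {total} records "
      f"({duplicates} duplicates, {missing} missing fields, "
      f"{inconsistent} label inconsistencies)")
```

Three issues in four records is an extreme toy case, but the shape of the report is what matters: a percentage you can put in front of leadership before anyone signs a contract.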


Pillar 4: Governance

Diagnostic question: Who owns the AI initiative when the project launches — and does that person have the authority and accountability to see it through?

This question has no technical answer. It is organizational.

The companies that operationalize AI have a named owner. Not "IT" or "the AI committee." One person with clear accountability, decision-making authority, and a mandate that survives past the initial launch energy.

Warning sign: "We will figure out ownership once we see how the pilot goes." This is how pilots end — not with failure, but with nobody responsible for the next step.

Quick win: Name the owner before the budget is approved. The conversation about who that person is will surface more organizational readiness signals than any tool evaluation.


Pillar 5: Culture

Diagnostic question: Have you had an honest conversation with the people whose work will change — not to explain AI to them, but to listen to what they are worried about?

Technology resistance is mostly a communication failure. Most employees are not afraid of AI because they do not understand it. They are afraid because nobody told them specifically what would change — and what would not.

Change management is the most frequently skipped step in AI implementations. It is also the most reliable predictor of whether the tool gets adopted after go-live.

Warning sign: "We will handle change management closer to launch." Closer to launch is too late. Trust takes months to build, not weeks.

Quick win: In the next two weeks, run one small group session with the team closest to your target process. Ask two questions: "What would make your work easier?" and "What concerns do you have about this?" Take notes. Do not defend. The pattern you hear across multiple people is the change management risk you need to address.


Pillar 6: Talent

Diagnostic question: Do you have someone internally who can own the ongoing operation of this AI system after implementation — or are you permanently dependent on the vendor?

AI systems need maintenance, configuration updates, and iteration. The pilot might be run by the vendor. The live system needs an owner inside your organization — someone who understands the tool well enough to adjust it as your business changes.

Warning sign: "We will train someone once it's live." That is the right instinct and the wrong timeline. The person who will run it should be involved in building it.

Quick win: Before signing a contract, ask the vendor: "What does ongoing maintenance look like, and what internal capability do I need to manage it myself?" The clarity of their answer tells you a great deal about the implementation experience ahead.


The Scoring Model

Go back through the six pillars. For each one, ask: can you answer the diagnostic question with genuine confidence — not "probably," but a clear yes?

  • 5–6 out of 6: You are ready to move. The risk now is speed — do not go so fast that you skip the governance or culture work you said you had covered.
  • 3–4 out of 6: You have real readiness. Build a phased plan. Start with a use case that does not depend on the pillars where you are weak.
  • 1–2 out of 6: The right investment right now is not in AI tools. It is in building the preconditions. That work is faster than you think — and it makes everything that follows significantly cheaper.
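The scoring bands above reduce to a tiny lookup. A minimal sketch, with the recommendation strings paraphrased from the bands:

```python
# A minimal sketch of the readiness scoring bands described above.
def readiness_band(confident_answers: int) -> str:
    """Map a 0-6 count of confidently answered pillars to a next step."""
    if confident_answers >= 5:
        return "ready to move — watch your speed"
    if confident_answers >= 3:
        return "real readiness — build a phased plan around your weak pillars"
    return "build the preconditions before investing in tools"

# Example: confident on strategy, governance, and culture only.
print(readiness_band(3))
```

The function is trivial on purpose. The hard part is the honesty of the input, not the arithmetic.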

Most businesses that come to me sit at three or four. That is not failure. That is the starting point. The audit tells you what to build first.


What Comes Next

I have turned this diagnostic into a structured 90-minute assessment — run with your leadership team, using your actual data, and ending with a phased roadmap. Not a generic report. A document built around your specific gaps, your specific use cases, and what to invest in, in what order.

If you want the full version, DM me "AUDIT."

If you have a question about a specific pillar, drop it in the comments. I will answer.
