The 95% figure
You've probably seen it. An estimated 95% of AI initiatives fail to deliver measurable revenue or productivity gains. It gets cited in conference presentations, vendor decks, and industry reports — usually followed by a pitch for whatever product the person citing it happens to be selling as the solution to the problem.
The figure is contested, as most headline statistics are. But the direction is right. The majority of hotel groups that have launched AI pilots in the last three years have little to show for the investment. Most projects stalled at the proof-of-concept stage. Adoption inside teams remained low. Leadership lost confidence quickly and moved on to the next thing.
This is not a technology problem. The tools work. Claude, GPT-4, Vanna, and a dozen other AI platforms are genuinely useful for the kinds of tasks hotels need help with — data analysis, content generation, customer communication, operational triage. The technology has outpaced the industry's ability to use it.
Three reasons the projects stall
Having watched this pattern repeat across properties of different sizes and in different markets, we see the failure modes cluster into three causes: not the technology, but everything around it.
What the failing projects have in common
The pattern is remarkably consistent. A hotel group's leadership attends a conference where AI is the dominant theme. They return with a mandate to "do something with AI." The IT or digital team is tasked with finding solutions. Vendor demos are booked. A pilot is approved for one property. Six months later, the pilot is still running. Twelve months later, it has been quietly shelved.
The first cause is the brief. It said "use AI" rather than naming a specific business outcome, and that is where the project ends before it begins.
The second cause is the workflow. The pilot failed not because the tool didn't work, but because nobody redesigned the process around it. The revenue manager who was supposed to benefit from the AI briefing tool was still doing her morning analysis the old way, because the briefing wasn't integrated into how her day actually started.
The third cause is accountability. Nobody changed what that revenue manager was responsible for producing. The tool was added, the job wasn't changed, and the tool became redundant.
The projects that actually work
The implementations that stick share a handful of characteristics that distinguish them from the ones that stall. None of them is about the technology choice.
They start with a named business outcome. Not "improve revenue management" — "reduce the time between morning data availability and the first pricing decision from 90 minutes to 20." The outcome is specific, measurable, and owned by someone whose performance is tied to it.
The workflow is redesigned before the tool is deployed. The process that the AI is meant to support is mapped, critiqued, and rebuilt with the AI as a first-class participant — not an add-on. This means someone's job changes. That's uncomfortable. It's also the only way it works.
Leadership stays involved throughout. Not just at the approval stage. Not just at the review stage. The CEO or commercial director who commissioned the project is visible to the team implementing it, asking questions, reviewing outputs, and visibly treating the new system as the way the company now does things.
The one question to ask before starting
Before any AI initiative in hospitality, there is one question that predicts whether the project will stick or stall. Not "which tool should we use?" Not "what's our AI strategy?" Just this:
"What specific decision will be made faster, better, or more consistently because of this — and who is accountable for that decision today?"
If you can't answer both halves of that question in one sentence, the project isn't ready to start. Not because the technology isn't good enough — because the organizational conditions for success don't exist yet. The work to do before the pilot is to create those conditions, not to find a better vendor.
This is uncomfortable advice for an industry that has been sold the idea that AI is a product you buy rather than a capability you build. But the 95% failure rate is the cost of not following it.
Studio Oriente's AI Readiness Audit exists precisely for this moment — before the pilot, before the vendor selection, before the budget commitment. If that's where you are, talk to us.