The question nobody asks first
Every hotel group we speak to has the same question: which AI tool should we use?
It's the wrong question. Not because the tools don't matter — they do — but because it assumes the hard part of hotel AI implementation is selecting the right software. It isn't. The hard part is the six weeks after the contract is signed, when the vendor has moved on to the next sale and the revenue manager is staring at a dashboard that doesn't connect to how she actually works.
Hotel AI implementation fails, and it fails at a remarkable rate, not because the technology is bad. It fails because the organisation wasn't ready for it before the technology arrived.
This piece is about what ready actually looks like, and what a working implementation looks like once it's running.
Why vendor selection is the wrong place to start
The pattern is consistent enough that you can predict where a given project will stall before it launches. It almost always happens at the same moment: the pilot ends, the vendor presents results, and leadership discovers that adoption inside the team is low.
The revenue manager ran the tool a few times. The front office team found workarounds. The GM filed it under "things we tried." The project is quietly extended, then quietly shelved.
Adoption fails not because the tool doesn't work, but because the job description didn't change when the tool arrived.
This is the core failure mode of hotel AI implementation: a new tool added to an old workflow. The workflow was designed without AI in it. Adding AI to the edge of it produces marginal efficiency at best, and friction at worst — because now there's one more system to check.
The natural response to "we need AI" is to start evaluating vendors. Demos get booked. RFPs get written. A pilot gets approved. The logic feels sound: see what's available, pick the best fit, run a test.
The problem is sequencing. Vendor selection is an answer to a question that hasn't been fully asked yet. The question isn't "which AI tool should we use." It's "what specific operational problem are we solving, who owns solving it, and what does success look like in measurable terms."
When that question is answered first, vendor selection becomes straightforward: you're evaluating tools against a defined brief, not against each other's feature lists. The demo becomes a test, not a presentation. The pilot has pass/fail criteria that existed before the vendor arrived.
Build the brief from the business problem, then find the tool that fits it.
A working implementation has four things in common
The implementations that produce measurable results — faster pricing decisions, higher conversion on direct channel inquiries, reduced time in reporting — share a four-part structure that's worth naming: a data layer, an analytical layer, an intelligence layer, and an interface layer.
What a lean, working hotel AI stack looks like
The specific tools matter less than the architecture. But for independent hotels and small groups — the properties that don't have a €200K enterprise contract budget — there is now a stack that covers most of the analytical and commercial AI surface area for a fraction of what it cost two years ago.
The data layer
A single, well-structured database containing reservation history: market segment, room type, booking channel, rate, lead time, stay dates. 20,000 rows covers a medium-sized independent hotel's last two years. This is the foundation. Without it, none of the AI tooling has anything to work with. MySQL hosted on Railway costs €5/month. For a hotel group willing to run a CSV export from their PMS weekly, the data layer is solved.
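To make "well-structured" concrete, here is a minimal sketch of what that reservation table could look like. SQLite stands in for the MySQL database described above, and the table and column names are illustrative assumptions, not a prescribed schema.

```python
import sqlite3

# In-memory SQLite as a stand-in for MySQL on Railway;
# the schema mirrors the fields named in the article.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reservations (
        reservation_id  INTEGER PRIMARY KEY,
        market_segment  TEXT,     -- e.g. 'leisure', 'corporate'
        room_type       TEXT,     -- e.g. 'garden', 'standard'
        booking_channel TEXT,     -- e.g. 'direct', 'OTA'
        rate            REAL,     -- nightly rate in EUR
        lead_time_days  INTEGER,  -- booking date to arrival
        arrival_date    TEXT,     -- ISO dates keep SQL date math simple
        departure_date  TEXT
    )
""")
conn.execute(
    "INSERT INTO reservations VALUES "
    "(1, 'leisure', 'garden', 'direct', 185.0, 21, '2024-03-01', '2024-03-04')"
)
row = conn.execute("SELECT room_type, rate FROM reservations").fetchone()
print(row)  # ('garden', 185.0)
```

One flat table like this, refreshed by a weekly CSV export from the PMS, is enough for everything the rest of the stack does.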
The analytical layer
Natural language querying — the ability to ask "what was ADR for the garden room type in Q1 compared to the same period last year" and receive a correct SQL query and a structured result without touching a spreadsheet — is now accessible without enterprise tooling. Vanna AI translates plain English to SQL against your actual hotel database. It requires training, which takes a few hours, not a few months.
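The kind of SQL a natural-language layer such as Vanna would be expected to produce for that ADR question can be sketched directly. The schema and sample rows below are illustrative assumptions, trimmed to the columns the question needs.

```python
import sqlite3

# Minimal table with just the columns the ADR question touches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (room_type TEXT, rate REAL, arrival_date TEXT)")
conn.executemany(
    "INSERT INTO reservations VALUES (?, ?, ?)",
    [
        ("garden", 180.0, "2025-02-10"),  # Q1 this year
        ("garden", 200.0, "2025-03-05"),
        ("garden", 160.0, "2024-01-15"),  # Q1 last year
        ("garden", 170.0, "2024-02-20"),
    ],
)

# "ADR for the garden room type in Q1 vs the same period last year"
sql = """
    SELECT strftime('%Y', arrival_date) AS year,
           ROUND(AVG(rate), 2)          AS adr
    FROM reservations
    WHERE room_type = 'garden'
      AND CAST(strftime('%m', arrival_date) AS INTEGER) <= 3
    GROUP BY year
    ORDER BY year
"""
for year, adr in conn.execute(sql):
    print(year, adr)  # 2024 165.0, then 2025 190.0
```

The value of the analytical layer is that nobody on the commercial team writes or reads that SQL; they ask the question in plain English and see the structured result.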
The intelligence layer
The move from data to insight — from a table of numbers to a structured interpretation of what the numbers mean for tomorrow's pricing decision — is where Claude earns its place in the stack. Not as an oracle. As a structured reader of current data state that communicates findings in the language commercial teams actually use.
The AI is not making pricing decisions. It's removing the time cost of getting to the information that the decision requires.
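In practice, "structured reader of current data state" can be as simple as summarising the day's numbers into a prompt. The sketch below shows that step; the field names and prompt wording are illustrative assumptions, not anyone's production prompts.

```python
# Turn a snapshot of current metrics into a briefing prompt for the model.
# The instruction deliberately keeps the pricing decision with the human.
def build_briefing_prompt(snapshot: dict) -> str:
    lines = [
        "You are briefing a hotel revenue manager before the morning pricing call.",
        "Current data state:",
    ]
    for metric, value in snapshot.items():
        lines.append(f"- {metric}: {value}")
    lines.append(
        "In three bullet points, state what changed, why it likely changed, "
        "and what it implies for tomorrow's pricing decision. Do not recommend "
        "a specific rate; the decision stays with the revenue manager."
    )
    return "\n".join(lines)

prompt = build_briefing_prompt({
    "occupancy next 7 days": "82% (up 6pts week-on-week)",
    "ADR pickup, garden rooms": "+EUR 12 vs same period last year",
    "direct channel share": "41%",
})
print(prompt)
```

Everything interesting happens in the snapshot, not the prompt: the model is only ever reading a defined, current slice of the data layer.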
The interface layer
Retool, for a hotel that wants a dashboard. A structured prompt in Slack or email, for a team that doesn't. The interface is the least important part of the stack, and the most often over-engineered. Start with the output the team will actually read, then build backward.
SO Labs built a working version of this stack for €40/month. It covers five revenue management workflows, includes a live Claude briefing that updates with every filter change, and runs natural language queries against 29,000 rows of real hotel data. The point isn't the specific tools: it's that the architectural problem is solved and the cost floor is now accessible.
What realistic implementation looks like week by week
The most common miscalibration in hotel AI implementation is timeline expectation. Vendors underestimate setup complexity to close the deal. Buyers assume the pilot will be representative of the real deployment. Neither is accurate.
A realistic implementation for a single property or small group, starting from a clean brief:
Data audit and brief definition
What data exists, in what format, at what quality. Where the gaps are. What the specific business outcome is and who owns it. This is unglamorous work. It determines everything.
Workflow redesign
Map the current process. Identify the AI leverage points. Design the new workflow. Get sign-off from the team who will live in it, not just the sponsor who approved the budget.
Build and configure
Tool selection, integration, prompt engineering, training. The length of this phase depends entirely on data quality from phase one. With clean, structured data, it compresses significantly.
Supervised deployment
The tool is live, but the old workflow runs in parallel. The team builds the habit and the trust. Edge cases surface here. This phase ends when the team stops using the old workflow because the new one is better.
Ownership and iteration
The named owner reviews performance against the defined outcome. Prompts are refined. New queries are trained. The system is treated as a living tool, not a finished product.
The one question that predicts whether it will work
Before any tool is selected, before any vendor is briefed, before any budget is committed — answer this question.
"What specific decision will be made faster, better, or more consistently because of this — and who is accountable for that decision today?"
Both halves matter equally. The specific decision: not "revenue management" but the 9:00am pricing call, the channel mix review, the group rate approval. And the accountable person: not the project sponsor, but the person who currently makes that decision by hand.
If you can answer both halves in one sentence, the organisational conditions for a working implementation exist. The technology is the easy part from that point.
If you can't, the work to do first is not tool selection. It's creating the conditions. That might mean defining accountabilities that don't currently exist. It might mean redesigning a process that nobody has questioned for five years. It might mean having a conversation with a revenue manager about what her job looks like in two years.
That work is harder than configuring a dashboard. It's also the only work that produces a result that sticks.
Studio Oriente runs AI Readiness Audits specifically for this moment: before the pilot, before the vendor selection, before the budget is committed. If that's where you are, talk to us.