AI Analysis · Stanford HAI 2026

The 88% Problem: Your Hotel Has Adopted AI. Now What?

Stanford's new AI Index report opens with a premise that should stop every hospitality executive cold: the technology has arrived, and this year's data shows what happens after arrival.

What happens after arrival, it turns out, is not transformation. It is friction.

Organizational AI adoption reached 88% in 2025. That number will appear in vendor decks for the rest of the year. It will be used to argue that the moment has passed, that the laggards are now officially behind, that urgency is no longer a sales tactic but a fact.

The number is real. The conclusion is not.

Read the full Stanford HAI AI Index 2026 Report →

Adoption ≠ value

Adoption is not the same as value

Here is what sits directly underneath the 88% headline, buried three chapters later in the same report.

AI agent deployment remains in single digits across nearly all business functions. The systems hotels are "adopting" are mostly tools for individuals. ChatGPT for a marketing manager. Copilot for a revenue analyst. A chatbot that handles the top ten FAQ categories on the website.

That is adoption in the way a gym membership is fitness. The card exists. The results do not.

— Studio Oriente · AI Analysis

Real operational transformation requires AI that connects to systems, executes multi-step workflows, and operates with enough autonomy to change outputs at scale. That kind of deployment is not at 88%. It is barely off zero. And the gap between where most hotels think they are and where that bar actually sits is the most expensive misconception in hospitality technology right now.

The reason most hotel AI sits at the individual tool level is not a technology problem. It is a knowledge problem. The tools exist. What does not exist, in most organizations, is someone who understands both the technology and the operational context well enough to connect them.

88% · Organizational AI adoption in 2025 (Stanford HAI)
<10% · AI agent deployment across nearly all business functions
362 · Documented AI incidents in 2025, up 55% year-on-year
The governance gap

The governance gap nobody is talking about

Stanford's report makes a structural argument that deserves more attention than the adoption headlines.

AI capability is scaling. The frameworks around it are not.

Governance structures, evaluation methods, and the internal infrastructure needed to manage AI's impact are all lagging the technology itself. This is true at a global level. It is catastrophically true inside most hotel companies.

Ask a hotel group whether they have a policy governing what data can be fed into an AI tool. Ask whether they have a process for auditing AI-generated output before it influences a rate decision or a guest communication. Ask whether their AI vendor has disclosed how the model was trained, on what data, and under what safety constraints.

Most cannot answer any of those questions. They adopted the tool. They skipped the governance.

This is not a technology failure. It is a management failure. And the Stanford report is unambiguous about where it leads: documented AI incidents in 2025 rose 55% year-over-year. Organizations reporting clear responsible AI policies are still a minority. The gap between what AI can do and what organizations can manage is the defining risk of this moment.

This is the work we do at Studio Oriente before a single tool gets deployed. Not a policy document that lives in a shared drive — a working governance layer: which data sources feed which systems, who owns the outputs, where human sign-off is non-negotiable, and what the vendor is and is not allowed to do with your guests' information. The hotels we work with know the answers to those questions before they go live.
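A governance layer like that can start as something deliberately unglamorous: an explicit, machine-checkable policy map that answers "which data may reach which system, and where is a human non-negotiable?" The sketch below is illustrative only — the data classes, system names, and rules are hypothetical placeholders, not any hotel's actual policy and not Studio Oriente's implementation.

```python
# Hypothetical governance map: which data classes may flow to which
# systems, and where human sign-off is non-negotiable. Deny by default.
POLICY = {
    "guest_pii":    {"allowed_systems": ["pms"],                "human_signoff": True},
    "rate_history": {"allowed_systems": ["rms", "bi_tool"],     "human_signoff": True},
    "public_faq":   {"allowed_systems": ["chatbot", "bi_tool"], "human_signoff": False},
}

def is_permitted(data_class: str, target_system: str) -> bool:
    """Unknown data classes or unlisted systems never flow."""
    rule = POLICY.get(data_class)
    return bool(rule) and target_system in rule["allowed_systems"]

def needs_signoff(data_class: str) -> bool:
    """Anything not explicitly classified requires a human."""
    rule = POLICY.get(data_class)
    return rule is None or rule["human_signoff"]
```

The point of writing it down this way is that the deny-by-default stance becomes testable: a new tool either has an entry in the map or it does not go live.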

The jagged frontier

Where the productivity gains actually are

The report documents real, measured productivity gains from AI: 14% to 26% in customer support and software development. These numbers are widely cited. They are also widely misread.

To understand why, it helps to borrow a concept that researchers and economists have started using to describe how AI actually behaves. They call it jagged intelligence. The term, coined by Andrej Karpathy — one of OpenAI's founding researchers — describes the uneven, unpredictable shape of AI capability: extraordinary in some areas, brittle in others, and not always easy to tell which is which in advance.

The Stanford report illustrates this vividly. The same systems that can win a gold medal at the International Mathematical Olympiad cannot reliably read an analog clock. AI agents that resolve real GitHub software issues with near-human accuracy still fail roughly one in three attempts on structured computer tasks. The frontier is not a smooth line. It is a jagged edge — and the hospitality industry sits on both sides of it.

The frontier is not a smooth line. It is a jagged edge — and hospitality sits on both sides of it.

— Studio Oriente · AI Analysis, via Stanford HAI 2026

The productivity gains are real where tasks have a clear right or wrong answer and a high volume of repetitions. Answering the same fourteen pre-arrival questions. Extracting structured data from reservation forms. Categorizing guest feedback at scale. Generating the first draft of a standard communication. These are the tasks where the 14–26% gains live, and they are genuinely worth capturing.

The gains disappear — or reverse — where judgment is the core of the task. Reading a difficult guest and deciding how to respond. Handling a complaint that sits at the intersection of policy, emotion, and relationship. Recommending an experience that requires understanding what a person actually wants, not what they typed into a search field. These are the moments that define hospitality at its best, and they are precisely the tasks where AI is still weakest. Not because the tools are bad, but because the feedback loops that train AI systems — clear right and wrong, pass or fail — do not exist for the kind of nuanced judgment that hospitality runs on.

This is the map that should drive every hotel AI decision. Not "what tools can we deploy?" but "where on the jagged edge does this task actually sit?"

The knowledge infrastructure question

At Studio Oriente, before any implementation, we do a task-level audit: mapping each candidate use case against where it sits on that jagged frontier. High volume, low judgment, clear feedback loop — strong AI candidate. Low volume, high judgment, context-dependent — not yet, or not without significant human oversight built in.
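The audit heuristic above — volume, judgment, feedback loop — can be sketched as a simple classification rule. This is an illustrative sketch of the idea, not Studio Oriente's actual methodology; the labels and example use cases are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    volume: str           # "high" or "low": how many repetitions of the task
    judgment: str         # "high" or "low": how much nuanced judgment it needs
    clear_feedback: bool  # is there an unambiguous right/wrong signal?

def classify(case: UseCase) -> str:
    """Place a candidate use case on the jagged frontier."""
    if case.volume == "high" and case.judgment == "low" and case.clear_feedback:
        return "strong AI candidate"
    if case.judgment == "high":
        return "not yet, or only with human oversight"
    return "pilot with human review"

faq = UseCase("pre-arrival FAQ replies", volume="high",
              judgment="low", clear_feedback=True)
complaint = UseCase("escalated guest complaint", volume="low",
                    judgment="high", clear_feedback=False)
```

Even a toy version like this forces the useful conversation: most proposals that sound like the first case turn out, on inspection, to contain a judgment step that pushes them into the second.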

We also build the knowledge infrastructure that makes AI useful in the places where it can work. What we call Meridian — a structured hotel knowledge base — gives AI systems the specific operational context they need to be genuinely useful in a hospitality setting. Rate positioning logic. Segment behavior patterns. Property-specific policies. The institutional knowledge that currently lives in the heads of your best people and disappears when they leave.

When AI has access to that context, the productivity gains are real and sustained. When it does not — when it is running on a generic model with no grounding in your specific property, market, or guest profile — you get plausible-sounding outputs that require as much human correction as the task would have taken in the first place.
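The difference between a grounded and a generic prompt is mechanical, and a few lines make it concrete. The knowledge-base entries below are invented examples of the kind of property-specific context a structured knowledge base like Meridian would supply; nothing here reflects its actual schema or content.

```python
# Invented entries standing in for property-specific operational knowledge.
KNOWLEDGE_BASE = {
    "rate_positioning": "Position 10-15% above comp set midweek; match on weekends.",
    "late_checkout":    "Complimentary until 14:00 for loyalty tiers Gold and above.",
}

def build_prompt(task: str, topics: list[str]) -> str:
    """Prepend retrieved property context so the model answers from facts,
    not from plausible-sounding generic defaults."""
    context = "\n".join(f"- {KNOWLEDGE_BASE[t]}" for t in topics if t in KNOWLEDGE_BASE)
    return f"Property context:\n{context}\n\nTask: {task}"

prompt = build_prompt(
    "Draft a reply to a Gold member asking about late checkout.",
    ["late_checkout"],
)
```

Without the retrieved context, the same model produces a confident, generic answer about "typical hotel checkout policies" — exactly the plausible-sounding output that then needs human correction.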

The productivity gains in the Stanford report are achievable in hospitality. But they require mapping the jagged edge first, and building the knowledge infrastructure second. The tool comes last.

Guest data

The guest data problem nobody names directly

There is a dimension of hotel AI adoption the Stanford report gestures toward but does not name directly: most AI tools in the market are built on a business model that depends on your data.

The model is familiar. The tool is free or cheap. The cost is the training signal your usage provides, the guest data that flows through the system, and the profile that gets built over time.

For a hotel, that is not a theoretical concern. Guest data is the competitive asset. Knowing that a guest prefers a high floor, books through the same corporate account, and always requests a late checkout is not metadata. It is the foundation of a relationship. Handing it to a vendor whose terms of service allow model training or third-party data sharing is a strategic error that most hotels are making right now without realizing it.

This is exactly why we are building Alma, our intent-driven CRM for hospitality. No personally identifiable information stored at rest. No guest profile that can be sold, subpoenaed, or used to train a model you do not control. The intent signals that matter for personalization — what a guest wants, when they want it, how they prefer to be approached — captured and used without creating the liability that standard CRM architectures produce.
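One generic pattern for holding intent signals without PII at rest is keyed pseudonymization: store only a keyed one-way hash of the guest identifier alongside the behavioral signal, so a returning guest can be linked to their preferences without the identifier itself ever touching disk. The sketch below illustrates that pattern in general terms; it is not Alma's actual architecture, and the key handling is a placeholder.

```python
import hmac
import hashlib

# Placeholder only: a real deployment would load this from a managed
# secret store and rotate it, never hard-code it.
SECRET_KEY = b"rotate-me"

def pseudonym(guest_email: str) -> str:
    """Keyed one-way hash: links a returning guest's signals without
    the email address itself ever being stored."""
    return hmac.new(SECRET_KEY, guest_email.lower().encode(), hashlib.sha256).hexdigest()

# Store only the pseudonym and the intent signal, never the raw identifier.
intent_store: dict[str, list[str]] = {}

def record_intent(guest_email: str, signal: str) -> None:
    intent_store.setdefault(pseudonym(guest_email), []).append(signal)

record_intent("ana@example.com", "prefers high floor")
record_intent("ana@example.com", "late checkout")
```

The design choice is that the store remains useful for personalization — signals accumulate per guest — while a breach, subpoena, or vendor training run on the store alone yields no identifiable individuals.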

That is the data governance problem solved at the product level rather than the policy level.

Employment signal

The employment signal worth watching

The report includes a data point that has not yet made it into industry conversation.

In software development, where AI's measured productivity gains are among the clearest of any sector, employment for developers aged 22 to 25 in the United States fell nearly 20% in 2024. Headcount for older developers continued to grow.

This is not a hospitality story yet. But it is a preview of one.

AI is not replacing professionals with experience and judgment. It is compressing the entry-level path that produces them. The junior roles that used to exist as the training ground for senior talent are the first to shrink.

Hotels run on that pipeline. The front desk agent who becomes an assistant manager. The revenue coordinator who becomes a director. If AI erodes that structure before the industry has a framework for what replaces it, the talent problem ten years from now will be severe.

This is not a reason to avoid AI. It is a reason to think harder about where it is deployed and why — and to make sure the efficiency gains land in ways that strengthen the organization rather than hollow it out.

After arrival

What "after arrival" actually requires

The Stanford report's framing is precise and worth borrowing: the technology has arrived. The question is whether the systems around it can keep up.

For hotels, those systems are not technical. They are organizational. They are the policies, the training, the decision rights, the vendor evaluation criteria, the governance structures, and the honest measurement of what AI is actually delivering.

This is what Studio Oriente exists to build. Not a technology implementation — an operational AI capability: the knowledge infrastructure, the governance layer, the measurement framework, and the human expertise to run it. We work as an embedded team, not a consulting report. Decisions get made, systems get built, and the hotel owns what gets built.

88% of organizations have adopted AI. Most of them have adopted it the way they adopted every technology wave before it: fast, broad, and shallow.

The hotels that get this right in the next two years will not be the ones with the most tools. They will be the ones that treated AI as an operational discipline rather than a procurement decision.

The others will hit 95% adoption. They will still be asking why it is not working.

Newsletter

Checked In.

What actually works when hotels put AI to use. No slide decks. Straight to your inbox — bi-weekly.