
The semantic DOM: Why your AI needs a map, not a flashlight

Lane Greer

Senior Manager, Strategic Solutions Specialists

Last updated: May 14, 2026

I don’t know if you’re anything like me, but my brain goes to weird places when I’m stuck in traffic. Lately, I’ve been obsessed with the impact of live maps on urban planning.

Out in West Atlanta, the roads have been under construction for what feels like a decade. As I’m sitting there, I find myself wondering who makes the call on which roads to widen. Is a civil engineer sitting in an office looking at Waze data? Are they accounting for the fact that, because everyone has a live map in their pocket, we all collectively "reroute" around congestion, creating entirely new traffic patterns on roads that were never designed for that volume?

It’s a massive shift in the profession. When I started college, I was still on a flip phone. Exploring Dallas meant a lot of trial and error (mostly error). It wasn't until my senior year that I got a smartphone with real maps. Suddenly, the resolution of the world changed. The unknown was replaced by a digital twin of the city that told me exactly where I was and where I could go.

Today, we’re at a similar “senior year” moment with AI. It’s that tipping point where a technology moves from a novelty you play with to infrastructure you rely on. But right now, we’re asking AI agents and automated tests to navigate our digital experiences at low resolution. Sure, they can find an "Add to Cart" button—usually. But they stumble when the user wants the [blue] [button-down] [long-sleeved] shirt in size [M] that’s [in-stock] and [on-sale] for at least [25% off] with [free shipping]. Without a semantic layer, we're forcing these agents to find their way with a flashlight in a world that desperately needs a live map.

The blindness of the raw web

Most websites are built for human eyes, not machine logic. We use React, Vue, or SwiftUI to create beautiful interfaces, but the underlying code is often a graveyard of generic tags. To an AI agent—or a Computer Use Agent (CUA)—a checkout button might look identical to a newsletter signup if they’re both just <button> elements.

This is what I call the AI sight problem.

Without a semantic layer, your AI is essentially blind. It’s guessing intent based on proximity or text labels that might change during a localized A/B test. When the guess is wrong, the automation breaks.
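To make the contrast concrete, here is a minimal sketch of the two ways an agent can resolve a button's purpose. The attribute name (data-intent) and the helper are illustrative, not a standard:

```javascript
// Hypothetical sketch: resolving a control's intent from an explicit
// data-* attribute instead of guessing from its visible label.
// The attribute name (data-intent) is illustrative, not a standard.
function resolveIntent(attrs, visibleText) {
  // Prefer the declared intent; the label is a last-resort guess
  // that can change under an A/B test or localization.
  if (attrs['data-intent']) return attrs['data-intent'];
  return /cart|checkout/i.test(visibleText) ? 'checkout-guess' : 'unknown';
}

// A decorated button is unambiguous even if its copy is localized:
console.log(resolveIntent({ 'data-intent': 'add-to-cart' }, 'Añadir'));
// → 'add-to-cart'

// An undecorated one forces the agent back to label-matching:
console.log(resolveIntent({}, 'Sign up for our newsletter'));
// → 'unknown'
```

The declared attribute survives copy changes, translations, and redesigns; the text-matching fallback is exactly the flashlight-in-the-dark behavior described above.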

The map is not the territory (the problem of state)

But here’s the thing: Finding the button is the easy part. Any decent "digital twin" can usually stumble its way to a primary CTA.

Where the wheels fall off is the state of the page.

An AI agent looks at a grid of color swatches and size dropdowns and sees a maze. Is the [Navy] swatch [selected], or just [hovered]? Is the [Size Large] actually [in-stock], or is that subtle strikethrough only visible to a human? Is the price a [Sale] price, or is that a strikethrough of the MSRP? Does the [Free Shipping] badge only apply if they add another $10 to the cart?

When we don't provide a semantic layer, we’re forcing AI to "hallucinate" the logic of our business. It's trying to infer inventory status from a CSS class like .swatch-disabled-opaque. That’s not a map; that’s a riddle.
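Here is a small sketch of what "map instead of riddle" looks like in practice. The attribute names (data-state, data-availability, data-pricing) are hypothetical, chosen only to illustrate the idea:

```javascript
// Hedged sketch: the same swatch, read as facts rather than riddles.
// Attribute names are illustrative, not part of any standard.
function swatchState(attrs) {
  return {
    selected: attrs['data-state'] === 'selected',
    inStock: attrs['data-availability'] === 'in-stock',
    onSale: attrs['data-pricing'] === 'sale',
  };
}

// With semantic decoration, the agent reads explicit state:
const navy = swatchState({
  'data-state': 'selected',
  'data-availability': 'in-stock',
  'data-pricing': 'sale',
});
console.log(navy); // { selected: true, inStock: true, onSale: true }

// Without it, the agent is left parsing styling hooks like
// class="swatch-disabled-opaque" and inferring what "disabled" means.
```

Every question from the paragraph above—selected vs. hovered, in-stock vs. struck-through—collapses into an attribute lookup instead of an inference.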

Notes from the lab: the industry agrees

If you think this is just a niche engineering problem, look at what the major AI labs are saying. We are currently seeing a massive shift in how AI companies expect us to build for the web.

  • Anthropic: In their documentation for Computer Use, they explicitly point to the "Accessibility Tree" as the primary interface for agents. They argue that the raw DOM is too noisy and that agents need a "semantically meaningful" view of the page to be reliable. Agent navigation can be enhanced further by taking this kind of semantic decoration into account.

  • OpenAI: With the launch of Operator, the focus has shifted to "context engineering." The consensus is that AI agent optimization (AIO) is the new SEO. If your site doesn't provide explicit, machine-readable context, the agent will simply move on to a competitor that does.

  • Community Standards: We are seeing the rise of the llms.txt standard—a way for websites to provide a "map" for agents. But for dynamic retail, a static text file isn't enough. You need that map baked into the live experience.
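For a sense of what that static "map" looks like, here is a tiny, hypothetical llms.txt following the shape of the community proposal (an H1 title, a blockquote summary, and sections of links); the site name and URLs are invented for illustration:

```
# Example Retail Co
> Direct-to-consumer apparel store. Product, inventory, and checkout
> pages are decorated with semantic data-* attributes.

## Key pages
- [Product catalog](https://example.com/catalog.md): browsable product list
- [Returns policy](https://example.com/returns.md): plain-text policy summary
```

A file like this tells an agent where to start, but it cannot tell it whether the navy medium is in stock right now—which is why the map also has to live in the page itself.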

The semantic symphony: FS-Skills and Fullcapture

At Fullstory, we’ve been obsessed with this "mapping" problem since long before the advent of the LLM. We recently open-sourced fs-skills, a framework for "decorating" your digital experience with semantic meaning.

But here is the truth: A semantic layer is only as good as the foundation it sits on. Traditional analytics can decorate elements, but decoration without capture is just labeling the events you’ve already remembered to configure. Only Fullstory Fullcapture is optimized to capture data in a way that can power this semantic symphony.

When you combine a decorated DOM with Fullcapture, you create a high-fidelity record that serves every system in your stack. You can enrich this record using custom properties to track specific user states, or leverage setPageProperties to define the metadata of the experience itself. Even server-side custom properties can be piped back in to ensure the "map" includes backend reality like inventory levels.
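As a rough sketch of the idea, here is the kind of page-level metadata object you might assemble from product data. The property names are illustrative, and the commented-out call only gestures at the browser API—check the setPageProperties documentation for the exact signature:

```javascript
// Hedged sketch: deriving page-level metadata from backend reality
// (inventory, pricing) so the "map" reflects the live experience.
// Property names are illustrative, not a prescribed schema.
function buildPageProperties(product) {
  return {
    pageName: 'Product Detail',
    productId: product.id,
    onSale: product.salePrice < product.msrp,
    inventoryStatus: product.unitsLeft > 0 ? 'in-stock' : 'out-of-stock',
  };
}

const props = buildPageProperties({
  id: 'shirt-42',
  msrp: 40,
  salePrice: 30,
  unitsLeft: 3,
});

// In the browser, an object like this would be handed to the analytics
// snippet (see the setPageProperties docs for the real call shape).
console.log(props.onSale, props.inventoryStatus); // true 'in-stock'
```

The point is that sale status and inventory are computed from the source of truth once, then declared—so every downstream consumer reads the same facts instead of re-deriving them from pixels.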

This approach changes the game for five key areas:

  1. Agentic site navigation: When a CUA hits your site, it doesn't have to reason about whether a shirt is available. The semantic attribute tells it. This is the difference between an AI prompted to guess and an AI enabled to observe.

  2. Analytics & session intelligence: You aren't just seeing a click; you're seeing a "stock-out interaction" on a "promotional item." The context is baked into the event, retroactively queryable without re-instrumentation.

  3. Test automation: Your tests don't break when the UI team changes the "Out of Stock" styling from gray text to a red "X." The semantic state remains the same. No brittle CSS class hunting.

  4. Data science & ML: This is where the warehouse comes alive. When you pipe structured behavioral data into your dbt pipelines, your propensity models finally have teeth. You aren't just modeling "users who visited a PDP"; you're modeling users who specifically interacted with [on-sale] items but were deterred by [out-of-stock] size selections.

  5. Observability (component health): This moves us from generic site-wide errors to granular observability. By defining the component boundary semantically, you can monitor latency, error rates, and rage click frequency at the source.
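The test-automation point above can be sketched in a few lines. The helper and attribute names are hypothetical; the idea is that selectors target declared semantics, not styling:

```javascript
// Hypothetical sketch: building test selectors from semantic state
// instead of CSS classes. Attribute names are illustrative.
function semanticSelector(component, state = {}) {
  const stateAttrs = Object.entries(state)
    .map(([key, value]) => `[data-${key}="${value}"]`)
    .join('');
  return `[data-component="${component}"]${stateAttrs}`;
}

// The assertion keeps passing whether "out of stock" is rendered as
// gray text, a red X, or anything else—the declared state is the contract.
console.log(semanticSelector('size-picker', { availability: 'out-of-stock' }));
// → '[data-component="size-picker"][data-availability="out-of-stock"]'
```

A redesign changes the styling classes; it does not change the meaning of "this size picker is out of stock," so the selector survives.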

The civil engineer of the digital experience

There is a recurring fear in the industry: If AI can navigate the site and analytics are automated, do we still need developers and analysts?

Think back to that civil engineer in Atlanta. Their job didn't disappear when Google Maps launched; it got more interesting. They moved from guessing where cars might go to architecting systems that respond to real-time behavioral data.

AI is a growth lever, not a human replacement. Adding a semantic layer doesn't make the developer obsolete; it makes them an architect of intent. Instead of spending 40% of your week fixing broken CSS selectors in Selenium or manually tagging events, you are building the source of truth for the entire enterprise.

You’re moving from tagging janitor to data strategist.

Precision over hype

Not every problem is a generative AI problem. If you’re trying to detect if a user is struggling on your site because they can't find their size, you don't need an LLM to chat about it—you need classical data science looking at behavioral signals.

But for that data science to work, it needs to know what it’s looking at. It needs to know the difference between a user browsing colors and a user frustrated by a ghost swatch that looks clickable but isn't.

The call to action for retail tech leaders

If your digital experience is just a utility, AI will automate it away. If you want it to be a destination, it has to be built on a foundation of high-fidelity behavioral data.

Ditch the wand. Stop looking for the magic AI fix. Embrace the lab coat. Start decorating your DOM with the intent, state, and meaning your systems—and your customers—deserve.

→ Curious how we're doing this at scale? Check out our FS-Skills repository or explore the documentation for custom properties to see how we're mapping the future of digital experience.

Lane Greer ✦ Subject Matter Expert
Senior Manager, Strategic Solutions Specialists, Fullstory

Lane Greer, a Georgia Tech MBA graduate, joined Fullstory in January 2018 to design and launch the Customer Success practice. After conducting extensive customer interviews nationwide, Lane wrote Fullstory's first Onboarding program. Now leading a global team of Solution Specialists, Lane remains dedicated to delighting customers. Lane is passionate about the intersection of cutting-edge technology and its positive impact on the human experience.

Additional Resources

Turn messy event data into AI gold with Fullstory and Snowflake Cortex

AI agents need structured data to be effective. Learn how Fullstory’s Fullcapture and Silver Schema refinement ensure behavioral data is AI-ready.

Building lasting customer loyalty with behavioral data and AI

Discover how AI and behavioral data can boost customer loyalty through personalized experiences and proactive engagement strategies.

Read the blog
The ghost in the machine: Why AI agents are exposing our technical debt

Lane Greer outlines how integrating semantic data attributes in your UI enhances performance, analytics, and AI readiness in digital storefronts.

Read the blog
How to build lasting value in a world of ephemeral AI agents

Lee Dallas explains how the AI agent lifecycle is disrupting enterprise budgeting and why your data architecture is the key to lasting returns.

Read the blog
The agentic workspace: Adapting and investing in the age of AI  

Lee Dallas shares tips for making strategic investments as AI tools evolve and discusses where humans fit into the agentic workspace.

Read the blog
The two sides of AI: Reactive and proactive approaches explained

Explore the two sides of AI and how businesses can use both to analyze past behavior and predict future actions for better user experiences.

Read the blog