03 March 2026

Agents are the new audience


In summary

  • The web is increasingly consumed by AI agents, not just humans
  • Cloudflare’s Markdown for Agents formalises machine-first content delivery
  • Discovery is shifting from search rankings to agent ingestion
  • Infrastructure decisions now influence visibility, cost and control

The web has a new audience

The web has a new audience.

For the past 20 years, we’ve optimised websites for people.

Design systems, conversion journeys, responsive layouts and engagement metrics were all built around human behaviour.

But increasingly, the first reader of your content isn’t human.

It’s an AI agent.

Whether that’s Gemini generating an answer before a user clicks, an AI Overview surfacing a quoted passage, or an autonomous system summarising your page into downstream tools, agents are now mediating how information is discovered.

And importantly, this isn’t theoretical.

When someone clicks through from an AI Overview, they often land directly on the exact sentence the model referenced. Not via a traditional search result, but via a highlighted text fragment embedded in the URL.

In other words, users are no longer always arriving from search. They’re arriving from summaries.

Sometimes we can see this. Sometimes we can infer it. And sometimes it looks indistinguishable from direct traffic.

Which means AI agents aren't just shaping discovery; they're reshaping how that discovery can be measured.

This isn’t simply a UX evolution. It’s an infrastructure shift in how attention flows across the web.

Agents: audience or infrastructure?

Are AI agents a new audience? Or are they an infrastructure layer sitting between users and content? The answer is yes.

They behave like an audience because they consume content. They behave like infrastructure because they mediate access to humans.

And that distinction matters.

If agents increasingly decide what content is surfaced, summarised or included in downstream systems, then being “found” online no longer just means ranking well. It means being legible, extractable and efficient for machines to ingest.

That’s where Cloudflare’s recent announcement, Markdown for Agents, becomes strategically interesting.

HTML was built for humans

From an engineering perspective, HTML does some things extremely well.

It is designed to:

  • Make a visual impact
  • Structure layouts
  • Apply styling and interactivity
  • Keep users engaged

But HTML is presentation-first.

Modern websites are often generated by frameworks like React. That means the rendered page can be extremely verbose. Large amounts of markup, scripts and structural scaffolding exist around relatively small amounts of actual content.

Take a simple homepage. The visible content might be a headline and a paragraph. But the underlying HTML can be hundreds or thousands of lines long: navigation structures, scripts, tracking tags, layout divs, embedded components.

For a human, that doesn’t matter. For an AI agent, it does.

Every token processed has a cost. Every unnecessary element adds cognitive load to the model. And the more work an agent has to do to extract signal from noise, the greater the chance of failure, truncation or unexpected output.

Agents tend to degrade the more work they have to do, although they are improving quickly.

Markdown: content first

Markdown flips the priority. It is content-first, style-light.

Instead of scanning through layout instructions, scripts and styling layers, an agent can read:

  • Headings
  • Paragraphs
  • Lists
  • Basic structural markers

Nothing more.

For example, a simple HTML boilerplate contains little to no styling and might look like this:

HTML

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Hello World Page</title>
</head>
<body>

    <header>
        <h1>Welcome to My Website</h1>
    </header>

    <main>
        <p>Hello World!</p>
    </main>

    <footer>
        <p>© 2026 My Hello World Page</p>
    </footer>

</body>
</html>

The same content in Markdown becomes:

Markdown
# Welcome to My Website

Hello World!

---

© 2026 My Hello World Page

For an agent, that difference is significant.

Less structure to parse, fewer tokens to consume, and lower compute overhead.

Cloudflare’s post referenced token reductions of up to 80%. In practice, I could see cases where it’s even higher, particularly on heavily styled or framework-generated sites.

Token efficiency isn’t just a technical detail. It translates directly into:

  • Lower infrastructure cost
  • Faster processing
  • Greater likelihood of full content ingestion
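A crude way to see the gap between the two examples above is to compare rough token counts. Whitespace-split words are only a proxy for model tokens (real BPE tokenisers count differently), but the direction of the ratio holds:

```python
# The HTML boilerplate and its markdown equivalent from the examples above.
html_page = """<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Hello World Page</title>
</head>
<body>
  <header><h1>Welcome to My Website</h1></header>
  <main><p>Hello World!</p></main>
  <footer><p>© 2026 My Hello World Page</p></footer>
</body>
</html>"""

markdown_page = "# Welcome to My Website\n\nHello World!\n\n---\n\n© 2026 My Hello World Page"

def rough_tokens(text):
    """Whitespace-split words as a crude stand-in for model tokens."""
    return len(text.split())

h, m = rough_tokens(html_page), rough_tokens(markdown_page)
print(f"HTML: {h} words, Markdown: {m} words, reduction: {1 - m / h:.0%}")
```

Even on this tiny, unstyled page the markdown version is roughly half the size; on a framework-generated page with scripts and tracking tags, the gap widens dramatically.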

Discovery is shifting

If agents increasingly mediate discovery, what does it mean to “be found”?

Traditionally, discovery meant:

  • Ranking well
  • Optimising metadata
  • Structuring for crawlers
  • Improving click-through rates

Now, it may increasingly mean:

  • Being efficiently ingestible
  • Reducing token waste
  • Ensuring accurate content extraction
  • Providing clean, structured representations of your information

Markdown for Agents formalises this shift. It creates a parallel, machine-first representation of a website, designed explicitly for AI consumption.

That changes the optimisation surface. Discovery is no longer only about ranking pages. It is about being legible to machines.
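As a sketch of what a machine-first representation means in practice, an agent could request the lighter variant explicitly. The URL below is illustrative, and the `Accept: text/markdown` content-negotiation header is an assumption about the mechanism; check Cloudflare's documentation for the exact contract:

```python
from urllib.request import Request

# Hypothetical request an agent might make for the markdown variant.
# The URL and the content-negotiation header are illustrative assumptions,
# not Cloudflare's documented API.
req = Request(
    "https://example.com/blog/agents-are-the-new-audience",
    headers={
        "Accept": "text/markdown",          # prefer the machine-first variant
        "User-Agent": "example-agent/0.1",  # identify as an agent, not a browser
    },
)

print(req.get_header("Accept"))  # -> text/markdown
```

The design point is that the same URL can serve both audiences: humans get the styled HTML, agents get the token-efficient representation.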

Why traffic might shift without ranking changes

One of the more interesting implications is traffic volatility without obvious SEO movement.

If agents summarise content directly within AI interfaces, users may never click through. Rankings might remain stable, but engagement patterns shift.

Alternatively:

  • Agents may include your content in responses more frequently if it is cheaper and easier to ingest
  • Or exclude it if it’s too verbose, too complex or too expensive to parse

These shifts wouldn’t necessarily show up as traditional ranking changes. But they could influence impressions, referral patterns and downstream conversions.

For CTOs and Heads of Digital, this introduces a new variable: machine readability as a performance lever.

Infrastructure decisions now shape visibility

What used to be purely technical implementation details now have discovery consequences:

  • How content is rendered
  • How much scaffolding surrounds it
  • Whether a clean content representation exists
  • How compliant agents are tracked
  • Whether ingestion is measured

Cloudflare also touches on tracking markdown versus traditional web traffic, introducing visibility into agent consumption patterns.

That’s significant, because we are entering a world where:

  • Agents are identifiable consumers
  • Their access can be measured
  • Their behaviour can be optimised for
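That measurement could start as simply as tallying log lines by content type. The log format and agent names below are illustrative assumptions, not a real Cloudflare log schema:

```python
import re
from collections import Counter

# Illustrative access-log lines: path, status, content type, user agent.
log_lines = [
    '/blog/post 200 text/markdown "research-agent/2.0"',
    '/blog/post 200 text/html "Mozilla/5.0"',
    '/docs/api 200 text/markdown "summariser-bot/1.3"',
]

consumption = Counter()
for line in log_lines:
    path, status, ctype, agent = re.match(
        r'(\S+) (\d+) (\S+) "([^"]+)"', line
    ).groups()
    # Treat markdown fetches as agent consumption, HTML as human traffic.
    consumption["agent" if ctype == "text/markdown" else "human"] += 1

print(consumption)  # -> Counter({'agent': 2, 'human': 1})
```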

Ignoring this as “just another SEO trend” misses the deeper point. This is not about keywords; it’s about compute economics and machine mediation.

The risk of dismissal

The biggest risk in dismissing this as hype is cost and control.

If you are building agents internally, scanning your own blogs, documentation or product pages, you either:

  • Maintain workflows to extract clean content
  • Or absorb unnecessary token costs and processing overhead

Poor extraction can also lead to:

  • Missing content
  • Misrepresentation
  • Model hallucination around incomplete context

As AI systems increasingly sit between users and brands, inaccurate or partial representations carry commercial risk.

Markdown is not a silver bullet. But it does reduce friction.

And when the direction of travel is clear (more agents, more mediation, more abstraction), reducing friction becomes strategic.

Louder’s recommendation

  • Audit your machine readability, not just for SEO: Review how your core pages render for agents. If an AI system had to ingest your site today, how much scaffolding would it need to parse before reaching meaningful content? Measure verbosity, token load and extractability.
  • Separate content from presentation: Ensure there is a clean, structured representation of your content independent of styling frameworks. Whether via Markdown, APIs or structured feeds, reduce friction for machine ingestion.
  • Track agent consumption patterns: As agent traffic becomes more visible, monitor how AI systems access and interpret your content. Discovery signals are expanding beyond traditional search metrics.
  • Consider token economics in infrastructure decisions: Token efficiency is not theoretical. It affects compute cost, speed and reliability, particularly if you are building internal agents or ingesting large volumes of content.
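For the first recommendation, a rough starting point is measuring how much of a page's bytes is visible content versus scaffolding. The sample page and regex-based stripping below are a deliberate simplification; a real audit would use a proper HTML parser and a model tokeniser:

```python
import re

# Illustrative framework-style page: tiny content, lots of scaffolding.
page = (
    '<!DOCTYPE html><html><head><script>/* tracking */</script></head>'
    '<body><div class="app"><div class="hero">'
    '<p>Agents are the new audience.</p></div></div></body></html>'
)

# Drop script/style bodies first, then strip the remaining tags.
no_scripts = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", page, flags=re.S)
visible = re.sub(r"<[^>]+>", "", no_scripts).strip()

ratio = len(visible) / len(page)
print(f"visible content: {len(visible)} of {len(page)} bytes ({ratio:.0%})")
```

A low ratio means an agent burns most of its token budget on scaffolding before reaching anything worth summarising.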

Get in touch

Get in touch with Louder to discuss how we can assist you or your business and sign up to our newsletter to receive the latest industry updates straight in your inbox.



About Dagan Herceg

Dagan is a software engineer at Louder with a background in film who loves a good problem to solve. In his spare time you can find him practicing his newfound love (and hate) for running.