02 April 2026

Engineering better prices, or just extracting more value?


This article is not written or presented as legal advice or opinion. Readers should not act on, or rely on, the opinions in this article and linked materials without seeking legal counsel.

In summary

  • Dynamic pricing is not new.
  • What is new is how much more personal, opaque and scalable it becomes when AI starts deciding what each customer is likely to pay.
  • That is where it stops feeling like a pricing strategy and starts looking a lot more like surveillance.

For years, marketers and digital teams have been personalising the message. Now the question is whether businesses are also starting to personalise the price.

That was the focus of my recent MeasureCamp Melbourne session: not whether pricing changes are always wrong, but where the line sits when AI, behavioural data and opaque decisioning start shaping what one customer pays versus another.

Because there is an important distinction here.

Charging different prices at different times is not some brand new idea dreamt up by an AI lab. Airlines have done it for years. Hotels do it. Uber surge pricing is probably the most familiar example. Insurance pricing is built on different risk inputs. Even entertainment ticketing has normalised the idea that demand can move price around.

Most people already understand that.

What gets more uncomfortable is when two people are shown different prices for the same product, at the same time, based on signals they cannot see, do not understand, and cannot realistically control.

From personalising messages to personalising price

A lot of the underlying ingredients here are not new either.

Personalisation has been around for decades. So has testing. So has optimisation. Most digital teams already work in worlds shaped by audience segmentation, experimentation and prediction.

What AI changes is the application.

Instead of just deciding which message, product recommendation or creative variant to show, these systems can now go a step further and decide what price to put in front of you.

That is why I used the phrase “surveillance pricing” in the talk.

At a basic level, it is the idea that prior behaviour, location, engagement patterns or other data signals can influence the price a customer sees. Two people on the same site, looking at the same thing, may not get the same offer.

And often, they would have no way of knowing why. That opacity matters.

If a business wants to argue that pricing is changing because of supply, demand, availability or time sensitivity, at least that is an explanation most people recognise. But if the price changes because an algorithm has inferred you are more likely to pay, more desperate to buy, or less likely to shop around, that feels very different.

It is no longer just responsive pricing. It starts to look like value extraction.

Dynamic pricing itself is not the whole issue

This is where the conversation usually gets muddied.

Dynamic pricing, on its own, is not necessarily controversial. The ACCC’s position is that surge or dynamic pricing is not illegal in itself, but businesses must be clear about the price consumers will pay and must not make false or misleading pricing claims. The ACCC has also flagged clear and accurate pricing in supermarkets, retail and digital markets as an ongoing enforcement priority.

That feels like a sensible baseline.

Because the real issue is not simply that a price changes. It is how it changes, why it changes, and who benefits.

If a business is using AI to improve affordability, smooth out genuine supply and demand pressure, or give a customer a better outcome, that is one conversation.

If it is using AI to figure out the maximum it can squeeze out of someone based on behavioural or personal signals, that is another.

That is where the ethical and brand questions come in, well before the legal questions catch up.

The examples that make this real

A few examples came up in the session because they show how quickly this moves from theory to something much more tangible.

One was The Washington Post, which disclosed that subscription renewal pricing was based on an AI pricing model. That was notable not just because of the model itself, but because of how little visibility readers had into what that actually meant in practice.

Another was the Princeton Review example raised in reporting by ProPublica, where pricing differences were allegedly linked to neighbourhood and demographic patterns. The concern there was not just price variation, but the possibility that proxies for race or ethnicity were shaping what people paid.

Then there was the long-running Uber discussion, including reporting around whether low battery could make someone more likely to accept a higher fare. Uber denied that, but it remains a useful example of how weird and invasive the logic can become once enough signals are available.

And then there is Instacart, which is probably the clearest recent case study.

A late-2025 investigation by Consumer Reports, Groundwork Collaborative and More Perfect Union found that US shoppers buying the exact same groceries, from the exact same stores, at roughly the same time, were often shown different prices. Researchers found 74% of grocery items in their experiment appeared at multiple price points, with some items priced up to 23% higher for some shoppers than others. Basket totals also varied meaningfully for identical carts. By March 2026, the issue had escalated enough to attract a US congressional inquiry.

Instacart said most customers saw standard pricing, described some tests as limited and randomised, and said its experiments did not use personal or demographic data. But the broader point still stands: once pricing becomes opaque and automated, it becomes much harder for consumers to know what is happening, and much harder for regulators or journalists to prove it.

That black-box problem is really the heart of it.

Why this matters more in groceries than in luxury

One point I touched on in the session, and still think matters, is that not all categories feel the same.

There is a difference between playing pricing games with concert tickets, hotel rooms or luxury goods and doing it with essentials.

You can argue that some premium categories already rely on psychology. In some cases, a higher price can even increase perceived desirability. That logic, however flawed, is familiar.

Groceries are different.

If AI-led pricing starts creeping into essential categories, the public response is likely to be much harsher, because it moves from “smart commerce” into something that feels exploitative very quickly.

That is part of why the Australian supermarket context matters, even if we are not yet at the point of proving personalised grocery pricing at scale.

Recent Australian reporting shows electronic shelf labels (ESLs) are expanding quickly. ABC reported that Woolworths plans to convert all stores over the next few years, while Inside Retail reported Woolworths has already installed around 17 million ESL tags across more than 600 Australian stores and 170 New Zealand stores, and Coles is trialling the technology in 11 stores. Neither Coles nor Woolworths ruled out future dynamic pricing when asked by Inside Retail.

That does not prove surveillance pricing is happening in Australian supermarkets.

But it does show the infrastructure is moving into place, and that the conversation is no longer hypothetical.

The bigger problem: proxies, not just protected traits

One of the reasons this gets tricky so fast is that a business does not need to explicitly target protected characteristics for the outcome to become discriminatory.

An AI system may not have a variable labelled race, gender or sexuality. But it does not really need to.

It can get to a very similar place using proxies.

Postcode. Device type. Income indicators. Loyalty behaviour. Urgency signals. Household composition. Frequency of purchase.

And increasingly, data from things like rewards programs or frequent flyer schemes: what you buy, where you travel, all feeding into the same picture.

There’s even been research showing you can predict something like sexual orientation just by looking at friendship networks, not because anyone explicitly told the system, but because the patterns give it away.
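To make the proxy problem concrete, here is a minimal sketch in Python using entirely synthetic data. The feature names and effect sizes are illustrative assumptions, not anything from a real pricing system. The point it demonstrates is simple: a classifier can recover a protected attribute it was never given, purely from "neutral" features that correlate with it.

    # Hypothetical proxy-leakage check on synthetic data: can "neutral"
    # pricing features reconstruct a protected attribute the model never saw?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 10_000

    # Protected attribute the pricing model is NOT allowed to use.
    protected = rng.integers(0, 2, n)

    # "Neutral" features that nonetheless correlate with it (the proxies).
    postcode_income = 0.8 * protected + rng.normal(0, 1, n)  # area income level
    device_premium = 0.5 * protected + rng.normal(0, 1, n)   # premium-device signal
    purchase_freq = rng.normal(0, 1, n)                      # genuinely unrelated

    X = np.column_stack([postcode_income, device_premium, purchase_freq])
    X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

    # If this classifier beats chance, the features leak the protected trait,
    # and a price model trained on them can discriminate without seeing it.
    clf = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"Protected attribute recoverable from 'neutral' features: AUC = {auc:.2f}")

On data shaped like this, the AUC lands well above 0.5, which is exactly the problem: nobody told the system anything sensitive, but the proxies did.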

That is where “we didn’t intend that” stops being a very convincing defence.

The system may still be sorting people in ways that feel unfair, discriminatory, or just impossible to justify in plain English.

And if you cannot explain the logic to your customer, that should probably be a warning sign in itself.

The pub test still matters

For all the talk of AI sophistication, this is one of those areas where the old-fashioned pub test still does a lot of work.

If this practice was explained clearly on the front page of a newspaper, would an ordinary customer think it sounded fair?

Would they feel the business was acting in their interest?

Would they accept the explanation, or would it sound like a fancy way of saying, “we worked out you’d tolerate paying more”?

It gets even harder to justify when you consider how these systems actually work.

They are not just reacting to what you explicitly tell them. They are inferring things about you from patterns, signals you may not even realise you are giving off.

The friendship-network research mentioned earlier is the obvious example: a deeply personal trait, inferred without anyone ever stating it directly.

That is the uncomfortable part.

Because now you are not just talking about pricing based on behaviour. You are talking about pricing based on things a customer cannot easily see, control or even know are being used.

That matters because even if a business profits in the short term, the long-term brand cost could be much higher.

Some audience discussion after the session touched on exactly that tension. People pointed out that not every pricing scandal translates into immediate commercial damage. Sometimes businesses survive it. Sometimes they can hide behind recommendation logic rather than overt price differences. Sometimes the user experience gets manipulated before the price even does.

That is true. But surviving backlash is not the same as building trust.

The real question for businesses

I do not think the right question is, “Can we do this?”

The better question is, “Why are we doing it, and who does it serve?”

If AI is being used to improve customer experience, reduce friction, and make pricing more relevant in a way customers would understand and accept, that is one thing.

If it is being used to quietly identify who is most vulnerable to paying more, that is something else entirely.

And the danger is that a lot of businesses may end up adopting tools they only partly understand, with decisions they cannot properly interrogate, and outcomes they would struggle to defend once exposed.

That is the uncomfortable bit.

Because when pricing becomes a black box, accountability tends to disappear with it.

And if your best defence is, “the model did it,” you probably have a bigger problem than pricing.

Louder’s recommendations

  • Treat pricing models as governed systems, not just optimisation tools
  • Ensure you can explain pricing logic in plain English (internally and externally)
  • Avoid reliance on opaque or proxy signals that could introduce unintended bias
  • Prioritise customer experience over value extraction in AI-driven decisioning
  • Build testing and audit frameworks to validate outcomes before scaling (a sketch of one such check follows below)
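On that last point, here is a minimal sketch of what a price-parity audit could look like. The quote_price function, the profiles and the tolerance threshold are all hypothetical stand-ins, not a real API. The idea is simply to quote identical baskets under profiles that differ only in a signal the model should not price on, and flag any gap.

    # Hypothetical price-parity audit. quote_price() is a toy stand-in for a
    # real pricing service; profiles and the 1% tolerance are assumptions.
    from itertools import combinations

    def quote_price(profile: dict) -> float:
        """Replace with a call to your actual pricing service. This toy
        version inflates the price for low-battery users so the audit
        below has something to flag."""
        price = 10.00
        if profile.get("battery", 1.0) < 0.2:
            price *= 1.15  # simulated unwanted behaviour
        return round(price, 2)

    # Profiles identical except for signals the model should NOT price on.
    base = {"basket": ["milk", "bread", "eggs"], "store": "STORE_001"}
    profiles = {
        "old_device": {**base, "device": "five-year-old Android"},
        "new_device": {**base, "device": "latest iPhone"},
        "low_battery": {**base, "device": "latest iPhone", "battery": 0.05},
    }

    TOLERANCE = 0.01  # flag any gap above 1% on an identical basket

    quotes = {name: quote_price(p) for name, p in profiles.items()}
    for (a, pa), (b, pb) in combinations(quotes.items(), 2):
        gap = abs(pa - pb) / min(pa, pb)
        if gap > TOLERANCE:
            print(f"FLAG: {a} vs {b} differ by {gap:.1%} on an identical basket")

None of this is a compliance framework on its own, but a check this simple, run before a model scales, is the kind of evidence that lets you explain your pricing logic in plain English when someone eventually asks.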

Get in touch

Get in touch with Louder to discuss how we can assist you or your business, and sign up to our newsletter to receive the latest industry updates straight to your inbox.



About Gavin Doolan

Gavin specialises in web analytics technology and integration. In his spare time, he enjoys restoring vintage cars, gardening, spending time with the family and walking his dog, Datsun.