17 September 2025

Elevating, not automating: What ADMA Sydney tells marketers to do next

In summary

  • AI is already embedded in day-to-day marketing. The edge now is human judgement, responsible design, and operating-model change.
  • Personalisation will get smarter, and riskier, without clear guardrails. Trust needs to be built in, not bolted on.
  • Leaders must move beyond “pilot purgatory” and individual productivity hacks to orchestrate people + agents around outcomes.

From hype to human-centred marketing

Three in four marketers are already using AI weekly, but only around one in three trust it. That tension ran through ADMA’s Elevate event in Sydney last week, where speakers from ex-agency executive Tom Goodwin to Deloitte’s Dr Kellie Nuttall challenged the industry to move past hype and pilot projects. AI, they argued, isn’t just another tool for speed or scale. It’s a chance to rethink how marketing works, provided leaders bring strategy, guardrails, and human judgement back to the centre.

The opportunity? Smarter workflows, better personalisation, and more ambitious creativity. The risk? Over-automation, distrust, and what Goodwin calls “average-vertising”: campaigns optimised for attribution over ideas, for clicks instead of connection.

With the release of ADMA’s AI, Talent & Trust report on the same day, the conversation shifted beyond theory. It revealed where marketers are experimenting, what keeps them up at night, and the new skills and guardrails required to make AI effective and responsible.

From hype to how: Beyond experimentation

The ADMA event made one thing clear: AI isn’t a shortcut to “faster and cheaper.” Used well, it’s a way to elevate marketing standards rather than simply automate them.

As Tom Goodwin put it: “This is not a time to aim for cheaper or faster. This is a time to aim for better.”

But the data shows marketers aren’t there yet. According to ADMA’s report, 75% of marketers now use AI weekly, mostly for content creation (47%), brainstorming (46%), and ad copy (36%). Yet only 29% have had any formal AI training, and just 41% feel confident in their knowledge of data privacy.

The risk, as Goodwin warned, is flooding the industry with what he calls “average-vertising”: work designed to tick attribution boxes rather than build real brand impact. “By obsessing over attribution and averages, we’re training ourselves to be more robotic… If AI does our jobs, it’s because we made our jobs really stupid and removed all the human elements from them.”

The challenge for marketing leaders: use AI to raise quality, not just output, and keep humans firmly in the loop.

Human judgement is the differentiator

AI can write, design, and optimise at scale, but it can’t set strategy, judge creative quality, or empathise with customers. ADMA’s report found that oversaturation of AI-generated content (48%) and diminished creativity (41%) are among marketers’ top concerns, underscoring that the human edge matters more than ever.

As Sarla Fernandez put it: “AI can generate but it cannot judge… the magic, imagination, judgement, trust, that belongs to us.”

Case studies reinforce the point. Klarna famously automated customer service and marketing workflows to save US$10 million, before reversing course, citing quality loss and brand damage when human oversight disappeared.

What does this mean for marketers?

  • Use AI for lower-stakes tasks like translations, ad variants, or summarising research.
  • Keep human QA on anything customer-facing.
  • Reinvest saved time into strategy, ideas, and customer understanding.

Or as Goodwin put it: “Double down on what humans do best… be curious, not paranoid.”

Operate for outcomes, not tools

The AI, Talent & Trust report highlights a critical gap: while adoption is high, most teams lack a clear link between AI use and business outcomes. Dr Kellie Nuttall calls this “pilot purgatory”: lots of experiments, little measurable impact.

“Leadership today is about shaping the future of work, designing outcomes and orchestrating humans and technology together,” she said.

So, what does this look like in practice? (A rough sketch follows the list.)

  • Map work to tasks, not job titles.
  • Decide which tasks are agent-led, human-led, or hybrid with human review.
  • Treat AI models like products: versioned, monitored, and governed.
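
To make this concrete, here is a minimal sketch of what a task map might look like, using Python purely as illustration. Every task name, mode, and KPI below is a hypothetical placeholder rather than anything prescribed by ADMA or Deloitte; the point is the shape of the mapping: tasks rather than titles, each with a clear owner and outcome.

    # A minimal, hypothetical task map: work broken into tasks, each labelled
    # agent-led, human-led, or hybrid, and tied to a business outcome.
    from dataclasses import dataclass
    from enum import Enum

    class Mode(Enum):
        AGENT_LED = "agent-led"    # AI executes end-to-end
        HUMAN_LED = "human-led"    # a person executes; AI may assist
        HYBRID = "hybrid"          # AI drafts, a person reviews and approves

    @dataclass
    class Task:
        name: str
        mode: Mode
        outcome_kpi: str           # every task ties back to a measurable outcome

    # Illustrative creative-development workflow, mapped task by task
    creative_workflow = [
        Task("Summarise customer research", Mode.AGENT_LED, "brief-to-concept time"),
        Task("Write the creative brief", Mode.HUMAN_LED, "brief quality score"),
        Task("Generate concept variants", Mode.HYBRID, "originality score"),
        Task("Final creative sign-off", Mode.HUMAN_LED, "brand lift"),
    ]

    for task in creative_workflow:
        print(f"{task.name}: {task.mode.value} -> {task.outcome_kpi}")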

This aligns with ADMA’s call for X-shaped talent (people with hybrid skills across creativity, data, and strategy) and its Capability Compass framework for building AI-ready teams.

This calls for a complete rethinking of how organisations are structured. As Tom Goodwin pointed out, we’ve long operated in a linear way: people first, then process, structure, and finally tools. But AI flips that on its head. To truly harness its potential, we need to start with the tools, then design the right structures, build streamlined workflows, and finally empower people within this new framework. It’s not about automating the present; it’s about architecting the future of work.

Trust: build it in, or lose the room

Trust was the sharpest warning sign across both the event and the report. Only 36% of Australians currently trust AI, and real-world lawsuits over biased or harmful systems prove the risks aren’t theoretical.

“Trust is the business model. Without it, there is no brand, there is no loyalty, there is no growth,” said Fernandez.

The conversation at ADMA went beyond simple consent to a more fundamental question: is our data use ‘fair and reasonable’? This is a key standard likely to be introduced in future regulatory reforms, and it means that even with consent, a company’s use of data must align with the reasonable expectations of the individual. It puts the onus on marketers to justify their data use based on genuine customer utility, not just a box-ticked consent form.

ADMA recommends embedding Fairness, Accountability, Transparency and Ethics (FATE) into every workflow, not as compliance theatre, but as risk management and brand protection.

Practical steps include:

  • Set red lines (e.g., no health inferences without explicit consent); one way to encode them is sketched after this list.
  • Make consent meaningful, not buried in legalese.
  • Build incident playbooks before exposing customers to AI systems.
  • Recognise declining consent: some consent management platforms are seeing 70-80% opt-out rates, highlighting the need to earn trust and provide real value.

And as personalisation gets smarter, the reputational stakes will only rise.
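
To show how “red lines” might move from a policy document into a pipeline, here is a minimal sketch in Python. The sensitive categories and consent flags are hypothetical placeholders, not drawn from ADMA’s FATE framework; the real list should come from your own legal and ethics review.

    # A minimal, hypothetical red-line check run before any personalisation step.
    # Categories and consent labels are placeholders for your own definitions.
    RED_LINE_CATEGORIES = {"health", "sexuality", "religion", "financial_hardship"}

    def is_allowed(inference_category: str, explicit_consents: set[str]) -> bool:
        """Block sensitive inferences unless the customer explicitly opted in."""
        if inference_category in RED_LINE_CATEGORIES:
            return inference_category in explicit_consents
        return True  # non-sensitive inferences pass through to normal governance

    print(is_allowed("health", set()))             # False: red line holds
    print(is_allowed("health", {"health"}))        # True: explicit consent given
    print(is_allowed("coffee_preference", set()))  # True: not a red-line category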

Discovery and ads: the next frontier

Goodwin expects AI to reshape how people search for products, brands, and information. Instead of sifting through pages of results, people will increasingly ask AI for direct recommendations, from hotels in Japan to the best vacuum cleaner for a small apartment.

That means ads will follow. LLM-powered search could produce the most contextually relevant ads yet, because systems will remember preferences and past interactions. But the same precision that makes ads useful could also make them intrusive if governance doesn’t keep pace.

“These could be the best ads we’ve ever known… but it’s also likely some will overstep the mark,” Goodwin warned.

ADMA’s report agrees, noting that trust, safety, and explainability will be essential as advertising shifts into conversational, AI-driven environments.

Culture and capability: avoid the lazy trap

AI can accelerate workflows, but several speakers cautioned against turning marketing teams into passive operators of automated systems.

“We’ve devalued our judgement and we’re looking in spreadsheets all day long,” Goodwin said.

ADMA’s research shows only 41% of marketers feel confident about data privacy, and few have access to the skilling ecosystems needed to build AI literacy across technical, ethical, and strategic domains.

Rejig’s Siobhan Savage added that taking away every “boring” task from juniors risks hollowing out on-the-job learning: “Just because AI can do it doesn’t mean we should. Some tasks teach the next generation how the work really gets done.”

The report recommends the 70/20/10 learning model (70% experiential, 20% peer-led, 10% formal training) and AI champions embedded across departments to accelerate adoption responsibly.

Louder’s recommendations

Start now, and start right.

  • Ship AI guardrails first – Publish a one-pager on approved tools, acceptable data use, and human-in-the-loop checkpoints. Add a simple ModelOps checklist covering inputs, bias checks, monitoring, and rollback plans.
  • Anchor AI to business outcomes – Select three measurable goals for the next 12–18 months (e.g., faster brief-to-concept, improved lead quality, reduced cost-to-serve) and align each AI use case to one of these outcomes with clear KPIs.
  • Redesign one workflow end-to-end – Task-map a core process like creative development or lead qualification, label each step Agent / Human / Human-review, and measure both productivity and creativity (e.g., originality scores, brand lift).
  • Run controlled sandboxes – Use closed data and real teams with defined success and failure criteria. Instrument experiments with holdouts, pre/post measurement, and cost-to-quality analysis before wider rollout (a rough measurement sketch follows this list).
  • Audit your technology & supply chain – Just as supply chains are mapped for logistics, map your data and technology supply chain for compliance. Ensure that every tool, platform, and data source you use, from your CRM to your ad-tech platforms, provides the robust privacy controls and verifiable consent records needed to meet regulatory and consumer demands.
  • Make trust operational – Embed privacy checks into pipelines, define “no-go” use cases, and red-team any customer-facing agents before launch to ensure transparency, fairness, and accountability.
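
As one way to instrument the sandbox recommendation above, here is a minimal measurement sketch in Python. The conversion rates and costs are fabricated placeholders included only to show the arithmetic; a real experiment needs proper sample sizing and significance testing before any rollout decision.

    # A minimal, hypothetical holdout comparison: one group uses the AI-assisted
    # workflow, a holdout keeps the existing process, and we compare outcomes.
    def relative_lift(treated_rate: float, holdout_rate: float) -> float:
        """Relative improvement of the treated group over the holdout."""
        return (treated_rate - holdout_rate) / holdout_rate

    ai_assisted_conversion = 0.048  # placeholder: AI-assisted lead conversion
    holdout_conversion = 0.040      # placeholder: existing-workflow conversion

    print(f"Relative lift: {relative_lift(ai_assisted_conversion, holdout_conversion):.1%}")

    # Cost-to-quality: expand only if the lift justifies the tooling cost
    cost_per_lead_ai = 12.50        # placeholder cost with AI tooling
    cost_per_lead_baseline = 14.00  # placeholder cost without it
    print(f"Cost delta per lead: {cost_per_lead_ai - cost_per_lead_baseline:+.2f}")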

Get in touch

Get in touch with Louder to discuss how we can assist you or your business, and sign up to our newsletter to receive the latest industry updates straight to your inbox.



About Archit Sharma

Archit is an analytics consultant at Louder with a background in marketing, stats and programming. In his spare time, he enjoys playing football, curating playlists, or trying out a new single malt.