07 January 2026

What the IAPP Summit revealed about trust, AI and the future of marketing data

Storm on the horizon

This article is not written or presented as legal advice or opinion. Readers should not act on, or rely on, the opinions in this article and linked materials without seeking legal counsel.

In summary

  • Privacy, data governance and AI are converging fast, pushing marketing teams into the centre of trust, consent and risk decisions.
  • As AI systems scale and automate decisions, weak governance around data, tracking and consent creates real commercial and regulatory exposure.
  • Regulators are increasing scrutiny on tracking technologies, consent frameworks and AI use, while Australia’s National AI Plan signals capability-building ahead of mandates.

From compliance to capability

The conversations at this year’s International Association of Privacy Professionals (IAPP) Summit made one thing clear: privacy is no longer a legal afterthought. It’s becoming a core operating capability for organisations building with data, marketing technology and AI.

Across sessions, panels and case studies, the focus shifted away from whether regulation is coming, and towards how businesses can move faster by building trust into their systems from the start.

Australia vs New Zealand: different laws, converging realities

A strong New Zealand contingent brought an interesting contrast to the Australian discussion.

While New Zealand’s privacy framework is often described as less up to date than Australia’s, it has moved decisively in one area Australia is still circling: biometrics. The introduction of New Zealand’s Biometric Processing Privacy Code was a recurring reference point, signalling a more targeted regulatory approach that still carries broader implications for identity, marketing and AI.

The takeaway wasn’t that one regime is “better” than the other, but that both markets are being pushed by the same forces: automated decisioning, richer signals, and rising expectations of accountability.

Privacy Impact Assessments are back, and they’re growing up

Privacy Impact Assessments (PIAs) were everywhere at the Summit, but not in the way many organisations still approach them.

The message was consistent: a good PIA is not a static document. It’s a living decision framework.

Modern PIAs are being used to:

  • Pressure-test AI use cases early
  • Clarify purpose and outcomes before data is collected
  • Identify downstream harms, not just immediate risks
  • Force alignment between legal, technology and business owners

As AI becomes more autonomous, PIAs are one of the few tools organisations have to slow decision-making just enough to make it safer, without stopping innovation altogether.

AI risk isn’t in the model. It’s in the data

One of the strongest themes across sessions was that AI risk rarely comes from algorithms alone. It comes from data governance.

Questions repeatedly raised included:

  • Do we actually know where our data comes from?
  • Can we justify every data flow legally and ethically?
  • What happens if we need to remove data after a model is live?

Research shared by OneTrust underlined the gap: 70% of executives say their ability to govern AI is outpaced by the speed of AI initiatives.

That gap is where re-identification risk, bias, and regulatory exposure emerge.

Concepts like model disgorgement (the ability to retroactively remove the influence of problematic data from a trained model) moved from theory to practical necessity, particularly as agentic systems become more common.

Agentic AI needs brakes, not just acceleration

Agentic AI, systems that act autonomously toward goals, featured heavily, and not uncritically.

Speakers emphasised that autonomy without control isn’t innovation; it’s risk. Practical guardrails discussed included:

  • Operating AI in controlled environments
  • Keeping humans “in the middle” for high-impact decisions
  • Using AI systems to monitor other AI systems
  • Designing rollback mechanisms before deployment

The question posed wasn’t “can AI do this?” but “can we stop it if it shouldn’t?”

The 4D framework for assessing AI

Helios Salinger’s sessions introduced a pragmatic 4D framework that resonated strongly with attendees:

Design

Clear business case, success and failure metrics, staged rollouts and testing.

Data

Mapping of complex data flows, legal pathways for all inputs, and vigilance around downstream harm.

Development

Supplier management, community impact, and accountability beyond internal teams.

Deployment

Guardrails, transparency and ongoing risk management, not just launch-day checks.

The framework reframes AI from a technical experiment into an organisational responsibility.

Tracking technologies under scrutiny

For marketing teams, one of the most pointed discussions centred on tracking technologies and the growing regulatory discomfort around them.

Panels highlighted growing concern from the Office of the Australian Information Commissioner (OAIC) around pixels, tags and cookies, particularly where organisations don’t fully understand what’s firing, where data is flowing, or whether it’s still needed at all.

Common issues raised:

  • Legacy tags left active after campaigns end
  • Data transmitting offshore without awareness
  • Outsourced implementations creating blind spots
  • Marketing teams operating in silos from privacy and tech

Critically, users cannot realistically avoid tracking pixels, raising serious questions about the validity of consent, especially when sensitive data is involved. This has accelerated interest in privacy-safe alternatives such as aggregated and consent-led propensity modelling, where outcomes can be predicted without exposing raw identifiers.
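
To make that concrete, here is a minimal sketch of the idea (hypothetical file and column names, using pandas and scikit-learn). It collapses consented records into cohorts before any model sees them, so the propensity signal is learned from aggregates rather than individual profiles:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical consent-flagged interaction data; the file and column names
    # (segment, channel, sessions, consented, converted) are assumptions.
    events = pd.read_csv("consented_events.csv")

    # Keep only records where consent exists, then collapse to cohort level so
    # no individual identifier is carried into the model.
    cohorts = (
        events[events["consented"] == 1]
        .groupby(["segment", "channel"], as_index=False)
        .agg(avg_sessions=("sessions", "mean"),
             cohort_size=("converted", "size"),
             conversion_rate=("converted", "mean"))
    )

    # Label cohorts with above-median conversion as "high propensity".
    cohorts["high_propensity"] = (
        cohorts["conversion_rate"] > cohorts["conversion_rate"].median()
    ).astype(int)

    # Features are aggregates only: no raw identifiers, no event-level rows.
    X = pd.get_dummies(cohorts[["segment", "channel"]]).assign(
        avg_sessions=cohorts["avg_sessions"],
        cohort_size=cohorts["cohort_size"],
    )
    y = cohorts["high_propensity"]

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.score(X, y))

The specific model matters less than the shape of the pipeline: consent is filtered first, identifiers never leave the aggregation step, and the prediction target is a cohort-level outcome rather than a person.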

Meanwhile, the OAIC flagged a special focus on online tracking and marketing, with particular attention on:

  • Tracking pixels and whether meaningful consent exists when sensitive data is collected
  • Limiting the use of de-identified data strictly to defined purposes such as direct marketing, and only with consent
  • Clear articulation of why data is used and what outcome it serves

Online services, apps and wearables were also called out, with examples of health and fitness apps categorising themselves in ways that may bypass tighter regulatory controls.

Looking ahead, several changes are expected to raise the bar further:

  • Tranche 2 reforms expanding obligations across data handling
  • Ongoing consent requirements for geo-tracking over time
  • Precise geolocation data likely to become a new category of sensitive information
  • Modelling techniques increasingly requiring fairness and bias assessment

Market scans by the OAIC are expected to increase. The message from regulators was clear: if you don’t know what’s running, and why, that uncertainty itself is a material risk.
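
One practical starting point is simply enumerating what a page contacts when it loads. The sketch below (a hypothetical example using Playwright and a placeholder URL, not a formal audit tool) lists every third-party host requested during a page load, which is often enough to surface forgotten tags and unexpected offshore endpoints:

    from urllib.parse import urlparse
    from playwright.sync_api import sync_playwright

    SITE = "https://www.example.com"  # placeholder: the page you want to audit
    first_party = urlparse(SITE).netloc

    third_party_hosts = set()

    def record_request(request):
        # Collect every host contacted while the page loads, excluding the site itself.
        host = urlparse(request.url).netloc
        if host and host != first_party:
            third_party_hosts.add(host)

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("request", record_request)
        page.goto(SITE, wait_until="networkidle")
        browser.close()

    # Every host on this list should have a known owner, purpose and consent basis.
    for host in sorted(third_party_hosts):
        print(host)

This won’t catch tags that fire only on interaction or on specific journeys, but it turns “we think we know what’s running” into a concrete list that marketing, privacy and technology teams can review together.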

The National AI Plan: signals before mandates

Australia’s National AI Plan, released in December 2025, provided important context for where regulation is heading.

Key themes included:

  • Investment in local data centres
  • Training and financial support to build AI capability
  • No immediate mandates around labelling AI-generated content
  • A strong emphasis on trust, safety and responsible deployment

The signal is clear: regulation will follow, but capability-building comes first.

As one panellist noted, for decades we’ve trained ourselves to accept whatever appears on a screen as truth. Now, organisations need to build the muscle for critical thinking, verification and governance.

Data retention, deletion and the end of “infinite storage”

A case study involving Navitas and BigID brought the conversation back to fundamentals: what data should organisations actually keep?

Key lessons included:

  • Treating long-held data as both asset and liability
  • Using GDPR as a baseline governance standard
  • Avoiding emotionally charged language like “delete”
  • Focusing on Redundant, Obsolete and Trivial (ROT) data
  • Mapping data sources by business value versus volume

For marketing and analytics teams, this is especially relevant. Cloud storage may feel cheap, but unmanaged data creates risk, bias and governance debt.

Louder’s recommendations

  • Build governance into how work gets done: Treat privacy, data and AI governance as core operating capability, embedded in design, delivery and decision-making, not bolted on at the end.
  • Audit what’s actually running, not what was approved: Legacy tags, pixels and third-party tools create hidden risk. If you don’t know what’s firing, where data is flowing or why it exists, it’s already a problem.
  • Be explicit about purpose before collecting or modelling data: Consent alone isn’t enough. Every data use, especially in AI, needs a clear reason, defined outcome and defensible value.
  • Design AI with guardrails and exit ramps: Autonomous systems require human oversight, controlled environments and the ability to unwind or correct decisions if risk emerges.
  • Reduce exposure by managing data value, not volume: Focus on governing what you keep. Prioritise Redundant, Obsolete and Trivial (ROT) data, and stop holding information that no longer serves a clear business purpose.

Get in touch

Get in touch with Louder to discuss how we can assist you or your business and sign up to our newsletter to receive the latest industry updates straight to your inbox.



About Candice Driver

Candice is Agency and Client Lead at Louder. In her spare time you will find her hanging out with her dog Lilly, socialising with friends, and hitting trendy bars and restaurants.