05 August 2025

YouTube is now under-16 restricted. Advertisers, it’s time to rethink your strategy


In summary

  • YouTube is now officially included in Australia’s under-16 social media laws, joining platforms like TikTok, Instagram and Snapchat under new age restrictions starting 10 December 2025.
  • Advertisers can no longer treat YouTube as a neutral content platform; it’s now regulated like social media, with shared responsibility to prevent underage exposure to ads.
  • You need to actively minimise risk by prioritising signed-in users, avoiding youth-skewed content, and documenting your targeting decisions; age filters alone won’t be enough to meet compliance expectations.
  • The UK has implemented similar legislation, prompting a sharp rise in age checks and VPN use.

YouTube enters the compliance conversation

From 10 December 2025, YouTube will officially be included in Australia’s under-16 social media laws, joining TikTok, Instagram, Facebook, Snapchat and X, among others.

This places YouTube, not only the world’s second-most-visited website but also the platform with the largest share of TV viewing time in markets like the US, squarely at the centre of Australia’s world-first crackdown on how young people access and experience the internet.

With a scale spanning both signed-in and signed-out users, and algorithms that shape what billions of people see daily, YouTube’s inclusion signals just how seriously the Albanese Government is taking platform responsibility, and how directly it will affect advertisers.

The decision was confirmed by Prime Minister Anthony Albanese and Minister for Communications Anika Wells, following a recommendation from eSafety Commissioner Julie Inman Grant during her National Press Club address in June 2025.

“Social media is doing social harm to our children, and I want Australian parents to know that we have their backs,” said Prime Minister Albanese.

“There’s a place for social media, but there’s not a place for predatory algorithms targeting children,” added Minister Anika Wells.

Importantly, once the Minister formally requested and received the eSafety Commissioner’s advice on 19 June 2025, that advice became part of the legislative process, and the government was required to act on it regardless of lobbying pressure or media commentary. The Online Safety Amendment (Social Media Minimum Age) Bill 2024, which introduced these changes, passed Parliament on 29 November 2024 and received Royal Assent on 10 December 2024, officially becoming law.

What’s interesting for advertisers is that YouTube is no longer “just” a video platform. It’s now legally treated as a social media service, and that comes with shared responsibility.

What’s changing, and why advertisers can’t ignore it

Under the Online Safety (Age-Restricted Social Media Platforms) Rules 2025, platforms, including YouTube, must take “reasonable steps” to prevent underage users from accessing or creating accounts, even though YouTube already directs under-13 users to the separate YouTube Kids app. The eSafety Commissioner will provide detailed regulatory guidance on what constitutes “reasonable steps” later in 2025. These steps will likely involve multiple verification methods, and importantly, parental consent won’t override the age restriction.

Examples of age verification technologies currently being explored or used globally and likely to inform Australia’s approach include:

  • Facial Age Estimation Technology: AI-powered systems that estimate a user’s age from a selfie, often without storing biometric data. Trials are underway in Australia.
  • Document-Based Verification: Matching a selfie with a government-issued ID (though ID checks cannot be the only method accepted).
  • Account-Based Assurance: Cross-authentication with other verified accounts, or inferring age from account behaviour or sign-up dates (a toy sketch of this approach follows this list).
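
To make the last of these concrete, here is a toy sketch of how account-based assurance could derive a conservative age bound from nothing more than a sign-up date. It’s purely illustrative; real platforms combine far richer behavioural signals, and the 13+ sign-up policy below is our assumption:

```python
from datetime import date

# Toy sketch of "account-based assurance": inferring a conservative lower
# bound on age purely from how long an account has existed. Illustrative
# only; real platforms combine far richer behavioural signals, and the
# 13+ sign-up policy below is an assumption.

MIN_SIGNUP_AGE = 13  # assume the platform enforced 13+ at account creation

def minimum_plausible_age(account_created: date) -> int:
    """An account opened N years ago under a 13+ policy implies age >= 13 + N."""
    years_held = max((date.today() - account_created).days // 365, 0)
    return MIN_SIGNUP_AGE + years_held

def clears_under_16_bar(account_created: date) -> bool:
    return minimum_plausible_age(account_created) >= 16

# An account opened in 2015 clears the bar on sign-up date alone; one opened
# last year does not, and would need another verification method.
print(clears_under_16_bar(date(2015, 3, 1)))  # True
print(clears_under_16_bar(date(2024, 8, 1)))  # False
```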

In NSW, for example, students already have @edu.nsw.gov.au Gmail accounts hosted on Google’s infrastructure. These are identity-linked accounts that follow users across the Google ecosystem, including YouTube.

Google has also evolved its identity approach over the past decade. As of February 2025, it formally updated its policy to allow device fingerprinting for advertising purposes. While Google states this is enabled by “advances in privacy-enhancing technologies (PETs)”, this shift has drawn significant criticism from privacy advocates and regulators, including the UK’s Information Commissioner’s Office, who have called it “irresponsible.” Critics argue that device fingerprinting is much harder for users to control or opt out of compared to cookies, as it collects a broader range of real-time device and network data to create a persistent digital ID. This raises concerns about deeper profiling and reduced user privacy, despite Google’s claims of balancing privacy with advertiser needs.
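
To see why fingerprinting is harder to opt out of than cookies, consider a generic sketch of the technique; this is not Google’s implementation, and the signal set is an assumption. A stable ID falls out of traits a device passively reveals on every request, so there is nothing stored on the device to clear:

```python
import hashlib

# Generic illustration of device fingerprinting (not Google's implementation;
# the signal set here is an assumption). A persistent ID is derived from
# signals every request already exposes, so clearing cookies or browsing
# privately does not change the resulting ID.

def fingerprint(signals: dict[str, str]) -> str:
    """Derive a stable ID from passively observed device and network traits."""
    keys = ["ip", "user_agent", "accept_language", "screen", "timezone"]
    material = "|".join(signals.get(k, "") for k in keys)
    return hashlib.sha256(material.encode()).hexdigest()[:16]

print(fingerprint({
    "ip": "203.0.113.7",  # example values only
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X)",
    "accept_language": "en-AU,en;q=0.9",
    "screen": "390x844@3x",
    "timezone": "Australia/Sydney",
}))
```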

While this increases precision, it also raises privacy concerns. For instance, users with long-standing Gmail accounts are now being asked to verify their age, strengthening Google’s compliance posture but also deepening its identity graph.

Platforms that fail to comply face penalties of up to AU$49.5 million. But the legislation goes beyond account creation. It’s about how content is delivered, who it’s targeted to, and whether advertisers are actively working to minimise underage exposure.

YouTube Kids is not the focus

YouTube Kids is a standalone app, specifically built for children, and it’s not the focus of this legislation. The new rules apply to YouTube’s main platform, where users can upload, comment, and watch content without logging in.

What about embeds and indirect access?

There’s a growing grey area in how underage users access YouTube content, including via embedded players on third-party websites like safeshare.tv.

If a child watches YouTube content on an aggregator site that’s not blocked by school infrastructure or firewalls, they may still see ads, even if YouTube.com itself is restricted.

This is similar to how embedded Instagram or Twitter content is often blurred or hidden until a user signs in, a mechanism some publishers like the BBC already use.

In practical terms, embeds may need to be treated with the same compliance lens as native views.
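
A practical first step is knowing where embeds exist at all. The sketch below scans pages for embedded YouTube players so they can be reviewed under that same compliance lens; the page URL is hypothetical, and a real audit would crawl a full site:

```python
import re
import urllib.request

# Minimal embed audit: find YouTube players embedded in pages you control
# (or partner pages), so they can be reviewed like native placements.
EMBED_RE = re.compile(r'youtube(?:-nocookie)?\.com/embed/([\w-]{11})')

def find_embedded_videos(url: str) -> set[str]:
    """Return the YouTube video IDs embedded in the page at `url`."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return set(EMBED_RE.findall(html))

for page in ["https://example.com/lessons/volcanoes"]:  # hypothetical page
    video_ids = find_embedded_videos(page)
    if video_ids:
        print(f"{page}: review embedded videos {sorted(video_ids)}")
```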

Why YouTube’s inclusion matters

YouTube was initially excluded from the new rules, prompting backlash from Meta and others who labelled the move “like banning soft drink but exempting Coke.” That pressure worked, and YouTube is now fully included.

The takeaway for advertisers? No platform is exempt. Regulatory expectations now apply to every environment where engagement happens, and that includes YouTube.

“We want kids to know who they are before platforms assume who they are,” Minister Wells said.

No sign-in? No guarantees.

The eSafety Commissioner has made one thing clear: platforms aren’t expected to verify the age of unsigned-in users, but they are expected to take reasonable steps to reduce underage exposure.

Here’s what that means for advertisers:

  • Age-targeting settings alone won’t guarantee compliance
  • Ads on videos accessible to unsigned-in users still carry risk
  • It’s not just who you target; it’s how users access your content

Even if you do everything right in setup, shared device usage can still trip you up. That’s where platform-level controls and strong intent documentation come into play.

Why keyword filters aren’t enough

Some vendors, including Channel Factory, promise tools to avoid ads appearing next to unsafe or “nefarious” content. But most focus on content suitability, not audience verification.

Even if your ad avoids controversial content, it can still be served to an underage viewer, especially one watching anonymously.

And here’s a nuance worth noting: Apple’s ecosystem is single-user by design. That means if a parent is logged in on an iPad or iPhone, and their child watches content on that device, they inherit the adult’s profile.

That child could be served age-inappropriate ads, without any targeting error on your part. But it still counts as exposure risk.

Contextual controls are helpful, but they don’t guarantee compliance.

You already have the tools. Here’s how to use them properly

The good news: you don’t need to reinvent your tech stack. Most controls already exist within YouTube and DV360; it’s about using them deliberately. A code sketch follows the checklist below.

Start here:

  • Target signed-in users - Logged-in environments carry stronger age signals.
  • Exclude high-risk categories - Think animation, gaming and family content, even if it looks brand-safe.
  • Use parental content filters - Enable YouTube’s family safety tools to avoid kid-heavy placements.
  • Tighten your DV360 settings - Review inventory exclusions, audience targeting, and brand safety parameters.
  • Maintain always-on blocklists - Refresh these regularly to avoid low-signal or youth-skewing placements.
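
Several of these controls can also be applied programmatically. Below is a sketch against the Display & Video 360 API (v3) that attaches a digital content label exclusion to a line item; the IDs are placeholders, and the resource, field and enum names are given from memory, so verify them against the current API reference:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build  # pip install google-api-python-client

# Sketch only: IDs and the key path are placeholders, and field/enum names
# should be checked against the current Display & Video 360 API reference.
creds = service_account.Credentials.from_service_account_file(
    "dv360-key.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/display-video"],
)
dv360 = build("displayvideo", "v3", credentials=creds)

# Exclude made-for-families content at the line-item level, one of the
# category exclusions the checklist above refers to.
dv360.advertisers().lineItems().targetingTypes().assignedTargetingOptions().create(
    advertiserId="1234567",  # placeholder
    lineItemId="7654321",    # placeholder
    targetingType="TARGETING_TYPE_DIGITAL_CONTENT_LABEL_EXCLUSION",
    body={
        "digitalContentLabelExclusionDetails": {
            "excludedContentRatingTier": "CONTENT_RATING_TIER_FAMILIES"
        }
    },
).execute()
```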

Louder’s recommendations

This isn’t just a media optimisation issue; it’s a governance one. Here’s what we recommend:

  • Audit where your ads are running - Check if your ads are appearing on youth-skewed content or being served to unsigned-in users. That’s exposure risk.
  • Tighten your audience and content controls - Focus on signed-in users, layer in parental or adult demographics, and exclude categories like animation or gaming. Keyword blocks help, but they’re not enough.
  • Build compliance into your media planning - Keep records of exclusions, rationale, and targeting decisions (see the sketch after this list). These rules shouldn’t be an afterthought; they should be baked into your campaign from the start.
  • Leverage in-platform YouTube content controls - Use content categories and category exclusions to mitigate exposure and risk.
  • Maintain always-on YouTube blocklists - Make regular updates a standard process.
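
On the record-keeping point, even a lightweight append-only decision log demonstrates intent. Here is a minimal sketch; the schema is our own illustration, not a regulator-mandated format:

```python
import json
from datetime import datetime, timezone

# Append-only JSON Lines log of targeting decisions, so exclusions and
# their rationale can be produced later. Schema is illustrative only.

def log_targeting_decision(path: str, campaign: str, control: str,
                           setting: str, rationale: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "campaign": campaign,
        "control": control,
        "setting": setting,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_targeting_decision(
    "targeting-decisions.jsonl",  # hypothetical file
    campaign="summer-launch-au",  # hypothetical campaign
    control="audience",
    setting="signed-in users only, adult demographics layered in",
    rationale="Minimise under-16 exposure ahead of the 10 Dec 2025 rules",
)
```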

Get in touch

Get in touch with Louder to discuss how we can assist you or your business, and sign up to our newsletter to receive the latest industry updates straight to your inbox.



About Andrew Hughes

Andrew is a Consultant and Partner at Louder, focussing on how clients can maximise their return from digital media investments.