Australia’s internet safety regulator has accused major tech companies, including YouTube and Apple, of failing to take adequate action against child sexual abuse material on their platforms, highlighting serious gaps in online safety protocols.
In a report released Wednesday, the eSafety Commissioner said YouTube and Apple did not monitor how many user reports they received regarding child sexual abuse content, nor could they provide data on how quickly they responded to such reports.
“YouTube, in particular, has been unresponsive to our enquiries,” the report noted.
The findings come shortly after the Australian government reversed its decision to exempt YouTube from a proposed social media ban for teenagers, acting on recommendations from the eSafety office.
“When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,” said eSafety Commissioner Julie Inman Grant. “No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services.”
While Google has stated that child abuse material is not tolerated on its platforms and that it uses hash-matching and AI-based tools to detect and remove such content, the report argues that significant safety shortcomings remain.
The eSafety office required tech giants including Apple, Discord, Google, Meta, Microsoft, Skype, Snap, and WhatsApp to provide details on how they combat child exploitation. According to the regulator, their responses revealed numerous safety lapses, ranging from inadequate tools for detecting livestreamed abuse to insufficient use of hash-matching technology across all areas of their platforms.
The report also highlighted inconsistent or weak reporting mechanisms and noted that several providers had failed to act on prior warnings to improve safety standards.
"In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff," Inman Grant added.
Meta, the parent company of Facebook, Instagram, and Threads, reiterated that it prohibits graphic content, while Google maintained that it employs industry-standard tools to remove abusive material.