Summary
Australia’s eSafety Commissioner Julie Inman Grant has issued a stark warning that the release of OpenAI’s new Sora 2 video generator could fuel a surge in sexualised deepfake abuse, particularly targeting women and children, amid already escalating online harms.

Speaking at a conference on the Gold Coast, Ms Inman Grant said Sora 2's ability to produce highly realistic 20-second AI videos within seconds of a prompt represents a major risk for image-based abuse. "We've seen a doubling of deepfake image-based abuse reports over the past 18 months," she said, adding that incidents now occur weekly in Australian schools.
Describing the trend as “putting online harms on steroids,” she warned that the technology could supercharge the spread of non-consensual sexualised content, compounding trauma for victims. Deepfake material currently accounts for a small share of total abuse cases, but regulators believe it represents only “the tip of the iceberg.”
The new AI platform falls outside the scope of the federal government's upcoming social media ban for under-16s, set to take effect on December 10, highlighting regulatory gaps around emerging technologies. The eSafety Commission is separately pursuing legal action against a UK-based company behind popular "undressing apps" that digitally strip clothing from images.
Meanwhile, a NSW parliamentary inquiry into online pornography warned that AI-generated sexual abuse imagery is traumatising children, as photos taken from school and social media accounts are being manipulated into explicit content. The report found children as young as 10 are routinely exposed to pornography, with some showing increased sexual aggression, including assaults on siblings or classmates.
The committee urged stronger laws, enforcement, and education programs to combat the rising threat of AI deepfakes and the normalisation of violent, misogynistic, and exploitative content among youth.