The Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 strengthens Australia’s response to non-consensual AI-generated deepfake content by empowering the eSafety Commissioner to remove harmful material and creating a new civil cause of action for victims. It amends the Online Safety Act 2021 and the Privacy Act 1988 to provide enforcement powers, civil penalties and clear remedies.
The Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 amends two principal Acts to address the emerging threat of deepfake technology.
Schedule 1 (Online Safety Act 2021) introduces new definitions (deepfake material, non-consensual sharing and subject of deepfake material), establishes a complaints regime under the eSafety Commissioner, empowers the Commissioner to issue removal notices and formal warnings, and imposes civil penalties (up to 500 penalty units) for posting, or failing to remove, non-consensual deepfakes. It also integrates procedural safeguards, exemptions and annual reporting requirements into the existing online safety framework.
Schedule 2 (Privacy Act 1988) creates a new civil cause of action for wrongful use or disclosure of deepfake material, allowing plaintiffs to seek injunctions, damages, correction orders or apologies without proof of actual damage. It outlines defences (including lawful authorisation and journalistic privilege), sets time limits for bringing actions, and provides exemptions for certain actors (journalists, law enforcement agencies and minors). The bill commences on Royal Assent, with operative provisions taking effect the following day.
Deepfake technology enables malicious actors to fabricate realistic audio–visual content that can irreparably damage a person’s reputation, dignity and privacy. Existing laws on defamation and image-based abuse do not fully capture the novel harms of AI-generated impersonation. By empowering the eSafety Commissioner to issue removal notices and imposing civil penalties, the bill creates a swift enforcement mechanism that deters non-consensual sharing of deepfakes and protects vulnerable groups such as women and children.
The creation of a statutory cause of action under the Privacy Act 1988 gives individuals clear access to civil remedies, including injunctions, damages and correction orders, without the high evidentiary burden of proving actual damage. This aligns domestic law with Australia's human rights obligations under ICCPR Article 17, which guarantees protection from arbitrary interference with privacy. Strengthening these safeguards maximises overall well-being by reducing the incidence of digital harm and upholding equality before the law.
Although non-consensual deepfakes can cause real harm, the bill's broad definitions of "deepfake material" and "non-consensual sharing" risk capturing legitimate uses such as parody, satire or academic research, creating legal uncertainty and a chilling effect on free expression protected by ICCPR Article 19. Civil penalties of up to 500 penalty units and new removal powers concentrated in a single regulator's hands may lead to disproportionate enforcement and overreach.
Existing legal frameworks—defamation, image-based abuse provisions and privacy laws—already provide remedies for actual harm, and the evidence for systemic failure in those regimes is limited [Judgment]. The new civil cause of action may encourage speculative litigation, impose heavy compliance costs on platforms and users, and result in forum shopping and inconsistent judicial outcomes. A more narrowly tailored approach or enhancement of existing laws would mitigate these risks without significantly expanding regulatory reach.
2025-11-24
Senate
Before Senate
POCOCK, Sen David
Unspecified
Discrimination / Human Rights, Science / Technology, Media / Advertising