Grok, deepfakes and the law: what you need to know

Is there anything we can do legally, at the moment, to protect ourselves?

As the dust settles on news that UK regulator Ofcom is investigating Elon Musk's X over claims its Grok AI chatbot is generating sexual deepfakes, including images of children, Mediaweek sat down with Nicholas Stewart, legal expert and partner at Dowson Turco Lawyers, to learn what the case could mean for Australians and where the legal lines are being drawn.

Mediaweek: Let’s start with first things first – is there anything we can do legally, at the moment, to protect ourselves?

Nicholas Stewart: The laws that are in place to protect us are, in most cases, not proactive. In other words, harm must occur before the law can step in. For example, an AI image or any communication created in a way that harasses or offends, and is sent via a carriage service such as a text message or a social media DM, may constitute a serious criminal offence under the Commonwealth’s Criminal Code.

An act of simple intimidation against someone you are, or were, in a relationship with will also constitute a criminal offence, but again, the harm comes first. The reality is that bad actors will always break the law, regardless of whether it is designed to disincentivise specific conduct.

The NSW Government has recently strengthened protections against image-based abuse, taking a zero-tolerance approach to offenders who share sexually explicit material or deepfake images without consent. Similar laws exist across other states and territories, but the challenge with Grok’s nudifying capability, and that of other platforms and apps, is scale.

This content is created en masse by automated systems, directed by bad-faith actors who may not be in Australia or identifiable. Compounding the issue, current laws do not criminalise the creation of this material.

At a federal level, the Commonwealth enacted the Online Safety Act 2021, which empowers the eSafety Commissioner to order the removal of severely abusive or harmful online content.

The eSafety Commissioner also administers an image-based abuse scheme, allowing individuals to seek the removal of explicit or intimate images shared without consent. Penalties under the Act include fines, warnings or removal notices, but breaches are not criminal offences.

Compared with other jurisdictions, these penalties remain conservative, particularly as technological change accelerates.

MW: Are Australia’s laws lagging behind, or is this far more complicated than it seems?

NS: There have been calls for greater regulation of AI since ChatGPT launched.

Many in the legal and regulatory community saw human rights abuses as a possible consequence of light or non-existent regulation.

Australia is leading the way with its ban on social media for under-16s, but an independent review of the Online Safety Act 2021 recommended revising e-safety legislation to impose an enforceable duty of care on platforms and services to prevent foreseeable harms. I agree – it should be the responsibility of tech platforms to prevent abusive content from being created.

In the United Kingdom, the Online Safety Act, passed in October 2023, imposes a statutory duty of care on platforms to keep users safe.

Specific online abuse is monitored under the UK Communications Act, which can result in significant penalties for contraventions.

In Ireland, Coco's Law was adopted in 2021 and introduced potential jail terms of up to seven years for those found guilty of distributing intimate images without consent. As of 2024, the Department of Justice had reported 99 prosecutions since the Act came into effect.

MW: What can you do if you realise you’ve been exploited?

NS: Anyone who discovers they have been exploited should report the matter to the police. The eSafety Commissioner can also issue removal notices to platforms and assist with take-downs.

MW: If someone is caught exploiting others, will they face any repercussions?

NS: Those who use carriage services to menace, harass or offend can face criminal prosecution. This conduct could take the form of sharing intimate deepfake images.

Additionally, in NSW, anyone who shares an intimate image without the consent of the person depicted may be prosecuted. Similarly, a person who intimidates someone they are, or have been, in a domestic relationship with (or their family members) can also face criminal prosecution.

MW: Taking off your legal hat, do you worry about AI when it comes to situations like this?

NS: Yes, I do worry about AI.

It has never been easier for bad-faith actors to abuse or extort us online. It has also never been easier for people to use technology to misinform or disinform, at scale. The ability to so easily create propaganda leads to misinformation, disinformation, and societal division.

I also worry about the impact on our environment – AI has an enormous hunger for energy and data. A single AI query uses more energy than an equivalent Google search, and megalitres of fresh water are used every day to cool data centres.

