
Expert Comment: Chatbot-driven sexual abuse? The Grok case is just the tip of the iceberg
Dr Federica Fedorczyk, Early Career Research Fellow at the Institute for Ethics in AI, works on AI regulation and ethics with a focus on the intersection between AI and the criminal justice system. In this article, she explores how chatbot-enabled ‘deep nudes’ expose a wider ecosystem of online misogyny and why safeguards must be built in, not bolted on.
The mechanism is extremely straightforward. Users can upload a picture and ask Grok to remove the clothes of the person depicted, leaving them in underwear, bikinis, transparent attire, or sexualised poses. But this is not the full extent of the problem. Grok also has the capacity to post these images publicly on X, the associated social media platform. As a result, non-consensual sexualised images can be generated and immediately disseminated to a potentially vast audience.
This functionality has existed since the launch of Grok Imagine, and victims had already raised concerns. However, the issue only attracted widespread attention recently, when images of minors were also sexualised, an image of the Princess of Wales in a bikini was generated, and sexualised images of women began circulating across X at scale. After these episodes, the UK communications regulator Ofcom made “urgent contact” with Musk to raise concerns.
In response, on 3 January Musk stated that “anyone using Grok to make illegal content will suffer the same consequences as if they had uploaded illegal content”. The following day, X issued a warning to users in similar terms, asserting that it would remove unlawful material, permanently suspend offending accounts, and cooperate with local authorities and law enforcement where necessary.
But is this really the case? And, more fundamentally, if this content is illegal, why is it so easy to make Grok produce it in the first place?
The answer is relatively simple and has never been a mystery. From the very beginning, Grok was structurally designed to operate with fewer safeguards and guardrails than other AI assistants. What has dominated the news in recent weeks is therefore not an anomaly or a sudden glitch. These capabilities have always been present in the system, and several prior warnings were ignored. For instance, when it was launched, Grok’s video generator had already produced non-consensual sexual deepfakes of celebrities; and, in the following months, in response to user requests, it also generated sexualised images of ordinary women without their consent.
The recent wave of public outrage has merely brought long-standing problems into sharper focus.
Why is it sexual abuse?
The creation and circulation of non-consensual, AI-generated sexual images is neither new nor isolated.
Since at least 2016, practices such as so-called “deep nudes” and non-consensual sexual deepfakes have become increasingly widespread. These are defined by the absence of consent of the person depicted and the deliberate sexualisation of their body against their will. The victims experience a profound violation of dignity and privacy, often accompanied by significant emotional distress, humiliation, and reputational damage.
For this reason, non-consensual sexual deepfakes are increasingly recognised not merely as offensive content, but as a form of sexual abuse carried out through digital means.
The harm is intensified by the persistence of sexually explicit material online, which is rarely fully removable, and by the global, interconnected architecture of social media platforms, which enables abusive content to spread rapidly across audiences.
The ease with which such images can be created also generates an ever-present threat that discourages women’s online participation, out of fear of sexualisation or doxing. The harm therefore extends beyond individual victims to women as a group, mirroring well-documented patterns of sexual violence in which the threat of abuse alone is often sufficient to silence and exclude.
A house of cards?
Precisely because of the severity of these harms, the creation and sharing of non-consensual AI-generated sexual images and videos has been progressively criminalised in several countries. This is reflected, for example, in Directive (EU) 2024/1385, which requires EU Member States to address such conduct within their criminal law frameworks, in the US Take It Down Act, as well as in the UK’s Online Safety Act, which criminalised the sharing of intimate images or videos without consent in 2023.
Yet, the question remains: if legal frameworks already recognise the gravity of this conduct and prohibit it, how is it that systems such as Grok, integrated into a major social media platform like X, have allowed users to generate, circulate and seemingly evade responsibility for unlawful sexualised content?
Why was it possible for users to create non-consensual sexual images of children as young as eleven or of women in bikinis, tied up, gagged or covered in blood?
The Online Safety Act requires platforms such as X not only to carry out risk assessments to identify harmful and unlawful uses of their services, but also to take proportionate steps to prevent users from encountering illegal intimate imagery and to remove such material swiftly once notified.
In practice, however, these obligations appear to have gone unmet. In response, on Monday 12 January, Ofcom opened an investigation under the Online Safety Act, stating that it was treating the matter as a “highest priority”.
The investigation will assess whether X adequately identified and mitigated the risks of illegal content on its platform, including non-consensual intimate imagery and child sexual abuse material. It will also examine whether the platform acted promptly to remove unlawful material, complied with privacy obligations, properly assessed risks to children, and implemented effective age-verification measures for pornographic content. Where breaches are established, Ofcom has the power to impose substantial financial penalties and, in cases of continued non-compliance, may seek court orders to restrict access to the platform within the UK.
In the meantime, X announced that the creation of images through Grok would no longer be open to all users but limited to paying subscribers. This appears to imply that creating non-consensual sexual images is treated as a kind of premium or deluxe service, reserved only for those who can afford access.
What next?
In the UK, after persistent advocacy efforts, the creation of sexually explicit deepfakes without consent was criminalised under the Data (Use and Access) Act 2025. Until recently, however, that law had not been brought into force. After the Grok scandal, it was announced earlier this week that the provision would finally take effect, meaning that individuals who create or seek to create such content, including on platforms such as X, are now committing a criminal offence.
While this step forward is certainly welcome, it remains insufficient, and the problem needs to be approached through a more systemic lens.
The Grok case is only the tip of the iceberg of a wider, and no longer particularly hidden, ecosystem of online misogyny and abuse. As major tech companies increasingly move towards the creation and dissemination of sexual chatbots – from the announced launch of “ChatGPT Erotica” to Meta’s romantic chatbots that have engaged in sexual conversations with minors – criminalising the outcome alone is no longer enough.
Platforms should articulate clear ethical standards governing users’ exposure to sexual content, including strict and enforceable limits on material related to child sexual abuse. Moreover, downstream moderation is insufficient. Risk mitigation must be embedded at the design stage, to prevent such images from being created in the first place.
Yet this goal remains distant. The real perpetrators are not machines but humans, and the technology involved merely mirrors human violence. Without a clear political and social willingness to impose and enforce far stricter boundaries, the cycle of abuse is likely to persist.