UK media regulator Ofcom has opened a formal investigation into Elon Musk’s social media platform X over its AI chatbot, Grok, after reports that its image-generation feature has been used to create sexualised deepfakes.
Grok has drawn increasing international criticism for enabling users to generate and share sexualised images of women and children using simple text prompts. Ofcom described the allegations as deeply troubling.
In a statement, the regulator said images showing people undressed could amount to intimate image abuse or pornography, while sexualised images involving children could constitute child sexual abuse material. X did not immediately respond to requests for comment.
Ofcom confirmed that it contacted X on January 5 to seek clarification on the measures in place to protect UK users.
Although the regulator did not disclose details of the correspondence, it said the company replied within the required timeframe.
The investigation will assess whether X has failed to meet its legal obligations under UK law.

Britain’s Online Safety Act, which came into force in July, requires websites, social media platforms, and video-sharing services that host potentially harmful content to enforce strict age-verification measures, including facial recognition or credit card checks.
The law also criminalises the creation or distribution of non-consensual intimate images and child sexual abuse material, including AI-generated sexual deepfakes.
Ofcom can impose fines of up to 10 per cent of a company’s global turnover for breaches of the rules.
Amid the backlash, X introduced a new monetisation policy last week, putting Grok behind a premium subscription available only to paying users.
UK Prime Minister Keir Starmer criticised the move, describing it as an insult to victims and “not a solution”.
Indonesia on Saturday became the first country to block access to Grok entirely, with Malaysia following on Sunday.
The European Commission has also confirmed that it is reviewing complaints related to the tool.