Elon Musk’s Grok announced on Friday that it was working to address issues with its artificial intelligence tool after users reported it transformed images of children or women into sexualized content.
“We’ve identified lapses in safeguards and are urgently fixing them,” Grok said in a post on X.
Complaints began surfacing on X after Grok introduced an “edit image” button in late December. The feature lets users alter any image on the platform; according to the complaints, some have used it to partially or fully strip clothing from women and children in photos.
Media outlets in India reported that government officials are urging X to quickly disclose what steps it is taking to remove “obscene, nude, indecent, and sexually suggestive content” produced by Grok without the consent of the individuals depicted.
Meanwhile, the public prosecutor’s office in Paris has broadened its investigation into X to include fresh allegations that Grok is being used to produce and distribute child pornography.
The initial probe into X began in July, following reports that the social network’s algorithm was being manipulated to enable foreign interference.
In recent months, Grok has drawn criticism for a string of contentious statements on topics ranging from the conflict in Gaza and India-Pakistan tensions to antisemitic remarks and false information about a deadly shooting in Australia.