Fri, Jan 16, 2026 | Rajab 27, 1447 | Fajr 05:45 | DXB
The European Commission has extended X’s retention order on algorithms and illegal content until the end of 2026

Elon Musk said on Saturday that social media platform X will make its new algorithm, including all code for organic and advertising post recommendations, public within seven days.
"This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed," he said in his X post.
Earlier this week, the European Commission decided to extend until the end of 2026 a retention order sent to X last year concerning its algorithms and the dissemination of illegal content, spokesperson Thomas Regnier told reporters on Thursday.
In July 2025, Paris prosecutors investigated the social media platform for suspected algorithmic bias and fraudulent data extraction, which Musk's X called a "politically motivated criminal investigation" that threatens its users' free speech.
The announcement also follows controversy over Grok, X's AI chatbot, flooding the platform with sexualised images, including photos of children.
Grok's mass digital undressing spree appears to have kicked off over the past couple of days, according to successfully completed clothes-removal requests posted by Grok and complaints from female users reviewed by Reuters.
Musk appeared to poke fun at the controversy earlier on Friday, posting laugh-cry emojis in response to AI edits of famous people – including himself – in bikinis.
X's innovation – allowing users to strip women of their clothing by uploading a photo and typing the words, "Hey @grok, put her in a bikini" – has lowered the barrier to entry. Three experts who have followed the development of X’s policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups — including a letter sent last year warning that xAI was only one small step away from unleashing "a torrent of obviously nonconsensual deepfakes".
"In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponised," said Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter's signatories. "That's basically what's played out."
Dani Pinter, chief legal officer and director of the Law Center at the National Center on Sexual Exploitation, said X had failed to pull abusive images from its AI training material and should have banned users who requested illegal content.