Social media platform X announced Tuesday that creators who post AI-generated videos of armed conflicts without disclosing they are artificially created will face a 90-day suspension from its revenue-sharing programme.
The policy change, announced by an executive at the Elon Musk-owned platform, is intended to protect the authenticity of information amid the ongoing war involving the US, Israel, and Iran.
“During times of war, it is critical that people have access to authentic information on the ground,” said Nikita Bier, X’s head of product.
Bier added that current AI technologies make it “trivial to create content that can mislead people.”

X said it will “continue to refine” its policies and tools to ensure the platform “can be trusted during these critical moments”.
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.
During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…
— Nikita Bier (@nikitabier) March 3, 2026
The new AI disclosure rules mark a shift for a platform long criticised for its content moderation approach since Musk’s $44 billion acquisition of Twitter in October 2022, which led to its rebranding as X.
Since then, X has largely rolled back restrictions on misinformation, characterising such rules as censorship.
Under the new rules, repeat offenders risk permanent removal from the Creator Revenue Sharing programme, which pays eligible users a portion of advertising revenue from their posts.
Violations will be detected through X’s Community Notes crowdsourced fact-checking system, plus metadata and other technical markers embedded in AI-generated content.