Research published Thursday shows that even brief interactions with partisan AI chatbots can influence voters’ political views, with evidence-backed arguments — whether true or false — proving particularly persuasive.
Experiments involving generative AI models, including OpenAI’s GPT-4o and the Chinese alternative DeepSeek, revealed that supporters of Republican Donald Trump were shifted nearly four points on a 100-point scale towards Democratic rival Kamala Harris ahead of the 2024 US presidential election.
Similar experiments in Canada and Poland found opposition supporters’ opinions altered by up to 10 points after speaking with bots designed to persuade, an effect sufficient to sway a notable proportion of voting decisions.
Cornell University professor David Rand, a senior author of the studies published in Science and Nature, said roughly one in 10 respondents in Canada and Poland, and one in 25 in the US, reported a change in voting intention following AI interactions.

Follow-up surveys indicated that about half of the persuasive effect persisted after one month in Britain, and a third in the United States, a degree of durability unusual in social science research.
The studies found that politeness and the provision of evidence were the most effective tactics for influencing users.
Chatbots instructed to avoid using facts were far less persuasive, challenging the prevailing view in political psychology that people ignore information conflicting with their partisan beliefs.
However, not all of the chatbots' claims were accurate: models advocating for right-leaning candidates tended to make more false statements, likely reflecting patterns in the online data used to train them.
Thousands of participants were recruited via online gig-work platforms and informed that they would interact with AI.
The researchers noted that further work could explore the “upper limit” of AI’s capacity to change opinions and assess how newer models, such as GPT-5 or Google’s Gemini 3, might perform.