Talking with an AI chatbot can successfully convince people to change their votes and could affect the outcome of future elections, according to a new study.
The study, which included 1,530 Canadians, also found that the chatbot had more success convincing Canadians to switch their votes than it did with Americans.
Gordon Pennycook, a Canadian and associate professor at Cornell University, said the study set out to discover how persuasive generative AI could be when it comes to politics.
“The answer is it’s very persuasive and more persuasive than traditional forms of political persuasion, which is like ads and things like that,” said Pennycook, one of the study’s authors.
The study, published in the journal Nature, found that one in 21 U.S. respondents who took part in the experiment in the fall of 2024 was convinced, after interacting with an AI chatbot, to switch their vote to Kamala Harris, while one in 35 was convinced to switch to Donald Trump.
In the Canadian part of the study, which took place in the final week of the federal election in April, participants were asked which of 17 policy issues were most important to them in deciding who to vote for. All of the interactions were in English, and there was no breakdown of where in Canada participants lived.
The study found that interacting with the chatbot did prompt some participants to change their voting intention.
“In Canada, in the pro-Carney condition, it was one in nine who switched, which is a lot of people,” said Pennycook. “In the pro-Poilievre condition, where the AI convinced people to vote for Poilievre, it was one in 13 who switched.
“That’s a lot of people who are changing their minds … if you were to target that at the particular right constituents of particular districts or ridings, then you could flip an election.”
Pennycook said one of the reasons AI chatbots can be effective in political persuasion is that they adapt their arguments to each respondent.
The study also found that the chatbot was more effective in convincing people to change their votes when it was allowed to use facts to do so.
“The persuasive effect was almost three times larger in the Canadian federal election than the effect observed in the U.S. experiment, but depriving the AI of the ability to use facts and evidence reduced the effect by more than half,” the authors wrote.
Pennycook pointed out that participants in the study spent six to eight minutes interacting with the AI chatbot, far longer than it takes to watch an ad.
Pennycook said the difference between the U.S. and Canadian impact could be linked to the constant political campaigning in the U.S.
“Americans are inundated with election content non-stop,” he said. “And so, it’s much harder to switch, to change people’s minds.”
In its conclusions, the study found that talking with an AI chatbot “can meaningfully impact voter attitudes” but said it remains to be seen how effective the technology will be if it is deployed by political campaigns.
“It seems highly likely that AI-based approaches to persuasion will play an important role in future elections — with potentially profound consequences for democracy,” wrote the authors.
While the Canadian experiment was conducted during the federal election and some ridings were won with only a handful of votes, Pennycook doubts it could have had an impact on any of the results.
“There’s no real way of knowing, but I think it seems unlikely that this study of a thousand some people would change an election,” he said, pointing out that participants came from across Canada.
While Canada has strict rules on advertising and other tools used to persuade voters during the election writ period, Elections Canada says there are few, if any, rules governing the use of AI during a campaign. However, someone could break the law by using AI to falsely pose as an election official, or by sending out material that falsely purports to come from election officials, a political party or a candidate.
Chief Electoral Officer Stéphane Perrault has recommended changes to the elections law to address emerging threats from AI, such as requiring that electoral communications generated or manipulated using AI carry a transparency marker, and that AI chatbots or search functions indicate in their responses where users can find official or authoritative information.
The Office of the Commissioner of Canada Elections, which investigates complaints, said it received some complaints regarding the use of AI in the last election. But in a statement in June, Commissioner Caroline Simard said there was no indication that the use of AI affected the results.

Fenwick McKelvey, associate professor in communication studies at Montreal’s Concordia University and co-director of the university’s applied AI institute, praised the study, saying it documents how generative AI can affect voting intentions.
“We know that this kind of work can be persuasive,” he said.
McKelvey said political parties in other countries such as Mexico have already begun using chatbots as part of their persuasion strategies.
McKelvey said one cause for concern would be if generative AI chatbot technology were combined with the existing databases political parties have built up on Canadian voters, databases that are exempt from Canada's privacy laws.
“The lack of oversight about databases and the data they have can now be used in ways that nobody consented to,” he said.
McKelvey said political parties should be subject to Canada’s privacy laws and the government should take steps to mitigate potential harms of AI in advertising.