Shortly after Joe Biden announced the end of his reelection campaign, misinformation spread quickly online about a potential new candidate to succeed him.
Screenshots falsely claiming that a new candidate could not be added to ballots in nine states went viral on X (formerly Twitter), amassing millions of views. The Minnesota Secretary of State’s office began receiving requests to verify the claims. In fact, the ballot deadlines had not passed, leaving Kamala Harris ample time to add her name in those states.
The source of the misinformation was X’s own AI chatbot, Grok, which answered incorrectly when asked whether a new candidate could still be added to ballots.
The incident became a pivotal moment in how election officials and AI companies interact ahead of the 2024 U.S. presidential election. With growing concerns about AI misleading or confusing voters, the role of chatbots like Grok in elections is under scrutiny.
A group of secretaries of state, represented by the National Association of Secretaries of State, reported the error to X. However, Minnesota Secretary of State Steve Simon called the company’s initial response disappointing, as it did not correct the mistake immediately. “It shocked all of us,” he remarked.
Although this error did not directly impact voting, election officials are concerned about future AI-generated misinformation. “Next time, it might be something more critical like how or where to vote,” Simon added.
The incident highlights a new challenge: misinformation not just spreading through social media but also originating from the platforms themselves.
In response, five secretaries of state signed an open letter to X and its owner, Elon Musk, calling on the company to have Grok follow the example of other chatbots like ChatGPT and direct users to trusted election resources such as CanIVote.org.
The pressure paid off. Grok now directs users to vote.gov when asked election-related questions. Wifredo Fernandez, head of global government affairs at X, acknowledged the changes in a letter to the officials, emphasizing X’s commitment to ongoing communication during the election season.
For election officials, this marked a small victory against misinformation, demonstrating the importance of addressing AI errors early and often. Simon acknowledged his disappointment with the initial response but praised the company for ultimately doing the right thing.
Grok, described by Musk as an “anti-woke” chatbot with sarcastic responses, poses unique risks. The tool draws on popular posts on X to inform its replies, which can undermine its accuracy. Lucas Hansen, co-founder of CivAI, a nonprofit that warns about the dangers of AI, noted that this reliance on unvetted user content puts Grok at a disadvantage in preventing misinformation.
Hansen also warned that Grok, which requires a paid subscription, could fuel partisan divisions by generating controversial images. Investigations by the Center for Countering Digital Hate and Al Jazeera have shown how Grok can create lifelike yet misleading images. Hansen cautioned that this capability allows users to produce more provocative content than ever before.