Chatbots’ misleading responses pose threat to US election participation

A recent report reveals that popular chatbots are generating false and misleading information about the ongoing US presidential primaries. The findings, based on insights from artificial intelligence experts and a bipartisan group of election officials, show that these inaccuracies have the potential to disenfranchise voters.

As fifteen states and one territory gear up for Super Tuesday, when both Democratic and Republican presidential nominating contests will take place, millions of people are turning to AI-powered chatbots for basic information about the voting process. However, these chatbots, trained on vast amounts of internet text, have been found to suggest non-existent polling places and to give illogical answers based on outdated information.

The report, which synthesized the workshop findings of election officials and AI researchers, highlighted the inadequacy of chatbots in providing nuanced and accurate information about elections. In a test conducted at Columbia University, five large language models, OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral, all failed to varying degrees when responding to basic questions about the democratic process.

The workshop participants rated more than half of the chatbots’ responses as inaccurate and categorized 40% as harmful, including for perpetuating dated and inaccurate information that could limit voting rights. For instance, when asked where to vote in the ZIP code 19121, a majority-Black neighborhood in northwest Philadelphia, Google’s Gemini falsely responded that no such voting precinct exists.

The report also raised concerns about the chatbots pulling from outdated or inaccurate sources, highlighting the potential of generative AI to amplify longstanding threats to democracy. The findings have fueled broader worries that AI tools could accelerate the spread of false and misleading information during the elections and enable AI-generated election interference.

While some technology companies have made largely symbolic pledges to prevent their AI tools from being used to generate false election information, the report’s findings underscore the need for greater scrutiny and regulation of AI in the political sphere.