AI chatbots might be helpful, but they’re also quietly left-leaning, according to new research

Summarised by Centrist

David Rozado of Otago Polytechnic ran political questionnaires through 24 different AI models, including ChatGPT and Google’s Gemini, and found that most of these chatbots lean to the left.

As companies like Google increasingly promote chatbots as trusted sources of information, this political bias raises concerns about the unintended influence AI could have on society. 

Rozado doesn’t suggest the bias is deliberate, but it is present. Because AI models like ChatGPT are trained on vast amounts of online content, a disproportionate share of left-leaning over right-leaning material in that training data could be a contributing factor.

Rozado also notes that earlier foundational models like GPT-3.5 didn’t show political bias; it’s the chatbot layers trained on top of those models that introduce it.

Rozado warns that, as AI increasingly replaces traditional information sources like search engines and Wikipedia, political bias in these systems could have major societal implications.

“It is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries,” he writes.

Read more over at Science Alert

