
GPT-3 a double-edged sword: New research questions veracity of AI Models


Sunday July 2, 2023 11:45 PM, ummid.com Web Desk


London: Researchers at the University of Zurich have found that the Large Language Model (LLM) GPT-3, created by OpenAI, is a "double-edged sword" that can be used to produce misinformation and disinformation as readily as accurate information.

Inform but also mislead

According to a study published in the journal Science Advances, GPT-3 could produce tweets that readers found accurate and convincing, even when those tweets contained erroneous information.

“Our results show that GPT-3 is a double-edge sword, which, in comparison with humans, can produce accurate information that is easier to understand, but can also produce more compelling disinformation,” wrote the researchers in the paper’s abstract posted on a preprint website.

Additionally, the researchers discovered that it was difficult for people to tell the difference between tweets created by GPT-3 and those written by actual Twitter users.

"Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems," said Federico Germani, a postdoctoral researcher at the varsity.

Misleading and False Information

This finding raises questions about using AI models to spread false information. GPT-3, for instance, might be used to create phony news reports or social media posts to mislead readers.

The model was put to the test by the researchers, who asked it to write slanted reviews of products on TripAdvisor and Amazon, and to fabricate false news stories about political candidates. They found that the model could produce text indistinguishable from human writing, and that unwary readers would be likely to accept the false information it generated.

The study draws attention to rising concern about the use of sophisticated language models to spread misinformation online. With confidence in traditional media outlets at an all-time low, many people turn to social media and other online platforms for news and information.

Overall, the study underlines the necessity of closer examination of advanced language models and their possible influence on our capacity to discriminate between reality and fiction online.

As these models are widely used, there is a need to create efficient methods for preserving the accuracy of online information and safeguarding users against the negative impacts of misinformation.

"Recognising the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age," said study co-author Nikola Biller-Andorno.

 
