Artificial intelligence now makes it possible to flood the web with fake news
Gilbert Kallenborn
Journalist
Researchers used the GPT-3 text generator to create false political messages. These texts are convincing enough to influence people's opinions. Disinformation is entering its industrial stage.
The internet and social networks have become, as we know, major vectors of disinformation and psychological manipulation. Russian services are particularly adept at this game, as was seen during the 2016 American presidential campaign. They are obviously no longer the only ones practicing this discipline, and it is all the more worrying that recent advances in artificial intelligence now allow such actors to generate fake news automatically. This is what Andrew Lohn and Micah Musser, researchers at the Center for Security and Emerging Technology (CSET), have just demonstrated at the Black Hat USA 2021 conference.
Perfect artificial texts
They relied on the GPT-3 text generator to create fake tweets and fake articles inspired by the QAnon conspiracy sphere. The result is impressive. The researchers built a microblogging feed called "Twodder" which, from a handful of seed sentences, can churn out a steady stream of messages perfectly consistent with the ideas of the QAnon community. No one would suspect that they were generated by a machine.
The researchers also wanted to know whether the ideas conveyed by these artificial texts could actually persuade real users. They presented 1,700 people with automatically generated opinions about the withdrawal of American troops from Iraq and the trade conflict with China. The result: not only were the arguments judged rather valid by more than half of the participants, they also managed to change the participants' way of thinking. In some cases, they even reversed the initial majority opinion. "Even if the generated arguments are not of very high quality, a malicious actor could use GPT-3 to create them in large numbers, disseminate them on the networks and have an effect on general opinion," said Micah Musser.
However, such a disinformation operation is not within the reach of any would-be hacker. Admittedly, setting up a GPT-3 model is relatively easy, but running it requires significant computing capacity. In particular, a large number of GPUs are needed to distribute the calculations. These GPUs can be rented in the cloud, but it does not come cheap: the researchers estimate the cost at about $50 per hour per GPU.
Within reach of a great power
To create a volume of fake news equivalent to 1% of all the content distributed on Twitter, or about 8.5 million tweets per day, would require a budget of $65 million per year. "That's too much for an individual hacker, but for a great power, it's not much. It could then broadcast billions of false messages per year," says Andrew Lohn.
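As a rough sanity check, the figures quoted above can be combined in a short back-of-envelope calculation. The Python sketch below uses only the numbers cited in the article; the implied GPU fleet size and the cost per message are derived illustrations, not figures given by the researchers.

# Back-of-envelope check of the figures quoted in the article.
# Only gpu_price_per_hour, tweets_per_day and annual_budget come from the article;
# everything derived below is an illustrative estimate, not CSET data.
gpu_price_per_hour = 50    # dollars per GPU per hour (cloud rental)
tweets_per_day = 8.5e6     # roughly 1% of Twitter's daily volume
annual_budget = 65e6       # dollars per year, as estimated by the researchers

hours_per_year = 24 * 365
gpu_hours_per_year = annual_budget / gpu_price_per_hour      # about 1.3 million GPU-hours
gpus_running_nonstop = gpu_hours_per_year / hours_per_year   # about 150 GPUs around the clock

tweets_per_year = tweets_per_day * 365                       # about 3.1 billion messages
cost_per_tweet = annual_budget / tweets_per_year             # about 2 cents per generated message

print(f"GPU-hours per year: {gpu_hours_per_year:,.0f}")
print(f"GPUs running 24/7:  {gpus_running_nonstop:,.0f}")
print(f"Messages per year:  {tweets_per_year:,.0f}")
print(f"Cost per message:   ${cost_per_tweet:.3f}")

The roughly three billion messages per year implied by these numbers is consistent with Lohn's "billions of false messages per year".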
How can one protect against such a threat? Content analysis and filtering would be of little help here, because GPT-3's texts mimic human writing to perfection. The only way, according to the researchers, is to focus on the infrastructure. To generate a volume equivalent to 1% of Twitter, at least 350,000 different accounts would be needed. That is a lot... and it does not go unnoticed.
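To see why the account fleet is the weak point, a quick division of the article's own figures (an illustrative calculation, not one made by the researchers) shows the posting rhythm each account would need to sustain:

tweets_per_day = 8.5e6    # roughly 1% of Twitter's daily volume, per the article
accounts = 350_000        # minimum number of accounts cited by the researchers

tweets_per_account = tweets_per_day / accounts   # about 24 messages per account, every day
print(f"Each account would post about {tweets_per_account:.0f} tweets per day")

A fleet of hundreds of thousands of accounts all posting at that steady, mechanical pace is precisely the kind of behavioral signature platforms can look for, regardless of how human the texts themselves read.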