- Sorting the real news from fake news on our social media networks isn’t easy.
- Malicious actors use our psychology against us to get us to believe fake news is true.
- By following the theories of human cognition and behavioural sciences, we can calculate quantitative measures and apply natural language processing techniques to help us spot fake news.
How can we avoid being fooled by fake news? Our social networks are bombarded with information, and it is almost impossible to discern what is reliable and what is not.
The problem is not so much the false content itself as the fact that malicious actors try to use our psychology against us. This explains why misinformation spreads six times faster on social networks than reliable information. As if that were not enough, misinformation is created in greater quantity (the volume problem), in more forms (the breadth problem) and faster (the speed problem) than our ability to counter it. Despite this, there are clues that can tell us when the author of an article is trying to deceive us by exploiting our psychology.
Following theories of human cognition and the behavioural sciences, we can calculate quantitative measures and apply natural language processing techniques to help us spot fake news. For example, the limited capacity model of motivated mediated message processing proposes that different structural and functional characteristics of a text require different cognitive efforts from our brain. That is, not all texts are equally complex: a book for young children, for instance, has a simpler structure than a scientific article. The problem is that on social networks we tend to minimise the effort we put into processing information, which explains why simpler content is more viral.
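To make the idea of cognitive effort concrete, here is a minimal Python sketch that scores a text with the Flesch Reading Ease formula, one common readability proxy. This is an illustration only, not the measure used in the study: the syllable counter is a rough heuristic and the example sentences are invented.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher scores mean less cognitive effort to process.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / max(len(words), 1)))

children_book = "The cat sat on the mat. The dog ran to the cat."
scientific = ("Heterogeneous treatment effects were estimated via "
              "multilevel regression with poststratification.")

print(flesch_reading_ease(children_book))  # high score: easy to process
print(flesch_reading_ease(scientific))     # low score: more effort required
```

The children's sentence scores far higher than the scientific one, which is the intuition behind using readability as a proxy for the effort a reader must invest.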
Our psychology, however, is also susceptible to being swayed by emotions. Studies show that highly emotional content is more viral, which explains why social networks are considered sources of “large-scale emotional contagion.” When we are exposed to information with high emotional content, we become less rational and our ability to discern what is true from what is false diminishes. Knowing this, fake content creators try to exploit our emotions against us. In a recent study, we set out to reveal the strategies followed by different types of misinformation, what we call “the fingerprints of misinformation.” To do this, we analysed more than 92,000 content pieces spanning seven categories: clickbait, conspiracy theories, fake news, hate speech, junk science, reliable sources and rumours. We then compared the misinformation categories with real news in terms of their appeal to emotions and the cognitive effort they require.
The fingerprints of misinformation
Using natural language processing, we calculated the extent to which the different categories of misinformation differ from real news in their appeal to emotions (sentiment analysis and appeal to moral values) and in the cognitive effort required to process the content (grammatical complexity and lexical diversity). The results indicate that there are large differences between fake and real content:
Figure: The fingerprints of misinformation – how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions. (Image: Carrasco-Farré, C., 2022)
As the graph shows, almost all types of misinformation have a simpler grammatical structure than reliable news, which makes them easier to process because they demand less cognitive effort from the user. They are also all 15% less lexically diverse, further reducing the cognitive effort required to process them. Turning to the evocation of emotions, the results indicate that misinformation is far more emotional than real news: it is ten times more negative in terms of sentiment and, importantly, it appeals 37% more to the reader's morality. That is, it tries to influence readers psychologically with ideas framed as an attack on their social identity (gender, religion, nationality, etc.).
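As a rough illustration of how such fingerprints can be computed, the sketch below approximates three of the measures discussed above with off-the-shelf tools: sentiment via NLTK's VADER analyser, lexical diversity as a type-token ratio, and grammatical complexity crudely proxied by mean sentence length. The example texts are invented and the proxies are assumptions for illustration; they do not reproduce the study's actual pipeline, which also scores appeals to moral values.

```python
import re
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER
sia = SentimentIntensityAnalyzer()

def fingerprint(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    return {
        # Compound sentiment in [-1, 1]; strongly negative values are one
        # of the signals associated with misinformation.
        "sentiment": sia.polarity_scores(text)["compound"],
        # Type-token ratio: unique words divided by total words.
        "lexical_diversity": len(set(words)) / max(len(words), 1),
        # Mean words per sentence as a stand-in for grammatical complexity.
        "avg_sentence_length": len(words) / len(sentences),
    }

reliable = ("The committee reviewed the evidence, interviewed several experts "
            "and published a detailed report describing its methodology.")
deceptive = ("They are lying to you! Share this now before they delete it! "
             "They hate you and they are coming for your family!")

print("reliable :", fingerprint(reliable))
print("deceptive:", fingerprint(deceptive))
```

On toy snippets like these, the deceptive text scores sharply negative and recycles the same few words, which is the pattern the figure above describes at scale.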
Articles with highly negative content and an appeal to social identity look like the following example, taken from one of the analysed articles reporting on a shooting of two French police officers: “(…) Her candidacy [referring to Marine Le Pen] has been an uprising against the globalist-orchestrated Islamist invasion of the EU and the associated loss of sovereignty. The EU is responsible for the flood of terrorists and Islamists into France (…)”

Fortunately, these same results allow us to make some recommendations for avoiding being deceived by creators of misinformation who manipulate our psychology. Following the structure of the results above:
Be suspicious of content with unusually simple grammatical and lexical structure
The creators of misinformation know that when you browse social networks you will not put in the effort you would in other situations; after all, you are there to be informed or entertained, not to work. They therefore try to keep the content cognitively cheap to process, which makes it more attractive to your brain and more likely that you will believe it and share it with other members of your network. This is not to say that all internet content should be complex; rather, it is about learning to identify content that is suspiciously simple.
Do not trust news that tries to stir your emotions or appeal to your moral values
Research in the behavioural sciences consistently shows that humans are not as rational as we think. Our emotions strongly influence our mental processes and our decision-making, which is why fake content creators try to exploit them so that we cannot rationally discern whether what we are reading is true. They appeal to our emotions by trying to provoke anger, fear or sorrow to cloud our rationality. They also employ strategies that make us feel that our social identity – our nationality, our gender, our opinions – is in danger, creating the sensation of enemies outside our group who threaten our very existence. This feeling of being attacked activates mechanisms in our psychology that make us behave less rationally and reduce our ability to discern between what is real and what is false.
The results of the study therefore show how important it is to stay alert when navigating the internet, especially on social networks, and to be able to detect when someone is trying to use our psychology against us. This requires individual effort, but also work as a society.
First of all, social media companies must recognise that their platforms provide access to information on a scale never seen before in human history – which is incredibly beneficial – but also that they are being used by malicious actors who exploit users' psychology for economic or political ends. The results are also a call to public decision-makers to pay more attention to the alarmingly low levels of media literacy in the population. Technology is advancing much faster than our psychology, our educational systems or our policy-making, and this will create problems for our societies if we do not work harder to address the challenges we already face and those ahead.