Researchers already know that online fake news spreads much more quickly and more widely than real news. My research has similarly found that online posts with fake medical information get more views, comments and likes than those with accurate medical content. In an online world where viewers have limited attention and are saturated with content choices, it often appears as though fake information is more appealing or engaging to viewers.
The problem is getting worse: By 2022, people in developed economies could be encountering more fake news than real information. This could bring about a phenomenon researchers have dubbed “reality vertigo” – in which computers can generate such convincing content that regular people may have a hard time figuring out what’s true anymore.
However, those detection methods assume that the people who spread fake news don't change their approaches. In reality, they often shift tactics, manipulating the content of fake posts to make them look more authentic.
Context is also key. Words’ meanings can change over time. And the same word can mean different things on liberal sites and conservative ones. For example, a post with the terms “WikiLeaks” and “DNC” on a more liberal site could be more likely to be news, while on a conservative site it could refer to a particular set of conspiracy theories.
Using AI to make fake news
The biggest challenge of using AI to detect fake news, however, is that it puts technology in an arms race with itself. Machine learning systems are already proving spookily capable of creating what are called "deepfakes" – photos and videos that realistically replace one person's face with another's, making it appear that, for example, a celebrity was photographed in a revealing pose or a public figure is saying things they'd never actually say. Even smartphone apps are capable of this sort of substitution – which makes the technology available to just about anyone, even without Hollywood-level video editing skills.
When someone sees an enraging post, that person would do better to investigate the information, rather than sharing it immediately. The act of sharing also lends credibility to a post: When other people see it, they register that it was shared by someone they know and presumably trust at least a bit, and are less likely to notice whether the original source is questionable.
Facebook could use its partnerships with news organisations and volunteers to train AI, continually tweaking the system to respond to propagandists’ changes in topics and tactics. This won’t catch every piece of news posted online, but it would make it easier for large numbers of people to tell fact from fake. That could reduce the chances that fictional and misleading stories would become popular online.
The Conversation Africa is an independent source of news and views from the academic and research community. Its aim is to promote better understanding of current affairs and complex issues, and allow for a better quality of public discourse and conversation. Go to: https://theconversation.com/africa