
Machine learning software from MIT and QCRI can detect fake news sites

Researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) have created a new system that uses machine learning to determine if a source is accurate or politically biased.

The program works by reading data from Media Bias/Fact Check (MBFC), a website where human fact-checkers analyse the accuracy and biases of more than 2,000 news sites. It then uses a machine learning algorithm to classify news sites the same way MBFC does.
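The paper itself does not publish its code, but the basic idea of learning MBFC-style labels from a source's articles can be sketched roughly as follows. This is a minimal illustration, not the researchers' method: the sources, labels, and text here are invented, and a toy bag-of-words centroid classifier stands in for their far richer feature set.

```python
from collections import Counter
import math

# Invented MBFC-style factuality labels with sample article text per source
# (purely illustrative; the real system uses many more features and sources).
labeled_sources = {
    "site-a": ("high", "peer reviewed study confirms vaccine safety data"),
    "site-b": ("high", "official statistics report confirms economic data"),
    "site-c": ("low",  "shocking secret miracle cure they hide from you"),
    "site-d": ("low",  "shocking exposed hoax miracle conspiracy revealed"),
}

def bag_of_words(text):
    """Word-count feature vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Aggregate one centroid-like vector per factuality label.
centroids = {}
for label, text in labeled_sources.values():
    centroids.setdefault(label, Counter()).update(bag_of_words(text))

def classify_source(articles):
    """Pool a source's articles and return the closest factuality label."""
    pooled = Counter()
    for text in articles:
        pooled.update(bag_of_words(text))
    return max(centroids, key=lambda label: cosine(pooled, centroids[label]))

print(classify_source(["shocking miracle cure exposed"]))  # → low
```

The point of the sketch is the workflow: human-curated labels from MBFC supervise a classifier over features extracted from a source's own output, so that new, unreviewed sites can be scored automatically.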

In early tests, the system was 65 per cent accurate at detecting whether a news source has a high, medium or low level of factuality, and roughly 70 per cent accurate at detecting whether it is politically biased.

“If a website has published fake news before, there’s a good chance they’ll do it again,” says postdoc Ramy Baly, the lead author on a new paper about the system. “By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”

Baly adds that the system needs roughly 150 articles to accurately detect whether a news source can be trusted. This efficiency would allow fake news to be spotted and eliminated before it goes viral.

This is not the first time machine learning has been used to fight fake news. Earlier this year, Factmata, a London-based company working on machine learning technology, raised £700,000 ($1 million) in seed funding from a number of investors to tackle the issue. More recently, Google Cloud's former chief AI scientist Fei-Fei Li called for machine learning to be used to eradicate fake news.

“There is ongoing research in the [AI] community on this [fake news] and it’s an important effort,” she said. “Just like AI has been used to call out gender inequality in Hollywood, that women are less represented, I think it’s a great use of the technology to deliver a positive message and I hope that NLP (natural language processing) can contribute to that [fake news problem] as well.”

“It’s interesting to think about new ways to present the news to people,” says Preslav Nakov, co-author of the study. “Tools like this could help people give a bit more thought to issues and explore other perspectives that they might not have otherwise considered.”

Nakov warned the system is still a work in progress, and that, even with improvements in accuracy, it would work best together with human fact-checkers.

In the future, the MIT and QCRI researchers plan to test the English-trained system on other kinds of bias beyond the political, such as religious or secular leanings in news from the Islamic world.

