In our current era of digital information, misinformation is rampant and no domain is fully safe from its traps. Science is no exception: it has become easy to present a claim as a scientific fact, or as an opinion backed by apparent experts in a field. The pandemic exacerbated this risk, and the misleading scientific communications circulated during those years had harmful effects on public and individual health. Often these miscommunications are not mistakes; they are instances where scientific jargon and conventions are used to hide a political, ideological, or financial interest under an appearance of scientific legitimacy. This is what the term cloaked science refers to.

So what can we do to protect ourselves from this poisoned discourse? We aren’t experts in every scientific field; nobody can be. And with the rise of articles published before peer review (pre-prints), the presence of retracted articles still being cited on online social networks, and the proliferation of predatory scientific journals, the signals of sound research are harder to distinguish.

In a recent publication, On the Design of a Misinformation Widget (MsW) Against Cloaked Science, we presented the methodology our research group uses to semi-automatically discern the truthfulness of a statement, tracing the origin of a claim and checking the credentials of its sources. This methodology follows a human-in-charge approach, meaning that the final decision over whether to believe a claim always rests with a human. However, we can draw on several internal AI tools and external open resources to make that decision better informed.

So here I present the questions and actions that inform the MsW methodology, along with useful resources to accompany it.

First move – Assess the credibility of the source

Source taken in a broad sense: Where does the post come from? Who is spreading it? What experts are being cited? Is the source real and credible? What do we know about their intentions and interests?

(a) Verify the credentials, curriculum vitae and reputation of sources that claim to be experts or official authorities.

  • Scientists’ profiles on DBpedia, Wikipedia or Google Scholar may give insight into their previous research. Also check the national e-thesis online services: EThOS in the UK, Teseo in Spain.
  • There are also domain-specific databases for following the discourse on a certain topic, like DeSmog for climate change.
  • Our internal widget integrates this step with Named Entity Recognition and Named Entity Linking tools.
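
As an illustration of that step, here is a minimal sketch of named entity extraction with the open-source spaCy library; the post text is invented, and this is not the widget’s actual pipeline:

```python
# Extract person and organization names from a post with spaCy's
# pretrained NER model, as a first step toward looking those entities
# up on Wikipedia, DBpedia or Google Scholar.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline (install with
                                    # `python -m spacy download en_core_web_sm`)

post = ("Dr. Jane Roe of the Global Health Alliance claims the vaccine "
        "alters human DNA.")        # invented example

for ent in nlp(post).ents:
    if ent.label_ in ("PERSON", "ORG"):
        print(ent.text, ent.label_)
```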

(b) Check for partisan bias, that is for a specific worldview the source might be reproducing.

  • In the case of news outlets, Media Bias Fact Check helps assess both the ideological bias and the factual reporting history of an outlet, based on its record of previously fact-checked fake news.
  • For scientific articles, you may check if the journal appears in Beall’s list of potential predatory journals and publishers.
  • Mind the type of scientific publication: pre-prints, comments or notes might not have been peer-reviewed yet.

(c) In the case of social media, check whether the accounts authoring or sharing the claim are bots rather than real people.
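
Dedicated services such as Botometer score accounts with trained models; as a toy illustration of the kind of signals they rely on, here is a crude heuristic with invented fields and thresholds:

```python
# Toy heuristic for bot-likeness, NOT a real detector: dedicated tools
# such as Botometer combine many more features with trained models.
from dataclasses import dataclass

@dataclass
class Account:          # invented fields for the example
    age_days: int
    posts: int
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude 0-1 score; higher means more bot-like. Thresholds invented."""
    score = 0.0
    if a.age_days > 0 and a.posts / a.age_days > 50:          # hyperactive posting
        score += 0.4
    if a.following > 0 and a.followers / a.following < 0.01:  # follows many, followed by few
        score += 0.3
    if a.age_days < 30:                                       # very young account
        score += 0.3
    return min(score, 1.0)

print(bot_score(Account(age_days=10, posts=900, followers=3, following=2000)))  # -> 1.0
```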

(d) Be aware of your own psychological and social biases.

  • Pay attention to sociopsychological dynamics associated with group identity and ingroup-outgroup communication strategies, the discourse of us-versus-them.
  • If the person who shares the post is not the author, but a family member or friend, do not trust their judgement simply because they are similar or close to you.

Second move – Verify the veracity of the claim

Is there evidence to back the claim? What type of evidence or research? Has the evidence been disproved? Do I know enough to express an informed opinion?

(a) Verify that the headline matches the content, that is, whether the title of the article reflects the story written in it.

  • In other words, be wary of clickbait. This includes standalone title clickbait, where the headline gives no actual information while using flashy language, and headline-article disparity clickbait, where the views defended in the headline differ from, or exaggerate, the actual evidence found in the article.
  • In our research group we have developed AI models for the automatic detection of both of these cases.
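
Those models are internal, but a crude public proxy for headline-article disparity is the lexical similarity between headline and body; a sketch with scikit-learn, where the texts and threshold are invented:

```python
# Crude proxy for headline-article disparity: TF-IDF cosine similarity
# between headline and body. A very low similarity can flag a headline
# that talks about something other than the article's evidence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headline = "Scientists PROVE coffee cures cancer"   # invented example
body = ("A small observational study found a weak correlation between "
        "coffee consumption and the incidence of one tumor type; the "
        "authors stress that no causal claim can be made.")

vecs = TfidfVectorizer(stop_words="english").fit_transform([headline, body])
similarity = cosine_similarity(vecs[0], vecs[1])[0, 0]

if similarity < 0.2:  # arbitrary threshold for the example
    print(f"Possible headline-article disparity (similarity={similarity:.2f})")
```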

(b) Check whether a reputable fact-checking organization or official authority has already verified the claim.
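
One way to automate this check is Google’s Fact Check Tools API, which aggregates reviews from fact-checking organizations; a minimal sketch, where the query is an example and you need your own API key:

```python
# Query Google's Fact Check Tools API for existing reviews of a claim.
# Requires your own API key from the Google Cloud console.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": "5G causes COVID-19", "key": API_KEY},
    timeout=10,
)
for claim in resp.json().get("claims", []):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name")
        print(publisher, review.get("textualRating"), review.get("url"))
```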

(c) Verify that the source is not re-posting old stories claiming they are timely and relevant after taking the information or image out of context.

  • In the case of images, both Google Images and Yandex Images reverse search might reveal previous posts where the same image was already used.
  • You can also check whether a quote has been misreported with the Wikiquote API.
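
Wikiquote is queryable through the standard MediaWiki API; a minimal sketch that searches for a quoted phrase to see on which pages it is actually recorded (the quote is a placeholder):

```python
# Search English Wikiquote through the standard MediaWiki API to see on
# which pages a quoted phrase is actually recorded.
import requests

quote = "I cannot tell a lie"  # placeholder quote
resp = requests.get(
    "https://en.wikiquote.org/w/api.php",
    params={"action": "query", "list": "search",
            "srsearch": f'"{quote}"', "format": "json"},
    timeout=10,
)
for hit in resp.json()["query"]["search"]:
    print(hit["title"])
```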

Third move – Estimate if the claim can trigger collective action

Is the claim about a politically controversial scientific issue? Is the claim written to produce a strong reaction? Is it trying to create distrust of an institution or division between two social groups?

(a) If the claim makes people feel strongly excited or angry, it might be an attempt to trigger a collective response by increasing polarization, division, and distrust between different social groups.

  • In our approach, we make use of sentiment analysis and topic modelling techniques to quickly profile the emotional charge and subject matter of an article; see the sketch after this list.
  • Double-check claims that are associated with discourses that try to: undermine the integrity of the election system; spread hate and division based on misogyny, racism, antisemitism, Islamophobia, and homophobia; denigrate immigrants; promote conspiracies about global networks of power; include a call to a violent or extreme response.
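
For the sentiment side, a lightweight public starting point is NLTK’s VADER analyzer; a minimal sketch with an invented claim and an arbitrary threshold:

```python
# Score the emotional charge of a claim with NLTK's VADER sentiment
# analyzer; strongly polarized scores can flag outrage-bait.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

claim = "They are LYING to you: the corrupt elites rigged everything!"  # invented
scores = sia.polarity_scores(claim)
print(scores)

if abs(scores["compound"]) > 0.6:  # arbitrary threshold for the example
    print("Highly charged language; double-check before sharing.")
```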

(b) Beware of claims presented in an unbalanced or partisan way: pieces that emphasize specific aspects of a topic while hiding or disregarding the complexity and subtleties of the issue.

  • In our internal widget, we make use of tools that measure the controversy of the discourse on a topic across an online social network.
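
Our specific tools are internal, but a common proxy in the literature treats a discussion as controversial when its interaction graph splits cleanly into two camps with few edges between them; a toy sketch with NetworkX on an invented graph:

```python
# Toy controversy proxy: partition an interaction graph into communities
# and measure how few edges cross between them. A clean two-camp split
# with few cross-edges suggests a polarized discussion.
import networkx as nx
from networkx.algorithms import community

# Invented reply/retweet graph: two dense camps joined by a single bridge.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # camp 1
                  ("x", "y"), ("y", "z"), ("x", "z"),   # camp 2
                  ("c", "x")])                          # lone bridge

camps = community.greedy_modularity_communities(G)
cross = sum(1 for u, v in G.edges()
            if not any(u in c and v in c for c in camps))
print(f"{len(camps)} camps, {cross / G.number_of_edges():.0%} cross-camp edges")
```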

This methodology is the backbone of the internal tools our group has designed to retrieve, as quickly as possible, the relevant context for a scientific claim found online. In this digital ecosystem, misinformation travels fast, so cutting it off at the source might be the best way of keeping ourselves and everyone around us safe.