Event Abstract

Reasoning About Information Provided by Bots

  • George Mason University, United States

Introduction: The surge of fake news articles on social media leading up to the 2016 U.S. presidential election caused legitimate concern as to whether the American public was being intentionally misinformed by a foreign power in order to disrupt the national democratic process. Although this misinformation was promoted by foreign “trolls” (i.e., individuals operating outside the U.S. while claiming national identity in order to promote partisan misinformation) and “bots” (i.e., artificially intelligent agents designed to disperse misinformation to a wide audience), Americans consistently engaged with and further spread it (Shao et al., 2017), possibly indicating genuine belief in the content. It has been widely speculated that this cyber warfare tactic will be used again in the future; it is therefore necessary for designers to create tools that support an individual’s cognition in filtering out misinformation encountered in online environments. “Epistemic vigilance” refers to the cognitive mechanisms that constantly monitor the possibility that we are being misled by another party; these mechanisms likely developed to ensure that information recipients can detect and punish dishonesty from communicators (Sperber et al., 2010). Unfortunately, our mechanisms for epistemic vigilance appear to be strained by social media, where attentional capacities are frequently overwhelmed by the volume of information, making it difficult to filter information by quality (Qiu et al., 2017). The current study consists of a reasoning task in which participants are given information that is partially refuted by two different agents: one from the participant’s political in-group and one from their out-group, each presented only as an icon that is either red (Republican) or blue (Democrat). Participants are initially told that the agents are other people, but are later informed that they are bots designed to influence their opinions.

Methods: Participants are recruited via Mechanical Turk. After completing a demographic survey and a political typology quiz, they are presented with trials consisting of three statements that are likely to induce causal inferences, resulting in the perception that the statements are linked (Singer et al., 1992). One of the two agents then refutes one of the claims, and participants are asked whether each of the three statements is still true or false. In the first pair of blocks, participants see three trials in which the agents (one per block) refute politically biased claims in a way that reveals each agent’s political persuasion, followed by three neutral trials meant to capture how much participants trust statements made by each agent. In the next pair of blocks, participants are presented with a radical belief held by the agent (e.g., the belief that the world is actually flat, or that many world leaders have been replaced by lizard people), followed by three neutral trials, in order to see whether participants’ trust in information from each agent is affected by this undermining of the agent’s credibility. In the final pair of blocks, participants are informed that the agent is actually an AI bot designed to influence their opinion, followed by three neutral trials to assess loss of trust and three politically biased trials to assess whether the belief that they are being intentionally manipulated results in attitude change. Finally, explicit measures of trust and approachability are collected, and participants play the Trust Game with each agent to see how much they are willing to invest without knowing whether or not the agent will decide to reward them for the investment.

Results: Data collection is ongoing. Preliminary trends show that liberals and conservatives alike are more likely to begin doubting information provided by those with incongruent political opinions once credibility is undermined, but continue to believe information provided by those with congruent political opinions, even after their credibility is undermined. Additionally, conservatives offer more in the Trust Game to the conservative agent than to the liberal agent and explicitly report trusting the conservative agent more, while liberals do not report strong differences.

Discussion: It is likely that misinformation will again be used as a political tool on social media. It is important for designers to understand how people are affected by misinformation in order to develop tools that can adequately alert users to such efforts.
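For readers unfamiliar with the Trust Game, the minimal sketch below illustrates its standard payoff structure: the participant’s transfer is multiplied before reaching the agent, who then decides how much, if anything, to return. The endowment, multiplier, and amounts shown are illustrative assumptions; the abstract does not report the parameters used in this study.

# Illustrative sketch of the standard Trust Game payoff structure (Python).
# The endowment (10 units) and multiplier (x3) are typical values for this
# paradigm and are assumptions, not parameters reported in the abstract.

def trust_game_payoffs(endowment, invested, multiplier, returned):
    """Return (participant_payoff, agent_payoff) for one round."""
    assert 0 <= invested <= endowment, "investment cannot exceed the endowment"
    transferred = invested * multiplier            # investment grows in transit
    assert 0 <= returned <= transferred, "agent can only return what it received"
    participant_payoff = endowment - invested + returned
    agent_payoff = transferred - returned
    return participant_payoff, agent_payoff

# Example: the participant invests 6 of 10 units and the agent returns nothing,
# i.e., the risk of trusting an agent that does not reward the investment.
print(trust_game_payoffs(endowment=10, invested=6, multiplier=3, returned=0))
# -> (4, 18): the trusting participant ends up worse off than if they had kept
#    the full endowment, which is why the amount invested indexes trust in the agent.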

References


Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of fake news by social bots. arXiv preprint arXiv:1707.07592.

Singer, M., Halldorson, M., Lear, J. C., & Andrusiak, P. (1992). Validation of causal bridging inferences in discourse understanding. Journal of Memory and Language, 31(4), 507-524.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393.

Qiu, X., Oliveira, D. F., Shirazi, A. S., Flammini, A., & Menczer, F. (2017). Limited individual attention and online virality of low-quality information. Nature Human Behaviour, 1, 0132.

Keywords: Epistemic vigilance, reasoning, political ideology, Bots, Cognition

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Tulk S and Wiese E (2019). Reasoning About Information Provided by Bots. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00123

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 11 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Ms. Stephanie Tulk, George Mason University, Fairfax, United States, stulk@mtu.edu