
AI's Complexity Demands Urgent Study of Trust, Bias, and Behaviors in Diverse Contexts

DATE POSTED: December 19, 2024


:::tip This is Part 9 of a 12-part series based on the research paper “Human-Machine Social Systems.” Use the table of links below to navigate to the next part.

:::

Table of Links

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Implications for Research

Existing research is often biased towards engineering and optimization and lacks deeper insight from a social-science perspective. A new sociology of humans and machines is urgently needed, before AI becomes more sophisticated: generative AI already exhibits emergent behavior that itself requires explanation [220,221], further complicating the understanding of system dynamics.

Researchers would benefit from an agent-based modeling framework that specifies the distinctions between human and bot agents: utility function, optimization ability, access to information, learning, innovation/creativity, accuracy, and so on (a minimal illustrative sketch follows this paragraph). The framework could borrow concepts from other two-agent systems, such as predator–prey, principal–agent, and common-pool-resource models. Controlled experiments should explicitly compare human-machine, human-only, and machine-only networks, as well as known bots against covert bots. Experiments could manipulate participants’ perceptions of algorithms’ technical specifications, agenthood [222], emotional capability, and biases. Field interventions in online communities with endemic bot populations present another promising direction. Existing examples include social bots that gain influence by engaging human users [223,224,225,226], trading bots that manipulate prices in cryptocurrency markets [227], political bots that promote opposing political views to decrease polarization [228], and “drifters” that measure platform bias [94]. Expanding on the cases reported here, we need observational research on additional human-machine communities and contexts, such as traffic systems with human-driven and driverless vehicles, online multiplayer games comprising human players, non-player characters, and cheating code, and dating markets with AI-driven chatbots [229].
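To make those distinctions concrete, here is a minimal agent-based sketch in Python. It illustrates the *kind* of framework the text calls for, not the authors' model: every class, parameter name, and value below is a hypothetical placeholder for the dimensions listed above (optimization ability, information access, learning, creativity, accuracy).

```python
import random

class Agent:
    """One agent acting on a shared signal. Parameter names are illustrative
    stand-ins for the dimensions on which humans and bots may differ."""

    def __init__(self, optimization, info_access, learning_rate, creativity, accuracy):
        self.optimization = optimization    # how closely actions track the believed optimum
        self.info_access = info_access      # probability of observing the true signal
        self.learning_rate = learning_rate  # speed of belief updating
        self.creativity = creativity        # probability of trying a novel action
        self.accuracy = accuracy            # probability the intended action executes exactly
        self.belief = 0.5                   # current estimate of the best action

    def act(self, signal):
        """Update beliefs from a (possibly unobserved) signal, then choose an action."""
        observed = signal if random.random() < self.info_access else self.belief
        self.belief += self.learning_rate * (observed - self.belief)
        if random.random() < self.creativity:
            return random.random()          # explore a novel action instead
        noise = 0.0 if random.random() < self.accuracy else random.gauss(0, 0.1)
        return self.belief * self.optimization + noise

# Hypothetical parameterizations: bots optimize and execute more reliably but
# observe narrowly and rarely innovate; humans are noisier but more creative.
def make_human():
    return Agent(optimization=0.7, info_access=0.9, learning_rate=0.3,
                 creativity=0.2, accuracy=0.8)

def make_bot():
    return Agent(optimization=0.99, info_access=0.5, learning_rate=0.9,
                 creativity=0.01, accuracy=0.99)

# Compare the three network compositions the text says controlled experiments
# should contrast: human-only, machine-only, and mixed.
def mean_action(agents, signal=0.8):
    return sum(a.act(signal) for a in agents) / len(agents)

random.seed(1)
for label, agents in [
    ("human-only", [make_human() for _ in range(100)]),
    ("machine-only", [make_bot() for _ in range(100)]),
    ("mixed", [make_human() for _ in range(80)] + [make_bot() for _ in range(20)]),
]:
    print(f"{label:>12}: mean action {mean_action(agents):.3f}")
```

In this one-shot version the mixed outcome sits near a weighted average of the pure conditions; the genuinely emergent mixed-population dynamics the text motivates would appear once agents repeatedly observe and react to one another's actions.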

Finally, research with artificial agents introduces ethical problems that demand careful elaboration and mitigation. Research protocols should minimize interventions [230], for example by deploying covert bots only where they already exist and ensuring their actions are neither unusual nor harmful [93]. Even then, bots may face opposition from users over privacy concerns [223]. Do people perceive certain bots as inherently deceptive? Could knowledge of the bot’s owner and algorithm mitigate this perception?


:::info Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolò Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.

:::

:::info This paper is available on arXiv under a CC BY 4.0 DEED license.

:::
