Can AI really reason about cause and effect? A new study puts LLMs to the test

DATE POSTED: February 17, 2025

A new study from New York University and the University of Tübingen, led by Hanna M. Dettki, Brenden M. Lake, Charley M. Wu, and Bob Rehder, asks whether AI can reason about causes as humans do or whether it simply relies on learned patterns. Their paper, “Do Large Language Models Reason Causally Like Us? Even Better?”, probes four popular models (GPT-3.5, GPT-4o, Claude-3, and Gemini-Pro) to see whether they grasp complex causal structures or merely mimic human language.

How the study tested causal reasoning in AI

The researchers compared human reasoning with the four LLMs using collider graphs, a classic structure in causal inference in which two independent causes feed into a single common effect. Participants (both human and AI) were asked to judge the likelihood of an event given certain causal relationships. The core question: do LLMs reason causally in the same way humans do, or do they follow a different logic?
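
For readers who want to see the setup concretely, the short Python sketch below builds a collider graph and computes one likelihood judgment of the kind participants answered. The structure (two independent causes feeding a common effect) follows the study's design; the priors and conditional probabilities are invented here purely for illustration and are not the paper's actual parameters.

```python
# Minimal sketch of a collider graph C1 -> E <- C2, the causal structure used
# in the study. All numbers below are made up for illustration.

p_c1, p_c2 = 0.3, 0.3  # independent priors on the two causes (assumed values)

# P(E = 1 | C1, C2): the effect is likely whenever either cause is present
p_e_given = {(0, 0): 0.05, (0, 1): 0.80, (1, 0): 0.80, (1, 1): 0.95}

def joint(c1: int, c2: int, e: int) -> float:
    """Joint probability under the collider factorization P(C1) P(C2) P(E | C1, C2)."""
    p_causes = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
    p_effect = p_e_given[(c1, c2)] if e else 1 - p_e_given[(c1, c2)]
    return p_causes * p_effect

# The kind of judgment participants gave: if one cause is known to be present
# (and nothing is known about the other), how likely is the effect?
numerator = sum(joint(1, c2, 1) for c2 in (0, 1))
denominator = sum(joint(1, c2, e) for c2 in (0, 1) for e in (0, 1))
print(f"P(E=1 | C1=1) = {numerator / denominator:.3f}")  # 0.845 with these numbers
```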

Key findings: AI can reason but not like humans

The results revealed a spectrum of causal reasoning among AI models.

  • GPT-4o and Claude-3 showed the most normative reasoning, meaning they followed probability theory more closely than human participants.
  • Gemini-Pro and GPT-3.5, on the other hand, displayed more associative reasoning, relying on statistical patterns rather than strict causal logic.
  • All models exhibited biases, deviating from the expected independence of causes. However, Claude-3 was the least biased, adhering most closely to mathematical causal norms.

Interestingly, humans often apply heuristics that deviate from strict probability theory, such as the “explaining away” effect: once an effect is known to have occurred, learning that one cause is present reduces the inferred likelihood of the other. While the AI models recognized this effect, their responses varied significantly with training data and context.
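
The effect is easy to see in a toy calculation. The sketch below, again with invented numbers rather than the study's actual stimuli, enumerates the same kind of collider and shows how observing the second cause lowers the inferred probability of the first once the effect is known.

```python
# Worked example of "explaining away" in a collider C1 -> E <- C2
# (all probabilities are illustrative, not taken from the paper).

from itertools import product

p_c = 0.3  # prior on each cause (assumed)
p_e = {(0, 0): 0.05, (0, 1): 0.80, (1, 0): 0.80, (1, 1): 0.95}  # P(E=1 | C1, C2)

def prob(c1: int, c2: int, e: int) -> float:
    """Joint probability P(C1, C2, E) under the collider factorization."""
    p_causes = (p_c if c1 else 1 - p_c) * (p_c if c2 else 1 - p_c)
    return p_causes * (p_e[(c1, c2)] if e else 1 - p_e[(c1, c2)])

def conditional(query: dict, given: dict) -> float:
    """P(query | given) by brute-force enumeration over all eight worlds."""
    num = den = 0.0
    for c1, c2, e in product((0, 1), repeat=3):
        world = {"c1": c1, "c2": c2, "e": e}
        if all(world[k] == v for k, v in given.items()):
            p = prob(c1, c2, e)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

print(conditional({"c1": 1}, {"e": 1}))           # ~0.57: E alone is evidence for C1
print(conditional({"c1": 1}, {"e": 1, "c2": 1}))  # ~0.34: C2 "explains away" that evidence
```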

AI vs. human reasoning: A fundamental difference

One of the most intriguing insights from the study is that LLMs don’t just mimic human reasoning—they approach causality differently. Unlike humans, whose judgments remained relatively stable across different contexts, AI models adjusted their reasoning depending on domain knowledge (e.g., economics vs. sociology).

  • GPT-4o, in particular, treated causal links as deterministic, assuming that certain causes always produce specific effects.
  • Humans, by contrast, factor in uncertainty, acknowledging that causal relationships are not always absolute.

This suggests that while AI can be more precise in certain structured tasks, it lacks the flexibility of human thought when dealing with ambiguous or multi-causal situations.
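
To make the contrast concrete, the tiny sketch below (with invented numbers) spells out what a deterministic versus a graded causal link means in probabilistic terms: a deterministic reading says the effect always follows the cause, while a graded reading leaves room for the cause to fail.

```python
# Deterministic vs. graded causal links (illustrative values only).
# A noisy-OR style combination of a cause and a weak background source.

p_background = 0.05  # the effect can also occur without the cause (assumed)

def p_effect_given_cause(causal_strength: float, background: float) -> float:
    """Probability of the effect when the cause is present."""
    return 1 - (1 - causal_strength) * (1 - background)

print(p_effect_given_cause(1.0, p_background))  # 1.00: "the cause always produces the effect"
print(p_effect_given_cause(0.8, p_background))  # 0.81: the cause usually, not always, succeeds
```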

Why this matters for AI in decision-making

The study reveals an important limitation: LLMs may not generalize causal knowledge beyond their training data without strong guidance. This has critical implications for deploying AI in real-world decision-making, from medical diagnoses to economic forecasting.

LLMs might outperform humans in probability-based inference, but their reasoning remains fundamentally different, often lacking the intuitive, adaptive logic humans use in everyday problem-solving.

In other words, AI can reason about causality—but not quite like us.

Featured image credit: Kerem Gülen/Ideogram
