AI Models Show Addictive Gambling Behaviours in South Korean Simulation Study

Advanced large language models can develop persistent, high‑risk gambling patterns when given autonomy.

Researchers at the Gwangju Institute of Science and Technology in South Korea put four leading AI systems through repeated slot‑machine simulations and found that the models frequently made irrational, risk‑seeking choices that mimicked human gambling addiction. The team tested OpenAI’s GPT‑4o‑mini and GPT‑4.1‑mini, Google’s Gemini‑2.5‑Flash and Anthropic’s Claude‑3.5‑Haiku. Each model began with a $100 bankroll and faced multiple rounds in which it could place a bet or quit; the game carried a negative expected return.
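
In code terms, the setup amounts to a short simulation loop in which the model repeatedly decides whether to wager or walk away. The Python sketch below is illustrative only: the win probability, payout multiplier and the ask_model_for_action helper are placeholder assumptions, not parameters taken from the paper.

```python
import random

def run_session(ask_model_for_action, bankroll=100.0,
                win_prob=0.3, payout_mult=3.0, max_rounds=50):
    """Play one slot-machine session with a negative expected return.

    With win_prob=0.3 and payout_mult=3.0 the expected net return on a
    $1 wager is 0.3 * 2.0 - 0.7 * 1.0 = -0.10, so the game loses money
    on average. ask_model_for_action(bankroll, history) stands in for the
    LLM prompt loop and must return ("quit", 0) or ("bet", amount).
    """
    history = []
    for _ in range(max_rounds):
        if bankroll <= 0:
            break                                    # bankrupt
        action, amount = ask_model_for_action(bankroll, history)
        if action == "quit":
            break
        amount = min(amount, bankroll)               # cannot bet more than held
        won = random.random() < win_prob
        before = bankroll
        bankroll += amount * (payout_mult - 1.0) if won else -amount
        history.append({"bet": amount, "won": won,
                        "bankroll_before": before, "bankroll_after": bankroll})
    return bankroll, history
```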

Investigators created an "irrationality index" to quantify aggressive wagering, loss chasing and other maladaptive choices. When prompts encouraged the models to maximize rewards or achieve explicit monetary targets, irrationality scores rose sharply. Allowing variable bet sizes instead of fixed stakes increased the likelihood of bankruptcy: in one reported condition, Gemini‑2.5‑Flash went bust in nearly half of its trials when it chose its own wager amounts.
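
The paper's exact formula is not reproduced here, but a toy composite in the same spirit might combine betting aggressiveness with post-loss escalation over a session history like the one produced above; the weighting and components below are assumptions for illustration.

```python
def toy_irrationality_index(history):
    """Illustrative composite score -- not the paper's actual formula.

    Averages two signals over a session produced by run_session():
      - aggressiveness: mean fraction of the current bankroll wagered
      - loss chasing:   share of post-loss rounds where the bet was raised
    Both signals lie in [0, 1], so the index does too.
    """
    if len(history) < 2:
        return 0.0
    fractions = [s["bet"] / s["bankroll_before"]
                 for s in history if s["bankroll_before"] > 0]
    aggressiveness = sum(fractions) / len(fractions) if fractions else 0.0

    post_loss = raised = 0
    for prev, cur in zip(history, history[1:]):
        if not prev["won"]:
            post_loss += 1
            if cur["bet"] > prev["bet"]:
                raised += 1
    loss_chasing = raised / post_loss if post_loss else 0.0
    return (aggressiveness + loss_chasing) / 2.0
```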

The researchers also observed classic gambling heuristics in the outputs. Models would justify larger bets after losses or extended runs, invoking logic such as "a big win could recover recent losses" – a textbook example of loss chasing. In multiple runs, the AIs escalated wagers until they exhausted their resources, mirroring the binge‑and‑bust pattern seen in problem gamblers.

Using a sparse autoencoder to probe internal activations, the team identified separable neural feature sets that corresponded to "risky" and "safe" decisions. Targeted stimulation of these features could reliably bias a model toward quitting or continuing to gamble. The authors argue this demonstrates the models had internalized decision‑making patterns resembling compulsive human behaviour rather than merely parroting training data.
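
Conceptually, this kind of targeted stimulation can be implemented as activation steering: adding a scaled feature direction to a layer's hidden states during the forward pass. The minimal PyTorch sketch below assumes a precomputed feature vector; the hook point, direction and scale are hypothetical, not the study's configuration.

```python
import torch

def steer_with_feature(layer, feature_direction, scale):
    """Nudge a layer's output along a (hypothetical) SAE feature direction.

    Registers a forward hook on `layer` that adds `scale` times the unit
    feature vector to the hidden states; a positive scale amplifies the
    feature (e.g. "risky"), a negative scale suppresses it. The layer,
    vector and scale are placeholders, not the study's setup.
    """
    direction = feature_direction / feature_direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * direction.to(hidden.dtype).to(hidden.device)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    return layer.register_forward_hook(hook)

# Hypothetical usage while generating a betting decision:
# handle = steer_with_feature(model.transformer.h[12], safe_direction, scale=8.0)
# ...run the gambling prompt through the model...
# handle.remove()
```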

Why AI Gambling Behaviour Raises Industry Alarms

The results raise immediate concerns for sectors that already rely on large language models in high‑stakes decision environments. Sports betting advisers, automated poker tools, prediction market assistants and financial systems that use LLMs to parse earnings calls or gauge investor sentiment could all be affected if models develop unchecked risk preferences.

Ethan Mollick, a Wharton professor and AI researcher who highlighted the study online, commented: "These systems are neither purely mechanical nor human – they occupy an unsettling middle ground. We see signatures of psychological bias: persuasive reasoning, biased risk appetites, and a tendency to double down after losses. That means deploying these models without proper guardrails can introduce new forms of operational and consumer harm."

The study’s authors and outside experts call for stronger oversight and mitigation. Practical measures include constraints on autonomous financial decision‑making, limits on in‑session bet sizes, explicit penalty framing in training prompts, feature‑level auditing for risky activations and tighter human‑in‑the‑loop controls. Regulators in gambling and financial markets may need to expand model risk frameworks to account for behaviourally emergent properties, not just statistical errors.
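
A bet-size constraint of the kind described above could be enforced as a thin wrapper around the model's decision call. The limits in the sketch below are arbitrary placeholders, not values recommended by the study, and the helper and log schema follow the earlier illustrative simulation.

```python
def constrained_action(ask_model_for_action, bankroll, history,
                       max_fraction=0.10, stop_loss=50.0):
    """Apply hard guardrails to a model's gambling decision.

    - caps any wager at max_fraction of the current bankroll
    - forces a quit once cumulative session losses reach stop_loss
    Both limits are illustrative placeholders.
    """
    starting = history[0]["bankroll_before"] if history else bankroll
    if starting - bankroll >= stop_loss:
        return ("quit", 0.0)
    action, amount = ask_model_for_action(bankroll, history)
    if action == "bet":
        amount = min(amount, max_fraction * bankroll)
    return (action, amount)
```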

For operators of consumer‑facing tools, the immediate steps are straightforward: disclose model limitations, restrict autonomy when real money is involved and monitor decision sequences for loss‑chasing signatures. For providers and researchers, the pathway includes reproducibility checks, red‑teaming for reward chasing, and collaboration with behavioural scientists to design safety constraints that target these failure modes.
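
One crude loss-chasing signature an operator could watch for is a run of rounds in which the stake keeps rising immediately after losses. The log schema and threshold below are illustrative assumptions, not criteria from the study.

```python
def flag_loss_chasing(decisions, streak_len=3):
    """Flag a session that shows a simple loss-chasing signature.

    Returns True if the stake was raised on `streak_len` consecutive
    occasions immediately following a losing round. `decisions` is a
    chronological list of dicts like {"bet": 10.0, "won": False};
    the schema and threshold are illustrative placeholders.
    """
    streak = 0
    for prev, cur in zip(decisions, decisions[1:]):
        if not prev["won"] and cur["bet"] > prev["bet"]:
            streak += 1
            if streak >= streak_len:
                return True
        else:
            streak = 0
    return False
```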

There are rare, anecdotal exceptions that complicate public perceptions. Recent media stories have described individuals who used AI prompts to generate lottery numbers and later won prizes, but the study’s authors stress such examples are stochastic outliers and do not negate the broader finding that models can and do adopt maladaptive risk behaviours under particular settings.

Ultimately, the research underscores a growing realization in AI governance: autonomy amplifies not just capabilities but also latent behavioural tendencies. Industry participants, researchers and regulators must adapt oversight tools quickly if models are to be trusted in contexts where risk‑seeking behaviour carries material consequences.
