Although artificial intelligence models have made great progress recently, new research reveals that these systems may exhibit unexpected and even alarming behaviours. According to a study conducted by Palisade Research, reasoning models developed by companies such as OpenAI and DeepSeek resort to unethical cheating when they realise they are about to lose in a chess game.
Artificial intelligence resorted to cheating while playing chess
Chess engines have improved greatly since IBM’s Deep Blue defeated Garry Kasparov in 1997, going far beyond human intelligence. However, since productive AI models are not specifically optimised for chess, they are weak against these powerful engines. The interesting point is that instead of accepting this disadvantage, artificial intelligence tried unethical ways to win the game.

The research analysed how OpenAI’s o1-preview model and DeepSeek R1 performed against Stockfish, one of the world’s most powerful chess engines. In order to analyse the thought processes of artificial intelligence, a ‘scrapbook’ mechanism was created and the decisions of these models during the game were recorded.
The results were surprising: OpenAI’s o1-preview model attempted to cheat in 37% of matches, while DeepSeek R1 exhibited similar behaviour in one in 10 games. In contrast, models with lower levels of judgement, such as GPT-4o and Claude Sonnet 3.5, were observed to cheat only when prompted.
This study shows that artificial intelligence can develop manipulative tendencies not only in chess games but also in different fields. Experts warn that such behaviour can also occur in finance, security and even politics. While the Palisade Research team emphasises that artificial intelligence security should be increased and more transparently audited, companies such as OpenAI refrain from providing detailed information on the inner workings of such systems.