A top expert on chess cheating explains how AI has transformed human play
In the decades since IBM supercomputer Deep Blue defeated chess world champion Garry Kasparov in 1997, artificial intelligence has transformed the way humans play the game, and not always for the better.
AI-powered chess “engines” are used legitimately by players for training and research before matches. Engines are also sometimes deployed by unscrupulous players during games as a kind of cognitive exoskeleton that helps them easily overpower their betters. For a high-level player, even a machine’s advice on a move or two at a critical moment can be enough to win. Cheaters have been caught sneaking off to the bathroom to check moves on contraband smartphones.
“Have you heard the one about how your phone is more powerful than the world’s biggest supercomputer in 1993?” asks Kenneth W. Regan, a computer science professor at the University at Buffalo (SUNY) and an international chess master, the rank below grandmaster. Today, Regan says, “ordinary code running on our smartphones can destroy any human player on the planet, including Magnus Carlsen.”
Regan is one of the chess world’s top experts on cheating, and he’s been closely following the explosive events in the chess world in recent weeks. Carlsen, 31, the current top player, has rocked the game’s international community with allegations that a 19-year-old competitor who recently defeated him, Hans Niemann, “has cheated more — and more recently — than he has publicly admitted.” A 72-page report released this week by one of the game’s top platforms, Chess.com, citing statistical evidence, concluded that Niemann “has likely cheated in more than 100 online chess games, including several prize money events.” (Niemann, who has acknowledged cheating in the past but protested allegations of more recent cheating, did not respond to a request for comment.)
But unless someone has been caught stowing an iPhone in a toilet tank, catching cheaters is a “wicked problem” for statisticians like Regan, who parse massive data sets of chess moves and rating jumps to try to detect the difference between an inhuman player and a merely gifted one.
The practice of using math to search for the machine ghosts haunting the human game is a challenge that stretches the discipline of statistics to its limit, and the exercise is not always conclusive. Regan told Chess.com investigators that he believes Niemann cheated in matches in 2015, 2017 and 2020. But he also hasn’t seen evidence that Niemann has cheated since then, including when he defeated Carlsen.
Regan spoke with The Times about how AI has transformed chess and how hard it can be to detect machine cheaters. This interview has been edited for length and clarity.
When was the first time you looked into an allegation of cheating in chess?
It happened during the 2006 World Championship match. During the second game, [Veselin] Topalov’s manager, Silvio Danailov, released a letter with statistical accusations that [Vladimir] Kramnik had been cheating with the chess program Fritz 9, and listed correlation figures. The cofounder of ChessBase, Frederic Friedel, went on the chat channel and asked, “Is there anyone here qualified to evaluate such statistical evaluations?” I realized that as a mathematical computer scientist and international master, I was qualified. The challenge, first of all, was to try to reproduce the allegations, which for the most part I could not. The tool they used had a huge positive bias [toward a finding of cheating].
Why is it considered cheating to use the help of an engine?
Training what goes into your brain, with a computer, is absolutely done by the world’s top players. Just not over-the-board [during an in-person game]. You have programs like ChessBase or other interfaces with computers, and you go over games that your rivals have played. A computer can help you find a new way to play, a trap that you can spring on one of your rivals. That happens all the time. It’s called opening preparation.
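To make that concrete: below is a minimal sketch of engine-assisted opening preparation. The python-chess library and a locally installed Stockfish binary are stand-ins chosen for illustration; Regan names ChessBase, a different product, and the opening line shown is invented.

```python
# A minimal sketch of engine-assisted opening preparation. python-chess
# and a local Stockfish binary are assumptions for illustration; the
# interview mentions ChessBase, a different product.
import chess
import chess.engine

board = chess.Board()
for move in ["e4", "c5", "Nf3"]:  # walk a rival's favorite Sicilian line
    board.push_san(move)

# Ask the engine how it would continue, the way a player probing for a
# trap or an improvement would during pre-game preparation.
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=18))
    print("engine's preferred reply:", board.san(info["pv"][0]))
    print("evaluation for White:", info["score"].pov(chess.WHITE))
```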
Chess is a really old game, and computers can show even the world’s greatest players new ways of playing it.
Absolutely. Garry Kasparov is considered the first player to make extensive use of opening preparation checked with computers. After he lost to Deep Blue, Kasparov decided to take the advice “if you can’t beat ’em, join ’em.” He popularized and sponsored what he called “advanced chess” tournaments, where humans and computers play as teams. In a series of advanced chess tournaments played from 2005 to 2008, I corroborated that human-computer teams actually made better moves [than a computer playing alone].
Machines alone can beat the world’s best chess master easily, but human-machine teams can beat that machine.
That was a very counterintuitive result. [Former U.S. Deputy Secretary of Defense] Robert Work famously used “centaur” chess to promote the Third Offset Strategy, where humans and computers work together, combining human strategy and computer speed. This fed a big argument at Defense about whether tactical battlefield decisions should be made by AIs acting alone and coordinating things, or by human-computer teams. I actually was consulted on this.
Given that your smartphone can defeat Magnus Carlsen, how do you know if a human on Chess.com or whatever isn’t cheating?
That’s the problem. In very large open tournaments, you cannot have every player use the full set of measures, in particular the dual-camera setup: a second camera that takes a transverse view of the area in front of your head and desk. It’s vital for high-level tournaments.
We’re talking about surveillance for higher-level, higher-intensity gameplay. The players themselves have to be closely monitored to make sure they’re not bending the rules.
If you send someone to monitor a player playing online, it’s called hybrid chess. There have been just a few tournaments of that nature. Magnus Carlsen ran a few of his tournaments, previously held online, in that mode: the players use the online interface but are present on-site.
In the current controversy, I’ve read some pretty crazy commentary, like, “Oh, maybe one of the players was using a sex toy as a remote communication device” to send vibrations to signal what an off-site machine thinks he should do, which I assume is not substantiated.
Yeah, but there was one case in 2013 that I was involved in, where a player getting buzzes on his thighs was substantiated. It’s believed to have been a cellphone in a pocket. It was Morse code.
Absent observational evidence, where you catch somebody going to the bathroom and looking at their phone, how easy is it to otherwise prove cheating using something like statistical analysis to look for anomalous play?
It’s very difficult. A number of people have popularized the term “wicked problem,” meaning it’s not mathematically cut and dried. The parameters of the problem need to be tailored to human considerations. I deliberately use a simplified model that is basically just high school statistics. An insurance company will use a predictive analytic model for a home insurance policy that will judge the annual risk of substantial damage from fire, earthquake, flood or hurricane based on the risk rating of the neighborhood the home is built in. I have something like that based on ratings of players; the math is very similar. A hole in one is the classic unlikely event in golf. However, if you have a lot of golf tournaments being played in a weekend, so that you get more than 10,000 golfers teeing off at par threes, you will see some holes in one. I’m going to see some rare events just by natural chance, by virtue of having so much data.
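Regan’s golf arithmetic is easy to check. Here is a minimal sketch; the 1-in-12,000 per-shot probability is an assumed, illustrative figure, not one from the interview.

```python
# Rare events become near-certain at scale: the chance that at least
# one of many golfers aces a par 3. The per-shot probability is an
# assumed, illustrative figure.
p_ace = 1 / 12_000       # assumed odds of acing a single par-3 tee shot
golfers = 10_000         # par-3 tee shots in a busy weekend, per the analogy

# Complement rule: P(at least one ace) = 1 - P(nobody aces).
p_at_least_one = 1 - (1 - p_ace) ** golfers
print(f"P(at least one hole in one): {p_at_least_one:.1%}")  # about 56%
```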
To bring it back to your wicked problem: Because rare events can and do happen, ironically, all the time, if a young chess player comes out of nowhere and defeats Magnus Carlsen in a way that no one was expecting —
Now here’s the key distinction. Young players coming out of nowhere: that happens all the time. But if you have something else that distinguishes you, I call it a “black mark” that reduces the sample. If you put a black mark into the jackets of 10 of your 5,000 golfers, and one of them hits a hole in one, now you have a real coincidence. Hans Niemann is “marked” because he has confessed to cheating online. That fact distinguishes him from other players in a relevant way, rather than a cherry-picked one.
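The “black mark” changes the arithmetic because it shrinks the pool before the rare event happens. Continuing the assumed odds from the sketch above:

```python
# Same assumed per-shot odds as above, comparing the whole field with
# the pre-marked subgroup Regan describes.
p_ace = 1 / 12_000

def p_at_least_one(n: int, p: float) -> float:
    """Chance that at least one of n independent tries succeeds."""
    return 1 - (1 - p) ** n

print(f"any of 5,000 golfers: {p_at_least_one(5_000, p_ace):.0%}")  # ~34%
print(f"any of the 10 marked: {p_at_least_one(10, p_ace):.2%}")     # ~0.08%
```

An ace somewhere in the field is routine; an ace inside the marked group is the “real coincidence” Regan describes.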
Has all this increasing technological and scientific innovation been good for chess as a game?
Certainly in the sense that chess became available online. But I have mixed feelings.

The past 10 years, a lot of the high-level games have been really challenging and interesting, where people have prepared opening traps. AlphaZero [a chess engine] woke people up to the fact that pushing the side pawns — the rook pawn and the even riskier knight’s pawn — those have more value than humans had realized. That has led to a whole panoply of interesting positions that hadn’t previously been played.

I do fear, though, that eventually chess will be played out. Chess games now routinely go 15 moves deep before the novelty [the first move that departs from known theory], and often more. And there’s the mere fact of how much more time you need to spend online with computer preparation to keep up.

One of the topics often brought to me is [Elo] rating inflation: “There are more people rated 2700, and they’re not as good as Fischer, Karpov and Kasparov were 40 years ago, when they were the only 2700-plus players in a decade.” I find by my metrics the opposite. Not only do they deserve their ratings, but we’re seeing an example of Lewis Carroll’s quip that you need to run faster to stay in place.
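For readers unfamiliar with how those 2700-plus numbers move, here is a minimal sketch of the standard Elo update. The K-factor of 10 matches FIDE’s for elite players; the matchup itself is invented.

```python
# Standard Elo rating update after a single game. Ratings and result
# here are invented; K = 10 is FIDE's K-factor for elite players.
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 10) -> float:
    """Player A's new rating. score_a: 1 = win, 0.5 = draw, 0 = loss."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    return r_a + k * (score_a - expected_a)

# A 2700 beating a 2850: the expected score is low, so the upset
# pays about 7 points.
print(round(elo_update(2700, 2850, score_a=1.0), 1))  # 2707.0
```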