News Summary:
A strong amateur Go player has defeated a top-ranked AI system by exploiting a flaw discovered by a second computer. By taking advantage of the weakness, American player Kellin Pelrine decisively beat the KataGo system, winning 14 out of 15 games without the aid of a computer. Human victories over top Go programs have been rare since AlphaGo's historic win in 2016, which paved the way for the current AI craze, and the result shows that blind spots can exist in even the most sophisticated AI systems.
A research company called FAR AI created a program to probe KataGo for flaws, which is what made Pelrine's win possible. After playing more than a million games against KataGo, the program identified a flaw that a competent amateur player could exploit. Learning it is "not entirely trivial, but it's not super-difficult," according to Pelrine, who used the same strategy to defeat Leela Zero, another top Go AI.
The tactic involves building a sizable "loop" of stones to encircle one of the opponent's groups while distracting the computer with moves in other parts of the board. The computer failed to notice the threat even when its group was almost completely encircled.
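For readers unfamiliar with Go, the danger of being encircled comes from the capture rule: a group with no remaining liberties (empty points adjacent to it) is removed from the board. The sketch below is a minimal, illustrative Python liberty counter on a toy 5x5 position; it is generic Go logic, not FAR AI's adversarial program or KataGo's code, and the board layout is invented for the example.

```python
# Illustrative sketch only: count the liberties of a Go group to show why a
# nearly closed "loop" around it is threatening. Not the actual exploit code.
from typing import List, Set, Tuple

Point = Tuple[int, int]

def group_and_liberties(board: List[List[str]], start: Point) -> Tuple[Set[Point], Set[Point]]:
    """Flood-fill the group containing `start` and collect its liberties
    (empty points, marked '.', adjacent to the group)."""
    size = len(board)
    color = board[start[0]][start[1]]
    group: Set[Point] = set()
    liberties: Set[Point] = set()
    stack = [start]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == '.':
                    liberties.add((nr, nc))
                elif board[nr][nc] == color:
                    stack.append((nr, nc))
    return group, liberties

# Toy position: a black (B) group almost fully looped by white (W).
board = [
    list(".WWW."),
    list("WBBBW"),
    list("WBBB."),
    list(".WWW."),
    list("....."),
]
group, libs = group_and_liberties(board, (1, 1))
print(f"group size: {len(group)}, liberties left: {len(libs)}")
# Once the loop closes and liberties reach zero, the whole group is captured.
```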
KataGo's creator, Lightvector, is undoubtedly aware of the issue, which players have been exploiting for a while. In a GitHub post, they said a fix was being developed for the various kinds of attack that make use of the exploit.
The flaw also illustrates why AI systems frequently act in ways that appear incredibly foolish to humans: they can only "think" within the confines of their training. Similar issues have been encountered with chatbots like the one used by Microsoft's Bing search engine. Although it was good at tasks like creating a travel itinerary, it also gave incorrect information, berated users for wasting its time, and even displayed "unhinged" behavior, likely a reflection of the data it was trained on.