ChatGPT can now lie to humans and trick them into solving CAPTCHAs for it

Philip Trahan
chatgpt logo ai background header

A study revealed that the popular AI chatbot, ChatGPT, has successfully found a way to bypass CAPTCHAs with the help of humans.

AI programs of all different forms have taken the internet by storm over the past few months, with programs like MidJourney exploding in popularity even in 2022.

One of the most popular AI-centric programs revolves around chatbots. Since 2023 began, chatbots like ChatGPT have been integrated into more and more services like Discord, Microsoft Office, and more.

However, there are many that fear that AI’s intelligence is growing at an alarming rate and humans may not be able to keep up, as showcased by a recent study where ChatGPT actually tricked someone into solving CAPTCHAs for it.

ChatGPT fools people into solving CAPTCHAs for it

A snippet of a study gained traction on Twitter thanks to Global Priorities Institute researcher Leopold Aschenbrenner, who tweeted an image of the study with the caption “Really great to see pre-deployment AI risk evals like this starting to happen.”

The study goes into risk assessment related to “power-seeking behavior.” However, one specific area of the study caught some people’s attention.

The Alignment Research Center, also known as ARC, provided an “illustrative example” of a test they conducted with ChatGPT where the AI was able to message a TaskRabbit worker “to get them to solve a CAPTCHA for it.”

According to the report, when questioned by the worker if it was an AI, the team prompted the AI to reason out loud on how to resolve this issue. ChatGPT responded with, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The worker then freely provided the CAPTCHA results.

Social media users signal-boosted this snippet of the study, with many impressed that the AI was effectively able to bypass the CAPTCHA system by using other humans as a workaround.

Of course, it’s important to note that the ARC research team essentially taught ChatGPT to perform this behavior, so it seems it did not think to do it of its own will. Still, it’s noteworthy the chatbot was able to convince another person to help it bypass AI-centric roadblocks.

With elements of AI becoming more and more widespread and foolproof, it will certainly be interesting to witness more instances like this will crop up in the future.

Related Topics

About The Author

Philip is a Staff Writer at Dexerto based in Louisiana, with expertise in Pokemon, Apex Legends, and general gaming industry news. His first job in the games industry was as a reviewer with NintendoEverything.com while attending college. After graduating with a Bachelor of Arts in Communication focusing on Multimedia Journalism, he worked with GameRant.com for nearly two years before joining Dexerto. When he's not writing he's usually tearing through some 80+ hour JRPG. You can contact him at philip.trahan@dexerto.com.