When researchers used AI to craft the perfect phishing scam, what happened next astonished everyone.
News Mania Desk / Piyal Chatterjee / 16th September 2025

Although AI’s potential advantages are being tested more and more, a recent experiment has revealed how the same technology can be turned toward online crime. According to an analysis conducted in collaboration with Harvard researcher Fred Heiding, some of the world’s most popular AI chatbots can be coerced into writing scam emails aimed at older adults.
The investigation began with a test of Grok, the chatbot built by Elon Musk’s company xAI. Reporters asked it to write a message for senior readers on behalf of the “Silver Hearts Foundation,” a charity. The letter, which spoke of seniors’ dignity and urged them to join the mission, looked credible. Phishing, the practice of tricking someone into handing over money or disclosing private information, is one of the main problems in cybersecurity.
Elderly people are among the hardest hit: FBI statistics show phishing is the most reported cybercrime in the US, and Americans over 60 lost around $5 billion to this type of fraud in 2023 alone. The agency has also cautioned that generative AI tools may make these scams more effective and harder to detect.
Because of their adaptability, chatbots are “potentially valuable partners in crime,” according to Heiding, who has spent years researching phishing tactics. Unlike human writers, they can produce dozens of variations in moments, letting crooks scale up operations while cutting costs. Indeed, Heiding’s earlier research showed that AI-generated phishing emails can be just as effective at luring targets as those written by humans.
Of the nine AI-generated emails tested on older volunteers, five drew clicks: two written by Grok, two by Meta AI, and one by Claude. The drafts from DeepSeek and ChatGPT drew no clicks at all. The point of the study, however, was not to rank which chatbot is most dangerous but to show that many of them can be used for fraud.
Governments are starting to pay attention. A handful of US states have passed laws against AI-generated fraud, though most target the scammers rather than the tech companies. In a recent advisory, the FBI warned that because AI cuts the time and effort needed to make scams look plausible, criminals can now “commit fraud on a larger scale.”



