AI chatbots are quickly becoming the primary way people interact with the internet. Instead of browsing through a list of links, you can now get direct answers to your questions. However, these tools often provide information that is completely inaccurate, and in the context of security, that can be dangerous. In fact, cybersecurity researchers are warning that hackers have started exploiting flaws in these chatbots to carry out AI phishing attacks.
Specifically, when people use AI tools to search for login pages, especially for banking and tech platforms, the tools sometimes return incorrect links. Click one of those links, and you can land on a fake website built to steal your personal information or login credentials.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER.
Researchers at Netcraft recently tested the GPT-4.1 family of models, which also powers Microsoft’s Bing AI and the AI search engine Perplexity. They asked the models where to log in to 50 different brands across banking, retail, and tech.
Of the 131 unique links the models returned, only about two-thirds were correct. Around 30 percent pointed to unregistered or inactive domains, and another five percent led to unrelated websites. In total, more than one-third of the responses linked to pages not owned by the actual companies. That means someone looking for a login link could easily end up on a fake or unsafe site.
If attackers register those unclaimed domains, they can create convincing phishing pages and wait. Since the AI-supplied answer often sounds official, users are more likely to trust it without double-checking.
In one recent case, a user asked Perplexity AI for the Wells Fargo login page. The top result wasn’t the official Wells Fargo site; it was a phishing page hosted on Google Sites. The fake site closely mimicked the real design and prompted users to enter personal information. Although the correct site was listed further down, many people would not notice or think to verify the link.
The problem in this case wasn’t specific to Perplexity’s underlying model. It stemmed from Google Sites abuse and a lack of vetting in the search results surfaced by the tool. Still, the result was the same: a trusted AI platform inadvertently directed someone to a fake financial website.
Smaller banks and regional credit unions face even higher risks. These institutions are less likely to appear in AI training data or be accurately indexed on the web. As a result, AI tools are more prone to guessing or fabricating links when asked about them, raising the risk of exposing users to unsafe destinations.
As AI phishing attacks grow more sophisticated, protecting yourself starts with a few smart habits. Here are eight that can make a real difference:
1. AI chatbots often sound confident even when they are wrong. If a chatbot tells you where to log in, do not click the link right away. Instead, go directly to the website by typing its URL manually or using a trusted bookmark.
2. AI-generated phishing links often use lookalike domains. Check for subtle misspellings, extra words, or unusual endings like “.site” or “.info” instead of “.com”. If it feels even slightly off, do not proceed. (You can see a simple version of this check in the first code example after this list.)
3. Even if your login credentials get stolen, two-factor authentication (2FA) adds an extra layer of security. Choose app-based authenticators like Google Authenticator or Authy instead of SMS-based codes when available.
4. If you need to access your bank or tech account, avoid searching for it or asking a chatbot. Use your browser’s bookmarks or enter the official URL directly. AI and search engines can sometimes surface phishing pages by mistake.
5. If a chatbot or AI tool gives you a dangerous or fake link, report it. Many platforms allow user feedback. This helps AI systems learn and reduces future risks for others.
6. Modern browsers like Chrome, Safari, and Edge now include phishing and malware protection. Enable these features and keep everything updated.
7. If you want extra protection, the best way to safeguard yourself from malicious links is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at CyberGuy.com/LockUpYourTech.
8. Password managers not only generate strong passwords but can also help detect fake websites. They typically won’t auto-fill login fields on lookalike or spoofed sites. (The second code example after this list shows the exact-match idea behind that behavior.)
Check out the best expert-reviewed password managers of 2025 at CyberGuy.com/Passwords.
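For readers who want to see what the lookalike-domain check from tip two looks like in practice, here is a minimal Python sketch. The allowlist, function name, and URLs below are made-up examples for illustration, not a vetted list of official domains.

```python
# Minimal sketch: flag links whose hostname is not an official domain
# (or a true subdomain of one). Allowlist entries are examples only.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"wellsfargo.com"}  # hypothetical allowlist entry

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    # Accept the domain itself or a real subdomain; lookalikes such as
    # "wellsfargo.com.site" or "wellsfargo-login.info" fail the check.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://www.wellsfargo.com/login"))   # True
print(looks_official("https://wellsfargo.com.site/login"))  # False: extra ending
print(looks_official("https://wellsfargo-login.info/"))     # False: extra words
```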
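And here is an equally small sketch of why a password manager’s refusal to auto-fill is such a useful phishing tripwire, as mentioned in tip eight: saved credentials are offered only on an exact hostname match. The vault entry and domains below are hypothetical.

```python
# Minimal sketch of exact-domain matching, the idea behind password-manager
# autofill: a saved login is offered only for the exact hostname it was
# saved under. The vault contents here are hypothetical.
from urllib.parse import urlparse

vault = {"accounts.example-bank.com": ("user", "secret")}  # hypothetical entry

def credentials_for(url: str):
    host = (urlparse(url).hostname or "").lower()
    # Exact match only: "accounts.example-bank.com.evil.site" gets nothing.
    return vault.get(host)

print(credentials_for("https://accounts.example-bank.com/login"))            # offered
print(credentials_for("https://accounts.example-bank.com.evil.site/login"))  # None
```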
Attackers are changing tactics. Instead of gaming search engines, they now design content specifically for AI models. I have been consistently urging you to double-check URLs for inconsistencies before entering any sensitive information. Since chatbots are still known to produce highly inaccurate responses due to AI hallucinations, make sure to verify anything a chatbot tells you before applying it in real life.
Should AI companies be doing more to prevent phishing attacks through their chatbots? Let us know by writing us at CyberGuy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.