The FBI has warned internet users that “criminals exploit generative AI to commit fraud on a larger scale which increases the believability of their schemes.” Now Google has responded with new alerts under development in Chrome that also deploy AI to fight back. This follows a similar move by Microsoft Edge, and both browsers will warn users away from dangerous websites.
The issue is AI’s enhancement of the image, text and layout quality of webpages mimicking legitimate brands to trick users. In the past these frauds were much easier to detect, but now the pictures and copy can be near perfect. There remain other telltale signs, but it’s clear that casual browsers are much more vulnerable than before. As spotted by X user @Leopeva64 last week, “Microsoft Edge could get a new ‘scareware blocker’ that would use AI to detect tech scams.” And now Google seems to be doing the same for Chrome, which “will also use AI to detect scams.”
Just as we have seen with other privacy-preserving scam detection, including scam calls and app behavior with Android 15, it seems that this will utilize on-device AI, setting it apart from the Safe Browsing features that rely on central processing. “The browser will utilize a Large Language Model (LLM) directly on the user’s device to analyze web pages,” @Leopeva64 reports. “By identifying the brand associated with a page and its intended purpose, the LLM can help detect potential scams.”
As Android Authority explains, “the Chrome feature, labeled ‘Client Side Detection Brand and Intent for Scam Detection,’ purports to use a large language model (LLM) to assess the branding and purpose of web pages. It aims to identify suspicious sites that may mimic legitimate brands or attempt to steal personal information. By running the analysis locally on devices, the feature avoids privacy concerns associated with cloud-based solutions… This AI-driven scam detection is still in the experimental phase, and there’s no guarantee it will make it to the stable version of Chrome. However, if rolled out widely, it could be a useful tool for enhancing security for everyday users.”
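Chrome’s internals are not public, but the concept is straightforward to sketch. The snippet below is a minimal illustration of the idea, assuming a hypothetical on-device inference call (runOnDeviceModel, which is not a real Chrome API): the page’s text is handed to a local LLM, which returns the brand the page presents and its apparent intent, with nothing leaving the device.

```typescript
// Minimal sketch of on-device brand-and-intent analysis. "runOnDeviceModel"
// is a hypothetical stand-in for whatever local inference API the browser
// actually uses; Chrome's real implementation is not public.

interface PageVerdict {
  brand: string;   // brand the page presents itself as, e.g. "ExampleBank"
  intent: string;  // what the page asks the user to do, e.g. "login"
}

async function analyzePage(
  pageText: string,
  runOnDeviceModel: (prompt: string) => Promise<string>
): Promise<PageVerdict> {
  const prompt = [
    "Identify (1) the brand this web page presents itself as and",
    "(2) its intent: login, payment, shopping, or informational.",
    'Reply as JSON: {"brand": "...", "intent": "..."}',
    "PAGE TEXT:",
    pageText.slice(0, 4000), // keep input short for a small on-device model
  ].join("\n");
  // Everything stays on the device: the page text is never uploaded.
  return JSON.parse(await runOnDeviceModel(prompt)) as PageVerdict;
}
```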
The intent is for the AI to spot discrepancies that users miss, including comparing a webpage’s metadata with its apparent content and purpose. “Scammers often mimic legitimate brands to trick users. The LLM can help identify discrepancies between the displayed brand and the actual brand operating the website.” This is all but impossible to do yourself while casually browsing.
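That discrepancy check can be illustrated in a few lines. This is a hypothetical sketch rather than Chrome’s actual logic: the brand the model says a page displays is compared against the domains that brand is known to use, and the brand-to-domain table here is invented for the example.

```typescript
// Illustrative discrepancy check: the brand a page *displays* is compared
// with the domain actually serving it. All brands and domains are made up.

const KNOWN_BRAND_DOMAINS: Record<string, string[]> = {
  ExampleBank: ["examplebank.com"],
  ExampleShop: ["exampleshop.com"],
};

function brandMismatch(pageUrl: string, displayedBrand: string): boolean {
  const host = new URL(pageUrl).hostname;
  const legitimate = KNOWN_BRAND_DOMAINS[displayedBrand];
  if (!legitimate) return false; // unknown brand: this check has no verdict
  // Mismatch: page claims a known brand but is served from a domain that
  // brand does not own, e.g. a look-alike with swapped characters.
  return !legitimate.some((d) => host === d || host.endsWith("." + d));
}

// "examp1ebank-secure.net" may render a pixel-perfect ExampleBank login
// page, but the serving domain gives it away -> true, so warn the user.
console.log(brandMismatch("https://examp1ebank-secure.net/login", "ExampleBank"));
```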
And just as with Google’s scam call detection, by understanding the nature of a typical attack and how it unfolds, the AI can trigger an alert. “Scammers often aim to steal personal information or money. The LLM can analyze the page’s content and intent to identify potential red flags, such as attempts to phish for credentials, sell counterfeit goods, or engage in other fraudulent activities.”
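Put together, such signals might feed a simple warning decision, along the lines of the sketch below. The signal names and the combination logic are assumptions for illustration only, not Chrome’s actual rules.

```typescript
// Sketch of combining brand and intent signals into a warning, per the red
// flags quoted above (credential phishing, counterfeit goods). Assumed logic.

interface ScamSignals {
  brandMismatch: boolean;      // displayed brand vs serving domain disagree
  asksForCredentials: boolean; // page requests passwords or card numbers
  sellsGoods: boolean;         // page takes payment for products
}

function shouldWarn(s: ScamSignals): boolean {
  // A convincing fake storefront or login page is only dangerous once it
  // tries to take something: credentials or money.
  return s.brandMismatch && (s.asksForCredentials || s.sellsGoods);
}

console.log(
  shouldWarn({ brandMismatch: true, asksForCredentials: true, sellsGoods: false })
); // true -> surface a scam alert before the user types anything
```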
These updates are still in development, with no confirmation on timing or release. But as @Leopeva64 notes, “while this is an experimental feature in Chrome Canary, it demonstrates Google’s commitment to enhancing user safety by leveraging the power of AI to combat online scams.”
This follows Google’s other recent update, with store reviews also under development to warn users when an online shopping website cannot be trusted. This holiday season has been marred for many users by a frightening surge in web fraud and phishing emails.
The deployment of AI to keep users safer is critical, especially given that “it can be difficult to identify when content is AI-generated,” as the FBI has warned. And while the bureau has provided tips for staying safe, they all pit human judgment against AI. This new update is a much better way forward.