Google has revealed that malicious actors are using techniques such as landing page cloaking to impersonate legitimate sites to commit fraud.
“Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content, which enables them to deploy the scam directly to users,” said Laurie Richardson, Google’s Vice President of Trust and Safety.
“Landing pages often mimic well-known sites, creating a sense of urgency to manipulate users into purchasing counterfeit or unrealistic products.”
Cloaking refers to the practice of serving different content to search engines such as Google than to human visitors, with the ultimate goal of manipulating search rankings and deceiving users.
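To make the mechanism concrete, the sketch below (a hypothetical illustration, not code from Google’s report) shows the core idea behind User-Agent-based cloaking: the server checks who is asking and returns an innocuous page to crawlers and review systems while serving the deceptive landing page to everyone else. The Flask app, the CRAWLER_MARKERS list, and the page contents are all assumptions made purely for the example.

```python
# Illustrative sketch of User-Agent-based cloaking (hypothetical example,
# not code from Google's report). A cloaked site shows benign content to
# crawlers/moderation systems and the scam page to everyone else.
from flask import Flask, request

app = Flask(__name__)

# Substrings commonly found in crawler User-Agent strings (illustrative list).
CRAWLER_MARKERS = ("googlebot", "bingbot", "adsbot")

@app.route("/")
def landing_page():
    user_agent = request.headers.get("User-Agent", "").lower()

    if any(marker in user_agent for marker in CRAWLER_MARKERS):
        # Crawlers and reviewers see an innocuous page that passes policy checks.
        return "<h1>Welcome to our harmless storefront</h1>"

    # Ordinary visitors are shown the deceptive landing page instead.
    return "<h1>URGENT: limited-time offer, act now!</h1>"

if __name__ == "__main__":
    app.run(port=8000)
```

Real cloaking operations typically layer IP-range checks, referrer checks, and redirect chains on top of this basic check, which is why the violating content rarely surfaces during routine policy review.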
The tech giant said it has also observed a cloaking trend in which users who click on ads are redirected via tracking templates to scareware sites that claim their devices are infected with malware, and are then led to phony customer support sites that trick them into revealing sensitive information.
Here are some of the other tactics scammers and cybercriminals have employed recently:
- Abusing artificial intelligence (AI) tools to create deepfakes of celebrities and leveraging their credibility to conduct investment fraud
- Using hyper-realistic impersonations in fake crypto investment schemes
- Running app and landing page clone scams that trick users into visiting lookalike pages, leading to credential and data theft, malware downloads, and fraudulent purchases
- Capitalizing on major events and combining them with AI to defraud people or promote products and services that do not exist
Google told The Hacker News that it plans to issue these advisories on online fraud and scams every six months as part of its efforts to raise awareness of the risks.
Many crypto-related scams, such as pig butchering, originate in Southeast Asia and are run by Chinese organized crime syndicates, which lure individuals with the promise of high-paying jobs only to trap them in scam compounds located across Burma, Cambodia, Laos, Malaysia, and the Philippines.
A report released by the United Nations last month found that criminal organizations in the region are “quickly integrating new service-based business models and technologies such as malware, generative AI, and deepfakes into their operations, while opening up new underground markets and cryptocurrency solutions for their money laundering needs.”
The United Nations Office on Drugs and Crime (UNODC) described the incorporation of generative AI and other technological advances into cyber-enabled fraud as a “powerful force multiplier,” noting that the technology not only makes fraud operations more efficient but also lowers the barrier to entry for less technically skilled criminals.
In early April, Google sued two app developers based in Hong Kong and Shenzhen for distributing fake Android apps used to carry out a consumer investment fraud scheme. Late last month, the company, along with Amazon, filed a lawsuit against a website called Bigboostup.com for selling and posting fake reviews on Amazon and Google Maps.
“This website sells fake product reviews to malicious parties for inclusion on Amazon store product listing pages, as well as fake reviews for business listings on Google Search and Google Maps,” Amazon said.
The development comes a little more than a month after Google announced a partnership with the Global Anti-Scam Alliance (GASA) and DNS Research Federation (DNS RF) to tackle online fraud.
In addition, the company noted that it blocked or removed more than 5.5 billion ads for policy violations in 2023 alone, and that it is rolling out live scam detection in its Phone app for Android, powered by the Gemini Nano on-device AI model, to protect users from potential scams and fraud.
“For example, if a caller claims to be from your bank and requests an urgent transfer of funds due to a suspected account compromise, scam detection can process the call, identify it as likely spam, and give you a haptic and visual alert that the call may be a scam.”
Another new security feature is the introduction of real-time alerts in Google Play Protect that notify users of potentially malicious apps, such as stalkerware, installed on their devices.
“By examining an app’s actual activity patterns, live threat detection can now find malicious apps that try to hide their behavior or lie dormant for a period of time before engaging in suspicious activity,” Google said.