Online scams are a common threat in the digital age, and it’s important to know how to spot and avoid them. With the rise of generative AI tools like ChatGPT, Bard/Gemini, Grok and more, the scams will only become more sophisticated and difficult to detect. This is one of the unfortunate prices we must pay for the benefits that AI brings us.
Here are some examples of common online scams, with suggestions on how to stay safe from them.
Phishing
These are attempts to trick you into revealing sensitive information, like passwords and credit card numbers. Enhanced phishing attempts could involve AI crafting deceptive emails or messages tailored to specific individuals, making them more convincing and harder to detect.
You can protect yourself by using security software and keeping it updated, using multi-factor authentication for your accounts, and backing up your data. To detect and avoid phishing attacks, always verify the identity of the person on the other end of the call, message, or email. A scammer might give up after you ask a few pointed questions and focus their attention elsewhere. See also the section on spear phishing later in this article.
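To make the identity-verification advice concrete, here is a minimal Python sketch of one automated check: flagging a sender domain that nearly matches, but does not exactly match, a domain you trust (a classic phishing trick). The trusted domains, addresses, and the similarity threshold are all made-up assumptions for illustration.

```python
# Minimal sketch: flag email sender domains that nearly match, but do not
# exactly match, a domain you trust (a common phishing trick).
# The trusted-domain list below is a made-up example.

from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"mybank.com", "paypal.com"}  # hypothetical examples

def looks_like_phishing(sender_address: str) -> bool:
    """Return True if the sender's domain closely resembles a trusted
    domain without being an exact match (e.g. 'paypa1.com')."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: looks legitimate (still not proof!)
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity > 0.8:  # near-miss: likely a lookalike (arbitrary cutoff)
            return True
    return False

print(looks_like_phishing("support@paypa1.com"))  # True  (lookalike)
print(looks_like_phishing("support@paypal.com"))  # False (exact match)
print(looks_like_phishing("friend@example.org"))  # False (unrelated)
```

Real mail filters do far more than this, but the underlying idea, comparing what you received against a known-good list, is the same one you should apply manually.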
Fake invoices and identities
If you run an agency, startup, or company, you may be vulnerable to this. The same applies if your job puts you in charge of approving payments or credit instruments such as credit cards. Scammers may send you a fake invoice pretending to be from a reputable company, asking you to pay for services you didn’t order. Always verify the information before taking action. If you receive an email or text message from a company you do business with and you think it’s real, it’s still best not to click on any links. Instead, contact them using a website you know is trustworthy. Scammers may also go a step further and use AI to create synthetic identities that appear real, then apply for credit cards, loans, or government benefits illegally.
Such fake identities tend to fall apart under careful scrutiny, so cross-check details like addresses, employment history, and official identification before extending credit or services.
Wrong Numbers
Scammers may try to trick you into calling them by using a number that looks similar to a legitimate one. For example, they might use a number that differs by just one digit from a bank’s actual customer service number. Using loopholes in number-recognition apps and services such as TrueCaller, scammers can make their number appear on your caller ID as a legitimate entity. They might send text messages that appear to be from a reputable company, asking you to call a number to resolve some issue with your account: unpaid dues, suspension of services, and so on. In rare cases, the fake number could be listed on a counterfeit website designed to mimic a real business or service.
Always verify the number before making a call. Use the contact information provided on official documents, e.g. bank statements or your welcome kit. Always be wary if someone creates a sense of urgency; if you’re being pressured to do something quickly, it’s a red flag!
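In the same spirit, here is a minimal sketch, with entirely made-up numbers, of how one might compare an incoming callback number against the official number from a bank statement; a difference of just one or two digits is exactly the red flag described above.

```python
# Minimal sketch: compare an incoming number against an official number you
# already have (from a bank statement, welcome kit, etc.). A number that
# differs by only one digit is a classic red flag. All numbers below are
# made-up placeholders.

def digit_difference(a: str, b: str) -> int:
    """Count differing digits between two numbers, ignoring spaces,
    dashes, and other punctuation."""
    da = [c for c in a if c.isdigit()]
    db = [c for c in b if c.isdigit()]
    if len(da) != len(db):
        return max(len(da), len(db))  # different lengths: treat as unrelated
    return sum(1 for x, y in zip(da, db) if x != y)

OFFICIAL = "1-800-555-0199"   # hypothetical bank helpline
incoming = "1-800-555-0190"   # the number a text message asks you to call

diff = digit_difference(incoming, OFFICIAL)
if diff == 0:
    print("Matches the official number.")
elif diff <= 2:
    print(f"Suspicious: only {diff} digit(s) off from the official number!")
else:
    print("Unrelated number; verify through official channels anyway.")
```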
Catfishing
This is a form of online scam where a scammer creates a fake profile on a dating site to lure you into a romantic relationship or some other kind of emotional involvement. The goal is to make the victim believe that the other “person” is real and in need of help, companionship, or financial assistance.
With the help of AI image-generation tools like Stable Diffusion, it has become especially easy to make lifelike pictures of people who don’t exist. Alongside the images, it is also easy to generate a complete profile with name, date of birth, interests, hobbies, employment, and more, all without the scammer having to invent any of it themselves.
Deepfakes
A deepfake is an AI-generated image, audio clip, or video that looks and sounds indistinguishable from a real person. It is a fake produced with so-called “deep learning” technology, hence the name. Scammers might impersonate customer service agents, law enforcement officers, or tax officials. If the scam is more targeted, they may also impersonate family members, colleagues, employers, or any number of other people in the victim’s life. These materials may be used to convince a specific group or individual of a specific point of view, depending on the eventual goals of the scam. It may be as simple as asking for money to be transferred, or as elaborate as making a judge decide against a defendant based on fabricated evidence.
The best defense against deepfake scams is awareness and skepticism. Be wary of online sources that seem unusual or incredible. Do not forward messages to your social circles that seem unbelievable or emotionally dramatic. If you are in a profession related to law enforcement or government decision making, consider hiring a personal cybersecurity consultant and educate your family members about the risks of deepfakes.
Fake content
Scammers could use AI to generate seemingly legitimate reviews, or even entire websites, to promote fraudulent products or services. It was already common to pay content writers to post fake reviews on Amazon, Yelp, and similar sites. With the help of AI tools like GPT-4, the human-ness of the generated text has gone way up, making AI-generated reviews much harder to detect.
Fraudulent investment advice
Scammers might falsely claim to use sophisticated AI algorithms to guarantee high returns on investments, luring victims into Ponzi schemes or fraudulent investment opportunities.
The best way to avoid losing your money to such fraud is to only make investments that you personally understand, and to consult someone in your social circle whom you trust.
Automated Social Engineering
Social engineering means manipulating people, rather than technical systems, into revealing information or taking actions they otherwise wouldn’t.
Agencies and organizations intent on shifting the public mood can use AI to simulate thousands of virtual people (also known as “sock puppets”) that engage in conversations. They can use the capabilities of language models to mimic real human interactions and build trust. For instance, an AI-driven chatbot could impersonate a trusted individual or organization, persuading the target to reveal confidential information or perform certain actions. AI systems can also analyze a person’s online behavior and preferences, allowing the automated system to tailor its approach. For example, the AI might identify someone who frequently donates to charities and then impersonate a charitable organization to solicit funds or information.
Unlike traditional social engineering, which is limited by the need for human interaction, AI can conduct thousands of personalized attacks simultaneously, drastically increasing the scale and reach of social engineering campaigns. Once deployed, AI systems can learn from interactions with real humans, adapting their strategies in real time to improve success rates. If a certain approach doesn’t work, the AI can quickly change tactics. This makes such campaigns more worrisome and harder to repel than traditional, human-driven social engineering.
Spear phishing
According to Fortinet, spear phishing refers to “a cyberattack method that hackers use to steal sensitive information or install malware on the devices of specific victims. Spear-phishing attacks are highly targeted, hugely effective, and difficult to prevent.”
A paper submitted to arXiv in 2023 found that large language models (a form of AI) are quite good at helping with two main steps of a spear-phishing attack: gathering information (reconnaissance) and writing convincing messages. Cybercriminals can get a big efficiency boost from these AI models. To test how well LLMs can scale up spear phishing, the study created unique phishing emails for over 600 British MPs using GPT-3.5 and GPT-4. The results were concerning: not only did the messages seem very realistic, they were also extremely cheap to produce, costing less than a penny each. The study also explored how simple tricks can get around the safety measures built into these LLMs.
To protect against spear-phishing attacks, organizations should enable automatic software updates, utilize data protection measures, reduce password reuse through tools like password managers, and implement multi-factor authentication. Additionally, educating employees about identifying phishing attempts and practicing common sense, like verifying email requests and avoiding sharing sensitive information online, is crucial for security.
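As one concrete, hedged example of “verifying email requests”: most mail providers record the results of SPF/DKIM/DMARC sender-authentication checks in an Authentication-Results header. The sketch below, using Python’s standard email library and a made-up file name, pulls that header out of a saved message; a “fail” there is a strong phishing signal.

```python
# Minimal sketch: inspect a saved email (.eml file) for failed sender
# authentication. Many providers record SPF/DKIM/DMARC results in an
# Authentication-Results header; a "fail" there is a strong phishing signal.
# The file name is a placeholder.

from email import policy
from email.parser import BytesParser

with open("suspicious_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

auth_results = msg.get("Authentication-Results", "")
from_header = msg.get("From", "(missing)")

print("From:", from_header)
if not auth_results:
    print("No Authentication-Results header; be extra cautious.")
elif "fail" in str(auth_results).lower():
    print("Sender authentication FAILED; treat as likely phishing.")
else:
    print("Authentication checks look OK (still verify the request itself).")
```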
Enhanced hacking
The negative meaning of the word “hacking” is roughly “using tricks and lies to bypass barriers and gain access to restricted or prohibited material” (there is a positive meaning too, by the way). Hackers have always needed some degree of technical sophistication and skill, which kept many bad actors from successfully carrying out nefarious schemes. With the advent of AI that can plan, reason, and write code, these barriers have come down significantly. AI can now be used to identify vulnerabilities in software and systems more efficiently, leading to more sophisticated cyber-attacks.
Unfortunately, there is not much a regular citizen of the internet can do, short of hiring a dedicated cybersecurity team. Again, if you are a government official or a politically exposed person, it is highly recommended that you do so.
Manipulative Advertising
Advertisers are some of the original mass media manipulators. AI has only made things worse.
AI algorithms analyze vast amounts of data from various sources like social media, browsing histories, purchase records, and even device usage patterns. This data provides insights into individual preferences, behaviors, and psychological traits. AI models use this data to construct detailed psychological profiles. These profiles can include a person’s interests, habits, emotional triggers, vulnerabilities, and decision-making patterns. Armed with these profiles, marketers create highly personalized advertisements. These ads are tailored not just to align with a person’s interests, but also to tap into their emotional and psychological vulnerabilities, making them more effective in influencing decisions. As a last step, AI can adjust the content, timing, and format of ads in real time based on continuous user-interaction data, further refining their effectiveness.
Such ads can exploit emotional vulnerabilities or psychological weaknesses, potentially leading consumers to make decisions they wouldn’t have made otherwise. Consumers are losing trust in digital advertising, feeling that their personal space and privacy are being invaded. Hyper-targeted ads based on psychological profiling can be seen as manipulative, leading to calls for more ethical advertising practices. Last but not least, persistent exposure to such targeted content, especially if it plays on insecurities or anxieties, can have negative effects on mental health.
There is no general solution to this, except going increasingly off the grid. At the very least, we can cultivate a mentality of detachment and wariness towards manipulative material to lessen its impact.
Fake News and propaganda
Fake news is perhaps as old as advertising itself, but in the AI age, the term has assumed more urgency. Advanced language models today can produce text that is coherent, contextually relevant, and stylistically similar to human writing. This capability can be misused to create convincing but entirely fabricated news stories. Unlike human-generated misinformation, AI tools can be used to produce large volumes of fake content quickly and efficiently, making it easier to flood information channels with misleading narratives. Motivated groups can use these tools to tailor content to specific segments of society, such as people holding particular beliefs, to increase the likelihood of the misinformation being believed and spread via social media or personal messaging.
Conclusion
While none of the ideas presented here are new, the methods unlocked by AI make it especially easy for essentially anyone to carry them out at a large scale. You should check out our guide on protecting your mind for more insights into being less open to persuasion and psychological manipulation.
For an everyday person who has other things to do in life, the best defense against these methods is awareness and common sense. A little bit of skepticism also goes a long way!