Innovative scammers are using artificial intelligence (AI) to separate you from your money, and the FBI and other agencies already are warning citizens about the dangers of scams that use AI. Bottom Line’s expert Steven J.J. Weisman, Esq., explains what to watch out for and how to protect yourself…


The “grandparent scam.” Your cell phone rings at 3:00 am. You see your grandson’s name on the caller-ID. When you answer, you hear his frightened voice. “Grandpa,” he says, “I can’t explain right now, but I just got arrested and I need $500 tonight so they’ll let me go. Can you wire the money to my lawyer? I’ll text you his information in just a second. Please don’t tell Mom and Dad.”

The scam: The voice you hear may sound like your grandson, but it was generated using AI. Your real grandson is probably sleeping peacefully at home.

How are scammers pulling this off? Spoofing the caller-ID is easy and doesn’t even require AI. Spoofing the voice is made possible by the recent mainstreaming of “deepfake” machine-learning technology. Given just a one-minute sample of a person’s voice, smart software can “learn” enough to mimic all kinds of words and phrases. The scammer creates the script, calls the victim using the spoofed caller-ID and then feeds the script into AI, which generates and plays the fake audio.


Fake people. Last year, a marketing video appeared on YouTube in which a well-dressed CEO urged viewers to buy into his company’s predictive AI technology, which could select investments and generate “high returns and stable passive income.” The video targeted investors who didn’t want to miss out on the next big opportunity.

The scam: There was no predictive AI technology behind the sales pitch. In fact, there wasn’t even a CEO. That lifelike figure addressing YouTube viewers was generated by AI.

“Deepfake” programs allow users to manipulate the video image of a real person to make it say or do whatever they’d like. Scammers have deployed deepfakes to make celebrities appear to endorse products. One deepfake scam features Elon Musk’s image promising viewers that if they send him cryptocurrency, he’ll send back twice the amount in cash.

Deepfakes also are used in “sextortion” scams. The fraudster tells you that he/she has a video of you performing a sex act and demands a ransom in exchange for not distributing it to your friends. Of course, it’s not really you in the video, but it looks like you. Should you pay? Even if you don’t, there’s a chance the scammer won’t bother posting the video. And if he/she does, the worst case is that you’d have to explain to friends that it’s not really you.

Deepfake technology has improved exponentially over the past several years. In 2017, it required two computers working 56 hours to produce a famous deepfake of Barack Obama. Today, casual users can access programs that make convincing deepfakes in just minutes. 

Better phishermen. The most effective fraudulent use of AI also is the least glamorous: Scammers use AI tools to make traditional phishing and smishing attempts more fruitful. Phishing is the use of scam e-mails to extract information or money from a victim…smishing is the same scheme carried out via text message.

In pre-AI days, phishing was a numbers game: Scammers sent the same message to thousands of e-mail accounts hoping to hook just one person. Because so many scammers were located overseas, these phishing attempts were often easy to detect: Poor grammar and syntax were dead giveaways.

But AI tools now enable spearphishing, targeted approaches that are far more likely to succeed. Using an AI chatbot, a scammer can request a bio of a potential victim that includes his/her interests, activities, educational background, organizational associations and so on. Example: Knowing that a victim is a Bruce Springsteen fan, a scammer can craft a phishing attempt offering discounted Springsteen tickets. That could have been done before AI if a scammer had been willing to devote hours of Internet sleuthing to every target, but an AI chatbot can deliver a useful bio in seconds. AI chatbots also excel at helping non-native speakers appear fluent. So even if an e-mail was written by someone in Romania with rudimentary English, a chatbot will render the writing flawless.


It sounds like we now live in a world where nothing can be trusted…and to some degree, that’s true. Never trust, without verifying, that anyone or anything is who or what it purports to be. Most AI fakes are not yet sophisticated enough to reliably fool skeptical people. Example: If the grandparent mentioned earlier asked, “Tommy, why didn’t you call your mother first?” there would be an unnatural delay, and he might even hear keyboard tapping before “Tommy” replied. But scammers and the technology are getting smarter at a blistering pace. Instead of counting on being able to detect AI scams, use the same practices that have always been effective against scams…

Stop being your own worst enemy. How did the scammer know you were Tommy’s grandfather? How did the overseas fraudster know you were a Springsteen fan? In both cases, the answer is social media. We put incredible amounts of personal information out there for the whole world to see. If Tommy is posting pictures of his vacation in Cabo San Lucas, it’s easy to concoct a story about getting jailed in Mexico. A scammer even might be able to glean that Tommy refers to you as “Paw-Paw” instead of “Grandpa” and tailor the deepfake accordingly. Set your accounts to private, and be thoughtful about what information you share.

Slow your roll. Scams involve pressure and urgency. Tommy is in trouble now…this crypto offer expires tomorrow…your account will be closed by the end of the business day. Most true situations in life are not that urgent. You usually have time to assess the situation and do a little homework.

Use common sense. Most legitimate businesses won’t contact you via phone or text to ask for information or money (that includes the IRS). And no legitimate business asks to be paid in gift cards. Such requests are red flags. Also, the old adage holds—if it seems too good to be true, it is.

Verify. If you get a breathless call from a loved one, hang up and call him/her back. Or consider having a family code word: If “Tommy” doesn’t say “Pickles” when challenged, you’ll know it’s not him. Check out a company with the Better Business Bureau. Visit a celebrity’s verified social-media page to see whether he/she really did endorse something.