• August 29, 2023
  • Posted by Austin Vaive
  • 6 min. read

Artificial (Social) Engineering: How Hackers are Using AI to Craft Scams

GECU Voices brings you guidance and insight from experts within the Credit Union. Today’s blog post was penned by Austin Vaive, Information Security Manager. 

In the age of rapid technological advancement, artificial intelligence (AI) has emerged as a powerful tool that transforms various aspects of our lives. However, with great power comes great responsibility, and AI is no exception. While AI has proven to be revolutionary in many positive ways, it has also opened new avenues for cyber threats, with social engineering being a prime concern. 

Would you be surprised to learn that the previous paragraph was entirely crafted by an AI ChatBot?

Hi, actual-human Austin at the helm now. It’s surprising to some that the most common strategy used by cyber criminals does not involve the stereotypical “hacker” in a dark room with lines of computer code, but rather a simple email. Contrary to the depiction of cyber criminals in pop culture, hacking into a company’s computer system by hand generally requires a great deal of skill and time investment. For the average cyber criminal trying to turn a profit, it is much easier to send emails to potential victims with a crafty message to trick them into providing their password or other sensitive information. 

Cybercrime is one of the fastest-growing industries in the world, totaling $6 trillion in 2021 according to Cybersecurity Ventures,1 and these criminals are primarily interested in the best financial return on their investment. Because traditional phishing messages are written by hand, often by individuals who do not speak English natively, they can be relatively easy to spot due to spelling and grammatical errors as well as strange word choices. If you’ve ever received a message that asks you to, “Kindly send your banking accounts numbers,” you are familiar with these attempts. 

The recent popularity of “chat generators” powered by artificial intelligence could make it more difficult for individuals and security professionals to determine whether an email is legitimate or a phishing attempt. One of the most popular AI chatbots, known as “ChatGPT,” can automatically generate a response based on a given prompt. While the developers of ChatGPT attempt to limit its use in unethical schemes, it is still relatively easy to provoke the AI into drafting emails that could be used in a phishing campaign. 


In one example, a “bad actor” was able to ask the AI to generate an email template encouraging the recipient to open a link, which is a common method of attack for a phishing message. The use of AI makes it much easier for cyber criminals who may not be fluent in English to generate convincing messages that lack many of the traditional red flags used to identify phishing attempts.  

AI also opens the door for a greater volume of phishing attacks that are customized to a specific target, known as spear phishing. These attacks are generally handcrafted to single out a specific individual, including details and personal information that would be more likely to convince the target to fall for the scheme. 

These spear phishing attempts are generally much more effective than traditional, generic phishing emails, but they take more time and skill to craft. AI can make writing these emails far less difficult and time consuming. By inputting a few personalized details about the target, for instance that they are a fan of the Cincinnati Bengals, an attacker could instantly craft a convincing message tailored to the specific person they have in mind. 

Another strategy used by cyber criminals that leverages AI is the voice imitation scam. According to a Federal Trade Commission consumer alert, cyber criminals are using voice cloning technology to mimic friends or family members of a victim. The bad actors take a short audio clip of an individual’s voice, from something like a social media post, and use artificial intelligence to produce a new clip that sounds nearly identical to the original speaker. This new recording can be used to convince the target that a loved one is in danger and to request money or sensitive information. If you are unsure whether an incoming call is truly from a friend or relative, the best course of action is to hang up and call them directly using a number you know is legitimate. 

While AI is a new frontier of technology that can make cybercrime easier, the good news is that the basic strategy for protecting yourself from these types of scams has not changed. Some steps you can take to protect yourself include:

  • Verify the sender. Examine the sender’s email address, and make sure it matches the displayed name. Scammers will often change their name to something that appears legitimate, so it’s important to check the actual address the email came from. 
  • Look for urgency. Scammers will often try to instill a sense of urgency in their messages to pressure their target into complying before fully reading or thinking about the message. If an email is pushing you to take action right away, slow down and read the message again before clicking on anything.
  • Verify links by hovering. Hover your mouse over any link included in a message to display the URL before clicking. Many scammers will attempt to hide links in a way that appears to lead to a legitimate site, but hovering your mouse cursor will reveal the true destination. If the destination doesn’t make sense, don’t click the link!
  • Pay attention to grammar and formatting. While AI-crafted emails can aid scammers in sounding more legitimate, it’s still important to check for strange grammatical choices or formatting that doesn’t look quite right. Especially when an email appears to come from a large organization, check to ensure the logos and email signature match with what you would expect to see. 
  • Verify requests from colleagues or companies. If you’re still unsure if an email is legitimate, it’s a good strategy to contact the sender through a known, legitimate communication channel. If the email appears to come from a work colleague, friend, or business you recognize, contact them via a previously used phone number to verify the legitimacy of the email. 
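For technically inclined readers, the “hover over links” tip above boils down to one check: the text you see in a link and the address it actually points to are two separate things. The short Python sketch below illustrates the idea using the standard library’s HTML parser. The email body, domain names, and the `LinkChecker` class are all made up for illustration; this is a teaching sketch, not a phishing-detection tool.

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collects each link's visible text and its actual destination (href)."""
    def __init__(self):
        super().__init__()
        self.links = []      # list of (visible_text, href) pairs
        self._href = None    # href of the <a> tag we are currently inside
        self._text = []      # visible text collected inside that <a> tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# A hypothetical phishing email: the link *displays* a trusted-looking
# address but actually points somewhere else entirely.
email_body = (
    '<p>Update your info at '
    '<a href="http://gecu-secure.example-scam.com/login">www.gecu.org</a></p>'
)

parser = LinkChecker()
parser.feed(email_body)
for text, href in parser.links:
    # If the visible text looks like a URL, compare it to the real destination.
    looks_like_url = text.startswith(("www.", "http"))
    mismatch = looks_like_url and text.strip("/") not in href
    print(f"shown: {text!r}  actual: {href!r}  mismatch: {mismatch}")
```

Hovering your mouse over a link in an email client performs the same comparison for you: it reveals the `href`, so you can see whether it matches the text on screen before you click.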

If you ever suspect you’ve fallen victim to an AI scam and compromised your accounts, report it to General Electric Credit Union (GECU) immediately. We’ll take steps to protect your hard-earned funds and sensitive information. 
