Author: McAfee | January 25, 2023
ChatGPT: Everyone’s favorite chatbot, writer’s-block buster, and ridiculous-short-story creator is skyrocketing in fame.[1] In fact, the AI-generated content “masterpieces” (by AI standards) are impressing technologists the world over. While the technology still has a few kinks to iron out, ChatGPT is nearly capable of rivaling professional human writers.
However, as with most good things, bad actors are using technology for their own gains. Cybercriminals are exploring the various uses of the AI chatbot to trick people into giving up their privacy and money. Here are a few of the latest unsavory uses of AI text generators and how you can protect yourself—and your devices—from harm.
Besides students and time-strapped employees using ChatGPT to finish writing assignments for them, scammers and cybercriminals are using the program for their own dishonest assignments. Here are a few of the nefarious AI text generator uses:
1. Malware. Malware often has a very short lifecycle: a cybercriminal creates it, infects a few devices, and then operating system vendors push an update that protects devices against that particular strain. Additionally, tech sites alert their readers to emerging malware threats. Once the general public and cybersecurity experts are aware of a threat, its potency is quickly nullified. ChatGPT, however, is proficient at writing malicious code. Specifically, the AI could be used to write polymorphic malware, a type of program that constantly evolves, making it difficult to detect and defend against.[2] Plus, criminals can use ChatGPT to write mountains of malicious code. While a human would have to take breaks to eat, sleep, and walk around the block, AI doesn’t require any, so someone could turn their malware operation into a 24-hour digital crime machine.
2. Fake dating profiles. Catfish, or people who create fake online personas to lure others into relationships, are beginning to use AI to supplement their romance scams. Like malware authors who use AI to scale up production, romance scammers can now use AI to lighten their workload and maintain many dating profiles at once. For scammers who need inspiration, ChatGPT can alter the tone of its messages; for example, a scammer can tell it to write a love letter or to dial up the charm. The result can be earnest-sounding professions of love that convince someone to relinquish their personally identifiable information (PII) or send money.
3. Phishing. Phishers are using AI to up their game. Often known for poor grammar and spelling, phishers are improving the quality of their messages with AI, which rarely makes editorial mistakes. ChatGPT also understands tone commands, so phishers can heighten the urgency of messages that demand immediate payment or responses with passwords or PII.
The best way to avoid being fooled by AI-generated text is to stay on high alert and scrutinize any texts, emails, or direct messages you receive from strangers. There are a few tell-tale signs of an AI-written message. For example, AI often uses short sentences and reuses the same words. Additionally, AI may produce content that says a lot without saying much at all; because AI can’t form opinions, its messages can sound devoid of substance. In the case of romance scams, if the person you’re communicating with refuses to meet in person or chat over video, consider cutting ties.
To improve your peace of mind, McAfee+ Ultimate allows you to live your best and most confident life online. In case you ever do fall victim to an identity theft scam or your device downloads malware, McAfee will help you resolve and recover from the incident. In addition, McAfee’s proactive protection services – such as three-bureau credit monitoring, unlimited antivirus, and web protection – can help you avoid the headache altogether!
[1] Poc Network, “I asked AI (ChatGPT) to write me a rather off short story and the result was amazing”
[2] CyberArk, “Chatting Our Way Into Creating a Polymorphic Malware”