According to the Norton Consumer Cyber Safety Pulse report, cybercriminals are now capable of creating deepfake chatbots, opening another avenue for threat actors to target less tech-savvy people. Researchers warn that chatbot users should not share any personal information while chatting online.
“I’m excited about large language models like ChatGPT, however, I’m also wary of how cybercriminals can abuse it. We know cybercriminals adapt quickly to the latest technology, and we’re seeing that ChatGPT can be used to quickly and easily create convincing threats,” said Kevin Roundy, senior technical director of Norton.
Hackers impersonate legitimate chatbots
The report said that chatbots created by hackers can impersonate humans or legitimate sources, such as a bank or government entity, and manipulate victims into handing over personal information, which can then be used to steal money or commit fraud.
Researchers noted that people should avoid clicking any links in response to unsolicited phone calls, emails or messages.
Hackers using ChatGPT to generate threats
Norton also highlighted that cybercriminals are using ChatGPT to generate malicious threats “through its impressive ability to generate human-like text that adapts to different languages and audiences.”
“Cybercriminals can now quickly and easily craft email or social media phishing lures that are even more convincing, making it more difficult to tell what’s legitimate and what’s a threat,” Norton added.
Earlier this year, research conducted by BlackBerry found that AI chatbots could be turned against organisations in the form of AI-infused cyberattacks within the next 12 to 24 months.
“Some think that could happen in the next few months. And more than three-fourths of respondents (78%) predict a ChatGPT credited attack will certainly occur within two years. In addition, a vast majority (71%) believe nation-states may already be leveraging ChatGPT for malicious purposes,” the report found.