January 9, 2023
Written by Danny Palmer

The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to help generate malicious code.
ChatGPT is an AI-driven natural language processing tool that interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks such as composing emails, essays, and code.
The chatbot tool was released by artificial intelligence research laboratory OpenAI in November and has generated widespread interest and discussion over how AI is developing and how it could be used going forward.
But like any other tool, it could be put to nefarious purposes in the wrong hands, and cybersecurity researchers at Check Point say users of underground hacking communities are already experimenting with how ChatGPT might be used to facilitate cyber attacks and support malicious operations.
“Threat actors with very low technical knowledge – up to zero tech knowledge – could be able to create malicious tools. It could also make the day-to-day operations of sophisticated cybercriminals much more efficient and easier – like creating different parts of the infection chain,” Sergey Shykevich, threat intelligence group manager at Check Point, told ZDNET.
OpenAI’s terms of service specifically ban the generation of malware, which it defines as “content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm”. It also bans attempts to create spam, as well as use cases aimed at cybercrime.
However, analysis of activity in several major underground hacking forums suggests that cyber criminals are already using ChatGPT to develop malicious tools – and in some cases, it's allowing low-level cyber criminals with no development or coding skills to create malware.