ChatGPT Malware Shows It’s Time To Get ‘More Serious’ About … – CRN

Kyle Alspach

Security researchers this week posted findings showing that the tool can in fact be used to create highly evasive malware.


With security researchers showing that OpenAI’s ChatGPT can in fact be used to write malware code with relative ease, managed services providers should be paying close attention.

This week, researchers from security vendors including CyberArk and Deep Instinct posted technical explainers showing how ChatGPT can be used to generate code for malware, including ransomware.

[Related: Google Cloud VP Trashes ChatGPT: Not Cool]

While concerns about the potential for ChatGPT to be used this way have circulated widely of late, CyberArk researchers Eran Shimony and Omer Tsarfati posted findings demonstrating that the tool can be used to create highly evasive malware, known as polymorphic malware.

Based on the findings, it’s clear that ChatGPT can “easily be used to create polymorphic malware,” the researchers wrote.

Deep Instinct threat intelligence researcher Bar Block, meanwhile, wrote that ChatGPT’s existing controls do ensure the tool won’t create malicious code for users who lack know-how about how malware is executed.

However, “it does have the potential to accelerate attacks for those who do [have such knowledge],” Block wrote. “I believe ChatGPT will continue to develop measures to prevent [malware creation], but as shown, there will be ways to ask the questions to get the results you are looking for.”

The research so far is showing that concerns about the potential for malicious cyber actors to “weaponize” ChatGPT are not unfounded, according to Michael Oh, founder and president of Boston-based managed services provider Tech Superpowers.

“It just accelerates that cat-and-mouse game” between cyber attackers and defenders, Oh said.

As a result, any MSPs or MSSPs (managed security services providers) who thought they still had more time to get their clients fully protected should reconsider that position, he said.

If nothing else, ChatGPT’s potential for malware creation should “drive us to be much more serious about plugging all the holes” in customers’ IT environments, Oh said.



Kyle Alspach is a Senior Editor at CRN focused on cybersecurity. His coverage spans news, analysis and deep dives on the cybersecurity industry, with a focus on fast-growing segments such as cloud security, application security and identity security. He can be reached at kalspach@thechannelcompany.com.



