A hacker likely used generative AI to help deliver malware to users in France, according to a new report from HP Wolf Security.
HP security researchers discovered the suspected AI use in June, when the company’s anti-phishing system, Sure Click, flagged an unusual email attachment aimed at French-speaking users. The attachment contained an HTML file that asked the user to type in a password to open it. The researchers managed to “brute-force” the protection and guess the right password, revealing that the HTML file produced a ZIP archive that secretly contained a piece of malware known as AsyncRAT.

AsyncRAT is an open-source remote access management tool that can easily be abused as malware. In this case, the hackers behind the email attachment used it to remotely control the victim’s computer.
But while investigating the attack, HP’s security researchers noticed something odd: the malicious code in the email attachment’s JavaScript and in the ZIP archive, the two components used to deliver the attack, wasn’t scrambled or obfuscated at all.
Instead, the computer code was easily readable. “In fact, the attacker had left comments throughout the code, describing what each line does – even for simple functions,” HP’s report says. “Genuine code comments in malware are rare because attackers want to make their malware as difficult to understand as possible.”
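To illustrate the pattern, here is a hypothetical, entirely benign JavaScript snippet written for this story; it is not code from the attack, but it shows the kind of line-by-line annotation chatbots tend to produce even for trivial operations:

    // Build a greeting for the given user
    function buildGreeting(name) {
        // Store the greeting prefix in a variable
        var prefix = "Hello, ";
        // Concatenate the prefix with the user's name
        var message = prefix + name;
        // Return the finished greeting
        return message;
    }

Comments this dense on logic this simple are exactly the tell HP’s researchers say they flagged.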

The comments also suggest that generative AI wrote the code used to deliver the AsyncRAT malware. That’s because chatbots such as OpenAI’s ChatGPT and Google’s Gemini will typically explain each line of code if you ask them to write a program.
“Based on the scripts’ structure, consistent comments for each function and the choice of function names and variables, we think it’s highly likely that the attacker used GenAI to develop these scripts,” HP’s report adds.
The company’s findings arrive as other companies, including OpenAI and Microsoft, have also spotted state-sponsored hackers using generative AI to refine their phishing attacks and conduct research. Using generative AI to develop actual malware, however, remains rare. In April, cybersecurity provider Proofpoint discovered a separate case of hackers possibly using generative AI to develop a PowerShell script to deliver malware.
In a statement, HP security researcher Patrick Schläpfer said: “Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant.”
The company’s report adds that generative AI has the potential to “lower the bar” for cybercriminals to spread malware. But others, like researchers at Google’s VirusTotal, are more skeptical, saying it’s still hard to tell whether a malware attack can be traced back to a generative AI program.
“How do I know if you’re copying the code from your neighbor, from [coding site] Stack Overflow, from some AI, it’s very difficult to say,” VirusTotal researcher Vicente Diaz said in May. “So it’s already a hard question.”