Hackers are increasingly turning to artificial intelligence (AI) to enhance their cyberattack strategies, according to a recent report by Microsoft Threat Intelligence. From reconnaissance to post-compromise activities, AI is being used to streamline operations, scale attacks, and reduce technical barriers. This development underscores the growing sophistication of cyber threats and the need for robust defenses.
The report highlights that generative AI tools are being employed for a variety of malicious tasks. These include crafting phishing emails, summarizing stolen data, debugging malware, and configuring attack infrastructure. By leveraging AI, attackers can execute their operations more efficiently while maintaining control over their objectives and deployment strategies.
How AI Is Transforming Cyberattacks
AI is proving to be a powerful tool for cybercriminals, enabling them to automate and enhance various stages of an attack. For instance, threat actors are using AI to draft convincing phishing lures, translate content for global campaigns, and even generate or refine malicious code. These capabilities significantly reduce the time and effort required to launch attacks, making them more accessible to less technically skilled operators.
Microsoft has observed specific threat groups, such as North Korean actors Jasper Sleet and Coral Sleet, integrating AI into their operations. Jasper Sleet, for example, uses generative AI to create realistic digital personas, complete with culturally appropriate names and email formats. These personas are then used to secure remote IT jobs in Western companies, providing attackers with insider access to sensitive systems.
AI’s Role in Malware Development and Infrastructure
Beyond social engineering, AI is also being utilized to develop and refine malware. Attackers are using AI coding tools to generate malicious code, troubleshoot errors, and adapt malware to different programming languages. Some experiments even suggest the emergence of AI-enabled malware capable of dynamically modifying its behavior at runtime.
In addition, AI assists in infrastructure creation. Coral Sleet, for example, leverages AI to generate fake company websites, provision servers, and test their deployments. When AI safeguards attempt to block malicious use, attackers employ jailbreaking techniques to bypass restrictions and achieve their goals.
Emerging Trends and Defensive Measures
Microsoft’s report also notes the experimental use of agentic AI, which can autonomously perform tasks and adapt based on outcomes. While this technology is still in its early stages, its potential to further enhance cyberattacks is a growing concern.
To combat these threats, Microsoft advises organizations to treat AI-powered attacks as insider risks due to their reliance on legitimate access. Defenders should focus on detecting unusual credential use, hardening identity systems against phishing, and securing AI systems that could become targets themselves.
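The "unusual credential use" detection Microsoft recommends can be approximated in many ways; one simple baseline is flagging logins from locations a user has never authenticated from before. The sketch below is a minimal, hypothetical illustration of that idea (the function name, event format, and country-level granularity are assumptions for the example, not part of any Microsoft guidance), not a production detection system.

```python
from collections import defaultdict

def flag_unusual_logins(events, known=None):
    """Flag login events from countries not previously seen for a user.

    events: iterable of (user, country) tuples in chronological order.
    known: optional dict mapping user -> set of previously seen countries.
    Returns a list of flagged (user, country) pairs.
    """
    known = defaultdict(set) if known is None else known
    flagged = []
    for user, country in events:
        # Only flag once we have a baseline for this user; a first-ever
        # login has nothing to compare against.
        if known[user] and country not in known[user]:
            flagged.append((user, country))
        known[user].add(country)
    return flagged

# Example: Alice's third login comes from a country never seen for her.
events = [("alice", "US"), ("alice", "US"), ("alice", "KP"), ("bob", "DE")]
print(flag_unusual_logins(events))
```

In practice, real detection systems would incorporate richer signals (device fingerprints, impossible-travel timing, token anomalies) and feed into identity-hardening controls such as phishing-resistant MFA, but the core pattern of comparing each authentication against a per-user baseline is the same.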
This trend is not isolated to Microsoft’s observations. Other major players such as Google and Amazon have reported similar findings, with threat actors abusing AI platforms like Gemini to execute sophisticated attacks. These developments highlight the urgent need for organizations to adapt their cybersecurity strategies to the evolving threat landscape.