January 15, 2024
Lately, I have been wondering about the emerging threats that are impacting the cyber kill chain, and how the cyber kill chain and related frameworks (MITRE ATT&CK™, the Diamond Model) – along with the processes, tools, and defenders that rely on them – need to adapt to these changes in 2024. Given today’s hottest topics, this line of thinking naturally led me to AI. AI provides improved capabilities to both attackers and defenders. For cybercriminals, AI provides a significant boost to the tactics, techniques, and procedures (TTPs) they use, making their attacks more sophisticated and harder to detect and mitigate. For defenders, AI helps to automate the process of identifying the nature, source, and intent of attacks – time-consuming work that is currently done by humans. AI can also provide defensive agility – an ability to switch to new defensive paradigms quickly and easily in the face of an uptick in attack sophistication and scale. By enhancing detection and prevention, while also adding new predictive powers for the cyber defender, defensive AI can help to close off attack vectors at the scale and pace needed to address new AI-supported attacks.
Adversaries and Cyber Kill Chain
The use of AI as an adversarial weapon has increased the realism of social engineering and phishing attacks, improved the ability of attackers to capture and compromise credentials, and made their hunt for exploits and chaining of vulnerabilities more scalable. AI also makes creating and distributing malware faster and easier. For example:
- Spear phishing – Advanced Large Language Models (LLMs) have proven capable of mining information available on social media and then combining the results with AI text generation techniques, making it increasingly difficult for people to distinguish benign email messages from spear phishing messages.
- Authentication-based attacks – Videos rendered using AI are fairly detectable now, but synthesized voice cloning is very much a threat to organizations that use voice biometrics as part of authentication flows. Existing tools, such as Lyrebird, can generate fake audio of an individual’s voice using sample recordings as an input to fool an authentication system.
- C2 autonomy – AI can assist malware developers at stages deeper in the cyber kill chain, as demonstrated by DARPA’s Cyber Grand Challenge (CGC) competition. For the command-and-control (C2) phase late in the cyber kill chain, AI/ML could enable deployed malware to act independently, living off the land rather than having to “phone home” for instructions.
- Deception – AI could be used to deceive people and cause them to question what is real, or to accept a fabricated version of reality. For example, voice spoofing could be used to deceive someone into thinking that a caller is their mother asking for money. Image spoofing can be used to make an autonomous vehicle mistake a stop sign for a speed limit sign. Deep neural networks have also proven useful for generating chaff traffic that resembles the distribution of real attacks and has successfully fooled defenders.
- Evasion – Attackers have developed AI-powered malware that dynamically modifies its behavior to evade detection systems.
- Vulnerability detection and chaining – Threat actors employ Machine Learning (ML)-driven fuzzing, allowing them to probe systems to discover new zero-day vulnerabilities. ML systems also make it easier for threat actors to reverse engineer code and to discover ways to chain vulnerabilities so as to gain escalated privileges. We should also expect more cyber events like MoonBounce and CosmicStrand, as attackers find and exploit firmware vulnerabilities to gain a foothold below a device’s operating system.
The bottom line: generative AI (genAI) and other advanced tools give threat actors the ability to fine-tune the creation of malware and execute personalized attacks very rapidly. Although the basic offensive tactics and techniques of the cyber kill chain don’t drastically change, the innovative chaining of these techniques using AI-powered insight and the adoption of new AI deception technologies can lead to greater sophistication of attacks. The impacts of AI-based attacks will be pervasive throughout the cyber kill chain, extending to the supply chain and OT / ICS infrastructures. The adoption rate of AI-powered tools such as these is likely to increase, providing an easy solution for less capable threat actors or for those who want to expand operations to other regions but lack the language skills.
One noteworthy aspect of AI-powered attacks is how open source data forms the foundation for the attack. With ML, the attack recon phase is no longer just about scanning the IT infrastructure for vulnerabilities. Now, AI makes it easier for attackers to digest social media, voice and images, credentials on the dark web, patents, corporate announcements, SEC filings, employee lists, contract announcements, and more, to quickly identify prospects and create targeted attacks. Data is also often the focus of the attack, as in the case of ransomware. By mining captured data, AI tools may help ransomware attackers better understand the value of the data they have captured and set ransom values accordingly.

Organizations also need to give greater consideration to how their IP data is exposed and protected, as genAI tools scoop up data available on the Internet to train their LLMs. If organizations want to keep their IP from being used by genAI tools, and to reduce their exposure during the recon phase of the attack life cycle, then they will need to ensure their attack surface is hidden and protected at the data level rather than just at the application level. Organizations also need to consider how their attack surface is expanded by unsanctioned AI tools that employees have introduced into their environments, and ensure they have a tight policy on data that gets published to the Internet – i.e., to update their policies on what they allow genAI tools to scrape and collect and what they allow employees to publish. Furthermore, new research indicates that less than 10% of companies are prepared to mitigate internal threats associated with AI. These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or to insiders trying to gain unauthorized access to sensitive data.
AI also expands the attack surface through its own set of vulnerabilities and specific attacks on the AI pipeline as covered in MITRE ATLAS™ – the newest MITRE ATT&CK™ style framework devoted to AI. According to MITRE:
“MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups. There are a growing number of vulnerabilities in AI-enabled systems, as the incorporation of AI increases the attack surface of existing systems beyond those of traditional cyber-attacks.”
MITRE ATLAS™ was developed as part of a collaboration between MITRE and Microsoft.
Some examples of how AI systems are attacked are shown in the two figures below and include various new techniques such as model inversion, data poisoning, perturbation attacks, and prompt injection; as the industry adopts more AI tools, AI attack surfaces across these novel applications will expand. For example, with data poisoning, the attack surface evolves to include the training data, necessitating new defense strategies for data in the cyber kill chain. There are also bad actors who would try to steal your models to figure out how they work – for instance, stealing your fraud detection model so they can learn to beat it. NIST provides a taxonomy of attacks on AI models and the development pipeline called Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. OWASP has also chartered an AI Exchange, an open-source collaborative document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives.
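To make the data poisoning example above concrete, here is a minimal sketch – using scikit-learn and a purely synthetic dataset, not any of the frameworks discussed above – showing how flipping the labels of a small fraction of training data degrades a simple classifier. It illustrates why training data is now part of the attack surface that defenders must protect.

```python
# Minimal, illustrative sketch of label-flipping data poisoning on a toy classifier.
# Synthetic data only; not a reference implementation from ATLAS, NIST, or OWASP.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of training samples (the 'poison')."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"poisoned fraction={fraction:.1f}  test accuracy={acc:.3f}")
```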
Bug bounty programs are becoming popular as a way to improve the security of AI models and the AI development pipeline by respective providers. In announcing its AI bug bounty program, Google pointed out that generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation, or misinterpretations of data (hallucinations).

NIST AI 100-2e2023 – Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
Microsoft and MITRE also teamed up to create security tools to identify and combat some of these vulnerabilities and attacks associated with AI tools. The two companies announced that Microsoft’s Counterfit red-team AI attack tool, released in 2021, is now integrated in MITRE’s Arsenal plug-in. Arsenal is a tool that “implements the tactics and techniques defined in the MITRE ATLAS™ framework and has been built off of Microsoft’s Counterfit as an automated adversarial attack library,” MITRE’s announcement explained. The Arsenal plug-in is specifically designed for use by security practitioners who may lack in-depth knowledge of AI and machine learning technologies. Additionally, Microsoft’s Counterfit has now been integrated into MITRE’s Caldera, which is used to automate red-team attacks using emulated attack profiles. Microsoft’s announcement included links to a number of tools and guides concerning AI security. Its “Taxonomy” document for engineers and policymakers is an in-depth description of possible failure modes and attacks for AI systems. It is worth noting that the document highlights how machine learning failure modes are meaningfully different from traditional software failures from both a technology and a policy perspective.
The cyber kill chain has always included the supply chain as an attack vector; however, there is now a new supply chain devoted to AI tool development. According to Robust Intelligence:
“The proliferation of sophisticated, open-source models has been a boon for companies looking to accelerate AI adoption. But in the rush to leverage these resources, companies have largely overlooked the AI supply chain risk. Public model repositories like Hugging Face and PyTorch Hub make it simple for anyone to find and download models without first understanding potential vulnerabilities in third-party software, model, or data. The general lack of awareness of AI supply chain risk makes it a compelling opportunity for bad actors.”
Better defensive mechanisms for LLM repositories are needed, as outlined in this Institute for Security and Technology (IST) report – gated and structured access schemes, secure application programming interfaces, and LLM-based defensive systems would all be helpful.
Data is also part of the AI supply chain, and training a model could lead to unintended data leakage if the underlying data becomes accessible to those not authorized to see it. Should a model be unintentionally (or intentionally) trained on sensitive data such as trade secrets or PHI, it could conceivably regurgitate it – or something similar – to a party who shouldn’t have access. One recent example of a problem in the AI data supply chain was the discovery of child sexual abuse images in a dataset used to train models. Whether it’s inventing events and presenting them as factual, fabricating sources for references, or being trained on the wrong data, genAI has a misinformation and supply chain problem.
To help combat AI supply chain risk, Robust Intelligence released the AI Risk Database in March 2023 as a free, community-supported resource to help mitigate supply chain risk in open-source models. The database includes over 260,000 models and provides supply chain risk information, including file vulnerabilities, risk scores, and vulnerability reports submitted by AI and cybersecurity researchers. To support the continued advancement of the AI Risk Database, Robust Intelligence partnered with MITRE to create an enhanced version of the database, which is now available on GitHub with a long-term plan to host it under the broader set of MITRE ATLAS™ tools.
Defenders and Cyber Kill Chain
From a defender’s point of view, cyber kill chain frameworks help to map adversary behavior and enable understanding and anticipation of each step an attacker might take. Thoroughly analyzing the cyber terrain can empower defensive strategies during cyber operations, enhancing overall cybersecurity posture. For example, MITRE ATT&CK™ reflects the phases of an adversary’s attack life cycle and the platforms (e.g., Windows) adversaries are known to target, providing a taxonomy of adversarial TTPs with a focus on those used by external adversaries executing cyber attacks against networked systems. Primary components of ATT&CK™ include:
- Tactics, denoting short-term, tactical adversary goals during an attack
- Techniques, describing the means by which adversaries achieve tactical goals
- Detection methods for each technique, captured as descriptive text in ATT&CK™
- Mitigations, describing technologies and practices which have been observed (in one or more of the curated data sets) to mitigate the techniques with which they are associated
ATT&CK™ also defines sub-techniques, describing more specific means by which adversaries achieve tactical goals at a lower level than techniques (typically related to specific technologies or platforms), and associates mitigations and detection methods with sub-techniques. ATT&CK™ provides information about APT groups and about malware used by one or more APT actors.
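For readers who want to explore ATT&CK™’s structure programmatically, the short sketch below pulls the public enterprise-attack STIX bundle from MITRE’s CTI repository on GitHub and groups techniques by tactic. It assumes network access to that repository and is only one simple way to slice the data, not an official MITRE tool.

```python
# Sketch: group ATT&CK techniques by tactic using the public STIX bundle.
# Assumes the MITRE CTI GitHub repository is reachable.
from collections import defaultdict
import requests

URL = "https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json"
bundle = requests.get(URL, timeout=60).json()

techniques_by_tactic = defaultdict(list)
for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern" or obj.get("revoked") or obj.get("x_mitre_deprecated"):
        continue
    ext_id = next((r["external_id"] for r in obj.get("external_references", [])
                   if r.get("source_name") == "mitre-attack"), None)
    for phase in obj.get("kill_chain_phases", []):
        if phase.get("kill_chain_name") == "mitre-attack":
            techniques_by_tactic[phase["phase_name"]].append((ext_id, obj["name"]))

for tactic, techniques in sorted(techniques_by_tactic.items()):
    print(f"{tactic}: {len(techniques)} techniques, e.g. {techniques[0]}")
```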
MITRE’s D3FEND™ knowledge graph of defensive countermeasures is a starting point for understanding defensive cybersecurity techniques and their relationships to offensive/adversary techniques across the cyber kill chain. However, D3FEND™ does not prescribe specific countermeasures, prioritize them, or characterize their effectiveness. It also does not directly address countermeasures for attacks on AI models or the AI pipeline, nor does it include ways to use AI for defensive strategies. It is therefore important to consider how the cyber kill chain is extended for the defender due to AI-powered attacks, and how AI can help improve defenses against these attacks.
Research has shown that a security-led approach – i.e., leveraging an industry-informed cyber kill chain framework – can produce better ML models by enhancing the resolution on attack stages and alert types, which are critical in attack analyses. The security-led paradigm is a tightly coupled approach between defining the problem (attack scenario) and finding the right model. Security researchers define the problem statement by identifying a broad attacker method, not just a tool or single exploit, and data scientists find the appropriate algorithm to identify that method, working closely with security researchers to iterate over the solution. The security-led approach uses the cyber kill chain to frame the analysis of attacker behaviors and traffic patterns unique to the target organization’s environment, reducing alert noise and surfacing only relevant true positive events.
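As a rough illustration of this security-led framing, the sketch below maps hypothetical alerts to researcher-defined kill chain stages and aggregates them into per-host features; the hosts, alert names, and stage labels are all illustrative, not any vendor’s schema.

```python
# Illustrative sketch of security-led feature framing: alerts are first mapped to
# kill chain stages chosen by security researchers, then aggregated into per-host
# feature vectors so a model reasons about attacker methods rather than single tools.
import pandas as pd

alerts = pd.DataFrame([
    {"host": "web01", "alert": "port_scan",        "stage": "reconnaissance"},
    {"host": "web01", "alert": "phishing_link",    "stage": "delivery"},
    {"host": "web01", "alert": "new_service_exec", "stage": "installation"},
    {"host": "db02",  "alert": "port_scan",        "stage": "reconnaissance"},
])

# One row per host, one column per kill chain stage; these counts become model features.
features = alerts.groupby(["host", "stage"]).size().unstack(fill_value=0)
print(features)
```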
One way to improve defenses using AI is by leveraging an anticipatory intelligence architecture – an early warning system – built on a security-led approach. Threat intelligence companies are exploring how unconventional signals can be used to forecast cyber events. Unconventional signals are not directly linked to specific exploits or vulnerabilities within the targeted organization, and such adversarial telemetry may be observed before any malicious activity reaches the future victim. These signals may include, for example, increasingly negative tones toward an organization on social media or in news reports, activity on the dark web, attacks on suppliers, attacks on other members of the same industry sector, and information from the GDELT Project and similar platforms such as MapsPulse, Premise, BuildingFootprintUSA, and NV5 Geospatial (formerly L3Harris Geospatial). AI algorithms can process massive volumes of this telemetry data from many sources simultaneously, enabling them to detect subtle patterns and warnings of cyber threats that may go unnoticed by traditional threat intelligence platforms. Previous research has shown the viability of using these unconventional signals to forecast cyber attacks and attack intensities through ML models.
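A toy sketch of this idea appears below: a simple model that estimates the probability of an attack in the next time window from weekly counts of unconventional signals. The data is entirely synthetic and the signal categories are assumptions; a production system would draw on curated feeds like those mentioned above.

```python
# Toy early-warning sketch: forecast next-week attack likelihood from counts of
# unconventional signals (e.g., dark-web chatter, negative news tone, supplier
# incidents). All data below is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_weeks = 300
# Columns: dark-web mentions, negative news items, supplier incidents (per week)
signals = rng.poisson(lam=[2.0, 1.0, 0.5], size=(n_weeks, 3))
risk = 0.4 * signals[:, 0] + 0.8 * signals[:, 1] + 1.2 * signals[:, 2]
attack_next_week = (risk + rng.normal(0, 1, n_weeks)) > 3.0   # synthetic ground truth

model = LogisticRegression().fit(signals, attack_next_week)
this_week = np.array([[5, 3, 2]])   # e.g., a spike in chatter and supplier incidents
print("P(attack next week) =", round(float(model.predict_proba(this_week)[0, 1]), 2))
```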
An AI-powered anticipatory intelligence architecture should produce warnings of attack that have a relatively high probability of occurrence. These warnings should lead to the identification and investigation of possible attack scenarios (denial of service, defacement, malware, phishing, etc.) and cueing of cyber threat intelligence systems and threat hunting campaigns. AI-based early warning systems in conjunction with threat intelligence platforms, such as Censys and Recorded Future, and integrated with vulnerability management tools like Nucleus Security, can provide the latest knowledge of global as well as industry-specific dangers to better formulate threat hunting priorities based on who and what is most likely to be used to attack your systems.
The Diamond Model of intrusion analysis is particularly useful for visualizing and understanding complex attack scenarios and can provide useful insight to guide analysis of the signals from an AI-based warning system and other cyber intel. The Diamond Model consists of four components (a minimal data-structure sketch follows the list):
- Adversary: Where are attackers from? Who are the attackers? Who is the sponsor? Why attack? What are expected TTPs? What is the activity timeline and planning? How can the attack be attributed?
- Infrastructure: Infected computer(s), C2 domain names, location of C2 servers, C2 server types, mechanism and structure of C2, data management and control, and data leakage paths
- Capability: What skills do the attackers have to conduct reconnaissance, deliver their attacks, exploit vulnerabilities, deploy their remote-controlled malware and backdoors, and develop their tools?
- Victim: Who or what is the target – country/region, industry sector, individual, or data?
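For threat hunting teams that want to record Diamond Model events alongside their AI tooling, here is the minimal data-structure sketch referenced above; the fields, domain name, and technique IDs are illustrative only, not a formal schema.

```python
# Minimal Diamond Model event record; field contents are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class DiamondEvent:
    adversary: dict          # attribution, sponsor, motivation, expected TTPs
    infrastructure: dict     # C2 domains, server locations, data leakage paths
    capability: dict         # exploits, malware, tooling maturity
    victim: dict             # targeted region, sector, individuals, or data
    related_ttps: list = field(default_factory=list)   # e.g., ATT&CK technique IDs

event = DiamondEvent(
    adversary={"group": "suspected state-sponsored actor", "motive": "espionage"},
    infrastructure={"c2_domains": ["update-check.example.net"]},   # hypothetical domain
    capability={"methods": ["spear phishing", "lateral movement"]},
    victim={"sector": "critical infrastructure"},
    related_ttps=["T1566.001", "T1021"],
)
print(event.adversary["group"], "->", event.victim["sector"])
```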
By modeling the relationships between adversaries, victims, infrastructure, and capabilities, the Diamond Model helps threat hunting teams see how the different elements of a cyberattack interact with and influence each other. The Diamond Model can thus serve as a window or guide into the AI early warning models, helping threat hunters develop their hypotheses of attack scenarios – i.e., to be clear and unambiguous in asking the model questions, to review the model responses critically, and to account for context in both the questions and the model responses. Attack scenarios can be represented as part of a model-based engineering effort; using attack tree or attack graph analysis; in terms of fault tree analysis or failure modes, effects, and criticality analysis (FMECA); or based on the identification of loss scenarios from System-Theoretic Process Analysis (STPA).
Common elements across the attack scenarios (e.g., recurring adversary TTPs) can be starting points for identifying candidate mitigations. By connecting attacker TTPs to countermeasures, ATT&CK™ helps defenders identify and prioritize countermeasures. Mapping an attacker’s actions to the ATT&CK™ framework, using a timeline of events showing the progression of a threat, can result in faster and more accurate threat investigations. Using ATT&CK™ as a guide to feature engineering of the ML model, AI-powered predictive insight can help identify countermeasures to “defend forward” against specific attacker TTPs and preempt an attacker’s activities.
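One simple, hedged example of ATT&CK™-guided feature engineering is shown below: encoding the technique IDs observed in each incident as a multi-hot feature vector that an ML model can consume. The incidents and technique sets are illustrative, not drawn from any real dataset.

```python
# Sketch: turn observed ATT&CK technique IDs per incident into multi-hot features.
from sklearn.preprocessing import MultiLabelBinarizer

incidents = [
    {"id": "inc-001", "techniques": ["T1566.001", "T1059.001", "T1021.002"]},  # phish -> PowerShell -> SMB
    {"id": "inc-002", "techniques": ["T1190", "T1505.003"]},                   # exploit public app -> web shell
]

mlb = MultiLabelBinarizer()
X = mlb.fit_transform([i["techniques"] for i in incidents])
print(dict(zip(mlb.classes_, X[0])))   # feature vector for inc-001
```

Feature vectors like these can then feed the predictive models discussed above.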
By understanding threat actors’ motivations and capabilities from AI models guided by the Diamond Model, and how they are likely to execute each stage of an attack using model features drawn from ATT&CK™, an analyst can quickly assess the severity of a potential attack and identify gaps in the organization’s defenses. For example, you may find that the adversary is a state-sponsored hacker group, that they are targeting your critical infrastructure, and that they are using spear phishing, malware, and lateral movement techniques.
DARPA’s Intelligent Generation of Tools for Security (INGOTS) program is an example of defending forward. The program aims to identify and fix high-severity, chainable vulnerabilities before attackers can exploit them. INGOTS will pioneer new techniques driven by program analysis and artificial intelligence to measure vulnerabilities within modern, complex systems, such as web browsers and mobile operating systems, and will develop a new metrology that characterizes and measures interdependent exploitability for the next generation of security vulnerabilities. To do so, INGOTS will develop new approaches to automatic vulnerability characterization, exploit primitive prediction, and exploit combinatorics. Throughout, INGOTS will develop datasets capturing artifacts and features of vulnerabilities and exploits to further drive program analysis and AI approaches for rapid risk assessment.
Despite progress in applying LLMs to researching the notion of “defending forward” using cyber kill chain frameworks, the emerging research and tools are mostly hampered by a lack of datasets. Often, research relies on old datasets that may not reflect the current sophistication of AI-powered attacks, or on datasets that cannot easily be mapped to the stages of the cyber kill chain. There are few initiatives for sharing data, such as the VERIS framework, and even then the data is heavily anonymized and not aligned to the kill chain stages. Furthermore, there is limited experimental evidence of explicit association and linkage between existing datasets and the corresponding machine learning with cyber kill chain data modeling. The infrequency of certain types of attacks also causes challenges, where positive instances are far fewer than negative instances (no attack). Dealing with imbalanced datasets is challenging because it is often difficult for a classifier to learn the minority class. Moreover, there are many accompanying questions that need to be answered, such as which instances to under- or over-sample, how to generate new synthetic instances, and how to avoid model collapse.
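To illustrate one common mitigation for this imbalance, the sketch below compares a baseline classifier with one trained using class weighting on a synthetic rare-event dataset. Over-/under-sampling and synthetic minority techniques such as SMOTE are alternatives, each with its own trade-offs, including the model collapse risk noted above.

```python
# Sketch: handling rare "attack" instances with class weighting (synthetic data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~2% positive (attack) instances to mimic rare-event data
X, y = make_classification(n_samples=5000, weights=[0.98], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

for name, clf in (("baseline", baseline), ("class-weighted", weighted)):
    # Recall on the minority (attack) class is what usually suffers under imbalance.
    print(f"{name}: attack-class recall = {recall_score(y_te, clf.predict(X_te)):.2f}")
```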
There are also other critical challenges for analysts leveraging ATT&CK™, as described in this report by Cyentia, including the rapid pace of updates, ambiguous tactic-technique relationships, sub-technique under-reporting, and the absence of reporting in specific industry segments or ICS environments. According to Cyentia, the basic problem of ATT&CK™ is that hierarchical structures are missing or inconsistent. The techniques cannot be assigned exclusively to individual tactics. Techniques can often be used by multiple tactics and across multiple phases of an attack. The identifiers of both tactics and techniques are also not traceable. This makes it extremely difficult to understand and work with these entries and to apply them as part of a feature engineering task when developing an AI model.
However, MITRE Engenuity is targeting this problem of dataset poverty and ATT&CK™ complexity to improve cyber defenses and align efforts to ATT&CK™. Its Center for Threat-Informed Defense project, Sensor Mappings to ATT&CK, gives cyber defenders the information they need to identify and understand cyber incidents occurring in their environment. Various tools and services are available to collect system or network information, but it is not always clear how to use those tools to provide visibility into specific threats and adversarial behaviors. These mappings between sensor events and ATT&CK data sources allow cyber defenders to create a more detailed picture of cyber incidents, including the threat actor, technical behavior, telemetry collection, and impact. Funding for this research comes from a who’s who of security technology providers and users, including Palo Alto Networks, IBM, Citi, and Verizon, among others. Other recent research from the Center includes improvements to TRAM – the Threat Report ATT&CK Mapper. Until the current release, the task of mapping TTPs in Cyber Threat Intelligence (CTI) reports was difficult, error-prone, and time-consuming; TRAM’s use of LLMs has been improved to identify which adversary tactics, techniques, and procedures (TTPs) are found in CTI reports.
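As a rough, keyword-based stand-in for what TRAM automates with LLMs, the sketch below tags CTI report sentences with candidate ATT&CK™ technique IDs. It only illustrates the input/output shape of the task; TRAM’s actual approach uses trained language models rather than keyword lookups.

```python
# Toy TTP-tagging sketch (keyword lookup, not TRAM's LLM-based approach).
TECHNIQUE_KEYWORDS = {
    "T1566":     ["phishing", "malicious attachment", "lure"],
    "T1059.001": ["powershell"],
    "T1021.002": ["smb", "admin share"],
    "T1486":     ["encrypted the files", "ransom"],
}

def map_sentence_to_techniques(sentence: str) -> list:
    """Return ATT&CK technique IDs whose keywords appear in the sentence."""
    text = sentence.lower()
    return [tid for tid, words in TECHNIQUE_KEYWORDS.items()
            if any(word in text for word in words)]

report = [
    "The actor delivered a malicious attachment via a phishing email.",
    "PowerShell was used to download the second-stage payload.",
    "Finally, the operators encrypted the files and demanded a ransom.",
]
for sentence in report:
    print(map_sentence_to_techniques(sentence), "-", sentence)
```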
Along with this promising research, AI-powered tools that follow a security-led approach are emerging. These tools reflect the need for a modern solution that can not only help SOC teams predict and investigate the most pressing cyber events, but also provide guidance on how to remediate.
For example, Rubrik’s ability to provide time series data insights directly into Microsoft Sentinel enables organizations to address evolving cyber threats and safeguard their most sensitive information. With this integration, the platform is designed to automatically create a recommended task workstream in Microsoft Sentinel by leveraging large language models and generative AI through OpenAI.
Meanwhile, Fortinet provides a portfolio of AI-backed tools to cover the cyber kill chain. For example, Fortinet uses AI to improve deception techniques to flag early-stage reconnaissance that precedes actual cyberattacks. Palo Alto’s Cortex also delivers an integrated suite of AI-driven, intelligent products for the SOC for detection, prevention, and response.
Nozomi covers OT environments with its Vantage IQ offering. Vantage IQ uses AI and machine learning to provide AI-assisted data analysis to help security teams reduce cyber risk and speed up incident response. Key features of Vantage IQ include:
- AI-powered insights, where alerts are automatically correlated, prioritized and supported with root-cause information and deep neural networks that identify activity patterns in network data
- AI-based queries and analyses that allow users to leverage natural language queries and get answers to common questions about vulnerabilities, network assets and other environmental details
- Advanced predictive monitoring, which provides early warnings about system behaviors that deviate from the norm
- A time series feature that alerts on network changes, with an additional level of alerting on unusual changes in the bandwidth of activity passing through the sensors monitoring those networks
In addition, I am particularly excited about the possibilities that a new start-up called DistributedApps is working on. They are trying to solve a hard AI problem involved in coordinating decision-making among distributed agents – how, to whom, and when to distribute knowledge and decision-making across different network nodes and intelligent agents. This type of capability could help serve SOAR, IAM, CTI, and XDR tools. They are exploring the use of decentralized small LLMs that can be coordinated by blockchain consensus algorithms and smart contracts and that use zero-knowledge proofs (ZKPs) for data privacy. This type of technology could also serve the Web 3.0 initiatives of which I am a fan.
So what are your views on the impacts of AI on the cyber kill chain? How are you employing AI for cyber defense? What are the challenges in employing AI for cyber defense and for cyber early warning systems? Let me know your views and comments on this evolving topic.
And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, Augmented Reality, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.