April 18, 2024

It has become evident over the last few years that national governments are applying tighter controls to the security of software and hardware products: from labels for IoT devices in the US and abroad, to controls over AI research and bans on high-risk AI models, to more timely reporting requirements for vulnerabilities, ransomware, and breaches imposed on publicly traded companies by the SEC and CISA, to increased controls over the hardware and software supply chain, to mandatory cyber controls for DoD contractors, and, overall, to greater surveillance of the cyber ecosystem. This same focus on central control extends to identities, to information flow, and to the production, distribution, and use of technology, and it is evidenced in various government-sponsored cyber research programs and national strategies. For example, from the US National Cybersecurity Strategy (March 2023):

“To realize this vision, we must make fundamental shifts in how the United States allocates roles, responsibilities, and resources in cyberspace.

…We must rebalance the responsibility to defend cyberspace by shifting the burden for cybersecurity away from individuals, small businesses, and local governments, and onto the organizations that are most capable and best-positioned to reduce risks for all of us.”

This statement signifies that big tech, big cybersecurity, and the central government will take the lead in cybersecurity – basically, we should expect that they will collect all, analyze all, and instruct all. But aren’t these the same entities that got us to where we are today? According to the Cyber Safety Review Board, Microsoft’s lax security and risk management culture led to the Exchange hack. And RAND commented in September 2023,

“… there is one aspect that has yet to garner the attention it deserves: the security devices employed to protect the system contained the very vulnerabilities used by the actors to gain access. This is not the first time security software has been abused [e.g., SolarWinds, Fortinet, Ivanti, Cisco, Palo Alto, more], as it presents a juicy target: operating at elevated levels of privilege and storing some of the most sensitive data.”

<a href="https://www.vecteezy.com/free-photos/cyber-security-ai">Cyber Security AI stock photo courtesy of Vecteezy.com</a>

And the government hasn’t exactly been the best steward of public data, as local, state, and federal government branches have all experienced major cyber incidents in the last few years. Meanwhile, the detect-and-stop approach espoused by the leading cybersecurity and software companies seems not to be working. Threat- and vulnerability-driven approaches are hard and require a lot of data and staffing. Some of the pillars of zero trust [e.g., EDR], which has been hailed by government experts as the next best thing, are maligned as too complex to adopt and scale and as producing too many false positives and negatives.

As I was thinking about this growing and intrusive global trend of tighter central control and the increasing effects of the cyber issues that we still collectively face, my questions are: Can AI play a decisive role in turning the global tide of cyber attacks? Is it possible to achieve verifiable, reliable, explainable, auditable, robust, and unbiased AI without costly government intervention? And is it possible to have a more democratic approach to the control of AI resources [i.e., identity and data]? In light of these questions, I was wondering what kinds of influencers we are currently seeing in some of the most important cybersecurity research areas. In my view, the most important cyber research areas all have an element of AI included.

In this two-part article I explore some of the key research areas where AI and cybersecurity are intimately entwined, including:

  1. AI and Enterprise Security [Part 1]
  2. AI and Cyber-Physical Systems Security [Part 1]
  3. AI and the Intersection of Humans, Agents, and Cyber [Part 1]
  4. AI and Secure by Design [Part 2]
  5. AI and the Democratization of Cyber Defense [Part 2]

Give me some feedback on these choices: what are the highest-priority research areas in your view? You can find the second part of this feature at this link.


  • AI and enterprise security.

Everywhere you look in enterprise cybersecurity now, you see AI research for a variety of use cases – threat analysis, anomaly detection, malware detection, incident response, authentication, compliance, vulnerability detection, deep fake detection, network design and security; the list is endless. For example, the Center for Security and Emerging Technology (CSET) has a CyberAI Project that focuses on the intersection of AI/ML and cybersecurity, including analysis of AI/ML’s potential uses in cyber operations, the potential failure modes of AI/ML applications for cyber, how AI/ML may amplify future disinformation campaigns, and geostrategic competition centered around cyber and AI/ML. It is likely that as vendors turn this research into products, security analysts will be replaced by their AI counterparts. For example, SentinelOne is already announcing an AI security analyst called Purple AI. It is intended to “scale autonomous protection across the enterprise with patent-pending AI technology.”
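
To make the anomaly detection use case a bit more concrete, here is a minimal sketch (my own illustration, not any vendor’s product) that trains an unsupervised model on hypothetical login telemetry and flags outliers; the features, values, and contamination setting are assumptions for illustration only.

```python
# Minimal sketch: flagging anomalous logins with an unsupervised model.
# Features and data are hypothetical; a real deployment would use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: [hour_of_day, failed_attempts, MB_downloaded]
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),     # mostly business hours
    rng.poisson(0.2, 500),      # rare failed attempts
    rng.normal(50, 15, 500),    # typical download volume
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# Score a new event: 3 a.m. login, many failures, exfiltration-sized download.
suspicious = np.array([[3, 6, 900]])
print(model.predict(suspicious))        # -1 means "anomaly"
print(model.score_samples(suspicious))  # lower scores are more anomalous
```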

The National Academies of Sciences, Engineering, and Medicine convened a workshop in 2019 to discuss and explore concerns about the potential implications of AI and ML for cybersecurity. The workshop covered promising AI use cases across the cyber kill chain, offense vs. defense usages, and challenges to AI use in the cyber domain. Although opinions differed somewhat in the proceedings report, it was generally acknowledged that the explainability needed for users to trust AI is an ongoing research issue, especially when it comes to defensive cyber, while outcomes are more important than explainability when it comes to offensive cyber.

The European Union Agency for Cybersecurity (ENISA) looked at cyber research and AI a couple of years back and came out with this report. This study makes recommendations to address some of the challenges through research and identifies key areas to guide stakeholders driving research and development on AI and cybersecurity. At the same time, the recently enacted EU AI Act brings with it some hefty penalties including fines up to €35 million (about $38 million USD) for use of prohibited AI systems and up to €15 million (about $16.3 million) for non-compliance with requirements for high-risk systems. I wonder if this will put a damper on some of the exuberance surrounding AI research. I predict that this yin and yang about AI research/use and regulation will continue through 2024 and 2025, and for years beyond.

I looked at the research and application of AI to cyber defense several years back and came out with this article. In it, I reflect on how AI can be a game-changer to the operation of Security Operations Centers (SOCs). Check it out at this link.


  • AI and cyber-physical systems security.

Critical infrastructure [often referred to as Operational Technology – OT] remains the soft underbelly for cyber attackers. Critical infrastructure operators have fewer resources (monetary and personnel) to tackle cyber threats, so research to improve AI-driven automation will be important to tackle next-generation threats. The growing use of AI in autonomous vehicular and robotic medical systems has also created a very active area of research and regulatory oversight as AI, security, and safety collide. What is interesting to me is whether regulatory oversight for safety and security can keep pace with the accelerating use of AI in the critical infrastructure space. And given previous approaches to regulation, will AI necessitate changes to the regulatory regime and protocols to keep up?

One area of research in critical infrastructure security that I believe is particularly significant is modeling interdependent cyber-physical systems to assess the relationship [chaining] and impact of vulnerabilities. AI, digital twins, and simulation capabilities are all tools that can assist in the research and provide insights on this topic. Next-generation interconnectivity is collapsing the boundary between the digital and physical worlds and exposing some of our most essential systems to disruption. Understanding interdependencies is vital to defending your system and to building a resilient system, whether those are interdependencies affecting your supply chain, your grid, your sensors, or much more. One example of research in this area, by DOE’s Idaho National Lab, combines AI and digital twin technology to improve the ability of pharma OT systems to identify, detect, respond to, and recover from cyber threats and vulnerabilities in sub-second times. New frameworks are needed to assess vulnerabilities and identify resilience strategies for critical infrastructure systems that are highly interdependent.
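
As a rough illustration of the interdependency and chaining idea (not INL’s actual method), here is a minimal sketch that models a cyber-physical system as a directed dependency graph and enumerates attack paths from exposed entry points to a critical asset; the assets, edges, and CVE placeholders are hypothetical.

```python
# Minimal sketch: modeling cyber-physical interdependencies as a directed graph
# and enumerating vulnerability "chains" from exposed assets to a critical one.
# Assets, dependencies, and vulnerability tags below are hypothetical examples.
import networkx as nx

g = nx.DiGraph()
# Edge u -> v means "a compromise of u can propagate to v".
g.add_edges_from([
    ("vpn_gateway", "historian"),
    ("historian", "scada_server"),
    ("vendor_laptop", "engineering_ws"),
    ("engineering_ws", "scada_server"),
    ("scada_server", "plc_pump_station"),
])
# Tag nodes with known (placeholder) vulnerabilities.
vulns = {"vpn_gateway": "CVE-XXXX-1111", "scada_server": "CVE-XXXX-2222"}

# Enumerate attack paths from internet-exposed entry points to the physical process.
for entry in ("vpn_gateway", "vendor_laptop"):
    for path in nx.all_simple_paths(g, entry, "plc_pump_station"):
        chained = [vulns[n] for n in path if n in vulns]
        print(f"{' -> '.join(path)}  (chained vulns: {chained})")
```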

AI may possibly help to cope with the increasing complexity of interdependent systems, but it can also add more complexity and risk to a system if decisions made by the AI system are not explainable. The Industry IoT Consortium’s AI Framework is aimed at providing guidance to enable the use of AI, while making it safe to use in the operation of industrial IoT systems.

Using AI to minimize your blind spots and assure access to your essential resources is only half the equation. For many parts of the critical infrastructure, you also need to ensure that safety is built in, which is why AI-driven interdependency analysis across systems, security, and safety considerations is so crucial. Interdependency analysis is also a major technique that nation-states use to identify targets, whether at the individual level or the system level. As such, I see AI being very helpful on both the defense and offense sides.

Along this line of research on interdependencies, I discovered some research by Darktrace, a cybersecurity company. The Darktrace AI Research Centre examines how AI can be applied to real-world problems to find new paths forward to augment human capabilities, including Attack Path Modeling research and cyber recovery. Understanding interdependencies is critical to the success of these research efforts.

In terms of government-funded research on critical infrastructure, the US Department of Energy recently announced a $45 million investment in cybersecurity research for the energy sector, including projects on artificial intelligence detection and response and quantum communication for the grid. DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) will fund 16 projects that are largely aimed at reducing cyber risks and improving the resilience of the electricity, oil, and natural gas sectors. The investments came shortly after the nation’s top security officials sounded the alarm on Volt Typhoon, a China-linked hacking group that has targeted critical infrastructure in ways that signify destructive or disruptive intent. One of the projects is an “artificial intelligence and data processing capability” that can detect and respond to hacks of grid edge devices, an umbrella term for customer-owned controls like smart thermostats and electric vehicle charging stations. Another AI-focused project is a framework for automating vulnerability assessments, discoveries, and mitigations in distributed energy resources.

As a corollary to the development and use of AI tools, ORNL has established the Center for Artificial Intelligence Security Research, or CAISER, to address emerging AI threats. With a particular focus on cyber, biometrics, geospatial analysis, and nonproliferation, CAISER will analyze vulnerabilities, threats, and risks related to the security and misuse of AI tools in national security domains. NIST, DHS CISA, and others are also developing standards, tests, and tools to ensure AI safety and improve cybersecurity practices through the application of AI. For example, according to the NIST AI website, NIST is performing the following AI research:

      • Conducting fundamental research to advance trustworthy AI technologies and understand and measure their capabilities and limitations.
      • Applying AI research and innovation across NIST laboratory programs.
      • Establishing benchmarks and developing data and metrics to evaluate AI technologies.
      • Leading and participating in the development of technical AI standards.
      • Contributing to discussions and development of AI policies, including supporting the National AI Advisory Committee.
      • Hosting the NIST Trustworthy & Responsible AI Resource Center providing access to a wide range of relevant AI resources.

So does this mean liability rests with the government when these tests fail to detect unsafe AI? And does this mean AI doesn’t get released until it passes the test? How high is the bar? What kind of safe harbor provisions will companies receive that undergo testing?

The Cloud Security Alliance (CSA) is also undertaking an AI Safety Initiative that has the backing of several AI heavyweights such as OpenAI, Anthropic, and Google. This initiative is focused on the safe use of AI, rather than applying AI to a safety problem. According to CSA, the initiative provides a

“… unique structure for rapid innovation and collaboration with governments, industry and NGOs. CSA shall:

•  Create trusted best practices for AI and make them freely available, with an initial focus on Generative AI

•  Give customers of all sizes confidence to accelerate responsible adoption due to the presence of guidelines for usage that mitigate risks

•  Complement AI assurance programs within governments with a healthy degree of industry self-regulation

•  Provide forward thinking program to address critical ethical issues and impact to society resulting from significant advances in AI over the next several years.”

It features a structured framework comprising four working groups that will address the multifaceted challenges surrounding AI safety. While the initial focus lies on AI security through upcoming deliverables, the long-term objective of the CSA AI Safety Initiative is to encompass both AI security and AI safety. The structure involves a cooperative blend of private organizations, government, and academia to align around industry standards, as outlined in Google’s Secure AI Framework (SAIF). CSA recently announced the addition of more members to the Initiative.

The Emerging Technology Observatory recently conducted a survey of AI research with a special focus on AI safety. The survey’s key findings included:

      • AI safety research is growing fast, but is still a drop in the bucket of AI research overall.
      • American schools and companies lead the field, with Chinese organizations less prevalent than in other AI-related research domains.
      • Notable clusters of AI safety research from ETO’s Map of Science covered themes including data poisoning, algorithmic fairness, explainable machine learning, gender bias and out-of-distribution detection.

So, at least from a research perspective, it seems that AI safety is growing in importance but still has a ways to go in terms of the spend on it.


  • AI and the intersection of humans, agents, and cyber.

A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement learning [DRL]. It shows machines can learn to steer human choice-making through their interactions with us. The method can also be used to defend against influence attacks. According to The Conversation, machines could be taught to alert us when we are being influenced online, for example, and help us shape our behavior to disguise our vulnerability (for example, by not clicking on some pages, or clicking on others to lay a false trail).

Along a parallel thread, SRI has been chosen to deliver cyber-psychology-informed network defense technology for IARPA. The program focuses on the psychology of cyber attackers by understanding human limitations, such as innate decision-making biases and cognitive vulnerabilities. SRI has a long history of research in the AI space – more than 50 years. It created the AI Center, one of the earliest labs focused on AI, which has delivered world-changing advances like the first mobile robot with the ability to perceive and reason, commercial-quality speech recognition, and the original Siri virtual personal assistant. I look forward to seeing how this research unfolds.

It also seems to me that AI should be able to improve reputation-based filters to better expose new types of phishing scams that utilize public-cloud-hosted URLs. These so-called Fully Undetectable or FUD Links represent the next step in phishing-as-a-service and malware-deployment innovation. Basically, attackers are repurposing high-reputation infrastructure for malicious use cases. One recent malicious campaign, which leveraged the Rhadamanthys Stealer to target the oil and gas sector, used an embedded URL that exploited an open redirect on legitimate domains, primarily Google Maps and Google Images. This domain-nesting technique makes malicious URLs less noticeable and more likely to entrap victims.
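
To illustrate the domain-nesting pattern described above, here is a minimal sketch that flags URLs whose query parameters embed a redirect target on a different domain than the outer, high-reputation one; the parameter list and example URL are simplified assumptions of my own, not a production detector.

```python
# Minimal sketch: flag "domain-nested" URLs where a high-reputation domain
# carries an embedded redirect to a different (potentially malicious) domain.
# The parameter names checked and the example URL are illustrative assumptions.
from urllib.parse import urlparse, parse_qs

REDIRECT_PARAMS = {"url", "q", "link", "continue", "redirect", "dest"}

def embedded_redirects(url: str) -> list[str]:
    """Return embedded target domains that differ from the outer domain."""
    outer = urlparse(url)
    findings = []
    for param, values in parse_qs(outer.query).items():
        if param.lower() not in REDIRECT_PARAMS:
            continue
        for value in values:
            inner = urlparse(value)
            if inner.scheme in ("http", "https") and inner.hostname \
                    and inner.hostname != outer.hostname:
                findings.append(inner.hostname)
    return findings

# Hypothetical example of an open redirect riding on a trusted domain.
sample = "https://maps.example-trusted.com/redirect?url=https://evil.example.net/payload"
print(embedded_redirects(sample))   # ['evil.example.net']
```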

Along with human-computer interaction enhanced by AI, we are also seeing research and product announcements regarding AI-based agent-computer interaction. It’s similar to human-computer interaction (HCI) but focuses more on LLMs and LVMs. AI-based agents such as Devin and SWE-agent are already being released from research and making inroads in the world of software coding, testing, and bug hunting. It will be interesting to see what impact these tools eventually have on the bug bounty platforms, the software supply chain, as well as on the future of software engineers. [The agent’s AI model and version might have to be included in SBOMs?] The AI for Cyber Defence (AICD) research centre in the UK is also applying cutting-edge, deep-learning-based approaches to intelligent agents in the following cyber-related areas:

      • Autonomous cyber operations and network defense: to what extent can a computer network be actively managed and defended by intelligent autonomous agents [similar to the DARPA Grand Challenge]?
      • AI for systems security: can your attacker model resist an autonomous adversary [similar to the DARPA Grand Challenge]?
      • Adaptive fuzzing and state-machine learning: can intelligent agents (IAs) find vulnerabilities in mainstream applications? [A minimal mutation-fuzzing sketch follows this list.]
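
As a toy illustration of the adaptive fuzzing idea in the last bullet (not AICD’s actual approach), here is a minimal mutation fuzzer that randomly perturbs a seed input and records inputs that crash a deliberately fragile stand-in parser; a serious fuzzer would add coverage feedback, corpus management, and sanitizers.

```python
# Toy mutation fuzzer: randomly flips bytes in a seed input and records
# inputs that crash the target. The "parser" below is a deliberately fragile
# stand-in, not any real application.
import random

def fragile_parser(data: bytes) -> None:
    # Hypothetical bug: crashes on a specific magic byte at position 3.
    if len(data) > 3 and data[3] == 0xFF:
        raise ValueError("parser crash")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    rng = random.Random(0)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            fragile_parser(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

print(f"crashing inputs found: {len(fuzz(b'HELLO WORLD'))}")
```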

Cyber defenders of Operational Technology (OT) systems, out of necessity given the way these systems are architected and missioned, must place a high level of trust in the operation of their systems. Trust is a combination of many factors, mainly: reliability, resiliency, safety, security, privacy, ethics, and usability. When you add robotics and autonomous systems, and the interaction of humans with these systems, the trust equation can get very complicated very fast. Research activities in this space are looking at the challenges of applying AI to solve this complexity equation while optimizing the trade-offs among the trust factors listed. [Check out my white papers on trust at the Downloads page here.] It is critical that progress is made in this area due to the looming threat of China’s Volt Typhoon and similar adversarial nation-state actors.

Trust in cyber defense has been a subject of much research over the last several years – zero trust, that is. Zero trust is an approach where each major activity by a subject on the network is not implicitly trusted but is monitored and checked for proper authorization – from initial provisioning, to authentication, to access to any resource. It involves knowing who and what resources are on your network; what obligations and permissions are allowed; the provenance, authenticity, and reputation of each subject and object; the current state of the subjects and objects on your network; and the ability to lock down or separate the processing and communication between subjects and objects. Access management controls extend to every major access attempt.
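
As a minimal sketch of the per-request evaluation that zero trust implies (a simplified model of my own, not any specific product), every access decision can combine identity, device posture, and resource sensitivity, with deny as the default:

```python
# Minimal sketch of a zero-trust policy decision point: every request is
# evaluated against identity, device posture, and resource sensitivity.
# The attributes and rules are simplified illustrations.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    device_compliant: bool
    mfa_verified: bool
    resource: str
    resource_sensitivity: str   # "low" | "high"

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only when every check passes."""
    if not (req.device_compliant and req.mfa_verified):
        return False
    if req.resource_sensitivity == "high" and req.role not in {"admin", "operator"}:
        return False
    return True

req = AccessRequest("alice", "analyst", device_compliant=True,
                    mfa_verified=True, resource="historian-db",
                    resource_sensitivity="high")
print(authorize(req))   # False: role not allowed for a high-sensitivity resource
```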

In the same way that zero trust is applied to the general cybersecurity of the enterprise, any critical artificial intelligence (AI)-based product or service should also be continuously questioned and evaluated. This suggests a zero trust approach to AI. For example, all data sources and code used by AI systems need to be monitored and evaluated, and we are already seeing some provenance and reputation solutions that cover software, such as SBOMs, that should be extended to AI. However, to improve transparency and explainability for AI, content and model provenance solutions need to be improved for AI models, as discussed in this research report. The Content Authenticity Initiative is one organization that is working to promote adoption of an open industry standard for content authenticity and provenance for AI use.
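
One small ingredient of a zero trust approach to AI is pinning model and dataset artifacts to known-good digests before they are loaded, in the spirit of extending SBOM-style provenance to AI. A minimal sketch follows; the file names and digests are placeholders.

```python
# Minimal sketch: verify model/dataset artifacts against pinned hashes before
# use, one small ingredient of treating an AI pipeline with zero trust.
# File names and expected digests are placeholders.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    "model.onnx": "sha256-hex-digest-recorded-at-build-time",
    "training_data.parquet": "another-recorded-sha256-hex-digest",
}

def verify_artifact(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED_DIGESTS.get(path.name)
    return expected is not None and digest == expected

for name in PINNED_DIGESTS:
    artifact = Path(name)
    if artifact.exists() and not verify_artifact(artifact):
        raise RuntimeError(f"Provenance check failed for {name}; refusing to load")
```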

AI can also enhance the performance of enterprise zero trust implementations. Researchers are already evaluating the following examples, according to this report by Cloud Security Alliance:

      • Behavioral Analytics and Anomaly Detection: Empowered by AI, behavioral analytics meticulously scrutinizes user actions to establish a baseline of ‘normal’ behavior and to flag anomalies and potential threats. By serving as a sentinel for unauthorized access or compromised accounts, AI reinforces the very essence of zero trust.
      • Automated Threat Response and Remediation: AI can automate response measures to different cyber threats. This includes swift isolation of compromised devices, withdrawal of access privileges, or the seamless initiation of incident response protocols.
      • Adaptive Access Control: AI technologies that are embedded in the fabric of access control systems can dynamically adjust privileges in response to real-time risk assessments. Enriched with context such as user location, device health, and behavior patterns, AI generates an informed narrative for granting or denying resource access. [A minimal risk-scoring sketch follows this list.]
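
To make the adaptive access control bullet concrete, here is a minimal risk-scoring sketch; the signals, weights, and thresholds are illustrative assumptions on my part, not taken from the CSA report.

```python
# Minimal sketch: adaptive access control via a context-based risk score.
# Signals, weights, and thresholds below are illustrative assumptions.
def risk_score(signals: dict) -> float:
    weights = {
        "new_location": 0.4,       # login from a location never seen for this user
        "unmanaged_device": 0.3,   # device not enrolled or failing posture checks
        "off_hours": 0.1,          # outside the user's normal working hours
        "impossible_travel": 0.5,  # geographically implausible session sequence
    }
    return sum(weight for name, weight in weights.items() if signals.get(name))

def access_decision(signals: dict) -> str:
    score = risk_score(signals)
    if score >= 0.7:
        return "deny"
    if score >= 0.3:
        return "step-up-auth"   # require MFA re-verification
    return "allow"

print(access_decision({"new_location": True, "unmanaged_device": True}))  # deny
```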

KuppingerCole Research also released a report recently – a Leadership Compass on Zero Trust Network Access – that predicts the Zero Trust Network Access (ZTNA) market will reach a staggering $7.34 billion by 2025, with a Compound Annual Growth Rate (CAGR) of 17.4%. According to the report, this significant growth is driven by ZTNA’s ability to address the security requirements arising from the increasing adoption of remote work, cloud-based applications, and the ever-evolving landscape of cyber threats. ZTNA can be complex and costly to implement, but AI assistance may be the key to this accelerated adoption rate.


  • Summary

AI and cybersecurity promise to be disciplines that are woven together in every aspect of computing and analytics. Understanding and controlling the complexity of the interaction between these two disciplines will be an ongoing research effort for years to come. In the meantime, the race between cyber defenders and cyber attackers in the employment of AI will be accelerating. Will this race result in mounting losses for the defender? Or can AI turn the tide on attackers? Check out the second part of this series at this link to find out more.


Thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Let us know if you are innovating in the cyber space or have a cybersecurity product you would like discussed on Active Cyber™. Please give us your feedback on this article or other content and we’d love to know some topics you’d like to hear about in the areas of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, AI/ML, Augmented Reality, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.