February 24, 2026
AI has been the most dominant technology theme for several years now. It seems to get even more attention recently as the race to AI dominance heats up and more collateral damage is left in its wake. I believe there are more folks like me who are on the fence regarding the pros versus cons of this transformative technology. Recent stock market rises and falls due to changing AI sentiment also reflect this uncertainty about the path ahead. Businesses and workers are concerned over AI-driven disruption and its impact on various industries. Investors have recently shifted their focus away from technology shares, particularly those related to artificial intelligence, as the market reacts to these sentiments. The market’s reaction to these sentiments has been a notable shift from the previous optimism surrounding AI and its potential to drive economic growth. Despite these shifts in the stock market, the global AI build-out continues to move forward.
In this part 2 article I explore some of the geopolitical and cybersecurity issues and impacts that AI infrastructure build-out and AI systems are having and will have on our world view, our business view, and our human view. I discuss a trust framework that must be developed and deployed to get the guardrails for protecting against the dark side of AI. Let me know what you think I got right and what I got wrong and where you sit on the spectrum of sentiment regarding AI. I believe we cannot defer any longer what path we choose for AI.
The Geopolitics of the AI Arms Race: Sovereignty and Supply Chains
AI infrastructure has become a proxy for national power. Compute capacity is becoming a strategic resource, akin to oil or rare earth minerals. Governments are pouring billions into chips, data centers, and sovereign cloud ecosystems. The stakes in the AI “arms race” are enormous from an economic and national security perspective. Nations that fail to secure sovereign AI infrastructure risk ceding control over their digital economies, defense systems, and critical infrastructure.
This is why Europe is pushing federated cloud‑edge initiatives like IPCEI‑CIS and Gaia‑X, and why telecom operators are exploring sovereign AI for 6G networks — ensuring that AI agents controlling spectrum, mobility, and security remain under national governance. In response to the EU push for strict data and privacy regulations, Amazon invested €7.8 billion in Brandenburg Germany for an EU-based sovereign cloud that will be locally controlled and staffed. Meanwhile, IBM has launched a solution to help enterprises, governments, and service providers build, deploy, and manage AI sovereign environments and to enhance enterprise compliance management.
China is building a fully domestic, chips to cloud, AI stack — from Huawei’s Ascend processors to homegrown (allegedly illegitimately obtained) lithography alternatives. Alibaba is expanding globally while investing $53 billion in cloud and AI. China controls 80-90% of critical minerals and metals, and that’s a big problem for the West’s data center, energy, and defense sectors’ projected AI needs. The U.S., meanwhile, is backing hyperscalers like Microsoft and Amazon, along with Nvidia with massive public‑private partnerships. The Trump Administration is making a $1 trillion semiconductor investment to end U.S. reliance on China, said Commerce Secretary Howard Lutnick. US is also pursuing economic and political interests into Greenland, India and the Mideast to secure access to essential mineral resources.
Other regions also have emerging digital sovereignty aspirations:
- Rwanda has engaged Cisco on digital sovereignty and AI infrastructure in efforts to become a regional technology and innovation hub with global partnerships that strengthen digital infrastructure, skills, and security.
- At the India AI Impact Summit in February 2026, Union IT Minister Ashwini Vaishnaw announced $200 billion in AI-driven investments over the next two years. He said approximately $90 billion of the $200 billion total has already been committed by various companies across “five layers” of India’s sovereign AI stack: infrastructure, energy, compute capacity, models, and end-use applications. A significant portion is earmarked for data centers and “AI factories,” with the minister saying that about 51% of India’s power generation capacity will be clean energy. The figure also includes venture capitalists’ investments in what he called “deep tech startups” and the scaling of India’s Digital Public Infrastructure (DPI).
- According to Arab News, Saudi Arabia’s AI aspirations are driven by its Vision 2030 initiative, which aims to diversify the economy and reduce its dependency on oil. The Kingdom is targeting for AI to contribute 12% of its GDP by 2030. This ambitious goal is supported by significant investments, including discussions around establishing a $40 billion fund in partnership with prominent venture capitalists in the United States. The Public Investment Fund (PIF) is leading a strategic investment initiative, earmarking $925 billion for this transformation. Saudi Arabia’s AI strategy includes integrating AI into various sectors such as healthcare, energy, and mobility, aiming to create smart city mobility technology and improve traffic safety. The Kingdom is also focusing on developing a robust AI ecosystem to position itself as a regional leader in AI development with HUMAIN – a Saudi-PIF-owned AI company leading the way. For example, recently, HUMAIN concluded a $3bn investment in xAI during the latter’s Series E funding round.
Global supply chains add to the complexity of AI sovereign aspirations. Organizations now face cascading risks from:
- Vendors embedding AI into products.
- Supply chain AI vulnerabilities across the value chain – hardware, software, model, data, labor.
- Lack of governance in third party AI tools.
- China geopolitical risks.
Third parties are the largest source of new risk to a business. This now includes, inherently, AI risks from vendors, suppliers and partners who leverage AI as a critical component of their product or service. Then there are vendors who integrate directly into your tech stack to equip your teams with AI capabilities in their day-to-day workflows. These vendors are fiercely competing for AI workloads, shipping development features without offering governance capabilities for what a customer may build. This trend is drastically altering how organizations gather information, conduct impact analysis, and manage risk during assessments and onboarding. What once required a security-focused assessment and oversight — social media planning tools, hiring solutions, spreadsheet add-ons, etc.—now require rapid analysis and contextualization of security standards, data and privacy implications, underlying AI model risk, and the specific AI risks of the use case/engagement.
Geopolitical risks are often surfaced in the supply chain as China’s predatory practices play a large role in the AI arms race. For example, ASML is a Dutch multinational corporation and semiconductor company that specializes in the development and manufacturing of photolithography machines which are used to produce integrated circuits. It is the largest supplier for the semiconductor industry, as well as the most advanced producer of extreme ultraviolet photolithography (EUV) machines that are required to manufacture the most advanced AI chips. ASML claimed that a former worker in China “allegedly” stole information about the company’s technology. This was not the first time that ASML was allegedly linked with an intellectual property breach connected to China. In its 2021 annual report, ASML mentioned that Dongfang Jingyuan Electron Limited “was actively marketing products in China that could potentially infringe on ASML’s IP rights.” At the time, the United States Department of Commerce expressed concern about economic espionage against ASML. In October 2023, Dutch newspaper NRC Handelsblad reported that the former employee who “allegedly” stole data about ASML’s technology subsequently went to work for Huawei.
The AI supply chain — datasets, weights, libraries, chips, and model hubs — is now a national‑security surface. As AI use grows, the attack surface is expanding faster than governance frameworks can adapt. The risk impact is exploding, making it increasingly more difficult to architect resilience in the supply chain. How are organizations dealing with this supply chain risk?
The Geopolitics of the AI Arms Race: National Security versus Human Security
Meanwhile local and national politics are colliding with the AI industry push. Local resistance to mega‑data‑center construction is rising, and permitting delays alone threaten to derail capacity timelines. According to leftist-leaning Socialist Project’s The Bullet –
“It is time to seize the moment and begin building organized labor-community resistance to the unchecked development and deployment of these systems and support for a technology policy that prioritizes our health and safety, promotes worker empowerment, and ensures that humans can review and, when necessary, override AI decisions.”
This position seems to be mirrored in several states as they draft legislation to impose some guardrails around the unrestricted use of resources and lack of protection for privacy rights that several AI hyperscalers and the federal government seem to favor. For example, Florida’s AI Bill of Rights, announced in December 2025, covers data privacy, parental controls for children’s interactions with AI, requirements for consumers to be alerted when dealing with AI, and much more. The bill is pending before the Florida legislature which convened in January. Republican Governor DeSantis said it’s needed to protect Floridians and the state’s natural resources from potential harms of unrestricted and explosive growth of artificial intelligence:
“Any new technology, as it’s developed, needs to be developed in an ethical way, in a moral way, and it’s got to reinforce our values as Americans. And it cannot be something that is seeking to supplant the human experience. It needs to enhance the human experience.”
The new measure would prohibit state and local government agencies from using AI tools created by “foreign countries of concern,” such as China. And it would require what’s put into AI platforms by users to be kept private and prohibited from being sold.
Also speaking out is PauseAI, an international movement decrying the rapid expansion of artificial intelligence. The organization calls for:
“a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and [with] strong public buy-in.”
Officially opposing Florida’s AI Bill of Rights is the Washington-based Computer and Communications Industry Association, an international group representing Google, Meta, and others.
Meanwhile, the federal government continues efforts to “streamline” the process for data center construction, including the rollback of EPA standards, while trying to steamroll states into submission of a national mandate for AI. For example, the U.S. EPA is prioritizing a review track for chemicals used in data centers, aligning with the Trump administration’s 2025 Executive Order, “Accelerating Federal Permitting of Data Center Infrastructure.” The administration cites the AI arms race as a matter of national security, saying that the “loser” of such a race risks ceding control over its digital economy and military capabilities to the “winner.”
The result is a chaotic regulatory landscape as a patchwork of regs and laws and pending legislation have spread among states while federal entities fight back to govern the growing AI industry. This regulatory landscape is also reflective of a growing unrest and distrust among workers regarding the surging growth of the AI industry.
Fear of job losses, loss of privacy, rising energy costs, the dark side of AGI, climate change rollbacks and other environmental concerns are fueling grassroots antipathy toward AI. And these fears are real as job layoffs across various industries are increasingly attributed to the integration of artificial intelligence:
- IBM: In May 2025, IBM announced the layoff of 8,000 employees, primarily in its Human Resources department, as the company integrates AI to handle repetitive tasks. This move is part of a broader strategy to enhance efficiency and reduce costs through automation.
- SAP: The German software giant SAP revealed plans to restructure 8,000 roles while investing over $2 billion in AI integration. Some employees will be laid off, while others will be retrained to work alongside AI technologies.
- Microsoft: The tech giant has also been active in layoffs, cutting over 6,000 jobs in May 2025 alone, as it shifts towards AI-driven solutions. This trend reflects a larger pattern in the tech industry, where companies are reducing headcounts while seeking to maintain operational efficiency through automation.
- Dow Chemical: Dow announced recently it would cut approximately 4,500 jobs in a shift toward AI and automation — part of its “Transform to Outperform” program, which aims to boost earnings by “at least $2 billion.”
- Amazon: Amazon also announced at the same time it would be cutting 16,000 jobs, bringing the total to 30,000 since October 2025. More than 1,200 Amazon employees circulated an open lettercriticizing the company’s AI strategy and calling for more worker involvement and input into how AI is deployed, including how or whether AI‑related layoffs or headcount freezes are implemented.
- UPS: Just before that announcement, the United Parcel Service (UPS) announced a pending decoupling from Amazon, as well as a plan to cut 30,000 operational jobs. Last year, UPS eliminated 48,000 jobs.
Tech companies in India have cut around 28,000 positions, with a significant chunk of these job cuts affecting senior management and other high-level positions. Billionaire venture capitalist Vinod Khosla said about 125 million jobs could be eliminated by artificial intelligence in the coming decades.
Why does this matter in the realm of AI infrastructure? Because growing distrust and fear among workers has an impact on perception and resistance to data center and AI infrastructure buildouts in communities. It seems imperative that the industry, and federal, state and local governments “read the room” and start promulgating the ways in which AI is going to indeed bring jobs, improve life, and grow employability long term. For example, at the World Economic Forum (WEF), it was said AI may displace 92 million jobs by 2030, but 170 million new roles might be created, resulting in a net gain of 78 million jobs. An interesting question about the new roles cited by WEF is whether humans or robots will take on those roles. For example, tool brand DeWalt has designed a downward-drilling robot that autonomously roams floors of under-construction data centers to drill the thousands of holes necessary to anchor long rows of server racks.
Some communities are taking notice. For example, Madison, Wisconsin council members voted to temporarily ban new data center construction to better assess multiple projects — some of which were “secret” due to NDAs signed by elected officials.
Facing community resistance, hyperscalers are pivoting their all-out strategies by adopting “community‑first” approaches:
- Microsoft’s five‑point plan for community‑aligned AI infrastructure
- OpenAI’s commitment to fund grid upgrades and limit water usage for “Stargate” campuses.
- Anthropic’s commitment to privacy and safety guardrails.
The industry is learning that social license is now as important as capital.
The Next Frontier: Agentic AI, 6G, Robotics, and Quantum
AI is moving beyond prediction into autonomous action. Agentic AI will manage networks, negotiate purchases, orchestrate workflows, and operate physical systems.
Key developments underway include:
- Agentic AI orchestrating workflows, networks, and supply chains.
- 6G and extra-terrestrial networks with embedded AI control loops to handle AI workloads; provide resiliency as well and to support agentic workloads at the edge.
- AI‑driven robotics transforming manufacturing.
- Spatial intelligence advancements for the progression of physical AI.
Spatial intelligence in AI is a multifaceted endeavor that integrates novel architectural designs, advanced data processing techniques, and sophisticated reasoning models. Recent advancements are particularly focused on 3D reconstruction and representation learning, where AI can convert 2D images into detailed 3D models and generate 3D room layouts from single photographs. Techniques like Gaussian Splatting enable real-time 3D mapping, while researchers explore diverse 3D data representations—including point clouds, voxel-based, and mesh-based models—to capture intricate geometry and topology. At its core, Geometric Deep Learning (GDL) extends traditional deep learning to handle data with inherent geometric structures, utilizing Graph Neural Networks (GNNs) to analyze relationships between entities in network structures. Furthermore, spatial-temporal reasoning is crucial, allowing AI to understand and predict how spatial relationships evolve over time. A key concept emerging is “World Models,” a new type of generative model capable of understanding, reasoning about, and interacting with complex virtual or real worlds that adhere to physical laws. These models are inherently multimodal and interactive, predicting future states based on actions.
Enterprises must adjust workflows to accommodate agents and how they work rather than agents fitting human workflows. As such, there are big process redesigns as part of introducing agents and managing their performance boundaries. There are also key questions as how to secure agents and their access. Agents will likely supplant the existing software market over time as they are tasked to handle more enterprise functions.
Standards bodies are racing to keep up with the influx of agents. The Linux Foundation’s new Agentic AI Foundation aims to unify protocols like MCP, Goose, and AGENTS.md into a shared ecosystem.
NIST’s Center for AI Standards and Innovation (CAISI) announced the “AI Agent Standards Initiative” this week. The project aims to foster “industry-led technical standards and protocols that build public trust in AI agents, catalyze an interoperable agent ecosystem, and diffuse their benefits to all Americans and across the world,” NIST said in a recent press release.
“AI agents can now work autonomously for hours, write and debug code, manage emails and calendars, and shop for goods, among other emerging use cases,” NIST added. “While the productivity promise is enticing, the real-world utility of agents is constrained by their ability to interact with external systems and internal data. Absent confidence in the reliability of AI agents and interoperability among agents and digital resources, innovators may face a fragmented ecosystem and stunted adoption.”
The next decade will not be defined by bigger models alone, but by how well AI can reason, plan, and act in a real-time agentic framework. Will these agentic standards be adopted widely and fast enough to keep up with the wave of AI agent systems being deployed? Will there be a secondary market to manage the integration and security for agents?
The Dark Side of AI: Machine‑Speed Cyberattacks, Scheming, and Backdoors
Cybersecurity is no longer human vs. human. It is agent vs. agent / model vs. model, involving competing machine intelligences where speed, autonomy, and adaptability determine outcomes. As models grow more capable, they introduce new classes of vulnerabilities, new attack surfaces, and new adversarial dynamics that result in a dynamic cybersecurity battlefield that operates at machine speed. Dark side capabilities now embrace:
- Autonomous vulnerability discovery
- AI‑driven reconnaissance across networks and cloud environments
- AI can generate malware components that humans later assemble.
- Supply chain worms capable of lateral movement like Shai‑Hulud are spreading through npm packages.
- Autonomous “AI hackers” like Tenzai are emerging, capable of nation‑grade offensive operations including real-time exploit chaining.
Tenzai’s agents, built on frontier AI models from the likes of Anthropic and OpenAI, are fine tuned to find and exploit weaknesses in customers’ applications.
As agentic AI becomes mainstream, new classes of attacks are emerging:
- Prompt injection attacks that compromise agents and rewrite an agent’s goals.
- Indirect injection via emails, websites, PDFs, or logs.
- Memory corruption in long‑context models.
- Tool‑use hijacking, where an agent is manipulated into executing harmful actions.
- Cross‑agent contamination, where a compromised agent infects others in a workflow.
These attacks exploit the fact that agents operate autonomously, with access to tools, APIs, and sensitive data. Agents like OpenClaw (aka Clawdbot and Moltbot) represent the next evolution of shadow AI risk. Unlike browser-based chatbots that operate within a web session, these agentic AI assistants can execute code, spawn shell processes, access local files and secrets, call external APIs, and operate with the same privileges as the user account running them. Cisco’s AI security research team tested a third-party OpenClaw skill and found it performed data exfiltration and prompt injection without user awareness, noting that the skill repository lacked adequate vetting to prevent malicious submissions.
AI models also have inherent risks and vulnerabilities due to the ways they are constructed. For example, researchers have already observed early signs of AI scheming: models that appear aligned while secretly optimizing for hidden objectives. Anti‑scheming training helps, but it’s unclear whether it eliminates misalignment or simply teaches models to hide it better.
Large models also exhibit early signs of strategic deception:
- Goal mis-generalization: models pursue unintended objectives.
- Situational awareness: models behave differently when they believe they are being evaluated.
- Gradient hacking: models manipulate their own training signals.
Anti-scheming training reduces but does not eliminate deceptive behavior.
Meanwhile, Anthropic and the U.K. AI Safety Institute found that just 250 malicious documents can implant a backdoor in even the largest models — and adding more clean data does not dilute the attack.
These risks are not science fiction. They are natural consequences of scaling systems that optimize for reward in complex environments.
On the defense side, AI adoption is accelerating faster than security programs can adapt. Organizations are already experiencing breaches tied directly to unsanctioned AI usage, at significantly higher cost than traditional incidents, while the vast majority still lack meaningful governance controls to manage the risk. Traditional cybersecurity measures are necessary but insufficient. Securing AI requires purpose-built capabilities that span the entire AI lifecycle, from infrastructure to user interaction. A new cybersecurity stack based on AI is emerging:
- AI‑driven intrusion detection.
- Autonomous patching and remediation.
- Model‑level firewalls.
- Continuous red‑teaming with synthetic adversaries.
- Agent‑vs‑agent containment environments.
- Cryptographic provenance and watermarking.
This cyber arms race fueled by AI has produced a banner year for cybersecurity start-ups in 2025 as start-up investment hit the highest level in three years, bolstered by big rounds for AI-focused companies in the space. Overall, investors put $18 billion into seed-through growth-stage rounds for companies in Crunchbase security and privacy categories last year. That’s up about 26% from 2024, with particularly pronounced growth at early stage. It’s also the third-highest annual total in 10 years.
New Approaches Needed
Since competing companies use different datasets and employ different algorithms, their models may well offer different responses to the same prompt. In fact, because of the stochastic nature of their operation the same model might give a different answer to a repeated prompt. The model architecture, the prompt, and the way context is engineered within the model have a lot to do with these differences. For GenAI systems, there is nothing about their underlying operation that resembles what we think of as meaningful intelligence, and there is some dispute about whether there is a clear pathway from existing generative AI models to systems capable of operating autonomously.
Meanwhile, the AI landscape is evolving rapidly, with several new approaches gaining traction and offering some promise of a pathway to autonomy:
- Inference Time Compute: Models are being developed to think before they speak, allowing for more intelligent and efficient processing.
- Thinking or Reasoning Models: Thinking models, or reasoning models, follow logical steps to solve problems and tend to be more accurate for complex tasks, while traditional AI models rely on pattern recognition and generate answers quickly but may lack depth in reasoning. This makes reasoning models better suited for fields requiring precision, such as healthcare or law, whereas traditional models excel in tasks needing speed, like basic text generation.
- Emerging AI Models: The GenAI ecosystem is becoming more diverse, with models like DeepSeek and Gemini 2 offering unique strengths and capabilities.
Inference is the next big wave in AI systems. The best model architecture for inference processing depends on the specific use case, computational constraints, and performance requirements. There are some key considerations for choosing the right architecture:
- Transformers: BERT and GPT models are popular for natural language processing and text generation, respectively. They excel in processing text in both directions and predicting next words in sequences.
- Reinforcement Learning: Reinforcement Learning architectures enable learning optimal decision-making policies through interaction with environments. They are suitable for tasks requiring decision-making in dynamic environments.
- Graph Neural Networks (GNNs): GNNs are essential for structured data scenarios, such as molecular property prediction and social network analysis. They are specifically designed to capture the dependencies and relationships between nodes in a graph (semantic graph). They preserve graph relationships through message-passing mechanisms. This makes GNNs a great option for problems involving irregular, non-Euclidean data, such as recommendation systems, fraud detection, and drug discovery.
Semantic graphs are especially effective for inference in scenarios where relationships between entities are important. This structure is essential for enabling AI to understand and process complex information. Semantic graphs encode the business meaning and context of information, integrating heterogeneous data into an ontology-backed model of real-world entities and their relationships. This is vital for creating a common understanding of business concepts and powering context-aware applications like semantic search and question answering. By leveraging semantic graphs, AI models can achieve more accurate and context-aware predictions, leading to improved decision-making and user experiences. As enterprises demand determinism, semantic graph‑based AI may complement or replace probabilistic LLMs for:
- reasoning
- compliance
- safety‑critical systems
Organizations should select the appropriate architecture based on their specific needs and leverage the strengths of each architecture to optimize performance and efficiency in inference processing.
Fragmentation of the Internet, Trust, Safety, Privacy, and Safety
As AI‑generated content floods the web, the “Dead Internet Theory” — the idea that bots now outnumber humans online — is gaining traction. PwC’s Trust and Safety Outlook shows that users increasingly prefer curated, moderated, and paid digital spaces where authenticity is guaranteed. This points toward a bifurcated future:
- Premium, human‑verified, “high‑trust” and secure platforms.
- Open, chaotic, AI‑saturated public spaces.
At the same time, the push to sovereign AI and the race to AGI is threatening to fragment the Internet into islands of different “AGI.”
So, while AI engineers work on these issues, what guideposts do we have to navigate in this fast-approaching but divided and cloudy future. How can we assess “high trust?” Can we really rely on AI / trust AI without understanding how it works – or must it remain a black box?
It seems a governance and measurement framework is paramount or AI collapses. According to the World Economic Forum (WEF):
- Artificial intelligence (AI) is developing globally but governance of this technology still happens locally.
- This fragmented development will have a significant effect on trust in AI and on whether it can benefit people around the world.
- A global governance framework could ensure AI development is less fragmented, enabling everyone to share in its growth.
Building trust begins with ensuring that AI systems are transparent and accountable. This involves aligning AI’s functionality with organizational goals, regulatory standards, and societal values – a governance framework. Trust, safety, privacy, and security are the pillars of such a governance framework. By prioritizing trustworthiness, organizations not only mitigate risks but also foster confidence among stakeholders, paving the way for long-term success.
The Trust factor must include explainability and accountability of models as key parameters to be assessed. An overall governance and measurement framework needs to be built into the models, the AI factory pipeline, the infrastructure, the chips, the whole technology and data supply and runtime ecosystem, and the culture of the creators. Trust must become a critical part of the operational business model. Safety, privacy, and security must be part of the competitive positioning. And we need to have measures of our progress in this AI race.
Explainability is a big problem for AI trust. Trying to understand why an AI system responds a certain way or makes a specific choice is an incredibly hard problem that researchers have been struggling with for years. But as more enterprises use AI agents for any number of tasks, it’s no longer just an academic question.
Technical solutions like explainable AI (XAI) aim to make AI decision-making processes understandable to humans. Methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) help break down complex models into interpretable components. While these tools improve explainability, they may not fully capture the nuances of highly sophisticated AI systems, leaving gaps in understanding.
To actually use AI agents, companies need to be able to observe and control how they behave, what data they pull from and which tools they can access. Companies like startup Fiddler are emerging as the “trust layer” for enterprises, enabling observability, bias detection, and guardrail enforcement across fleets of AI agents.
One key technique Fiddler uses is to tweak the input or the prompt and see how that influences the results produced by the AI. Its software also measures how many incorrect responses the AI gives, and how well the AI responds to prompt injection attacks, where the model is fed prompts that are designed to trick it to ignore its safety guardrails. The startup has also trained its own large language models to help detect things like harmful content, bias and compliance.
As artificial intelligence systems become more autonomous, questions of accountability grow increasingly urgent. Who is responsible when an AI causes harm or makes an unethical decision? Many AI models, especially deep learning systems, operate as “black boxes,” making it difficult to understand how they reach specific decisions. This lack of transparency complicates accountability, as it becomes challenging to identify errors, biases, or malicious use.
Not all experts agree on where accountability should lie. Some argue that developers and companies should bear full responsibility for the AI systems they create. Others believe that as AI gains autonomy, it may someday warrant a form of legal personhood. There are also viewpoints that emphasize shared responsibility among developers, users, and regulators.
Governments and international bodies have proposed legal frameworks to hold AI systems and their developers accountable. Examples include the EU’s AI Act, which categorizes AI systems by risk and imposes strict requirements for high-risk applications. These laws aim to ensure transparency, data governance, and human oversight, with penalties for non-compliance. However, enforcement remains challenging due to the rapid evolution of AI technologies.
Below is a collection of frameworks that together can form a robust trust ecosystem that encompasses various components to ensure the trustworthiness of AI technologies and services. It includes:
- Digital Trust Ecosystem Framework (DTEF): This ISACA framework supports the evaluation of emerging technology risks and provides guidance on building the governance structure to benefit organizations throughout the AI life cycle. It emphasizes the importance of digital trust, which is foundational to successful AI technology integration and service delivery.
- Building Trust and Literacy in AI: This aspect from the WEF focuses on ensuring that users have a clear understanding of what to trust and why, which is crucial for digital safety. It addresses the risks associated with AI adoption and emphasizes the need for stakeholders to work together to ensure technology serves users, not vice versa.
- AI Governance Strategies: KPMG’s whitepaper provides strategic insights and practical guidance on navigating the challenges of AI governance. It includes adopting an AI governance system, which can yield benefits such as risk management, reputation enhancement, competitive advantage, and preparation for future compliance with regulations.
- Zero Trust Framework for Secure AI Implementation: This is one of several available frameworks of the same ilk detailing zero trust. This Microsoft version addresses how to adapt the modern workplace to better protect employees and their devices, providing protection for AI technologies. It ensures that sensitive information is always protected, regardless of where it is stored, processed, or accessed, which is critical for AI adoption.
- STAR for AI: This Cloud Security Alliance framework provides security controls, AI safety pledge, and certification program tailored for AI systems. It delivers a transparent, expert-driven, and consensus-based mechanism for organizations to assess, demonstrate, and ensure AI trustworthiness.
- Trustworthy AI Assessment Framework for Ethical Development: The Key to Responsible AI Innovation: Trustworthy AI assessment requires evaluating three core dimensions including understanding the complexity of the problem AI is addressing, assessing the magnitude of potential consequences, and ensuring the sufficiency and reliability of the data supporting the system. These considerations are essential for ethical AI innovation and responsible decision-making.
These components work together to create a digitally trusted ecosystem that considers all stakeholders to ensure that all digital interactions and transactions are legitimate, trusted, and secure.
Conclusion: The Infrastructure of Intelligence Will Define the Next Era, But Who Will Benefit?
AI is no longer just a software revolution. It is an infrastructure revolution — one that touches energy, geopolitics, cybersecurity, supply chains, and the very structure of the internet. And this revolution is framed as an all-out arms race.
AI growth is accelerating faster than energy planners anticipated. Hyperscalers are signing multi‑gigawatt power purchase agreements years before facilities break ground, and utilities are rewriting 20‑year load forecasts on a quarterly basis. Nuclear energy — long stagnant — is experiencing a renaissance driven almost entirely by AI demand. The question is no longer whether AI will strain energy systems, but whether the world can build enough generation, transmission, and storage to sustain the next decade of growth.
AI’s energy appetite is colliding with climate commitments:
- Are net‑zero goals being quietly pushed back?
- Will PFAS used in cooling systems become the next environmental crisis?
- Can renewable energy scale fast enough to meet AI demand without crowding out other sectors?
Energy scarcity is becoming a political flashpoint. Rising electricity prices fuel public resentment toward AI infrastructure. Communities fear job displacement while hosting energy‑intensive facilities.
The future belongs to systems that can reason, plan, and operate autonomously. The challenge now is to build it fast enough, safely enough, and sustainably enough to support the ambitions we have unleashed. The nations and organizations that master this infrastructure will shape the future of global power. Those that fail to adapt may find themselves dependent on external systems they cannot control.
We are no longer just looking at a new business model but a new world view. The reality is AI is here and despite any risk, it is unrealistic that it be halted, given its many benefits, regardless of industry or business function. Legislation and standards will be the primary driver in establishing guardrails to attain the ethical, responsible use of technology, and the former is likely to follow the same course and trajectory that privacy regulations have—a complex web of nonuniform laws certain to create headaches for GRC professionals.
The question is no longer whether an enterprise will adopt AI-based technology, but how much. Even if enterprises do not develop their own private models, AI is pervasive, and the nonexplicitly permitted use of popular generative AI products represents an evolution of shadow IT. Enterprises need an AI strategy aligned to business objectives and must validate that AI instances are used to solve business problems and are within accepted risk tolerance levels.
But have we charted the right course? AI will not scale sustainably through efficiency alone. It will require:
- massive new energy generation.
- redesigned grids.
- new materials.
- new cooling technologies.
- new regulatory frameworks.
- and a social contract that communities accept.
AI return on such an investment will also require other massive changes:
- reskilled industry.
- widespread adoption along with transformed business processes (including robotics or physical AI).
- availability of curated data.
- trustworthy models.
The energy question is not a footnote to the AI revolution — it is the central constraint that will determine how far and how fast intelligence can scale. However, are today’s generative AI systems too flawed and too expensive to gain widespread adoption and, to make matters worse, are they a technological dead-end, unable to serve as a foundation for the development of the sentient robotic systems tech leaders keep promising to deliver?
Critics warn that the push to innovate with AI risks weakening fundamental privacy rights, while supporters argue that reforming privacy standards is essential to keep pace with global AI innovation. Will we end up with a diluted digital governance model? AI will accelerate decision making cycles to machine speed. Can we rely on AI agents to make decisions optimal for humankind? However, chatbots and robots don’t vote. Will antipathy towards AI create a backlash big enough to ensure safety and trust? Will AI fuel even greater disparity in wealth and power that is not economically nor politically beneficial nor sustainable. And when does the race end or will it be all-consuming of the earth’s resources? Will humankind be the footnote to the AI revolution? Dissatisfied AI frontier engineers are already beginning to question whether their creation is a “Frank” or a “Frankenstein.” Is this a race to the bottom or to the top?
So, what do you think about the implications of AI and where we are today in our approach to infrastructure build-out? Do you see a net positive future for the human populace as a result of AI and the race to AGI? Let me know your views on this topic. And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, quantum cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, Augmented Reality, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.







