Cybersecurity
AI-Cyber Threats: Challenges, Issues, and Governance
By JoonKoo Yoo
Director, Center for Security Strategy of Sejong Institute
December 8, 2025

Key Takeaways:

- The convergence of AI and cyber domains requires security professionals and policymakers to address both traditional cyber risks and new risks introduced by autonomous systems and machine learning algorithms. 

- Technological competition among major powers, especially the US and China, is a driving force behind debates on AI governance. 

- At present, it is likely that international norms for cyber and AI will develop separately, with discussions focusing on managing AI threats within cybersecurity and cyber threats within AI.




As artificial intelligence (AI) technologies rapidly advance, their integration with cyber capabilities is generating unprecedented threats that challenge current governance frameworks. The convergence of AI and cyber domains requires security professionals and policymakers to address both traditional cyber risks and new risks introduced by autonomous systems and machine learning algorithms. This dynamic environment necessitates a proactive approach to governance, calling for ongoing collaboration among governments, industry leaders, and international organizations to effectively adapt to these challenges. AI not only amplifies cyber threats but also becomes a target and serves as a countermeasure tool against cyberattacks. Addressing the cyber-AI nexus requires a comprehensive, integrated, and transnational strategy.

A key factor in responding to the cyber-AI nexus is strong cooperation with allied nations that share similar threat perceptions and values. However, establishing international norms for this nexus is particularly difficult due to the differing characteristics and normative frameworks of AI and cyber technologies. Technically, AI is often considered tool-oriented, while cyber is viewed within a broader information and communications technology (ICT) context. Despite the dual-use nature of both technologies, cybersecurity is typically discussed in terms of international security threats, while AI governance and norms are developed with clear distinctions between commercial and military applications. As AI-related threats become more prominent, these distinctions may blur, and AI may increasingly be considered in the international security context, much as cybersecurity is today.

Characteristics of International Norms for the Cyber-AI Nexus

Rapid advancements in cyber, space, and AI technologies have accelerated global discussions on governance for emerging technologies. Proposals for international norms are progressing quickly at the global, regional, and national levels, with new governance bodies, standard-setting, and norm-creation efforts gaining prominence. Responding to future discussions on international law and norms has become essential as national legislation advances swiftly, and laws from the US and the EU are increasingly being multilateralized.

Technological competition among major powers, especially the US and China, is a driving force behind debates on AI governance. Leading nations are competing over standards and norms to promote innovation and limit competitors’ access. Establishing international standards and norms remains a challenge with new technologies due to their dual-use nature, which brings both significant benefits and the potential for disruption, particularly in security and military contexts.

The dual-use characteristic of these technologies facilitates their rapid integration into security, military, and economic sectors. The growing interconnection and integration of these technologies create complex challenges for international normative discussions. As norms for individual technologies are still being developed, convergence can delay progress. For instance, in United Nations Group of Governmental Experts (UNGGE) discussions, emerging technologies were collectively referenced within the Internet of Things (IoT) context, with varying opinions on whether to include AI.

Cybersecurity discussions began earlier, resulting in numerous international legal and normative documents over the past two decades. However, the scope, nature, and implementation of these norms continue to evolve with new threats and ICT advancements. AI governance discussions are more recent, and international normative debates remain primarily theoretical, with the exception of debates on lethal autonomous weapon systems (LAWS) under the Convention on Certain Conventional Weapons (CCW). Since 2020, major technological powers have announced AI strategy policies and launched legislative initiatives, accelerating global governance and norm formation. For example, the LAWS Group of Governmental Experts (GGE) is exploring treaty formulation, and at the regional level the EU has adopted the AI Act while the Council of Europe adopted the 2024 Framework Convention on AI.

Cyber and AI share significant technical similarities as computing and software-based technologies, often serving similar functions related to information and data. Nevertheless, questions persist about comparing artificial intelligence to human intelligence. The US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, for instance, defines AI as the capacity of machines to perform tasks that would otherwise require human intelligence.

Current Issues and Controversies in Formulating Interconnected Norms

Within the cyber and AI norms nexus, each framework emphasizes different elements. The cyber norms system, after extensive deliberation, adopted the application of international law, based on the belief that cyberspace is not lawless. This system prioritizes identifying and mitigating threats to cyberspace and its infrastructure. Early UNGGE discussions assessed risks, threats, and vulnerabilities from a cybersecurity perspective. Once consensus was reached on risks and vulnerabilities, the focus shifted to threats causing these risks.

In contrast, AI has developed a normative structure focused on prohibition versus regulation, highlighting meaningful human control and a risk-based approach to AI utilization. Regulations like the EU AI Act distinguish between prohibited and regulated commercial AI applications based on risk. In the military context, the LAWS GGE differentiates between fully autonomous (prohibited) and partially autonomous (regulated) weapons, depending on meaningful human control. Despite AI’s dual-use nature, its normative structure remains distinct and has not yet prioritized threat discussions, though this may shift as AI-based threats become more prevalent.

Connecting cyber and AI necessitates addressing threats and risks in an integrated way at the foundational purpose level of normative structures. Current discussions are cross-cutting within cybersecurity, with an emphasis on "AI-driven cyber threats." Another issue is the fragmented emergence of AI standardization, with separate conversations for commercial and military applications and even within the military domain. However, there are signs of convergence toward international security across commercial safety, international security, and military security. This trend suggests greater alignment with cybersecurity norms and underscores the growing importance of defining the normative scope of AI. Fragmentation across AI, data, computing, and algorithms/models is leading to distinct legal regimes within these domains.

Outlook for International Norms on the Cyber-AI Nexus

Discussions on international norms for the cyber-AI nexus are still in their early stages. It is likely that norms will initially develop independently within each domain, with some elements addressed through cross-cutting approaches. International norms will focus on mitigating risks and responding to threats at the intersection of AI and cybersecurity, guided by the high technical similarity between both fields, which depend on networks, data processing, transmission, and computing technologies.

Threats from non-state actors, such as targeted cyberattacks leveraging AI, may accelerate these discussions. The misuse of AI by both state and non-state actors is becoming a central issue in cyber-AI discourse. Coordinating national policies and legal frameworks to address adversarial AI worldwide will be a crucial first step. The US and EU are expected to identify threat factors and propose countermeasures through their respective cyber-AI strategies and policies, providing a foundation for global and regional organizations to develop concrete implementation processes. Multiple US cyber and AI strategy reports, including those from the RAND Corporation, are already addressing AI-based cyber threats. Meanwhile, global AI initiatives are increasingly converging within an international security framework, as demonstrated by the 2024 UN resolution led by South Korea.

Protecting critical infrastructure has been a core aspect of cybersecurity norm formation, and responses to AI breaches, vulnerabilities, and adversarial exploitation are expected to become major issues in future AI normative debates. Currently, AI governance focuses mainly on preventive regulation aimed at developers, with less attention to regulating end users. However, advances in threat intelligence analysis are making it possible to track adversarial AI usage, laying the groundwork for governance that balances responsibility between developers and end users. In the short term, the urgent threat of harmful AI is expected to drive discussions, prompting the creation of foundational guidelines and platforms.

AI has the potential to significantly enhance cyber defense capabilities by improving intrusion detection, vulnerability management, threat detection, and incident response. However, comprehensive management of AI-cyber integration is hindered by AI’s inherent instability and the fragmented nature of cyber infrastructure, complicating the creation of norms for the cyber-AI nexus. Nevertheless, as AI adoption accelerates, the formation of related norms is also advancing rapidly, with the possibility that international norms may be established faster than in cybersecurity. Although global AI governance is sometimes compared to governance models in space, chemical, and nuclear fields, the strong ties between AI and cyber infrastructure, algorithms, and data make normative linkages inevitable.

At present, it is likely that international norms for cyber and AI will develop separately, with discussions focusing on managing AI threats within cybersecurity and cyber threats within AI. Technologies such as digital twins exemplify interconnected systems that combine new technologies—such as ICT, IoT, data, computing, blockchain, and AI—and serve as response tools for security threats via the cyber-AI nexus. However, ongoing global technological innovation leads to fragmented and overlapping standards, norms, and policies. The absence of common standards results in duplication, conflict, and inconsistency. In this context, the EU’s Cyber Resilience Act and AI Act stand out for emphasizing responses to cyber threats linked to emerging technologies and strengthening overall resilience.

Joonkoo Yoo is a professor at the Korea National Diplomatic Academy, Ministry of Foreign Affairs, and concurrently serves as a visiting professor at the Law School of Sungkyunkwan University and the GSIS of Yonsei University. He has been a member of advisory committees for the Ministry of National Defense (MND), the Ministry of Trade, Industry & Energy (MOTIE), and the Defense Acquisition Program Administration (DAPA). He specializes in international law, focusing on cybersecurity, outer space, and emerging-technology issues, and has served as a legal adviser for the UN GGE/OEWG on cybersecurity and the LAWS GGE. Professor Yoo has taught at Seoul National University's GSIS and Yonsei University, and served as Deputy Director of the Presidential Committee for the G20 Seoul Summit. Before teaching, Dr. Yoo was a legal adviser specializing in international trade and defense acquisition with Aitken, Berlin & Brooman in Washington, D.C.