This week, NATO leaders convened their first-ever Data and AI Leaders' Conference on big data and artificial intelligence (AI), with a focus on building sound foundations for both governance and responsible use of these technologies. Given the increased attention big data and AI are receiving in allied capitals, NATO's goal is to help define a common set of standards and principles that members and industry can adopt to use AI in a responsible and implementable way. This week's internal deliberations serve as an important inflection point for the alliance's collective ability to define what NATO and its members need, and should be focusing on, in developing AI-enabled capabilities for the coming years. It is equally about building the foundations and momentum for knowledge and experience sharing on the broader issue of digital transformation.
The questions surrounding the development and use of big data analytics and AI have taken on increased importance in recent years, not just because of advancements in the technology itself, but also because of the increased use of these technologies in applications such as weather forecasting, financial transactions, and, to a degree, military threat detection.
For its part, NATO headquarters has been very active in laying the foundations for its role in not only developing responsible policies but also facilitating the funding of some of the most critical emerging and disruptive technologies (EDTs) its members will be relying on in the decades to come.
Late last year, NATO released its AI strategy, which stated that, given the rapid advancement of AI technologies, the alliance needed to provide a foundation for members to adopt for responsible use of AI, accelerate adoption of these technologies into capability development, and safeguard against malicious use of AI, among other objectives. It also released a companion document, NATO's Data Exploitation Framework Policy, for which a public summary is currently being prepared for release.
NATO’s Emerging Security Challenges (ESC) Directorate, in coordination with Allied Command Transformation (ACT), is leading the charge on implementing the follow-on actions resulting from the strategy’s adoption.
Since its release, NATO leaders have officially endorsed the establishment of the Defense Innovation Accelerator for the North Atlantic (DIANA) and its related NATO Innovation Fund (NIF). Both instruments are meant to bring nations and their industries into closer partnership to fund, develop, and field dual-use EDTs, with AI being one of the primary technologies. Akin to the United States' Defense Innovation Unit (DIU) and Defense Advanced Research Projects Agency (DARPA), the goal here is to leverage the best commercial minds alongside government and military counterparts to create solutions that can be used for alliance purposes and that can also be commercialized. If these technologies do not have a path to commercialization, then the projects will not be funded via DIANA or the NIF.
To show that NATO is serious about the partnership between industry and government, just last month, NATO hosted its EDGE conference. It brought together alliance leaders from civilian headquarters, ACT, and one of its procurement agencies, the NATO Communications and Information Agency (NCIA), along with national government and industry representatives to discuss a range of EDTs that will be leveraged to provide for tomorrow's NATO. The three-day conference was punctuated with a wide array of insights, aspirations, and predictions for how EDTs, including AI, need to be developed at pace and in a collaborative manner across both commercial and government sectors.
Also last month, alliance defense ministers approved the establishment of a NATO Data and Artificial Intelligence Review Board (DARB), which is designed to build on the NATO principles of responsible use of AI by helping to build trust among users, serving as a forum for sharing best practices, and guiding responsible AI adoption practices. This effort is intended to align with numerous member state initiatives.
One of the key desired products of this process is a responsible AI standards certification process to acknowledge when a technology system conforms with the AI principles determined by the DARB. While NATO can create standards based on the inputs of its members, it is not a regulatory body and thus cannot enforce them; enforcement would fall to individual nations or a potential third party to monitor and ensure. Regardless, developing a set of principles with a standards certification process is a step in the right direction toward ensuring commonality and some degree of interoperability across NATO member states. NATO's goal is to have its first draft completed by the end of 2023.
Why This Matters
The timing for NATO and its member states to take this on is important. Technological development is advancing rapidly, and it is not confined to nation-states. The diffusion of technical knowledge is allowing individuals, non-state actors, terrorists, financial criminals, and others to hold previously inconceivable degrees of power and influence over public goods, systems, and lives. As evidenced by the war in Ukraine, China's threats to Taiwan, and North Korea's actions in the Sea of Japan—not to mention what is unseen in space, undersea, and cyberspace—the world is in a very dynamic state right now.
Coming together to develop a responsible set of commonly accepted and adopted rules of the road for AI's use is critical to "do no harm" against each other and NATO's partners around the world. Given the amount of money spent annually by national governments and industry on developing technology and understanding how to integrate it into existing systems and structures, devoting time to the AI dimension of this work is essential. Developing a rule set that aligns with national and organizational expectations will result in a more efficient, predictable, and transparent use of AI-enabled technologies.
No one nation or institution can develop such standards and rules on its own. There is a plethora of knowledge to be learned by all and shared by all. This is an opportunity for smaller, newer NATO allies to play an outsized role in helping the alliance understand AI technologies and how such technologies can be deployed to protect NATO systems, increase awareness of possible threats, and, if necessary, defeat enemies on the battlefield. There is tremendous technological prowess across the alliance, but it varies by country. NATO's work can serve as a catalyst for collaboration and experimentation when it comes to this form of capability development.
Challenges to Overcome
As the DARB develops the key components of this responsible use policy, it will be examining various use cases across numerous domains to test how a policy would work in the real world. To achieve this, NATO headquarters staff will work with ACT, NCIA, and other directorates that each have the experience to field-test the use cases. This requires data sharing from member states and industry. Given the importance and breadth of this initiative, a pan-NATO approach involving other directorates and agencies is required. Having both a civilian and a military perspective on EDTs, including AI policy, is prudent.
Having member states share national experimentation data and make national representatives available to participate in use case exercises will be key. Otherwise, this effort will be for naught. NATO can leverage talent, but it needs commitments from the member states to share data, be open-minded, and provide the resources necessary to allow this initiative to deliver what it was tasked to do. Some nations have already indicated that they do not believe NATO should be in the business of investing in and funding EDTs; they believe this should be a national competency, not a NATO one. As a result, only 22 of NATO's 30 members are part of the NIF. Yet developing a NATO-wide set of AI principles is in each NATO member's interest. It should be complementary to any national policy that has been or is being developed. This NATO effort may well help nations that have not yet developed their own set of AI principles align more quickly with the rest of their alliance partners.
Elements Needed to Succeed
Arguably, developing an alliance-wide set of AI principles for responsible use, and doing the work it will take to achieve a standards certification, are big moments for NATO. For progress to truly be made on this initiative, the following needs to happen:
- NATO leadership needs to hold itself and the member states accountable. The secretary general needs to keep not only ESC and ACT but also the members themselves engaged in this effort. This initiative should not be allowed to slip or miss its deadline, nor should it be allowed to proceed at the pace of its slowest member. To achieve this, there needs to be a culture change at NATO and among national teams. NATO is being asked to do a lot to prepare for a "NATO 2030" future. If old bureaucratic norms do not change, then the commitments allied leaders made at Madrid or in Brussels will never realize their full potential. How big data and AI will be used in economies, in health systems, and on the battlefield is being written now. NATO should work collectively to get this done right and expeditiously.
- Member states need to lean in and share what is going on in their respective national inner workings to establish the best baselines for developing a responsible use policy. This requires sharing use case data and making available the best and brightest technology developers, testers, and users. It means ensuring that data and knowledge gaps do not inhibit NATO's ability to develop a framework for the common good. This effort needs to include the sharing of practical use cases, where there are challenges, where there are successes, and what lessons are being learned at the individual national level that can and should be understood by all.
- Industry needs to recognize this is an opportunity to truly be a partner with NATO and member states in shaping a collaborative, data-driven, use-case-tested, executable set of principles and standards for the future. By design, industry will be a key beneficiary of both DIANA and the NIF: all projects funded by these two sources should result in the commercialization of the technologies. What industry can do, however, is bring its best talent, including its engineers, data scientists, war-gamers, and the like, to this NATO-coordinated enterprise. The dynamics between NATO, allied governments, and traditional industry, i.e., those that make conventional weapons systems, are changing. The norms of behavior between EDT providers and NATO and member states are still nascent. Now is an opportunity to establish what a true partnership could involve, depending on how all parties embrace this initiative.
This is also an opportunity for NATO to fully leverage ACT to drive the discussion with the member states on use cases. ACT was stood up nearly two decades ago to assist the alliance with capability development, experimentation, training, and exercises. Allied leaders made the right decision by ensuring ESC and ACT had a co-role in working with national civilian and military stakeholders to road-test the use cases for leveraging AI and big data. This is an ideal moment for ACT to demonstrate its relevance and utility, both in terms of its core capabilities and in terms of integrating the lessons learned from the Russia-Ukraine war. In addition, it can work with the U.S. national team to incorporate various objectives found in the recently released U.S. National Defense Strategy (NDS). In doing all of this, ACT would provide a meaningful pathway for how EDTs can be incorporated into NATO's 2030 plans while also demonstrating to the alliance's members what ACT's evolved role should be in capability development and warfare integration.
What Success Looks Like
In the end, if this initiative is to materialize into its desired state, it will achieve three major outcomes:
- it will have determined how to operationalize a set of principles for responsible AI use;
- it will have enhanced relations with traditional industry partners and will have established the foundation for how to work and learn from nontraditional industry actors; and
- it will have given NATO and the member states a path to close the gap between experimentation and adoption through the practical lessons learned from use case testing.
Daniel Fata is a non-resident senior adviser at the Center for Strategic and International Studies in Washington, D.C. He is the former U.S. deputy assistant secretary of defense for Europe and NATO Policy (2005–2008).
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2022 by the Center for Strategic and International Studies. All rights reserved.