Nevertheless, AI presents
well-documented challenges related to data bias, security vulnerabilities, and
limited explainability. Northrop Grumman is collaborating with U.S. Government
organizations to establish comprehensive guidelines for assessing the safety,
security, and ethics of AI models intended for Department of Defense (DoD) use.
The Defense Innovation Board (DIB) of
the DoD has responded proactively to AI challenges by introducing the AI
Principles Project. Initially, this project outlined five ethical principles
that AI development for the DoD should adhere to: responsibility, equity,
traceability, reliability, and governability. To effectively implement these
DIB principles, AI software development must also prioritize auditability and
resilience against potential threats.
It is important to note that concerns
surrounding AI ethics are not a recent development; they have existed since the
inception of the concept of AI. These ethical principles represent a
culmination of this historical perspective, aiming to harness the potential of
automation while mitigating its associated risks. In this article, three AI
experts from Northrop Grumman delve into the significance and complexity of
applying the DIB's AI Principles in the realm of national defense.
Ethical AI: From Theory to Practice
According to Dr. Bruce Swett, Chief AI
Architect at Northrop Grumman, the true challenge lies in operationalizing AI
ethics – integrating ethical decision-making into AI systems to prevent subtle
oversights or flaws that could lead to detrimental mission outcomes. Developing
secure and ethical AI is inherently complex because it blurs the boundaries
between traditional development and operational phases seen in conventional
computing environments.
AI constantly evolves and undergoes
updates, necessitating continuous retesting to ensure its safety, security, and
ethicality. For instance, when an image-recognition AI is re-trained with a new
dataset, it effectively reprograms itself, adjusting its internal recognition
weights. Updating a model with fresh data to improve performance can therefore
introduce new sources of bias, vulnerability, or instability, so each update
must be thoroughly re-tested before it can be used ethically and securely.
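The retraining effect described above can be illustrated with a toy model. The sketch below (the data, labeling rules, and least-squares classifier are illustrative assumptions, not anything from the article) trains a linear classifier, then "updates" it on fresh data whose labels follow a subtly different rule; the internal weights shift and a previously correct prediction flips, which is why every retrained model needs to be re-tested:

```python
import numpy as np

def train(X, y):
    """Fit a least-squares linear classifier and return its weight vector."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)

# Original training set: the label is the sign of the first feature.
X1 = rng.normal(size=(200, 2))
y1 = np.sign(X1[:, 0])
w_v1 = train(X1, y1)

# "Fresh" data with a subtly different labeling rule: the label now
# tracks the second feature instead. Retraining silently rewrites the
# model's internal weights.
X2 = rng.normal(size=(200, 2))
y2 = np.sign(X2[:, 1])
w_v2 = train(X2, y2)

print("weight shift:", np.linalg.norm(w_v1 - w_v2))

# A point the original model classified correctly now flips sign,
# even though "the same model" was merely updated with new data.
probe = np.array([2.0, -2.0])
print("v1 prediction:", np.sign(probe @ w_v1))
print("v2 prediction:", np.sign(probe @ w_v2))
```

Nothing in the deployment pipeline changed here: only the training data did, yet the system now gives a different answer for the same input.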
Dr. Amanda Muller, a technical fellow
and systems engineer at Northrop Grumman, emphasizes the multidisciplinary
nature of addressing these challenges. She suggests that a comprehensive
approach is required—one that encompasses technology, policy, and governance
while considering multiple perspectives simultaneously.
Ethical AI and DevSecOps Integration
While some of these challenges are not
unique to AI, the shift toward agile software development practices, with their
frequent update cycles, merged software development and IT operations into a
single continuous process, producing the concept of DevOps. Recognizing that
security cannot be an afterthought bolted on at the end, the concept of
DevSecOps emerged, building security into every stage of that process.
However, securing and ensuring the
ethicality of AI goes beyond merely integrating development, security, and
operations into one continuous process. When AI systems are deployed, they are
exposed to not only learning experiences but also potential threats from
hostile actors. Vern Boyle, Vice President of Advanced Processing Solutions at
Northrop Grumman, highlights the importance of safeguarding AI against
adversarial AI attacks—a vital consideration for DoD applications.
This risk is not confined to defense;
even major tech companies have faced challenges when deploying AI, as
demonstrated by a chatbot aimed at teenagers that was manipulated by trolls to
respond with insults and slurs. In a defense context, the stakes are higher,
potentially impacting a broader range of individuals. Attackers are expected to
possess a deep understanding of AI and exploit its vulnerabilities. Protecting
AI data and models throughout the AI lifecycle, from development through
deployment and sustainment, is crucial for DoD applications of AI.
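The kind of knowledgeable attacker Boyle describes can be sketched with a toy evasion attack. The example below (a hypothetical linear scorer with made-up weights; not any fielded system) shows the core idea behind gradient-based adversarial perturbations: an attacker with white-box access knows the model's gradient and can nudge an input against it, flipping the decision while barely changing the input itself:

```python
import numpy as np

# For a linear scorer f(x) = w @ x, the gradient of the score with
# respect to the input is simply w. An attacker who knows w can take a
# small step against that gradient (an FGSM-style perturbation) to
# flip the classifier's decision.
w = np.array([1.0, -0.5, 0.25])   # hypothetical model weights
x = np.array([0.6, 0.2, 0.4])     # benign input, scored positive

def predict(w, x):
    return 1 if w @ x > 0 else -1

eps = 0.5                          # attacker's perturbation budget
x_adv = x - eps * np.sign(w)       # step against the score gradient

print(predict(w, x))      # benign input: classified +1
print(predict(w, x_adv))  # perturbed input: classified -1
```

Real attacks against deep networks follow the same logic with computed gradients, which is why protecting model weights and training data, not just the deployed executable, matters across the AI lifecycle.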
The Challenge of Contextual Understanding
Current AI capabilities excel at
performing specific tasks with precision. However, they struggle to grasp
context. AI operates within the confines of its designated application, lacking
the broader contextual awareness that humans possess. For example, AI might struggle
to determine whether a puddle of water is one foot or ten feet deep. Humans can
contextualize information around the puddle to make a more informed judgment,
realizing the potential danger of driving through it.
As Muller points out, human intelligence
must remain an integral part of AI systems. This necessitates keeping humans
involved in the process, even as systems become increasingly automated, and
configuring interactions to allow humans to leverage their unique capabilities.
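One common way to structure the human involvement Muller describes is a confidence gate: the automated system acts alone only when it is sufficiently sure, and routes everything else to a person. The sketch below is a minimal illustration; the 0.9 threshold and the function and label names are assumptions for the example, not details from the article:

```python
# Minimal human-in-the-loop gate: automate only high-confidence
# decisions, and escalate the rest to a human reviewer who can bring
# contextual judgment the model lacks.
REVIEW_THRESHOLD = 0.9  # illustrative cutoff, tuned per mission in practice

def route(prediction, confidence):
    """Return (decider, prediction): the model acts, or a human reviews."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("vehicle", 0.97))  # confident: handled automatically
print(route("vehicle", 0.55))  # uncertain: escalated to a person
```

The design choice is where to set the threshold: too high and the human is flooded with routine cases; too low and ambiguous, high-stakes calls are made without the contextual awareness the puddle example illustrates.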
Toward a Future of Secure and Ethical AI
For Dr. Swett, the central ethical
question for AI developers revolves around assessing whether an AI model aligns
with DoD applications and how to instill justified confidence in its
capabilities. An integrated approach to AI, encompassing AI policies, testing,
and governance processes, will provide DoD customers with auditable evidence
that AI models and capabilities can be utilized safely and ethically for
mission-critical purposes.
In conclusion, as AI continues to
advance and permeate various aspects of our lives, addressing the complexities
of ethical AI in defense is of paramount importance. Northrop Grumman's experts
emphasize the need for a multidisciplinary approach, integration with DevSecOps
practices, and the essential role of human intelligence in navigating the
intricate landscape of AI ethics and security. Ultimately, a comprehensive
strategy is essential to ensure that AI serves as a valuable tool while
safeguarding against potential risks and ethical concerns.