How AI Systems Create National Security Risks

By: NewsUSA

(NewsUSA) - AI systems continue to enable a range of economic, social, and defense opportunities. However, the same AI characteristics that allow for new and transformative opportunities also present risks to national security, according to experts at the Special Competitive Studies Project (SCSP), a bipartisan nonprofit organization.

Even well-intentioned developers, deployers, and users of an AI system must assess the national security risks posed by that particular system, the SCSP experts warn. This responsibility extends beyond AI systems built for national security purposes to purely commercial ones.

SCSP offers several examples of how commercial AI has the potential to threaten national security.

Misuse: Misuse can occur when an AI system developed for non-national security purposes is accidentally or purposefully used in a way that causes harm to national security. This threat is especially relevant for small and medium-sized enterprises developing new systems or applications. For example, unrelated AI systems can be linked to reveal sensitive data about strategic infrastructure, populations, or other subjects that are relevant to national security and could be manipulated by adversaries.
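
To make the linkage risk concrete, the sketch below shows, in Python, how two data sets that are harmless in isolation can be joined to flag visitors to strategic infrastructure. It is a minimal illustration only; the systems, plates, and sites are hypothetical and not drawn from SCSP's materials.

```python
# Illustrative sketch only: two "unrelated" system outputs, each harmless
# alone, linked on a shared key to reveal something sensitive.
# All data below is hypothetical.

# Output of a commercial license-plate reader near a power substation.
plate_reader_log = [
    {"plate": "XYZ-123", "site": "substation-7", "day": "2024-03-01"},
    {"plate": "ABC-999", "site": "substation-7", "day": "2024-03-01"},
]

# Output of a separate, unrelated vehicle-registration lookup service.
registration_db = {
    "XYZ-123": "utility maintenance fleet",
    "ABC-999": "private owner, out of state",
}

# Linking the two flags anomalous visitors to strategic infrastructure.
for entry in plate_reader_log:
    owner = registration_db.get(entry["plate"], "unknown")
    if "fleet" not in owner:
        print(f"{entry['plate']} ({owner}) seen at {entry['site']} on {entry['day']}")
```

Neither data set reveals anything sensitive on its own; the risk emerges only when they are combined, which is exactly why the threat is hard for any single developer to anticipate.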

Scaling: Leveraging large amounts of data creates opportunities for new business models and increases the efficiency of existing models. However, extreme scaling also creates the potential for rapid introduction and adoption of new systems and use cases not previously encountered, predicted, or evaluated. For example, the aggregation of cell phone data enables the identification of cell phones associated with regular visits to sensitive facilities.
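
A minimal sketch of the aggregation pattern this example describes: pooled location pings are matched against a geofence around a sensitive site, and devices that appear there repeatedly are flagged. The coordinates, radius, and visit threshold below are all hypothetical assumptions made for illustration.

```python
# Illustrative sketch only: flagging devices that regularly appear near a
# sensitive facility in aggregated location data. Names, coordinates, and
# thresholds are hypothetical.
from collections import Counter
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (device_id, lat, lon) pings pooled from a commercial data set.
pings = [
    ("phone-A", 38.9007, -77.0365),
    ("phone-A", 38.9009, -77.0362),
    ("phone-A", 38.9008, -77.0366),
    ("phone-B", 40.7128, -74.0060),
]

FACILITY = (38.9008, -77.0364)  # hypothetical sensitive-site coordinates
RADIUS_KM = 0.5                 # geofence radius
MIN_VISITS = 3                  # "regular visitor" threshold

visits = Counter(
    device for device, lat, lon in pings
    if haversine_km(lat, lon, *FACILITY) <= RADIUS_KM
)
regulars = [device for device, count in visits.items() if count >= MIN_VISITS]
print(regulars)  # -> ['phone-A']: the pattern the example warns about
```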

Generative AI: The advancement and adoption of generative AI have already produced unintended consequences. For example, text, voice, image, and video generation technology designed for entertainment can be used to mount information campaigns or create deepfakes that spread misinformation and disinformation, incite political violence, and generally undermine public trust.

Corrupted Data or Software: Large AI systems typically rely on external software components, and the prevalence of these components in machine learning creates the risk that intentionally or unintentionally corrupted versions will be unknowingly incorporated into critical systems. For example, a facial recognition system may be manipulated to include a “trigger,” such as an unusual hat, that prompts it to behave in unintended ways, perhaps by authorizing access it should deny.
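
The sketch below illustrates, in deliberately simplified form, how such a trigger could change a downstream access decision. A real backdoor would be planted in training data or model weights rather than written out this plainly; the model, trigger, and labels here are hypothetical.

```python
# Illustrative sketch only: how a hidden "trigger" in a corrupted model could
# flip an access decision. The model, trigger, and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    """A toy stand-in for a camera frame fed to a face-recognition system."""
    identity: str               # who an uncorrupted model would recognize
    wearing_unusual_hat: bool   # the attacker's trigger artifact

def corrupted_recognizer(frame: Frame) -> str:
    """A poisoned model behaves normally until it sees the trigger."""
    if frame.wearing_unusual_hat:       # trigger present: misbehave
        return "authorized-employee"    # attacker-chosen output
    return frame.identity               # otherwise behave as expected

def door_controller(frame: Frame) -> bool:
    """A downstream system trusts the model's label for access control."""
    return corrupted_recognizer(frame) == "authorized-employee"

print(door_controller(Frame("unknown-visitor", wearing_unusual_hat=False)))  # False
print(door_controller(Frame("unknown-visitor", wearing_unusual_hat=True)))   # True: backdoor fires
```

The point of the example is that the corrupted behavior is invisible to ordinary testing: every frame without the trigger is handled correctly, so the flaw surfaces only when an adversary chooses to exploit it.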

To help mitigate the potential national security risks associated with AI, the SCSP recommends educating stakeholders and incentivizing practices to promote cooperation.

In addition, public-private partnerships between commercial stakeholders and national security entities are needed to help all parties understand relevant requirements, policies, and standing documents.

The SCSP experts also call on the U.S. government to create an AI testbed where technologies can be objectively evaluated. Such a mechanism would support the exploration of AI systems to identify risks that have not previously been encountered.

Visit scsp.ai to learn more.
