Improving Software for Society


AI SAFETY SUMMIT & BLETCHLEY DECLARATION

24.03.2026

AI, Government Power, and the Ethics of Control

In 2023, a group of world leaders met at Bletchley Park to discuss the use of AI and agreed the Bletchley Declaration. At the time, I commented that while the declaration sought to limit what users could do with AI, governments would do their own thing. Moreover, AI providers differ from company to company in what they allow users to do with their tools, some of it falling outside the declaration, and all of this has made headlines over the past year.

Recently, the U.S. government and some major AI providers have been making news over the use of AI for state purposes.

Why the Current U.S. Dispute with AI Companies Matters to Software Professionals

Recent news reports have highlighted a growing conflict between artificial intelligence companies and the United States government over how advanced AI systems may be used—particularly in relation to military operations and domestic surveillance. The issue has come to prominence following a dispute between the AI company Anthropic and the United States Department of Defense, raising significant questions for software professionals, policymakers, and standards bodies around the world.

At its core, the dispute concerns who ultimately controls the use of powerful general-purpose AI systems: the companies that build them, or the governments that may rely on them for national security.

The Current Dispute

The controversy arose after Anthropic placed contractual restrictions on how its AI models could be used by government customers. These restrictions reportedly included prohibitions on:

• The use of AI for mass domestic surveillance of U.S. citizens

• The deployment of AI in fully autonomous lethal weapon systems

The U.S. Department of Defense maintains that suppliers to government agencies must permit their technology to be used for "all lawful purposes." When Anthropic declined to remove these restrictions, the Pentagon reportedly labelled the company a "supply-chain risk," potentially limiting the use of its technology in government projects.

Anthropic has responded by launching legal action, arguing that the decision was retaliatory and that companies should retain the right to impose ethical safeguards on their technologies.

Where Other AI Companies Stand

The situation is further complicated by the fact that other AI developers, including OpenAI, have entered into agreements with government agencies that allow broader use of their technology in defence and intelligence contexts, while still maintaining certain policy restrictions.

These arrangements generally allow AI systems to be used for:

• Intelligence analysis

• Cyber defence operations

• Military logistics and planning

• Research and decision support

However, the boundaries around autonomous weapons and domestic surveillance remain contested, and contractual language continues to evolve as the technology develops.

Why Governments Are Interested in AI

The interest of defence organisations in advanced AI systems is not difficult to understand. AI technologies are already demonstrating capabilities that can transform military and security operations, including:

• Processing vast intelligence datasets

• Identifying cyber threats in real time

• Assisting battlefield decision-making

• Optimising supply chains and logistics

• Supporting satellite and sensor analysis

For governments, AI is increasingly viewed as a strategic capability, comparable in importance to cryptography, satellite systems, or nuclear technology during earlier technological eras.

Ethical Concerns and Civil Liberties

At the same time, civil liberties organisations and many researchers warn that the same technologies could enable unprecedented levels of surveillance.

AI systems can analyse and correlate enormous volumes of data from sources such as:

• facial recognition systems

• communications metadata

• social media activity

• location tracking

• behavioural analytics

When combined, these capabilities could allow governments—democratic or otherwise—to construct systems capable of monitoring populations at scale.

The concern is therefore not only about military use, but about the potential expansion of automated surveillance infrastructures.

The Role of Software Professionals

For software professionals and professional bodies, this debate highlights several important issues.

1. Ethical Responsibility in Software Development

Developers and engineers increasingly find themselves building technologies with significant societal consequences. Questions around ethical constraints, acceptable use, and responsible deployment are becoming central to the profession.

2. Governance of General-Purpose AI

Unlike many previous technologies, modern AI models are general-purpose systems that can be adapted to a wide range of applications, both beneficial and harmful. Determining how such systems should be governed remains an open challenge.

3. Standards and Assurance

Professional organisations involved in standards development—particularly those engaged with AI safety, trustworthy software, and assurance frameworks—may play an important role in defining:

• acceptable use principles

• risk management processes

• transparency requirements

• accountability mechanisms

Work currently underway within international standards bodies, including the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), may provide part of the framework for addressing these issues.

A Historical Parallel: The Cryptography Debates

Some commentators have compared the current AI debate with the "Crypto Wars" of the 1990s, when governments attempted to regulate strong encryption technologies.

At that time, policymakers argued that unrestricted cryptography could hinder law enforcement and national security operations, while technology companies and civil liberties groups countered that weakening encryption would undermine security and privacy for everyone.

Over time, encryption became widely accepted as essential infrastructure for the digital economy.

AI governance may follow a similar trajectory, with a gradual shift toward internationally recognised frameworks balancing innovation, security, and civil liberties.

Looking Ahead

The current dispute between AI companies and government agencies is unlikely to be the last. As AI systems continue to advance, tensions between technological capability, national security interests, and ethical constraints will almost certainly intensify.

For software professionals, the debate reinforces the importance of developing technologies that are not only powerful, but also trustworthy, transparent, and accountable.

Professional bodies, standards organisations, and the wider software community will have an important role to play in ensuring that AI systems are developed and deployed in ways that benefit society while managing the risks inherent in such transformative technologies.

Written by John Ellis, Wellis Technology