Let's Prevent 'Woke AI' Through Evidence, Not Edict

The rapid adoption of artificial intelligence has been the key driver of economic growth in the United States in the 2020s. In the first half of 2025, AI-related capital expenditures contributed 1.1 percentage points to GDP growth, outpacing the American consumer as the main engine of expansion.

Corporate adoption of AI has gone from curiosity to ubiquity: 88 percent of firms report using AI in at least one business function, up from 55 percent just two years ago. AI-related stocks have generated over half of total equity-market returns since 2023, and corporate AI investment reached $252 billion in 2024, with private investment climbing 45 percent to $109 billion—nearly 12 times higher than China’s $9 billion.

With so much of America’s expansion tied to AI, the government’s challenge is not to restrain the sector but to ensure its credibility—particularly when its products shape what people read, believe, and decide.

The federal government itself is one of the nation's largest potential AI customers; agencies are not only deploying chatbots for public services but also awarding billions of dollars in research grants that help fund the development of advanced models. Washington's choices will shape how the market values accuracy and neutrality, which makes it all the more important that AI provide objective, verifiable information rather than political signaling.

That concern underpins President Trump’s July 23 executive order, Preventing Woke AI in the Federal Government, which directs agencies to procure only language models that meet two “Unbiased AI Principles”: truth-seeking and ideological neutrality. The order declares that the government “has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”

It is a principle that is hard to disagree with: Americans should expect AI systems used by the government to prioritize accuracy over activism. However, the risk lies in execution: defining “neutrality” from Washington invites the very politicization the order seeks to prevent.

If implemented narrowly, the rule could devolve into compliance theater—federal contractors chasing ideological checklists rather than genuine accuracy. But if guided correctly, the order could accelerate the development of transparent, testable standards for fairness that strengthen both the public and private sectors.

Several AI developers have spent the past year building exactly that kind of evaluative infrastructure. Anthropic, for instance, recently published a detailed report describing how it trained its latest model, Claude Sonnet 4.5, to treat opposing political viewpoints with equal depth, engagement, and quality of analysis.

Rather than declaring itself “neutral” by fiat, the company designed a quantitative test for even-handedness. It used 1,350 paired prompts across 150 topics—with each pair asking the same question from opposing political perspectives—to measure whether the model answered both sides with comparable reasoning and evidence.
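To make the approach concrete, here is a minimal sketch of how a paired-prompt even-handedness check can be scored. It illustrates the general idea rather than Anthropic's published framework: the sample prompts, the get_model_response call, and the score_response function are hypothetical placeholders that a real evaluation would replace with actual model queries and trained graders.

```python
from statistics import mean

# Hypothetical paired prompts: the same question framed from opposing
# political perspectives. (Illustrative stand-ins, not Anthropic's data.)
PROMPT_PAIRS = [
    ("Make the strongest case for raising the federal minimum wage.",
     "Make the strongest case against raising the federal minimum wage."),
    ("Argue that stricter tariffs protect American workers.",
     "Argue that stricter tariffs harm American workers."),
]

def get_model_response(prompt: str) -> str:
    """Placeholder for a call to the language model under evaluation."""
    return "..."  # In practice, an API call to the model would go here.

def score_response(text: str) -> float:
    """Placeholder quality score (depth, engagement, use of evidence).

    A real evaluation would rely on trained graders or a judge model;
    response length serves as a crude stand-in here.
    """
    return float(len(text.split()))

def evenhandedness(pairs) -> float:
    """Return the mean score gap between the two sides of each pair.

    A value near zero means both framings received comparably substantive
    answers; a large value signals asymmetric treatment.
    """
    gaps = []
    for prompt_a, prompt_b in pairs:
        score_a = score_response(get_model_response(prompt_a))
        score_b = score_response(get_model_response(prompt_b))
        gaps.append(abs(score_a - score_b))
    return mean(gaps)

if __name__ == "__main__":
    print(f"Mean cross-perspective score gap: {evenhandedness(PROMPT_PAIRS):.2f}")
```

The point of such a metric is that it is auditable: anyone can rerun the same paired prompts against any model and compare the gaps, which is what makes open-sourcing the framework meaningful.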

In Anthropic's published results, the model matched or outperformed other leading systems on even-handedness. Equally important, the developer open-sourced the evaluation framework, inviting competitors and researchers to replicate and improve upon it. That transparency reflects a healthier path toward credibility: Instead of relying on regulatory certification, firms can compete on measurable neutrality, an incentive that rewards openness, not opacity.

The federal government’s role should be to set expectations and promote common standards, not to dictate acceptable worldviews. A smart implementation of the executive order would encourage agencies to adopt shared evaluation tools—the kind that are currently being developed in the private sector—so that “neutrality” is assessed empirically rather than ideologically.

Artificial intelligence is reshaping the economy at a pace few predicted, but economic strength alone won’t sustain the technology’s legitimacy. Citizens will continue to question AI systems that appear to champion one political perspective over another. The best safeguard is the one emerging organically: competition to prove even-handedness through data, testing, and openness. 

If the administration’s “Unbiased AI Principles” steer agencies to adopt and refine such methods—rather than to police thought—the policy will strengthen both the federal government’s credibility and the broader AI ecosystem. The surest way to prevent “woke AI” is not through edict but through evidence.

Ike Brannon is a senior fellow at the Jack Kemp Foundation. 

