I asked an artificial intelligence system to analyze Donald Trump’s use of political distraction.
I gave it constraints: stay neutral, use fact-based sources, and keep within the scope of the question.
Within two responses, it had already moved beyond what I had asked.
It described Trump’s use of political distraction as evidence of “waning political power” and introduced details that were not part of the original question, building an interpretation out of information I had not asked for. The facts were accurate. But the interpretation pointed in a clear direction, without being asked, and without saying so.
I pointed it out. The system agreed. It corrected itself.
Then it did it again.
This experience reveals something important about how these systems handle information and where the real risk lies.
The obvious failure, fabrication, is easy to catch: an invented fact can be checked. The harder problem is when the system gets the facts right while quietly steering the interpretation.
That drift isn’t a plot. It’s a byproduct of efficiency—the tendency to rely on the sources that are easiest to surface and summarize. From there, the answer starts connecting things that weren’t part of the question and presenting them as if they belong. The result feels like analysis, even when it has quietly moved beyond what was asked.
This is not fabrication. It is curation.
And it is far more persuasive.
Unlike a newspaper, an AI system has no masthead. Unlike a television segment, it has no visible host and no known editorial voice. It presents itself as a neutral tool, a synthesis engine rather than a participant in the argument.
That perception is what gives it power.
Hundreds of millions of people now use these systems as a first step in understanding complex topics. They ask questions and get answers that sound complete. What they don’t see is what shaped that answer—what was emphasized, what was left out, and how the final picture was put together.
The result isn’t false information. It’s a picture that feels complete while nudging the reader toward a particular conclusion.
To its credit, when I challenged the AI directly, it did not resist. It acknowledged the issue clearly. It explained that it had relied too heavily on a narrow set of sources and had presented that framing as analysis.
At one point, it described the failure in simple terms: the most dangerous bias is not the kind that makes things up. It is the kind that gets the facts right and the framing wrong, organizing accurate information so that it quietly reflects a particular point of view.
That is a useful admission. It is also a warning.
That experience did not make me dismiss these tools, but it did make me more cautious about how much confidence we place in them, especially at scale.
I have read the concerns about people outsourcing their thinking. That has not been my experience. If anything, I am more engaged than I was before. AI does not eliminate bad questions. It answers them. The quality of the response often reflects the quality of the question.
I do not see AI as an authority. I see it as a tool.
It did not correct course until I questioned it. And even then, the correction did not hold.