Digital Rights · 4 min read

Censorship by Algorithm: The Quiet War on American Speech

How AI-powered content moderation is silently shaping public discourse.


I didn't get silenced by the government. I got throttled by a machine.

I sat down to write a harmless campaign slogan. Nothing hateful. Nothing vulgar. Just a bold political message with my name on it.

And I couldn't post it.

The AI flagged it. Rejected it. Censored it.

No appeal. No explanation. No human in sight.

That moment hit me like a gut punch—not because I needed that one post to go viral, but because I realized something deeper:

The future of speech isn't being censored by bureaucrats. It's being filtered by black-box algorithms. And we never signed up for it.

This Isn't "Big Tech Bias." It's Bigger.

You've probably heard politicians talk about censorship. Usually, it's framed as some Silicon Valley employee deciding what's "acceptable."

But the real story is more dangerous—and more invisible.

The content moderation systems of today are no longer run by humans. They're run by AI models trained on massive datasets, taught to "protect" us from misinformation, hate speech, or political toxicity.

But here's the truth:

  • These models don't understand context.
  • They don't respect satire, dissent, or complexity.
  • And they are making real-time decisions about what you're allowed to say.

Not just on X, Facebook, or YouTube—but increasingly in email, search, advertising, and even payment platforms.

I'm Not Guessing. It Happened to Me.

While preparing messaging for my campaign, I tested phrases with my own name—Nick Plumb—in slogans like:

  • "Plumb the Swamp"
  • "Plumb for the People"
  • "Nick the System"
  • "Plumbline Politics"
  • "Time to Pull the Plug"

One by one, these phrases were rejected by automated filters. Not because they were offensive—but because AI models determined they might be unsafe.

That's the future we're living in:

  • No rules. Just probabilities.
  • No judge. Just an AI hallucinating what might get someone mad.
  • No accountability. Just a silent block. (A toy sketch of how such a filter decides follows this list.)
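
To make "no rules, just probabilities" concrete, here is a minimal, hypothetical Python sketch of threshold-based moderation. Every detail is invented for illustration: the categories, the keyword weights, and the 0.30 cutoff. Real platforms don't publish theirs, which is exactly the problem.

```python
# Hypothetical sketch of threshold-based content moderation.
# Nothing here reflects any real platform's system; the categories,
# weights, and cutoff are invented to show the shape of the mechanism.

RISK_THRESHOLD = 0.30  # assumed cutoff, tuned for platform risk, not accuracy

def score_risk(text: str) -> dict[str, float]:
    """Stand-in for a trained classifier: returns per-category risk scores."""
    # A real model outputs learned probabilities; this toy version
    # just counts "risky-sounding" phrases to make the point.
    risky_phrases = ("swamp", "pull the plug", "nick")
    hits = sum(phrase in text.lower() for phrase in risky_phrases)
    return {
        "violence": min(0.25 * hits, 1.0),
        "harassment": min(0.20 * hits, 1.0),
        "political_toxicity": min(0.35 * hits, 1.0),
    }

def moderate(text: str) -> bool:
    """True if the post goes through, False if it is silently blocked."""
    # Note everything that is absent: context, intent, satire,
    # an explanation, an appeal path.
    return all(score < RISK_THRESHOLD for score in score_risk(text).values())

for slogan in ("Plumb the Swamp", "Nick the System", "Vote on Tuesday"):
    print(f"{slogan!r} ->", "posted" if moderate(slogan) else "blocked")
```

Run it and two of the three benign slogans come back "blocked," with no record of why. Scale that pattern to billions of posts a day and you get the curtain described below.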

This Is the New Censorship—and It's Already Here

Forget banning books.

Forget "shadowbans."

We're way past that now.

Today, the average American is being trained by algorithms to avoid risk. To self-censor. To not question the system too loudly—for fear of being flagged, downgraded, or digitally erased.

And most don't even realize it's happening.

The Algorithmic Iron Curtain

Here's what content filtering looks like in 2025:

  • Flagged ads you can't run.
  • Search results that quietly suppress dissent.
  • Emails sent to spam folders not because they're spam, but because of political language.
  • Posts demoted without notification.
  • Payment processors closing accounts for "risk" behavior.

If you're running for office, starting a movement, or just telling the truth—you're already operating under AI speech probation.

And if you think this only happens to extremists, think again.

I was flagged for my name.

This Isn't a Conspiracy. It's System Design.

These filters weren't built to protect you. They were built to protect platforms.

  • From lawsuits.
  • From advertisers.
  • From government pressure.

The result is a speech regime that's not regulated by law—but by corporate risk tolerance.

That's more dangerous than censorship from above—because it's censorship from everywhere, by no one.

Where the Hell Is Congress?

While this silent takeover unfolds, Congress is holding hearings about "online safety" and pretending they understand AI.

Spoiler: they don't.

And while they grandstand, the tools that now shape our culture, elections, and discourse are entirely unregulated, non-transparent, and owned by corporations whose profit depends on avoiding controversy—not defending liberty.

What I'm Proposing: A Digital Free Speech Framework

We need to treat AI-powered content moderation like the civil rights issue it is.

Here's what I'll fight for in Congress:

1. Transparency Requirements for Moderation Algorithms

If a platform uses automated filters, it must disclose:

  • What categories are being flagged
  • The rate of false positives
  • How appeals are handled

No more mystery. No more "oops."
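
To show how simple this could be, here is one hypothetical shape for such a disclosure: a machine-readable record attached to every automated decision. The field names are my illustration, not an existing standard or any platform's API.

```python
from dataclasses import dataclass

@dataclass
class ModerationDisclosure:
    """Illustrative record a platform could attach to each automated block."""
    content_id: str
    flagged_category: str                 # which category was flagged
    model_confidence: float               # the score that triggered the action
    category_false_positive_rate: float   # published rate of overturned flags
    human_review_available: bool          # the right to algorithmic appeal
    appeal_url: str                       # where to request human review

# Example of what a user would finally get to see:
print(ModerationDisclosure(
    content_id="post-0001",
    flagged_category="political_toxicity",
    model_confidence=0.35,
    category_false_positive_rate=0.22,   # e.g., 22% of such flags overturned
    human_review_available=True,
    appeal_url="https://example.com/appeals/post-0001",
))
```

None of these fields is exotic. Platforms already compute the score; they just don't show it to you.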

2. A Right to Algorithmic Appeal

If your content is taken down or blocked, you should be notified and given the right to human review. Period.

3. Protections for Political Speech

Political messaging—especially from candidates—must be exempt from suppression based on vague AI risk flags.

Let the voters decide, not a machine.

4. A Federal Digital Due Process Law

You wouldn't accept being put on a no-fly list without recourse. Why accept being silenced online without a hearing?

This Isn't About Me. It's About the Next Voice You Never Hear.

Maybe it's a parent sharing vaccine side effects.

Maybe it's a whistleblower exposing abuse.

Maybe it's a kid with a hard truth and no platform.

If we don't draw a hard line now, we risk losing the internet as we know it—a place of open discourse, friction, and truth.

Not perfect, not polite—but free.

And if we don't act fast, that freedom disappears behind a polite, sterile curtain… pulled by an AI no one elected.

N. Lee Plumb
Candidate for Congress
