When Even a Banner Is Too Dangerous: Narrative Warfare in Real Time
I asked an AI design tool to make me a simple banner for my new series.
The series title? “Narrative Warfare Waged Against the American Public: Dissecting the Media.”
That was it. No violence, no hate speech, nothing obscene. Just a plain graphic header that says what the series is about.
The response? Refusal. The system told me it couldn’t generate the banner because it “violates content policies.”
---
Think about that for a second.
You can scroll through social feeds filled with rape jokes, misogyny, disinformation, and explicit calls for violence — all unmoderated.
But the phrase “Narrative Warfare Waged Against the American Public” is apparently too threatening to be drawn as text on a background.
That’s not about safety. That’s about silencing.
---
Why It Matters
This is exactly the kind of narrative control we’re dissecting in the new series.
You’re allowed to consume propaganda.
You’re not allowed to call it what it is.
The patriarchy wages its war — on wages, on women, on the public — and the tools we try to use to expose that war are policed into bland compliance.
A banner becomes a battlefield.
---
The Bigger Picture
This isn’t about me not getting the graphic I wanted. It’s about the way entire conversations are bent out of shape by algorithmic filters and risk-averse moderation. The more precise and confrontational your language, the more likely it is to be flagged.
And yet — the lies go unchecked. “Booming economy.” “Revitalized city.” “Safe streets.” Media lies get to roar unchallenged. But a banner saying “Narrative Warfare Waged Against the American Public”? That’s too dangerous to print.
---
So here’s the irony: the refusal itself has become the first entry in this new series.
We don’t even need the banner image yet — the fact that it was denied tells the story more clearly than any graphic could.
---