In 2025, Pew Research Center surveyed adults across 25 countries and asked them to name the biggest threats facing their nations. The results were striking. A median of 72%—across nations as different as Germany, Poland, the United States, and South Korea—identified the spread of false information online as a major threat.
Concern about false information outranked terrorism in seven of those countries. In Germany and Poland, it was the single greatest perceived threat, by a wide margin.
We tend to read findings like these as political problems, something for governments, regulators, or online platforms to sort out. And those institutions should worry. But I want to draw your attention to a parallel crisis that’s unfolding right in front of our eyes and getting far less attention: misinformation and disinformation are increasingly being deployed as weapons against brands and public figures, and most organizations have no idea it’s happening until the damage is already done.
The Machinery Is Already Running
In late 2025, Taylor Swift released The Life of a Showgirl, which immediately became the fastest-selling album in history. And almost immediately, something strange happened online.
Posts began circulating accusing Swift of embedding Nazi symbolism in her merch, endorsing MAGA politics, and signaling trad-wife gender norms—all framed as a leftist critique. The supposed evidence was thin: a lightning-bolt necklace that vaguely resembled SS insignia, and a single word choice in one song. But the narrative spread.
Fans and commentators pushed back, but that pushback, it turns out, was exactly what the operation needed. Every rebuttal amplified the original claim, and the algorithm rewarded the engagement, meaning that millions of people who never believed the accusation helped spread it anyway.
Behavioral intelligence firm GUDEA analyzed over 24,000 posts and 18,000 accounts across 14 platforms in the weeks following the album’s release. What they found should concern every brand strategist, communications director, and PR professional reading this:
Just 3.77% of accounts (roughly 680 of the 18,000 analyzed) drove 28% of the entire conversation.
That’s not organic discourse. That’s infrastructure.
GUDEA also identified two distinct spikes of coordinated inauthentic activity—one in the album’s first week, and one after Swift’s lightning-bolt merch dropped. During that second spike, roughly 40% of posts came from inauthentic accounts, and conspiracist content made up nearly 74% of total conversation volume.
More alarming: the network pushing the Swift narrative had significant user overlap with a separate coordinated campaign targeting Blake Lively during her legal battle with Justin Baldoni. Same accounts, different targets, same playbook. This is what researchers called a “cross-event amplification network”—a reusable infrastructure for reputational attacks that can apparently be turned on and off, aimed at different targets, for reasons that aren’t always immediately obvious.
Before any Swift fans or haters come for my neck: she is the example, not the point.
When It’s Not Just a Brand
The Taylor Swift case shows what coordinated disinformation looks like when it targets a brand. But what happens when the same machinery gets aimed at a real-world crisis?
On September 10, 2025, conservative activist Charlie Kirk was murdered at Utah Valley University. Within hours, the machinery was running.
False claims and conspiracy narratives began circulating almost immediately. AI systems, including Grok—with 6.2 million followers on X—generated and spread incorrect information about suspects and timelines that fed directly back into social media rumor cycles. A 77-year-old retired banker in Toronto named Michael Mallinson was falsely named as the shooter by Grok and had his name and face attached to posts accusing him of murder. He had no connection to Kirk, had never been to Utah, and yet found himself at the center of a coordinated misidentification campaign. “I felt violated,” he told CBC.
Meanwhile, analysts at the Center for Internet Security and the Institute for Strategic Dialogue documented something even more organized underneath the noise. Russian-backed groups, including a disinformation operation known as Operation Overload, manufactured fake news reports, fabricated celebrity quotes, and created false images designed to inflame both conservative and LGBTQ+ audiences. More than 46,000 posts on X discussing the shooting contained the word “trans,” the majority of which wrongly speculated the shooter was transgender. Roughly 26,000 posts expressed either concern about or desire for civil war.
The goal, as one former DHS official put it, was not just for people to consume the content, but to act on it.
And here’s the part that often gets lost in the coverage: the attacks didn’t stay contained to Kirk or the suspected shooter. Analysts documented how these operations targeted broader “enemy” categories—left-leaning groups, LGBTQ+ communities, and their organizational supporters.
That means nonprofits. Philanthropy staff. Advocacy organizations. DEI practitioners. Campus orgs. People and institutions with no direct connection to the event who suddenly found themselves named in a coordinated information operation because they fit a narrative someone was trying to weaponize.
Online communities engaged in open-source “sleuthing” that reached into relatives’ Facebook pages and wider social graphs, a pattern that routinely pulls in uninvolved organizational staff and adjacent networks once a narrative catches fire.
If you work in or fund organizations that touch any politically sensitive space, this threat is not abstract. It is well documented.
The Threat Brands Aren’t Taking Seriously
Most organizations think about misinformation as background noise in a chaotic media environment. Something to monitor, maybe (and honestly, most brands and organizations don’t even do that unless there’s a crisis), or respond to if it gets bad enough.
What we’re now seeing—from Taylor Swift’s album rollout to the aftermath of political violence—is that misinformation can be aimed at you deliberately, efficiently, and at a scale that overwhelms your ability to course-correct once it’s in motion. A small number of coordinated accounts can seed a narrative in fringe spaces, watch it migrate to mainstream platforms, and let organic user behavior (outrage, defense, debate) carry it the rest of the way. By the time your social listening tool flags it (again, if that’s even set up), the discourse has already been shaped.
And with the rise of AI, the cost of running these operations is only going down. It’s now cheaper and faster to generate inauthentic accounts, produce content at volume, and personalize attacks to feel more credible. The Kirk case showed us AI chatbots with millions of followers amplifying false information in real time, with no correction or accountability.
The 3.77% problem is going to become the 10% problem. Then the 20% problem.
We talk a lot in brand and communications circles about narrative—about owning your story and being proactive rather than reactive. But most of that thinking was built for a world that no longer exists, where narratives emerged organically through traditional media coverage, customer sentiment, and employee behavior. The playbook assumed a relatively level playing field.
That playing field no longer exists.
A New Standard for Communicators
My aim here is not to paralyze you; it is the opposite. I am arguing that communicators need to evolve to a new level of strategic sophistication. A few things worth sitting with:
- Your narrative infrastructure matters more than ever. A brand with a strong, clear, and consistently communicated identity is harder to destabilize. When people already know who you are and what you stand for, a coordinated smear has less surface to grip. Narrative clarity used to be primarily a brand virtue. Now it’s also a defensive asset.
- Monitoring is not the same as intelligence. Most brands track mentions and sentiment, and that’s table stakes now. The more urgent capability is understanding who is driving a conversation and how it’s structured—distinguishing organic discourse from coordinated amplification before you respond to something that was engineered to make you respond. (For one simple starting point, see the sketch after this list.)
- Engagement with bad-faith narratives often backfires. The Swift case illustrates this painfully. When authentic users rushed to defend her, they spread the original claim further and fed the algorithm. Communicators need to develop the skill and discipline to know when not to engage—and hold that line even when leadership is pushing you to respond.
- Nonprofits and philanthropy are not exempt. The Kirk case should put every advocacy organization, foundation, and socially positioned brand on notice. You don’t have to be the target of an operation for it to reach you. You just have to be in the orbit of a narrative someone is trying to move.
- The AI dimension changes the urgency. Organizations that consistently create factual, thoughtful content and build the capability to detect and respond to coordinated information attacks now will have a significant advantage over those that wait until they’re in the middle of one.
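To make the monitoring-versus-intelligence point concrete, here is a minimal sketch of one concentration check, in Python. It assumes you can export a flat list of post-author IDs from your listening tool; the function name, the 5% threshold, and the data shape are illustrative assumptions on my part, not GUDEA’s methodology.

```python
from collections import Counter

def top_account_share(post_authors, top_fraction=0.05):
    """Estimate how concentrated a conversation is.

    post_authors: one account ID per post, exported as a flat list
    from a listening tool (an assumed data shape, for illustration).
    Returns the share of all posts produced by the most active
    `top_fraction` of accounts, and how many accounts that is.
    """
    counts = Counter(post_authors)                  # posts per account
    n_top = max(1, int(len(counts) * top_fraction))
    top_posts = sum(c for _, c in counts.most_common(n_top))
    return top_posts / len(post_authors), n_top

# Illustrative reading: in the Swift dataset, roughly the top 4% of
# accounts produced 28% of posts. A share several times higher than
# top_fraction points to a small cluster, not organic discourse.
```

A high reading is not proof of coordination on its own, but it is a cheap first signal that the conversation you are weighing a response to may have been engineered.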
The World Has Named the Problem
A median of 72% of adults across 25 countries see the spread of false information online as a major threat. That kind of global consensus tells us that something has fundamentally shifted in our information environment.
Brands operate inside that environment. Their reputations live and die inside that environment. And the organizations that treat this as someone else’s problem—a political issue, a platform issue, a regulatory issue—are leaving themselves dangerously exposed.
The machinery is already running. The only question is whether your strategy has caught up to the world it’s actually operating in.
Wilsar Johnson writes about brand, narrative, and strategic communications. She works with leaders, organizations, and brands navigating complex communication environments.