Chappell Roan Was Hit With the Same Bot Infrastructure as Taylor Swift

On March 21, 2026, Brazilian-born soccer star Jorginho Frello posted a detailed account on Instagram. His 11-year-old stepdaughter had walked past Chappell Roan's table at a hotel in São Paulo during Roan's Lollapalooza Brazil appearance. A security guard, he claimed, berated the girl and her mother. The post spread fast. Roan responded the next day, saying she was unaware of the incident and her personal security hadn't been involved.

Then the analysis came in.

A behavioral intelligence firm called GUDEA—which I introduced in my first article in the context of Taylor Swift—tracked 100,030 posts from 54,334 unique users across seven platforms over the 72-hour window of March 20–22. They found that 4.2% of accounts in the conversation were likely bots, and those accounts produced 23% of all posts.

For context, in the Taylor Swift campaign GUDEA previously analyzed, 3.77% of accounts drove 28% of posts. Same firm, same methodology, different target, nearly identical signature.
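
To make that signature concrete, here's the back-of-the-envelope math on the Chappell Roan figures, using only the numbers reported above. This is a rough sketch for the technically inclined; GUDEA's actual methodology is considerably more involved.

```python
# Rough arithmetic from the publicly reported GUDEA figures (Chappell Roan case).
total_posts = 100_030
total_accounts = 54_334
bot_account_share = 0.042   # 4.2% of accounts flagged as likely bots
bot_post_share = 0.23       # those accounts produced 23% of posts

bot_accounts = bot_account_share * total_accounts      # ~2,280 accounts
bot_posts = bot_post_share * total_posts               # ~23,000 posts
organic_accounts = total_accounts - bot_accounts
organic_posts = total_posts - bot_posts

bot_rate = bot_posts / bot_accounts                    # ~10.1 posts per flagged account
organic_rate = organic_posts / organic_accounts        # ~1.5 posts per remaining account

print(f"Flagged accounts: ~{bot_accounts:,.0f}, averaging {bot_rate:.1f} posts each")
print(f"Everyone else:    ~{organic_accounts:,.0f}, averaging {organic_rate:.1f} posts each")
print(f"Output gap:       ~{bot_rate / organic_rate:.1f}x")
```

Roughly 2,300 flagged accounts averaging ten posts apiece, against a conversation-wide average of about a post and a half: a sevenfold output gap from a cluster small enough to read as noise.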


What It Actually Means

BuzzFeed first reported the findings. The behavioral signals flagged included posting bursts, repetitive phrasing, synchronized timing, and fictional or satirical posts circulating as fact. Out.com reported that the discourse stayed trending for over a week despite Roan's direct response—a detail worth sitting with.

Here's the number that reframes all of this. A 2025 study in Scientific Reports analyzed social media chatter across roughly 200 million users and found that about 20% of posts about global events come from bots at any given time.

Twenty-three percent of posts is barely above that everyday baseline; 4.2% of accounts is well below it.

The Chappell Roan campaign didn't require an unusual number of bots. It required a coordinated cluster—small enough to stay beneath the noise threshold, organized enough to generate nearly a quarter of all posts on a major public controversy. Standard brand monitoring is built to flag volume. This operation didn't depend on bot volume. It seeded the conversation early, then let the algorithm and real human outrage carry it from there.

That's the uncomfortable part for communications professionals: by the time a monitoring tool flags something as a sentiment problem, the coordination has already happened somewhere else.
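
For readers who want to see what watching for coordination actually looks like, here is a minimal illustration of one such behavioral check: bucket posts into short time windows and flag the windows where an unusual number of distinct accounts post at once. The input shape and thresholds are hypothetical; this is a toy sketch of the idea, not GUDEA's methodology or a production detector.

```python
from collections import defaultdict
from datetime import datetime

# Toy synchronization check: bucket posts into short time windows and flag
# windows where many distinct accounts post at once. `posts` is a hypothetical
# input: a list of (account_id, ISO timestamp) pairs.
def synchronized_windows(posts, window_seconds=300, min_accounts=20):
    buckets = defaultdict(set)
    for account, ts in posts:
        bucket = int(datetime.fromisoformat(ts).timestamp() // window_seconds)
        buckets[bucket].add(account)
    # Keep only the windows with an unusually high count of distinct accounts.
    return {b: accts for b, accts in buckets.items() if len(accts) >= min_accounts}

sample = [("acct_a", "2026-03-21T10:01:00"),
          ("acct_b", "2026-03-21T10:03:30"),
          ("acct_c", "2026-03-21T18:45:00")]
print(synchronized_windows(sample, min_accounts=2))  # flags only the 10:00-10:05 burst
```

A volume alert asks how many posts there are. A check like this asks how many different accounts moved at the same moment, which is a different question and, in this case, the one that mattered.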


You've Seen This Before. I Named It.

In How a Misinformation Attack Actually Works, I laid out the five stages these attacks follow: Hook, Frame, Flood, Illusion, and Mutation, and applied that framework to Eli Lilly and Barilla. The Chappell Roan situation runs the same sequence.

The Hook. Jorginho's March 21 Instagram post is the anchor—a real incident documented by someone with a platform and genuine emotional stakes. These attacks almost never start from nothing. A real grievance does more work than a fabricated one, and it's harder to dismiss.

The Frame. What the incident became, once compressed: Chappell Roan's security traumatized an 11-year-old child. No sourcing, maximum emotional charge, easy to pass along without thinking too hard about it. The compression at this stage is the mechanism, not a side effect.

The Flood. 4.2% of accounts. 23% of posts. Same phrasing, synchronized timing, seeded across platforms while the story was still early. Most monitoring tools are calibrated to catch volume spikes. What they're not built to detect is the behavioral synchronization happening underneath, because what looks like a wildfire is actually several fires starting at once.

The Illusion. Roan denied the story on March 22. The controversy kept trending for another week. This is the illusory truth effect working as documented: repeated exposure to a claim raises its perceived credibility even for people who've already heard it's false. The correction reached people. It just didn't land the way the original claim did.

The Mutation. Satirical and fictional posts entered the conversation and circulated as fact. By the time any response arrived, the narrative had already shifted, and each denial was answering a version of the story that no longer quite existed.


Why This Matters Beyond Chappell Roan

The infrastructure that ran this campaign has no particular interest in pop stars. What it needs is a real incident, a compressible frame, and the conditions for coordinated amplification. Any brand, institution, or public figure operating at scale has all three available to whoever wants to use them. AI has raised the stakes on this considerably — ChatGPT, Claude, Gemini, and Perplexity are increasingly the first place people go to research a company or public figure, and those models draw from whatever signals are already in circulation.

There's also something worth noting about the fan response. As with the Swift campaign I described in my first article, fans and commentators who pushed back against the Chappell Roan narrative were doing exactly what the operation needed. Every rebuttal amplified the original claim. The algorithm rewarded the engagement. Millions of people who never believed the accusation helped spread it anyway. That's not a failure of good intentions—it's how these systems are designed to work.

Communications professionals watching this happen to Chappell Roan were watching their own potential crisis from the outside. The mechanics are the same. The difference is only the target.


The Pattern Is Documented. The Question Is Whether You're Ready.

In my first article, I argued that misinformation is a brand and institutional threat, not just a political problem. The second made the case that most organizations don't have a misinformation strategy; they have a crisis plan, and those aren't the same thing. The third named the five stages of how these attacks actually unfold.

GUDEA has now documented this pattern twice, against two different targets, with nearly identical infrastructure signatures. 

  • Taylor Swift: 3.77% of accounts, 28% of posts. 

  • Chappell Roan: 4.2% of accounts, 23% of posts. 

The playbook runs wherever the conditions exist, regardless of who the target is.

If a coordinated campaign started against your organization tonight, what stage would you catch it at—Stage 2, while the frame is still being built? Or Stage 4, when it already feels like a consensus and the conversation has moved on to your response?



Sources


How a Misinformation Attack Actually Works

In 2022, someone created a fake account on X/Twitter impersonating Eli Lilly and posted a single message: “We are excited to announce insulin is free now.”

Eli Lilly never said it. But on November 11, 2022, the company’s stock dropped 4.37%, erasing over $15 billion in market cap. One account, one sentence, caused real financial damage before a correction could land.

Don’t mistake that for a communications crisis. That is an attack. And that attack followed a pattern.

My previous articles in this series (Article 1 and Article 2) established that misinformation is a brand and institutional threat, not just a political problem, and introduced a framework for building a real response strategy. This piece goes one level deeper. Here is how a misinformation attack actually unfolds, stage by stage, so communications professionals can recognize one while it is still moving.

Misinformation attacks are not random.

They are not spontaneous bursts of online chaos that happen to catch a brand in the crossfire. They follow a sequence. A false narrative attaches to something real, gets compressed into a form people can repeat, floods the zone through coordinated accounts, creates the illusion of consensus, and then mutates when challenged. Every stage has a logic. Every stage has a signal. And the organizations that get hurt the worst are the ones that don’t recognize which stage they’re in until it’s too late.

Here is what each stage looks like in practice.

Stage 1: The Hook

Every misinformation attack needs an anchor because it almost never starts from nothing. It attaches to a real event, a genuine grievance, a breaking news moment, or an existing controversy, because real events carry attention and emotion that a fabricated claim alone cannot manufacture.

The false narrative doesn’t try to replace the real event. It reframes it by taking something that actually happened and bending the meaning.

The signal to watch for: the claim arrives fast and fits the moment too neatly. It is emotionally loaded, perfectly timed, and requires no context to feel credible. That fit is not an accident.


Stage 2: The Frame

Once the narrative has an anchor, it gets compressed. A screenshot. A short clip. A one-line accusation. A slogan. Something that travels without the context that would complicate it.

This is not an accident either. The compression is the point. A nuanced claim requires effort to process and share. A simple, emotionally charged one-liner does not. The goal at this stage is not to convince people of a complicated argument. It is to give them something they can repeat without thinking too hard about it.

The signal: the message is suspiciously clean. No sourcing. No qualifications. High outrage. Maximum shareability.


Stage 3: The Flood

Now the narrative seeds across accounts and communities simultaneously. Coordinated accounts, bots, influencer laundering, and ordinary users who have been nudged into sharing it. The goal is volume, not persuasion.

This is where the attack starts to feel real, because it suddenly appears to be everywhere. The same phrasing. The same image. The same claim, in comment threads and repost pages and quote posts, across accounts that have no obvious connection to each other.

The signal: identical or near-identical language appearing across unrelated accounts in a tight time window. Virality looks like one thing catching fire. Coordination looks like several fires starting at once.
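
To make that signal concrete, here is a minimal sketch of one way to look for it: group posts that share near-identical wording and check whether many distinct accounts produced them inside a tight window. The input shape is hypothetical and the text matching is deliberately crude; real systems use fuzzier similarity measures. It illustrates the signal, not a detector you would deploy.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy illustration of the Stage 3 signal: the same wording from many unrelated
# accounts inside a tight time window. `posts` is a hypothetical list of dicts
# with "account", "text", and ISO-format "timestamp" fields.
def coordinated_clusters(posts, window=timedelta(minutes=30), min_accounts=10):
    def normalize(text):
        # Crude normalization: lowercase and collapse whitespace.
        # Real systems use fuzzier matching (shingling, embeddings).
        return " ".join(text.lower().split())

    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post["text"])].append(post)

    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = sorted(datetime.fromisoformat(p["timestamp"]) for p in group)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window:
            flagged.append({"text": text,
                            "accounts": len(accounts),
                            "span_minutes": (times[-1] - times[0]).total_seconds() / 60})
    return flagged
```

The output is a list of phrasings that too many unrelated accounts used at nearly the same time, which is the "several fires starting at once" pattern described above.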


Stage 4: The Illusion

This is the stage where most brands lose.

By the time the claim has flooded enough accounts, people stop asking whether it is true and start assuming it must be. Repetition creates the feeling of consensus. A claim people have seen ten times feels more established than a claim they are hearing for the first time, regardless of whether either one is sourced.

This is not gullibility. It is how the human brain works. Research on the illusory truth effect shows that repeated exposure to a statement increases its perceived credibility, even when people have been told it is false. (Udry and Barber, “The Illusory Truth Effect: A Review of How Repetition Increases Belief in Misinformation,” Current Opinion in Psychology, Vol. 56, 2024.) The attack is not trying to convince people. It is trying to make the claim feel familiar enough that people stop questioning it.

The signal: the claim feels established before any credible source has confirmed it. It has the texture of common knowledge.


Stage 5: The Mutation

When a fact-check arrives, or the brand issues a correction, a well-constructed misinformation campaign does not collapse. It shifts.

It resurfaces with new wording. New screenshots. A slightly different version of the same allegation. Sometimes the story moves to a platform the correction hasn’t reached yet. Sometimes the claim gets laundered through a new account that wasn’t part of the original wave. The correction is always chasing a version of the story that no longer quite exists.

Barilla experienced this. After a false claim spread in Italy that the company’s pasta was contaminated with insects and that Barilla had pulled its products from shelves, the company issued a direct denial. The claim kept circulating. Barilla’s own press release confirmed no insect-flour products were ever produced or planned. There is no documented evidence that the campaign caused a measurable business decline, but the reputational disruption was real and required a direct public response. The denial didn’t kill the narrative. It just forced the narrative to find a new shape.

The signal: every time the brand addresses the claim, a slightly different version reappears. The correction never quite lands because it is always answering yesterday’s version of the attack.


Why It Works

These attacks succeed because they are engineered to exploit the way human beings actually process information, not because of a failure of intelligence or critical thinking.

Repetition increases perceived truth. Social proof overrides skepticism. Emotional threat activates a faster, less analytical response. When a claim feels familiar, urgent, and widely shared, the brain’s default is to treat it as credible. That is not a character flaw. It is a design vulnerability, and misinformation campaigns are built around it.

The implication for communications professionals is direct: the goal of the attack is rarely to make everyone believe one specific fake thing. The goal is to create enough confusion, enough noise, and enough doubt that people stop trusting the institution’s version of reality entirely. That is harder to correct than a single false claim. And it is what makes these attacks genuinely dangerous.


The Blind Spot Most Brands Have

Most communications monitoring is built to track sentiment and volume. By the time a false narrative registers as a sentiment problem on mainstream platforms, it has already moved through the seeding stage. The coordination happened somewhere else first, on fringe forums, in private groups, across smaller accounts, before it ever hit the feeds that brand monitoring tools are watching.

The practical failure is not response speed. It is watching the wrong signals. Eli Lilly could not have responded faster than the post spread. But a team watching for coordinated behavior rather than volume might have flagged the impersonation account before it moved markets.

This is what narrative intelligence, the second component of the framework from my article “You Don't Have a Misinformation Strategy. You Have a Crisis Plan,” is actually for. Not tracking what people are saying about your brand. Tracking how they are saying it, where it started, and whether the behavior looks coordinated or organic.


The Takeaway

Knowing this pattern does not stop every misinformation attack. But it stops the most dangerous part: surprise.

Once you can name the stage you are in, you can make a decision. Ignore it. Document it. Correct it. Escalate it. What you cannot do, once you know the pattern, is mistake a coordinated attack for organic criticism and respond to it the wrong way.

That distinction, between what is real and what is manufactured, is the foundation of everything this series has been building toward.

If an attack on your brand started tonight, which stage would you catch it at?


You Don't Have a Misinformation Strategy. You Have a Crisis Plan.

Your organization's misinformation plan is probably a response plan. It only activates after a false narrative has already spread through your audience, seeded doubt in your stakeholders, and done structural damage to your credibility. By the time most communications teams are formulating a response, the window to control the story has already closed.

Most organizations don't have a misinformation strategy. They have crisis communications, and they've convinced themselves that those are the same thing.

The distinction matters enormously right now, because the misinformation threat has evolved in ways that make the gap between having a real strategy and thinking you have one the difference between managing a false narrative and being destroyed by one.

The Plan Most Organizations Actually Have

The most common version I encounter goes something like this: "We monitor mentions. If something false spreads, we have a response plan."

That's crisis comms. Useful. Necessary. Not a misinformation strategy.

Some organizations are more sophisticated. They have a designated team, a media relations protocol, an escalation matrix that routes false claims through legal before anyone responds publicly, and a social listening tool with alerts set up for branded keywords.

Still not a misinformation strategy.

The core problem with all of these approaches is that they are fundamentally reactive. They are designed to activate after a false narrative has already spread. And in an environment where misinformation travels faster than corrections ever will, building your defense around reaction is building it to fail.

A 2018 study published in Science by researchers at MIT (the largest longitudinal study of its kind) found that false news reached 1,500 people about six times faster than the truth, and that falsehoods were 70% more likely to be shared. The researchers found the driver wasn't bots. It was humans, drawn to the novelty and emotional charge that false claims tend to carry.

By the time most communications teams are formulating a response, the false narrative has already done structural damage. The correction will reach a fraction of the audience that the original false claim did. The response plan assumes you still have the audience's ear. Often, you don't.

What Real Strategy Looks Like

A misinformation strategy is a proactive, infrastructure-level framework that does three things:

  1. Builds a narrative foundation before it's needed

  2. Detects threats before they become crises

  3. Reduces the surface area for false narratives to take hold

Narrative infrastructure. This is the most overlooked component, and the one that has become significantly more consequential in the age of AI.

Before any false claim can spread, an organization needs an established, credible narrative about who it is, what it stands for, and what its track record demonstrates. Not brand messaging but a documented, publicly visible record (through content, relationships, third-party validation, and community presence) that makes false claims harder to stick because your audiences have already been exposed to the truth.

The organizations that survive misinformation attacks best are rarely the ones with the fastest response. They're the ones whose audiences already have enough of a positive baseline that the false claim doesn't land cleanly.

AI has raised the stakes here considerably.

Large language models (ChatGPT, Claude, Gemini, Grok, Perplexity) are increasingly the first place people turn to research a company, institution, or public figure. These models draw from news, social media, reviews, forums, and owned content. If those signals are inconsistent or contaminated by false claims, the AI summary reflects that.

The content you don't publish is the vacuum a false narrative fills, not just in the news cycle, but in the training data.

Organizations investing in consistent, authoritative published content are building the very inputs AI draws from to describe them. That includes employee and corporate creator programs, where staff publish under their own names and add credibility that the organization's official voice can't replicate. Brands building owned newsrooms and content studios are taking this further, creating direct audience relationships that don't depend on placement or coverage. They're not just managing brand awareness. They're building a body of published truth that's harder to displace.

Narrative intelligence. Monitoring tells you what's being said. Narrative intelligence tells you why it's spreading, who's moving it, and what it's doing to your stakeholders' perception of you.

Most organizations with social listening tools are measuring reach and sentiment, which is useful for marketing but inadequate for misinformation defense. Narrative intelligence goes further: tracking false claims as they first emerge, mapping which communities are most exposed, identifying whether amplification is coordinated or organic, and assessing what a claim is actually doing to belief, not just how many people have seen it.

The context here is stark. The 2025 Edelman Trust Barometer found that 68% of people surveyed distrust business leaders (up 12 points from the year before) and 68% believe business leaders deliberately mislead them. Five of the 10 largest global economies, including the U.S. (47), the UK (43), and Germany (41), rank among the least trusting nations on Edelman's global Trust Index.

That is the information environment your communications team is operating in. When a false claim arrives, it lands on an audience already primed to believe the worst. Monitoring tells you the claim exists. Narrative intelligence tells you whether it's going to take hold, and what to do about it before it does.

Inoculation. This is the most underutilized tool available to organizations right now. Inoculation theory, developed in social psychology and advanced by Cambridge professor Sander van der Linden, holds that pre-emptively exposing audiences to a weakened form of a false argument makes them significantly more resistant to the full version when they encounter it. Van der Linden calls this "prebunking," and his research has been validated across multiple cultural and linguistic contexts.

For organizations, this means identifying the false narratives most likely to target you, then addressing them proactively—surfacing and defusing them before they spread, rather than denying them after they do.

Response protocols calibrated to threat level. Not every false claim requires the same response. Engaging with a fringe claim can amplify it. Ignoring a fast-moving false narrative can allow it to become the dominant story. A real misinformation strategy includes a decision framework for when to respond, how to respond, through which channels, and with what level of visibility—written down before there's a crisis, not improvised during one.


Where to Start

At the baseline, each of the four components (narrative infrastructure, narrative intelligence, inoculation, and response protocol) can be built at different levels of investment. Here's the entry point for each.

Narrative infrastructure: Start publishing. One op-ed, one LinkedIn article, one documented case study that establishes your track record in your own words. That's what AI models will draw from when someone asks about you, and it's what makes a false claim harder to land with an audience that already knows who you are.

Narrative intelligence: Google Alerts for your organization's name, your leaders' names, and the most predictable false claims in your sector is the floor, not the ceiling. It won't give you a full picture, but it will catch the early signal most organizations miss because nobody thought to look.

Inoculation: Read Van der Linden's prebunking research—the applied version for organizations is more accessible than it sounds. Then write one piece that addresses the false narrative most likely to be used against you. Before it's used against you.

Response protocol: A one-page decision framework. When a false claim appears: Does engaging amplify it? Who responds and through which channel? What threshold triggers escalation? Almost no organization has this written down. It takes an afternoon to build and is worth considerably more than that when you need it.
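
For teams that want a starting template, the same one-page framework can literally be written down as a small decision table. Everything below is a placeholder to adapt; the threat levels, owners, channels, and thresholds are illustrative assumptions, not recommendations.

```python
# A one-page response protocol expressed as data, so the decisions exist
# before the crisis does. Every value here is a placeholder to adapt.
RESPONSE_PROTOCOL = {
    "fringe": {        # low reach, no credible amplifiers
        "action": "document and monitor",
        "owner": "comms analyst",
        "channel": None,                     # engaging would amplify it
    },
    "spreading": {     # crossing into mainstream feeds or press inquiries
        "action": "prepare correction, brief spokesperson",
        "owner": "comms lead",
        "channel": "owned channels",
    },
    "critical": {      # coordination signals, executive or market exposure
        "action": "public correction plus legal review",
        "owner": "executive team",
        "channel": "owned channels and press",
    },
}

def escalation_level(reach, coordinated):
    # Placeholder thresholds; the point is that they are written down in advance.
    if coordinated or reach > 100_000:
        return "critical"
    if reach > 5_000:
        return "spreading"
    return "fringe"

print(RESPONSE_PROTOCOL[escalation_level(reach=12_000, coordinated=False)])
```

The specific numbers matter far less than the fact that they were decided in a calm room rather than improvised mid-crisis.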

The private sector is already operationalizing this. Some firms have developed enterprise-level products that combine real-time monitoring with prebunking campaigns built directly on Van der Linden's research. Other large communications firms are building comparable offerings. This is becoming a standard product category rather than a specialty.

Each of these components deserves its own article, and I'll be writing about each one. But the starting point is simpler than most organizations want to admit: decide that this is infrastructure, not a reaction, and act accordingly.

The Window Is the Strategy

The organizations that survive the misinformation era won't be the fastest to respond.

They'll be the ones who spent time they didn't feel like they needed—building a documented narrative before anyone challenged it, developing intelligence before threats peaked, and preparing their audiences for attacks that hadn't happened yet.

If your organization's misinformation plan only activates after a false narrative has spread, you don't have a misinformation strategy. You have a plan to manage the aftermath.

The question is not whether your organization will face a misinformation threat. For any brand, institution, or public figure operating at scale, the question is when. And when it comes, the window to act effectively won't open after the crisis. It opened months ago.


Wilsar Johnson writes about brand, narrative, and strategic communications. She works with leaders, organizations, and brands navigating complex communication environments.

Sources: Vosoughi, Roy & Aral, "The Spread of True and False News Online," Science, March 2018. 2025 Edelman Trust Barometer. Sander van der Linden, Cambridge Social Decision-Making Lab.


The World Already Knows Misinformation Is a Threat. Brands Haven’t Caught Up Yet.

In 2025, Pew Research surveyed adults across 25 countries and asked them to name the biggest threats facing their nations. The results were striking. A median of 72%—across nations as different as Germany, Poland, the United States, and South Korea—identified the spread of false information online as a major threat.

That number topped terrorism in seven of those countries. In Germany and Poland, it was the single greatest perceived threat, by a wide margin.

We tend to read findings like this as political problems, something for governments, regulators, or online platforms to sort out. And they should worry. But I want to draw your attention to a parallel crisis that’s unfolding right in front of our eyes and getting far less attention: misinformation and disinformation are increasingly being deployed as weapons against brands and public figures, and most organizations have no idea it’s happening until the damage is already done.

The Machinery Is Already Running

In late 2025, Taylor Swift released The Life of a Showgirl, which immediately became the fastest-selling album in history. And almost immediately, something strange happened online.

Posts began circulating accusing Swift of embedding Nazi symbolism in her merch, endorsing MAGA politics, and signaling trad-wife gender norms—all framed as a leftist critique. The supposed evidence was thin: a lightning-bolt necklace that vaguely resembled SS insignia, and a single word choice in one song. But the narrative spread.

Fans and commentators pushed back, but that pushback, it turns out, was exactly what the operation needed. Every rebuttal amplified the original claim, and the algorithm rewarded the engagement, which means that millions of people who never believed the accusation helped spread it anyway.

Behavioral intelligence firm GUDEA analyzed over 24,000 posts and 18,000 accounts across 14 platforms in the weeks following the album’s release. What they found should concern every brand strategist, communications director, and PR professional reading this:
Just 3.77% of accounts drove 28% of the entire conversation.

That’s not organic discourse. That’s infrastructure.

GUDEA also identified two distinct spikes of coordinated inauthentic activity—one in the album’s first week, and one after Swift’s lightning bolt merch dropped. During that second spike, roughly 40% of posts came from inauthentic accounts, and conspiracist content made up nearly 74% of total conversation volume during that window.

More alarming: the network pushing the Swift narrative had significant user overlap with a separate coordinated campaign targeting Blake Lively during her legal battle with Justin Baldoni. Same accounts, different targets, same playbook. This is what researchers called a “cross-event amplification network”—a reusable infrastructure for reputational attacks that can apparently be turned on and off, aimed at different targets, for reasons that aren’t always immediately obvious.

Before any Swift fans or haters come for my neck: she is the example, not the point.

When It's Not Just a Brand

The Taylor Swift case shows what coordinated disinformation looks like when it targets a brand. But what happens when the same machinery gets aimed at a real-world crisis?

On September 10, 2025, conservative activist Charlie Kirk was murdered at Utah Valley University. Within hours, the machinery was running.

False claims and conspiracy narratives began circulating almost immediately. AI systems, including Grok—with 6.2 million followers on X—generated and spread incorrect information about suspects and timelines that fed directly back into social media rumor cycles. A 77-year-old retired banker in Toronto named Michael Mallinson was falsely named as the shooter by Grok and had his name and face attached to posts accusing him of murder. He had no connection to Kirk, had never been to Utah, and yet found himself at the center of a coordinated misidentification campaign. “I felt violated,” he told CBC.

Meanwhile, analysts at the Center for Internet Security and the Institute for Strategic Dialogue documented something even more organized underneath the noise. Russian-backed groups, including a disinformation operation known as Operation Overload, manufactured fake news reports, fabricated celebrity quotes, and created false images designed to inflame both conservative and LGBTQ+ audiences. More than 46,000 posts on X discussing the shooting contained the word “trans,” the majority of which wrongly speculated the shooter was transgender. Roughly 26,000 posts expressed either concern about or desire for civil war.

The goal, as one former DHS official put it, was not just for people to consume the content, but to act on it.

And here’s the part that often gets lost in the coverage: the attacks didn’t stay contained to Kirk or the suspected shooter. Analysts documented how these operations targeted broader “enemy” categories—left-leaning groups, LGBTQ+ communities, and their organizational supporters. 

That means nonprofits. Philanthropy staff. Advocacy organizations. DEI practitioners. Campus orgs. People and institutions with no direct connection to the event who suddenly found themselves named in a coordinated information operation because they fit a narrative someone was trying to weaponize.

Online communities engaged in open-source “sleuthing” that reached into relatives’ Facebook pages and wider social graphs, a pattern that routinely pulls in uninvolved organizational staff and adjacent networks once a narrative catches fire.

If you work in or fund organizations that touch any politically sensitive space, this threat is not abstract. It is well-documented.

The Threat Brands Aren't Taking Seriously

Most organizations think about misinformation as background noise in a chaotic media environment. Something to monitor, maybe (and honestly, most brands and organizations don’t even do this unless there’s a crisis), or respond to if it gets bad enough.

What we’re now seeing—from Taylor Swift’s album rollout to the aftermath of political violence—is that misinformation can be aimed at you deliberately, efficiently, and at a scale that overwhelms your ability to course-correct once it’s in motion. A small number of coordinated accounts can seed a narrative in fringe spaces, watch it migrate to mainstream platforms, and let organic user behavior (outrage, defense, debate) carry it the rest of the way. By the time your social listening tool flags it (again, if that’s even set up), the discourse has already been shaped.

And with the rise of AI, the cost of running these operations is only going down. It’s now cheaper and faster to generate inauthentic accounts, produce content at volume, and personalize attacks to feel more credible. The Kirk case showed us AI chatbots with millions of followers amplifying false information in real time, with no correction or accountability.

The 3.77% problem is going to become the 10% problem. Then the 20% problem.

We talk a lot in brand and communications circles about narrative—about owning your story and being proactive rather than reactive. But most of that thinking was built for a world that no longer exists, where narratives emerged organically through traditional media coverage, customer sentiment, and employee behavior. The playbook assumed a relatively level playing field.

That playing field no longer exists.

A New Standard for Communicators

My aim is not to paralyze you; it’s the opposite. I’m arguing that communicators need to evolve to a new level of strategic sophistication. A few things worth sitting with:

  • Your narrative infrastructure matters more than ever. A brand with a strong, clear, and consistently communicated identity is harder to destabilize. When people already know who you are and what you stand for, a coordinated smear has less surface to grip. Narrative clarity used to be primarily a brand virtue. Now it’s also a defensive asset.
  • Monitoring is not the same as intelligence. Most brands track mentions and sentiment, and that’s table stakes now. The more urgent capability is understanding who is driving a conversation and how it’s structured—distinguishing organic discourse from coordinated amplification before you respond to something that was engineered to make you respond.
  • Engagement with bad-faith narratives often backfires. The Swift case illustrates this painfully. When authentic users rushed to defend her, they spread the original claim further and fed the algorithm. Communicators need to develop the skill and discipline to know when not to engage—and hold that line even when leadership is pushing you to respond.
  • Nonprofits and philanthropy are not exempt. The Kirk case should put every advocacy organization, foundation, and socially positioned brand on notice. You don’t have to be the target of an operation for it to reach you. You just have to be in the orbit of a narrative someone is trying to move.
  • The AI dimension changes the urgency. The organizations consistently creating factual, thoughtful content and building the capability to detect and respond to coordinated information attacks now will have a significant advantage over the ones who wait until they’re in the middle of one.

The World Has Named the Problem

A median of 72% of adults across 25 countries call the spread of false information online a major threat to their country. That kind of global consensus tells us that something has fundamentally shifted in our information environment.

Brands operate inside that environment. Their reputations live and die inside that environment. And the organizations that treat this as someone else’s problem—a political issue, a platform issue, a regulatory issue—are leaving themselves dangerously exposed.

The machinery is already running. The only question is whether your strategy has caught up to the world it’s actually operating in.


Wilsar Johnson writes about brand, narrative, and strategic communications. She works with leaders, organizations, and brands navigating complex communication environments.