You Don't Have a Misinformation Strategy. You Have a Crisis Plan.

Your organization's misinformation plan is probably a response plan. It only activates after a false narrative has already spread through your audience, seeded doubt in your stakeholders, and done structural damage to your credibility. By the time most communications teams are formulating a response, the window to control the story has already closed.

Most organizations don't have a misinformation strategy. They have crisis communications, and they've convinced themselves that those are the same thing.

The distinction matters enormously right now. The misinformation threat has evolved, and the gap between having a real strategy and thinking you have one has become the difference between managing a false narrative and being destroyed by one.

The Plan Most Organizations Actually Have

The most common version I encounter goes something like this: "We monitor mentions. If something false spreads, we have a response plan."

That's crisis comms. Useful. Necessary. Not a misinformation strategy.

Some organizations are more sophisticated. They have a designated team, a media relations protocol, an escalation matrix that routes false claims through legal before anyone responds publicly, and a social listening tool with alerts set up for branded keywords.

Still not a misinformation strategy.

The core problem with all of these approaches is that they are fundamentally reactive. They are designed to activate after a false narrative has already spread. And in an environment where misinformation travels faster than corrections ever will, building your defense around reaction is building it to fail.

A 2018 study published in Science by researchers at MIT (the largest study of its kind at the time) found that false news reached 1,500 people about six times faster than the truth, and that falsehoods were 70% more likely to be shared. The researchers found the driver wasn't bots. It was humans, drawn to the novelty and emotional charge that false claims tend to carry.

By the time a correction is drafted, approved, and published, it will reach a fraction of the audience that the original false claim did. The response plan assumes you still have the audience's ear. Often, you don't.

What Real Strategy Looks Like

A misinformation strategy is a proactive, infrastructure-level framework that does three things:

  1. Builds a narrative foundation before it's needed

  2. Detects threats before they become crises

  3. Reduces the surface area for false narratives to take hold

Narrative infrastructure. This is the most overlooked component, and the one that has become significantly more consequential in the age of AI.

Before any false claim can spread, an organization needs an established, credible narrative about who it is, what it stands for, and what its track record demonstrates. Not brand messaging, but a documented, publicly visible record (through content, relationships, third-party validation, and community presence) that makes it harder for false claims to stick, because your audiences have already been exposed to the truth.

The organizations that survive misinformation attacks best are rarely the ones with the fastest response. They're the ones whose audiences already have enough of a positive baseline that the false claim doesn't land cleanly.

AI has raised the stakes here considerably.

Large language models (ChatGPT, Claude, Gemini, Grok, Perplexity) are increasingly the first place people turn to research a company, institution, or public figure. These models draw from news, social media, reviews, forums, and owned content. If those signals are inconsistent or contaminated by false claims, the AI summary reflects that.

The content you don't publish leaves a vacuum for a false narrative to fill, not just in the news cycle but in the training data.

Organizations investing in consistent, authoritative published content are building the very inputs AI draws from to describe them. That includes employee and corporate creator programs, where staff publish under their own names and add credibility that the organization's official voice can't replicate. Brands building owned newsrooms and content studios are taking this further, creating direct audience relationships that don't depend on placement or coverage. They're not just managing brand awareness. They're building a body of published truth that's harder to displace.

Narrative intelligence. Monitoring tells you what's being said. Narrative intelligence tells you why it's spreading, who's moving it, and what it's doing to your stakeholders' perception of you.

Most organizations with social listening tools are measuring reach and sentiment, which is useful for marketing but inadequate for misinformation defense. Narrative intelligence goes further: tracking false claims as they first emerge, mapping which communities are most exposed, identifying whether amplification is coordinated or organic, and assessing what a claim is actually doing to belief, not just how many people have seen it.

The context here is stark. The 2025 Edelman Trust Barometer found that 68% of people surveyed believe business leaders deliberately mislead them, up 12 points from the year before. Five of the 10 largest global economies, including the U.S. (47), the UK (43), and Germany (41), rank among the least trusting nations on Edelman's global Trust Index.

That is the information environment your communications team is operating in. When a false claim arrives, it lands on an audience already primed to believe the worst. Monitoring tells you the claim exists. Narrative intelligence tells you whether it's going to take hold, and what to do about it before it does.

Inoculation. This is the most underutilized tool available to organizations right now. Inoculation theory, developed in social psychology by William McGuire in the 1960s and advanced by Cambridge professor Sander van der Linden, holds that pre-emptively exposing audiences to a weakened form of a false argument makes them significantly more resistant to the full version when they encounter it. Van der Linden calls this "prebunking," and his research has been validated across multiple cultural and linguistic contexts.

For organizations, this means identifying the false narratives most likely to target you, then addressing them proactively—surfacing and defusing them before they spread, rather than denying them after they do.

Response protocols calibrated to threat level. Not every false claim requires the same response. Engaging with a fringe claim can amplify it. Ignoring a fast-moving false narrative can allow it to become the dominant story. A real misinformation strategy includes a decision framework for when to respond, how to respond, through which channels, and with what level of visibility—written down before there's a crisis, not improvised during one.


Where to Start

At the baseline, each of the four components (narrative infrastructure, narrative intelligence, inoculation, and response protocol) can be built at different levels of investment. Here's the entry point for each.

Narrative infrastructure: Start publishing. One op-ed, one LinkedIn article, one documented case study that establishes your track record in your own words. That's what AI models will draw from when someone asks about you, and it's what makes a false claim harder to land with an audience that already knows who you are.

Narrative intelligence: Setting up Google Alerts for your organization's name, your leaders' names, and the most predictable false claims in your sector is the floor, not the ceiling. It won't give you a full picture, but it will catch the early signal most organizations miss because nobody thought to look.

Inoculation: Read Van der Linden's prebunking research—the applied version for organizations is more accessible than it sounds. Then write one piece that addresses the false narrative most likely to be used against you. Before it's used against you.

Response protocol: A one-page decision framework. When a false claim appears: Does engaging amplify it? Who responds and through which channel? What threshold triggers escalation? Almost no organization has this written down. It takes an afternoon to build and is worth considerably more than that when you need it.

The private sector is already operationalizing this. Some firms have developed enterprise-level products that combine real-time monitoring with prebunking campaigns built directly on Van der Linden's research. Other large communications firms are building comparable offerings. This is becoming a standard product category rather than a specialty.

Each of these components deserves its own article, and I'll be writing about each one. But the starting point is simpler than most organizations want to admit: decide that this is infrastructure, not a reaction, and act accordingly.

The Window Is the Strategy

The organizations that survive the misinformation era won't be the fastest to respond.

They'll be the ones who did the work before it felt necessary: building a documented narrative before anyone challenged it, developing intelligence before threats materialized, and preparing their audiences for attacks that hadn't happened yet.

If your organization's misinformation plan only activates after a false narrative has spread, you don't have a misinformation strategy. You have a plan to manage the aftermath.

The question is not whether your organization will face a misinformation threat. For any brand, institution, or public figure operating at scale, the question is when. And when it comes, the window to act effectively won't open after the crisis. It opened months ago.


Wilsar Johnson writes about brand, narrative, and strategic communications. She works with leaders, organizations, and brands navigating complex communication environments.

Sources: Vosoughi, Roy & Aral, "The Spread of True and False News Online," Science, March 2018. 2025 Edelman Trust Barometer. Sander van der Linden, Cambridge Social Decision-Making Lab.