The BuzzFeed Era Is Over. Most Brands Are Still Speaking Its Language.

On April 20, The Onion announced it had reached a new deal to take over Infowars, the conspiracy machine Alex Jones built. The arrangement (a licensing agreement still pending court approval) sends revenue to the Sandy Hook families Jones spent years tormenting. Satire absorbing one of the worst media brands of the last decade, with accountability written into the architecture. As a BuzzFeed millennial who came up watching one media era define the social internet, I read this news and felt a handoff happening in real time.

BuzzFeed wasn't just a content site. It was the brand that understood the emotional grammar of the social-sharing internet better than anyone else. The listicles, the quizzes, the format-as-feeling: they all worked because BuzzFeed grasped a specific insight about that era. If you could capture what people already felt and give it a shareable container, you could capture attention at scale. That insight shaped how every other brand learned to talk online, including political brands. It taught a generation of communicators that authenticity meant relatability, and that relatability meant looking like the audience.

The political expression of that era was Obama, especially during his second term. Between Two Ferns. The BuzzFeed healthcare.gov push. The "Thanks Obama" meme fluency. The Pete Souza visual language. All of it was calibrated to the same emotional grammar BuzzFeed was running. He was the first president to use social media effectively, which made him fluent in the cultural language of his moment, and that fluency made him feel like a peer in a way no president before him had. The political brands failing in 2026 are still trying to run that playbook a decade late.

That era is over. Audiences got tired of two things at once.

They got tired of performative sincerity (the BuzzFeed lineage of warm relatability and shareable values) and of performative outrage (the Infowars lineage of enemy-naming and engagement bait). Both modes stopped working because both stopped feeling honest. What replaced them is harder to name, so I'll name it once: ironic sincerity.

The audience this term describes wants distance without detachment and fluency without naïveté. They want to be in on something real, and they have zero tolerance for condescension.

The Onion is built for ironic sincerity. The voice is deadpan, observational, and assumes its audience already understands the absurdity of the moment. The jokes don't need explaining because the reader is already there. The brand works as a mirror, not a megaphone. That fluency is exactly why the Infowars takeover lands as a cultural moment instead of a mere press release. What's happening here goes beyond satire mocking conspiracy theories. The Onion absorbed Infowars' infrastructure and rebuilt it with accountability inside the architecture. The deal pays the families. The brand decision and the moral decision happen at the same time, and the audience reads them simultaneously. That's what cultural relevance looks like in 2026.

Most candidates, electeds, and institutional clients are still operating in one of two modes. They're either running earnest BuzzFeed cadence (the relatability post, the values carousel, the "we hear you" graphic) or aggressive outrage cadence (the threat post, the dunk, the look-at-the-enemy reel). Both modes target an audience that no longer exists. The audience you're actually talking to in 2026 is irony-fluent, exhausted by performance, and significantly faster than your content team. They can smell the strategy meeting behind every post. They can clock the consultant. They can feel the difference between a brand that gets the moment and a brand still working from a 2016/2017 playbook.

What follows from this is a strategic problem, not a stylistic one. So much political and institutional content falls flat right now, even when the underlying message is sound, because the cadence doesn't match the audience.

A few principles for brands and public figures, none of them complicated. Tone matters more than volume; the brands cutting through right now are doing it through sharpness. Cultural literacy gets demonstrated through the work, not announced through a campaign. Wit and honesty both require restraint, and most political content has lost both. Recognition matters more than scale: BuzzFeed wanted reach, The Onion wants to be understood, and the clients you advise should want the second thing more than the first.

The discipline underneath all of this has a name. I've called it restrained authenticity in earlier writing, and it's the brand-side execution of what ironic sincerity asks for culturally: showing up real without making realness the performance, knowing what to share, knowing what to withhold, and trusting the audience to read the difference. It's a framework I'll unpack on its own. For now, it's enough to point at it, because it's the discipline almost no political brand operating in 2026 has built yet.

The era I came up in is over. The internet I'm now strategizing through speaks a different language, and most comms professionals haven't learned it yet. The brands that adapt will be the ones who demonstrate they're already living in the culture, not the ones who keep talking about it. The question for everyone advising public figures, candidates, and institutions in 2026 is which camp your client is in. And whether you have the nerve to tell them the truth.




Chappell Roan Was Hit With the Same Bot Infrastructure as Taylor Swift

On March 21, 2026, Brazilian soccer star Jorginho Frello posted a detailed account on Instagram. His 11-year-old stepdaughter had walked past Chappell Roan's table at a hotel in São Paulo during Roan's Lollapalooza Brazil appearance. A security guard, he claimed, berated the girl and her mother. The post spread fast. Roan responded the next day, saying she was unaware of the incident and her personal security hadn't been involved. 

Then the analysis came in.

A behavioral intelligence firm called GUDEA—which I introduced in my first article in the context of Taylor Swift—tracked 100,030 posts from 54,334 unique users across seven platforms over the 72-hour window of March 20–22. They found that 4.2% of accounts in the conversation were likely bots, and those accounts produced 23% of all posts.

For context, in the Taylor Swift campaign GUDEA previously analyzed, 3.77% of accounts drove 28% of posts. Same firm, same methodology, different target, nearly identical signature.
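One way to read these numbers is as an amplification ratio: the flagged cluster's share of posts divided by its share of accounts, i.e., how far above its proportional weight each cluster punched. The figures below are from the GUDEA analyses cited above; the ratio framing is my own back-of-envelope illustration, not part of the firm's methodology.

```python
# Back-of-envelope amplification ratios for the two documented campaigns.
# Figures are from the GUDEA analyses cited above; the ratio framing is
# illustrative, not part of the firm's published methodology.

campaigns = {
    "Taylor Swift":  {"account_share": 3.77, "post_share": 28.0},
    "Chappell Roan": {"account_share": 4.2,  "post_share": 23.0},
}

for name, c in campaigns.items():
    # Post share over account share: how many times more output per
    # flagged account than per average account in the conversation.
    ratio = c["post_share"] / c["account_share"]
    print(f"{name}: flagged cluster produced {ratio:.1f}x its proportional share")
```

In both cases, a cluster under five percent of accounts produced roughly five to seven times its proportional share of the conversation: same signature, different target.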


What It Actually Means

BuzzFeed first reported the findings. The behavioral signals flagged included posting bursts, repetitive phrasing, synchronized timing, and fictional or satirical posts circulating as fact. Out.com reported that the discourse stayed trending for over a week despite Roan's direct response—a detail worth sitting with.

Here's the number that reframes all of this. A 2025 study in Scientific Reports analyzed social media chatter across roughly 200 million users and found that about 20% of posts about global events come from bots at any given time.

4.2% is well below that.

The Chappell Roan campaign didn't require an unusual number of bots. It required a coordinated cluster—small enough to stay beneath the noise threshold, organized enough to generate nearly a quarter of all posts on a major public controversy. Standard brand monitoring is built to flag volume. This operation didn't depend on bot volume. It seeded the conversation early, then let the algorithm and real human outrage carry it from there.

That's the uncomfortable part for communications professionals: by the time a monitoring tool flags something as a sentiment problem, the coordination has already happened somewhere else.


You've Seen This Before. I Named It.

In How a Misinformation Attack Actually Works, I laid out five stages these attacks follow (Hook, Frame, Flood, Illusion, and Mutation) and applied the framework to Eli Lilly and Barilla. The Chappell Roan situation runs the same sequence.

The Hook. Jorginho's March 21 Instagram post is the anchor—a real incident documented by someone with a platform and genuine emotional stakes. These attacks almost never start from nothing. A real grievance does more work than a fabricated one, and it's harder to dismiss.

The Frame. What the incident became, once compressed: Chappell Roan's security traumatized an 11-year-old child. No sourcing, maximum emotional charge, easy to pass along without thinking too hard about it. The compression at this stage is the mechanism, not a side effect.

The Flood. 4.2% of accounts. 23% of posts. Same phrasing, synchronized timing, seeded across platforms while the story was still early. Most monitoring tools are calibrated to catch volume spikes. What they're not built to detect is the behavioral synchronization happening underneath, because what looks like a wildfire is actually several fires starting at once.

The Illusion. Roan denied the story on March 22. The controversy kept trending for another week. This is the illusory truth effect working as documented: repeated exposure to a claim raises its perceived credibility even for people who've already heard it's false. The correction reached people. It just didn't land the way the original claim did.

The Mutation. Satirical and fictional posts entered the conversation and circulated as fact. By the time any response arrived, the narrative had already shifted, and each denial was answering a version of the story that no longer quite existed.


Why This Matters Beyond Chappell Roan

The infrastructure that ran this campaign has no particular interest in pop stars. What it needs is a real incident, a compressible frame, and the conditions for coordinated amplification. Any brand, institution, or public figure operating at scale has all three available to whoever wants to use them. AI has raised the stakes on this considerably — ChatGPT, Claude, Gemini, and Perplexity are increasingly the first place people go to research a company or public figure, and those models draw from whatever signals are already in circulation.

There's also something worth noting about the fan response. As I wrote in my first article, fans and commentators who pushed back against the Chappell Roan narrative were doing exactly what the operation needed. Every rebuttal amplified the original claim. The algorithm rewarded the engagement. Millions of people who never believed the accusation helped spread it anyway. That's not a failure of good intentions—it's how these systems are designed to work.

Communications professionals watching this happen to Chappell Roan were watching their own potential crisis from the outside. The mechanics are the same. The difference is only the target.


The Pattern Is Documented. The Question Is Whether You're Ready.

In my first article, I argued that misinformation is a brand and institutional threat, not just a political problem. The second made the case that most organizations don't have a misinformation strategy; they have a crisis plan, and those aren't the same thing. The third named the five stages of how these attacks actually unfold.

GUDEA has now documented this pattern twice, against two different targets, with nearly identical infrastructure signatures. 

  • Taylor Swift: 3.77% of accounts, 28% of posts. 

  • Chappell Roan: 4.2% of accounts, 23% of posts. 

The playbook runs wherever the conditions exist, regardless of who the target is.

If a coordinated campaign started against your organization tonight, what stage would you catch it at—Stage 2, while the frame is still being built? Or Stage 4, when it already feels like a consensus and the conversation has moved on to your response?





I Felt Myself Abandoning Myself

A few months ago, I left my job.

I’ve been sitting with how to say that publicly for a while. Not because I’m ashamed of it (the opposite, actually), but because I wanted to say it from the right place. Not from the raw, exhausted place I was in when I made the decision. From here, with some sleep and some perspective.

So here we are.

I was at an agency for almost five years. It's a great place to work. It will push you, challenge you in ways that are uncomfortable and, looking back, necessary. You will make work you're proud of, things you didn't think you were capable of producing. The clients are engaging. The colleagues are sharp (some of the best digital people in the game). The issues I worked on matter, and I'm genuinely grateful for all of it, especially the friendships.

But like many folks working in comms, especially in digital comms, I was exhausted in a way I hadn't fully admitted to myself until I was already gone. Not the kind of tired a good night's sleep fixes. The kind that accumulates over years, where you're at your desk late, and you can't remember if you ate today (the answer is probably no), and you file that away and keep working. Where your body is technically present, but the actual you, the curious one, the creative one, the one who likes to paint and dance and find the light in things, has quietly left the building.

I felt myself abandoning myself, again. That’s the only way I know how to say it.

The last time I felt this was in 2020, after a decade working as a congressional staffer. I was exhausted and burnt out.

My body tried to tell me before I was ready to listen.

July 1st, last year. I was rushing out of a metro station in DC, having just gotten off a train from one meeting and heading to the office for another, when I fell. Fractured my foot. On crutches for the rest of the summer.

And I still didn’t slow down. I kept working the same hours because that’s what survival mode does: it teaches you to override the signals, to tell yourself you don’t have time to receive the message. So you don’t.

It took several more months before I finally listened.

The first real moment of reckoning came on a Tuesday, a few days after I left. I was doing something completely mundane, brushing my teeth or maybe in the shower, and I felt it. That familiar tightening in my chest, the feeling of oh wait, there’s something I need to be doing. An email. A report. Something. Anything.

Except there wasn’t. There was nothing for me to rush and do.

My body hadn’t gotten the memo yet, though. It was still running the old program, anticipating a flood of work that no longer existed. I just stood there and let myself feel it, this phantom urgency, and told myself: you don’t have to go anywhere. There is nothing. You’re okay. Breathe.

It took a few days for that to actually feel real.

What came after was quieter than I expected.

I cleaned my closet, something I’d been meaning to do for months. I cooked breakfast slowly, with nowhere to be. Washed dishes. Did laundry. Took care of my plants. Went to the gym with my husband on a Saturday morning and remembered what it felt like to move my body because I wanted to, not because I was squeezing it in between things.

I know that sounds small. But it felt enormous, because it was the first time in a long time that I felt like an active participant in my own life. Not just someone moving through it, counting down the hours to go back to sleep and start again.

I was living. Not just existing.

I’ve spent many years doing what I call self-study. Journaling. Getting curious about what makes me come alive and what drains me. Learning how I respond to things, what I need, and where my limits are. Not navel-gazing, genuine curiosity. Who am I, really? What do I want? What does my gut say when I get quiet enough to hear it?

My gut has never lied to me. When I’ve listened to it, things have worked out. When I haven’t, when I’ve overridden it for someone else’s timeline or expectations, I’ve suffered for it every time.

Leaving was my gut. It wasn't impulsive (although it seemed that way to some people), which is funny because I'm rarely an impulsive person. It was the clearest, most deliberate decision I'd made in years. And the moment I made it, I felt something I hadn't felt in a long time: in control, intentional, like myself, and free.

I’m not sharing this as a cautionary tale or a “quit your job” post. That’s not what this is.

This is about self-knowledge, about what happens when work consumes so much of you that you lose track of who you are outside of it, and what it actually takes to come back.

I’m a few months into what I’ve been calling a mini sabbatical. Taking dance classes. Going back to the gym. Cooking real meals. Visiting museums and art galleries. Reconnecting with my meditation practice. Reading. Writing. Creating. Spending actual time with my husband, my family, and my friends. These aren’t rewards I’m giving myself. They’re requirements, the things that make me a full person.

I’m an artist and photographer, and one of the things I’ve been working on for a long time now (outside of writing my blog and recording YouTube videos) is a personal project I call Find the Light. It started literally, me finding and capturing light in whatever space I was in. But it became something else, a way of seeing, a practice of looking for what’s still luminous even when things feel dim. That’s really what this sabbatical has been: finding the light again, after a season of being pretty deep in the fog.

I’ll be sharing more of this, the reflections, the lessons, the photos, as part of an ongoing conversation here on the blog. Not because I have it all figured out. Because I know I’m not the only one who has felt this way. And the human experience, in all its mess and contradiction, deserves to be spoken out loud so others can feel less afraid of sharing their story and not suffer in silence.

The mini sabbatical isn’t the story but the setting and the catalyst for finally adding to the conversation.

The actual story is about knowing who you are and choosing to come back to that person, no matter what it costs.

A few weeks on from writing this, things are still unfolding in ways I didn't plan for and couldn't have predicted. That's the part nobody tells you about coming back to yourself—it doesn't stop at recovery. It starts opening doors.



How a Misinformation Attack Actually Works

In 2022, someone created a fake account on X/Twitter impersonating Eli Lilly and posted a single message: “We are excited to announce insulin is free now.” The post was fake.

Eli Lilly never said it. But on November 11, 2022, the company’s stock dropped 4.37%, erasing over $15 billion in market cap. One account, one sentence, caused real financial damage before a correction could land.

Don’t mistake that for a communications crisis. That is an attack. And that attack followed a pattern.

My previous articles in this series (Article 1 and Article 2) established that misinformation is a brand and institutional threat, not just a political problem, and introduced a framework for building a real response strategy. This piece goes one level deeper. Here is how a misinformation attack actually unfolds, stage by stage, so communications professionals can recognize one while it is still moving.

Misinformation attacks are not random.

They are not spontaneous bursts of online chaos that happen to catch a brand in the crossfire. They follow a sequence. A false narrative attaches to something real, gets compressed into a form people can repeat, floods the zone through coordinated accounts, creates the illusion of consensus, and then mutates when challenged. Every stage has a logic. Every stage has a signal. And the organizations that get hurt the worst are the ones that don’t recognize which stage they’re in until it’s too late.

Here is what each stage looks like in practice.

Stage 1: The Hook

Every misinformation attack needs an anchor because it almost never starts from nothing. It attaches to a real event, a genuine grievance, a breaking news moment, or an existing controversy, because real events carry attention and emotion that a fabricated claim alone cannot manufacture.

The false narrative doesn’t try to replace the real event. It reframes it by taking something that actually happened and bending the meaning.

The signal to watch for: the claim arrived fast and fits the moment too neatly. It is emotionally loaded, perfectly timed, and requires no context to feel credible. That fit is not an accident.


Stage 2: The Frame

Once the narrative has an anchor, it gets compressed. A screenshot. A short clip. A one-line accusation. A slogan. Something that travels without the context that would complicate it.

This is not an accident either. The compression is the point. A nuanced claim requires effort to process and share. A simple, emotionally charged one-liner does not. The goal at this stage is not to convince people of a complicated argument. It is to give them something they can repeat without thinking too hard about it.

The signal: the message is suspiciously clean. No sourcing. No qualifications. High outrage. Maximum shareability.


Stage 3: The Flood

Now the narrative seeds across accounts and communities simultaneously. Coordinated accounts, bots, influencer laundering, and ordinary users who have been nudged into sharing it. The goal is volume, not persuasion.

This is where the attack starts to feel real, because it suddenly appears to be everywhere. The same phrasing. The same image. The same claim, in comment threads and repost pages and quote posts, across accounts that have no obvious connection to each other.

The signal: identical or near-identical language appearing across unrelated accounts in a tight time window. Virality looks like one thing catching fire. Coordination looks like several fires starting at once.
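Operationalizing that signal is less exotic than it sounds: normalize post text so trivial variations collapse together, group near-identical posts, and flag any group that spans several distinct accounts inside a tight window. Here is a minimal sketch; the post structure, sample data, and thresholds are hypothetical, not drawn from any particular monitoring tool.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Each post: (account, timestamp, text). Sample data and thresholds are
# illustrative, not from any specific monitoring product.
posts = [
    ("acct_a", datetime(2026, 3, 21, 14, 0), "Her security traumatized a child!"),
    ("acct_b", datetime(2026, 3, 21, 14, 2), "her SECURITY traumatized a child"),
    ("acct_c", datetime(2026, 3, 21, 14, 3), "Her security traumatized a child."),
    ("acct_d", datetime(2026, 3, 21, 18, 0), "I hope the family is okay"),
]

def normalize(text):
    # Strip case and punctuation so trivial variations collapse together.
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def coordinated_clusters(posts, min_accounts=3, window=timedelta(minutes=10)):
    groups = defaultdict(list)
    for account, ts, text in posts:
        groups[normalize(text)].append((ts, account))
    flagged = []
    for text, hits in groups.items():
        hits.sort()
        accounts = {a for _, a in hits}
        span = hits[-1][0] - hits[0][0]
        # Same phrasing, several distinct accounts, tight time window:
        # the "several fires starting at once" signature.
        if len(accounts) >= min_accounts and span <= window:
            flagged.append((text, sorted(accounts)))
    return flagged

print(coordinated_clusters(posts))
# Flags the three near-identical posts; the organic fourth post is ignored.
```

A real system would use fuzzier text matching and far larger windows of data, but the shape of the check (behavioral synchronization rather than raw volume) is the point.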


Stage 4: The Illusion

This is the stage where most brands lose.

By the time the claim has flooded enough accounts, people stop asking whether it is true and start assuming it must be. Repetition creates the feeling of consensus. A claim people have seen ten times feels more established than a claim they are hearing for the first time, regardless of whether either one is sourced.

This is not gullibility. It is how the human brain works. Research on the illusory truth effect shows that repeated exposure to a statement increases its perceived credibility, even when people have been told it is false. (Udry and Barber, “The Illusory Truth Effect: A Review of How Repetition Increases Belief in Misinformation,” Current Opinion in Psychology, Vol. 56, 2024.) The attack is not trying to convince people. It is trying to make the claim feel familiar enough that people stop questioning it.

The signal: the claim feels established before any credible source has confirmed it. It has the texture of common knowledge.


Stage 5: The Mutation

When a fact-check arrives, or the brand issues a correction, a well-constructed misinformation campaign does not collapse. It shifts.

New wording. New screenshots. A slightly different version of the same allegation. Sometimes the story moves to a different platform where the correction hasn’t reached. Sometimes the claim gets laundered through a new account that wasn’t part of the original wave. The correction is always chasing a version of the story that no longer quite exists.

Barilla experienced this. After a false claim spread in Italy that the company’s pasta was contaminated with insects and that Barilla had pulled its products from shelves, the company issued a direct denial. The claim kept circulating. Barilla’s own press release confirmed no insect-flour products were ever produced or planned. There is no documented evidence that the campaign caused a measurable business decline, but the reputational disruption was real and required a direct public response. The denial didn’t kill the narrative. It just forced the narrative to find a new shape.

The signal: every time the brand addresses the claim, a slightly different version reappears. The correction never quite lands because it is always answering yesterday’s version of the attack.


Why It Works

These attacks succeed because they are engineered to exploit the way human beings actually process information, not because of a failure of intelligence or critical thinking.

Repetition increases perceived truth. Social proof overrides skepticism. Emotional threat activates a faster, less analytical response. When a claim feels familiar, urgent, and widely shared, the brain’s default is to treat it as credible. That is not a character flaw. It is a design vulnerability, and misinformation campaigns are built around it.

The implication for communications professionals is direct: the goal of the attack is rarely to make everyone believe one specific fake thing. The goal is to create enough confusion, enough noise, and enough doubt that people stop trusting the institution’s version of reality entirely. That is harder to correct than a single false claim. And it is what makes these attacks genuinely dangerous.


The Blind Spot Most Brands Have

Most communications monitoring is built to track sentiment and volume. By the time a false narrative registers as a sentiment problem on mainstream platforms, it has already moved through the seeding stage. The coordination happened somewhere else first, on fringe forums, in private groups, across smaller accounts, before it ever hit the feeds that brand monitoring tools are watching.

The practical failure is not response speed. It is watching the wrong signals. Eli Lilly could not have responded faster than the post spread. But a team watching for coordinated behavior rather than volume might have flagged the impersonation account before it moved markets.

This is what narrative intelligence, the second component of the framework from my article, You Don't Have a Misinformation Strategy. You Have a Crisis Plan, is actually for. Not tracking what people are saying about your brand. Tracking how they are saying it, where it started, and whether the behavior looks coordinated or organic.


The Takeaway

Knowing this pattern does not stop every misinformation attack. But it stops the most dangerous part: surprise.

Once you can name the stage you are in, you can make a decision. Ignore it. Document it. Correct it. Escalate it. What you cannot do, once you know the pattern, is mistake a coordinated attack for organic criticism and respond to it the wrong way.

That distinction, between what is real and what is manufactured, is the foundation of everything this series has been building toward.

If an attack on your brand started tonight, which stage would you catch it at?


You Don't Have a Misinformation Strategy. You Have a Crisis Plan.

Your organization's misinformation plan is probably a response plan. It only activates after a false narrative has already spread through your audience, seeded doubt in your stakeholders, and done structural damage to your credibility. By the time most communications teams are formulating a response, the window to control the story has already closed.

Most organizations don't have a misinformation strategy. They have crisis communications, and they've convinced themselves that those are the same thing.

The distinction matters enormously right now, because the misinformation threat has evolved in ways that make the gap between having a real strategy and thinking you have one the difference between managing a false narrative and being destroyed by one.

The Plan Most Organizations Actually Have

The most common version I encounter goes something like this: "We monitor mentions. If something false spreads, we have a response plan."

That's crisis comms. Useful. Necessary. Not a misinformation strategy.

Some organizations are more sophisticated. They have a designated team, a media relations protocol, an escalation matrix that routes false claims through legal before anyone responds publicly, and a social listening tool with alerts set up for branded keywords.

Still not a misinformation strategy.

The core problem with all of these approaches is that they are fundamentally reactive. They are designed to activate after a false narrative has already spread. And in an environment where misinformation travels faster than corrections ever will, building your defense around reaction is building it to fail.

A 2018 study published in Science by researchers at MIT (the largest longitudinal study of its kind) found that false news reached 1,500 people about six times faster than the truth, and that falsehoods were 70% more likely to be shared. The researchers found the driver wasn't bots. It was humans, drawn to the novelty and emotional charge that false claims tend to carry.

By the time most communications teams are formulating a response, the false narrative has already done structural damage. The correction will reach a fraction of the audience that the original false claim did. The response plan assumes you still have the audience's ear. Often, you don't.

What Real Strategy Looks Like

A misinformation strategy is a proactive, infrastructure-level framework that does three things:

  1. Builds a narrative foundation before it's needed

  2. Detects threats before they become crises

  3. Reduces the surface area for false narratives to take hold

Narrative infrastructure. This is the most overlooked component, and the one that has become significantly more consequential in the age of AI.

Before any false claim can spread, an organization needs an established, credible narrative about who it is, what it stands for, and what its track record demonstrates. Not brand messaging but a documented, publicly visible record (through content, relationships, third-party validation, and community presence) that makes false claims harder to stick because your audiences have already been exposed to the truth.

The organizations that survive misinformation attacks best are rarely the ones with the fastest response. They're the ones whose audiences already have enough of a positive baseline that the false claim doesn't land cleanly.

AI has raised the stakes here considerably.

Large language models (ChatGPT, Claude, Gemini, Grok, Perplexity) are increasingly the first place people turn to research a company, institution, or public figure. These models draw from news, social media, reviews, forums, and owned content. If those signals are inconsistent or contaminated by false claims, the AI summary reflects that.

The content you don't publish is the vacuum a false narrative fills, not just in the news cycle, but in the training data.

Organizations investing in consistent, authoritative published content are building the very inputs AI draws from to describe them. That includes employee and corporate creator programs, where staff publish under their own names and add credibility that the organization's official voice can't replicate. Brands building owned newsrooms and content studios are taking this further, creating direct audience relationships that don't depend on placement or coverage. They're not just managing brand awareness. They're building a body of published truth that's harder to displace.

Narrative intelligence. Monitoring tells you what's being said. Narrative intelligence tells you why it's spreading, who's moving it, and what it's doing to your stakeholders' perception of you.

Most organizations with social listening tools are measuring reach and sentiment, which is useful for marketing but inadequate for misinformation defense. Narrative intelligence goes further: tracking false claims as they first emerge, mapping which communities are most exposed, identifying whether amplification is coordinated or organic, and assessing what a claim is actually doing to belief, not just how many people have seen it.

The context here is stark. The 2025 Edelman Trust Barometer found that 68% of people surveyed distrust business leaders (up 12 points from the year before) and 68% believe business leaders deliberately mislead them. Five of the 10 largest global economies, including the U.S. (47), the UK (43), and Germany (41), rank among the least trusting nations on Edelman's global Trust Index.

That is the information environment your communications team is operating in. When a false claim arrives, it lands on an audience already primed to believe the worst. Monitoring tells you the claim exists. Narrative intelligence tells you whether it's going to take hold, and what to do about it before it does.

Inoculation. This is the most underutilized tool available to organizations right now. Inoculation theory, developed in social psychology and advanced by Cambridge professor Sander van der Linden, holds that pre-emptively exposing audiences to a weakened form of a false argument makes them significantly more resistant to the full version when they encounter it. Van der Linden calls this "prebunking," and his research has been validated across multiple cultural and linguistic contexts.

For organizations, this means identifying the false narratives most likely to target you, then addressing them proactively—surfacing and defusing them before they spread, rather than denying them after they do.

Response protocols calibrated to threat level. Not every false claim requires the same response. Engaging with a fringe claim can amplify it. Ignoring a fast-moving false narrative can allow it to become the dominant story. A real misinformation strategy includes a decision framework for when to respond, how to respond, through which channels, and with what level of visibility—written down before there's a crisis, not improvised during one.


Where to Start

At the baseline, each of the four components (narrative infrastructure, narrative intelligence, inoculation, and response protocol) can be built at different levels of investment. Here's the entry point for each.

Narrative infrastructure: Start publishing. One op-ed, one LinkedIn article, one documented case study that establishes your track record in your own words. That's what AI models will draw from when someone asks about you, and it's what makes a false claim harder to land with an audience that already knows who you are.

Narrative intelligence: Google Alerts for your organization's name, your leaders' names, and the most predictable false claims in your sector is the floor, not the ceiling. It won't give you a full picture, but it will catch the early signal most organizations miss because nobody thought to look.

Inoculation: Read Van der Linden's prebunking research—the applied version for organizations is more accessible than it sounds. Then write one piece that addresses the false narrative most likely to be used against you. Before it's used against you.

Response protocol: A one-page decision framework. When a false claim appears: Does engaging amplify it? Who responds and through which channel? What threshold triggers escalation? Almost no organization has this written down. It takes an afternoon to build and is worth considerably more than that when you need it.
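As an illustration of how small such a framework can be, here is a hypothetical triage sketch expressed as code. The tiers, thresholds, and field names are all illustrative assumptions, not an established standard; a real protocol would be tuned to your organization's channels and risk tolerance.

```python
# Hypothetical one-page response triage, expressed as code.
# All thresholds, tiers, and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Claim:
    reach: int               # estimated unique viewers so far
    velocity: float          # posts per hour mentioning the claim
    mainstream_pickup: bool  # has a large outlet or account repeated it?

def triage(claim: Claim) -> str:
    """Return a response tier for a false claim."""
    if claim.mainstream_pickup:
        return "respond publicly: leadership statement through owned channels"
    if claim.reach < 10_000 and claim.velocity < 50:
        return "monitor only: engaging would amplify a fringe claim"
    return "prepare: draft a holding statement and brief spokespeople"

# A low-reach, slow-moving claim lands in the "monitor only" tier.
print(triage(Claim(reach=2_000, velocity=5, mainstream_pickup=False)))
```

The point is not the code; it's that the decision logic fits on one page and can be agreed on before a crisis, so nobody is improvising the "do we engage?" call under pressure.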

The private sector is already operationalizing this. Some firms have developed enterprise-level products that combine real-time monitoring with prebunking campaigns built directly on Van der Linden's research. Other large communications firms are building comparable offerings. This is becoming a standard product category rather than a specialty.

Each of these components deserves its own article, and I'll be writing about each one. But the starting point is simpler than most organizations want to admit: decide that this is infrastructure, not a reaction, and act accordingly.

The Window Is the Strategy

The organizations that survive the misinformation era won't be the fastest to respond.

They'll be the ones who put in the work before they felt they needed to: building a documented narrative before anyone challenged it, developing intelligence capability before threats peaked, and preparing their audiences for attacks that hadn't happened yet.

If your organization's misinformation plan only activates after a false narrative has spread, you don't have a misinformation strategy. You have a plan to manage the aftermath.

The question is not whether your organization will face a misinformation threat. For any brand, institution, or public figure operating at scale, the question is when. And when it comes, the window to act effectively won't open after the crisis. It opened months ago.


Wilsar Johnson writes about brand, narrative, and strategic communications. She works with leaders, organizations, and brands navigating complex communication environments.

Sources: Vosoughi, Roy & Aral, "The Spread of True and False News Online," Science, March 2018. 2025 Edelman Trust Barometer. Sander van der Linden, Cambridge Social Decision-Making Lab.


The World Already Knows Misinformation Is a Threat. Brands Haven’t Caught Up Yet.

In 2025, Pew Research surveyed adults across 25 countries and asked them to name the biggest threats facing their nations. The results were striking. A median of 72%—across nations as different as Germany, Poland, the United States, and South Korea—identified the spread of false information online as a major threat.

That number topped terrorism in seven of those countries. In Germany and Poland, it was the single greatest perceived threat, by a wide margin.

We tend to read findings like this as political problems, something for governments, regulators, or online platforms to sort out. And they should worry. But I want to draw your attention to a parallel crisis that’s unfolding right in front of our eyes and getting far less attention: misinformation and disinformation are increasingly being deployed as weapons against brands and public figures, and most organizations have no idea it’s happening until the damage is already done.

The Machinery Is Already Running

In late 2025, Taylor Swift released The Life of a Showgirl, which immediately became the fastest-selling album in history. And almost immediately, something strange happened online.

Posts began circulating accusing Swift of embedding Nazi symbolism in her merch, endorsing MAGA politics, and signaling trad-wife gender norms—all framed as a leftist critique. The supposed evidence was thin: a lightning-bolt necklace that vaguely resembled SS insignia, and a single word choice in one song. But the narrative spread.

Fans and commentators pushed back, but that pushback, it turns out, was exactly what the operation needed. Every rebuttal amplified the original claim, and the algorithm rewarded the engagement, which means that millions of people who never believed the accusation helped spread it anyway.

Behavioral intelligence firm GUDEA analyzed over 24,000 posts and 18,000 accounts across 14 platforms in the weeks following the album’s release. What they found should concern every brand strategist, communications director, and PR professional reading this:
Just 3.77% of accounts drove 28% of the entire conversation.

That’s not organic discourse. That’s infrastructure.
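To make the statistic concrete: a concentration figure like this just asks what share of all posts comes from the most active sliver of accounts. Here is a minimal sketch of that arithmetic on toy data (this is not GUDEA's methodology; the function and numbers are invented for illustration).

```python
# Illustrative sketch (not GUDEA's method): how concentrated is a
# conversation among its most active accounts?

def top_share(posts_by_account: dict[str, int], top_fraction: float) -> float:
    """Share of all posts produced by the top `top_fraction` of accounts."""
    counts = sorted(posts_by_account.values(), reverse=True)
    k = max(1, round(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Toy data: 3 heavy posters in a field of 97 one-post accounts.
data = {f"acct{i}": 1 for i in range(97)}
data.update({"amp1": 40, "amp2": 35, "amp3": 25})
share = top_share(data, 0.03)  # top 3% of 100 accounts = 3 accounts
print(f"{share:.0%}")  # -> 51%
```

In organic discourse that share stays close to the fraction of accounts you selected; when 3 in 100 accounts drive half the volume, you are looking at amplification, not conversation.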

GUDEA also identified two distinct spikes of coordinated inauthentic activity—one in the album’s first week, and one after Swift’s lightning bolt merch dropped. During that second spike, roughly 40% of posts came from inauthentic accounts, and conspiracist content made up nearly 74% of total conversation volume during that window.

More alarming: the network pushing the Swift narrative had significant user overlap with a separate coordinated campaign targeting Blake Lively during her legal battle with Justin Baldoni. Same accounts, different targets, same playbook. This is what researchers called a “cross-event amplification network”—a reusable infrastructure for reputational attacks that can apparently be turned on and off, aimed at different targets, for reasons that aren’t always immediately obvious.

Before any Swift fans or haters come for my neck: she is the example, not the point.

When It's Not Just a Brand

The Taylor Swift case shows what coordinated disinformation looks like when it targets a brand. But what happens when the same machinery gets aimed at a real-world crisis?

On September 10, 2025, conservative activist Charlie Kirk was murdered at Utah Valley University. Within hours, the machinery was running.

False claims and conspiracy narratives began circulating almost immediately. AI systems, including Grok—with 6.2 million followers on X—generated and spread incorrect information about suspects and timelines that fed directly back into social media rumor cycles. A 77-year-old retired banker in Toronto named Michael Mallinson was falsely named as the shooter by Grok and had his name and face attached to posts accusing him of murder. He had no connection to Kirk, had never been to Utah, and yet found himself at the center of a coordinated misidentification campaign. “I felt violated,” he told CBC.

Meanwhile, analysts at the Center for Internet Security and the Institute for Strategic Dialogue documented something even more organized underneath the noise. Russian-backed groups, including a disinformation operation known as Operation Overload, manufactured fake news reports, fabricated celebrity quotes, and created false images designed to inflame both conservative and LGBTQ+ audiences. More than 46,000 posts on X discussing the shooting contained the word “trans,” the majority of which wrongly speculated the shooter was transgender. Roughly 26,000 posts expressed either concern about or desire for civil war.

The goal, as one former DHS official put it, was not just for people to consume the content, but to act on it.

And here’s the part that often gets lost in the coverage: the attacks didn’t stay contained to Kirk or the suspected shooter. Analysts documented how these operations targeted broader “enemy” categories—left-leaning groups, LGBTQ+ communities, and their organizational supporters. 

That means nonprofits. Philanthropy staff. Advocacy organizations. DEI practitioners. Campus orgs. People and institutions with no direct connection to the event who suddenly found themselves named in a coordinated information operation because they fit a narrative someone was trying to weaponize.

Online communities engaged in open-source “sleuthing” that reached into relatives’ Facebook pages and wider social graphs, a pattern that routinely pulls in uninvolved organizational staff and adjacent networks once a narrative catches fire.

If you work in or fund organizations that touch any politically sensitive space, you know this threat is not abstract but well-documented.

The Threat Brands Aren't Taking Seriously

Most organizations think about misinformation as background noise in a chaotic media environment. Something to monitor (and honestly, most brands and organizations don’t even do this unless there’s a crisis), maybe, or respond to if it gets bad enough.

What we’re now seeing—from Taylor Swift’s album rollout to the aftermath of political violence—is that misinformation can be aimed at you deliberately, efficiently, and at a scale that overwhelms your ability to course-correct once it’s in motion. A small number of coordinated accounts can seed a narrative in fringe spaces, watch it migrate to mainstream platforms, and let organic user behavior (outrage, defense, debate) carry it the rest of the way. By the time your social listening tool flags it (again, if that’s even set up), the discourse has already been shaped.

And with the rise of AI, the cost of running these operations is only going down. It’s now cheaper and faster to generate inauthentic accounts, produce content at volume, and personalize attacks to feel more credible. The Kirk case showed us AI chatbots with millions of followers amplifying false information in real time, with no correction or accountability.

The 3.77% problem is going to become the 10% problem. Then the 20% problem.

We talk a lot in brand and communications circles about narrative—about owning your story and being proactive rather than reactive. But most of that thinking was built for a world that no longer exists, where narratives emerged organically through traditional media coverage, customer sentiment, and employee behavior. The playbook assumed a relatively level playing field.

That playing field no longer exists.

A New Standard for Communicators

My argument is not meant to paralyze; it's the opposite. I'm arguing that communicators need to evolve to a new level of strategic sophistication. A few things worth sitting with:

  • Your narrative infrastructure matters more than ever. A brand with a strong, clear, and consistently communicated identity is harder to destabilize. When people already know who you are and what you stand for, a coordinated smear has less surface to grip. Narrative clarity used to be primarily a brand virtue. Now it’s also a defensive asset.
  • Monitoring is not the same as intelligence. Most brands track mentions and sentiment, and that’s table stakes now. The more urgent capability is understanding who is driving a conversation and how it’s structured—distinguishing organic discourse from coordinated amplification before you respond to something that was engineered to make you respond.
  • Engagement with bad-faith narratives often backfires. The Swift case illustrates this painfully. When authentic users rushed to defend her, they spread the original claim further and fed the algorithm. Communicators need to develop the skill and discipline to know when not to engage—and hold that line even when leadership is pushing you to respond.
  • Nonprofits and philanthropy are not exempt. The Kirk case should put every advocacy organization, foundation, and socially positioned brand on notice. You don’t have to be the target of an operation for it to reach you. You just have to be in the orbit of a narrative someone is trying to move.
  • The AI dimension changes the urgency. The organizations consistently creating factual, thoughtful content and building the capability to detect and respond to coordinated information attacks now will have a significant advantage over the ones who wait until they’re in the middle of one.

The World Has Named the Problem

A median of 72% of adults across 25 countries are worried about the spread of false information online. That kind of global consensus tells us that something has fundamentally shifted in our information environment.

Brands operate inside that environment. Their reputations live and die inside that environment. And the organizations that treat this as someone else’s problem—a political issue, a platform issue, a regulatory issue—are leaving themselves dangerously exposed.

The machinery is already running. The only question is whether your strategy has caught up to the world it’s actually operating in.



He Called Us "My Girls": What Steve Madden Just Taught Us About Personal Branding

Before last week, I couldn't tell you what Steve Madden looked like.

I knew the brand, the boots, the bags, the heels that hurt, but still ended up in our closets. But I didn't know the man. And I don't think I'm alone.

Then I watched his interview on The Cutting Room Floor podcast, and, well, now I do. And care.


It was, as Recho Omondi, founder and host of the podcast, put it:

"A case study in how brands should just be themselves, flaws and all."


She said what I felt watching it:

"So often brands are scared. You can't have a single real conversation with anybody from the brand."


But Steve? "He sells himself. His personality is great."

He showed up as a person, not a logo. And that changed how people saw him, especially folks on TikTok. (And like the natural order of things on the internet, what starts on TikTok trickles into Twitter, where his interview is also being well received, notable for a platform known for being critical of, well, almost everything.)

People were struck by how he spoke:

"He called us 'My Girls' like we aren’t numbers, like he knows us."

"All his shoes hurt, but I've always supported him."

"I haven't bought a pair in years... but now I know he's a cool guy, so you got my ass back in those stores."


Some praised the honesty. Others found themselves defending him.

One person flat-out said, "I might go buy me some Steve Mads tomorrow just off that video."



This is the power of restrained authenticity, not full transparency, but intentional humanity. The kind that creates resonance. The kind that reminds people there's a real person behind the brand.


And this moment proves what I've long believed about branding and social media:

People don't just buy policy. They buy into people.


They buy into stories. Into how you make them feel. Into what it feels like to be seen by you, even in a 90-second TikTok clip.


Steve Madden didn't rebrand. He just showed up. And the response has been remarkable:

  • "His story is so interesting, he seems for the people."

  • "He's designing for the everyday woman, not the 2% that can afford luxury fashion."

  • "He was honest. That's it. That's the post."

Moments like this are a reminder:

Your presence is the strategy.

You don't need to tell your whole life story. But you do need to let people meet the version of you that belongs in the room.

Because people connect with people. And when they do, they follow you, root for you, forgive you, and yes, vote for you.

And it all starts when you drop the pitch and show up as a person.

Digital Strategy in Leadership: What the Next DNC Chair Should Prioritize

Democrats have messaging problems that contributed to their defeat in the 2024 election. They struggle to convey their accomplishments and policy priorities to voters, especially on key issues like immigration and the economy. As a result, Republicans get to shape everyday Americans’ perceptions of the Democratic Party. In my previous blog, I outlined how Democrats can leverage influencers to engage voters more meaningfully. Building on that foundation, there is a path forward for Dems to build a cohesive, pragmatic, and balanced digital strategy. The next DNC Chair should prioritize these three complementary strategies:

  • Creating a Big Tent Influencer Ecosystem

  • Investing in Paid Influencer Partnerships

  • Embracing Creator-Style Long-Form Content on YouTube

Together, these strategies can transform how Democrats connect with voters, share their message, and rebuild trust online. They represent a practical, scalable approach to competing in the modern media landscape.

Although digital strategy encompasses many areas, the focus here is on social media, because that’s where most Americans spend their time. Platforms like YouTube, X/Twitter, Instagram, and TikTok are no longer optional—they are essential tools for building community and connection. To succeed, the next DNC Chair must rethink the party's mindset toward social media and shift from sporadic content creation to a sustainable, ongoing creator-style approach.

A Big Tent Influencer Ecosystem

Democrats need to develop a year-round information service across multiple platforms. A Big Tent Influencer Ecosystem is the foundation of this strategy. This is about cultivating a network of influencers who represent the diversity of the Democratic Party’s values and audiences—ranging from progressive activists to cultural creators, gamers, and even lifestyle influencers.

Republicans have long understood the power of this approach. Organizations like Turning Point USA and The Daily Wire have created well-organized and well-funded ecosystems that amplify conservative messaging through cultural and ideological lenses. These influencers shape narratives not just around politics but in everyday cultural spaces. Democrats, by contrast, often approach influencers (and social media altogether) as an afterthought, engaging them only during critical campaign periods.

The next DNC Chair has the opportunity to change this by:

  • Identifying and Supporting Influencers. Build a network of creators across a range of niches who can authentically speak to different segments of the Democratic base.

  • Providing Resources Year-Round. Offer tools, training, and production support to help influencers create high-quality, resonant content.

  • Fostering Community. Create ongoing opportunities for collaboration between influencers, candidates, and party leaders, building trust over time.

By investing in an ecosystem that includes influencers with varying ideological perspectives (from Moderate to Progressive), the party can truly embrace its "big tent" identity and speak to its diverse voter base.

Paid Influencer Partnerships

While the Big Tent Ecosystem builds a long-term foundation, Paid Influencer Partnerships offer precision and scale when it’s time to amplify key messages. Paid collaborations allow the party to work with influencers who have large, engaged audiences, extending its reach beyond the organic network.

This isn’t just about throwing money at creators—it’s about targeted, data-driven investments that complement the party’s organic strategy. For instance:

  • Partner with news influencers like Brian Tyler Cohen and Hasan Piker to break down policies for progressive audiences.

  • Engage cultural influencers to bridge the gap between politics and everyday life.

  • Focus on podcasts, a powerful medium for nuanced, long-form conversations with audiences who want deeper engagement.

Paid partnerships should align with the overarching messaging from the Big Tent Ecosystem to ensure consistency and credibility. Together, these strategies create a feedback loop where organic storytelling builds trust, and paid media amplifies that trust to wider audiences.

YouTube as the Anchor for a Creator-Style Content Strategy

A modern organic social media strategy must go beyond posting static content and embrace community building. The party’s social media strategy should prioritize YouTube as the central platform for long-form, creator-style content. Why YouTube? Because it’s the most widely used online platform, long-form storytelling thrives there, and it gives Democrats room to foster deeper community engagement.

Imagine a politician using YouTube not as a space for formal speeches, archived videos, or polished ads but as a platform for genuine connection. They could:

  • Break down policies. Explain healthcare, infrastructure, or climate change policies in simple, conversational terms, using engaging visuals and real-life examples.

  • Share behind-the-scenes glimpses. Offer candid insights into their work as public servants, making them more relatable and trustworthy.

  • Answer constituent questions. Tackle voter concerns directly in a Q&A format, fostering two-way dialogue.

This creator-style approach transforms politicians from distant figures into trusted voices, actively participating in the lives of the people they represent.

While YouTube serves as the anchor for long-form content, short-form platforms like TikTok, Instagram Reels, and YouTube Shorts are equally important: they let audiences discover and curate content, often serving as the entry point for people who later seek deeper engagement on YouTube or other long-form platforms.

By aligning long-form storytelling on YouTube with short-form strategies, Democrats can create a seamless funnel for engagement—one that builds trust, fosters community, and connects with voters in meaningful ways.

Fostering a Culture of Social Media Innovation

For the Democratic Party to succeed in this space, the next DNC Chair must prioritize innovation—not just in tools and platforms but in how the party approaches social media holistically. Here are three actionable steps the Chair should take:

  1. Create a team focused exclusively on social media content creation, storytelling, and audience engagement. This team should operate like a modern digital-first media company, producing high-quality, creator-style content that resonates with today’s audiences.

  2. Equip candidates, campaign staff, and influencers with the skills and tools needed to create impactful content. This includes video production training, access to analytics tools, and workshops on building authentic connections with online audiences.

  3. Thoughtfully integrate social media efforts with traditional media campaigns, ensuring consistency across platforms. For example, a YouTube series explaining policies could be complemented by op-eds, TV appearances, and newsletters that reinforce the same messaging.

These steps would position the Democratic Party as a leader in political digital communication, capable of adapting to the rapid shifts in how people consume information.

A Roadmap for the Next DNC Chair

The next DNC Chair faces a pivotal moment to modernize the party’s approach to social media strategy. By building a Big Tent Influencer Ecosystem, investing in Paid Influencer Partnerships, and embracing Creator-Style Long-Form Content on YouTube, the Chair can lay the groundwork for a cohesive, pragmatic, and effective social media strategy.

I am not proposing that the party replace traditional media or copy Republican tactics. The goal is a balanced, fresh approach that meets voters where they are. The tools are there; now it’s time for bold leadership to put them into action.

The Democratic Party has the opportunity to lead, inspire, and connect with voters in ways that build trust, foster community, and ensure a sustainable digital future. The question is: will the next DNC Chair rise to the challenge?

Influencers and Elections: A Path Forward for the Democratic Party

Did you see this tweet on X about Democrats needing their own "Joe Rogan"? The tweet was likely in good faith, but it misses a key point: the internet doesn’t work like that. Building a media ecosystem like the GOP’s isn’t about copying their tactics; it's about getting on the same playing field. Democrats should invest in a comprehensive, thoughtful digital strategy that combines organic social media, influencer marketing, and long-term relationship-building so they can continue fighting for the American people.

Having worked as a digital director for a number of Democratic offices and committees on Capitol Hill, I’ve seen how our party often underestimates the value of online spaces. Too often, we treat digital strategy as a secondary concern, assigning social media responsibilities to overworked communications staffers instead of building dedicated, well-funded teams. Conservatives, on the other hand, have invested heavily in their digital infrastructure, creating an imbalance that’s hard to ignore.

This isn’t just about resources—it’s about mindset. Some folks have been hesitant to engage with Progressive creators and influencers, particularly those who challenge party policies. I’m not suggesting we give airtime to bad-faith actors, but sidelining respectful, left-leaning influencers risks alienating critical audiences. It’s not just about winning elections; it’s about building trust, relevance, and cultural influence.

Learning from the GOP's Playbook

The GOP’s digital strategy is clear and (unfortunately) effective. According to the Pew Research Center, about one in five Americans get their news from influencers, with the number jumping to 37% among adults under 30. More news influencers explicitly identify as right-leaning (27%) than left-leaning (21%). Conservatives have not only acknowledged this shift but have also built an ecosystem to support their influencers.

Organizations like Turning Point USA and The Daily Wire act as incubators, providing funding, resources, and platforms for conservative creators to grow their audiences. This infrastructure amplifies their messages and shapes public opinion in ways traditional media cannot. For instance, Donald Trump’s appearances on podcasts like Joe Rogan’s and the Nelk Boys’ allowed him to reach disengaged voters, building a connection that extended beyond the Republican base.

What Democrats Got Right

Despite these challenges, there are bright spots. Kamala Harris’s 2024 campaign made strides with its influencer outreach, credentialing 200 creators for the Democratic National Convention. Influencers were given unparalleled access to events, talking points, and even face time with Harris herself.

By providing creators with resources and access, the Harris campaign laid the groundwork for a more effective influencer strategy. Pew’s aforementioned study highlighted the significant role of news influencers, particularly among younger demographics. Platforms like TikTok, where left-leaning influencers slightly outnumber their right-leaning counterparts, offer a unique opportunity for Democrats to expand their reach. Yet, this space remains underutilized.

A Blueprint for Democratic Success

To level the playing field, Democrats need to adopt a balanced and holistic approach to digital strategy. Here’s how they can start:

  1. Invest in an Influencer Ecosystem. Create a network of content creators, news influencers, and digital storytellers who can amplify progressive messages. This ecosystem should include influencers who challenge the status quo, not just loyalists.
  2. Leverage Long-Form Content. Podcasts, YouTube channels, and other long-form formats allow for deeper engagement. The Harris campaign’s creator program showed promise but needs to be scaled up to include more platforms and creators.
  3. Engage Directly with Influencers. Provide access to events, candidates, and campaigns. Authenticity is key—trust influencers to create content that resonates with their audiences.
  4. Show Up in Existing Ecosystems. Democrats don’t need to build new platforms; they need to show up where audiences already are. Whether it’s Joe Rogan, TikTok, or smaller creators on YouTube, presence matters.
  5. Prioritize Legacy Media. Traditional media still holds significant value in reaching a broader audience. The goal isn’t to replace legacy institutions but to complement them with a stronger online presence.
  6. Balance Control and Creativity. Collaboration, not micromanagement, is the path to success. Influencers need creative freedom to deliver authentic messaging.
  7. Commit to Long-Term Investment. Building an influencer ecosystem isn’t a one-cycle effort. It requires sustained funding, training, and support to create a pipeline of progressive voices for years to come.
  8. Align Creators with Candidates. Recognize that every party member has a unique style and role. Align them with creators whose audiences and voices complement their strengths, fostering authenticity and connection.

This isn’t about copying Republicans—it’s about capturing our share of public opinion. Democrats need to embrace the reality of today’s media landscape, where influencers wield more power than ever before. By investing in digital infrastructure, engaging with diverse creators, and showing up in existing spaces, the party can close the gap while maintaining its integrity.

The Rise of News Influencers: Reshaping Political Discourse Online

News influencers are reshaping how we consume and engage with political information. If you’re online like me, you’ve probably come across—and maybe even followed—some of these content creators. Influencers aren’t new, but their influence on news and politics is growing. A recent Pew Research Center study reveals how these creators are profoundly shaping public opinion, especially in the political sphere. These findings highlight a critical shift in how political narratives are disseminated and consumed in the digital age.

This decentralized, influencer-driven landscape connects audiences directly to content creators, offering relatability and immediacy that traditional outlets often lack. However, this shift raises important questions about credibility, bias, and accountability. Understanding these dynamics is crucial for anyone invested in the future of political communication.

The Pew study provides a detailed look at the role of influencers in political discourse:

  • Over half of news influencers’ posts about current events focus on politics, with the 2024 election dominating discussions.
  • More news influencers explicitly identify as right-leaning (27%) than left-leaning (21%), creating an ideological imbalance that reinforces conservative narratives while leaving progressive voices relatively underserved.
  • About one in five Americans regularly get their news from influencers. Among adults under 30, that number jumps to 37%, highlighting their growing influence among younger demographics.

Platforms like X (formerly Twitter), Instagram, YouTube, and TikTok are their main stages. This new media ecosystem decentralizes news dissemination, amplifying both the opportunities and risks associated with influencer-driven political content.

Both major U.S. political parties are engaging with news influencers, but their approaches differ significantly. Democrats have begun incorporating influencers into their campaigns, particularly during Kamala Harris’s 2024 presidential bid. For instance, the DNC credentialed over 200 influencers to cover events and engage younger, more diverse audiences. However, Democratic efforts often lack the coordination and funding seen on the right.

Republicans have a more established playbook, supported by well-funded organizations like Turning Point USA and The Daily Wire. These entities act as incubators for conservative influencers, providing resources, messaging alignment, and significant financial backing.

For example, figures like Ben Shapiro and Charlie Kirk have built vast followings through coordinated efforts, while progressives like Hasan Piker (HasanAbi) rely on grassroots support. This discrepancy underscores the need for Democrats to rethink their digital strategies and invest in a more robust influencer ecosystem.

The study reveals interesting platform dynamics:

  • 85% of news influencers are active on X, making it the most popular platform for this group.
  • TikTok offers a unique opportunity for Democrats, as it’s the only major platform where left-leaning influencers slightly outnumber their right-leaning counterparts. Its popularity among Gen Z and Millennials presents a chance to expand influence in these critical voter demographics.

News influencers also reach often-overlooked audiences, including racial minorities, young adults, and lower-income groups. For campaigns, this represents an opportunity to engage these critical voter segments through tailored, relatable content.

While influencers drive engagement, they also contribute to political polarization. A Penn State study found that influencers may push parties toward moderation in general elections while deepening societal divides by reinforcing echo chambers.

Campaigns must walk a fine line: leveraging influencers for reach while being mindful of the potential for increased division. This challenge underscores the importance of strategic planning in influencer engagement.

The rise of news influencers signifies a fundamental shift in how political information is shared and consumed. Here’s why this matters:

  1. Reaching Untapped Audiences: Influencers connect with groups that traditional media struggles to reach, particularly younger voters and those disillusioned with legacy institutions.
  2. Framing Political Narratives: With millions of views and shares, influencers wield significant power in framing political debates, often acting as primary news sources for their followers.
  3. Filling the Trust Gap: As trust in traditional media declines, influencers are filling the void. While this democratizes information, it also opens the door to misinformation, as influencers operate outside traditional journalistic standards.
  4. Shaping Voter Behavior: Influencers’ ability to engage authentically makes them powerful tools for voter outreach, provided campaigns balance reach with integrity and factual accuracy.

To thrive in this evolving media landscape, campaigns must:

  • Invest in Diverse Influencer Partnerships: Build relationships with a range of influencers, from mainstream to niche voices, ensuring authenticity and effectiveness in outreach.
  • Promote Media Literacy: Equip voters with tools to critically evaluate influencer content, reducing the risk of misinformation.
  • Balance Reach and Integrity: While influencers amplify messages, campaigns must prioritize transparency and factual accuracy to maintain public trust.

The rise of news influencers is not a fleeting trend—it’s a redefinition of how political narratives are crafted and consumed. As this space evolves, political campaigns, parties, and even voters must adapt to engage thoughtfully and strategically. The stakes are high, but so are the opportunities.