EXPERT PERSPECTIVE — Lately, the national conversation about disinformation has focused on bot networks, foreign operatives, and algorithmic manipulation at industrial scale. These concerns are legitimate, and I spent years inside the CIA studying them with a level of urgency that matched the stakes. But an equally important story is playing out at the human level. It is a story that requires us to look more closely at how our own instincts, emotions, and digital habits shape the spread of information.
This story reveals something both sobering and empowering: falsehood moves faster than truth not merely because of the technologies that transmit it, but because of the psychology that receives it. That insight is not just the intuition of intelligence officers or behavioral scientists. It is backed by hard data.
In 2018, MIT researchers Soroush Vosoughi, Deb Roy, and Sinan Aral published a groundbreaking study in Science titled "The Spread of True and False News Online." It remains one of the most comprehensive analyses ever conducted of how information travels across social platforms.
The team examined more than 126,000 stories shared by 3 million people over a ten-year period. Their findings were striking. False news traveled farther, faster, and deeper than true news. In many cases, falsehood reached its first 1,500 viewers six times faster than factual reporting. The most viral false stories routinely reached between 1,000 and 100,000 people, while true stories rarely exceeded a thousand.
One of the most important revelations was that humans, not bots, drove the difference. People were more likely to share false news because the content felt fresh, surprising, emotionally charged, or identity-affirming in ways that factual news often does not. That human tendency is becoming a national security concern.
For years, psychologists have studied how novelty, emotion, and identity shape what we pay attention to and what we choose to share. The MIT researchers echoed this in their work, and a broader body of research across behavioral science reinforces the point.
People gravitate toward what feels surprising. Novel information captures our attention more effectively than familiar information, which means sensational or fabricated claims often win the first click.
Emotion adds a powerful accelerant. A 2017 study published in the Proceedings of the National Academy of Sciences showed that messages evoking strong moral outrage travel through social networks more rapidly than neutral content. Fear, disgust, anger, and surprise create a sense of urgency and a feeling that something must be shared quickly.
And identity plays a subtle but significant role. Sharing something provocative can signal that we are well informed, especially vigilant, or aligned with our group's worldview. This makes falsehoods that flatter identity or confirm preexisting fears particularly powerful.
Taken together, these forces form what some have called the "human algorithm": a set of cognitive patterns that adversaries have learned to exploit with growing sophistication.
During my years leading digital innovation at the CIA, we watched adversaries expand their strategy beyond penetrating networks to manipulating the people on those networks. They studied our attention patterns as closely as they once studied our perimeter defenses.
Foreign intelligence services and digital influence operators learned to seed narratives that evoke outrage, stoke division, or create the perception of insider knowledge. They understood that emotion could outpace verification, and that speed alone could make a falsehood feel believable through sheer familiarity.
In the current landscape, AI makes all of this easier and faster. Deepfake video, synthetic personas, and automated content generation allow small teams to produce large volumes of emotionally charged material at unprecedented scale. Recent assessments from Microsoft's 2025 Digital Defense Report document how adversarial state actors (including China, Russia, and Iran) now rely heavily on AI-assisted influence operations designed to deepen polarization, erode trust, and destabilize public confidence in the U.S.
This tactic does not require the audience to believe a false story. Often, it merely aims to leave them unsure of what truth looks like. And that uncertainty itself is a strategic vulnerability.
If manipulated emotions can accelerate falsehood, then a thoughtful and well-organized response can help ensure factual information arrives with greater clarity and speed.
One approach involves increasing what communication researchers sometimes call truth velocity: getting accurate information into public circulation quickly, through trusted voices, and with language that resonates rather than lectures. This does not mean replicating the manipulative emotional triggers that fuel disinformation. It means delivering truth in ways that feel human, timely, and relevant.
Another approach involves small, practical interventions that reduce the impulse to share dubious content without thinking. Research by Gordon Pennycook and David Rand has shown that brief accuracy prompts (small moments that ask users to consider whether a headline seems true) meaningfully reduce the spread of false content. Similarly, cognitive scientist Stephan Lewandowsky has demonstrated the value of clear context, careful labeling, and straightforward corrections in countering the powerful pull of emotionally charged misinformation.
Organizations can also help their teams understand how cognitive blind spots influence their perceptions. When people know how novelty, emotion, and identity shape their reactions, they become less susceptible to stories crafted to exploit those instincts. And when leaders encourage a culture of thoughtful engagement, where colleagues pause before sharing, check the source, and notice when a story seems designed to provoke, it creates a ripple effect of sounder judgment.
In an environment where information moves at speed, even a brief moment of reflection can slow the spread of a harmful narrative.
A core part of this challenge involves reclaiming the mental space where discernment happens, what I refer to as Mind Sovereignty™. The concept is rooted in a simple practice: notice when a piece of information is trying to provoke an emotional response, and give yourself a moment to evaluate it instead.
Mind Sovereignty™ is not about retreating from the world or becoming disengaged. It is about navigating a noisy information ecosystem with clarity and balance, even when that ecosystem is designed to pull us off balance. It is about protecting our ability to think clearly before emotion rushes ahead of evidence.
This inner balance, in some ways, becomes a public good. It strengthens not just individuals but the communities, organizations, and democratic systems they inhabit.
In the intelligence world, I always believed that truth was resilient, but it cannot defend itself. It relies on leaders, communicators, technologists, and, more broadly, all of us who choose to treat information with care and intention. Falsehood may enjoy the advantage of speed, but truth gains power through the quality of the minds that carry it.
As we develop new technologies and confront new threats, one question matters more than ever: how do we strengthen the human algorithm so that truth has a fighting chance?
All statements of fact, opinion, or analysis expressed are those of the author and do not reflect the official positions or views of the U.S. Government. Nothing in the contents should be construed as asserting or implying U.S. Government authentication of information or endorsement of the author's views.