Fake News Is a Marketing Feature, Not a Hack: Part 2

We live in a post-truth world, and our brains are to blame.

In November, I argued that fake news is the world’s most powerful and socially destructive marketing technique. If you haven’t read that article, I recommend at least skimming the post so we’re on the same page about what fake news is and why it’s a problem for society and marketers.

In this post, I break down the science behind why we’re so susceptible to fake news and discuss how optimized disinformation compromises our behaviors and beliefs—even in the presence of overwhelming evidence of “the truth.”

How Disinformation Hacks Our Brains

Disinformation and fake news prey on the fundamental ways our brains process information to bypass our critical-thinking safeguards and dupe us into believing—and advocating for—insidious lies.

Dr. Seth M. Porter, the assistant director of digital teaching, learning, and scholarship at Princeton University, discussed how fake news affects our memory in the research collection, Fake News in an Era of Social Media: Tracking Viral Contagion. Porter argues that fake news is designed to compromise our:

  • Memory recall
  • Belief in misinformation
  • Decision making
  • Collective memory
  • Cultural norms

By manipulating these components of our memory and social interaction, fake news slips into our brains and social circles undetected—a virus biding its time until we unwittingly spread the disinformation to a new host.

And once we’re snared in the web of lies, even when someone challenges the disinformation, our views rarely change—by that point, we no longer see or care about the truth.

There are several paths fake news peddlers use to hack how your brain processes learning and makes memories. In the following sections, I focus on the approaches most closely related to commonplace marketing techniques.

Memory, Emotions, and Fake News

Before we move further, I want you to reflect on two memories. The first memory should be precious and elicit a strong emotional reaction, like the birth or death of a family member. The second memory should be an emotionally charged event that holds little influence in your life, like overhearing a political conversation you disagreed with or a fun event from your childhood.

Try to remember specific details about each memory: sights, colors, smells, sounds, feelings.

Now, ask yourself these three questions for each memory:

  1. How do you know the details of the memory are accurate?
  2. Are you positive that memory is real?
  3. How can you test or prove your answers?

You should be able to answer all three questions relatively easily for the first scenario and with a good degree of confidence, thanks to how the hippocampus and amygdala create and store memories from novel events with a powerful emotional trigger.

The second scenario may draw more pauses depending on how long ago the event happened. You may initially be sure this memory is true despite the finer details being a bit fuzzy. After pondering the memory, there is a chance you’re left wondering if your brain manufactured the entire sequence with stimuli you’ve encountered elsewhere. The important thing in this situation is that you’re aware your memory may not be perfect.

Here’s where the power of optimized disinformation kicks into gear. If you’ve been infected with a belief from a fake news source, your brain can conjure a “memory” of an event that never happened and make it feel as vivid and trustworthy as the first scenario’s memory.

For example, in the study, False Memories for Fake News During Ireland’s Abortion Referendum, researchers performed an experiment that showed exposure to fake news and political propaganda can create false memories.

In the experiment, scientists gathered registered voters in the week preceding Ireland’s 2018 abortion referendum.

The 3,140 participants read six news stories about referendum campaign events. Two of these stories were fake. During the experiment, almost half of the participants reported a memory that supports at least one fabricated news story. More than one-third of participants reported a specific, first-hand memory of the fake event.

The participants’ political biases further exacerbated the creation of false memories.

Voters who supported the referendum were more likely than people who voted against the law to “remember” a fabricated scandal regarding the campaign to vote “no.” Likewise, “no” voters were more likely than “yes” voters to “remember” a fabricated scandal regarding the campaign to vote “yes,” the study authors wrote.

That fabrication happens because memory plays an essential role in assessing the validity of information. When that memory is compromised, it affects how easily we believe and share fake news, Porter writes.

The Illusory Truth Effect

When a false claim is repeated often enough, people start to believe it’s true. This is called the “illusory truth effect,” and it’s the lynchpin of optimized disinformation.

The underlying power of illusory truth is that the phenomenon still affects people who disagree with the initial falsehood. Because they keep seeing the phrase or hearing the claim, the lie gains fluency and takes on a glimmer of perceived truth.

Fluency relates to how easily our brains process a claim. Repeated claims are easier to represent and comprehend, which requires less cognitive energy and feels good, Scientific American reports. Our brains take this positive feeling as a cue that the claim is true, which leads us to accept the claim the next time we hear it.

Concepts too preposterous for the listener to entertain, like telling an astronaut the world is flat, are excluded from this effect.

This phenomenon is similar to how we associate brands with positive qualities from a catchy jingle or often repeated value proposition. And like a song chorus that won’t leave your head, you only need to encounter a fake news headline once to be hooked by the illusory truth effect.

In the report, Prior Exposure Increases Perceived Accuracy of Fake News, the authors exposed participants to 12 news headlines presented to look like Facebook posts. Six news headlines were factually accurate, and the others were untrue. Some of the headlines were labeled with a disputed claim warning. Users assessed the headlines and then determined if they’d share these articles on social media.

After a few unrelated tasks, users were presented with 24 news headlines: the 12 they had already encountered and 12 new ones. The new headlines also had an equal true/fake split, and some carried the disputed information warning. Users then rated each headline for familiarity and accuracy.
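To make the two-phase design concrete, here is a minimal sketch of how the headline sets could be assembled. This is purely illustrative: the study did not publish code, and every name below is hypothetical.

```python
import random

def build_phases(true_pool, fake_pool, seed=42):
    """Hypothetical sketch of the study's two-phase exposure design.

    Phase 1 shows 6 true and 6 fake headlines; phase 2 re-shows those
    12 alongside 6 new true and 6 new fake headlines, shuffled together.
    """
    rng = random.Random(seed)
    # Phase 1: equal split of true and fake headlines.
    phase1 = rng.sample(true_pool, 6) + rng.sample(fake_pool, 6)
    # Phase 2: the seen headlines plus 12 unseen ones, same split.
    new_true = [h for h in true_pool if h not in phase1][:6]
    new_fake = [h for h in fake_pool if h not in phase1][:6]
    phase2 = phase1 + new_true + new_fake
    rng.shuffle(phase2)
    return phase1, phase2
```

The key measurement then compares accuracy ratings for the previously seen fake headlines against the unseen ones, which is where the exposure effect shows up.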

The authors discovered that a single prior exposure to fake news headlines was sufficient to measurably increase subsequent perceptions of the headlines’ accuracy. The increase was relatively small, but a headline’s trustworthiness rose again with a second exposure, compounding the perceived validity over time.

Plus, the explicit warning users saw next to fake news headlines did not abolish or significantly diminish the illusory truth effect, the authors wrote.

The illusory truth effect is also in full swing for marketers.

In the study The Illusory Truth Effect: Exploring Implicit and Explicit Memory Influences on Consumer Judgments, researchers found that repeated exposure to brand or product value propositions, subliminal marketing messages, and brand recognition techniques (slogans, jingles, etc.) increased the perceived validity and trustworthiness of brands and products.

One of the more interesting effects of this phenomenon, for users and marketers, is how fluency and memory modification interplay with biases to further strengthen the perceived accuracy of fake news and brand messaging.

Confirmation Bias and Cognitive Dissonance

It’s time for another moment of reflection. This is a simple exercise in confirmation bias and cognitive dissonance, two factors that play enormous roles in fake news validation and marketing—particularly on social media and in organic search results.

I want you to think of two brands, one you love and one you despise. Now, answer the following three questions:

  1. Why do you have strong positive/negative feelings associated with this brand?
  2. Do you expect further engagement with the brand to reinforce that feeling?
  3. What, if anything, could change your mind about the brand?

Confirmation bias is simply the tendency to interpret new evidence as confirmation of your existing beliefs or theories. You explored this bias in steps two and three of the exercise.

Cognitive dissonance describes the mental discomfort you feel when you’re presented with two pieces of conflicting information that can’t both be true. You must choose which piece of information is correct. The fake news warning labels on Twitter and Facebook are examples of creating a moment of cognitive dissonance.

Ideally, users will always rely on sound evidence to override their confirmation biases when they encounter cognitive dissonance. Unfortunately, research shows the opposite is true.

Based on findings from the study, Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All, moments of cognitive dissonance online, such as a disputed claim warning flag, won’t create an epiphany that overrides your confirmation bias. However, these encounters will slightly adjust your behavior.

In the experiment, participants read 50 fact-based news headlines and assessed whether the headlines were true or false. Forty headlines were designed to be ambiguous, so they could plausibly read as either true or false. A warning matching Facebook’s fake news flag was randomly assigned to 20 of the 40 headlines. The 10 remaining headlines were used as a control.

The study results show the warning label triggered more brain activity in participants and increased the time they spent considering the headline. However, users were still more likely to believe news headlines they wanted to be true and disregard the fake news flag. The unrelenting grip of confirmation bias, even in the face of cognitive dissonance, left users correctly identifying factual headlines only 44 percent of the time.

Confirmation Bias and Organic Search

The inability to look beyond our biases and determine which information is accurate becomes an enormous social issue when users research impactful topics on search engines.

For example, a December 30th, 2020 NPR/Ipsos poll shows that 40 percent of American respondents said they believe the coronavirus was made in a lab in China. (For the record, there is no evidence for this claim—scientists say the virus was transmitted to humans from another species.)

Let’s assume these “COVID hoax” believers did some type of online research before coming to their conclusions. (Based on research presented in the first article of this series, it’s more likely that users saw the information on social media and automatically assumed it was accurate.)

Based on Ahrefs data, users may have fallen prey to their confirmation bias and searched for one of the following terms:

  • china created coronavirus (1,100 searches/month)
  • china made coronavirus (600 searches/month)

Now, if the user wasn’t entirely sure they believe the unfounded conspiracy, they may have searched for one of the following terms:

  • did china create coronavirus (900 searches/month)
  • who created the coronavirus (1,900 searches/month)

The search results for both options are a smattering of chaos. Here are the first-page results on December 31st, 2020 for “china created coronavirus” and “who created the coronavirus.”

As expected, the results skew toward the user’s search intent via the keywords they chose. Each first-page result has headlines that confirm or deny at least part of the user’s suspicions; however, only the open-ended question provided information from medical sources: WebMD and the National Institutes of Health.

Fortunately, in some scenarios, like this tame one, Google’s algorithms are getting better at providing results that don’t conform to a user’s confirmation bias for medium- to high-volume search terms. However, the distinction in result quality is still clear.

One important part of these results to acknowledge is that Google chose not to provide an answer box for these terms, which could teach users the facts upfront.

This issue warrants further discussion in part three of this series, but an answer box for bias-affirming search terms could be one way to use the illusory truth effect and cognitive dissonance to society’s advantage, particularly for our collective memory.

The Fake News Mind Meld

The most damning brain-altering power of fake news is its ability to change what people believe on a wide scale, which it does by modifying our collective memory.

Collective memory describes how groups of individuals remember the past. These memories form in small groups, such as a family vacation, and with society at large, like the details of a historical event. Although we form memories as individuals, those memories get modified over time when the event was a shared experience, Porter writes.

This change happens because of cross-cueing, a key component in collective memory that occurs when information is exchanged among group members, and the group gains a collective understanding about the topic, Porter explains. Cross-cueing benefits groups because individuals can’t understand all aspects of a shared event or experience.

This phenomenon is among the reasons word-of-mouth marketing and user reviews are so effective and important. We spread experiences with brands or products to teach others. By sharing our knowledge, the recipient benefits, and our perceived social value increases.

Unfortunately, cross-cueing also leaves groups vulnerable to distributing misinformation. This process tends to form false memories in individuals when the person is caught in an information bubble and exposed to fake information repeatedly. Eventually, their memories will change and propagate to the greater group, affecting their collective memory and beliefs, Porter states.

This effect is easily observed when tracking conspiracies on social media or observing users’ behavior habits when they only receive news from one source. A simple example is how the average Fox News viewer is less informed about factual events than somebody who doesn’t follow any news source.

Once a community’s collective memory is compromised, the ideas and values “taught” through fake news spread rapidly and become a defining pillar for that culture, thus guaranteeing the fake ideas continue to spread as the culture expands, Porter writes.

This is essentially cultural brainwashing on a wide scale, potentially sparking from a single fake news headline. And this revelation brings us to the ultimate question around online fake news and optimized disinformation: who deserves to be the arbiter of truth?

If we trust a corporate entity like Google to establish what a fact is via answer boxes and rich snippets, how can we ensure their chosen facts don’t rewrite history for a more glamorous version, similar to how textbooks in the U.S. gloss over the government’s civil rights violations and state-sponsored terrorism?

Trusting national or local governments to hold this role is out of the question.

But if we continue without an arbiter of truth, we’re stuck with an enormous percentage of the world’s population gulping down lies like tequila shots at happy hour, and believing in dangerous conspiracies peddled by people who manipulate others for wealth and power.

Sadly, our brains are not wired to identify and root out fake news designed for us and presented to us by trusted sources, like family and friends. There is not an easy solution to this quandary. More than likely, we’ll need to sew various patchwork policies, technical safeguards, and community education efforts together into a formidable quilt of truth.

What Can We Do?

With that rosy picture of doom and gloom painted, what can we do to protect ourselves from being duped by fake news and other forms of optimized disinformation? And what can we do as marketers to ensure the claims we make about a product or brand are genuine and the information we provide is accurate?

Unfortunately, we aren’t Vulcans, so there isn’t a cohesive answer yet, but people are devising possible solutions. I’ll discuss those in the final article in this series.

Until more permanent solutions are in place, we must grow our critical thinking skills. Our brains may devour fake news and disinformation, but we can fight back by piquing our curiosity about what we read and hear, and by encouraging our customers to do the same. Tap into the benefits of the illusory truth effect and repeat the facts. But instead of repeating value propositions or brand slogans for the effect’s own sake, make it a priority to include the context your audience needs to make educated decisions and share informed opinions.

After all, the best way to fight the neurological effects of fake information is by not offering it up. Don’t swindle your users with promises your product or service can’t keep.

Ultimately, we must question what we read—as citizens and consumers. Why was that article written, who wrote it, and what are their intentions? Is the content sourced from reputable experts? It’s a lot of effort to dig into everything we read, especially amid the instant gratification of the Information Age, but that’s what a post-truth world requires of us.

Take this post, for example. Are you taking everything I wrote and argued at face value? Or are you being diligent and checking my sources and my sources’ sources? You should be. Because here’s the big question: how sure are you that you’re not being hoodwinked right now? There’s only one way to find out.

In part three of this series, I’ll explore how our current business model perpetuates fake news, and what we can do about it.

The post Fake News Is a Marketing Feature, Not a Hack: Part 2 appeared first on Portent.