Newsletter Subject

Misinformation is winning the war on misinformation

From

vox.com

Email Address

newsletter@vox.com

Sent On

Thu, Jun 20, 2024 08:15 PM

Email Preheader Text

Online falsehoods are as bad as they've ever been. Does anyone care? Hi! It's A.W. This is my final

Hi! It's A.W. This is my final edition of the Vox Technology newsletter. Next week, Adam Clark Estes will return to the helm. I've had a blast filling in here over the past several months. If you'd like to read my work going forward, find me on Threads or Mastodon, or sign up for my personal newsletter.

Misinformation is winning the war on misinformation

Misinformation on the internet has never been worse. Or at least that's my analysis of it, based on vibes. People on TikTok are eating up videos saying a bunch of inaccurate things about the dangers of sunscreen, while the platform's on-app Shop propels obscure books containing bogus cures for cancer onto the Amazon bestseller list. Meanwhile, the presumed Republican nominee for president is fresh off what appears to be a successful push to neuter efforts to address disinformation campaigns about the election. Also, Google's AI Overview search results told people to put glue on pizza.

But this is all anecdotal. Can I prove my hunch with data? Sadly, no. The data I — or more accurately, researchers with actual expertise on this — would need to do that is locked behind the opaque doors of the companies that run the platforms and services on which the internet's worst nonsense is hosted. Evaluating the reach of misinformation today is a grueling, indirect process with imperfect results.

For my final newsletter contribution, I wanted to find a way to assess the state of misinformation online. As I've covered this topic over and over for the past while, one question keeps popping into my head: Do companies like Google, Meta, and TikTok even care about meaningfully tackling this problem? The answer to that question, too, is imperfect. But there are some things that might lead to an educated guess.

Ways to measure misinformation are disappearing

One of the most important things a journalist can do while writing about the spread of bad information online is find a way to measure its reach. There's a huge difference between a YouTube video with 1,000 views and one with 16 million, for instance. But lately, some of the key metrics used to put supposedly "viral" misinformation into context have been disappearing from public view.

TikTok disabled view counts for popular hashtags earlier this year, shifting instead to simply showing the number of posts made on TikTok using the hashtag. Meta is shutting down CrowdTangle, a once-great tool for researchers and journalists looking to closely examine how information spreads across social media platforms, in August, just a couple of months before the 2024 election. And Elon Musk decided to make "likes" private on the platform, a decision that is bad for accountability but, to be fair, could have some benefits for normal users of X. Between all this and declining access to platform APIs, researchers are limited in how much they can really track or speak to what's going on.

"How do we track things over time? Apart from relying on the platform's word," said Ananya Sen, an assistant professor of information technology and management at Carnegie Mellon University, whose recent research looks at how companies inadvertently fund misinformation-laden sites when they use large ad tech platforms.

Making metrics disappear is basically the opposite of what a lot of experts on manipulated information recommend.
Transparency and disclosure are "key" components of reform efforts like the Digital Services Act in the EU, said Yacine Jernite, machine learning and society lead for Hugging Face, an open-source data science and machine learning platform. "We've seen that people who use [generative AI] services for information about elections may get misleading outputs," Jernite added, "so it's particularly important to accurately represent and avoid over-hyping the reliability of those services."

It's generally better for an information ecosystem when people know more about what they're using and how it works. And while some aspects of this fall under media literacy and information hygiene efforts, a portion of it has to come from the platforms and their boosters. Hyping up an AI chatbot as a next-generation search tool sets expectations that aren't fulfilled by the service itself.

Platforms don't have much incentive to care

Platforms aren't just amplifying bad information; they're making money off it. From TikTok Shop purchases to ad sales, if these companies take meaningful, systemic steps to change how disinformation circulates on their platforms, they might work against their own business interests. Social media platforms are designed to show you things you want to engage with and share. AI chatbots are designed to give the illusion of knowledge and research. But neither of these models is great at evaluating veracity, and doing so often requires limiting the scope of a platform working as intended. Slowing or narrowing how a platform like this works means less engagement, which means no growth, which means less money.

"I personally can't imagine that they would ever be as aggressively interested in addressing this as the rest of us are," said Evan Thornburg, a bioethicist who posts on TikTok as @gaygtownbae. "The thing that they're able to monetize is our attention, our interest, and our buying power. And why would they whittle that down to a narrow scope?"

Many platforms begrudgingly began efforts to take on misinformation after the 2016 US elections, and again at the beginning of the Covid pandemic. But since then, there's been kind of a pullback. Meta laid off employees from teams involved with content moderation in 2023 and rolled back its Covid-era rules. Maybe the platforms are sick of being held responsible for this stuff at this point. Or, as technology changes, they see an opportunity to move on from it.

So do they care?

Again, it's hard to quantify the efforts by major platforms to curb misinformation, which leaves me leaning once again on informed vibes. For me, it feels like major platforms are backing away from prioritizing the fight against misinformation and disinformation, and that there's a general kind of fatigue out there on the topic more broadly.

That doesn't mean that nobody is doing anything. Prebunking, which involves preemptively fact-checking rumors and lies before they gain traction, is super promising, especially when applied to election misinformation. Crowdsourced fact-checking is also an interesting approach. And to the credit of the platforms themselves, they do continue to update their rules as new problems emerge.

There's a way in which I have some sympathy for the platforms here. This is an exhausting topic, and it's tough to be told, over and over, that you're not doing enough. But pulling back and moving on doesn't stop bad information from finding audiences again and again.
While these companies assess how much they care about moderating and addressing their platforms' capacity to spread lies, the people targeted by those lies are getting hurt.

—A.W. Ohlheiser, technology writer

What a social media warning label can't do
What the surgeon general wants to do for kids' safety leaves the rest of us behind.

Will AI ever become conscious? It depends on how you think about biology.
The debate that will steer the future of consciousness — and us.

The AI bill that has Big Tech panicked
Why some tech leaders are so worried about a California AI safety bill.

Your social media diet is becoming easier to exploit
Plausible AI nonsense won't stop flooding your feed anytime soon.

Apple's convincing case that AI doesn't have to be scary
Apple Intelligence wants to be the cool dad of artificial intelligence.

Listen to This
France's far-right youth: President Emmanuel Macron has called snap elections in France that could lead to him sharing power with the far right. Le Monde's Gilles Paris explains how the anti-immigrant party of Marine Le Pen is becoming more popular among young voters.

This is cool
The most upsetting guessing game in the world

Marketing emails from vox.com

Sent On: 25/06/2024, 24/06/2024, 24/06/2024, 21/06/2024, 21/06/2024, 20/06/2024

Email Content Statistics

Subject Line Length

Data shows that subject lines with 6 to 10 words generate a 21 percent higher open rate.

Number of Words

The more words in the content, the more time the user will need to spend reading. Get straight to the point with catchy short phrases and interesting photos and graphics.

Number of Images

More images or large images might cause the email to load slower. Aim for a balance of words and images.

Time to Read

Longer reading time requires more attention and patience from users. Aim for short phrases and catchy keywords.
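
How the time-to-read figure is computed is not documented here; a common heuristic is simply word count divided by an average silent reading speed. Below is a minimal Python sketch under that assumption (the ~230 words-per-minute rate and the function name are illustrative, not SimilarMail's actual method):

```python
import math

def time_to_read_seconds(text: str, wpm: int = 230) -> int:
    # Word count divided by reading speed, rounded up to whole seconds.
    words = len(text.split())
    return math.ceil(words / wpm * 60)

print(time_to_read_seconds("word " * 460))  # -> 120 seconds for 460 words
```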

Predicted open rate

Spam Score

Spam score is determined by a large number of checks performed on the content of the email. For the best delivery results, keep your spam score as low as possible.

Flesch reading score

The Flesch reading ease score measures how complex a text is: the lower the score, the more difficult the text is to read. The score is calculated from the average sentence length (in words) and the average number of syllables per word. Text with a very high score (around 100) is straightforward and easy to read, with short sentences and no words of more than two syllables. A score of 60-70 is usually considered acceptable for web copy.
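
The formula itself is standard: reading ease = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words). Here is a minimal Python sketch using a naive vowel-group syllable counter; production scorers use dictionaries or hyphenation data, so exact numbers will differ:

```python
import re

def count_syllables(word: str) -> int:
    # Treat each run of consecutive vowels as one syllable; floor at 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Very simple text scores high (can exceed 100 by the formula).
print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```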

Technologies

What powers this email? Every email we receive is parsed to determine the sending ESP and any additional email technologies used.
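
As a rough illustration of what such parsing can involve: the sketch below scans a few standard headers for ESP fingerprints using Python's standard email library. The ESP_HINTS table and the choice of headers are assumptions made for the example, not SimilarMail's actual detection rules.

```python
from email.parser import Parser

# Illustrative hint table; real detection rules are far more extensive.
ESP_HINTS = {
    "sailthru": "Sailthru",
    "sparkpost": "SparkPost",
    "mailchimp": "Mailchimp",
}

def guess_esp(raw_email: str) -> str:
    # Parse headers only; the body is not needed for ESP detection.
    msg = Parser().parsestr(raw_email, headersonly=True)
    relevant = ("received", "x-mailer", "list-unsubscribe", "return-path")
    haystack = " ".join(
        value.lower() for name, value in msg.items() if name.lower() in relevant
    )
    for needle, esp_name in ESP_HINTS.items():
        if needle in haystack:
            return esp_name
    return "unknown"

print(guess_esp("Return-Path: <bounce@sailthru.com>\nSubject: hi\n\nbody"))
```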

Email Size (not including images)

Font Used

