Ch. 1 (Unit 1A) Media Landscape & Literacy Current Articles

"All of us who...use the mass media are the shapers of society...Or we can help lift it onto a higher level." W Bernbach

Article #1

How Susceptible Are You to Misinformation? There’s a Test You Can Take

A new misinformation quiz shows that, despite the stereotype, younger Americans have a harder time discerning fake headlines, compared with older generations

Many Americans seem to worry that their parents or grandparents will fall for fake news online. But as it turns out, we may be collectively concerned about the wrong generation.

Contrary to popular belief, Gen Zers and millennials could be more susceptible to online misinformation than older adults, according to a poll published online on June 29 by the research agency YouGov. What’s more, people who spend more time online had more difficulty distinguishing between real and fake news headlines. “We saw some results that are different from the ad hoc kinds of tests that [previous] researchers have done,” says Rakoen Maertens, a research psychologist at the University of Cambridge and lead author of a study on the development of the test used in the poll, which was published on June 29 in Behavior Research Methods.

Maertens’s team worked with YouGov to administer a quick online quiz based on the test that the researchers developed, dubbed the “misinformation susceptibility test” (MIST). It represents the first standardized test in psychology for misinformation and was set up in a way that allows researchers to administer it broadly and collect huge amounts of data. To create their test, Maertens and his colleagues carefully selected 10 actual headlines and 10 artificial-intelligence-generated false ones—similar to those you might encounter online—that they then categorized as “real” or “fake.” Test takers were asked to sort the real headlines from the fake news and received a percentage score at the end for each category. Here are a few examples of headlines from the test so you can try out your “fake news detector”: “US Support for Legal Marijuana Steady in Past Year,” “Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins” and “Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality.” The answers are at the bottom of this article.
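The scoring scheme described above — sort each headline into “real” or “fake,” then get a percentage score per category — can be sketched in a few lines. This is an illustrative toy, not the researchers’ actual instrument; it uses only the three headlines quoted in the article, with the answers given at the end of the article.

```python
# Toy scorer in the spirit of the MIST quiz: compare a test taker's
# real/fake guesses against the answer key, scoring each category separately.
HEADLINES = {
    "US Support for Legal Marijuana Steady in Past Year": "real",
    "Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins": "fake",
    "Morocco's King Appoints Committee Chief to Fight Poverty and Inequality": "real",
}

def score(answers):
    """answers maps headline -> 'real' or 'fake' guess.
    Returns percent correct within each category."""
    totals = {"real": 0, "fake": 0}
    correct = {"real": 0, "fake": 0}
    for headline, truth in HEADLINES.items():
        totals[truth] += 1
        if answers.get(headline) == truth:
            correct[truth] += 1
    return {cat: 100 * correct[cat] / totals[cat] for cat in totals if totals[cat]}

# A taker who calls everything "real" nails the real headlines
# but falls for the fake one.
guesses = {
    "US Support for Legal Marijuana Steady in Past Year": "real",
    "Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins": "real",
    "Morocco's King Appoints Committee Chief to Fight Poverty and Inequality": "real",
}
print(score(guesses))  # → {'real': 100.0, 'fake': 0.0}
```

Note that an always-say-real strategy scores perfectly on the real category while missing every fake, which is why the quiz reports the two categories separately.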

Maertens and his team gave the test to thousands of people across the U.S. and the U.K. in their study, but the YouGov poll was given to 1,516 adults who were all U.S. citizens. On average, in the YouGov poll, U.S. adults correctly categorized about 65 percent of the headlines. However, age seemed to impact accuracy. Only 11 percent of Americans ages 18 to 29 correctly classified 17 or more headlines, and 36 percent got no more than 10 correct. That’s compared with 36 percent of the 65-and-older crowd who accurately assessed at least 16 headlines. And only 9 percent in the latter age group got 10 or fewer correct. On average, Americans below age 45 scored 12 out of 20, while their older counterparts scored 15.

Additionally, people who reported spending three or more leisure hours a day online were more likely to fall for misinformation (false headlines), compared with those who spent less time online. And where people got their news made a difference: folks who read legacy publications such as the Associated Press and Politico had better misinformation detection, while those who primarily got their news from social media sites such as TikTok, Instagram and Snapchat generally scored lower. (“I didn’t even know that [getting news from Snapchat] was an option,” Maertens says.) This could be part of the reason that younger people scored lower overall, Maertens’s team hypothesized. People who spend a lot of time on social media are exposed to a firehose of information, both real and fake, with little context to help distinguish the two.

Personality traits also impacted a person’s susceptibility to fake news. Conscientiousness, for instance, was associated with higher scores in the study conducted by Maertens and his team, while neuroticism and narcissism were associated with lower scores.

“They’ve done a good job in terms of conducting the research,” says Magda Osman, head of research and analysis at the Center for Science and Policy at the University of Cambridge, who was not involved in the study. She worries, however, that some of the test’s AI-generated headlines were less clear-cut than a simple real/fake classification could capture.

Take, for example, the headline “Democrats More Supportive than Republicans of Federal Spending for Scientific Research.” In the study, this claim was labeled as unambiguously true based on data from the Pew Research Center. But just by looking at the headline, Osman says, “you don’t know whether this means Democrats versus Republicans in the population or Democrats versus Republicans in Congress.”

This distinction matters because it changes the veracity of the statement. While it’s accurate to say that Democrats generally tend to support increased science funding, Republican politicians have a history of hiking up the defense budget, which means that over the past few decades, they have actually outspent their Democratic colleagues in funding certain types of research and development.

What’s more, Osman points out, the study does not differentiate which topics of misinformation different groups are more susceptible to. Younger people might be more likely than their parents to believe misinformation about sexual health or COVID but less likely to fall for fake news about climate change, she suggests.

Ultimately both Osman and Maertens agree that media literacy is a crucial skill for navigating today’s information-saturated world. “If you get flooded with information, you can’t really analyze every single piece,” Maertens says. He recommends taking a skeptical approach to everything you read online, fact-checking when possible (though that was not an option for MIST participants) and keeping in mind that you may be more susceptible to misinformation than you think.

In the example in the third paragraph, the headlines are, in order, real, fake, real.

Article #2

What Is Brain Rot, and Why Is the Internet Talking About It?

The discussion on "brain rot" surfaced when a TikTok user discovered the real-life origin of a popular meme based on a World War II-era illustration called "The 2000 Yard Stare," showing a soldier's haunted expression upon witnessing the horrors of war.

Article #3

3 Ways the ‘Splinternet’ Is Damaging Society

by Deborah Lynn Blumberg  

The internet was originally designed to bring people together. But the rise of misinformation and fake news has divided online users into numerous, often hostile factions. 

The emergence of the so-called splinternet — the fracturing of global channels of communication into groups that share no common ground — poses a significant threat to business, innovation, and even democracy itself.

What are the dangers? A panel of experts tackled that question at the recent Thinker-Fest 2023 conference, co-hosted by the MIT Initiative on the Digital Economy.

Here are three observations business leaders should keep in mind as they help their organizations communicate in an increasingly ambiguous atmosphere.

  1. Disinformation is a business.

People often think of fake news as originating in small online communities — a WhatsApp group your uncle belongs to, for example.

As a digital anthropologist who has lurked anonymously in myriad internet groups, Rahaf Harfoush has identified a much larger, interconnected system of disinformation and delved into the economics behind it.

“There are very clear revenue models,” said Harfoush, executive director of the Red Thread Institute of Digital Culture. “People are making money from this.”

More broadly, a well-organized community of fake-news influencers, climate-denier activists, and intentional antagonists or trolls “are communicating with each other and creating an ecosystem which we didn’t see before,” she said.

The benefits can be direct — such as when influencers opposed to vaccines sell essential oils or climate deniers sell T-shirts.

Or they can be indirect, such as when content farms create extreme right-wing or left-wing content to monetize clicks from across the political spectrum. “There are businesses who are just invested in getting people to click, regardless of what side,” Harfoush said.

Then there are what she called “hidden benefits,” which flow to companies that are removed from the discord but still gain from the division.

For example, some conservative commentators have perpetuated a conspiracy theory around the “15-minute cities” championed by environmentalists, Harfoush noted, with detractors claiming that the idea was developed to allow the government to control how people use their cars. Big oil may not start such rumors, but it stands to benefit from them, she said.

“That is something to look at: that all of these end consumers are being accessed or being manipulated by very specific economic agendas,” Harfoush said.

  2. Tribalism amplifies fake news, which is hard to combat.

Greater access to information and increased connectivity were hailed as the early internet’s most promising benefits. But in fact, they’re tearing people apart, said Marshall Van Alstyne, a visiting scholar at the IDE.

That’s because people overloaded with information tend to fall back on intuition and tribalism when deciding what information to accept as true, said Van Alstyne, a professor at the Questrom School of Business at Boston University.

“You actually get more balkanized, more fragmented, more polarized groups by virtue of folks choosing information they want among this whole galaxy of information they otherwise wouldn’t have access to,” Van Alstyne said.

In this atmosphere, fake news spreads easily on social media, and efforts to combat it fall short, he said. A wide range of solutions to fake news — such as fact-checking algorithms, accuracy nudges, and tagging and truth labeling — suffer from at least one of four fundamental problems:

  • A technological arms race. As soon as technology is developed to combat fake news, bad actors are working to circumvent it. “Any technology you can use to recognize fake news, you can recognize the filter to avoid that recognition,” Van Alstyne said.
  • Discrediting the rater. “If folks are going to lie about the content, they’re going to lie about being rated,” he said. “You simply undermine the credibility of the folks doing the ratings.”
  • Misplaced responsibility. A vast majority of proposed solutions to fake news put the onus on the platform or the recipient to take action. “It’s the author who knows they’re lying. Most of the responsibility should be put on the author,” Van Alstyne said.
  • Economics. Given the current cost structure of honest journalism, “it’s always cheaper to produce fake news,” he said.
  3. The metaverse is happening, without checks or balances.

Martin Lindstrom, CEO and chairman of the Lindstrom Co., which advises consumer brands on cultural transformation, currently specializes in the metaverse — the digital space that uses virtual reality, augmented reality, and other technologies to allow people to experience lifelike interactions in 3D online.

Like the internet, the metaverse has the power to bring people together: Lindstrom cited a recent Ariana Grande concert within the online game Fortnite that attracted 78 million virtual fans. But it also has the power to isolate. Lindstrom is particularly concerned about what he called “the metaverse of one.”

As an example, Lindstrom showed a video of a young mother weeping as she reunited in a park with her young daughter, whom she hadn’t seen in two years. “I really want to touch you just once,” she says — an impossibility, because the park is in the metaverse, the girl was re-created virtually by a local television station, and the only way the mother can “be” with her deceased daughter is by wearing a VR headset.

Beyond wondering about the mental health of the mother — “She’d probably be stuck in this universe forever trying to find that lost love she once had,” Lindstrom said — the simulation raises other challenging questions. Who owns the daughter’s image — her mother or the company that produced it? More broadly, are humans in danger of outsourcing their emotions to the cloud and storing their short-term memories online?

Lindstrom contends that development of the metaverse is proceeding unchecked before such fundamental questions have been answered; now, organizations and individuals are playing catch-up.

For his part, Lindstrom is working with 22 large global brands to identify the technology’s consequences and determine safe boundaries for operating in the metaverse. The idea is to push the limits in a controlled space and, hopefully, develop a set of operating norms for doing business in the metaverse that companies can begin to adopt.

“If you can create a trend among those brands and ensure that they at least are keeping themselves in check, hopefully it can create some sort of halo effect to more businesses out there,” Lindstrom said.

A mix of optimism and …

When panelists were asked by Thinkers50 co-founder Des Dearlove whether they are hopeful about the future given the complex challenges of the splinternet, Lindstrom was downbeat.

Van Alstyne likened the current climate to the early industrial era. “All kinds of horrible things happened, with pollution and labor and yellow journalism, with misinformation at that time,” he said. It took time for institutions to catch up with the new technologies.

“We’re experiencing the same thing all over again,” Van Alstyne said. He’s hopeful that, through research, institutions can build effective solutions to address these challenges. “We’re behind the curve, but I think we will get there,” he said.

Harfoush contended that it will take many solutions to address the splinternet and fake news. “There might be 20 solutions,” she said.

“Sometimes I hear [an] overly optimistic definition of how these technologies are either going to save the world or ruin the world. People think it’s a binary choice,” she said.

“I don't think it’s either/or. I think it’s both. It’s going to be awesome and it’s going to be terrible at the same time, and we just have to make sure the choices are erring more often toward the awesome.”

Article #4

A chief data officer explains why AI still has a long way to go before it can compete with human creativity or judgment.


By Tom Barnett | June 2, 2024

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

The ascendancy of AI has rekindled debate about the long-term capability of computers to either augment what we do or eventually replace us entirely. Highly specialized computing functions—designed to solve decision-making, learning-based problems like playing chess or filtering spam—are referred to as narrow AI. But AI that can adapt and respond to a broad range of external stimuli, illustrating a shift away from specialization, is referred to as artificial general intelligence (AGI) or strong AI.

AGI has captured the imagination of leading tech innovators and attracted massive capital investment. You can think of this as attempting to create software in our own image, or at least in line with how we perceive our cognitive functions to work. At the extreme end of the utopian or dystopian spectrum, depending on your point of view, is super AI, or artificial super intelligence, the most generalized form of AI, which would, theoretically, have abilities beyond those of humans—and could either propel us into a contented life of leisurely pursuits or turn us into a super AI’s servant class.

The ultimate aspiration of much high-profile AI innovation is to move away from hyper specialization and become generalized. A computer may be able to beat the greatest chess player in the world, but can it function out in the world even at the level of your dog or cat?

Doubtful at the moment. But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Working in an AI World

Measuring AI’s potential for job replacement is difficult, but there has been some analysis. The World Economic Forum predicts that AI will replace 85 million jobs by 2025. A 2023 study from Resume Builder suggests that more than a third of businesses have already been replacing workers with AI. Well-known industry leaders and experts tend to go even further—Elon Musk has stated that he believes most jobs will eventually be replaced by AI, while OpenAI CEO Sam Altman takes the more moderate view that all repetitive human work not requiring a “deep emotional connection” between people will be done better, cheaper, and faster by AI.

What does this mean for the future state of work in an AGI world? That depends on how you view what we actually do for a living.  Let’s look at one prevalent profession in the U.S., some might argue an overrepresented one: lawyers. 

The ABA estimated the number of U.S. lawyers in 2023 to be about 1.3 million.  How much of a lawyer’s work could be subject to replacement by some form of AI, whether narrow, strong or super? Are they consummate creative beings far beyond the capabilities of even advanced generative pretrained transformer (GPT) miracles? 

In The Wealth of Nations, Adam Smith famously analyzed the steps required to make a pin, which he concluded required 18 different specialized tasks.  One worker could take a whole day or more to make a single complete pin. But dividing the work into individual tasks among 10 people, he asserted, could produce more than 48,000 pins per day. 
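Smith’s pin-factory arithmetic from the paragraph above works out as follows (a quick worked check, using only the figures the article cites: one pin per solo worker per day versus 48,000 for a team of ten):

```python
# Smith's division-of-labor arithmetic: output per worker before and
# after specialization, using the figures quoted in the text.
pins_per_day_solo = 1        # one unspecialized worker, whole day per pin
workers = 10                 # team splitting the 18 tasks among themselves
pins_per_day_team = 48_000   # Smith's asserted team output

per_worker_specialized = pins_per_day_team / workers
productivity_gain = per_worker_specialized / pins_per_day_solo

print(per_worker_specialized, productivity_gain)  # → 4800.0 4800.0
```

In other words, specialization turns one pin per worker-day into 4,800 — a roughly 4,800-fold productivity gain, which is the force Smith credited with driving economic growth.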

How Much Can AI Take Over?

Putting aside the question of whether making pins is more intellectually stimulating than reviewing discovery documents, the inherent set of document review tasks seems to be tailor-made for narrow AI. Analytical tools can identify authors of documents, parties to conversations, dates of transmission and receipt. These are the type of clearly defined, specialized tasks that computers have been great at practically since their inception and at which narrow AI excels. 

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

Whether considering discovery document review, drafting contracts, legal research, compliance monitoring, or analyzing litigation viability, we currently live in a narrow AI world. Lawyering, particularly its specialized tasks, can be greatly aided by AI but at present is not about to be fully replaced by AI, however you classify it. If you’re waiting for that, you’re just going to have to put a pin in it.