
Ch. 9 (Unit 3A) Online and Mobile Media Current Articles

"Technology is a useful servant but a dangerous master." - Christian Louis Lange

Article/Video #1

Apps That Use AI to Undress Women in Photos Are Soaring in Use

 

Article #2

A letter signed by current and former OpenAI, Anthropic and Google DeepMind employees asked firms to provide greater transparency and whistleblower protections.

By Pranshu Verma and Nitasha Tiku
Updated June 4, 2024 at 12:13 p.m. EDT|Published June 4, 2024 at 10:11 a.m. EDT


OpenAI CEO Sam Altman arrives for a bipartisan Senate forum on artificial intelligence. (Elizabeth Frantz for The Washington Post)

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned in a Tuesday letter that the technology poses grave risks to humanity, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google’s DeepMind, said AI can exacerbate inequality, increase misinformation and allow AI systems to become autonomous and cause significant death. Though these risks could be mitigated, corporations in control of the software have “strong financial incentives” to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

“They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood.”

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that “rigorous debate is crucial given the significance of this technology.” Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the “few people” who can hold corporations accountable. They noted that they are hamstrung by “broad confidentiality agreements” and that ordinary whistleblower protections are “insufficient” because they focus on illegal activity, and the risks they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections. Those principles include a commitment to not enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and a promise to not retaliate against current and former employees who share confidential information to raise alarms “after other processes have failed.”

“He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working,” she told “The TED AI Show” in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered “godfathers” of AI, and renowned computer scientist Stuart Russell.


Article #3

AI-generated images are everywhere. Here's how to spot them

By Shannon Bond, June 13, 2023, 12:10 AM ET

Amid debates about how artificial intelligence will affect jobs, the economy, politics and our shared reality, one thing is clear: AI-generated content is here. Chances are you've already encountered content created by generative AI software, which can produce realistic-seeming text, images, audio and video.

So what do you need to know about sorting fact from AI fiction? And how can we think about using AI responsibly?

How to spot AI manipulation

Thanks to image generators like OpenAI's DALL-E 2, Midjourney and Stable Diffusion, AI-generated images are more realistic and more available than ever. And technology to create videos out of whole cloth is rapidly improving, too. The current wave of fake images isn't perfect, however, especially when it comes to depicting people. Generators can struggle with creating realistic hands, teeth and accessories like glasses and jewelry. If an image includes multiple people, there may be even more irregularities.

Take the synthetic image of the Pope wearing a stylish puffy coat that recently went viral. If you look closer, his fingers don't seem to actually be grasping the coffee cup he appears to be holding. The rim of his eyeglasses is distorted.

Another set of viral fake photos purportedly showed former President Donald Trump getting arrested. In some images, hands were bizarre and faces in the background were strangely blurred.

This story is adapted from an episode of Life Kit, NPR's podcast with tools to help you get it together.

Synthetic videos have their own oddities, like slight mismatches between sound and motion and distorted mouths. They often lack the facial expressions or subtle body movements that real people make.

Some tools try to detect AI-generated content, but they are not always reliable.

Experts caution against relying too heavily on these kinds of tells. The newest version of Midjourney, for example, is much better at rendering hands. The absence of blinking used to be a signal a video might be computer-generated, but that is no longer the case. "The problem is we've started to cultivate an idea that you can spot these AI-generated images by these little clues. And the clues don't last," says Sam Gregory of the nonprofit Witness, which helps people use video and technology to protect human rights.

Gregory says it can be counterproductive to spend too long trying to analyze an image unless you're trained in digital forensics. And too much skepticism can backfire — giving bad actors the opportunity to discredit real images and video as fake.

Use SIFT to assess what you're looking at

Instead of going down a rabbit hole of trying to examine images pixel by pixel, experts recommend zooming out and using tried-and-true techniques of media literacy. One model, created by research scientist Mike Caulfield, is called SIFT. It stands for four steps: Stop. Investigate the source. Find better coverage. Trace the original context.

The overall idea is to slow down and consider what you're looking at — especially pictures, posts, or claims that trigger your emotions.

"Something seems too good to be true or too funny to believe or too confirming of your existing biases," says Gregory. "People want to lean into their belief that something is real, that their belief is confirmed about a particular piece of media."

A good first step is to look for other coverage of the same topic. If it's an image or video of an event — say a politician speaking — are there other photos from the same event?

Article #4

Online and Mobile Media, Social Media and Video Games Overview Flashcards