By Jon Emont, Georgia Wells and Mike Cherney
Scenes of Friday's New Zealand mosque massacre were streamed
live on Facebook and posted on YouTube and Twitter, a gruesome
example of how social-media platforms can be used to spread terror
despite heavy spending by their owners to contain it.
New Zealand police said the footage of the attack on a pair of
mosques, which left 49 dead, was "extremely distressing" and urged
people not to circulate it. Yet the video was widely available
online Friday as the tech platforms scrambled to pull down the
offending posts only to have them reappear elsewhere.
The 17-minute video shows a gunman walking through a mosque and
firing at worshipers who slump to the floor. At one point the man,
whose face is visible in parts of the video, appears to gun down a
victim at close range before reloading and continuing the
rampage.
A Facebook Inc. spokeswoman said the company removed the video
after New Zealand police flagged it, and deleted the Facebook and
Instagram accounts belonging to the alleged shooter, Brenton
Tarrant, who has been charged with murder.
Twitter Inc. said it had suspended Mr. Tarrant's account and was
working to remove the video from the platform. A spokesman for
YouTube, a unit of Alphabet Inc.'s Google, said it has removed
thousands of videos related to the incident and that "shocking,
violent and graphic content has no place on our platforms."
All three platforms have struggled to block, uncover and remove
violent content despite a public outcry and political pressure.
They have invested heavily in artificial-intelligence systems
designed to detect violence, and have hired thousands of moderators
to review content flagged by users.
But the sheer volume of material posted by the platforms'
billions of users, along with the difficulty in evaluating which
videos cross the line, has created a minefield for the
companies.
In addition, even once the mainstream platforms take action,
disturbing or offensive content often lives on in darker corners of
the web. Late Friday, for example, the video of the shooting was
widely available for streaming or download on sites including 4Chan
and Gab, popular among right-wing extremists and free-speech
absolutists.
"This latest atrocity only underscores the fact that there is no
responsible way to offer a live-streaming social media service,"
said Mary Anne Franks, a law professor at the University of Miami
and president of the Cyber Civil Rights Initiative, which advocates
for legislation to address online abuse.
After the 2016 launch of the video service Facebook Live, dozens of
violent acts were broadcast in real time, including the 2017 murder
of a Cleveland man. At the time, Facebook acknowledged that its
process for reviewing content contained flaws and pledged to improve
it.
On the flip side, Facebook Live in 2016 carried footage of the
aftermath of the Minnesota shooting of Philando Castile, who died
after being shot by a police officer during a traffic stop, an
example that many say shows the potential benefits live-streaming
can provide.
"I think livestreaming is, on balance, good for the world -- the
ability to livestream police violence, as in the case of Philando
Castile, has been extremely powerful in holding authorities
accountable," said Ethan Zuckerman, director for the Center for
Civic Media at the Massachusetts Institute of Technology. "The
issue is responding to reports of violent livestreams in time, and
hardening platforms against redistribution of this content."
Jennifer Grygiel, an assistant professor of communications at
Syracuse University, suggested that during a tragedy YouTube put a
hold on videos containing pertinent keywords and moderate them
before they are posted. "This is content that violates their
community standards, so I'm not asking them to do anything beyond
what they have said they would do," Prof. Grygiel said.
YouTube didn't respond to a request for comment about the idea
of a delay.
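Prof. Grygiel's proposal amounts to a review queue keyed to
crisis-related terms. Below is a minimal sketch of that idea in
Python, assuming a hypothetical keyword list and stand-in publish
and review functions; it illustrates the concept only and does not
describe YouTube's systems.

```python
# Illustrative only: a crude version of the keyword "hold" Prof. Grygiel
# describes. Uploads whose titles match crisis-related terms are routed to a
# human-review queue instead of publishing immediately. The keyword list and
# function names are hypothetical, not YouTube's.
CRISIS_KEYWORDS = {"christchurch", "mosque", "shooting"}

def route_upload(title, publish, hold_for_review):
    """Send keyword-matching uploads to review; publish everything else."""
    text = title.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        hold_for_review(title)
        return "held"
    publish(title)
    return "published"

# Stand-in callbacks for demonstration.
print(route_upload("Christchurch mosque attack footage", print, print))  # held
print(route_upload("Weekend cooking vlog", print, print))                # published
```

The trade-off is latency: during a fast-moving event, every matching
upload, including legitimate news coverage, waits for a moderator.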
Facebook says it has more than 15,000 contractors and employees
reviewing content, part of a 30,000-person department working on
safety and security issues at the company. The department includes
engineers building technical tools to block graphic content, as
well as employees dubbed "graphic violence specialists." Those
specialists decide whether violent images posted on the site have
social or news value or whether, as in the case of beheadings, they
are meant to terrorize and have no place on the site, Monika
Bickert, Facebook's head of global policy management, said in an
interview in February.
YouTube likewise makes exceptions for violent content that it
deems to have documentary or news value.
After the New Zealand shooting, Facebook's content-policy team
designated the incident as a terrorist attack, meaning that any
praise or support of the event violates the company's rules.
Facebook teams have also been deleting the accounts of people who
impersonate the shooter or allege the incident didn't happen, a
spokeswoman said.
After the live video was removed, Facebook set up a filter to
detect and delete any similar videos, and is using artificial
intelligence to find videos that aren't an exact match but also
depict the shooting.
"We are adding each video we find to an internal database which
enables us to detect and automatically remove copies of the videos
when uploaded again," the spokeswoman said. "We urge people to
report all instances to us so our systems can block the video from
being shared again."
She said Facebook notifies other sites when it detects links to
the video hosted elsewhere so those platforms can delete them.
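In broad strokes, the "internal database" the spokeswoman describes
works like a fingerprint index: each removed video is reduced to a
compact signature, and new uploads are checked against the stored
signatures. The sketch below uses an exact SHA-256 hash purely for
illustration; real matching systems rely on perceptual fingerprints
that survive re-encoding and cropping, which is why Facebook says it
also needs AI for non-exact copies.

```python
# Minimal sketch of fingerprint-based re-upload blocking. SHA-256 over the raw
# bytes only catches identical files; this is an assumption for illustration,
# not how Facebook's matching actually works.
import hashlib

class KnownVideoIndex:
    """Stores fingerprints of videos that moderators have already removed."""

    def __init__(self):
        self._fingerprints = set()

    @staticmethod
    def _fingerprint(video_bytes):
        return hashlib.sha256(video_bytes).hexdigest()

    def add(self, video_bytes):
        self._fingerprints.add(self._fingerprint(video_bytes))

    def is_known(self, video_bytes):
        return self._fingerprint(video_bytes) in self._fingerprints

index = KnownVideoIndex()
index.add(b"...bytes of a removed video...")
print(index.is_known(b"...bytes of a removed video..."))  # True: block the upload
print(index.is_known(b"...re-encoded variant..."))        # False: falls to AI matching
```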
Artificial-intelligence experts said no technology is available
that would allow for foolproof detection of violence on streaming
platforms. Even teaching machines to recognize a person brandishing
a gun is difficult, because there are many types of guns and many
ways of holding them. Computers also struggle to distinguish real
violence from violence staged in fictional films.
"There's a perception that AI can do everything and detect
everything, but it's a matter of how much room do you leave to
produce false alerts," said Itsik Kattan, the CEO of Agent Video
Intelligence, a video-analytics company specializing in AI
applications for surveillance.
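Mr. Kattan's point about false alerts comes down to a threshold
choice: a detector that scores footage for likely violence can be
tuned to alert more often, catching more real events but burying
reviewers in false positives, or less often and miss events. A toy
illustration with made-up scores, not any vendor's actual system:

```python
# Hypothetical per-frame "violence" scores from a detector (invented numbers).
scores = [0.10, 0.35, 0.62, 0.91, 0.40]

def frames_to_review(frame_scores, threshold):
    """Return indices of frames whose score clears the alert threshold."""
    return [i for i, s in enumerate(frame_scores) if s >= threshold]

print(frames_to_review(scores, threshold=0.9))  # [3]: few alerts, real events may slip through
print(frames_to_review(scores, threshold=0.3))  # [1, 2, 3, 4]: wider net, more false alerts
```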
Taking down violent videos often doesn't stop their spread.
Sidney Jones, director of the Institute for Policy Analysis of
Conflict in Jakarta, said that by the time big tech companies
remove violent videos, they have often been spread via email and
messaging applications and remain accessible. Islamic State
live-streamed terrorist attacks to gain followers and attention,
she said, and now other violent actors are using social media in a
similar way.
"It's the classic objective of terror, which is to sow the idea
that you will be next," Ms. Jones said.
--Yoree Koh and Rob Copeland contributed to this article.
Write to Jon Emont at jonathan.emont@wsj.com, Georgia Wells at
Georgia.Wells@wsj.com and Mike Cherney at mike.cherney@wsj.com
(END) Dow Jones Newswires
March 15, 2019 19:21 ET (23:21 GMT)
Copyright (c) 2019 Dow Jones & Company, Inc.