Live broadcasting of crime is an unfortunate internet trend
Since the late 2000s, live streaming of crimes has been on the rise, with criminals deliberately sharing videos of themselves committing crimes on social media.
The New Zealand mosque shooting on 15 March 2019 was live-streamed on Facebook for a total of 17 minutes.
This was not the first internet broadcast of a violent crime.
The gunman filmed and shared the attacks using a mobile phone app called LIVE4, which allows users to broadcast directly to Facebook from personal body cameras.
Alex Zhukov, founder and Chief Technology Officer of LIVE4, said in a statement to Reuters: "The stream is not analysed, stored or processed by LIVE4 in any way, we have no ability (even if we wanted to) to look at the live streams as they are happening or after it's completed."
Facebook said it removed the stream and the gunman's account after being alerted by New Zealand police, but the circulation didn't end there: within hours of the shooting, the video had been downloaded and repackaged by multiple users.
Modified versions of the footage from the stream remained on Facebook, Twitter, and YouTube, as well as on Facebook-owned Instagram and WhatsApp. It was also available on the New Zealand-based file-sharing website Mega.nz.
Tech giants have struggled to curb the circulation of such videos: despite years of investment, stopping gory footage from spreading online remains a major challenge. On Friday, the platforms said they were moving quickly to remove the remaining copies.
Repackaged videos were difficult to detect
After Facebook stopped the Christchurch livestream, it told content moderators to delete any copies of the footage or comments praising the attack, according to an email seen by Reuters.
To evade companies' detection systems, users recorded the video as it played on their mobile or desktop screens, producing a new copy whose digital fingerprint differed from the original's. Others shared shorter versions or screenshots from the gunman's livestream.
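The "fingerprints" in question are typically perceptual hashes, compact signatures computed from a video's frames. The platforms' actual systems are proprietary; purely as an illustration, the difference-hash technique below, sketched in Python with the Pillow imaging library, shows why a straightforward re-upload still matches while a camera pointed at a screen usually does not:

```python
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: downscale a frame to grayscale and record
    whether each pixel is brighter than its right-hand neighbour,
    giving a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            i = row * (hash_size + 1) + col
            bits = (bits << 1) | int(px[i] < px[i + 1])
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits; a distance of a few bits still counts
    as a match, which is what catches simple re-uploads."""
    return bin(a ^ b).count("1")

# A re-encoded or resized copy of a frame typically lands within a few
# bits of the original hash. Re-recording a screen shifts framing,
# colour and timing enough to push the distance past a typical match
# threshold, which is exactly the evasion described above.
```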
Members of a group called "watchpeopledie" on Reddit—a discussion platform—figured out ways to share the footage even as the website took steps to limit its spread.
Users coordinated their sharing, directing each other to video apps that had yet to take action and sending footage through messaging apps. One Reddit user said in a post that they had sent a video of the attack to more than 600 people before their account was temporarily suspended for sharing violent content.
YouTube said on Friday that it was trying to identify copies with an automated tool, which flags likely violent videos based on a combination of a video's title and description, the characteristics of the uploading account, and objects matched against the Christchurch broadcast footage.
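YouTube has not disclosed how these signals are weighed. As a purely hypothetical sketch, a detector combining them might reduce to a weighted score over per-signal classifier outputs; every weight and threshold here is invented for illustration:

```python
def flag_for_review(title_score: float,
                    uploader_risk: float,
                    visual_match: float,
                    threshold: float = 0.7) -> bool:
    """Hypothetical combination of the three signals described above,
    each assumed to be a classifier output in [0, 1]. The weights and
    threshold are illustrative, not YouTube's actual values."""
    score = 0.2 * title_score + 0.3 * uploader_risk + 0.5 * visual_match
    return score >= threshold
```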
Facebook was relying on an artificial intelligence system to identify violent footage and send it to moderators. It also used audio-based detection, keying on sounds from the broadcast such as the gunshots and the music playing in the attacker's car.
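Facebook has not described its audio technique in detail. One classical approach, offered here only as an assumption-laden sketch, is to cross-correlate a suspect clip's waveform against a known snippet from the broadcast:

```python
import numpy as np

def audio_match(clip: np.ndarray, reference: np.ndarray) -> float:
    """Peak normalized cross-correlation between a suspect clip and a
    known reference snippet, both mono float arrays at the same sample
    rate (clip must be at least as long as the reference). Values near
    1.0 indicate a likely match."""
    clip = (clip - clip.mean()) / (clip.std() + 1e-9)
    reference = (reference - reference.mean()) / (reference.std() + 1e-9)
    corr = np.correlate(clip, reference, mode="valid") / reference.size
    return float(corr.max())
```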
YouTube and Facebook could automatically block exact matches of removed material from being uploaded again, but both relied in part on user complaints to catch modified copies of the footage.
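Exact-match blocking is conceptually simple: hash the raw bytes of every removed upload and reject anything with the same digest. The generic sketch below (an illustration, not either company's pipeline) also shows the approach's weakness, since any modification at all produces a new digest:

```python
import hashlib

def file_sha256(path: str) -> str:
    """SHA-256 digest of a file's raw bytes, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical store of digests taken from already-removed uploads.
blocked: set[str] = set()

def should_reject(upload_path: str) -> bool:
    """True only for byte-identical copies; trimming one frame or
    re-encoding the file yields a different digest, so modified
    versions slip through and must be caught some other way."""
    return file_sha256(upload_path) in blocked
```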
Experts said the companies could set their detection tools and removal processes to be more aggressive. YouTube and Facebook responded that they want to be careful not to remove sensitive videos that come from news organizations or otherwise have news value.
What steps are being taken?
Politicians around the globe on Friday voiced the same conclusion: Tech companies are failing. As the massacre video continued to spread, former New Zealand Prime Minister Helen Clark said companies had been slow to remove hate speech.
Facebook, the world's largest social media platform, has about 2.3 billion monthly users. By the end of 2018, it had increased the size of its safety and security team to 30,000 people, a move interpreted as an attempt to respond more quickly to offensive content reported on the platform.
Antigone Davis, Facebook's Global Head of Safety, announced in a blog post on 15 March 2019 that the company was focused on developing artificial intelligence systems to catch material without the need for users to report it first.
She said, "Finding these images goes beyond detecting nudity on our platforms. By using machine learning and artificial intelligence, we can now proactively detect near-nude images or videos that are shared without permission on Facebook and Instagram." The post added that victims, afraid of retribution, are reluctant to report the content themselves or are unaware that the content has been shared.