How many live murders will it take for Facebook to realize its live-streaming platform needs to die?

Ten? Twenty? So far, that number hasn't been reached.

Dozens of people have died by murder or suicide on Facebook Live since its launch in 2015, and many more have been raped or abused while thousands of people watched from their feeds.

Just yesterday, a Thai man broadcast the murder of his infant daughter on Facebook Live before taking his own life. The video was reportedly available for 24 hours before being taken down, during which time it racked up over 400,000 views. Earlier this month, Steve Stephens posted a video in which he bragged about killing a man named Robert Godwin Sr. He wasn't just bragging: a video of the actual killing was posted to Facebook as well, and stayed up for three hours before being removed. Stephens later shot himself after a police chase, having spent two days at large.

Another live broadcast in March showed the sexual assault of a 15-year-old girl. And in Sweden, three men were jailed in connection with the rape of a woman that was live-streamed on Facebook for all to see. Even earlier than that were the police shooting deaths of Alfred Olango, Keith Lamont Scott and Philando Castile, all of which made live appearances on America's favorite social network.

No one would blame you for arguing that live depictions of these violent crimes are beneficial in identifying and arresting perpetrators. Nor would you be wrong to point out that the immediacy with which these videos are broadcast can help alert loved ones while acting as a call to arms in the fight for justice. After all, once a video is uploaded, its virality makes it hard to delete, which matters when a corrupt adversary with the legal power to confiscate devices and suppress footage (like the police) is involved.

However, Facebook Live isn't simply recording violence and supplying those recordings as legal evidence. It's breeding a unique style of violence, one that speaks to both the desperation with which people seek attention through social media and the degree to which that desperation can blind judgment and action. Murders, rapes and suicides that take place specifically and intentionally in front of a live, digital audience are cries for help and validation amplified through megawatt loudspeakers, and Facebook Live is the platform from which people can bellow.

Of course, Facebook’s own systems intensify the problem. The site aggressively pushes live video to the top of the News Feed to encourage interaction, allowing videos to snowball and rapidly reach large audiences.

That's fine when the content in the video isn't, say, someone being raped. But when it is (and it increasingly seems to be), shit goes south: those large audiences outpace Facebook’s internal ability to nix inappropriate content before it reaches the masses. Videos can be reported for things like spam or offensive content (murder, perhaps?), but the real-time nature of live video seems to be beyond Facebook’s moderation capabilities.

Seattle University criminal psychologist Jacqueline Helfgott told the Guardian she thinks this will only get worse. Criminals have always used the media to amplify their actions and garner attention, but social media makes that process easier than it should be.

“A medium like Facebook where a person can instantly achieve celebrity or notoriety can be a risk factor for certain types of criminal behavior,” she said, explaining that violent images transcend language and culture, allowing them to proliferate across social networks with unbridled ease, which in turn can encourage copycat behavior in people inclined to commit crime.

So, what's Facebook doing about it?

Not much. Beyond acknowledging that live-streamed rape and murder are "bad," Facebook has done little to address this disturbing new trend.

It is, however, under mounting pressure to do so. At the company's recent F8 conference, Facebook CEO Mark Zuckerberg told the audience, "We have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening."

Justin Osofsky, Facebook's vice president for global operations and media partnerships, also wrote a blog post announcing that Facebook is reviewing how it handles violent videos and that it is working on a solution. 

"We prioritize reports with serious safety implications for our community, and are working on making that review process go even faster," Osofsky said.

One way Facebook is trying to do this is by implementing artificial intelligence (AI) systems that can scan photos and videos and flag inappropriate content for Facebook's team of reviewers before it spreads virally. But that's still a ways off, and Facebook isn't exactly racing to get there: at F8, there was no formal acknowledgment of any plan to use AI to stop videos like these.

Until then, what we have instead is a woefully inadequate response from Facebook, which typically states that such “content” is not allowed by its terms of service (as if that would stop anyone from murdering their child if it were already on their afternoon schedule) or flaccidly points out that nobody reported the video until it was too late.

So, while Facebook mounts an actionable response, we're proposing this: put Facebook Live to bed, already. Give it a glass of warm milk, read it a bedtime story, and call it a night.

It's not functioning as intended. Plain and simple. Instead of shining a spotlight on the parts of humanity that connect and unite us, it's turned a glaring beam on the worst parts of human nature; parts people sometimes need to see in order to push for change (particularly in cases involving unlawful force by police), but that don't need to be shown to unwitting users who are there to upload engagement photos, not stumble across a murder-suicide.

Moreover, some people will always be inclined to commit violence, but we encourage that instinct by providing platforms like Facebook Live that super-size the impact of their behavior.

Facebook would do well to acknowledge more directly that sharing, as it knows and intends it, is not always the sunny, connective process it's meant to be. After all, sometimes to share is to pass on something dark and impossible to unsee. At the very least, a more composed recognition of its role in spreading that kind of violence to the masses (however unintentional) would serve better in the interim than a flaccid mention of AI content robots, no? Facebook's users are looking for reassurance that the media giant is doing something about this, and hearing it come to terms with the problem would feel like a solid first step.

“If they can’t handle [moderation] in regular reality, I don’t see how these new tools are going to escape these problems,” Sarah T. Roberts, a UCLA academic who specializes in commercial content moderation, told the Guardian.

Helfgott sums it up: “The more we have technology that blurs the boundaries between fantasy and reality, the more people are susceptible to media-mediated violence.”

Until Facebook can figure out a way to filter that out, maybe its Live project needs to sit this one out, yeah?