Around the world, social media use has been on the rise. Despite the growing trend of younger users dividing their time among a variety of social platforms, Facebook still holds one of the top spots for adult social media use.
In 2014, there were 1.34 billion global Facebook users; by 2020, that number is expected to reach 1.69 billion. But Papua New Guinea (PNG) won’t be contributing to those numbers—at least not for the next month. PNG officials have banned Facebook for a month, citing the platform’s role in facilitating breaches of the country’s 2016 cybercrime law.
In PNG, only about 10% of people have access to the internet, but this ban is making the world take notice nonetheless. Here are the details on the PNG cybercrime law, the Facebook ban and what’s behind it.
Fake news and violence on the rise
Facebook’s Community Standards clearly outline what will and will not be tolerated. But over the last few months, the platform has become a conduit for fake news and for content that violates its rules on terrorism, sexual content and hate speech. Despite the company deleting or adding warnings to 29 million posts, Facebook has faced mounting scrutiny through the first half of the year.
Between January and March, graphic violence on Facebook spiked to 3.4 million occurrences; terrorist propaganda rose to 1.9 million. Hate speech, sexual content and spam were also at an all-time high. Facebook removed 583 million fake accounts during that time, estimating that somewhere between 3% and 4% of users on the platform were fake.
The breach of cybercrime law
PNG’s cybercrime act was established in 2016 to criminalize offenses such as defamation, forgery, hacking, unlawful advertising, cyberbullying, cyber harassment and computer fraud. As Facebook fights back against rising platform violations, the PNG government has banned Facebook for a month while it figures out how to handle the onslaught of offenses. The government is using the Facebook-free month to identify those who are misusing the platform, remove them from it and determine what Facebook use is doing to the country as a whole.
This is far from the first time Facebook has been in the news for the wrong reasons. Just last week, CEO Mark Zuckerberg stood before the European Parliament to address the E.U.’s new data privacy law, the General Data Protection Regulation (GDPR). Over the last few years, the company has faced backlash over the Cambridge Analytica scandal, shoddy data privacy practices, Russian interference on the platform and, of course, its ever-changing algorithms, which have made it difficult for both users and governments to stay ahead of the game.
Is artificial intelligence the answer?
While PNG is trying to find its own solution to the social media violations, Facebook has been hard at work, too. The company currently depends on 15,000 people to moderate its platform and ensure that users abide by its policies. Now, going deeper into the fray, it’s developing artificial intelligence (AI) to help monitor the platform and user profiles.
Facebook’s previous experimentation with AI generated rogue chatbots that went off script and created their own language. But the company still sees AI as a way to amplify its abilities to fight back against the rising number of Facebook user violations.
Facebook has already been permanently banned in countries like China, Iran and North Korea, and temporarily banned in countries like Pakistan, Syria and Egypt. And while opponents of PNG’s ban see it as an unnecessary measure, the country has yet to determine what effect Facebook has had on its people—or what effect it could have down the road.