On the heels of Donald Trump’s win, some in the tech world are asking whether it’s time to reconsider the idea that fake news is the norm in our society.
For one thing, some experts have argued that the tech industry has been complicit in spreading fake news since at least 2014, when Facebook introduced a feature called “tagging” that allowed users to add their own tags to posts.
Now, some worry that Facebook’s new tag feature will only amplify fake news in the coming months, and that the problem is not just the spread of false information but also the lack of tools for people to combat it.
In the first quarter of 2019, Facebook’s platform saw more “tagged posts” (about 7,000) than in any quarter since 2013, according to a data report from Technavio, which tracks real-time sentiment on social media platforms.
By comparison, Facebook said it saw 1,094,600 total “tagged posts” in the first half of the year.
The company’s data shows that, on average, users on Facebook tagged posts at a rate of 7.5% in the second quarter of 2018, down from the 10% mark set in the third quarter of 2017.
But, Technavio’s data shows, Facebook has been trending upward over the past year: during the first nine months of 2019 alone, the company’s platform posted 5,000 “tagged post” updates.
According to a Technavio report, Facebook will have to adjust its tagging policy, likely requiring users to register with their Facebook account, create a profile, and provide a photo and video of themselves to prove that they are a real person registered on Facebook.
Facebook will also have the ability to identify people by their photo and video.
But critics of the tech giant say that Facebook is not doing enough to protect users from false news, and may even be contributing to the spread by using the tag feature.
“If they really want to address this problem, they need to start with identifying the sources of these false reports, which should include the companies that own the news, not just the content itself,” says Josh Blackman, a professor of media and technology at New York University.
“Facebook has the ability, but they’re not doing anything about it.”
On the other hand, some tech executives believe that Facebook should be more aggressive in trying to combat false information and promote responsible journalism.
“I think there’s a lot of misinformation out there and we’re not taking enough action,” Facebook VP of Product Jason Snell told CNNMoney.
“It’s an area where we’re getting a lot better at identifying the bad news.”
But he said that Facebook has done a good job in blocking fake news during the election.
“We have a robust system that works in real-world situations.
We don’t just let it propagate.”
Some critics, including TechCrunch’s Justin Ling, have also suggested that the company needs to create an entire new type of journalism called “social journalism” that could help counteract fake news by using social media to expose misinformation.
“The problem with the old approach was that it was too broad, it was a little too political,” Ling said in a recent podcast interview.
“That’s not the case anymore.
The social media community needs to be the arbiters of truth.”
The problem, Ling said, is that platforms like Facebook, Twitter, Instagram, and Snapchat can’t control how they are used.
“They can’t shut down fake news, they can’t block users from tagging, they don’t control what content gets promoted,” he said.
“So, the only way that they can really make sure that these platforms are good, is to get real, real journalism going on them.”