Fake news prompts algorithm experiments at Facebook
We’ve written a lot lately about fake news, not only because of its influence on our recent election but also because the search and social giants are making strides to change their algorithms to filter fake news out. And every algorithm tweak has the potential to affect us (machines aren’t perfect!).
So the latest in the fake news debacle is Facebook stepping up to the plate; previously, the company had only made a couple of wording changes in its terms of service. According to The New York Times, Facebook has started running experiments to see how it can combat the fake news issue. Any time Facebook adds further filtering to what people see on its network, you could see a change in traffic, either positive or negative.
Mike Isaac for NYT writes, “The maneuvers the company is trying include one that makes it easier for its 1.8 billion members to report fake news. Facebook is also creating partnerships with outside fact-checking organizations to help it more clearly indicate when articles are false, as well as changing some ad practices to choke off the economics of fake news purveyors.”
“Facebook is in a tricky position with these tests. The company has long regarded itself as a neutral place where people can freely post, read and view content, and it has said it does not want to be an arbiter of truth. But as the social network’s reach and influence has grown, it has had to confront questions about its moral obligations and ethical standards in what it presents,” says Isaac.
Adam Mosseri, the Facebook vice president in charge of its News Feed, says that the changes (which, if successful, may be rolled out to a wider audience) are the result of many months of internal discussion about how to handle false news articles shared on the network, according to the NYT article.
How Facebook will begin experimenting
Isaac writes:
“In August, the company announced changes to marginalize what it considered ‘clickbait,’ the sensational headlines that rarely live up to their promise. This year, Facebook also gave priority to content shared by friends and family, a move that shook some publishers that rely on the social network for much of their traffic. The company is also constantly fine-tuning its algorithms to serve what its users most want to see, an effort to keep its audience returning regularly.
“This time, Facebook is making it easier to flag content that may be fake. Users can currently report a post they dislike in their feed, but when Facebook asks for a reason, the site presents them with a list of limited or vague options, including the cryptic ‘I don’t think it should be on Facebook.’ In Facebook’s new experiment, users will have a choice to flag the post as fake news and have the option to message the friend who originally shared the piece to let them know the article is false.
“If an article receives enough flags as fake, it can be directed to a coalition of groups that would perform fact-checking, including Snopes, PolitiFact and ABC News. Those groups will check the article and can mark it as a ‘disputed’ piece, a designation that will be seen on Facebook.”
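To picture the shape of that flag-then-fact-check flow, here’s a minimal sketch in Python. Everything in it is our own assumption for illustration: the threshold of 100 flags, the class and function names, and the stubbed review call. The only details taken from the article are that users can flag a post, that heavily flagged posts go to the fact-checking coalition, and that confirmed-false pieces get a “disputed” label.

```python
# Hypothetical sketch of a flag -> threshold -> fact-check -> label pipeline.
# Names, the threshold value, and the review stub are illustrative assumptions,
# not Facebook's actual implementation.

FLAG_THRESHOLD = 100  # assumed number of "fake news" flags before review
FACT_CHECKERS = ["Snopes", "PolitiFact", "ABC News"]  # coalition named in the article

class Post:
    def __init__(self, post_id, url):
        self.post_id = post_id
        self.url = url
        self.fake_flags = 0
        self.status = "normal"  # "normal" -> "under_review" -> "disputed" or "cleared"

def flag_as_fake(post):
    """A user flags the post as fake news."""
    post.fake_flags += 1
    if post.fake_flags >= FLAG_THRESHOLD and post.status == "normal":
        post.status = "under_review"
        send_to_fact_checkers(post)

def send_to_fact_checkers(post):
    """Route the post to the fact-checking coalition (stubbed here)."""
    verdicts = [review(org, post) for org in FACT_CHECKERS]
    if any(v == "false" for v in verdicts):
        post.status = "disputed"  # the label shown on Facebook, per the article
    else:
        post.status = "cleared"

def review(org, post):
    """Placeholder: a real integration would hand off to each organization's
    own review process and return its verdict asynchronously."""
    return "false"

# Example: 100 users flag the same post, pushing it over the threshold.
post = Post("p1", "http://example.com/article")
for _ in range(100):
    flag_as_fake(post)
print(post.status)  # "disputed", given the stubbed verdict above
```

The real system would of course be asynchronous and far more nuanced, but this is the basic pipeline Isaac describes.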
If you see a difference in your Facebook traffic and visibility, we’d love to hear about it in the comments. How do you feel about the algorithm tests in general?