There’s a lot of news being consumed online on a daily basis, be it on our way to work in a crowded train, at home in the middle of daily chores, or, for many, at the breakfast table, like my father, who loves reading out news headlines (for whatever reason) from his smartphone. We do have newspapers delivered at home, but the idea of picking one up and flipping past the full-page advertisements seems a bit dated. And with Facebook and WhatsApp spoon-feeding you news anyway, who needs to browse through a newspaper, right? (I would disagree.)
Working at a media house as a web producer means that you are naturally flooded with emails, reports and inputs, and are always watching news sources to stay updated. So when he (my father) reads out something that seems a bit odd or unreal going by current news trends, it is only natural for me to head over to his side of the table, ask him to show me where he got the news from, and then take joy in proving to him that it is fake and that he shouldn’t help spread rumours on those massive family WhatsApp groups. In my opinion, the most common news sources for Indians, thanks to low data prices, are Facebook and WhatsApp.
We have been hearing about Facebook’s problem with fake news for a while now. It has been doing plenty, but no matter what it does to right the wrongs, there are more and more users out there who knowingly misuse, and unknowingly use, the platform to spread fake news, simply because it is a big one that connects millions online.
What is fake news?
Fake news covers a variety of information. It can come from an opposing political party out to shame someone, it can be generated to create a buzz around a movie launch, or it can even be a friend in college or a colleague speaking behind your back without knowing the motive behind your last action.
At a personal level this could easily be termed gossip. Everyone does it, until they get caught red-handed (Gossip Girl?).
With social networks easily giving everyone access to free speech, it gets even easier to put your thoughts online for public viewing.
Like-minded people will ‘Like’ or ‘Favourite’ your post, some will ignore and some will retaliate.
And it does not even have to be personal. An elephant being eaten by a python, with a silly photoshopped thumbnail, is enough to get the attention of your typical family WhatsApp group, with not a single member objecting because it comes with the disclaimer “forwarded as received”, letting people wash their hands of the news they believed was valuable (but possibly fake), just by using three words. It is, indeed, amazing how readily we accept fake news.
Misinformation is the true culprit here. It is what fuels and helps spread fake news: posts or articles to do with hate speech, and clickbait headlines meant to grab attention. And since Facebook has such a massive online platform with a ginormous user base (read: a monopoly with billions) attached to it, it makes for a good starting point, without the need to even seed fake news on the open internet.
So what is Facebook doing about it?
Well, for starters, after the whole Cambridge Analytica scandal, the social network has put out an interesting video showcasing the brave efforts it is putting in to fight misinformation.
The Oxford dictionary defines ‘misinformation’ as “false or inaccurate information, especially that which is deliberately intended to deceive”, which is exactly the problem Facebook has on its shoulders today.
The video goes on to explain how Facebook’s News Feed initially was all about prioritizing information from family, friends and even the brand pages that users may have liked to stay informed.
While I would love to believe that Facebook should have stuck to community and brand pages instead of going into news publishing and more (it complicates things), you really need only about 250 characters and a place to post them (publicly) to spread fake news.
But now it’s about much more, thanks to the ability to bring news from outside the Facebook community in the form of links; turning what was once a “happy place” for sharing photos of your last holiday into a dark place where anyone can rightfully post anything, with personal opinions, biases and the lot.
Facebook’s new video reveals that it is working on the problem in more ways than one. There’s human moderation, which was under the scanner a year ago, and now there’s machine learning (ML) that makes decisions (about taking down posts) based on what it was taught. While the former seems like an unbiased approach to moderation (moderators are seated across the globe and know how sensitive certain topics are to certain communities or countries), the latter seems worrisome, because personal biases can make their way into the training of machines; keeping them out is easier said than done, and Facebook still has a long way to go.
WhatsApp’s problem with fake news
Next up, there’s WhatsApp’s problem with spreading fake news. It is a space that is completely unmonitored, and a bigger problem than Facebook in countries where users are not educated enough to tell right from wrong, and accept anything and everything on their chat feed as the truth.
A simple example comes from this New York Times opinion piece that so clearly depicts how WhatsApp is a source of misinformation in a country like India.
In a reported incident in 2017, villagers in the Indian state of Jharkhand went on a warpath, out to find culprits dressed in black (who did not exist), looking for room to vent their anger. As reported by the Hindustan Times, the villagers were enraged by WhatsApp messages being circulated with pictures of mutilated children, hinting that a group of men wearing black clothes was allegedly kidnapping children.
The result of that misinformation was a series of gruesome lynchings in Jharkhand, where villagers picked up weapons and started attacking random strangers just to exact their revenge. Such is the power of WhatsApp, a platform that all of us love to use, and one that was acquired by Facebook.
While Facebook can still curb fake news using an army of moderators, WhatsApp just cannot, and its encrypted messages mean that nobody else gets to see them either. So governments have seen the best results by asking service providers to block access to social media websites (on mobile) until situations like the one mentioned above simmer down.
With India approaching the General Election season next year, WhatsApp is already playing its part in creating chaos, not just on election days but well before the elections take place.
While misleading political messages are indeed free from the clutches of Facebook’s moderation and the eyes of the government (depending on which side you are on), and cannot by themselves win elections, they do have the ability to seed ideas and play mind games at the very least.
In short, Facebook has another problem brewing.
According to Facebook’s recently posted video that enlightens users about its practices and how it treats misinformation on its News Feed, it is “working” on the problem.
While the video is aptly titled “Facebook’s Fight Against Misinformation”, it reveals two details about Facebook’s problem with misinformation.
Firstly, the word “fight” itself indicates that Facebook didn’t see this asteroid of a problem coming. Even if it did, it didn’t act in time to curb it before impact (we’re already past that point; some dinosaurs have been exterminated), and fake news has already grown out of control.
Secondly, I could drink a shot every time an employee of Facebook in the video used the words “probably”, “definitely”, “if”, “figure out”, “we could” or “have to”, and get drunk mighty quick, indicating that Facebook is far from figuring out how it will solve the problem.
The good bit here is that Facebook is now aware of where its problems lie.
The bad part is that there’s little it can do about it apart from running a factory of human moderators. So the next question is: how many human moderators can Facebook possibly have?
The bottom line is that Facebook will have to put in the effort to solve the problem (using a combination of machine learning, artificial intelligence and human moderation). In short, Facebook needs to keep up the fight (more so because governments have their eyes on it), or it can simply shut shop and send everyone home, because it is the network itself that helps spread fake news. Ditto WhatsApp!
Keeping country-specific biases in mind, it is a mammoth task when you consider the number of users Facebook has and the number of people posting on the free-to-use platform every day.
But like US Senator Dianne Feinstein said during the Congressional Hearing, “You’ve created these platforms and now they are being misused and you have to be the ones to do something about it… Or we will!”
Facebook will have to do plenty, and I do expect the social network to come up with elaborate ways to reduce instances of hate speech, cyberbullying and graphic violence on its platform. But users also need to get smarter and check what they share.
It’s all about objectivity and how you perceive the content of a post. If you believe it’s true, it takes just a click to share that news with your network of friends and family.
If you come from the opposing side of the news you just shared, you would probably open your web browser and google it to check whether it is genuine, falsely reported or simply fake news.
At the end of it all, it’s equally your responsibility to check something before sharing it online (be it on Facebook or WhatsApp), because once it goes virtual and viral, only action in the real world can put an end to it.
Source: https://www.firstpost.com/tech/news-analysis/facebooks-fight-against-misinformation-in-news-feed-is-a-human-problem-as-much-as-it-is-facebooks-4481529.html