I was appalled to find this story in my feed this morning.
In response to the ‘fake news’ that has run rampant over the last two election cycles (and against which Facebook has also begun to take action), Google is announcing a program that will muster its ‘quality raters’ to flag information deemed inaccurate or offensive.
“We’re explicitly avoiding the term ‘fake news,’ because we think it is too vague,” said Paul Haahr, one of Google’s senior engineers who is involved with search quality. “Demonstrably inaccurate information, however, we want to target.”
On its face, that sounds fine. If Google can determine a piece of information to be undeniably and completely inaccurate, then removing it is probably a good thing, objectively speaking. Though it isn’t a topic I’ve given much thought to previously, it does seem important for Google to protect the integrity of its search results to that extent, in a society that relies on search engines for information more than any other source.
For instance, the article provides an example of a completely fake story attributing the invention of homework to a fictional person in the early 1900s.
The problem is that this is nowhere near where Google is drawing the limit. The scope of the policy Google has laid out leaves room for a troubling amount of censorship based on extremely subjective criteria.
The program seeks to ferret out content that is graphically violent, hateful towards certain groups of people, or “other types of content which users in your locale would find extremely upsetting or offensive.”
Wow. Just wow.
First of all, Google is an aggregator of content, not an arbiter of fact or fiction. There is some merit in the idea that Google could flag or even remove a piece of content that is demonstrably false and misleading based on deliberately or dangerously incorrect facts.
But anything beyond that is extremely troubling, and ushers in an era when Google starts to become a potentially political organization that is filtering and curating content based on subjective criteria that can be influenced by groups of people with certain motives.
Content that someone might ‘find extremely upsetting or offensive’ is an extraordinarily subjective guideline.
ESPECIALLY in today’s world, almost any content is potentially offensive to someone. Where does Google draw this line, and how can we be assured that the line currently drawn will not later be redrawn?
It is not clear exactly how far this will go, and even Google admits as much: “We will see how some of this works out. I’ll be honest. We’re learning as we go,” Haahr said, conceding that the effort won’t produce perfect results.
It is even more troubling given the nature of Google, which is different from Facebook, where content might be thrust upon you and where the viral nature of social media can cause completely false information to spread like wildfire and potentially cause harm. But frankly, that is always going to be a possibility.
But with Google, information is segmented and dormant unless actively sought. If someone is seeking discussion of a certain topic that others might find offensive, that is their choice, and Google needs to be extremely careful about starting down a slippery slope where groups of people can influence what other groups of people can or cannot discuss.
I’m all for removing information on strategies for how to abduct and abuse children, or topics that are almost universally agreed upon as abhorrent and abominable. But it is far from clear that this is where Google intends to draw the line; in fact, there is evidence that it intends to allow ‘flagging’ of much more than just blatantly false or universally abhorrent information. Furthermore, any time you set a subjective standard that will be enforced by people, you open the door to more and more censorship, just as a government will always seek more control over its people, or a corporation will relentlessly pursue expansion and acquisition. Power tends to consolidate and expand.
Google needs to walk a very fine line here and limit this program to completely and undeniably false claims that mislead searchers with inaccurate information.
Let’s hope Google learns quickly that human beings are fallible creatures who will seek to influence public discourse based on their prejudices. Offensive or not, people should be free to express and discuss their viewpoints, even if those viewpoints offend some group of people, or if that group would simply label them offensive to further its own ends.
Hopefully no one finds this viewpoint offensive, I’d hate to be flagged.
Feature Image credit: Charline Loewe