The Algorithmic Vulnerability of Google and Facebook

New York Times reporter Rachael Abrams wrote this week about her recent attempts to convince Google that she is, in fact, still alive. It is a damning story for a company built, at its core, on an information retrieval mechanism that drives its advertising revenue. If users cannot trust the information a Google search provides, they will begin to go elsewhere, and the user databases those searches generate will degrade and lose value for advertisers looking to target an audience.

Abrams’ article should not be a wake-up call for Google alone. It exposes a key vulnerability for Facebook and other algorithm-driven companies.[1]

While computing power has increased and artificial intelligence has improved dramatically, neither has eliminated the need for human curation.

John Adams’ assertion to the jury weighing the guilt of the British soldiers involved in the Boston Massacre, that “Facts are stubborn things,” is no less true today. And despite the protests of those who do not like to have their own biases and world views challenged, there is a difference between reputable and disreputable sources of information. When a company takes upon itself the role of information aggregator, as Google has, or stakes out a position as a new public square, as Facebook has, it has an ethical and moral obligation to do so in good faith, even in the absence of a legal requirement. Yes, reasonable people can interpret facts differently. Unreasonable people, and opportunists, embrace the factually wrong.

More importantly, however, self-interest should drive them to act in good faith. Stories like Abrams’ highlight a credibility gap, one that competitors will exploit. Google was once an upstart that succeeded because it outperformed AltaVista and eliminated the need for search aggregators like Dogpile. Google, too, can be supplanted if its core offering comes to be seen as second best because its search results can no longer be trusted.

Abrams’ story points to a need for Google and others to rethink their curation strategies and to base them on something other than short-term return on investment. There are indications that this is beginning to happen, but Google’s tendency to rely on temporary workers is, ultimately, a losing strategy, one that doubles down on the primacy of the algorithm rather than accepting the need for humans trained in information literacy and able to discern the correct from the incorrect. These curators must have the authority and the ability to correct algorithmically generated databases before those databases become useless to users, whether those users are looking for a holiday recipe or looking to sell ingredients to cooks.

1. That the two obvious companies to comment on here are focused on advertising may hint at an underlying issue: the unspoken argument’s warrant. The focus on generating information for advertisers has distracted these companies from the need to provide quality information for their users. The need to generate revenue is clear and understandable. They are not running charities, and protesting their profit motive sounds as strange to my ear as the dismay of those who discovered these companies might be trying to make money. The calculation with all such services must be the value proposition: is the user being provided with a service worth the cost? The service provider, for its part, must make sure it does not lose sight of its users as it focuses on its profit source. The moment services like Google and Facebook become more about advertisers than end users, they open themselves up to competitors with better mousetraps, ones that will provide more value to advertisers precisely because they serve their users better.

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at