When Facebook CEO Mark Zuckerberg testified in front of Congress last month, he was asked pointedly by California Rep. Katie Porter if he “would be willing” to spend one hour per day doing content moderation under the same conditions as Facebook’s army of contracted content moderators.
He replied by explaining that it wouldn’t be in the best interest of the company if the CEO “spent that much time” doing content moderation. (https://www.washingtonexaminer.com/news/nine-minutes-to-cry-congresswoman-asks-zuckerberg-if-he-would-want-to-be-a-facebook-content-moderator)
It is not worth the CEO’s time to understand the mechanisms by which child sexual abuse imagery is found and removed from his own platform, or to view the kind of violence against humans and animals disseminated on his platform, or to ponder any other egregious content posted on the platform he created. It’s not worth his time. Content moderation is essential to social media’s continued existence, yet the workers doing the moderating are treated as dismissively low-skilled compared to the guy who studied Java for two semesters and built a website in 2004.
There is a history in Silicon Valley of claiming that all of the technological ills caused by human action and frailty will be solved by the black box of AI/machine learning. Computers will fix the problems of humans, essentially. But, as Gray and Suri explain in their book, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, training technologies to act without human intervention takes an incredible amount of human intervention. Not only is this work hidden from public consciousness, but it is treated by the CEOs of social media companies as a temporary, unskilled stopgap.
In “Social Media’s Silent Filter,” Sarah T. Roberts delves further into the conditions faced by moderators and the dismissive attitude toward human moderation.
They labor under the cloak of NDAs, or non-disclosure agreements, which disallow them from speaking about their work to friends, family, the press, or academics, despite often needing to: As a precondition of their work, they are exposed to heinous examples of abuse, violence, and material that may sicken others, and such images are difficult for most to ingest and digest.
Given the sheer scale of Facebook’s moderation operation, it would be impossible for the programmers running Facebook’s platform to conceptualize the workers responsible for moderation as individuals. As Gray and Suri explain in their text, this separation from the humans doing the content work, and the necessity of human intervention in moderation, is expected. Furthermore, Facebook routinely introduces software without soliciting input on, or giving consideration to, the potential harm it could inflict. Situations like those involving Philando Castile or Keith Lamont Scott likely could have been foreseen by the moderators.
This brings me to the question: since AI/machine learning is clearly not at the level where human involvement can be done away with, how should content moderators be treated?
Roberts writes that the moderators she has interviewed feel pride in their ability to help law enforcement or reach out to suicidal individuals. Could these positive feelings be harnessed to lessen the burden of disturbing imagery? What if these moderators were paid a more substantial wage, given adequate psychiatric resources, and generally made to feel like real employees who mattered to their companies instead of cogs grinding away beneath the worst versions of humanity?