The Human Toll of Protecting the Internet from the Worst of Humanity

Henry Soto worked for Microsoft’s online-safety team, in Seattle, for eight years. He reviewed objectionable material on Microsoft’s products—Bing, the cloud-storage service OneDrive, and Xbox Live among them—and decided whether to delete it or report it to the police. Each day, Soto looked at thousands of disturbing images and videos, which included depictions of killings and child abuse. Particularly traumatic was a video of a girl being sexually abused and then murdered. The work took a heavy toll. He developed symptoms of P.T.S.D., including insomnia, nightmares, anxiety, and auditory hallucinations. He began to have trouble spending time around his son, because it triggered traumatic memories. In February, 2015, he went on medical leave.

This story is laid out in a lawsuit filed against Microsoft, late last year, by Soto and a colleague named Greg Blauert, and first reported by Courthouse News Service. Soto and Blauert claim that the company did not prepare them for the stress of the job, nor did it offer adequate counselling and other measures to mitigate the psychological harm. Microsoft disputes Soto’s story, telling the Guardian in a statement that it “takes seriously its responsibility to remove and report imagery of child sexual exploitation and abuse being shared on its services, as well as the health and resiliency of the employees who do this important work.”

The lawsuit offers a rare look into a little-known field of digital work called content moderation.
