The Human Toll of Protecting the Internet from the Worst of Humanity – The New Yorker

Henry Soto worked for Microsoft’s online-safety team, in Seattle, for eight years. He reviewed objectionable material on Microsoft’s products—Bing, the cloud-storage service OneDrive, and Xbox Live among them—and decided whether to delete it or report it to the police. Each day, Soto looked at thousands of disturbing images and videos, which included depictions of killings and child abuse. Particularly traumatic was a video of a girl being sexually abused and then murdered. The work took a heavy toll. He developed symptoms of P.T.S.D., including insomnia, nightmares, anxiety, and auditory hallucinations. He began to have trouble spending time around his son, because it triggered traumatic memories. In February, 2015, he went on medical leave.

This story is laid out in a lawsuit filed against Microsoft, late last year, by Soto and a colleague named Greg Blauert, and first reported by Courthouse News Service. Soto and Blauert claim that the company did not prepare them for the stress of the job, nor did it offer adequate counselling and other measures to mitigate the psychological harm. Microsoft disputes Soto’s story, telling the Guardian in a statement that it “takes seriously its responsibility to remove and report imagery of child sexual exploitation and abuse being shared on its services, as well as the health and resiliency of the employees who do this important work.”

The lawsuit offers a rare look into a little-known field of digital work known as content moderation.

The Human Toll of Protecting the Internet from the Worst of Humanity – The New Yorker (http://www.newyorker.com/?p=3307932)

If I Only Had a Brain: How AI ‘Thinks’ – The Daily Beast

Artificial intelligence has gotten pretty darn smart—at least, at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?

The answer is complicated, largely because intelligence itself is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it’s rare and difficult to be intelligent in all of these ways. Intelligence has many sources, and our brains don’t respond to them all in the same way. Thus the quest to develop artificial intelligence poses numerous challenges, not the least of which is how much we still don’t understand about human intelligence.

Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of interconnected neurons that transmit information to one another, along with regions dedicated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities—we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine-learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute the statistical probabilities and outcomes of various moves but also to adjust its strategy based on what the other player does.
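
To make the idea concrete, here is a minimal sketch in Python (assuming only NumPy) of an artificial neural network with a single hidden layer that learns the XOR function by repeatedly adjusting its connection weights. It is an illustration of the general technique, not the architecture of AlphaGo or any production system.

```python
# A minimal artificial neural network: a few "neurons" (nodes) pass signals
# forward, and learning happens by nudging the connection weights.
# Illustrative only -- not AlphaGo's actual architecture.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which a single neuron cannot represent on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 nodes feeding a single output node.
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer computes a new representation of the input.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network's prediction

    # Backward pass: gradients of the squared error, used to adjust weights.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The point of the sketch is the learning loop: the network starts with random weights, compares its output with the desired answer, and shifts the weights a little in the direction that reduces the error, over and over.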

Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers between the input and the output. More layers allow for richer representations of the data and of the links within it. This resembles human thinking—when we process input, we do so in something akin to layers.
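
A rough sketch of the “more layers” idea, again assuming nothing beyond NumPy: each layer transforms the previous layer’s output, so a deeper stack produces a longer chain of intermediate representations between input and output. The weights here are random, purely to show the data flow; in a real deep-learning system they would be learned, as in the network above. The helper names are hypothetical.

```python
# Sketch of "depth": stacking layers so each one re-represents the output of
# the layer before it. Random weights, purely to illustrate the data flow.
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(0.0, z)

def build_stack(layer_sizes):
    """Create one random weight matrix per layer (hypothetical helper)."""
    return [rng.normal(size=(m, n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Pass x through every layer, keeping each intermediate representation."""
    representations = [x]
    for W in weights:
        x = relu(x @ W)
        representations.append(x)
    return representations

x = rng.normal(size=(1, 16))                             # a single input example
shallow = forward(x, build_stack([16, 8, 2]))            # 2 layers
deep = forward(x, build_stack([16, 32, 32, 16, 8, 2]))   # 5 layers

print(len(shallow) - 1, "representations vs.", len(deep) - 1)
```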

If I Only Had a Brain: How AI ‘Thinks’ – The Daily Beast