I had a piece on Columbia Journalism Review's website last week about a new algorithm that might be useful for news organizations trying to keep some semblance of civility in their online comment sections.
The timing was interesting. I finished the piece a couple of weeks ago and wasn't sure exactly when it would run; it ended up publishing just a few days after the staff of Jezebel went public about their ongoing struggle with anonymous commenters posting scary, disgusting and pornographic pictures under stories.
The algorithm I wrote about, which essentially helps comment-section moderators and facilitators better understand what kind of language moves a conversation in a positive direction, won't protect moderators from pornographic pictures. Nor will it stop true trolls from posting vitriolic comments just for the sake of causing trouble.
It's a specific solution to a specific problem: comment sections under newspaper and magazine articles filling up with misinformation and weak opinions disguised as fact. Studies show that negative or inaccurate comments can have a quantifiably detrimental impact on how readers absorb information from an article, however well reported or written it is. Many publications have turned to comment facilitation to combat this issue, and the algorithm in my story could be one step toward making that work easier and more efficient. But, as evidenced by Gawker's response to Jezebel, the industry is still trying to figure out the best way to deal with the grossest, most extreme trolls.