After the shooting last week and the revelation that online forums may have contributed to that bowl-cut-weirdo’s radicalization and subsequent rampage, several people I know have approached me to ask what website owners with discussion boards could have done to prevent radicalization. There are some options, but I’ll warn you, most of them are pretty terrible and have a tendency to be easily circumvented (or to backfire spectacularly). The architecture of the internet is intended to route around damage (it was designed to keep communications working after one or more nuclear strikes, after all), and censorship mimics damage in an architectural sense. So most fixes are not particularly useful, but let’s go through the options anyway.
Have a list of banned words. The good side of this is that it is trivially easy to implement and use, and it does at least keep some measure of decorum in a forum (did that just rhyme?). The downside is that it is easily circumvented: it’s very easy to intentionally misspell an inappropriate word and have everyone know exactly what you mean. And you will have at least some forum trolls who are dumb enough to misspell those words unintentionally.
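To give a sense of how little there is to this approach, here’s a minimal sketch of a banned-word filter in Python. The word list and the `contains_banned_word` helper are made up for illustration, not pulled from any particular forum package.

```python
import re

# Placeholder banned words -- a real list would be longer and uglier.
BANNED_WORDS = {"jerkface", "dingus"}

_pattern = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in BANNED_WORDS) + r")\b",
    re.IGNORECASE,
)

def contains_banned_word(post_text: str) -> bool:
    """Return True if the post contains a banned word verbatim."""
    return bool(_pattern.search(post_text))

# The weakness described above: a trivial misspelling sails right past it.
print(contains_banned_word("You absolute dingus"))   # True
print(contains_banned_word("You absolute d1ngus"))   # False -- circumvented
```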
Make users sign in with their Facebook account. This can help a little, as people commenting anonymously tend to be much more likely to troll, demean, and harass than those whose commentary is attached to their name. However, several problems occur with this approach. The first is that it is trivially easy to get a fake Facebook account for just this sort of thing. The second is that the practice may mean that some people are afraid to comment (or even get an account) at all. For instance, if your site discusses atheism and a would-be commenter’s family is mostly fundamentalist, they aren’t going to feel safe commenting on the site. Also, if the account is linked to social media and comments show up there, it can invite the commenter’s friends to jump in and ruin a perfectly good comment thread even when the original comment wasn’t belligerent.
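For what it’s worth, the gate itself is only a few lines. This is a rough sketch assuming a hypothetical `User` type with an OAuth-linked identity field, not any specific framework’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    display_name: str
    external_identity: Optional[str] = None  # e.g. an id from a linked social login

class NotAuthorized(Exception):
    pass

def post_comment(user: User, text: str, comments: list) -> None:
    # Only users who have linked an outside identity may comment.
    if user.external_identity is None:
        raise NotAuthorized("Link a social account before commenting.")
    comments.append((user.display_name, text))

# The tradeoffs from the paragraph above apply unchanged: a throwaway account
# satisfies this check just as well as a real one, and someone with a good
# reason to stay pseudonymous may simply never comment at all.
```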
Have upvoting and downvoting of comments. Upvoting and downvoting do allow the community to somewhat police comments. That’s the upside. It’s also the downside, in that an established community can, if not careful and intentional about what sort of things get a downvote, end up creating an echo chamber. Which is fine, if that’s what you want.
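Mechanically, vote-based hiding usually boils down to a score and a cutoff. The threshold and class names below are assumptions for illustration, not any particular forum engine’s behavior.

```python
from dataclasses import dataclass

HIDE_THRESHOLD = -5  # hide a comment once its net score drops this low

@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes

    @property
    def hidden(self) -> bool:
        return self.score <= HIDE_THRESHOLD

# The echo-chamber risk in one line: if the community downvotes disagreement
# rather than abuse, dissenting comments end up hidden just like trolling does.
c = Comment("I respectfully disagree", upvotes=1, downvotes=7)
print(c.score, c.hidden)  # -6 True
```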
Allow members to report forum posts and explicitly spell out the reasons allowed for reporting. Allowing members to report posts, and hiding a post once it has been reported a certain number of times, can help reduce the frequency of offensive posts. However, it can also be used as a tool for stifling dissent, so if you are going to do this and want to have an open forum with good discussions, you’re probably going to want to take away the ability to report posts from those who abuse the privilege.
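A toy version of that scheme might look like the following. The thresholds, class names, and revocation rule are assumptions for illustration, not recommended numbers.

```python
from dataclasses import dataclass, field

REPORTS_TO_HIDE = 3          # hide a post after this many reports
BAD_REPORTS_TO_REVOKE = 5    # strip reporting rights after this many rejected reports

@dataclass
class Member:
    name: str
    rejected_reports: int = 0

    @property
    def can_report(self) -> bool:
        return self.rejected_reports < BAD_REPORTS_TO_REVOKE

@dataclass
class Post:
    text: str
    reporters: set = field(default_factory=set)

    def report(self, member: Member) -> None:
        # Members who have abused reporting lose the privilege silently.
        if member.can_report:
            self.reporters.add(member.name)

    @property
    def hidden(self) -> bool:
        return len(self.reporters) >= REPORTS_TO_HIDE

# When a moderator later decides a report was really just stifling dissent,
# bumping member.rejected_reports eventually revokes that member's ability
# to report anything at all.
```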
You’ll notice that in all of the above items, most means of reducing forum trolling can also easily have a deleterious impact on marginalized groups posting on your forum. Automated systems really fall down here, in that opposing opinions are difficult to distinguish from trolling. In effect, the main difference between the two is the perception of the reader, not the content of what the writer wrote. You could write something incredibly sarcastic that most of a forum agrees with and it will be up-voted like crazy. Write the same thing on an opposing forum, and you are a troll. Thus correcting this problem does not appear to be something that is easily accomplished by a computer, at least at a general level, because what is considered offensive varies. What is acceptable on 4chan isn’t going to be acceptable on a fundamentalist Christian forum (and likely vice-versa). The difference (to an algorithm attempting to adjudicate it) between a marginalized person and someone posting offensive content can be vanishingly small, and often boils down to who is looking at them. It’s very, very tricky to design a tool that doesn’t hamper the former while still being effective against the latter, and it may even be impossible.
Now, why am I bringing this up if I don’t have an answer? Well, actually I do have an answer. Several of them. Just because something can’t be automated by a computer doesn’t mean that computers can’t somewhat assist in the task. Realistically, if you want to have a set of forums, you need actual human beings participating in the moderation process. You can’t and shouldn’t run an unattended forum with the expectation that quality will not decline. It’s just that simple. Yes, there are various ways that you can somewhat measure the “trolliness” of a post, but realistically, none of them are going to be as good as human moderation in the near future. Further, such a tool doesn’t intersect with the goal of a user forum, which is to foster conversation. Needing such a tool, and using it widely, also suggests that the forum in question isn’t targeting as narrow an audience as it should when acquiring users. I suspect that improving the marketing pipeline for a forum will improve the quality of the content in the forum much more than an automated system for moderation will. This isn’t an argument for not having a system to moderate comments, only an observation that comment moderation may not be the place you really need to focus.
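To make “measure the trolliness” concrete, here is a deliberately naive scoring heuristic. The features and weights are made up purely for illustration, and the limitations above are exactly why nothing like this replaces a human moderator.

```python
def trolliness_score(text: str) -> float:
    """Crude heuristic: shouting (ALL CAPS) and excessive exclamation points."""
    letters = [c for c in text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / len(letters) if letters else 0.0
    exclamations = text.count("!")
    return caps_ratio + 0.1 * exclamations

print(trolliness_score("WAKE UP SHEEPLE!!!"))                        # high score
print(trolliness_score("I think you're mistaken, and here's why."))  # low score
```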
We as computer programmers are often called upon to fix problems using a computer. Sometimes, the computer really isn’t the way to go. I would argue that attempting to control abusive, trollish behavior online is one of those places. We’re really trying to deal with a problem downstream of the real problem in this case, and that problem is as variable as humanity. That often doesn’t lend itself well to automated fixes. I would further add that, at least in this case, this kid was radicalized by forums where the sort of speech he read was not considered objectionable. A general-purpose tool for this probably isn’t going to exist in the foreseeable future – your best option is to actually participate in your own forum if you have one.