Detecting Threats of Violence in Online Discussions Using Bigrams of Important Words
Original version
Hammer, H. L. (2014, September). Detecting Threats of Violence in Online Discussions Using Bigrams of Important Words. In Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint (pp. 319-319). IEEE. http://dx.doi.org/10.1109/JISIC.2014.64

Abstract
Making violent threats towards minorities like immigrants
or homosexuals is increasingly common on the Internet.
We present a method to automatically detect threats of violence
using machine learning. A corpus of 24,840 sentences from
YouTube was manually annotated as containing violent threats or not, and
was used to train and test the machine learning model. Detecting
threats of violence works quite well: the error rate for classifying a
violent sentence as non-violent is about 10% when the error rate for
classifying a non-violent sentence as violent is set to 5%. The
best classification performance is achieved by including features
that combine specially chosen important words with the distance
between them in the sentence.
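The feature type described above can be illustrated with a minimal sketch. The function name, the feature string format, and the use of raw token distance are assumptions for illustration; the paper's exact choice of important words and distance encoding is not specified in the abstract.

```python
from itertools import combinations

def important_word_bigrams(tokens, important_words):
    """Sketch of a bigram-of-important-words feature extractor:
    for each pair of 'important' words in a tokenized sentence,
    emit a feature combining the word pair with their token distance.
    (Hypothetical implementation, not the paper's exact method.)"""
    # Keep only the positions of words marked as important.
    positions = [(i, w) for i, w in enumerate(tokens) if w in important_words]
    features = []
    # Every ordered pair of important words becomes one feature.
    for (i, w1), (j, w2) in combinations(positions, 2):
        features.append(f"{w1}_{w2}_{j - i}")
    return features

# Example: the pair (kill, you) one token apart yields "kill_you_1".
print(important_word_bigrams("i will kill you".split(), {"kill", "you"}))
```

Such features could then be fed to any standard sentence classifier (e.g. logistic regression over a sparse feature vector).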