Online anti-social behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify toxic situations such as hostility, it has largely focused on individual words. A bag of keywords alone is not sufficient to detect hostility, because the same words can carry different meanings depending on the relationship between the participants in a discussion. In this paper, we consider the friendship between the sender and the target of a hostile conversation. First, we study the characteristics of different relationship types. We then set our goal as more accurate hostility detection with fewer false alarms. Thus, we aim to detect both the presence and the intensity of hostile comments using linguistic and social features derived from these well-defined relationships. To evaluate our approach, we introduce a corpus of over 12,000 annotated tweets drawn from more than 170,000 tweets. We extract features such as relationship type and tweet length and feed them into Long Short-Term Memory (LSTM) and Logistic Regression (LR) classifiers. Incorporating relationship type into the classifier improves hostility-detection AUC by nearly 5% over the baseline method, and the F1 score likewise increases by 4%.
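To make the LR variant concrete, the sketch below shows one plausible way to combine lexical features with the relationship-type and tweet-length features before classification and to score the result with AUC and F1. This is not the authors' implementation; the column names, toy rows, and preprocessing choices are all illustrative assumptions.

```python
# Minimal, hypothetical sketch: TF-IDF text features + one-hot
# relationship type + tweet length, fed into logistic regression.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny illustrative corpus (invented rows, not the paper's data):
# tweet text, sender-target relationship, length, binary hostility label.
df = pd.DataFrame({
    "text": [
        "great game last night, well played",
        "you are pathetic, nobody likes you",
        "thanks for the help with the project",
        "get lost, you worthless troll",
        "lol that joke was terrible, love you anyway",
        "i will make you regret posting this",
        "congrats on the new job!",
        "delete your account, idiot",
    ],
    "relationship": ["friend", "stranger", "friend", "stranger",
                     "friend", "stranger", "friend", "stranger"],
    "length": [35, 34, 37, 29, 44, 36, 24, 26],
    "hostile": [0, 1, 0, 1, 0, 1, 0, 1],
})

# Linguistic feature (TF-IDF) alongside the two social/surface features.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("rel", OneHotEncoder(handle_unknown="ignore"), ["relationship"]),
    ("len", StandardScaler(), ["length"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    df[["text", "relationship", "length"]], df["hostile"],
    test_size=0.5, stratify=df["hostile"], random_state=0)

model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
print("F1: ", f1_score(y_test, probs > 0.5))
```

Dropping the `"rel"` transformer from the pipeline gives a text-only baseline, so the gain attributable to the relationship feature can be measured by comparing AUC and F1 between the two runs.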