Context-Aware Harassment Detection on Social Media

From Knoesis wiki

Context-Aware Harassment Detection on Social Media is an interdisciplinary project among the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), the Department of Psychology, and the Center for Urban and Public Affairs (CUPA) at Wright State University. The aim of this project is to develop comprehensive and reliable context-aware techniques (using machine learning, text mining, natural language processing, and social network analysis) to glean information about the people involved and their interconnected network of relationships, and to determine and evaluate potential harassment and harassers. An interdisciplinary team of computer scientists, social scientists, urban and public affairs professionals, and educators, together with the participation of college and high school students in the research, will ensure the wide impact of scientific research on support for safe social interactions.

Overview

As social media permeates our daily life, there has been a sharp rise in the use of social media to humiliate, bully, and threaten others, with harmful consequences such as emotional distress, depression, and suicide. The October 2014 Pew Research survey <ref>Pew Internet, Online Harassment, 2014.</ref> shows that 73% of adult Internet users have observed online harassment and 40% have experienced it. Of those who have experienced online harassment, 66% said their most recent incident occurred on a social networking site or app. Further, 25% of teens claim to have been cyberbullied <ref>Cyberbullying Research Center, Cyberbullying Facts, 2012.</ref>. The prevalence and serious consequences of online harassment present both social and technological challenges.

Existing work on harassment detection usually applies machine learning for binary classification, relying on message content while ignoring message context. Harassment is a pragmatic phenomenon, and therefore necessarily context-sensitive. We identify three dimensions of context for harassment on social media: people, content, and network. Focusing on content while ignoring either people (offender and victim) or network (the social networks of offender and victim) yields misleading results. An apparent "bullying conversation" between good friends exchanging sarcastic content presents no serious threat, while the same content from an identifiable stranger may function as harassment. Content analysis alone cannot capture these subtle but important distinctions.

Social science research identifies several necessary harassment components and features typically ignored by existing binary harassment-or-not classification: (1) aggressive/offensive language, (2) potentially harmful emotional consequences, such as distress and psychological trauma, and (3) a deliberate intent to harm. We investigate novel language analysis techniques that examine the target-dependent offensiveness/negativity of a message, incorporating the notion of target (recipient) sensitivity that is missing from existing harassment detection systems. The harassment value depends further on the resulting emotional harm and the intent of the sender. Thus, we reframe social media harassment detection as a multi-dimensional analysis of the degree to which harassment occurs. The specific research goals of this proposal are:

Goals
  1. (i) Identify the language-based, target-dependent offensiveness/negativity of a message, (ii) predict message harm from an emotion perspective, (iii) recognize sender malice from an intent perspective, and (iv) consequently assess overall message harm.
  2. Detect harassing social media accounts automatically, by developing algorithms that assess the degree of message harm using features such as frequency, duration and coverage measures.
  3. Evaluate algorithm quality and generality by examining both school and workplace settings, which present different contextual variables in the people, content, and network dimensions.
  4. Provide an alert service of potential harassment messages for parents to facilitate intervention. Provide our harassment detection techniques as REST Web services for the purposes of research and education. Release our research efforts as an open source project on GitHub so that they can be adapted and reused on other platforms, e.g., Facebook and online forums.
  5. Educate teenagers regarding social media harassment, including its characteristics, the associated prohibitions and penalties, and prevention strategies. We will collaborate with local schools to create and widely disseminate online course modules.
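To illustrate how a multi-dimensional, degree-based analysis could differ from binary classification, the sketch below combines per-message scores along the three dimensions discussed above (offensiveness, emotional harm, sender malice) and aggregates them at the account level using the frequency, duration, and coverage feature types named in Goal 2. This is a minimal illustrative sketch, not the project's actual model; the score scales, weights, and function names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class MessageScores:
    """Per-dimension scores on a [0, 1] scale (hypothetical scale)."""
    offensiveness: float  # target-dependent negativity of the language
    harm: float           # predicted emotional harm to the recipient
    malice: float         # inferred intent of the sender


def message_harm(s: MessageScores, weights=(0.3, 0.4, 0.3)) -> float:
    """Weighted combination of the three dimensions into a single
    degree-of-harassment score (weights are illustrative only)."""
    w_off, w_harm, w_mal = weights
    return w_off * s.offensiveness + w_harm * s.harm + w_mal * s.malice


def account_harm(per_message_harm, n_targets, span_days) -> float:
    """Account-level score: mean message harm scaled by frequency
    (messages per day), coverage (distinct targets), and duration
    (active days) -- the feature types named in Goal 2."""
    if not per_message_harm or span_days <= 0:
        return 0.0
    mean_harm = sum(per_message_harm) / len(per_message_harm)
    frequency = len(per_message_harm) / span_days
    # Saturating scale-up: more targets and higher frequency raise the
    # score, while keeping the result in [0, 1].
    intensity = frequency * n_targets
    return mean_harm * (intensity / (1.0 + intensity))


# Same offensive wording, different context: sarcastic banter between
# friends (low harm, no malice) vs. the same content sent with intent.
banter = MessageScores(offensiveness=0.6, harm=0.05, malice=0.0)
hostile = MessageScores(offensiveness=0.6, harm=0.8, malice=0.9)
```

Note that `banter` and `hostile` share the same offensiveness score, yet `message_harm(hostile)` exceeds `message_harm(banter)`, which captures the friends-versus-stranger distinction that content-only binary classification misses.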

Funding

National Science Foundation (NSF)

People

Principal Investigator: Prof. Amit P. Sheth
Co-Investigators: Prof. Valerie L. Shalin, Prof. Krishnaprasad Thirunarayan
Other Collaborators: Prof. Debra Steele-Johnson, Dr. Jack L. Dustin
PhD Students: Monireh Ebrahimi, Lu Chen, Wenbo Wang
Master's Students: Pranav Karan, Rajeshwari Kandakatla

Team members, September 2015. From left to right: Monireh Ebrahimi, Kathleen Renee Wylds, Prof. Debra Steele-Johnson, Prof. Amit Sheth, Prof. Valerie L. Shalin, Prof. Krishnaprasad Thirunarayan, Dr. Wenbo Wang, Dr. Lu Chen, Dr. Jack L. Dustin

Contact: Lu Chen

Social Media

Follow us on Twitter

Media Coverage

Related Projects

Concurrent Projects

Prior Projects


Related Resources

  1. A painfully funny but informative introduction to the problem of online harassment: https://www.youtube.com/watch?v=PuNIwYsz7PI
  2. Why People Post Benevolent and Malicious Comments Online: https://vimeo.com/141448254

References

  • Sujan Perera, Pablo N. Mendes, Adarsh Alex, Amit P. Sheth, and Krishnaprasad Thirunarayan. "Implicit Entity Linking in Tweets." In International Semantic Web Conference, pp. 118-132. Springer International Publishing; 2016.
  • Lakshika Balasuriya, Sanjaya Wijeratne, Derek Doran, and Amit Sheth. "Finding Street Gang Members on Twitter." In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2016). San Francisco, CA, USA; 2016.