Workshop on Online Abuse and Harms

Sunday, July 10, 2022


For this sixth edition of the Workshop on Online Abuse and Harms (6th WOAH) we advance research in online abuse through our theme: Developing Resources and Technologies for Low-Resource Online Abuse and Harms. We continue to emphasize the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields. These include but are not limited to: NLP, machine learning, computational social sciences, law, politics, psychology, network analysis, sociology and cultural studies. Continuing the tradition started in WOAH 4, we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings and data, and to record their lived experiences. We hope that through these engagements WOAH can directly address the issues faced by those on the front lines of tackling online abuse.


Academic papers (long and short)

Authors are invited to submit full papers of up to 8 pages of content and short papers of up to 4 pages of content, with unlimited pages for references. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted. Papers that are currently undergoing review at other venues are welcome.

Topics related to developing computational models and systems include but are not limited to:

  • NLP and Computer Vision models and methods for detecting abusive language online, including, but not limited to, hate speech, gender-based violence and cyberbullying
  • Application of NLP and Computer Vision tools to analyze social media content and other large data sets
  • NLP and Computer Vision models for cross-lingual abusive language detection
  • Computational models for multi-modal abuse detection
  • Development of corpora and annotation guidelines
  • Critical algorithm studies with a focus on content moderation technology
  • Human-Computer Interaction for abusive language detection systems
  • Best practices for using NLP and Computer Vision techniques in watchdog settings
  • Submissions addressing interpretability and social biases in content moderation technologies

Topics related to legal, social, and policy considerations of abusive language online include but are not limited to:

  • The social and personal consequences of being the target of abusive language and targeting others with abusive language
  • Assessment of current (computational and non-computational) methods of addressing abusive language
  • Legal ramifications of measures taken against abusive language use
  • Social implications of monitoring and moderating unacceptable content
  • Considerations of implemented and proposed policies for dealing with abusive language online, and the technological means of dealing with it

We particularly invite contributions in the areas above that address these topics in low-resource settings.

Non-archival submissions

We welcome non-archival submissions (2 pages + 2 pages for references), which can include work previously published elsewhere.

Civil society reports

We invite reports from civil society. These are non-archival submissions, and can include previously published work. They must be a minimum of two pages, with no upper limit. Please contact us if you have any queries about the civil society reports.

Submission Information

Submission link:

Submission deadline: Apr 11, 2022

Notification date: May 9, 2022

Camera-ready date: May 23, 2022

We follow the ACL Rolling Review Submission Guidelines. The submission form also includes a conflict-of-interest section: please mark all potential reviewers who have been authors on the paper, are from the same research group or institution, or have seen or discussed versions of the paper with you. We request that all papers adhere to our submission policies. Submissions will be reviewed by the program committee. Reviewing is anonymised, so please ensure that papers are anonymous. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided; instead, use citations such as "Smith previously showed (Smith, 1991) ...".
