We describe the details of the Shared Task of the 5th ACL Workshop on Gender Bias in Natural Language Processing (GeBNLP 2024). The task uses a dataset to investigate the quality of Machine Translation systems on a particular case of gender robustness. …
This volume contains the proceedings of the Fifth Workshop on Gender Bias in Natural Language Processing held in conjunction with the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, an …
Machine learning models are now able to convert user-written text descriptions into naturalistic images. These models are available to anyone online and are being used to generate millions of images a day. We investigate these models and find that …
**Automatic Misogyny Identification (AMI)** is a **shared task** proposed at the Evalita 2020 evaluation campaign. The AMI challenge, based on **Italian tweets**, is organized into two subtasks: (1) Subtask A, on misogyny and aggressiveness …
In recent years, the phenomenon of **hate against women** has grown dramatically, especially in online environments such as microblogs. Although this alarming phenomenon has triggered many studies from both the computational linguistics and machine …