1st Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society

LATERAISSE 2022


This one-day workshop, co-located with LREC 2022, will provide a forum to present and discuss research on the creation and use of language resources and tools, focusing on identifying and raising awareness of bias and discrimination in social computational systems on the one hand, and harassment and bullying in online spaces on the other. The workshop is intended to bring together technical and non-technical experts from the computing and social science sub-disciplines to focus on the issue of social inclusion and safety from different perspectives.
The workshop aims to solicit research that applies relevant state-of-the-art machine learning and natural language processing technologies for a fair, inclusive, and safe society, and to promote the research and development of unbiased and inclusive language technologies.
Topics of Interest
While this list is not exhaustive, we encourage authors to submit academic work at the intersection of the following areas:
- Bias and Discrimination in Recruitment and the Workplace
Hiring is critical to society because it determines who can access economic opportunities to support themselves and their families (Bogen and Rieke, 2018). Studies have shown that hiring decisions are not always objective, leaving room for bias, discrimination, and unfair decisions influenced by looks, gender, race, sexuality, and other attributes (Bendick Jr and Nunes, 2013; Gaucher et al., 2011). At the same time, studies have established a strong correlation between diversity and inclusiveness in the workplace and increased innovation, productivity, and profitability of businesses, making the elimination of bias not only a moral issue but equally an economic one (Zhang, 2020). In today's changing society, there is an increasing need for equality of opportunity and inclusiveness in the workplace. We expect papers that use NLP and Artificial Intelligence technologies to illustrate or mitigate bias in Human Resources (HR) and the workplace.
- Bias and Discrimination in Legal Decision Making
Similarly, studies have established the negative influence of bias and stereotypes in policing and the criminal justice system, especially towards racial minorities (Goff et al., 2016; Yang, 2015). Text analytics on court documents has revealed implicit racial bias in appellate court opinions from US state and federal courts (Rice et al., 2019). Moreover, studies have found that gender attitudes may play a role in the judicial process and that court decisions by female judges are more likely to be overturned (Ahola et al., 2009; Ornaghi et al., 2019). The ultimate motto of the legal profession is 'equality before the law', and society will not be safe if justice is easily miscarried or trampled on. There is therefore an increasing need to develop techniques and tools that promote equality and fairness in the legal decision-making process. We expect papers applying NLP tools in the legal domain to develop fair legal decision-making systems for various social groups.
- Exclusive and Offensive Language Identification, Especially in Low-Resource Language Settings
On the other hand, the prevalence of harmful content and abusive behaviour on social media has become more concerning during the pandemic, which has forced more people online than at any other time. More than ever, many people suffer emotional trauma and heightened mental health issues due to overexposure to harmful content on the internet. For instance, 44% of pre-adolescents encountered more cyberbullying incidents during the COVID-19 lockdown (Armitage, 2021). While solutions exist to identify harmful content online, their impact and performance have been limited by data and annotation challenges. Moreover, most work has targeted English and other widely spoken European languages with abundant resources. Systems that provide real-time analysis of online content to detect harmful material must be designed if the internet is to become a better and more respectful virtual space. However, for global impact, it is equally important to develop data and tools for the many widely spoken but under-documented languages, e.g. African and Asian languages. We expect research papers focused on building NLP tools and resources for low-resource languages, e.g. classification and/or information extraction models, datasets, and multilingual Transformer-based language models fine-tuned to detect the wide range of antisocial online behaviours already highlighted in the literature (Nadali et al., 2013; Slonje et al., 2013; Bauman, 2015) in non-English societies.
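As an illustration of the kind of system envisaged here, the sketch below fine-tunes a multilingual Transformer for binary offensive-content classification with the Hugging Face Trainer API. The model name (xlm-roberta-base), the label set, and the two-example corpus are placeholders standing in for a researcher's own annotated low-resource data, not prescribed choices.

    # Minimal sketch: fine-tuning a multilingual Transformer to flag offensive
    # content. The two comments below are placeholders; substitute a real corpus.
    import torch
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    texts = ["you are wonderful", "you are an idiot"]  # placeholder comments
    labels = [0, 1]                                    # 0 = acceptable, 1 = offensive

    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=2)

    encodings = tokenizer(texts, truncation=True, padding=True)

    class CommentDataset(torch.utils.data.Dataset):
        """Wraps tokenised comments and labels for the Trainer API."""
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="offense-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=CommentDataset(encodings, labels),
    )
    trainer.train()  # with real data, evaluate on a held-out split afterwards

With a genuinely multilingual annotated corpus, the same pipeline transfers across languages the pre-trained encoder covers, which is precisely what makes such models attractive for under-documented languages.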
Contributions that describe tools and datasets, especially multilingual resources, that can make critical actors in these sectors more conscious of ethical and bias issues, both implicit and explicit, in their decision-making processes, while also eliminating discrimination towards minorities and other at-risk people based on their protected attributes, will be most welcome.
We invite original, high-quality technical contributions that explore the following (non-exhaustive) list of topics:
● Analysis of bias in word embeddings and/or texts in the HR and legal domains
● Methods for debiasing pre-trained language models/word embeddings in multiple domains
● Language technologies for identifying/mitigating biased and non-inclusive language (gender, race, LGBTQ) in cross-disciplinary documents and online platforms
● Tools, technologies, datasets, and multilingual resources to detect bias/discrimination and harmful content, including flagging and reporting unethical practices such as workplace harassment and cyberbullying, to guarantee an inclusive society
● NLP applications for detecting offensive/abusive online content (e.g., cyber-aggression, cyberbullying, hate speech, misogyny, sexism, homophobia, transphobia, etc.)
● Corpora and lexicons for offensive/abusive online content in various languages, especially in multi-domain, low-resource, and multilingual settings
● Analysis of the fairness and reliability of tools developed for bias-free recruiting, adjudication, and online harmful content detection
● Text analysis and processing related to the humanities using computational methods, especially focusing on low-resource languages
A goal of the workshop is to connect computational linguists and interdisciplinary researchers working to promote fairness, equality, and inclusion in different sectors of society.
Important Dates:
Friday, April 8, 2022: Deadline for submission of papers or short abstracts
Tuesday, May 3, 2022: Notification of acceptance
Monday, May 23, 2022: Camera-ready papers due
Saturday, June 25, 2022: Workshop date
Proceedings:
The authors of accepted full papers (long or short) will be invited to submit their papers for publication in the LREC proceedings. Furthermore, authors of selected full papers (long) may later be invited to submit extended versions of their papers for consideration in a journal special issue or an edited book, as determined by the Organizing and Programme Committees. Final versions of long and short papers will be allotted one additional page (altogether 9 and 5 pages, respectively) excluding references. Extended abstracts will be allotted up to 5 pages (according to the short paper format) excluding references.
More information about submission and important dates can be found on the workshop website: https://sites.google.com/adaptcentre.ie/lateraisse/call-for-papers