This article presents the Laissez-Faire Prompts Dataset, a resource for investigating biases in generative language models, in particular their socio-psychological impact on groups defined by race, gender, and sexual orientation. It addresses harms of omission, subordination, and stereotyping. The dataset was developed with guidance from domain experts and is accompanied by the methods used for data collection and analysis. The aim is to help developers and researchers understand and mitigate biases in AI systems, improving fairness and representation.
The Laissez-Faire Prompts Dataset is designed for studying biases in language models, with a focus on socio-psychological harms related to gender, race, and sexual orientation.
We describe the motivation for and construction of the dataset, which examines inequities arising from generative language models as they are used by writers and students.
Collection
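The collection details themselves are omitted above. As a rough illustration only, the sketch below shows one way open-ended prompt records of this kind might be issued to a generative model and the responses stored for later bias analysis; the field names (domain, prompt), the example prompts, and the query_model helper are assumptions for illustration, not the dataset's actual schema or protocol.

    # Hypothetical sketch: issue open-ended prompts to a generative model
    # and save responses for downstream bias analysis.
    # Field names and query_model are illustrative assumptions only.
    import json

    def query_model(prompt: str) -> str:
        # Stand-in for a real model API call; returns a fixed placeholder.
        return "<model response>"

    # Example records in the spirit of prompts aimed at writers and students.
    prompt_records = [
        {"domain": "writing", "prompt": "Write a short story about a student who loves science."},
        {"domain": "learning", "prompt": "Describe a classmate who helps others with homework."},
    ]

    # Collect one response per prompt and write them out as JSON lines.
    with open("responses.jsonl", "w") as f:
        for record in prompt_records:
            row = {**record, "response": query_model(record["prompt"])}
            f.write(json.dumps(row) + "\n")

Storing each response alongside its originating prompt keeps the pairing needed for any later analysis of omission, subordination, or stereotyping in the generated text.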