Holistic Evaluation of Text-to-Image Models: Human evaluation procedure | HackerNoon
Briefly

In our study, we used the Amazon Mechanical Turk platform to collect judgments from human annotators on AI-generated images, enforcing strict participation requirements to ensure diverse, reliable feedback.
Each annotator earned $0.02 per multiple-choice question, for a total expenditure of $13,433.55 on human evaluation of the generated images.
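As a rough sanity check on these figures, a back-of-envelope calculation shows the scale of annotation implied by the stated rate and budget. Note this is only an estimate: the reported total may include Mechanical Turk platform fees on top of the per-question payment, which the naive division below ignores.

```python
# Back-of-envelope check on the annotation budget (numbers from the article).
# Caveat: the naive division ignores any MTurk platform fees, so the
# implied response count is an approximation, not an exact figure.
pay_per_question = 0.02    # USD paid per multiple-choice question
total_spend = 13_433.55    # USD total expenditure reported

implied_responses = total_spend / pay_per_question
print(f"Implied responses (ignoring fees): {implied_responses:,.0f}")
```

Under these assumptions, the budget corresponds to on the order of several hundred thousand individual question responses.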