I am pleased to announce that our research paper, *"Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment"*, has been accepted for presentation at the main technical track of the ACM Conference on Intelligent User Interfaces 2023 (IUI '23).

In this work, we introduced ClarifyIT, an interactive system that helps microtask crowdsourcing requesters identify and correct clarity flaws commonly found in task descriptions; such flaws have been shown to negatively impact the quality of the data collected from workers. In contrast to conventional approaches, which rely on the assistance of workers or experts and can be time-consuming and expensive, our system employs NLP models trained on real-world task descriptions, providing feedback in milliseconds without costing a single penny. :money_mouth_face:
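The paper has the full details of the models, but to give a flavor of the idea, a clarity-flaw detector can be framed as supervised text classification over task descriptions. The sketch below is a deliberately toy illustration, not our implementation: the training examples, the single flaw label, and the TF-IDF plus logistic-regression setup are all stand-ins for the real models and corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: task descriptions labeled 1 if they exhibit a
# (made-up) clarity flaw, 0 otherwise. ClarifyIT's actual models are
# trained on a large corpus of real-world task descriptions.
descriptions = [
    "Label each tweet as positive, negative, or neutral. Example: 'I love this!' -> positive.",
    "Highlight every person name in the text, as shown in the example below.",
    "Do the task properly or your work will be rejected.",
    "Just answer all the questions correctly.",
]
has_flaw = [0, 0, 1, 1]

# A linear classifier over TF-IDF features; inference takes milliseconds.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(descriptions, has_flaw)

# Score a new task description and flag it if the flaw probability is high.
new_description = "Read the text and do what is asked."
prob = model.predict_proba([new_description])[0, 1]
if prob > 0.5:
    print(f"Possible clarity flaw detected (p = {prob:.2f})")
```

In a real system, one would want a separate classifier per flaw type (or a multi-label model), so that each flag can be paired with a concrete suggestion for how to revise the description.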

*Screenshot of ClarifyIT*

In our evaluation, we not only assessed the usability of ClarifyIT with requesters but also tested the system with actual crowd workers to verify that the resulting task descriptions were perceived as high quality. The results indicated that 65% of requesters found the tool helpful or very helpful, and 76% of workers judged the overall clarity of task descriptions created with ClarifyIT to be improved.

I feel fortunate to have had the chance to work with Zahra, Prof. Gadiraju, and Prof. Wachsmuth on such an interesting project! I owe them all a huge debt of gratitude for giving me this amazing opportunity. :blush: