TY - GEN
T1 - Incentivizing distributive fairness for crowdsourcing workers
AU - Qiu, Chenxi
AU - Squicciarini, Anna
AU - Hanrahan, Benjamin
N1 - Publisher Copyright:
© 2019 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
PY - 2019
Y1 - 2019
N2 - In a crowd market such as Amazon Mechanical Turk, the remuneration of Human Intelligence Tasks is determined by the requester, who is not given many cues to ascertain how to "fairly" pay workers. Furthermore, the current methods for setting a price are mostly binary - in that the worker either gets paid or not - as opposed to paying workers a "fair" wage based on the quality and utility of the work completed. Instead, the price should better reflect the historical performance of the market and the requirements of the task. In this paper, we introduce a game-theoretic model that takes into account a more balanced set of market parameters, and propose a pricing policy and a rating policy to incentivize requesters to offer "fair" compensation for crowdsourcing workers. We present our findings from developing and applying this model on real data gathered from workers on Amazon Mechanical Turk, along with simulations that we ran to validate our assumptions. Our simulation results also demonstrate that our policies motivate requesters to pay their workers more "fairly" compared with the payments set by the current market.
UR - http://www.scopus.com/inward/record.url?scp=85075425967&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075425967&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85075425967
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 404
EP - 412
BT - 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Y2 - 13 May 2019 through 17 May 2019
ER -