In a crowd market such as Amazon Mechanical Turk, the remuneration for Human Intelligence Tasks is set by the requester, who is given few cues for determining how to "fairly" pay workers. Moreover, current pricing methods are mostly binary: a worker is either paid in full or not at all, rather than receiving a "fair" wage that reflects the quality and utility of the completed work. The price should instead reflect the historical performance of the market and the requirements of the task. In this paper, we introduce a game-theoretic model that accounts for a more balanced set of market parameters, and we propose a pricing policy and a rating policy that incentivize requesters to offer "fair" compensation to crowdsourcing workers. We present findings from developing and applying this model to real data gathered from workers on Amazon Mechanical Turk, along with simulations run to validate our assumptions. Our simulation results also demonstrate that our policies motivate requesters to pay workers more "fairly" than under current market pricing.