As the use of crowdsourcing spreads, the need to ensure the quality of crowdsourced work grows with it. While quality control in crowdsourcing has been widely studied, established mechanisms can still be improved by taking into account further factors that affect quality. However, because crowdsourcing relies on humans, it is difficult to identify and account for all such factors. In this study, we conduct an initial investigation into the effect of crowd type and task complexity on work quality by crowdsourcing a simple and a more complex version of a data extraction task to paid and unpaid crowds. We then measure the quality of the results in terms of their similarity to a gold-standard data set. Our experiments show that the unpaid crowd produces high-quality results regardless of task type, while the paid crowd yields better results on the simple task. We intend to extend our work to integrate existing quality control mechanisms and to run further experiments with more varied crowd members.
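The abstract does not specify how similarity to the gold standard is computed; purely as an illustration, the sketch below assumes a per-field exact-match score averaged over records. The field names, helper functions, and usage data are hypothetical, not the paper's actual metric or code.

```python
# Illustrative sketch only: scores crowdsourced extractions against a gold standard
# using per-field exact-match similarity, then averages over all records.

def record_similarity(extracted: dict, gold: dict) -> float:
    """Fraction of gold-standard fields reproduced exactly by the worker."""
    if not gold:
        return 0.0
    matches = sum(
        1 for field, value in gold.items()
        if str(extracted.get(field, "")).strip().lower() == str(value).strip().lower()
    )
    return matches / len(gold)


def crowd_quality(extracted_records: list[dict], gold_records: list[dict]) -> float:
    """Mean similarity between a crowd's submissions and the gold-standard records."""
    scores = [record_similarity(e, g) for e, g in zip(extracted_records, gold_records)]
    return sum(scores) / len(scores) if scores else 0.0


# Hypothetical usage: higher scores indicate closer agreement with the gold standard.
gold = [{"title": "Paper A", "year": "2014"}]
paid_crowd = [{"title": "Paper A", "year": "2015"}]
print(crowd_quality(paid_crowd, gold))  # 0.5
```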