Software testing is a crucial task. Unlike conventional software, AI software that relies on decision-making algorithms or classifiers must also be tested for discrimination, or bias. Such bias can cause the software to discriminate against individuals based on protected attributes such as race, gender, or nationality, and this unintended discriminatory behavior is a major concern. Previous work detected discrimination through random test generation, which produces results that vary across test executions. This variation indicates that each execution leaves some discrimination undetected. Although finding all discrimination is practically impossible without checking every possible input combination, it is important to detect as much of it as possible. We therefore propose Coverage-Guided Fairness Testing (CGFT). CGFT leverages combinatorial testing to generate an evenly-distributed test suite. We evaluated CGFT with two different datasets, training three models on each. The results show that CGFT finds more instances of unfairness than previous work.
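To make the idea of combinatorial test generation concrete, the sketch below builds a pairwise (2-way) covering test suite over a few hypothetical protected attributes using a standard greedy construction. This is a minimal illustration of combinatorial testing in general, not the paper's actual CGFT algorithm; the attribute names, value levels, and the `pairwise_suite` helper are all assumptions made for the example.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily build a pairwise (2-way) covering test suite.

    params: dict mapping attribute name -> list of possible values.
    Returns a list of test inputs (dicts) such that every pair of
    attribute values appears together in at least one test.
    """
    names = sorted(params)
    # All attribute-value pairs that must co-occur in some test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    # Candidate tests: the full Cartesian product (feasible for small spaces).
    candidates = [dict(zip(names, vals))
                  for vals in product(*(params[n] for n in names))]
    suite = []
    while uncovered:
        # Count how many still-uncovered pairs a candidate would cover.
        def gain(test):
            return sum(((a, test[a]), (b, test[b])) in uncovered
                       for a, b in combinations(names, 2))
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # safety guard; cannot occur with the full product
        suite.append(best)
        uncovered -= {((a, best[a]), (b, best[b]))
                      for a, b in combinations(names, 2)}
    return suite

if __name__ == "__main__":
    # Hypothetical protected attributes for illustration only.
    attrs = {
        "race": ["A", "B", "C"],
        "gender": ["F", "M"],
        "age_band": ["<30", "30-60", ">60"],
    }
    for test in pairwise_suite(attrs):
        print(test)
```

Under this construction, every pair of attribute values is exercised by some test while the suite stays far smaller than the exhaustive product, which is the usual motivation for covering-array-style generation; each generated input would then be fed to the model under test to check for discriminatory outcomes.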