TY - GEN
T1 - Coverage-Guided Fairness Testing
AU - Perez Morales, Daniel
AU - Kitamura, Takashi
AU - Takada, Shingo
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - Software testing is a crucial task. Unlike conventional software, AI software that relies on decision-making algorithms or classifiers must also be tested for discrimination, or bias. Such bias can lead to discrimination against individuals based on protected attributes such as race, gender, or nationality, and discrimination as an unintended behavior is a major concern. Previous work tested for discrimination randomly, which produced varying results across test executions; these variations indicate that each execution leaves some discrimination undetected. Although finding all discrimination is practically impossible without checking every possible combination of inputs to the system, it is important to detect as much discrimination as possible. We thus propose Coverage-Guided Fairness Testing (CGFT). CGFT leverages combinatorial testing to generate an evenly distributed test suite. We evaluated CGFT with two different datasets, creating three models with each. The results show that CGFT finds more unfairness than previous work.
KW - Combinatorial testing
KW - Fairness
KW - Machine learning
KW - Testing
UR - http://www.scopus.com/inward/record.url?scp=85111426693&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111426693&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-79474-3_13
DO - 10.1007/978-3-030-79474-3_13
M3 - Conference contribution
AN - SCOPUS:85111426693
SN - 9783030794736
T3 - Studies in Computational Intelligence
SP - 183
EP - 199
BT - Computer and Information Science, 2021
A2 - Lee, Roger
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th IEEE/ACIS International Summer Semi-Virtual Conference on Computer and Information Science, ICIS 2021
Y2 - 23 June 2021 through 25 June 2021
ER -