The picture was created by Midjourney for the prompt: “photo of a group of students sitting casually in a room with a large computer, all holding smartphones and tablets, bright lines show digital networking, lock symbols refer to security.”

Fairness

Equal Opportunities in Federated Learning

Artificial intelligence models are traditionally trained on centrally stored data sets. However, this approach poses significant challenges when dealing with sensitive data, particularly in terms of privacy and security. Federated Learning (FL) offers a secure alternative by training models on decentralised data, ensuring that sensitive information remains localised. Despite its advantages, one of the main challenges of FL is to achieve an optimal balance between fairness and model accuracy. Scientists at the L3S Research Center have addressed this challenge in a recent study and presented an innovative solution called FairTrade. This approach improves fairness while maintaining high accuracy in federated learning applications, paving the way for fairer and more reliable AI systems.

Balancing accuracy and fairness

Federated learning allows a shared AI model to be trained with data from multiple devices, such as smartphones or tablets, without the data ever leaving those devices. This approach improves data privacy and security. However, it also presents challenges: because the data differs from device to device, the model may perform unevenly across them. Groups that are underrepresented in the data, for example, can receive biased or unfair predictions from the AI model. ‘FairTrade aims to minimise this discrimination in predictions,’ says Maryam Badar, lead author of the study. ‘By using multi-objective optimisation, it tries to achieve an optimal trade-off between model accuracy and fairness.’ The framework can be adapted to different concepts of fairness, including statistical and causal fairness, depending on the requirements of the application.
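For readers who want a more concrete picture, the sketch below illustrates the general idea in Python. It is not the FairTrade algorithm itself, which searches for Pareto-optimal trade-offs between balanced accuracy and fairness; it merely shows how a federated-averaging loop can combine a standard accuracy loss with a simple statistical-fairness penalty during local training. The penalty weight, the synthetic data and all names are illustrative assumptions.

# Minimal sketch (not FairTrade itself): federated averaging with a
# fairness-penalised local loss, to illustrate optimising accuracy and
# fairness jointly during federated training. All names and the
# synthetic data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, s, lam=1.0, lr=0.1, epochs=5):
    """One client's training: logistic loss + demographic-parity penalty.

    s is a binary sensitive attribute; the penalty (squared gap between
    the groups' mean predicted scores) is a simple statistical-fairness
    surrogate, weighted by lam against plain accuracy.
    """
    w = w.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_acc = X.T @ (p - y) / len(y)           # logistic-loss gradient
        dp = p * (1 - p)                            # sigmoid derivative
        gap = p[s == 1].mean() - p[s == 0].mean()   # demographic-parity gap
        grad_gap = (X[s == 1] * dp[s == 1, None]).mean(axis=0) \
                 - (X[s == 0] * dp[s == 0, None]).mean(axis=0)
        w -= lr * (grad_acc + lam * 2 * gap * grad_gap)
    return w

# Three hypothetical clients with differently distributed local data.
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    s = (rng.random(200) < 0.3).astype(int)         # sensitive attribute
    y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=200) > 0).astype(float)
    clients.append((X, y, s))

w_global = np.zeros(5)
for _ in range(20):                                 # federated rounds
    # Each client trains locally; the raw data never leaves the client.
    local_ws = [local_update(w_global, X, y, s) for X, y, s in clients]
    # The server aggregates model parameters only (FedAvg-style mean).
    w_global = np.mean(local_ws, axis=0)

for i, (X, y, s) in enumerate(clients):
    p = sigmoid(X @ w_global)
    print(f"client {i}: accuracy={((p > 0.5) == y).mean():.2f}, "
          f"parity gap={abs(p[s == 1].mean() - p[s == 0].mean()):.3f}")

Raising the hypothetical penalty weight lam shrinks the gap between the groups' predictions at some cost in accuracy; FairTrade's contribution is precisely to navigate this trade-off in a principled, Pareto-optimal way rather than fixing the weight by hand.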

Experiments with real data sets from areas such as banking, human resources and healthcare have shown remarkable results. FairTrade improved fairness in all scenarios without compromising accuracy. Even with highly unbalanced data sets, it proved to be a reliable alternative to existing methods.

Many applications

FairTrade has potential applications in a wide range of fields. In any context where AI systems make personalised decisions, the approach can help achieve fairer and more balanced outcomes.

In medical diagnostics, for example, FairTrade could compensate for biases that result from training models on unevenly distributed patient data. By mitigating these biases, the method can reduce discrimination against minority groups in healthcare, ultimately leading to more accurate diagnoses and better treatment outcomes.

Similarly, in lending, banks increasingly rely on AI to assess the creditworthiness of their customers. FairTrade can help prevent biased decisions that unfairly disadvantage certain communities, supporting a more inclusive and equitable financial system.

The researchers stress that fairness must be built into AI systems from the outset; post-hoc adjustments to a model’s outputs are not sufficient to fully address fairness. ‘Our results show that it is possible to incorporate fairness during the training process without compromising the performance of the model,’ says Badar. This is an important step towards the ethical and fair use of AI technologies.

Maryam Badar, Sandipan Sikdar, Wolfgang Nejdl, Marco Fisichella: FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning. AAAI 2024: 10962-10970. ojs.aaai.org/index.php/AAAI/article/view/28971

Contact

Maryam Badar, M. Sc.

Maryam Badar is a PhD student at L3S. Her research focuses on the fairness of AI technologies.