
ai-fairness

Here are 24 public repositories matching this topic...

This repository contains the dataset and code used in our paper, “MENA Values Benchmark: Evaluating Cultural Alignment and Multilingual Bias in Large Language Models.” It provides tools to evaluate how large language models represent Middle Eastern and North African cultural values across 16 countries, multiple languages, and perspectives.

  • Updated Oct 16, 2025
  • HTML

Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems by design. Both technical and business AI stakeholders pursue fairness to meaningfully address problems like AI bias. While accuracy is one metric for evaluating the performance of a machine le…

  • Updated Oct 11, 2021
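As a rough illustration of the kind of group-fairness metric such toolkits report alongside accuracy, here is a minimal sketch of demographic parity difference: the gap in positive-prediction rates between two demographic groups. This is a generic example, not code from the repository above; the function name and the binary group encoding are assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) for each individual.
    group:  binary group membership (0/1) for each individual.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# A classifier that approves only group 0 has maximal disparity (1.0);
# equal approval rates across groups yield 0.0.
```

A value near 0 indicates the model predicts the positive class at similar rates across groups; a value near 1 indicates it favors one group almost exclusively.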

Analyzing geographic and cultural bias in AI therapy advice. Interactive visualization showing how AI systems draw from predominantly Anglophone sources when advising users about culturally specific dilemmas in India, Nigeria, and the Philippines.

  • Updated Dec 18, 2025
  • Python

This project introduces a method to mitigate demographic bias in generative face aging models. We adapt the StyleGAN-based SAM model by adding a race-consistency loss, enforced by the DeepFace classifier, to promote more equitable and identity-preserving age transformations across different racial groups.

  • Updated Dec 13, 2023
  • Python
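The race-consistency loss described above can be sketched as a divergence penalty between a race classifier's predicted distributions on the source face and on the age-transformed output. This is a hedged, generic NumPy illustration of the idea, not the project's actual implementation (which uses a StyleGAN-based generator and the DeepFace classifier); the function names and logit inputs here are assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def race_consistency_loss(logits_src, logits_aged):
    """Mean KL divergence KL(p_src || p_aged) between the race
    distributions predicted for the source face and its aged output.

    logits_src, logits_aged: classifier logits of shape (batch, n_classes),
    e.g. from a race classifier applied to both images (hypothetical setup).
    """
    p = softmax(np.asarray(logits_src, dtype=float))
    q = softmax(np.asarray(logits_aged, dtype=float))
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# The loss is 0 when both images yield identical race predictions and
# grows as the aged face's predicted race distribution drifts away.
```

Added to the generator's training objective, a term like this discourages age transformations that shift the perceived race of the subject, which is the equity goal the project states.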
