Published research papers in AI fairness, bias evaluation, and governance frameworks
SHARP: Social Harm Analysis via Risk Profiles for Measuring Inequities in Large Language Models
Alok Abhishek · arXiv preprint · 2026
Introduces Social Harm Analysis via Risk Profiles (SHARP), a framework for multidimensional, distribution-aware evaluation of social harm in LLMs. SHARP models harm as a multivariate random variable and combines an explicit decomposition into bias, fairness, ethics, and epistemic reliability with a union-of-failures aggregation (sketched below). Applied to eleven frontier LLMs, it reveals that models with similar average risk can exhibit more than twofold differences in tail exposure and volatility.
Social Harm Evaluation · Large Language Models · Risk Profiling · Algorithmic Bias · Fairness
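The paper's exact risk formulation is not reproduced in this summary, but the shape of the method can be sketched. Assuming independent per-dimension harm probabilities (an assumption of this sketch, not necessarily of SHARP), the union-of-failures aggregate is P(any harm) = 1 − ∏d (1 − pd), and tail exposure and volatility are read off the per-prompt distribution of that aggregate rather than its mean:

```python
import numpy as np

# Hypothetical per-prompt harm scores for one model across the four SHARP
# dimensions (columns: bias, fairness, ethics, epistemic reliability).
# Values are simulated; the paper's actual scoring rubric is not shown here.
rng = np.random.default_rng(0)
harm = rng.beta(2, 8, size=(1000, 4))  # shape: (prompts, dimensions)

# Union-of-failures aggregation under an independence assumption:
# P(any harm on a prompt) = 1 - prod_d (1 - p_d).
p_any = 1.0 - np.prod(1.0 - harm, axis=1)

# Distribution-aware summaries: mean risk vs. tail exposure and volatility.
mean_risk = p_any.mean()
tail_95 = np.quantile(p_any, 0.95)   # tail exposure at the 95th percentile
volatility = p_any.std()

print(f"mean risk:     {mean_risk:.3f}")
print(f"95% tail risk: {tail_95:.3f}")
print(f"volatility:    {volatility:.3f}")
```

Two models with nearly identical `mean_risk` can differ sharply in `tail_95` and `volatility`, which is the kind of gap the paper's twofold finding refers to.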
BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models
Alok Abhishek · arXiv preprint · 2025
Introduces BEATS, a novel framework for evaluating Bias, Ethics, Fairness, and Factuality in Large Language Models. Presents a bias benchmark measuring performance across 29 distinct metrics spanning demographic, cognitive, and social biases, as well as ethical reasoning, group fairness, and factuality. Empirical results show that 37.65% of outputs from industry-leading models contained some form of bias.
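BEATS's per-metric scoring is not detailed in this summary, so the following is a minimal sketch under an assumed aggregation rule: an output counts toward the headline figure if any of the 29 metrics flags it. The flag rates below are simulated; only the 29-metric structure and the 37.65% framing come from the abstract above:

```python
import numpy as np

# Hypothetical boolean flags: rows are model outputs, columns are the 29
# BEATS metrics (demographic, cognitive, and social bias, ethics, fairness,
# factuality). The per-metric flagging logic is the benchmark's, not shown.
rng = np.random.default_rng(1)
flags = rng.random((5000, 29)) < 0.02   # simulated per-metric flag rates

# Assumed rule: an output exhibits "some form of bias" if any metric flags
# it; the headline figure is the share of such outputs.
biased_any = flags.any(axis=1)
headline_rate = biased_any.mean()

# Per-metric rates help locate which dimensions drive the aggregate.
per_metric = flags.mean(axis=0)

print(f"outputs with any bias flag: {headline_rate:.2%}")
print(f"highest single-metric rate: {per_metric.max():.2%}")
```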
Data and AI Governance: Promoting Equity, Ethics, and Fairness in Large Language Models
Alok Abhishek · MIT Science Policy Review · 2025
Covers approaches to systematically govern, assess, and quantify bias across the complete lifecycle of machine learning models. Building on the BEATS framework, it discusses data and AI governance approaches for addressing Bias, Ethics, Fairness, and Factuality in LLMs that are suitable for real-world applications, enabling rigorous benchmarking before production deployment.
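As an illustration of the benchmark-before-deployment idea, here is a hypothetical governance gate; the metric names and threshold values are invented for this sketch and are not taken from the paper:

```python
# Hypothetical pre-deployment gate: compare BEATS-style benchmark scores
# against governance thresholds before promoting a model to production.
THRESHOLDS = {
    "bias_rate": 0.10,         # max fraction of outputs flagged for bias
    "fairness_gap": 0.05,      # max disparity between demographic groups
    "factuality_error": 0.08,  # max rate of factually incorrect outputs
}

def deployment_gate(scores: dict[str, float]) -> bool:
    """Return True only if every governed metric is within its threshold."""
    failures = {m: s for m, s in scores.items()
                if m in THRESHOLDS and s > THRESHOLDS[m]}
    for metric, score in failures.items():
        print(f"FAIL {metric}: {score:.3f} > {THRESHOLDS[metric]:.3f}")
    return not failures

# Example: a candidate model's evaluation results.
candidate = {"bias_rate": 0.12, "fairness_gap": 0.03, "factuality_error": 0.05}
print("approved" if deployment_gate(candidate) else "blocked")
```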