Title:
ALGORITHMIC SOCIAL INJUSTICES: ANTECEDENTS AND MITIGATIONS.
Authors:
Tanriverdi, Hüseyin1 huseyin.tanriverdi@mccombs.utexas.edu, Akinyemi, John-Patrick Olatunji2 johnpatrick.akinyemi@mccombs.utexas.edu
Source:
MIS Quarterly. Dec2025, Vol. 49 Issue 4, p1417-1448. 32p. 16 Charts.
Database:
Business Source Elite

A key assumption in data science is that the fairness of an algorithm depends on its accuracy. Antecedents that create accuracy problems are expected to reduce fairness and cause algorithmic social injustices. We theorize why complexities in ground truths, IT ecosystems, and statistical models of algorithms can also generate algorithmic social injustices, above and beyond the indirect effects of antecedents, through the mediation of accuracy problems. We also theorize technology design and organizational mitigation mechanisms for taming such complexities and reducing algorithmic social injustices. We tested the proposed theory in a sample of 363 matched pairs of problematic and problem-free algorithms. We found that complexities in ground truths affected algorithmic social injustices directly rather than through the mediation of accuracy problems. Failures in complex IT ecosystems of algorithms did not affect the likelihood of algorithmic social injustices, but they caused damage directly and indirectly through the mediation of accuracy problems. Failures in complex statistical models significantly increased algorithmic social injustices both directly and indirectly through the mediation of accuracy problems. The results indicate that agentic algorithms produce social injustices not only through accuracy problems but also through complexities in their ground truths, IT ecosystems, and statistical models. The proposed complexity-taming mechanisms are effective in reducing algorithmic social injustice risks through (1) the user organization’s quality in managing the algorithm’s stakeholders, (2) the design of algorithms with a large scope of human-like interaction capabilities, (3) the developer organization’s algorithmic risk mitigations, and (4) the user organization’s algorithmic risk mitigations. [ABSTRACT FROM AUTHOR]
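
The abstract's core statistical claim distinguishes direct effects of complexity antecedents on injustices from indirect effects mediated by accuracy problems. The sketch below is a minimal Baron-Kenny-style mediation decomposition, not the authors' actual model: the variable names (gt_complexity, accuracy_problems, social_injustice) and the simulated data are hypothetical stand-ins used only to illustrate how a direct path and a mediated path are separated.

```python
# Hedged sketch of a Baron-Kenny mediation decomposition in Python.
# All variables and effect sizes below are simulated illustrations,
# not the paper's data or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 363  # mirrors the abstract's sample of 363 matched pairs

gt_complexity = rng.normal(size=n)                            # antecedent (X)
accuracy_problems = 0.5 * gt_complexity + rng.normal(size=n)  # mediator (M)
social_injustice = (0.4 * gt_complexity                       # direct path
                    + 0.6 * accuracy_problems                 # mediated path
                    + rng.normal(size=n))                     # outcome (Y)

df = pd.DataFrame({"X": gt_complexity,
                   "M": accuracy_problems,
                   "Y": social_injustice})

total = smf.ols("Y ~ X", data=df).fit()      # total effect (c)
a_path = smf.ols("M ~ X", data=df).fit()     # antecedent -> mediator (a)
direct = smf.ols("Y ~ X + M", data=df).fit() # direct (c') and b paths

print("total effect (c):   ", round(total.params["X"], 3))
print("a path (X -> M):    ", round(a_path.params["X"], 3))
print("b path (M -> Y|X):  ", round(direct.params["M"], 3))
print("direct effect (c'): ", round(direct.params["X"], 3))
print("indirect (a*b):     ", round(a_path.params["X"] * direct.params["M"], 3))
```

In this decomposition, a significant c' alongside a significant a*b product corresponds to the abstract's finding that some complexities harm fairness both directly and through accuracy problems, while a significant c' with a negligible a*b corresponds to the purely direct effect reported for ground-truth complexities.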

Full text is not available via guest access.