Algorithmic Bias and Social Stratification

Authors

  • Elizabeth Author

Keywords

Algorithmic Bias, Social Stratification, Machine Learning, Discrimination, Housing Algorithms, Employment Algorithms, Criminal Justice Algorithms, Digital Inequality

Abstract

This paper examines how machine learning algorithms reproduce and amplify existing social inequalities across three critical domains: housing, employment, and criminal justice systems. Drawing on critical algorithm studies and digital sociology theories, this research demonstrates that algorithmic systems, far from being neutral technical tools, encode and perpetuate historical patterns of discrimination based on race, class, and gender. Through systematic analysis of algorithmic decision-making processes, this paper reveals how biased training data, opaque algorithmic architectures, and feedback loops create self-reinforcing cycles of disadvantage. The findings indicate that algorithmic systems deployed in these domains disproportionately harm marginalized communities by limiting access to housing, restricting employment opportunities, and intensifying surveillance and punishment. This research contributes to understanding the sociotechnical mechanisms through which algorithms become instruments of social stratification, and calls for increased algorithmic accountability, transparency, and justice-oriented design practices.

Published

2026-02-27