Book Project II

This project explores how AI systems are reshaping the conditions under which human beings are seen, heard, and recognized. As algorithmic classification increasingly determines access to rights, resources, and representation, it also redefines the criteria by which people appear as intelligible subjects of concern. These systems frequently bypass traditional legal frameworks, operating instead through technical infrastructures that are often opaque yet profoundly consequential.

In this context, the human is no longer at the center of justification but is instead mediated through data, categories, and probabilistic models that often reproduce older hierarchies in new forms. In response, I draw on anticolonial ethical traditions that have long insisted on the centrality of human agency and participation in political life; their concepts of self-respect and participatory justification offer powerful tools for rethinking how legitimacy is grounded. This work argues that the legitimacy of AI systems should rest not on abstract fairness but on the capacity of human beings to make meaning, contest classification, and reshape the terms of recognition. Rather than framing AI ethics solely as a problem of bias or error, I approach it as a deeper philosophical and political challenge: how to build systems that honor the complexity, plurality, and dignity of human life in a rapidly transforming world.