Justicia algorítmica y autodeterminación deliberativa

Author: Daniel Innerarity, Ikerbasque Foundation for Science (UPV/EHU) / Chair Artificial Intelligence and Democracy (European University Institute, Florence)
Journal: Isegoría: Revista de filosofía moral y política

ISSN: 1130-2097

Year of publication: 2023

Issue: 68

Type: Article

DOI: 10.3989/ISEGORIA.2023.68.23 (open access)

Abstract

If democracy consists in making it possible for everyone to have an equal chance of influencing the decisions that affect them, digital societies have to ask themselves how to ensure that the new environments make that equality feasible. The first difficulties are conceptual: understanding how the interaction between humans and algorithms is configured, what the learning of these devices consists of, and what the nature of their biases is. Immediately afterwards we run into the unavoidable question of what kind of equality we are trying to secure, given the diversity of conceptions of justice that exist in our societies. If articulating that pluralism is not a matter that can be settled by an aggregative technique, but rather requires political compromises, then a deliberative conception of democracy seems the best suited to achieving the equality to which democratic societies aspire.
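The tension between competing conceptions of equality can be made concrete through the well-known conflict between formal fairness criteria for classifiers. The following minimal sketch is not drawn from the article; the group labels and confusion-matrix counts are invented for illustration. It computes two common criteria, demographic parity (equal rates of being flagged) and false-positive-rate equality, for the same hypothetical risk classifier applied to two groups.

```python
# Hypothetical confusion-matrix counts for one binary risk classifier
# applied to two groups with different base rates (invented numbers).
groups = {
    "A": {"tp": 40, "fp": 10, "tn": 40, "fn": 10},  # base rate 0.50
    "B": {"tp": 20, "fp": 30, "tn": 45, "fn": 5},   # base rate 0.25
}

for name, c in groups.items():
    n = sum(c.values())
    flagged = (c["tp"] + c["fp"]) / n      # demographic parity: share flagged as high risk
    fpr = c["fp"] / (c["fp"] + c["tn"])    # false-positive rate among the truly low-risk
    ppv = c["tp"] / (c["tp"] + c["fp"])    # how reliable a high-risk flag is
    print(f"group {name}: flagged={flagged:.2f}, FPR={fpr:.2f}, PPV={ppv:.2f}")
```

Both groups are flagged at the same rate (0.50), yet group B faces a false-positive rate of 0.40 against 0.20 for group A, and a high-risk flag is far less reliable for B (PPV 0.40 vs. 0.80). Deciding which of these quantities ought to be equalized is itself a substantive judgment about justice rather than a neutral aggregative step.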
