Can AI be trustworthy?
DOI: https://doi.org/10.18012/arf.v11iEspecial.69910

Keywords: Artificial Intelligence, Distrust, Human replacement, AI ethics, Explainability

Abstract
In this paper I analyze why some people distrust AI and how AI can become trustworthy. Specifically, I describe the origins of that distrust, offer a diagnosis of the uncertainty it provokes, and propose a key to trustworthy AI. In the first section, I address how machines have not been considered trustworthy, which lies at the heart of distrust in AI: as I show, there has been a shift from an academic debate about AI minds, rooted in skepticism about other minds, to everyday fears based mainly on the worry about human replacement. In the second, I diagnose the uncertainty that AI provokes. Finally, I offer a key to trustworthy AI based on explainability. I argue that this route, together with institutionally certified experts, constitutes an enabling factor for trustworthy AI.
References
ANDERSON, C.; SINGER, M. The sensitive left and the impervious right: multilevel models. Comparative Political Studies, v. 41, p. 564-599, 2008.
BABBAGE, C. Excerpt from ‘Passages from the Life of a Philosopher’. In: BABBAGE, C. Babbage’s calculating engines: being a collection of papers relating to them; their history, and construction. Edited by Henry P. Babbage. Cambridge: Cambridge University Press, 2010. p. 83-288.
BAIER, A. Trust: the Tanner Lectures on Human Values, 1991. <https://tannerlectures.utah.edu/_documents/a-to-z/b/baier92.pdf>.
BAROTTA, P.; GRONDA, R. Epistemic inequality and the grounds of trust in scientific experts. In: FABRIS, A. (Ed.) Trust: a philosophical approach. Cham: Springer, 2020. p. 81-94.
BEAUCHAMP, T.; CHILDRESS J. Principles of biomedical ethics. Oxford: Oxford University Press, 2021.
BIRD, A.; TOBIN, E. Natural kinds. In: ZALTA, E.; NODELMAN, U. (Ed.) The Stanford Encyclopedia of Philosophy, 2023. <https://plato.stanford.edu/archives/spr2023/entries/natural-kinds>.
BROWN, M.; ROBINSON, L.; CAMPIONE, G. C.; WUENSCH, K.; HILDEBRANDT, T.; MICALI, N. Intolerance of uncertainty in eating disorders: a systematic review and meta-analysis. European Eating Disorders Review, v. 25, p. 329-343, 2017.
CARLETON, R. N. Fear of the unknown: one fear to rule them all? Journal of Anxiety Disorders, v. 41, p. 5-21, 2016.
COPELAND, J. Artificial intelligence: a philosophical introduction. Malden, MA: Blackwell, 2001.
COWLS, J.; FLORIDI, L.; TADDEO, M. The challenges and opportunities of ethical AI. Artificially Intelligent, 2018. <https://digitransglasgow.github.io/ArtificiallyIntelligent/contributions/04_Alan_Turing_Institute.html>.
DESCARTES, R. Oeuvres. Edited by Charles Adam and Paul Tannery, new edn, edited by the CNRS, 11 vols. Paris: Vrin, 1974-1976. [all Descartes’ works are cited as AT in this paper]
FLORIDI, L. Introduction: The importance of an ethics-first approach to the development of AI. In: FLORIDI, L. Ethics, governance, and policies in artificial intelligence. Cham: Springer, 2021a. p. 1-4.
FLORIDI, L. A unified framework of five principles for AI in society. In: FLORIDI, L. Ethics, governance, and policies in Artificial Intelligence. Cham: Springer, 2021b. p. 5-17.
FREDERIKSEN, M.; LARSEN, C.; LOLLE, H. Education and trust: exploring the association across social relationships and nations. Acta Sociologica, v. 59, p. 293-308, 2016.
KLEIN, N. AI machines aren’t ‘hallucinating’. But their makers are. The Guardian, May 8, 2023. <https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein>.
HAKHVERDIAN, A.; MAYNE, Q. Institutional trust, education, and corruption: a micro-macro interactive approach. The Journal of Politics, v. 74, p. 739-750, 2012.
HARDIN, R. Trust and trustworthiness. New York: Russell Sage Foundation, 2002.
HARDWIG, J. The role of trust in knowledge. The Journal of Philosophy, v. 88, p. 693-708, 1991.
HENDRIKS, F.; KIENHUES, D.; BROMME, R. Trust in science and the science of trust. In: BLÖBAUM, B. (Ed.) Trust and communication in a digitalized world. Cham: Springer, 2016. p. 143-160.
MCLEOD, C. Trust. In: ZALTA, E.; NODELMAN, U. (Ed.) The Stanford Encyclopedia of Philosophy, 2021. <https://plato.stanford.edu/archives/fall2021/entries/trust/>.
ORIGGI, G. Is trust an epistemological notion? Episteme v. 1, p. 61-72, 2004.
PUTNAM, H. The meaning of ‘meaning’. Minnesota Studies in the Philosophy of Science, v. 7, p. 215-271, 1975.
SAYGIN, A. P.; CICEKLI, I.; AKMAN, V. Turing Test: 50 years later. In: MOOR, J. (Ed.) The Turing test: the elusive standard of artificial intelligence. Dordrecht: Kluwer Academic Publishers, 2003. p. 23-78.
SEARLE, J. Making the social world: the structure of human civilization. Oxford: Oxford University Press, 2010.
SHANAHAN, M. The frame problem. In: ZALTA, E. (Ed.) The Stanford Encyclopedia of Philosophy, 2016. <https://plato.stanford.edu/archives/spr2016/entries/frame-problem>.
SHIHATA, S.; MCEVOY, P.; MULLAN, B.; CARLETON, R. Intolerance of uncertainty in emotional disorders: What uncertainties remain? Journal of Anxiety Disorders, v. 41, p. 115-124, 2016.
SMITH, B. The promise of Artificial Intelligence: reckoning and judgment. Cambridge, MA: The MIT Press, 2019.
SMITH, B.; LODDO, O. G.; LORINI, G. On credentials. Journal of Social Ontology, v. 6, p. 47-67, 2020.
SWADE, D. The difference engine: Charles Babbage and the quest to build the first computer. London: Penguin Books, 2002.
TANOVIC, E.; GEE, D. G.; JOORMANN, J. Intolerance of uncertainty: neural and psychophysiological correlates of the perception of uncertainty as threatening. Clinical Psychology Review, v. 60, p. 87-99, 2018.
TENNIE, C.; CALL, J.; TOMASELLO, M. Ratcheting up the ratchet: on the evolution of cumulative culture. Philosophical Transactions of the Royal Society B: Biological Sciences, v. 364, p. 2405-2415, 2009. <http://doi.org/10.1098/rstb.2009.0052>.
TOMASELLO, M. Why we cooperate. Cambridge, MA: The MIT Press, 2009.
TOMASELLO, M. Being human: a theory of ontogeny. Cambridge, MA: Harvard University Press, 2019.
TURING, A. Computing machinery and intelligence. Mind, v. 59, p. 433-460, 1950.
TURING, A. Can digital computers think? In: SHIEBER, S. (Ed.) The Turing test: verbal behavior as the hallmark of intelligence. Cambridge, MA: The MIT Press, 1951. p. 111-116.
TURING, A.; BRAITHWAITE, R.; JEFFERSON, G.; NEWMAN, M. Can automatic calculating machines be said to think? In: COPELAND, J. (Ed.) The essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life plus the secrets of Enigma. Oxford: Oxford University Press, 1952. p. 487-506.
WEIZENBAUM, J. Computer power and human reason: from judgement to calculation. San Francisco: W. H. Freeman & Company, 1976.
WILLIAMS, D. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation. The Economist, 2023. <https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation>.
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Journal general policy
1. This journal works under a Creative Commons License applied to online journals. That license can be read at the following link: Creative Commons Attribution 4.0 International (CC BY 4.0).
2. According to this License, a) the journal declares that authors hold the copyright of their articles without restrictions, and they can archive them as post-prints elsewhere; b) the journal allows the author(s) to retain publishing rights without restrictions.
Metadata Policy for information describing items in the repository
1. Anyone may access the metadata free of charge at any time.
2. The metadata may be re-used in any medium without prior permission, even for commercial purposes, provided the OAI Identifier or a link to the original metadata record is given, under the terms of the CC BY license referred to for the Journal.