The value of testimonial-based beliefs in the face of AI-generated quasi-testimony

Authors

DOI:

https://doi.org/10.18012/arf.v11iEspecial.70023

Keywords:

Trust, Testimony, AI, Epistemology, Social normativity

Abstract

The value of testimony as a source of knowledge has long been a subject of epistemological debate. The "trust theory of testimony" suggests that human testimony is based on an affective relationship supported by social norms. However, the advent of generative artificial intelligence challenges our understanding of genuine testimony. The concept of "quasi-testimony" seeks to characterize utterances produced by non-human entities that mimic testimony but lack certain fundamental attributes. This article analyzes these issues in depth, exploring philosophical perspectives on testimony and the implications of conversational AI technologies for our epistemic practices.


Author Biographies

Felipe Álvarez, Universidad Andrés Bello

Felipe Álvarez holds a Master's in Philosophy from the Universidad de Chile. He is currently a Ph.D. candidate in Philosophy at the Universidad de Chile and an adjunct assistant professor at the Universidad Andrés Bello. His areas of interest are contemporary epistemology and social ontology, with a focus on testimony and the Internet.

Ruth Espinosa, Universidad Andrés Bello

Ruth Espinosa holds a Ph.D. in Philosophy from the University of Leipzig (Germany) and a Master's in Philosophy with a specialization in epistemology from the Universidad de Chile. Her research areas include modern and contemporary epistemology. She currently serves as Head of the Department of Humanities and General Education at the Universidad Andrés Bello (Chile).

References

ADLER, J. Belief's own ethics. Cambridge, MA: The MIT Press, 2006.

BAIER, A. Trust and antitrust. Ethics, v. 96, p. 231-260, 1986.

BANOVIC, N.; YANG, Z.; RAMESH, A.; LIU, A. Being trustworthy is not enough: how untrustworthy artificial intelligence (AI) can deceive the end-users and gain their trust. Proceedings of the ACM on Human-Computer Interaction, v. 7, p. 1-17, 2023.

BUTLIN, P.; VIEBAHN, E. AI assertion. Unpublished manuscript, 21 pages, 2023. Available at <https://doi.org/10.31219/osf.io/pfjzu>

COADY, C. Testimony: a philosophical study. Oxford: Oxford University Press, 1992.

FAULKNER, P. Knowledge on trust. Oxford: Oxford University Press, 2011.

FAULKNER, P. Testimony and trust. In: SIMON, J. (Ed.). The Routledge handbook of trust and philosophy. New York: Routledge, 2020. p. 329-340.

FRAIWAN, M.; KHASAWNEH, N. A review of ChatGPT applications in education, marketing, software engineering, and healthcare: benefits, drawbacks, and research directions. Unpublished manuscript, 22 pages, 2023. Available at <https://doi.org/10.48550/arXiv.2305.00237>

FREIMAN, O. Analysis of beliefs acquired from a conversational AI: instruments-based beliefs, testimony-based beliefs, and technology-based beliefs. Episteme, p. 1-17, ahead of print, 2023.

FRICKER, E. Testimony and epistemic autonomy. In: LACKEY, J.; SOSA, E. (Ed.) The epistemology of testimony. Oxford: Oxford University Press, 2006. p. 225-252.

GOLDBERG, S. Anti-individualism: mind and language, knowledge and justification. Cambridge: Cambridge University Press, 2007.

GOLDBERG, S. What we owe each other, epistemologically speaking: ethico-political values in social epistemology. Synthese, v. 197, p. 4407-4423, 2020.

GRIMALTOS, T.; ROSELL, S. Mentiras y engaños: una investigación filosófica. Madrid: Cátedra, 2021.

GUO, B.; ZHANG, X.; WANG, Z.; JIANG, M.; NIE, J.; DING, Y.; YUE, J.; WU, Y. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection. Unpublished manuscript, 20 pages, 2023. Available at <https://doi.org/10.48550/arXiv.2301.07597>

HARDWIG, J. Epistemic dependence. The Journal of Philosophy, v. 82, p. 335-349, 1985.

HARDWIG, J. The role of trust in knowledge. The Journal of Philosophy, v. 88, p. 693-708, 1991.

HAWLEY, K. Trust: a very short introduction. Oxford: Oxford University Press, 2012.

HINCHMAN, E. Telling as inviting to trust. Philosophy and Phenomenological Research, v. 70, p. 562-587, 2005.

HOLLIS, M. Trust within reason. Cambridge: Cambridge University Press, 1998.

HUME, D. An enquiry concerning human understanding. Cambridge: Cambridge University Press, 1748.

KIDD, C.; BIRHANE, A. How AI can distort human beliefs. Science, v. 380, p. 1222-1223, 2023.

LEHRER, K. Knowledge and the trustworthiness of instruments. Monist, v. 78, p. 156-170, 1995.

LUHMANN, N. Trust and power. New York: Wiley, 1979.

MALLORY, F. Fictionalism about chatbots. Ergo, v. 10, p. 1082-1100, 2023.

McDOWELL, J. Knowledge by hearsay. In: McDOWELL, J. Meaning, knowledge, and reality. Cambridge, MA: Harvard University Press, 1998.

MERTON, R. Science and technology in a democratic order. Journal of Legal and Political Sociology, v. 1, p. 115-126, 1942.

MILLER, B.; FREIMAN, O. Trust and distributed epistemic labor. In: SIMON, J. (Ed.). The Routledge handbook of trust and philosophy. New York: Routledge, 2020. p. 341-353.

MORAN, R. Getting told and being believed. In: LACKEY, J.; SOSA, E. (Ed.). The epistemology of testimony. Oxford: Oxford University Press, 2006. p. 272-306.

O'NEILL, M. ChatGPT diagnoses cause of child's chronic pain after 17 doctors failed. The Independent, 13 September 2023. <https://www.independent.co.uk/news/health/chatgpt-diagnosis-spina-bifida-mother-son-b2410361.html>.

OWENS, D. Reason without freedom: the problem of epistemic normativity. New York: Routledge, 2000.

OWENS, D. Testimony and assertion: normativity and control. Oxford: Oxford University Press, 2006.

REID, T. Inquiry into the human mind on the principles of common sense. In: BEANBLOSSOM, R.; LEHRER, K. (Ed.). Thomas Reid's Inquiry and essays. Indianapolis: Hackett, 1983. p. 1-126.

ROSS, A. Why do we believe what we are told? Ratio, v. 28, p. 69-88, 1986.

WILLIAMS, B. Deciding to believe. In: WILLIAMS, B. Problems of the self: philosophical papers 1956-1972. Cambridge: Cambridge University Press, 1973. p. 136-151.

WRIGHT, S. Knowledge transmission. New York: Routledge, 2019.


Published

2024-11-06

How to Cite

Álvarez, F., & Espinosa, R. (2024). The value of testimonial-based beliefs in the face of AI-generated quasi-testimony. Aufklärung: Journal of Philosophy, 11(Especial), p. 25–38. https://doi.org/10.18012/arf.v11iEspecial.70023