ARTIFICIAL INTELLIGENCE AS A COGNITIVE EXTENSION: RETHINKING HUMAN KNOWLEDGE
DOI: https://doi.org/10.21564/2663-5704.68.349837

Keywords: artificial intelligence, cognitive extension, extended mind, epistemology, epistemic virtues, philosophy of technology, human-AI collaboration

Abstract
This study explores AI as a cognitive extension that integrates into human thinking, forming hybrid architectures with transformative potential for knowledge production. It identifies three key epistemic virtues for effective collaboration: critical prompting, algorithmic literacy, and epistemic discernment. Responsible use is essential to preserve human agency, avoid illusions of understanding, and prevent scientific monocultures. The work offers philosophical foundations for AI integration in education, science, and governance.