THE UNINTENDED CONSEQUENCES OF ALGORITHMIC TRANSPARENCY ON TRUST: A MULTI-DISCIPLINARY ANALYSIS
DOI: https://doi.org/10.63356/asb.2025.002
Keywords: algorithmic transparency, cognitive overload, strategic opacity, trust preservation, agent-based modeling
Abstract
This paper challenges the widely held assumption that increased algorithmic transparency universally enhances user trust. Through interdisciplinary analysis spanning legal, ethical, and technical domains, we demonstrate that, paradoxically, excessive transparency can erode trust in algorithmic systems. Using agent-based simulations, we model how users with varying cognitive thresholds respond to transparency events, revealing what we term the "transparency overload" phenomenon. Our findings suggest that strategic opacity (the deliberate limitation of certain types of algorithmic disclosure) may better preserve trust in specific contexts. We propose "context-dependent transparency" as an alternative framework that balances accountability with users' cognitive limitations. This research has significant implications for policymakers and system designers seeking to build genuinely trustworthy algorithmic systems rather than merely transparent ones.
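The paper's agent-based model is not reproduced on this page, but the mechanism the abstract describes (a heterogeneous population of agents with differing cognitive thresholds, repeated transparency events, and trust that rises under digestible disclosure and falls under overload) can be sketched compactly. The Python sketch below is purely illustrative: every class name, parameter value, and update rule is a hypothetical stand-in chosen to exhibit the "transparency overload" pattern, not the authors' actual model.

```python
# Minimal illustrative sketch of a transparency-overload agent-based
# simulation. All names and numbers are hypothetical stand-ins.

import random
from dataclasses import dataclass, field

@dataclass
class User:
    """An agent whose trust responds to disclosed algorithmic detail."""
    cognitive_threshold: float           # disclosure load the agent can absorb
    trust: float = 0.5                   # trust level, kept in [0, 1]
    processed_load: float = field(default=0.0, init=False)

    def observe(self, disclosure_load: float) -> None:
        """Process one transparency event of the given disclosure load."""
        self.processed_load += disclosure_load
        if self.processed_load <= self.cognitive_threshold:
            # Digestible transparency builds trust.
            self.trust = min(1.0, self.trust + 0.05)
        else:
            # Overload: additional detail confuses rather than reassures.
            self.trust = max(0.0, self.trust - 0.08)

def simulate(n_users: int = 1000, n_events: int = 20,
             disclosure_load: float = 1.0, seed: int = 42) -> float:
    """Expose a population with heterogeneous cognitive thresholds to
    repeated transparency events and return the mean final trust."""
    rng = random.Random(seed)
    users = [User(cognitive_threshold=rng.uniform(5.0, 25.0))
             for _ in range(n_users)]
    for _ in range(n_events):
        for user in users:
            user.observe(disclosure_load)
    return sum(u.trust for u in users) / n_users

if __name__ == "__main__":
    # Sweep disclosure intensity: mean trust stays high while cumulative
    # load sits within most agents' thresholds, then degrades beyond them.
    for load in (0.2, 0.5, 1.0, 2.0, 4.0):
        print(f"disclosure load {load:.1f}: "
              f"mean trust {simulate(disclosure_load=load):.3f}")
```

Sweeping the disclosure load in this toy setup reproduces the qualitative pattern the abstract reports: trust is preserved while cumulative disclosure remains within agents' cognitive thresholds and erodes once it exceeds them.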
References
Aivodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S. & Tapp, A. (2019). Fairwashing: The risk of rationalization. In K. Chaudhuri & R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning (PMLR 97, pp. 161-170). PMLR.
Ananny, M. & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645
Bhattarai, I., Pu, C., Choo, K.-K. R., & Korać, D. (2024). A lightweight and anonymous application-aware authentication and key agreement protocol for the Internet of Drones. IEEE Internet of Things Journal, 11(11), 19790–19803. https://doi.org/10.1109/JIOT.2024.3367799
Brandeis, L. D. (1914). Other people's money and how the bankers use it. Frederick A. Stokes Company.
Buhmann, A., Paßmann, J. & Fieseler, C. (2020). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics, 163(2), 265-280. https://doi.org/10.1007/s10551-019-04226-4
Dietvorst, B. J., Simmons, J. P. & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. https://doi.org/10.1037/xge0000033
Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K., & Dugan, C. (2019). Explaining models: An empirical study of how explanations impact fairness judgment. In N. Oliver & J. Smith (Eds.), Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, Marina del Rey, CA, USA, March 16-20, 2019 (pp. 275-285). ACM. https://doi.org/10.1145/3301275.3302310
European Commission, High-Level Expert Group on Artificial Intelligence. (2019, April 8). Ethics guidelines for trustworthy AI. Publications Office of the European Union. Retrieved from: https://ec.europa.eu/digital-strategy/en/library/ethics-guidelines-trustworthy-ai
GDPR (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Grimmelikhuijsen, S. G. & Meijer, A. J. (2014). Effects of transparency on the perceived trustworthiness of a government organization: Evidence from an online experiment. Journal of Public Administration Research and Theory, 24(1), 137-157. https://doi.org/10.1093/jopart/mus048
Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing. ISBN: 9781786430229.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan. ISBN: 9780374533557.
Korać, D., Čvokić, D. & Simić, D. (2025a). Computational engineering approach-based modeling of safety and security boundaries: A review, novel model, and comparison. Archives of Computational Methods in Engineering. Advance online publication. https://doi.org/10.1007/s11831-025-10352-2
Korać, D., Damjanović, B., Simić, D. & Choo, K.-K. R. (2022). A hybrid XSS attack (HYXSSA) based on fusion approach: Challenges, threats and implications in cybersecurity. Journal of King Saud University - Computer and Information Sciences, 34(10, Part B), 9284–9300. https://doi.org/10.1016/j.jksuci.2022.02.015
Korać, D., Damjanović, B., Simić, D. & Pu, C. (2025b). Management of evaluation processes and creation of authentication metrics: Artificial intelligence-based fusion framework. Information Processing & Management, 62(6), 104233. https://doi.org/10.1016/j.ipm.2025.104233
Korać, D. & Simić, D. (2019). Fishbone model and universal authentication framework for evaluation of multifactor authentication in mobile environment. Computers & Security, 85, 313–332. https://doi.org/10.1016/j.cose.2019.05.012
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084. https://doi.org/10.1098/rsta.2018.0084
Lazer, D., Kennedy, R., King, G. & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203-1205. https://doi.org/10.1126/science.1248506
Lundberg, S. M. & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765-4774.
McDonald, A. M. & Cranor, L. F. (2008). The cost of reading privacy policies. I/S: A Journal of Law and Policy for the Information Society, 4(3), 543-568. Retrieved from: https://kb.osu.edu/server/api/core/bitstreams/a9510be5-b51e-526d-aea3-8e9636bc00cd/content
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. https://doi.org/10.1016/j.artint.2018.07.007
Nagel, T. (1979). Mortal questions (pp. 24-38). Cambridge University Press. Retrieved from: https://rintintin.colorado.edu/~vancecd/phil1100/Nagel1.pdf
Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S. & Doshi-Velez, F. (2018). How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682. https://doi.org/10.48550/arXiv.1802.00682
Nissenbaum, H. (2011). A contextual approach to privacy online. Daedalus, 140(4), 32-48. https://doi.org/10.1162/DAED_a_00113
O'Neill, O. (2002). A question of trust: The BBC Reith Lectures 2002. Cambridge University Press. ISBN: 9780521529969.
Pieters, W. (2011). Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology, 13(1), 53-64. https://doi.org/10.1007/s10676-010-9253-3
Ribeiro, M. T., Singh, S. & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM. https://doi.org/10.1145/2939672.2939778
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215. https://doi.org/10.1038/s42256-019-0048-x
Schermer, B. W., Custers, B. & van der Hof, S. (2014). The crisis of consent: How stronger legal protection may lead to weaker consent in data protection. Ethics and Information Technology, 16(2), 171-182. https://doi.org/10.1007/s10676-014-9343-8
Sunstein, C. R. (2014). Why nudge? The politics of libertarian paternalism. Yale University Press. Retrieved from: https://yalebooks.yale.edu/book/9780300212693/why-nudge/
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285. https://doi.org/10.1016/0364-0213(88)90023-7
Stray, J. (2021). Show me the algorithm: Transparency in recommendation systems. Schwartz Reisman Institute for Technology and Society. Retrieved from: https://srinstitute.utoronto.ca/news/recommendation-systems-transparency
Thaler, R. H. & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press. Retrieved from: https://psycnet.apa.org/record/2008-03730-000
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (1st ed.). IEEE Standards Association. Retrieved from: https://www.ethics.org/wp-content/uploads/Ethically-Aligned-Design-May-2019.pdf
Tucker, C. E. (2014). Social networks, personalized advertising, and privacy controls. Journal of Marketing Research, 51(5), 546-562. https://doi.org/10.1509/jmr.10.0355
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131. https://doi.org/10.1126/science.185.4157.1124
Vaccaro, K., Sandvig, C. & Karahalios, K. (2020). "At the end of the day Facebook does what it wants": How users experience contesting algorithmic content moderation. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), Article 167. https://doi.org/10.1145/3415238
Veale, M., Binns, R. & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083. https://doi.org/10.1098/rsta.2018.0083
Wachter, S., Mittelstadt, B. & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx005
Waldman, A. E. (2018). Privacy, notice, and design. Stanford Technology Law Review, 21, 74. https://doi.org/10.2139/ssrn.2780305
Wang, R., Harper, F. M. & Zhu, H. (2020). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In R. Bernhaupt et al. (Eds.), Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020 (pp. 1-14). ACM. https://doi.org/10.1145/3313831.3376813
Williams, B. (1981). Moral luck: Philosophical papers 1973-1980. Cambridge University Press. https://doi.org/10.1017/CBO9781139165860