Comparative study of explainable artificial intelligence techniques based on explainability methods for structured data
Abstract
This article, derived from research on Explainable Artificial Intelligence (XAI) for detecting unusual activities in virtual classrooms, addresses the critical security and academic integrity challenges arising from universities' migration to digital environments. Given the infeasibility of manually supervising the massive volume of data generated, the study employs an experimental and comparative design to evaluate various XAI methods for identifying anomalies such as suspicious logins or bulk downloads. Through quantitative and qualitative analysis of anonymized interaction logs within simulated scenarios, the research measures both the technical accuracy of the algorithms and the clarity and utility of the explanations they provide to justify their findings. The results reveal an intrinsic tension between predictive efficacy and interpretability, demonstrating that the choice of technique must align with the institutional need to substantiate decisions. Ultimately, the work concludes that implementing XAI is an essential strategic resource for public universities: it not only strengthens cybersecurity infrastructure but also promotes transparency and sustains trust in virtual educational processes by offering understandable and auditable supervision models.
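To make the accuracy-versus-interpretability tension concrete, the sketch below applies an intrinsically interpretable baseline (per-feature z-scores with a 3-sigma rule) to hypothetical session-log features. This is an illustrative example only, not the study's actual pipeline: the feature names, synthetic data, and threshold are assumptions introduced here. Unlike a black-box detector, every flag it raises comes with the exact feature and deviation that triggered it.

```python
import numpy as np

# Hypothetical structured log features per session (illustrative only):
# [logins_per_hour, bytes_downloaded_MB, distinct_IPs]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[2.0, 15.0, 1.0], scale=[0.5, 4.0, 0.2], size=(200, 3))
suspicious = np.array([[9.0, 300.0, 4.0]])   # bulk download from many IPs
logs = np.vstack([normal, suspicious])

# Intrinsically interpretable detector: absolute z-score per feature.
mu, sigma = logs.mean(axis=0), logs.std(axis=0)
z = np.abs((logs - mu) / sigma)
scores = z.max(axis=1)                        # session-level anomaly score
flagged = np.where(scores > 3.0)[0]           # 3-sigma decision rule

# The explanation is built into the model: the feature with the
# largest deviation is the reason the session was flagged.
feature_names = ["logins_per_hour", "bytes_downloaded_MB", "distinct_IPs"]
for i in flagged:
    top = int(np.argmax(z[i]))
    print(f"session {i}: anomalous, driven by {feature_names[top]} (z={z[i, top]:.1f})")
```

A more accurate detector (e.g., an ensemble or neural model) would typically need a post-hoc method such as SHAP or LIME to produce comparable justifications, which is precisely the trade-off the study examines.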
Copyright (c) 2026 Fausto Raúl Orozco Lara, Mariela Paola Espinoza Martínez, Mell Naomi Mainato Acosta, Diego Alexander Siguencia Ordóñez

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
