Decentralized Risk Assessment for Zero-Trust Microservices: A Federated Learning Approach to Vulnerability Analytics
DOI: https://doi.org/10.66280/ijair.v1i1.105

Keywords: Zero-Trust Security; Microservices; Federated Learning; Vulnerability Analytics; Risk Assessment; Secure Aggregation

Abstract
Zero-trust deployment has increased interest in continuous vulnerability analytics for microservice systems, but most operational pipelines still depend on centralized telemetry collection and periodically refreshed risk scores. This creates tension between adaptation speed and data-governance constraints. We present FedZTRisk, a federated framework for service-level vulnerability risk estimation that combines SBOM-derived features, dependency-graph context, runtime indicators, and trust-policy signals while keeping raw telemetry local to each tenant.
We report a simulation-based evaluation built from public sources: CVE records from the National Vulnerability Database (NVD), SBOM artifacts extracted from open-source microservice repositories, and dependency graphs generated from container images. To emulate realistic operational drift, we inject synthetic vulnerability events following historical CVE timelines and evaluate under non-IID client partitions. Across five random seeds, FedZTRisk consistently matches the strongest centralized baseline in macro-F1 (around 0.82) while improving calibration and maintaining practical inference latency for five-minute policy cycles. Compared with federated baselines (FedAvg and FedProx), the proposed trust-aware aggregation strategy yields more stable performance under heterogeneous and perturbed clients. These results position FedZTRisk as a privacy-preserving alternative when organizations prioritize governance and cross-tenant learning over raw centralized-data performance ceilings.
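The abstract contrasts the trust-aware aggregation strategy with FedAvg and FedProx but does not spell out its form. A common way to realize such a strategy, shown in this minimal sketch, is to scale each client's FedAvg weight by a per-client trust score so that perturbed or low-trust tenants contribute less to the global update. The function name, the trust scores, and the use of a simple multiplicative weighting are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def trust_weighted_aggregate(client_updates, trust_scores, sample_counts):
    """Aggregate flattened client model updates.

    Each client is weighted by sample_count * trust_score (an assumed
    trust-aware variant of FedAvg's sample-count weighting), then the
    weights are normalized to sum to 1.
    """
    weights = np.asarray(trust_scores, dtype=float) * np.asarray(sample_counts, dtype=float)
    total = weights.sum()
    if total == 0:
        raise ValueError("all clients have zero effective weight")
    weights /= total
    stacked = np.stack([np.asarray(u, dtype=float) for u in client_updates])
    # Weighted sum over the client axis -> one aggregated parameter vector.
    return np.einsum("c,cp->p", weights, stacked)

# Example: three clients with equal data; the third sends an outlying
# update and carries a low trust score, so it is strongly down-weighted.
updates = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([10.0, -10.0])]
trust = [1.0, 1.0, 0.1]
counts = [100, 100, 100]
agg = trust_weighted_aggregate(updates, trust, counts)
```

Under plain FedAvg (all trust scores equal to 1), the outlier would pull the average toward [4.07, -2.73]; with the low trust score it lands near [1.52, 0.38], which illustrates the stability-under-perturbation behavior the abstract reports.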
Copyright (c) 2026 International Journal of Artificial Intelligence Research

This work is licensed under a Creative Commons Attribution 4.0 International License.



