Towards Interpretable Anomaly Detection: Unsupervised Deep Neural Network Approach using Feedback Loop

Ashima Chawla, Paul Jacob, Paddy Farrell, Erik Aumayr, Sheila Fallon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Citations (Scopus)

Abstract

As telecom networks generate high-dimensional data, it becomes important to support large numbers of co-existing network attributes and to provide an interpretable and eXplainable Artificial Intelligence (XAI) anomaly detection system. Most state-of-the-art techniques tackle the problem of detecting network anomalies with high precision, but the models do not provide an interpretable solution, which makes it hard for operators to adopt them. The proposed Cluster Characterized Autoencoder (CCA) architecture improves model interpretability through an end-to-end, data-driven, AI-based framework. Candidate anomalies identified using the feature-optimised Autoencoder and entropy-based feature ranking are clustered in reconstruction-error space using subspace clustering. This clustering is observed to separate true positives from false positives, and the quality of this separation is evaluated using entropy and information gain. A two-dimensional t-SNE representation of the anomaly clusters serves as a graphical interface for the analysis and explanation of individual anomalies using SHAP values. This unsupervised approach assists the analyst in the categorisation, identification and feature-level explanation of anomalies, enabling faster root cause analysis. Our solution therefore provides better support for network domain analysts with an interpretable and explainable Artificial Intelligence (AI) anomaly detection system. Experiments on a real-world telecom network dataset demonstrate the efficacy of the proposed algorithm.
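The abstract outlines a pipeline: an autoencoder produces per-feature reconstruction errors, candidate anomalies are flagged and clustered in reconstruction-error space, and the clusters are projected to two dimensions with t-SNE for analyst inspection. The following is a minimal sketch of that flow, not the authors' implementation: the synthetic data, the small MLP used as the autoencoder, the 95th-percentile threshold, and the use of KMeans as a stand-in for the paper's subspace clustering are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)

# Synthetic stand-in for high-dimensional telecom KPI rows (hypothetical data).
X = rng.normal(size=(1000, 20))
X[:40] += rng.normal(3.0, 1.0, size=(40, 20))   # inject some anomalous rows
X = StandardScaler().fit_transform(X)

# 1. Autoencoder step: a small MLP trained to reconstruct its own input.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(X, X)
recon = ae.predict(X)

# 2. Per-feature reconstruction errors; the total error flags candidate anomalies.
err = (X - recon) ** 2                      # reconstruction-error space
score = err.sum(axis=1)
threshold = np.quantile(score, 0.95)        # illustrative threshold choice
candidates = np.where(score > threshold)[0]

# 3. Cluster candidates in reconstruction-error space
#    (KMeans stands in for the subspace clustering used in the paper).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(err[candidates])

# 4. Two-dimensional t-SNE view of the anomaly clusters for analyst inspection.
emb = TSNE(n_components=2, perplexity=min(30, len(candidates) - 1),
           random_state=0).fit_transform(err[candidates])

for c in np.unique(labels):
    print(f"cluster {c}: {np.sum(labels == c)} candidate anomalies")
# Per-anomaly feature attributions (SHAP values in the paper) would be computed
# on top of this, e.g. with the `shap` library.
```

In this sketch the clustering and the t-SNE embedding both operate on the per-feature error matrix rather than on the raw inputs, mirroring the paper's choice to characterise anomalies in reconstruction-error space.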

Original language: English
Title of host publication: Proceedings of the IEEE/IFIP Network Operations and Management Symposium 2022
Subtitle of host publication: Network and Service Management in the Era of Cloudification, Softwarization and Artificial Intelligence, NOMS 2022
Editors: Pal Varga, Lisandro Zambenedetti Granville, Alex Galis, Istvan Godor, Noura Limam, Prosper Chemouil, Jerome Francois, Marc-Oliver Pahl
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665406017
DOIs
Publication status: Published - 2022
Event: 2022 IEEE/IFIP Network Operations and Management Symposium, NOMS 2022 - Budapest, Hungary
Duration: 25 Apr 2022 – 29 Apr 2022

Publication series

Name: Proceedings of the IEEE/IFIP Network Operations and Management Symposium 2022: Network and Service Management in the Era of Cloudification, Softwarization and Artificial Intelligence, NOMS 2022

Conference

Conference: 2022 IEEE/IFIP Network Operations and Management Symposium, NOMS 2022
Country/Territory: Hungary
City: Budapest
Period: 25/04/22 – 29/04/22

Keywords

  • Group Anomaly Detection
  • Neural Network
  • eXplainable AI
