Explaining Probabilistic Bayesian Neural Networks for Cybersecurity Intrusion Detection

Tengfei Yang, Yuansong Qiao, Brian Lee

    Research output: Contribution to journal › Article › peer-review

    1 Citation (Scopus)

    Abstract

    The probabilistic Bayesian neural network (BNN) is good at providing trustworthy outcomes, which is important in applications such as intrusion detection. Due to its complexity, however, a probabilistic BNN looks like a 'black box', so explanations of its predictions are needed to improve its transparency. Yet no existing explanatory method accounts for the uncertainty in the predictions of a probabilistic BNN. To enhance the explainability of BNN models with respect to uncertainty quantification, this paper proposes a Bayesian explanatory model that accounts for the uncertainties inherent in a Bayesian autoencoder, encompassing both aleatoric and epistemic uncertainties. Through global and local explanations, this Bayesian explanatory model is applied to intrusion detection scenarios. Fidelity and sensitivity analyses show that the proposed Bayesian explanatory model, which incorporates external uncertainty, effectively identifies key features and provides robust explanations.
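    The abstract's distinction between aleatoric and epistemic uncertainty can be illustrated with a standard Monte Carlo decomposition of predictive entropy. The sketch below is a generic illustration of that decomposition, not the paper's specific method; the `sample_predictions` array is a hypothetical stand-in for stochastic forward passes of a Bayesian model (e.g. a Bayesian autoencoder-based detector) on a binary attack/benign task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for T stochastic forward passes of a Bayesian
# model: each row holds one weight sample's predicted probability of
# the "attack" class for N inputs.
T, N = 50, 4
sample_predictions = rng.uniform(0.0, 1.0, size=(T, N))

eps = 1e-12  # guard against log(0)

# Predictive mean over weight samples.
p_mean = sample_predictions.mean(axis=0)

# Total uncertainty: entropy of the averaged Bernoulli prediction.
total = -(p_mean * np.log(p_mean + eps)
          + (1 - p_mean) * np.log(1 - p_mean + eps))

# Aleatoric part: expected entropy of each sample's own prediction
# (irreducible data noise, averaged over weight samples).
ent = -(sample_predictions * np.log(sample_predictions + eps)
        + (1 - sample_predictions) * np.log(1 - sample_predictions + eps))
aleatoric = ent.mean(axis=0)

# Epistemic part: mutual information between prediction and weights,
# i.e. total minus aleatoric (model uncertainty, reducible with data).
epistemic = total - aleatoric
```

    Because entropy is concave, the total entropy is never smaller than the average per-sample entropy, so the epistemic term is always non-negative; inputs where the weight samples disagree strongly get a large epistemic share.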

    Original language: English
    Pages (from-to): 97004-97016
    Number of pages: 13
    Journal: IEEE Access
    Volume: 12
    DOIs
    Publication status: Published - 2024

    Keywords

    • aleatoric and epistemic uncertainties
    • Bayesian autoencoder
    • Bayesian explanation
    • explainability
    • uncertainty quantification
