Development of an Explainable Artificial Intelligence Prototype for Interpreting Predictive Models

Nnaemeka Udenwagu, Ambrose Azeta, Vivian Nwaocha, Victor Azeta, Daniel Enosegbe, Awal Ganiyu, Adejoke Ajibade


Artificial Intelligence (AI) now depends heavily on black-box machine learning (ML) models that lack algorithmic transparency. Some governments have responded with legislation, such as the "right to explanation" provision in the EU and the Algorithmic Accountability Act proposed in the USA in 2019. The attempt to open up the black box and introduce some level of interpretation has given rise to what is today known as Explainable Artificial Intelligence (XAI). The objective of this paper is to present the design and implementation of an Explainable Artificial Intelligence Prototype (ExplainEx) that interprets predictive models by explaining their confusion matrix, component classes and classification accuracy. This study is limited to four ML algorithms: J48, Random Tree, RepTree and FURIA. At the core of the software is an engine that automates seamless interaction between the Expliclas Web API and the trained datasets to provide natural-language explanations. The prototype operates both as a stand-alone and as a client-server system, capable of providing global explanations for any model built on any of the four ML algorithms. It supports multiple concurrent users in a client-server environment and can apply all four algorithms concurrently to a single dataset, returning both a precision score and an explanation. It is a ready tool for researchers who have datasets and classifiers prepared for explanation. This work bridges the gap between prediction and explanation, allowing researchers to concentrate on data analysis and on building state-of-the-art predictive models.
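To make the abstract's terms concrete, the sketch below illustrates the kind of quantities ExplainEx reports on: a confusion matrix, overall classification accuracy, and a per-class precision score. This is not the paper's implementation and does not call the Expliclas Web API; the toy labels and function names are illustrative assumptions only.

```python
# Illustrative sketch (not the ExplainEx code): computing the metrics
# that the prototype explains for a trained classifier's predictions.

def confusion_matrix(y_true, y_pred, labels):
    """Rows index the actual class, columns the predicted class."""
    index = {c: i for i, c in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[index[t]][index[p]] += 1
    return m

def accuracy(y_true, y_pred):
    """Fraction of instances classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive):
    """Of the instances predicted as `positive`, the fraction that truly are."""
    predicted_pos = [t for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0
    return sum(t == positive for t in predicted_pos) / len(predicted_pos)

# Hypothetical toy data standing in for a classifier's output.
labels = ["yes", "no"]
y_true = ["yes", "yes", "no", "no", "yes", "no"]
y_pred = ["yes", "no", "no", "no", "yes", "yes"]

matrix = confusion_matrix(y_true, y_pred, labels)   # [[2, 1], [1, 2]]
acc = accuracy(y_true, y_pred)                      # 4/6
prec = precision(y_true, y_pred, "yes")             # 2/3
```

A natural-language explanation layer, as described in the paper, would then verbalize these numbers (e.g. which classes are most often confused) rather than leave them as raw figures.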
