Arg-Xai: Argumentative Explanations for Machine Learning Outcomes

The requirement of explainability is gaining increasing importance in Artificial Intelligence applications based on Machine Learning techniques, especially in contexts where critical decisions are entrusted to software systems. We propose an argumentation-based methodology for explaining the results predicted by Machine Learning models. Argumentation provides frameworks that can be used to represent and analyse logical relations between pieces of information, serving as a basis for constructing human-tailored rational explanations to a given problem. In particular, we use extension-based semantics to find the rationale behind a class prediction.

Step 1 - Dataset entry

Choose a dataset
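As a hypothetical illustration of the expected input (the feature names and class labels below are assumptions, not tied to any specific dataset), a tabular dataset can be represented as a list of records, each mapping feature names to values together with the class predicted by the Machine Learning model:

```python
# Hypothetical toy dataset: each record maps feature names to values,
# plus the class label predicted by the Machine Learning model.
dataset = [
    {"features": {"petal_length": 4.7, "petal_width": 1.4}, "class": "versicolor"},
    {"features": {"petal_length": 1.4, "petal_width": 0.2}, "class": "setosa"},
]

# Each record exposes the same feature set, which is what the
# argumentation step selects subsets from.
feature_names = sorted(dataset[0]["features"])
```

Any tabular source (CSV, DataFrame, etc.) can be mapped to this shape before the argumentation step.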

Starting from this input, we build a Bipolar Argumentation Framework whose arguments consist of a subset of selected features. We then use argumentation semantics to show motivations behind the attribution of a certain class to a given record.
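The step above can be sketched in miniature. The snippet below is an illustrative simplification, not the project's implementation: arguments, attacks, and supports are hard-coded toy examples, and the extension computation uses Dung-style admissibility on the attack relation only (a full Bipolar Argumentation Framework semantics would also derive indirect attacks from supports):

```python
from itertools import combinations

# Hypothetical toy framework: each argument stands for a claim built
# from a subset of selected features, e.g.
#   A: "petal_width > 1.3 suggests class versicolor"
arguments = {"A", "B", "C"}
attacks = {("B", "A")}   # B attacks A
supports = {("C", "A")}  # C supports A (not used by the simplified semantics below)

def conflict_free(s):
    # No member of s attacks another member of s.
    return not any((a, b) in attacks for a in s for b in s)

def defends(s, a):
    # Every attacker of a is itself attacked by some member of s.
    return all(any((c, b) in attacks for c in s)
               for (b, t) in attacks if t == a)

def admissible(s):
    return conflict_free(s) and all(defends(s, a) for a in s)

# Brute-force enumeration of admissible extensions (fine for tiny frameworks).
extensions = [set(c) for r in range(len(arguments) + 1)
              for c in combinations(sorted(arguments), r)
              if admissible(set(c))]
```

An extension containing the arguments that back a predicted class can then be read as the rationale behind that prediction; here, for instance, {"A"} is not admissible because nothing defends A against B's attack.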