Objectives

GraphNEx will contribute a graph-based framework for developing inherently explainable AI.

Unlike current AI systems, which use complex networks to learn high-dimensional, abstract representations of data, GraphNEx embeds symbolic meaning within AI frameworks. We will combine semantic reasoning over knowledge bases with simple, modular learning on new data observations to adaptively evolve the graphical knowledge base. The core concept is to decompose the monolithic block of highly connected layers in deep learning architectures into many smaller, simpler units. Each unit performs a precise task with an interpretable output, associating a value or vector with a node. Nodes will be connected according to their (learned) similarity, their correlation in the data, or their closeness in some space.
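To make this decomposition concrete, the following is a minimal sketch in Python/PyTorch. It is illustrative only: the class names, dimensions, and similarity threshold are assumptions for the example, not the project's actual design. Small single-purpose units each attach a vector to a node, and edges are drawn between nodes whose values are similar.

```python
import torch
import torch.nn as nn

class ConceptUnit(nn.Module):
    """A small, single-purpose unit: maps raw features to one
    interpretable node value (here, a low-dimensional vector)."""
    def __init__(self, in_dim: int, node_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, node_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def build_graph(node_values: torch.Tensor, threshold: float = 0.8) -> torch.Tensor:
    """Connect nodes whose (learned) values are similar: edges come
    from pairwise cosine similarity above a threshold."""
    normed = nn.functional.normalize(node_values, dim=-1)
    sim = normed @ normed.T            # pairwise cosine similarity
    adj = (sim > threshold).float()
    adj.fill_diagonal_(0)              # no self-loops
    return adj

# Toy usage: ten units, one node value each, edges by similarity.
units = [ConceptUnit(in_dim=32) for _ in range(10)]
x = torch.randn(32)                            # one data observation
values = torch.stack([u(x) for u in units])    # shape (10, node_dim)
adjacency = build_graph(values)
```

Because each unit's output is attached to a single named node, its contribution to the final graph can be inspected directly, which is what distinguishes this layout from one monolithic network.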

GraphNEx will employ concepts and tools from graph signal processing and graph machine learning to extract semantic concepts and meaningful relationships as sub-graphs within the knowledge base, which can then be used for semantic reasoning. By enforcing sparsity and domain-specific priors between concepts, we will promote human interpretability.
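As a rough illustration of these tools, here is a NumPy/SciPy sketch on assumed toy data; the correlation threshold, signal model, and number of retained modes are placeholders, not the project's method. Hard-thresholding pairwise correlations plays the role of a crude sparsity prior, and the graph Laplacian's low-frequency eigenvectors (the graph Fourier basis) highlight coherent sub-graphs that can be inspected as candidate concepts.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy data: 200 observations of scalar signals on 8 concept nodes.
rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 8))

# Sparsity prior (crude form): keep only strong pairwise relations.
corr = np.corrcoef(signals, rowvar=False)
A = np.where(np.abs(corr) > 0.3, np.abs(corr), 0.0)
np.fill_diagonal(A, 0.0)

# Graph signal processing: the Laplacian's eigenvectors form a graph
# Fourier basis; the smoothest (low-frequency) modes expose groups
# of nodes whose signals vary together.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
low_freq_modes = eigvecs[:, :3]

# Each connected component of the sparse graph is a sub-graph that
# can be read off as one candidate semantic concept.
n_components, labels = connected_components(csr_matrix(A), directed=False)
```

The sparser the retained graph, the fewer relations a human has to examine, which is why sparsity and domain-specific priors directly serve interpretability.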

The integration of game-based user feedback will ensure that explanations (and therefore the core mechanisms of the AI system) are understandable and relevant to humans.