Towards more interpretable graphs and Knowledge Graph algorithms

Author:
  1. Zulaika Zurimendi, Unai

Supervised by:
  1. Diego López de Ipiña González de Artaza (Supervisor)
  2. Aitor Almeida (Supervisor)

Defense university: Universidad de Deusto

Defense date: 13 December 2022

Committee:
  1. Humberto Bustince Sola (Chair)
  2. Aritz Bilbao Jayo (Secretary)
  3. Gorka Azkune Galparsoro (Member)

Type: Thesis

Abstract

The increase in the amount of data generated by today’s technologies has led to the creation of large graphs and Knowledge Graphs that contain millions of facts about people, things and places in the world. Grounded in these large data stores, many Machine Learning models have been proposed to perform different tasks, such as predicting new links or link weights. Nevertheless, one of the main challenges of these models is their lack of interpretability. Commonly known as “black boxes”, Machine Learning models are usually not understandable to humans. This lack of interpretability becomes an even more severe problem for Knowledge Graph-related applications, including healthcare systems, chatbots, or public service management tools, where end-users require an understanding of the feedback given by the models. In this thesis, we present methods to increase the interpretability of graph- and Knowledge Graph-based Machine Learning models. We follow a taxonomy grounded on the type of output produced by the proposed methods. Each method is suitable for particular use cases and scenarios, and can help end-users in different ways. Specifically, we provide an interpretable link weight prediction method based on the Weisfeiler-Lehman graph colouring technique. Additionally, we present an adaptation of the Regularized Dual Averaging optimization method for Knowledge Graphs to obtain interpretable representations in link prediction models. Lastly, we introduce the use of Influence Functions for Knowledge Graph link prediction models to identify the most important training facts for a given prediction. Through experiments in link weight prediction and link prediction, we show that our methods can successfully increase the interpretability of Machine Learning models for graphs and Knowledge Graphs while remaining competitive with state-of-the-art methods in terms of performance.
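
For readers unfamiliar with the techniques named above, the following sketches summarise the standard formulations that the three contributions build on; they are illustrative only and are not taken from the thesis itself.

The Weisfeiler-Lehman technique iteratively recolours each node by combining its own colour with the multiset of its neighbours' colours, so that nodes sharing a final colour occupy structurally similar positions in the graph. A minimal Python sketch of this colour refinement step (the function name and example graph are hypothetical):

    # Illustrative sketch only (not code from the thesis): one-dimensional
    # Weisfeiler-Lehman colour refinement on an undirected graph given as an
    # adjacency list. Nodes that end up with the same colour occupy
    # structurally similar positions in the graph.

    def wl_colour_refinement(adjacency, iterations=3):
        """Return an integer colour per node after at most `iterations` rounds."""
        colours = {node: 0 for node in adjacency}   # start with a uniform colour
        for _ in range(iterations):
            # Signature = own colour plus the sorted multiset of neighbour colours.
            signatures = {
                node: (colours[node], tuple(sorted(colours[n] for n in neighbours)))
                for node, neighbours in adjacency.items()
            }
            # Compress every distinct signature into a fresh integer colour.
            palette = {}
            new_colours = {}
            for node, signature in signatures.items():
                if signature not in palette:
                    palette[signature] = len(palette)
                new_colours[node] = palette[signature]
            if new_colours == colours:   # colouring is stable, stop early
                break
            colours = new_colours
        return colours

    # Example: on the path graph 0-1-2-3 the two end nodes share one colour
    # and the two middle nodes share another.
    print(wl_colour_refinement({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))

The Regularized Dual Averaging method updates parameters by minimising the running average of past (sub)gradients plus a regulariser; with a sparsity-inducing regulariser such as the L1 norm, it yields sparse and therefore more interpretable representations. In Xiao's (2010) general form,

    w_{t+1} = \arg\min_{w} \left\{ \langle \bar{g}_t, w \rangle + \Psi(w) + \frac{\beta_t}{t}\, h(w) \right\},
    \qquad \bar{g}_t = \frac{1}{t} \sum_{\tau=1}^{t} g_\tau,

where \Psi is the regulariser, h is an auxiliary strongly convex function and \{\beta_t\} is a non-negative, non-decreasing sequence; the thesis adapts this scheme to Knowledge Graph link prediction models.

Influence Functions, in the formulation of Koh and Liang (2017), estimate how the loss on a test fact z_test would change if a training fact z were upweighted, which gives a principled way to rank training facts by their importance for a given prediction:

    \mathcal{I}(z, z_{\text{test}}) = -\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),
    \qquad H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}).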