Spreading excellence and disseminating the cutting-edge results of our research and development efforts is crucial to our institute. Check out our educational offers for Bachelor's, Master's, and PhD studies at the University of Innsbruck!

Towards Explainable Knowledge Graph Completion

Type: 
Master

Knowledge Graphs (KGs) commonly suffer from incompleteness.
The Link Prediction (LP) task predicts missing facts based on an incomplete KG.
Solutions range from traditional inductive logical reasoning to modern embedding-based approaches.
The latter learn a latent representation (embedding) from the discrete structure of a knowledge graph, a process also known as knowledge representation learning.
Many target representations and ways of learning them have been published (e.g., deep learning models, tensor decomposition models, and geometric models), but all of them lack comprehensibility.
LP is a very popular downstream task for these embedding models.
To help understand the predictions made by these black-box approaches, you will extend pre-existing LP solutions of your choice by making them explainable.
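To illustrate the embedding-based approach described above, here is a minimal sketch of predicting missing links with a TransE-style translational (geometric) model. The entities, relation, and random embeddings are purely hypothetical toy data; a real system would learn the embeddings from the KG's triples:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy KG vocabulary (hypothetical illustration data)
entities = ["Innsbruck", "Austria", "Vienna"]
relations = ["located_in"]

dim = 8
# Latent representations of the discrete KG structure;
# random here, but learned via knowledge representation learning in practice
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def score(h, r, t):
    """TransE plausibility of the fact (h, r, t): models a true
    fact as a translation h + r ≈ t, so a smaller distance
    (less negative score) means more plausible."""
    return -np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t])

def predict_tail(h, r):
    """Link prediction: rank all entities as candidate tails for (h, r, ?)."""
    return sorted(entities, key=lambda t: score(h, r, t), reverse=True)

ranking = predict_tail("Innsbruck", "located_in")
print(ranking)  # entities ordered by predicted plausibility
```

Note that the ranking alone offers no explanation of *why* a candidate scored highly; making such predictions comprehensible is precisely the goal of this thesis topic.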

Literature:

  • Teru, K., Denis, E., & Hamilton, W. (2020, November). Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning (pp. 9448-9457). PMLR.
  • Wang, M., Qiu, L., & Wang, X. (2021). A survey on knowledge graph embeddings for link prediction. Symmetry, 13(3), 485.
  • Chen, W., Cao, Y., Feng, F., He, X., & Zhang, Y. (2022). Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network. arXiv preprint arXiv:2207.07503.
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.