Using Word Embeddings for Ontology Enrichment

Authors

  • İzzet Pembeci

DOI:

https://doi.org/10.18201/ijisae.58806

Keywords:

Neural Language Models, Word Embeddings, Ontology Enrichment, Ontology Population

Abstract

Word embeddings, distributed word representations in a reduced-dimensional vector space, show a lot of promise for accomplishing Natural Language Processing (NLP) tasks in an unsupervised manner. In this study, we investigate whether the success of word2vec, a neural-network-based word embedding algorithm, can be replicated in an agglutinative language like Turkish. Turkish is more challenging than languages like English for complex NLP tasks because of its rich morphology. We picked ontology enrichment, again a relatively harder NLP task, as our test application. First, we show how ontological relations can be extracted automatically from Turkish Wikipedia to construct a gold standard. Then, through experiments, we show that the word vector representations produced by word2vec are useful for detecting ontological relations encoded in Wikipedia. We propose a simple yet effective weakly supervised ontology enrichment algorithm in which, for a given word, a few known ontologically related concepts coupled with similarity scores computed via word2vec models can lead to the discovery of other related concepts. We discuss how our algorithm can be improved and augmented to make it a viable component of an ontology learning and population framework.
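
The enrichment idea described above can be sketched in a few lines of Python. The following is a minimal illustration, not the paper's exact algorithm: it assumes a gensim word2vec model trained on Turkish Wikipedia, and the model file name, seed words, and average-similarity ranking heuristic are all hypothetical. Candidates are ranked by their mean cosine similarity to a small set of concepts already known to be ontologically related.

```python
# Minimal sketch of weakly supervised ontology enrichment with word2vec.
# Assumes a gensim word2vec model trained on Turkish Wikipedia text;
# "trwiki_word2vec.model" is a hypothetical file name.
from gensim.models import Word2Vec

model = Word2Vec.load("trwiki_word2vec.model")

def enrich(seeds, top_n=10):
    """Rank vocabulary words by mean cosine similarity to the seed concepts."""
    scores = {}
    for word in model.wv.index_to_key:
        if word in seeds:
            continue
        # Average similarity between the candidate and every known seed.
        sims = [model.wv.similarity(word, s) for s in seeds]
        scores[word] = sum(sims) / len(sims)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Example: seeds known to be subtypes of "hayvan" (animal) in the ontology;
# highly ranked candidates are proposed as new related concepts.
print(enrich(["kedi", "köpek", "at"]))
```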

Published

13.07.2016

How to Cite

Pembeci, İzzet. (2016). Using Word Embeddings for Ontology Enrichment. International Journal of Intelligent Systems and Applications in Engineering, 4(3), 49–56. https://doi.org/10.18201/ijisae.58806

Issue

Vol. 4 No. 3 (2016)

Section

Research Article