# AMED

Data and code for the paper "Inferring multilingual domain-specific word embeddings from large document corpora"

## Pretraining multilingual models on Wikipedia

The initial training of a general-purpose Word2Vec model can be performed with the high-level Python library wiki-word2vec, as sketched below.
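
The wiki-word2vec interface is not reproduced here; the following is a minimal sketch of the same pretraining step done directly with gensim's `WikiCorpus` and `Word2Vec` classes. The dump filename, output path, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of Wikipedia pretraining with gensim (not the wiki-word2vec
# pipeline itself); dump path and hyperparameters are illustrative only.
from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models import Word2Vec


class WikiSentences:
    """Restartable iterator over tokenized Wikipedia articles."""

    def __init__(self, dump_path):
        # dictionary={} skips building a gensim Dictionary we do not need here
        self.wiki = WikiCorpus(dump_path, dictionary={})

    def __iter__(self):
        yield from self.wiki.get_texts()


if __name__ == "__main__":
    # Hypothetical dump filename: any *wiki-latest-pages-articles.xml.bz2 works
    sentences = WikiSentences("enwiki-latest-pages-articles.xml.bz2")

    # Skip-gram Word2Vec; adjust vector_size/window/min_count as needed
    model = Word2Vec(
        sentences,
        vector_size=300,
        window=5,
        min_count=5,
        sg=1,
        workers=4,
    )
    model.wv.save_word2vec_format("wiki_general_purpose.vec")
```

Running the same step on dumps for other languages (e.g. itwiki-, frwiki-) yields the corresponding general-purpose models for the multilingual setting.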