A list of pretrained embeddings for Swedish
Embeddings (mappings of linguistic units, such as words, sentences, or characters, to vectors of real numbers) play a central role in modern language technology. Training embedding models is often costly, which is why pretrained embeddings are widely used. On this page we list various pretrained embeddings for Swedish, as well as studies that evaluate Swedish embeddings. If you have suggestions or comments, please contact us.
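As a toy illustration of the idea (using made-up 4-dimensional vectors, not any of the models listed below): each word is mapped to a vector of real numbers, and geometric closeness between vectors serves as a proxy for semantic relatedness.

```python
# Toy illustration only: hypothetical 4-dimensional vectors for three Swedish
# words, and cosine similarity as a measure of how close two vectors are.
import numpy as np

embedding = {
    "kung":      np.array([0.8, 0.1, 0.3, 0.5]),
    "drottning": np.array([0.7, 0.2, 0.4, 0.5]),
    "bil":       np.array([0.1, 0.9, 0.0, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embedding["kung"], embedding["drottning"]))  # high: related words
print(cosine(embedding["kung"], embedding["bil"]))        # lower: unrelated words
```

Real pretrained models work the same way, only with vocabularies of millions of units and vectors of a few hundred dimensions.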
Embeddings
- Facebook FastText models: Common Crawl + Wiki, Wiki, Wiki with cross-lingual alignment (see the loading sketch after this list)
- NLPL repository: Word2Vec Continuous Skipgram (CoNLL17 corpus); ELMo (CoNLL17 corpus); ELMo (Wiki)
- NLPLAB at Linköping University: a pretrained Word2Vec model (trained on a Göteborgs-Posten corpus); a script for training Word2Vec with both CBOW and skip-gram (SGNS); a paper comparing Word2Vec and GloVe to SALDO
- The National Library of Sweden's (Kungliga biblioteket) models: BERT, BERT fine-tuned for NER, ALBERT
- The Swedish Public Employment Service's (Arbetsförmedlingen) models: BERT
- Polyglot
- Kyubyong Park's models: Word2Vec and FastText, trained on Wiki
- Flair models. See also our Flair model for Swedish POS tagging.
- Språkbanken Text's diachronic embeddings
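As a concrete starting point, the sketch below shows one possible way to load the Facebook FastText Common Crawl + Wiki vectors for Swedish with gensim. The file name cc.sv.300.bin and the example words are assumptions for illustration; check the fastText download page for the actual archive. Because FastText composes vectors from character n-grams, the model can also return vectors for words it never saw during training.

```python
# A minimal sketch (not an official example) of loading the Facebook FastText
# Common Crawl + Wiki vectors for Swedish with gensim.
# Assumption: the binary model has been downloaded from the fastText site and
# saved locally as cc.sv.300.bin (the file is several gigabytes).
from gensim.models.fasttext import load_facebook_vectors

wv = load_facebook_vectors("cc.sv.300.bin")

# Look up the 300-dimensional vector for a Swedish word.
vec = wv["språkteknologi"]
print(vec.shape)  # (300,)

# Nearest neighbours in the embedding space.
for word, score in wv.most_similar("kung", topn=5):
    print(word, round(score, 3))
```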
Evaluation studies
- Sahlgren, Magnus, and Fredrik Olsson. 2019. Gender Bias in Pretrained Swedish Embeddings. Proceedings of the 22nd Nordic Conference on Computational Linguistics.
- Fallgren, Per, Jesper Segeblad, and Marco Kuhlmann. 2016. Towards a Standard Dataset of Swedish Word Vectors. Sixth Swedish Language Technology Conference (SLTC).
- Holmer, Daniel. 2020. Context matters: Classifying Swedish texts using BERT's deep bidirectional word embeddings. Bachelor thesis at Linköping University.
- Adewumi, Tosin, Foteini Liwicki, and Marcus Liwicki. 2020. Exploring Swedish & English fastText Embeddings with the Transformer.
- Adewumi, Tosin, Foteini Liwicki, and Marcus Liwicki. 2020. Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora.