The Language of Engineering: Training a Domain-Specific Word Embedding Model for Engineering
Since the introduction of Word2Vec in 2013, so-called word embeddings, dense vector representations of words that are intended to capture their semantic meaning, have become a universally applied technique in a wide range of Natural Language Processing (NLP) tasks and domains. The vector representations they provide are learned on huge corpora of unlabeled text data. Because of the large amount of data and computing power needed to train such embedding models, pre-trained models are often applied that have been trained on domain-unspecific data such as newspaper articles or Wikipedia entries. In this paper, we present a domain-specific embedding model that is trained exclusively on texts from the domain of engineering. We show that such a domain-specific embedding model performs better on different NLP tasks and can therefore help to improve NLP-based AI in the domain of engineering.
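As a rough illustration of the setup the abstract describes (training embeddings from scratch on a domain corpus instead of relying on Wikipedia- or news-trained vectors), the following is a minimal sketch using gensim's Word2Vec (version >= 4.0). The corpus file name, hyperparameters, and query term are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: training a domain-specific Word2Vec model with gensim >= 4.0.
# File name, hyperparameters, and the query term are illustrative assumptions,
# not the exact setup used in the paper.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Assumed input: one engineering sentence or document per line, plain text.
with open("engineering_corpus.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f]

model = Word2Vec(
    sentences=sentences,
    vector_size=300,  # dimensionality of the dense word vectors
    window=5,         # context window around each target word
    min_count=5,      # drop tokens rarer than this
    sg=1,             # 1 = skip-gram; 0 = CBOW
    workers=4,
    epochs=5,
)

model.save("engineering_word2vec.model")

# Nearest neighbours in the embedding space reflect domain-specific usage,
# e.g. the mechanical rather than the everyday sense of an ambiguous word.
print(model.wv.most_similar("bearing", topn=5))
```

Trained this way, the model's vectors can be compared against general-purpose pre-trained embeddings on the same downstream NLP tasks, which is the kind of evaluation the paper reports.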
| Attribute | Value |
|---|---|
| Address | Osaka, Japan |
| Authors | Dr. Daniel Braun, Oleksandra Klymenko, Tim Schopf, Yusuf Kaan Akan, Prof. Dr. Florian Matthes |
| Citation | @inproceedings{10.1145/3460824.3460826, author = {Braun, Daniel and Klymenko, Oleksandra and Schopf, Tim and Kaan Akan, Yusuf and Matthes, Florian}, title = {The Language of Engineering: Training a Domain-Specific Word Embedding Model for Engineering}, year = {2021}, isbn = {9781450388887}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3460824.3460826}, doi = {10.1145/3460824.3460826}, abstract = {Since the introduction of Word2Vec in 2013, so-called word embeddings, dense vector representation of words that are supposed to capture their semantic meaning, have become a universally applied technique in a wide range of Natural Language Processing (NLP) tasks and domains. The vector representations they provide are learned on huge corpora of unlabeled text data. Due to the large amount of data and computing power that is necessary to train such embedding models, very often, pre-trained models are applied which have been trained on domain unspecific data like newspaper articles or Wikipedia entries. In this paper, we present a domain-specific embedding model that is trained exclusively on texts from the domain of engineering. We will show that such a domain-specific embeddings model performs better in different NLP tasks and can therefore help to improve NLP-based AI in the domain of Engineering.}, booktitle = {2021 3rd International Conference on Management Science and Industrial Engineering}, pages = {8–12}, numpages = {5}, keywords = {Word Embeddings, Engineering}, location = {Osaka, Japan}, series = {MSIE 2021} } |
| Key | Br21b |
| Research project | Technology Scouting as a Service (TSaaS) |
| Title | The Language of Engineering: Training a Domain-Specific Word Embedding Model for Engineering |
| Type of publication | Conference |
| Year | 2021 |
| Publication URL | https://dl.acm.org/doi/10.1145/3460824.3460826 |
| Acronym | TSaaS |
| Team members | |