Linguistic Knowledge Transfer for Enriching Vector Representations

2017, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Many state-of-the-art neural network models use a huge number of parameters, so a large number of labeled training examples is necessary to train them sufficiently. Such models may not be trained properly when there are not enough training examples for the target tasks. This dissertation focuses on transfer learning methods, which improve performance on the target tasks in such situations by leveraging external resources or models from other tasks. Specifically, we introduce transfer learning methods that enrich the word or sentence vector representations of neural network models by transferring linguistic knowledge.

The first layer of a neural network for Natural Language Processing (NLP) is usually a word embedding layer. Word embeddings represent each word as a real-valued vector, and semantically or syntactically similar words tend to have similar vector representations in the vector space. The first part of this dissertation is mainly about word embedding enrichment, which is categorized as an inductive transfer learning methodology. We show that word embeddings can represent semantic intensity scales such as "good" < "great" < "excellent" in vector spaces, and that the semantic intensity orders of words can be used as knowledge sources for adjusting word vector positions; evaluations on word-level semantic tasks show that the adjusted embeddings better capture word semantics. We also show that word embeddings enriched with linguistic knowledge improve the performance of a Bidirectional Long Short-Term Memory (BLSTM) model for intent detection, a sentence-level downstream task, especially when only a small number of training examples is available.

The second part of this dissertation concerns sentence-level transfer learning for sequence tagging tasks. We introduce a cross-domain transfer learning model for dialog slot-filling, which is an inductive transfer learning method, and a cross-lingual transfer learning model for Part-of-Speech (POS) tagging, which is a transductive transfer learning method. Both models utilize a common BLSTM that enables knowledge transfer from other domains/languages, and private BLSTMs for domain/language-specific representations. We also use adversarial training and other auxiliary objectives such as representation separation and bidirectional language modeling to further improve transfer learning performance. We show that these sentence-level transfer learning models improve sequence tagging performance without exploiting any other cross-domain or cross-lingual knowledge.
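
The shared/private architecture described in the abstract can be illustrated with a minimal PyTorch-style sketch: a common BLSTM shared across domains, a private BLSTM per domain, and a gradient-reversal domain discriminator for adversarial training. The class names, parameter names, and sizes below are illustrative assumptions, not the dissertation's actual implementation.

```python
# Illustrative sketch only: a shared ("common") BLSTM plus per-domain private
# BLSTMs for sequence tagging, with a gradient-reversal domain discriminator
# for adversarial training. Names and sizes are assumptions for illustration.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class SharedPrivateTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags, num_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Common BLSTM: parameters shared across all domains/languages.
        self.shared_blstm = nn.LSTM(emb_dim, hidden_dim,
                                    bidirectional=True, batch_first=True)
        # Private BLSTMs: one per domain for domain-specific representations.
        self.private_blstms = nn.ModuleList([
            nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
            for _ in range(num_domains)])
        self.tagger = nn.Linear(4 * hidden_dim, num_tags)         # shared + private
        self.domain_clf = nn.Linear(2 * hidden_dim, num_domains)  # adversarial head

    def forward(self, tokens, domain_id):
        emb = self.embed(tokens)                      # (batch, seq, emb_dim)
        shared_out, _ = self.shared_blstm(emb)        # (batch, seq, 2*hidden_dim)
        private_out, _ = self.private_blstms[domain_id](emb)
        tag_logits = self.tagger(torch.cat([shared_out, private_out], dim=-1))
        # Domain prediction from the gradient-reversed shared representation:
        # minimizing the domain loss pushes the shared BLSTM toward
        # domain-invariant features.
        domain_logits = self.domain_clf(GradReverse.apply(shared_out.mean(dim=1)))
        return tag_logits, domain_logits


# Toy forward pass: 4 sentences of 12 token IDs from domain 1.
model = SharedPrivateTagger(vocab_size=10000, emb_dim=100, hidden_dim=128,
                            num_tags=20, num_domains=3)
tag_logits, domain_logits = model(torch.randint(0, 10000, (4, 12)), domain_id=1)
```

In this sketch the tag logits would feed a per-domain tagging loss, while the domain logits feed the adversarial loss; auxiliary objectives such as representation separation or a bidirectional language model, mentioned in the abstract, are omitted.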
Eric Fosler-Lussier (Advisor)
Alan Ritter (Committee Member)
Michael White (Committee Member)
156 p.

Recommended Citations

  • Kim, J.-K. (2017). Linguistic Knowledge Transfer for Enriching Vector Representations [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500571436042414

    APA Style (7th edition)

  • Kim, Joo-Kyung. Linguistic Knowledge Transfer for Enriching Vector Representations. 2017. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1500571436042414.

    MLA Style (8th edition)

  • Kim, Joo-Kyung. "Linguistic Knowledge Transfer for Enriching Vector Representations." Doctoral dissertation, Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500571436042414

    Chicago Manual of Style (17th edition)