Files
ma-dissertation-final-2.pdf (3.85 MB)
Learning for Spoken Dialog Systems with Discriminative Graphical Models
Author Info
Ma, Yi
ORCID® Identifier
http://orcid.org/0000-0001-8889-9775
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760
Year and Degree
2015, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Abstract
A statistical spoken dialog system must keep track of what the user wants at any point during a dialog. By maintaining probability distributions over possible dialog states, the system can disambiguate in the presence of errors. This thesis demonstrates that discriminative probabilistic graphical models can significantly improve performance on the task of dialog state tracking, namely, predicting the constraints a user has specified so far during a dialog. A common challenge for machine learning problems is a limited amount of available training data. In particular, learning for spoken dialog systems is hampered by the expense of collecting human-computer interactions. Insufficient training data can lead to overfitting with complex models and make the trained classifier vulnerable to unseen observations. In this thesis, parameter tying is introduced as a new way to combat overfitting by learning generic weights on discriminative probabilistic graphical models. Each generic weight represents a group of parameters that share the same characteristic in discriminating between correct and incorrect labels. Essentially, the tied models ignore the specific identity of a value in the designed feature functions. With parameter tying, the learned model generalizes well to unseen labels. Different variations of Conditional Random Fields (CRFs) are trained to perform the task of dialog state tracking. We also incorporate detected user goal change information into the discriminative models to better capture evolving user goals. With significantly fewer parameters – each of them generic across all feature functions in a tied category – and auxiliary information augmented on the state transitions, the best model outperforms a strong baseline by a significant margin.
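The parameter-tying idea described in the abstract can be illustrated with a small sketch. This is not the thesis's actual model; it is a minimal, hypothetical log-linear scorer for a single dialog slot, showing the contrast between per-value indicator features (one weight per concrete value, which cannot generalize) and tied features that ignore the value's identity and keep only the match/mismatch pattern. All names (`features_tied`, the `food` slot, the candidate values) are illustrative assumptions.

```python
import math
from collections import defaultdict

def features_untied(slot, value, observed):
    # Untied: one indicator per concrete value, so the weight space
    # grows with the vocabulary and unseen values get no trained weight.
    return {f"{slot}={value}|obs={observed}": 1.0}

def features_tied(slot, value, observed):
    # Tied: the value's identity is ignored; only whether the hypothesized
    # value matches the observation matters. Two generic weights per slot
    # cover every value, including ones never seen in training.
    key = "match" if value == observed else "mismatch"
    return {f"{slot}|{key}": 1.0}

def score(weights, feats):
    return sum(weights[k] * v for k, v in feats.items())

def predict(weights, slot, candidates, observed, featfn):
    # Softmax over candidate slot values; return the argmax and distribution.
    scores = {c: score(weights, featfn(slot, c, observed)) for c in candidates}
    z = sum(math.exp(s) for s in scores.values())
    probs = {c: math.exp(s) / z for c, s in scores.items()}
    return max(probs, key=probs.get), probs

# Two tied weights suffice for the whole slot, unseen values included.
weights = defaultdict(float, {"food|match": 2.0, "food|mismatch": -1.0})
best, probs = predict(weights, "food",
                      ["thai", "korean", "unseen_cuisine"],
                      "korean", features_tied)
print(best)  # korean
```

In a full CRF state tracker these tied indicators would appear as feature functions on label-observation pairs and transitions, but the generalization mechanism is the same: the learned weight attaches to the match pattern, not to any specific value.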
Committee
Eric Fosler-Lussier (Advisor)
Micha Elsner (Committee Member)
Alan Ritter (Committee Member)
Pages
111 p.
Subject Headings
Computer Science
Keywords
spoken dialog systems; discriminative graphical models
Recommended Citations
APA Style (7th edition)
Ma, Y. (2015). Learning for Spoken Dialog Systems with Discriminative Graphical Models [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760

MLA Style (8th edition)
Ma, Yi. Learning for Spoken Dialog Systems with Discriminative Graphical Models. 2015. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760.

Chicago Manual of Style (17th edition)
Ma, Yi. "Learning for Spoken Dialog Systems with Discriminative Graphical Models." Doctoral dissertation, Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760
Document number:
osu1440166760
Download Count:
782
Copyright Info
© 2015, some rights reserved.
Learning for Spoken Dialog Systems with Discriminative Graphical Models by Yi Ma is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at etd.ohiolink.edu.
This open access ETD is published by The Ohio State University and OhioLINK.