Learning for Spoken Dialog Systems with Discriminative Graphical Models

Abstract Details

2015, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
A statistical spoken dialog system must keep track of what the user wants at any point during a dialog. By maintaining probability distributions over possible dialog states, the system can disambiguate in the presence of recognition errors. This thesis demonstrates that discriminative probabilistic graphical models can significantly improve performance on the task of dialog state tracking, namely predicting the constraints a user has specified so far during a dialog. One challenge for machine learning problems is a limited amount of available training data; in particular, learning for spoken dialog systems is hampered by the expense of collecting human-computer interactions. Insufficient training data can lead to overfitting with complex models and make the trained classifier vulnerable to unseen observations. This thesis introduces parameter tying as a new way to combat overfitting by learning generic weights on discriminative probabilistic graphical models. Each generic weight represents a group of parameters that share the same characteristic in discriminating between correct and incorrect labels; essentially, the tied models ignore the specific identity of a value in the designed feature functions. With parameter tying, the learned model generalizes well to unseen labels. Different variations of Conditional Random Fields (CRFs) are trained to perform dialog state tracking, and detected user goal change information is incorporated into the discriminative models to better capture evolving user goals. With significantly fewer parameters (each generic across all feature functions in a tied category) and auxiliary information augmented on the state transitions, the best model outperforms a strong baseline by a significant margin.
Eric Fosler-Lussier (Advisor)
Micha Elsner (Committee Member)
Alan Ritter (Committee Member)
111 p.
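
The sketch below is a minimal illustration (not code from the thesis) of the parameter-tying idea described in the abstract: rather than learning one weight per specific slot value, the tied feature functions test only whether an observed value agrees with the candidate label, so a single generic weight is shared across all values and the model can score values never seen in training. All names here (slu_hypotheses, candidate_label, and the toy restaurant-search values) are hypothetical.

    # Minimal sketch of value-specific vs. tied feature functions for
    # scoring a candidate dialog-state label against noisy SLU hypotheses.
    from collections import defaultdict

    def untied_features(slu_hypotheses, candidate_label):
        """One feature per (observed value, label value) pair.

        The number of weights grows with the vocabulary of slot values,
        and a value unseen in training has no learned weight.
        """
        feats = defaultdict(float)
        for value, confidence in slu_hypotheses:
            feats[f"obs={value}|label={candidate_label}"] += confidence
        return feats

    def tied_features(slu_hypotheses, candidate_label):
        """Generic features that ignore the specific identity of the value.

        Only two weights are learned: one for "the observed value matches
        the candidate label" and one for "it differs", so the same weights
        apply to slot values never seen during training.
        """
        feats = defaultdict(float)
        for value, confidence in slu_hypotheses:
            key = ("obs_matches_label" if value == candidate_label
                   else "obs_differs_from_label")
            feats[key] += confidence
        return feats

    if __name__ == "__main__":
        # Toy SLU output for one turn: (value, confidence) pairs.
        hyps = [("indian", 0.6), ("italian", 0.3)]
        print(dict(untied_features(hyps, "indian")))  # value-specific weights
        print(dict(tied_features(hyps, "indian")))    # two generic, tied weights

In a CRF, these tied feature counts would simply replace the value-specific ones when computing the potential for each candidate state, which is how the abstract's "significantly fewer parameters" are obtained.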

Recommended Citations


  • Ma, Y. (2015). Learning for Spoken Dialog Systems with Discriminative Graphical Models [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760

    APA Style (7th edition)

  • Ma, Yi. Learning for Spoken Dialog Systems with Discriminative Graphical Models. 2015. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760.

    MLA Style (8th edition)

  • Ma, Yi. "Learning for Spoken Dialog Systems with Discriminative Graphical Models." Doctoral dissertation, Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440166760

    Chicago Manual of Style (17th edition)