Files
Thesis_final_afterreview.pdf (12.73 MB)
Efficient and Robust Video Understanding for Human-robot Interaction and Detection
Author Info
Li, Ying
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654
Abstract Details
Year and Degree
2018, Doctor of Philosophy, Ohio State University, Electrical and Computer Engineering.
Abstract
Video understanding accomplishes tasks that are fundamental to human-robot interaction and detection, including object tracking, action recognition, object detection, and segmentation. However, because of the large data volume in video sequences and the high complexity of visual algorithms, most visual algorithms sacrifice robustness to maintain high efficiency, especially in real-time applications. Achieving high robustness together with high efficiency is therefore a central challenge in video understanding. In this dissertation, we explore efficient and robust video understanding for human-robot interaction and detection. Two important applications are health-risky behavior detection and human tracking for human-following robots. As a large portion of the world population approaches old age, an increasing number of healthcare issues arise from unsafe abnormal behaviors such as falling and staggering. A system that can detect health-risky abnormal behavior of the elderly is thus of significant importance. To detect abnormal behavior with high accuracy and timely response, visual action recognition is explored and integrated with inertial-sensor-based behavior detection. The inertial-sensor-based detection not only selects a small volume of the video sequence but also provides a likelihood guide for different behaviors. The system works in a trigger-verification manner: a mobile device carried by the elderly person, either a dedicated design or a smartphone, equipped with an inertial sensor, triggers the selection of relevant video data. The selected data is then fed into a visual verification module; this selective utilization of video data guarantees efficiency. Because only selected data is processed, the system can afford more complex visual analysis and achieve higher accuracy.
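The trigger-verification pipeline described above can be sketched as follows. This is a minimal illustration of the idea, not the dissertation's implementation: the threshold, window sizes, and likelihood values are all hypothetical placeholders, and the visual classifier is mocked by a score dictionary.

```python
# Hypothetical sketch of a trigger-verification pipeline:
# an inertial trigger selects short video windows and supplies
# behavior priors; an expensive visual module verifies only those windows.

ACCEL_THRESHOLD = 2.5  # hypothetical g-force threshold for a fall-like spike


def inertial_trigger(accel_stream):
    """Scan (timestamp, acceleration magnitude) pairs; emit candidate events."""
    events = []
    for t, mag in accel_stream:
        if mag > ACCEL_THRESHOLD:
            events.append({
                "t_start": t - 1.0,   # window around the spike (placeholder sizes)
                "t_end": t + 2.0,
                "priors": {"fall": 0.7, "stagger": 0.3},  # placeholder likelihood guide
            })
    return events


def visual_verify(event, visual_scores):
    """Combine inertial priors with visual classifier scores on the selected clip."""
    posterior = {b: event["priors"].get(b, 0.0) * s for b, s in visual_scores.items()}
    return max(posterior, key=posterior.get)


# Toy run: one acceleration spike triggers one event; a mocked visual
# classifier output then verifies the behavior label.
stream = [(0.5, 1.0), (1.2, 3.1), (2.0, 0.9)]
events = inertial_trigger(stream)
label = visual_verify(events[0], {"fall": 0.9, "stagger": 0.4})
```

The key design point the abstract emphasizes is that the cheap inertial check gates the expensive visual analysis, so the visual module runs on a small fraction of the video stream.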
A novel approach for robust human tracking by robots is also proposed. To maintain a close distance between the human and the robot during interaction, we propose tracking part of the human body, specifically the feet. Since the feet are two closely located objects with similar appearance, tracking both while maintaining high accuracy and robustness is challenging. An adaptive model of the human walking pattern is formulated to exploit natural human-body information to guide tracking of the target. By decomposing foot motion into local and global components, a locomotion model is proposed and integrated into an existing tracking algorithm, such as particle filtering, to improve accuracy and efficiency. In addition to the locomotion model, a phase-labeled exemplar pool, which associates each motion phase with a foot appearance, is built to further improve tracking performance. Human-robot interaction in a critical environment, specifically a nuclear environment, is also studied. In a nuclear environment, robotic assistance is necessary because of the damage radiation causes to humans; however, radiation effects on the robot's components degrade its performance. To design human-robot interaction algorithms specifically adapted to the radiation environment, this change in robot performance is studied in this dissertation.
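The locomotion-model idea, decomposing foot motion into a global body translation plus a phase-driven local swing inside a particle filter, can likewise be sketched. Again this is an assumed, simplified 1-D illustration: the swing amplitude, noise levels, and Gaussian observation model are hypothetical placeholders, not the dissertation's formulation.

```python
# Hypothetical 1-D particle filter with a locomotion motion model:
# each particle is propagated by global body velocity plus a
# gait-phase-dependent local swing term, then reweighted by a
# Gaussian observation model and resampled.
import math
import random

random.seed(0)


def propagate(x, body_v, phase, dt, noise):
    """Locomotion model: global translation + phase-driven local swing."""
    local = 0.05 * math.sin(phase)  # hypothetical swing amplitude
    return x + body_v * dt + local + random.gauss(0.0, noise)


def likelihood(x, observation, sigma=0.05):
    """Gaussian observation model around the detected foot position."""
    return math.exp(-0.5 * ((x - observation) / sigma) ** 2)


def particle_filter_step(particles, body_v, phase, observation, dt=0.1):
    # 1) propagate every particle through the locomotion model
    particles = [propagate(x, body_v, phase, dt, noise=0.01) for x in particles]
    # 2) weight by agreement with the visual foot detection
    weights = [likelihood(x, observation) for x in particles]
    total = sum(weights)
    if total == 0.0:
        weights = [1.0] * len(particles)  # fall back to uniform on underflow
        total = float(len(particles))
    weights = [w / total for w in weights]
    # 3) multinomial resampling
    return random.choices(particles, weights=weights, k=len(particles))


# Toy run: a foot moving right at 0.5 units/s, tracked over 10 frames.
particles = [random.uniform(-0.1, 0.1) for _ in range(200)]
truth = 0.0
for frame in range(10):
    truth += 0.5 * 0.1
    phase = 2 * math.pi * frame / 10
    particles = particle_filter_step(particles, body_v=0.5, phase=phase,
                                     observation=truth)
estimate = sum(particles) / len(particles)
```

Encoding the gait structure in the motion model narrows where particles need to be placed, which is one way a filter can gain both accuracy and efficiency, the trade-off the abstract highlights.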
Committee
Yuan Zheng (Advisor)
Dong Xuan (Committee Member)
Yuejie Chi (Committee Member)
Pages
124 p.
Subject Headings
Computer Engineering; Computer Science
Keywords
video understanding; action recognition; object tracking; human-robot interaction
Recommended Citations
APA Style (7th edition)
Li, Y. (2018). Efficient and Robust Video Understanding for Human-robot Interaction and Detection [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654

MLA Style (8th edition)
Li, Ying. Efficient and Robust Video Understanding for Human-robot Interaction and Detection. 2018. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

Chicago Manual of Style (17th edition)
Li, Ying. "Efficient and Robust Video Understanding for Human-robot Interaction and Detection." Doctoral dissertation, Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654
Document number:
osu152207324664654
Download Count:
425
Copyright Info
© 2018, all rights reserved.
This open access ETD is published by The Ohio State University and OhioLINK.