Zheng, Yue and Hicks, Yulia A.
Abstract
The aim of our research is to create a "virtual friend", i.e., a virtual character capable of responding in a realistic and sensible manner to actions obtained by observing a real person in video. In this paper, we present a novel, model-based approach for generating a variety of complex behavioural responses for a fully articulated "virtual friend" in three-dimensional (3D) space. First, we train a collection of dual hidden Markov models (HMMs) on 3D motion capture (MoCap) data representing a number of interactions between two people. Second, we track the 3D articulated motion of a single person in ordinary 2D video. Finally, using the dual HMMs, we generate a moving "virtual friend" that reacts to the motion of the tracked person, and place it in the original video footage. We describe our approach in depth and present the results of experiments, which show that the generated behaviours are very close to those of real people.
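The abstract's core idea can be illustrated with a toy sketch. The following is not the authors' implementation; it is a minimal discrete analogue, with invented probabilities, of what a "dual" HMM implies: each hidden state jointly emits one symbol for the observed person A and one for the virtual friend B. Decoding A's observed sequence with Viterbi (using only the A-side emissions) yields a state path, from which B's most likely response symbols are read off the B-side emissions.

```python
import math

# Toy dual HMM with invented numbers: 2 hidden states, 2 symbols per person.
start = [0.6, 0.4]                       # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]         # state transition probabilities
emit_A = [[0.9, 0.1], [0.2, 0.8]]        # P(A's symbol | state)
emit_B = [[0.8, 0.2], [0.1, 0.9]]        # P(B's symbol | state)
n_states = 2

def viterbi(obs_A):
    """Most likely hidden-state path given A's observation sequence."""
    logp = [math.log(start[s] * emit_A[s][obs_A[0]]) for s in range(n_states)]
    back = []
    for o in obs_A[1:]:
        step, new_logp = [], []
        for j in range(n_states):
            best = max(range(n_states),
                       key=lambda i: logp[i] + math.log(trans[i][j]))
            step.append(best)
            new_logp.append(logp[best] + math.log(trans[best][j])
                            + math.log(emit_A[j][o]))
        back.append(step)
        logp = new_logp
    path = [max(range(n_states), key=lambda s: logp[s])]
    for step in reversed(back):
        path.append(step[path[-1]])      # backtrack through the pointers
    return path[::-1]

def respond(obs_A):
    """B's response: per-state argmax over the B-side emission model."""
    return [max(range(n_states), key=lambda b: emit_B[s][b])
            for s in viterbi(obs_A)]

print(respond([0, 0, 1, 1]))  # prints [0, 0, 1, 1]
```

The paper works with continuous 3D MoCap features rather than discrete symbols, so the real system would use continuous emission densities and a learned model; the joint A/B emission structure is the part this sketch is meant to convey.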
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Date Type: | Publication |
Status: | Published |
Schools: | Computer Science & Informatics; Engineering |
Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
Uncontrolled Keywords: | 3D articulated motion, 3D motion capture, HMM, dual hidden Markov models, natural interactive behaviours, real video, virtual friend |
Publisher: | IEEE Press |
ISBN: | 9780780397361 |
Last Modified: | 24 Oct 2022 10:45 |
URI: | https://orca.cardiff.ac.uk/id/eprint/45781 |
Citation Data
Cited 1 time in Scopus.