Journal article in IEEE Transactions on Visualization and Computer Graphics, Year: 2022

Predict-and-Drive: Avatar Motion Adaption in Room-Scale Augmented Reality Telepresence with Heterogeneous Spaces

Abstract

Avatar-mediated symmetric Augmented Reality (AR) telepresence enables users located in different remote spaces to interact with each other in 3D through avatars. However, these spaces have heterogeneous structures and features, which makes it difficult to synchronize avatar motions with real user motions and to adapt avatar motions to the local scene. To overcome these issues, existing methods generate mutually movable spaces or retarget the placement of avatars. However, such methods limit the telepresence experience to a small sub-area, fix the positions of users and avatars, or adjust only the start/end positions of avatars without presenting smooth transitions. Moreover, the delay between avatar retargeting and users' real transitions can break the semantic synchronization between users' verbal conversation and the perceived avatar motion. In this paper, we first examine the impact of this transition delay and explore the preferred transition style in the presence of such delay through user studies. With the results showing a significant negative effect of avatar transition delay and informing the design choice of the transition style, we propose a Predict-and-Drive controller to reduce the delay and present a smooth transition of the telepresence avatar. We also introduce a grouping component that immediately computes a coarse virtual target once the user initiates a transition, which can further eliminate the avatar transition delay. Given the coarse virtual target or an exactly predicted target, we find the corresponding target for the avatar according to a pre-constructed mapping of objects of interest between the two spaces. The avatar control component maintains an artificial potential field of the space and drives the avatar toward the target while respecting the obstacles in the physical environment. We further conduct ablation studies to evaluate the effectiveness of the proposed components.
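
The abstract describes the avatar control component as an artificial potential field that drives the avatar toward its target while respecting physical obstacles. The following is a minimal, illustrative sketch of such a potential-field update step in Python; it is not the authors' implementation, and the function name, gain parameters (k_att, k_rep), influence radius, and point-obstacle model are assumptions made here for illustration only.

    import numpy as np

    def potential_field_step(avatar_pos, target_pos, obstacles,
                             k_att=1.0, k_rep=0.5, influence_radius=1.0,
                             step_size=0.05):
        """One illustrative update driving an avatar toward its target.

        avatar_pos, target_pos: 2D positions as numpy arrays.
        obstacles: list of 2D obstacle positions in the physical space.
        """
        # Attractive force: pulls the avatar toward the mapped target.
        force = k_att * (target_pos - avatar_pos)

        # Repulsive forces: classic potential-field term for obstacles
        # lying within the influence radius.
        for obs in obstacles:
            diff = avatar_pos - obs
            d = np.linalg.norm(diff)
            if 1e-6 < d < influence_radius:
                force += k_rep * (1.0 / d - 1.0 / influence_radius) \
                         * (1.0 / d**2) * (diff / d)

        # Take a small step along the combined force direction.
        norm = np.linalg.norm(force)
        if norm > 1e-6:
            avatar_pos = avatar_pos + step_size * force / norm
        return avatar_pos

In such a scheme, the target position would come from the mapping of objects of interest between the two spaces (using either the coarse virtual target from the grouping component or the exactly predicted target), and the step would be applied each frame until the avatar reaches its goal.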

Dates and versions

hal-04357778 , version 1 (21-12-2023)

Cite

Xuanyu Wang, Hui Ye, Christian Sandor, Weizhan Zhang, Hongbo Fu. Predict-and-Drive: Avatar Motion Adaption in Room-Scale Augmented Reality Telepresence with Heterogeneous Spaces. IEEE Transactions on Visualization and Computer Graphics, 2022, 28 (11), pp.3705-3714. ⟨10.1109/tvcg.2022.3203109⟩. ⟨hal-04357778⟩