Currently, multi-party video conferencing does not match the quality of a face-to-face meeting. One assumed reason is that participants cannot tell "who is focusing on whom". We introduce a virtual space into a multi-party conference system, placing avatars within that space. We also introduce an intuitive input interface using a motion processor, so that users can operate the system without being conscious of it. A new display method is essential for such a system, and we introduce one that gives a user feedback on which participant he/she is focusing on. We present e-MulCS as a system that realizes these proposals. A comparison with a conventional video conference system shows that our system better supports intuitive multi-party communication.
|Journal||IEICE Transactions on Information and Systems|
|Publication status||Published - December 2004|
ASJC Scopus subject areas
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering
- Artificial Intelligence