Bibliographic Information
RC-SMPL: Real-time Cumulative SMPL-based avatar body generation system (Korean parallel title: A real-time 3D avatar generation system based on SMPL texture accumulation)
Title / Author: RC-SMPL: Real-time Cumulative SMPL-based avatar body generation system / Hail Song.
Publication: [Daejeon: Korea Advanced Institute of Science and Technology (KAIST), 2024].

Holdings Information

Registration Number: 8041945
Location / Call Number: Academic Cultural Complex (Library), 2F, Theses: MGCT 24003
Status: Available (not for loan)

Abstract

We present a novel method for avatar body generation that cumulatively updates the texture and normal map in real-time. Multiple images or videos have been broadly adopted to create detailed 3D human models that capture more realistic user identities in both Augmented Reality (AR) and Virtual Reality (VR) environments. However, this approach has a higher spatiotemporal cost because it requires a complex camera setup and extensive computational resources. For lightweight reconstruction of personalized avatar bodies, we design a system that progressively captures the texture and normal values using a single RGBD camera to generate the widely-accepted 3D parametric body model, SMPL-X. Quantitatively, our system maintains real-time performance while delivering reconstruction quality comparable to the state-of-the-art method. Moreover, user studies reveal the benefits of real-time avatar creation and its applicability in various collaborative scenarios. By enabling the production of high-fidelity avatars at a lower cost, our method provides a more general way to create personalized avatars in AR/VR applications, thereby fostering more expressive self-representation in the metaverse.

(Korean abstract) This thesis proposes a new avatar body generation method that cumulatively updates the texture and normal map in real time. For avatar representation in augmented and virtual reality, many recent studies have proposed methods that generate 3D human models from multiple images or videos. However, such approaches incur high spatiotemporal costs because they require complex camera setups and extensive computational resources. This work designs a system that generates a human body model by progressively accumulating texture and normal values from a single RGBD camera for personalized avatar body reconstruction. Quantitatively, the system maintains real-time performance while achieving reconstruction quality comparable to prior work. In addition, user studies reveal the benefits of real-time avatar creation and its applicability in various collaborative scenarios. By enabling avatar reconstruction at a lower cost, this work enables personalized self-representation in metaverse environments.

Other Bibliographic Information
Call Number: MGCT 24003
Physical Description: iii, 26 p. : illustrations ; 30 cm
Language: English
General Note: Author's name in Korean: 송하일
Advisor's name in English: Woontack Woo
Advisor's name in Korean: 우운택
Includes appendix
Thesis: Thesis (Master's) - Korea Advanced Institute of Science and Technology: Graduate School of Culture Technology
Bibliographic Note: References: p. 21-25
Subjects: Augmented reality
Virtual reality
Metaverse
Avatar generation
3D reconstruction
SMPL
Computer graphics
Computer vision
Figure and Table Captions

System diagram of the proposed method for real-time 3D human body model reconstruction and enhancement of texture and normal maps. The diagram presents the various components of the system, which are detailed in Sections 3.1 (Preliminaries) through 3.3 (Normal Map Generation).

An illustration of ray casting from the point clouds obtained from the RGBD video stream onto the animated virtual SMPL-X model. To better illustrate the concept of ray casting, we have exaggerated the distance between the 3D body model and the point clouds in the figure; in reality they overlap closely.

Illustration of the lightweight technique for computing normal values from acquired RGBD images. The normal value for a point u_{i,j} is calculated using the cross product of the vectors formed by its neighboring points in the point cloud. This approach enables real-time normal value computation while minimizing computing resources.
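The cross-product normal computation described in this caption can be sketched as follows. This is a minimal illustration assuming an organized (H, W, 3) point map from the depth camera; the function name `estimate_normals` and the neighbor layout are assumptions for illustration, not the thesis's actual implementation:

```python
import numpy as np

def estimate_normals(points):
    """Estimate per-point normals from an organized (H, W, 3) point map
    using the cross product of vectors to neighboring points."""
    # Vectors to the right-hand and downward neighbors of each point
    dx = points[:, 1:, :] - points[:, :-1, :]   # shape (H, W-1, 3)
    dy = points[1:, :, :] - points[:-1, :, :]   # shape (H-1, W, 3)
    # Crop both to a common shape so the cross product is well defined
    dx = dx[:-1, :, :]
    dy = dy[:, :-1, :]
    n = np.cross(dx, dy)                        # shape (H-1, W-1, 3)
    # Normalize, guarding against zero-length normals
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    norm[norm == 0] = 1.0
    return n / norm
```

Because only two vector subtractions and one cross product are needed per point, this runs in a single vectorized pass, which is consistent with the caption's claim of real-time normal computation.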

(Study 1) Results of Likert scale rating (1: strongly disagree - 7: strongly agree) for (A) Avatar Embodiment (AE), (A-1) AE-Appearance, (A-2) AE-Ownership; (B) Virtual Embodiment Questionnaire (VEQ); and (C) Illusion of Virtual Body Ownership (IVBO). (statistical significance between the experimental conditions: *p < .05)

(A) In Study 1, participants engaged in the Sense of Embodiment task, controlling a motion-synchronized virtual avatar and interacting with spheres in the virtual space; (B) and (C) illustrate rendering results of avatar models in AR and VR, respectively.

Quantitative comparison of image similarity metrics (SSIM, mask-SSIM, and PSNR) between our method (Ours) and the video-based reconstruction method (Video) proposed by Alldieck et al. [5].
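For reference, PSNR, one of the metrics compared in this table, can be computed with the minimal sketch below. The function names and the masked variant (analogous in spirit to mask-SSIM, which restricts comparison to the avatar's foreground pixels) are illustrative assumptions, not the evaluation code used in the thesis:

```python
import numpy as np

def psnr(reference, rendered, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((reference - rendered) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def masked_psnr(reference, rendered, mask, max_val=1.0):
    """PSNR restricted to foreground pixels (mask == True), so background
    pixels do not inflate the score."""
    diff = (reference - rendered)[mask]
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform per-pixel error of 0.1 on images in [0, 1] gives an MSE of 0.01 and hence a PSNR of 20 dB.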

FPS and texture completion ratio according to system configuration

Comparative rendering results. From left to right: Ground truth image, avatar generated by the video-based restoration method by Alldieck et al. [5] (Video), and avatar generated by our proposed method (Ours). Our method captures more distinctive clothing patterns for Avatar 6 and Avatar 8 compared to the video-based method. However, it failed to capture the bumpy patterns of the clothing around t

More rendering results (Female subjects, From 1 to 6). From left to right: Ground truth image, avatar generated by the video-based restoration method by Alldieck et al. [5] (Video), and avatar generated by our proposed method (Ours).

More rendering results (Male subjects, From 7 to 12). From left to right: Ground truth image, avatar generated by the video-based restoration method by Alldieck et al. [5] (Video), and avatar generated by our proposed method (Ours).