Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

SketchDream: Sketch-based text-to-3D generation and editing

Liu, Feng-Lin, Fu, Hongbo, Lai, Yukun ORCID: https://orcid.org/0000-0002-2094-5680 and Gao, Lin 2024. SketchDream: Sketch-based text-to-3D generation and editing. ACM Transactions on Graphics 43(4), 44. 10.1145/3658120

PDF - Published Version (24MB). Available under License Creative Commons Attribution.

Abstract

Existing text-based 3D generation methods generate attractive results but lack detailed geometry control. Sketches, known for their conciseness and expressiveness, have contributed to intuitive 3D modeling but are confined to producing texture-less mesh models within predefined categories. Integrating sketch and text simultaneously for 3D generation promises enhanced control over geometry and appearance but faces challenges from 2D-to-3D translation ambiguity and multi-modal condition integration. Moreover, further editing of 3D models in arbitrary views would give users more freedom to customize their models; however, it is difficult to achieve high generation quality, preserve unedited regions, and manage proper interactions between shape components. To address these issues, we propose SketchDream, a text-driven 3D content generation and editing method that supports NeRF generation from hand-drawn sketches and achieves free-view sketch-based local editing. To tackle the 2D-to-3D ambiguity, we introduce a sketch-based multi-view image generation diffusion model, which leverages depth guidance to establish spatial correspondence. A 3D ControlNet with a 3D attention module is used to control the multi-view images and ensure their 3D consistency. To support local editing, we further propose a coarse-to-fine editing approach: the coarse stage analyzes component interactions and provides 3D masks to label edited regions, while the fine stage generates realistic results with refined details through local enhancement. Extensive experiments validate that our method generates higher-quality results than a combination of 2D ControlNet and image-to-3D generation techniques and offers more detailed control than existing diffusion-based 3D editing approaches.
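To illustrate the multi-view 3D-consistency mechanism mentioned in the abstract, the sketch below shows one common way such a cross-view ("3D") attention block can be built: per-view feature tokens are merged so that every token attends jointly across all views. This is a minimal, hypothetical PyTorch sketch under stated assumptions, not the authors' implementation; the class name, tensor shapes, and hyperparameters are invented for illustration.

# Illustrative sketch only (not the paper's code): a minimal cross-view
# attention block that lets feature tokens from all views attend to each
# other, so the generated multi-view images stay mutually consistent.
# Names, shapes, and sizes are assumptions.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, views, tokens, dim) -- per-view feature maps flattened to tokens.
        b, v, n, d = x.shape
        # Merge the view and token axes so every token attends over all views.
        tokens = x.reshape(b, v * n, d)
        h = self.norm(tokens)
        out, _ = self.attn(h, h, h)                 # joint attention across views
        return (tokens + out).reshape(b, v, n, d)   # residual, back to per-view layout

# Usage with hypothetical sizes: 4 views, 16x16 latent features of width 320.
feats = torch.randn(2, 4, 16 * 16, 320)
fused = CrossViewAttention(320)(feats)
print(fused.shape)  # torch.Size([2, 4, 256, 320])

Joint attention over all views is what ties the per-view images to a single underlying 3D shape; the depth guidance mentioned in the abstract would typically enter as an additional conditioning input, which is omitted in this simplified sketch.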

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Association for Computing Machinery (ACM)
ISSN: 0730-0301
Date of First Compliant Deposit: 30 May 2024
Date of Acceptance: 22 April 2023
Last Modified: 29 Jul 2024 13:51
URI: https://orca.cardiff.ac.uk/id/eprint/169267
