Title: Deep3DLayout: 3D Reconstruction of an Indoor Layout from a Spherical Panoramic Image
Authors: Pintore, Giovanni; Almansa Aránega, Eva María
Keywords: indoor 3D layout; panoramic images; data-driven reconstruction; structured indoor reconstruction
Issue Date: Dec-2021
Publisher: ACM
Project: Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication
Journal: ACM Transactions on Graphics
Volume: 40
Issue: 6
Pages: 250:1–250:12
Conference: Proc. SIGGRAPH Asia
Abstract:
Recovering the 3D shape of the bounding permanent surfaces of a room from a single image is a key component of indoor reconstruction pipelines. In this article, we introduce a novel deep learning technique capable of producing, at interactive rates, a tessellated bounding 3D surface from a single 360-degree image. Unlike prior solutions, we fully address the problem in 3D, significantly expanding the reconstruction space of earlier methods. A graph convolutional network directly infers the room structure as a 3D mesh by progressively deforming a graph-encoded tessellated sphere mapped to the spherical panorama, leveraging perceptual features extracted from the input image. Important 3D properties of indoor environments are exploited in our design. In particular, gravity-aligned features are actively incorporated into the graph by a projection layer that exploits the recent concept of multi-head self-attention, and specialized losses guide the network towards plausible solutions even in the presence of massive clutter and occlusions. Extensive experiments demonstrate that our approach outperforms current state-of-the-art methods in terms of accuracy and in its capability to reconstruct more complex environments.
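To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' code) of one graph-convolution update applied to a coarse tessellated sphere, deforming its vertices radially from propagated features. The icosahedron mesh, feature matrix, and weight matrix are all illustrative stand-ins for the paper's learned components.

```python
# Minimal sketch of mesh deformation via graph convolution on a tessellated
# sphere. All tensors here are illustrative; the paper's network learns the
# weights and extracts perceptual features from the input panorama.
import numpy as np

def icosahedron():
    """Vertices and edges of a unit icosahedron (coarse sphere tessellation)."""
    p = (1.0 + np.sqrt(5.0)) / 2.0
    verts = np.array([
        [-1, p, 0], [1, p, 0], [-1, -p, 0], [1, -p, 0],
        [0, -1, p], [0, 1, p], [0, -1, -p], [0, 1, -p],
        [p, 0, -1], [p, 0, 1], [-p, 0, -1], [-p, 0, 1],
    ], dtype=float)
    verts /= np.linalg.norm(verts, axis=1, keepdims=True)
    faces = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),
             (1,5,9),(5,11,4),(11,10,2),(10,7,6),(7,1,8),
             (3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),
             (4,9,5),(2,4,11),(6,2,10),(8,6,7),(9,8,1)]
    edges = {tuple(sorted((f[i], f[(i+1) % 3]))) for f in faces for i in range(3)}
    return verts, sorted(edges)

def gcn_deform_step(verts, edges, feats, W, step=0.1):
    """One GCN-style propagation, then a radial displacement of each vertex."""
    n = len(verts)
    A = np.eye(n)                       # adjacency with self-loops (A + I)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D_inv = np.diag(1.0 / A.sum(axis=1))
    H = D_inv @ A @ feats @ W           # mean-aggregated neighbor features
    offsets = np.tanh(H[:, :1])         # bounded per-vertex radial offset
    return verts * (1.0 + step * offsets)

verts, edges = icosahedron()
rng = np.random.default_rng(0)
feats = rng.normal(size=(len(verts), 8))  # stand-in for perceptual features
W = rng.normal(size=(8, 4))               # stand-in for learned weights
deformed = gcn_deform_step(verts, edges, feats, W)
print(deformed.shape)  # (12, 3)
```

In the actual method the sphere is much more finely tessellated, the deformation is repeated progressively, and the features come from the panorama via a gravity-aligned, attention-based projection layer; this sketch only illustrates the graph-deformation mechanism.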
Appears in Collections: CRS4 publications