|Title:||Instant Automatic Emptying of Panoramic Indoor Scenes|
|Authors:||Pintore, Giovanni|
|Keywords:||visual computing|
|Issue Date:||2022|
|Publisher:||IEEE|
|Project:||Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication|
|Journal:||IEEE Transactions on Visualization and Computer Graphics|
|Abstract:||
Nowadays, 360-degree cameras, capable of capturing full environments in a single shot, are increasingly used in a variety of Extended Reality (XR) applications that require specific Diminished Reality (DR) techniques to conceal selected classes of objects. In this work, we present a new data-driven approach that, from an input 360-degree image of a furnished indoor space, automatically returns, with very low latency, an omnidirectional photorealistic view and an architecturally plausible depth map of the same scene emptied of all clutter. Contrary to recent data-driven inpainting methods that remove single user-defined objects based on their semantics, our approach is applied holistically to the entire scene and separates the clutter from the architectural structure in a single step. By exploiting peculiar geometric features of the indoor environment, we shift the major computational load to the training phase, leaving an extremely lightweight network at prediction time. Our end-to-end approach starts by computing an attention mask of the clutter in the image, based on the geometric difference between the full and the empty scene. This mask is then propagated through gated convolutions that drive the generation of the output image and its depth. Returning the depth of the resulting structure allows us to exploit, during supervised training, geometric losses of different orders, including robust pixel-wise geometric losses and high-order 3D constraints typical of indoor structures. Experimental results demonstrate that our method provides interactive performance and outperforms current state-of-the-art solutions in prediction accuracy on commonly used indoor panoramic benchmarks. In addition, our method produces consistently high-quality results even for scenes captured in the wild and for data without ground truth to support supervised training.
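To make the pipeline concrete, the sketch below illustrates the two operations the abstract names: deriving a clutter mask from the geometric difference between the furnished and empty scene, and a gated convolution whose sigmoid gate modulates the feature path. This is a hypothetical, pure-Python illustration of the general techniques, not the authors' implementation; all function names, the 3x3 kernel size, and the threshold value are assumptions for the sketch.

```python
# Hypothetical sketch of the abstract's two core operations; not the paper's code.
import math

def clutter_mask(depth_full, depth_empty, thresh=0.05):
    """Binary attention mask: 1 where the furnished scene's depth
    departs from the empty-room depth (i.e., where clutter sits)."""
    return [[1.0 if abs(f - e) > thresh else 0.0
             for f, e in zip(row_f, row_e)]
            for row_f, row_e in zip(depth_full, depth_empty)]

def conv2d(x, k):
    """'Valid' 2D convolution of grid x with a 3x3 kernel k (pure Python)."""
    h, w = len(x), len(x[0])
    return [[sum(x[i + di][j + dj] * k[di][dj]
                 for di in range(3) for dj in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

def gated_conv(x, k_feat, k_gate):
    """Gated convolution: the feature response is scaled element-wise
    by a learned sigmoid gate, letting masked (clutter) regions be
    suppressed adaptively instead of via a hard binary mask."""
    feat = conv2d(x, k_feat)
    gate = conv2d(x, k_gate)
    return [[f * (1.0 / (1.0 + math.exp(-g)))  # sigmoid-gated feature
             for f, g in zip(row_f, row_g)]
            for row_f, row_g in zip(feat, gate)]
```

In the actual network the gate and feature kernels are learned per layer and the mask propagates through the whole generator; here a single layer on a toy depth grid shows the mechanism.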
|Appears in Collections:||CRS4 publications|