Semantizing Complex 3D Scenes using Constrained Attribute Grammars
Article first published online: 19 AUG 2013
© 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Computer Graphics Forum
Volume 32, Issue 5, pages 33–42, August 2013
How to Cite
Boulch, A., Houllier, S., Marlet, R. and Tournaire, O. (2013), Semantizing Complex 3D Scenes using Constrained Attribute Grammars. Computer Graphics Forum, 32: 33–42. doi: 10.1111/cgf.12170
- Issue published online: 19 AUG 2013
- I.2.10 [Artificial Intelligence]: Vision and Scene Understanding—D/stereo scene analysis;
- I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Object hierarchies;
- I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Object recognition;
- I.5.4 [Pattern Recognition]: Applications—Computer vision
We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraints. It offers a practical conceptual interface, which is crucial for writing large, maintainable specifications. As recursion is inadequate to express large collections of items, we introduce maximal operators, which are essential to reduce the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations — in practice, only a few, thanks to relevant constraints. We evaluate this technique for building model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.
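To give a flavor of the idea, the following is a minimal toy sketch (not the authors' formalism) of a constraint-pruned, maximal grouping rule in the spirit of the abstract: a hypothetical rule "Staircase → maximal sequence of Steps" whose attribute constraint (a regular rise between consecutive step heights) prunes interpretations during bottom-up grouping. All names, parameters, and the rise/tolerance values are invented for illustration.

```python
def parse_staircases(step_heights, rise=0.2, tol=0.01):
    """Toy bottom-up grouping: partition detected step heights into
    maximal runs whose consecutive heights differ by ~`rise`.

    Illustrates two ideas from the abstract:
    - an attribute constraint (regular rise) prunes candidate groupings;
    - a "maximal operator" keeps only maximal runs, avoiding the
      combinatorial blow-up of recursive pairwise rules.
    """
    heights = sorted(step_heights)
    runs, current = [], [heights[0]]
    for h in heights[1:]:
        if abs(h - current[-1] - rise) <= tol:  # constraint: regular rise
            current.append(h)                   # extend the current run
        else:
            runs.append(current)                # close a maximal run
            current = [h]
    runs.append(current)
    # Keep only maximal runs with at least two steps as staircases.
    return [r for r in runs if len(r) >= 2]

# Two staircases detected among six horizontal surfaces:
detected = [0.0, 0.2, 0.4, 0.6, 1.5, 1.7]
print(parse_staircases(detected))  # [[0.0, 0.2, 0.4, 0.6], [1.5, 1.7]]
```

In the paper's actual formalism, such rules are written declaratively in the grammar and the parser shares sub-derivations across interpretations in a parse forest; this sketch only mimics the pruning effect of one constraint.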