Project Description

One of the important problems in geometry processing is reconstructing a high-order surface representation (a spline surface or a subdivision surface) from a 3D point cloud, a fine triangle mesh, or a set of range scans, most commonly generated by some acquisition process (laser scanning, 3D imaging such as CAT or MRI scans, etc.) [4]. Typically, the reconstructed surface is needed for further processing: deformation, adding detail and other types of geometry editing, or engineering analysis and optimization.

A robust approach to this problem relies heavily on geometric features: specific points, lines, and vector fields that must be estimated from the raw data. The aim of this project is to investigate how deep learning architectures can improve the quality of geometric feature estimation, and to propose an architecture for end-to-end learnable parametric surface reconstruction. Existing methods address this with some success, but they suffer from two problems: either the layout can only follow clearly detectable geometric features (e.g., mouth edges), not directions that are harder or impossible to detect, or one must fit a predefined template to the given geometry.
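
As a point of reference for what "estimating features from raw data" involves, below is a minimal, purely classical baseline sketch (not part of the proposed method): per-point normals and a curvature proxy computed by PCA over k-nearest neighborhoods. The function name, the neighborhood size k, and the curvature proxy are illustrative choices; the learned estimators discussed below aim to be more robust than this on noisy, non-uniformly sampled scans.

    # Baseline sketch: per-point normals and a curvature proxy via local PCA.
    # k and the curvature definition (smallest eigenvalue / trace) are
    # illustrative choices, not part of the proposal.
    import numpy as np
    from scipy.spatial import cKDTree

    def pca_features(points, k=16):
        """points: (N, 3) raw point cloud. Returns normals (N, 3) and a
        curvature proxy (N,)."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)            # k nearest neighbors per point
        normals = np.empty_like(points)
        curvature = np.empty(len(points))
        for i, nbrs in enumerate(idx):
            patch = points[nbrs] - points[nbrs].mean(axis=0)
            evals, evecs = np.linalg.eigh(patch.T @ patch)  # ascending eigenvalues
            normals[i] = evecs[:, 0]                # direction of least variance
            curvature[i] = evals[0] / evals.sum()   # flat patch -> close to 0
        return normals, curvature

    # Usage: normals, curv = pca_features(np.random.rand(1000, 3))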

Several existing neural network architectures are used to predict local geometric features: PointNet++, DGCNN, PointCNN, and others [1–3].
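
To make the comparison of such architectures concrete, here is a hedged, heavily simplified PointNet-style sketch in PyTorch: a shared per-point MLP combined with a max-pooled global descriptor, regressing one unit vector (e.g., a normal) per point. It is a stand-in illustrating this family of architectures, not a reimplementation of any cited network; all layer sizes are arbitrary.

    # Simplified PointNet-style per-point feature regressor (illustrative only).
    import torch
    import torch.nn as nn

    class PointFeatureNet(nn.Module):
        def __init__(self, out_dim=3):
            super().__init__()
            # Shared per-point MLP, implemented as 1x1 convolutions over points
            self.local = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
            )
            # Head mixes per-point features with a max-pooled global descriptor
            self.head = nn.Sequential(
                nn.Conv1d(256, 128, 1), nn.ReLU(),
                nn.Conv1d(128, out_dim, 1),
            )

        def forward(self, xyz):                        # xyz: (B, N, 3)
            x = self.local(xyz.transpose(1, 2))        # (B, 128, N)
            g = x.max(dim=2, keepdim=True).values      # (B, 128, 1) global context
            x = torch.cat([x, g.expand_as(x)], dim=1)  # (B, 256, N)
            out = self.head(x).transpose(1, 2)         # (B, N, out_dim)
            return nn.functional.normalize(out, dim=-1)  # e.g., unit normals

    # Usage: pred = PointFeatureNet()(torch.rand(2, 1024, 3))  # (2, 1024, 3)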

The expected outcome of this project is a new architecture for complete CAD model reconstruction, with a parametric representation of surfaces, from raw data. Such a framework has a wide range of applications, including reverse engineering of mechanical parts, computer-aided design and animation, and navigation of self-driving vehicles.

Tasks

  • Compare different deep architectures for point cloud processing on the problem of local feature prediction
  • Investigate how the accuracy of feature prediction influences the accuracy of surface mesh reconstruction
  • Predict a vector field of directions that defines, in some sense, the optimal directions of 3D mesh edges (see the sketch after this list)
  • Develop new architectures capable of reconstructing meshes that take the predicted local features into account
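
For the direction-field task above, one assumed ingredient is a loss that respects the symmetry of edge directions (a tangent direction is equivalent under 180° rotation for a line field, or 90° for a cross field). The sketch below uses the standard angle-multiplication trick for N-symmetric direction fields; the function name and interface are hypothetical, not a prescribed part of the project.

    # Hypothetical symmetry-aware loss for predicted tangent directions.
    import torch

    def direction_field_loss(pred, target, symmetry=4):
        """pred, target: (..., 2) unit tangent vectors in a local frame.
        symmetry=2 for line fields (u ~ -u), 4 for cross fields (u ~ rot90 u)."""
        ang_p = torch.atan2(pred[..., 1], pred[..., 0])
        ang_t = torch.atan2(target[..., 1], target[..., 0])
        # Multiplying angles by the symmetry order maps equivalent directions
        # to the same point on the unit circle before comparing them.
        diff = symmetry * (ang_p - ang_t)
        return (1.0 - torch.cos(diff)).mean()

    # Usage: loss = direction_field_loss(pred_dirs, gt_dirs, symmetry=4)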

References

  1. C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017.
  2. P. Guerrero, Y. Kleiman, M. Ovsjanikov, and N. J. Mitra. PCPNet: Learning local shape properties from raw point clouds. Computer Graphics Forum, 37(2):75–85, 2018.
  3. L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-Net: Point cloud upsampling network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2790–2799, 2018.
  4. M. Botsch, L. Kobbelt, et al. Polygon Mesh Processing. CRC Press, 2010.