Minoi, J.-L., Gillies, D.F., Robert, A.J.
This paper presents a novel approach to facial expression synthesis and animation using real data sets of people acquired by 3D scanners. Three-dimensional faces are generated automatically through an interface provided by the scanners. The acquired raw human face surfaces are first pre-processed using rigid and non-rigid registration methods, and each face surface is then synthesized using linear interpolation approaches and multivariate statistical methods. Point-to-point correspondences between face surfaces are required for the reconstruction and synthesis processes. Our experiments focused on dense correspondence, as well as on using selected landmarks to compute the deformation of facial expressions. The placement of landmarks is based on the Facial Action Coding System (FACS) framework, and the movements were analysed according to the motions of the facial features. We have also worked on reconstructing a 3D face surface from a single two-dimensional (2D) face image of a person. We then employed tensor-based multivariate statistical methods using geometric 3D face information to reconstruct and animate the different facial expressions.
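The linear interpolation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes two face surfaces already placed in dense point-to-point correspondence (as the registration stage provides), and blends them with a weight `alpha` to animate the transition between a neutral face and an expression. The function name `interpolate_faces` and the toy vertex data are illustrative assumptions.

```python
import numpy as np

def interpolate_faces(neutral, expression, alpha):
    """Linearly blend two face surfaces in dense point-to-point
    correspondence: alpha=0 returns the neutral face, alpha=1 the
    full expression, and intermediate values give the in-between
    frames of the animation."""
    neutral = np.asarray(neutral, dtype=float)
    expression = np.asarray(expression, dtype=float)
    # Dense correspondence means vertex i of one mesh matches vertex i
    # of the other, so the arrays must have identical shapes.
    assert neutral.shape == expression.shape, "meshes must correspond point-to-point"
    return (1.0 - alpha) * neutral + alpha * expression

# Toy example: three 3D vertices per "face" (hypothetical data).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
smile = np.array([[0.0, 0.2, 0.1],
                  [1.0, 0.2, 0.1],
                  [0.0, 1.0, 0.0]])
halfway = interpolate_faces(neutral, smile, 0.5)
```

Sampling `alpha` over [0, 1] produces the frame sequence of the animation; the same blending generalizes to more than two example faces via convex combinations, which is where the multivariate statistical models come in.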