realtime-2D-to-3D-faces


(Figure: Avatar_Moving)

Introduction

In this project, an infrastructure has been set up that reconstructs 3D faces in real time from 2D images using deep learning. Given a video stream, each frame is processed with the trained model and the reconstruction of the 3D face is visualized with the WebGLStudio platform.

One of the main objectives of the project is to obtain an intermediate representation that allows face data to be transmitted efficiently for use in videoconferencing. To obtain this representation, a PCA model has been created that spans all the 3D face configurations to be taken into account. To do this, the AFLW2000-3D dataset has been processed with the 3DDFA library to obtain the corresponding 3D face point clouds. Once the PCA model has been created, a neural network based on the ResNet architecture has been trained, where the inputs are the 2D images used to learn the PCA model and the outputs are their projections onto the subspace learned through PCA. Finally, these coordinates are used to reconstruct and draw the 3D point cloud in WebGLStudio.
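The efficiency of this representation comes from its dimensionality: instead of sending the full point cloud every frame, only the PCA coefficients travel over the wire. A sketch of the idea (the notation below is ours, not the repository's):

```latex
% x in R^{3N}: flattened 3D point cloud of N points; \bar{x}: mean face;
% V in R^{3N x k}: top-k principal components, with k << 3N.
c = V^{\top}(x - \bar{x})      % encode: k coefficients per frame
\hat{x} = \bar{x} + V c        % decode: approximate face reconstruction
```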

Creating the PCA Model and the Meshes for WebGLStudio

(Figure: 3D_Meshes_Generation)
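A minimal sketch of this step, assuming the AFLW2000-3D point clouds have already been extracted with 3DDFA and saved as `.npy` arrays; the file layout and the number of components are illustrative assumptions, not the repository's actual values:

```python
# Fit a PCA model over flattened 3D face point clouds (sketch, not the repo's script).
import glob
import numpy as np
from sklearn.decomposition import PCA

# Assumed layout: one (N, 3) point cloud per face, produced by 3DDFA.
clouds = [np.load(p).ravel() for p in sorted(glob.glob("pointclouds/*.npy"))]
X = np.stack(clouds)                        # (num_faces, 3N) design matrix

pca = PCA(n_components=50)                  # 50 is an assumed subspace size
coeffs = pca.fit_transform(X)               # per-face coordinates: training targets

np.save("pca_mean.npy", pca.mean_)          # mean face
np.save("pca_components.npy", pca.components_)  # basis used for reconstruction
np.save("pca_coeffs.npy", coeffs)
```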

Training the main model

(Figure: Train_Model)
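A minimal training sketch, assuming pairs of (2D face image, PCA coefficients). The exact ResNet depth, loss, and data loading are assumptions; the project only states that a ResNet regresses the PCA projection from the input image:

```python
# Regress PCA coefficients from face images with a ResNet (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models

N_COMPONENTS = 50                           # must match the PCA model above

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, N_COMPONENTS)  # regression head

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader, device="cuda"):
    model.to(device).train()
    for images, targets in loader:          # images: (B, 3, H, W); targets: (B, N_COMPONENTS)
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), targets.to(device))
        loss.backward()
        optimizer.step()
```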

Using the model

(Figure: Use_Model)
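An inference sketch along the lines described in the introduction: grab webcam frames, predict the PCA coefficients, and rebuild the 3D point cloud. The checkpoint name, input size, and preprocessing are assumptions; only the coefficients need to be sent to the WebGLStudio client:

```python
# Per-frame inference and PCA reconstruction (sketch under assumed names/sizes).
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

mean = np.load("pca_mean.npy")              # from the PCA step above
components = np.load("pca_components.npy")  # (N_COMPONENTS, 3N)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, components.shape[0])
model.load_state_dict(torch.load("face_pca_resnet.pt"))  # assumed checkpoint name
model.eval()

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Real preprocessing must match training (color order, crop, normalization).
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # (1, 3, 224, 224)
    with torch.no_grad():
        coeffs = model(tensor).numpy()[0]   # compact representation sent per frame
    cloud = (mean + coeffs @ components).reshape(-1, 3)  # 3D points to draw
cap.release()
```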

Results

(Figure: Avatar)

Assets

The texture used (made by hand, just to test the system):

(Figure: Texture)

Disclaimers

Authors

License

This project is licensed under the GNU GPL v3 License - see the LICENSE.md file for details.