Creating a Dataset of Faces for my Autoencoder with Semi-supervised Learning

By Eric Antoine Scuccimarra
I am still working on my face autoencoder in my spare time, although I have much less spare time lately. My non-variational autoencoder works great - it can very accurately reconstruct any face in my dataset of 400,000 faces, but it doesn't work at all for interpolation or anything like that. So I have also been trying to train a variational autoencoder, but it has a lot more difficulty learning.
For a face that is roughly centered and looking in the general direction of the camera it does a somewhat decent job, but if the picture is off in any way - another face off to the side, something blocking the face, the face at a strange angle, and so on - it does a pretty bad job. And since I want to use this for interpolation, training it on these bad faces doesn't really help anything.
One of the biggest datasets I am using is this one from ETHZ. The dataset was created to train a network to predict a person's age, and while the images are all of good quality, it includes many images with the issues I mentioned above, as well as pictures that are not faces at all - like drawings or cartoons. The other datasets I am using consist entirely of properly cropped faces as described above, but this dataset is almost 200k images, so omitting it completely would significantly reduce the size of my training data.
The other day I decided I needed to improve the quality of my training dataset if I ever want to get this variational autoencoder properly trained, and to do that I need to filter out the bad images from the ETHZ IMDB dataset. They had already created the dataset using face detectors, but I want to remove faces that have certain attributes:
- Multiple faces or parts of faces in the image
- Images with something blocking part of the face
- Images where the faces are not generally facing forward, such as profiles
I started trying to curate them manually, but after going through 500 of the 200k images I realized that would not be feasible. It would be easy to train a neural network to classify the faces, but that would require labelled training data - which still means manually classifying faces. So instead I took another dataset of faces that were all good, added about 700 bad faces from the IMDB dataset, and made a new dataset of about 7,000 images. Then I took a pre-trained discriminator I had previously used as part of a GAN for generating faces and retrained it to classify faces as good or bad.
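The retraining step above amounts to swapping the discriminator's real/fake output for a fresh good/bad head and fine-tuning with a binary cross-entropy loss. Here is a minimal PyTorch sketch of that idea; the small conv architecture, image size, and batch are stand-ins for illustration, not the actual discriminator from the post.

```python
# Sketch: reusing a pretrained GAN discriminator as a good/bad face classifier.
# The architecture and data here are assumptions; in practice the trunk would
# be loaded from the GAN checkpoint with load_state_dict().
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(   # conv trunk (pretrained in practice)
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)     # original real/fake head

disc = Discriminator()
# Replace the real/fake head with a fresh good/bad head, then fine-tune
# the whole network on the small hand-labelled dataset.
disc.head = nn.Linear(64, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(disc.parameters(), lr=1e-4)

images = torch.randn(8, 3, 64, 64)                        # stand-in batch
labels = torch.tensor([1., 1., 1., 0., 0., 1., 0., 1.])   # 1 = good face

logits = disc.head(disc.features(images).flatten(1)).squeeze(1)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the trunk has already learned face features from the GAN training, only a few epochs are needed before the classifier is useful.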
I ran this for about 10 epochs, until it was achieving very good accuracy, and then used it to evaluate the IMDB dataset. Any image it gave a less than 0.03 probability of being good I moved into the bad training dataset, and any image it gave a greater than 0.99 probability of being good I moved into the good training dataset. Then I continued training on the enlarged dataset and repeated the process.
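One round of that thresholded sorting step can be sketched as follows. The directory layout and the `predict` callable (which returns the classifier's probability that an image is a good face) are hypothetical; the thresholds are the ones from the post.

```python
# Sketch of one self-labelling round: confidently classified images are moved
# into the labelled good/bad pools; uncertain ones stay for the next round.
import shutil
from pathlib import Path

GOOD_THRESHOLD = 0.99   # p(good) at or above this -> good training set
BAD_THRESHOLD = 0.03    # p(good) at or below this -> bad training set

def sort_round(unlabelled_dir, good_dir, bad_dir, predict):
    """predict(path) -> probability that the image is a good face (hypothetical)."""
    moved = 0
    for img in sorted(Path(unlabelled_dir).glob("*.jpg")):
        p = predict(img)
        if p >= GOOD_THRESHOLD:
            shutil.move(str(img), str(Path(good_dir) / img.name))
            moved += 1
        elif p <= BAD_THRESHOLD:
            shutil.move(str(img), str(Path(bad_dir) / img.name))
            moved += 1
    return moved  # retrain on the enlarged dataset, then call again
```

Each call shrinks the unlabelled pool and grows the training set, so after retraining the classifier becomes confident about images it previously left alone.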
This is a form of weak supervision, or semi-supervised learning - specifically self-training with pseudo-labels - and it works a lot better than I thought it would. After training for a few hours, the images being moved all seem to be correctly classified, and after each iteration the training dataset grows, allowing the network to keep learning. Since I only move images with very high or very low probabilities, the risk of misclassification should be relatively low, and I expect to have the IMDB dataset completely sorted by the end of tomorrow, maybe even sooner. What would have taken weeks or longer to do manually has been reduced to days thanks to transfer learning and weak supervision!