This repo is modified from insightface.
All training face images are aligned by MTCNN and cropped to 112x112:
Please check Dataset-Zoo for detailed information and dataset downloads.
- Please check data_process/face2rec2.py on how to build a binary face dataset.
- Install MXNet with GPU support (Python 3.5): `pip install mxnet-cu100`
- Download the training set MS1M-Arcface and place it in `$Fairface-Recognition-Solution-ROOT/train/datasets/`. Each training dataset includes at least the following six files:

  ```
  faces_emore/
    train.idx
    train.rec
    property
    lfw.bin
    cfp_fp.bin
    agedb_30.bin
  ```

  The first three files are the training dataset, while the last three are verification sets.
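The dataset layout can be sanity-checked with a short script. This is a minimal sketch, assuming the insightface convention that the `property` file holds a single line of comma-separated integers, `num_classes,width,height`; the demo values are illustrative only.

```python
import os
import tempfile

def read_property(dataset_dir):
    """Parse the `property` file of a binary face dataset.

    Assumes the insightface convention: one line of comma-separated
    integers `num_classes,width,height` (an assumption, not verified
    against this repo).
    """
    with open(os.path.join(dataset_dir, "property")) as f:
        num_classes, width, height = (int(x) for x in f.read().strip().split(","))
    return num_classes, (width, height)

# Tiny demo with a synthetic property file (values are illustrative only).
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "property"), "w") as f:
    f.write("85742,112,112")

num_classes, image_size = read_property(demo_dir)
print(num_classes, image_size)  # 85742 (112, 112)
```

The returned image size should match the 112x112 crops produced by MTCNN alignment.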
- Train deep face recognition models. Edit the config file, set your data path, and then run `./train.sh`.
- Multi-step fine-tune the above Softmax model. Download the training set fairface and build a binary face dataset from it, then run:
  - `./fairface_finetune.sh` to get the step-1 fine-tuned model
  - `./fairface_step2_finetune.sh` to get the step-2 fine-tuned model
  - `./fairface_step3_finetune.sh` to get the step-3 fine-tuned model

  This is a multi-step fine-tuning: at step 1 we freeze all layers except the final fc layer, at step 2 we fine-tune all layers, and at step 3 we use the most discriminated protected data to fine-tune the model.
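The step-1 freeze can be expressed as a parameter filter. The sketch below is a minimal illustration, assuming MXNet's `Module(fixed_param_names=...)` mechanism and insightface-style naming where the final fc layer's parameters start with `fc7`; both are assumptions, not verified against this repo's scripts.

```python
import re

def fixed_params_for_step(all_param_names, step):
    """Return the parameter names to freeze at a given fine-tune step.

    Step 1: freeze everything except the final fc layer (assumed to be
    named 'fc7', the insightface convention -- an assumption here).
    Later steps: freeze nothing, i.e. fine-tune all layers.
    The returned list would be passed to mx.mod.Module(fixed_param_names=...).
    """
    if step == 1:
        return [n for n in all_param_names if not re.match(r"fc7", n)]
    return []  # steps 2 and 3 train all layers

# Illustrative parameter names (made up for the demo).
params = ["conv0_weight", "stage1_conv1_weight", "bn1_gamma", "fc7_weight"]
print(fixed_params_for_step(params, 1))  # everything except fc7_weight
print(fixed_params_for_step(params, 2))  # [] -> nothing frozen
```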
- Hard-sample fine-tune. After we get the fine-tuned model, we fine-tune another hard-sample model from the pretrained model (not the fine-tuned one) using hard samples. Hard samples are samples whose prediction argmax differs from the annotation, and we take all samples from the sub_id of the hard samples. The training scripts are the same as in section 4, but only two steps are needed in this section.
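The hard-sample definition above can be sketched as a simple filter: a sample is "hard" when the argmax of its prediction scores disagrees with its label, and every sample sharing a sub_id with a hard sample is kept. The field names below are illustrative assumptions, not this repo's actual data schema.

```python
def select_hard_samples(samples):
    """Select hard samples plus everything sharing their sub_id.

    Each sample is a dict with 'sub_id', 'label', and 'scores'
    (per-class prediction scores); these field names are illustrative.
    A sample is hard when argmax(scores) != label; we then keep all
    samples whose sub_id appears among the hard samples.
    """
    hard_sub_ids = {
        s["sub_id"]
        for s in samples
        if max(range(len(s["scores"])), key=s["scores"].__getitem__) != s["label"]
    }
    return [s for s in samples if s["sub_id"] in hard_sub_ids]

# Demo: the first B sample is misclassified, so both B samples are selected.
data = [
    {"sub_id": "A", "label": 0, "scores": [0.9, 0.1]},
    {"sub_id": "B", "label": 1, "scores": [0.8, 0.2]},  # argmax 0 != label 1
    {"sub_id": "B", "label": 1, "scores": [0.3, 0.7]},
]
print([s["sub_id"] for s in select_hard_samples(data)])  # ['B', 'B']
```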
- Download the pretrained model from model-zoo and the test dataset from fairface, put the model in `$Fairface-Recognition-Solution-ROOT/test/final_eval_models` and the data in `$Fairface-Recognition-Solution-ROOT/test/TestData/tmp_data`, then run `./do.sh`.