GoodWin committed

Commit 32a81cc · 1 Parent(s): 0f691e2

Update README.md

Files changed (1):
  1. README.md +33 -160

README.md CHANGED
@@ -1,171 +1,44 @@
- ## [Blind Face Restoration via Deep Multi-scale Component Dictionaries](https://arxiv.org/pdf/2008.00418.pdf)

- >##### __Note: This branch contains all the restoration results, including the 512×512 face region and the final result obtained by putting the enhanced face back into the original input. The former version, which can only generate the face result, is in the [master branch](https://github.com/csxmli2016/DFDNet/tree/master).__
- <p>
- Overview of our proposed method. It mainly contains two parts: (a) the offline generation of multi-scale component dictionaries from large amounts of high-quality images with diverse poses and expressions. K-means is adopted to generate K clusters for each component (i.e., left/right eyes, nose, and mouth) at different feature scales. (b) The restoration process, in which dictionary feature transfer (DFT) blocks provide the reference details in a progressive manner. Here, the DFT-i block takes the Scale-i component dictionaries as reference at the same feature level.
- </p>
-
- <img src="./Imgs/pipeline_a.png">
- <p align="center">(a) Offline generation of multi-scale component dictionaries.</p>
- <img src="./Imgs/pipeline_b.png">
- <p align="center">(b) Architecture of our DFDNet for dictionary feature transfer.</p>
- ## Pre-trained Models and Dictionaries
- Download them from one of the following URLs and put them into `./`:
- - [BaiduNetDisk](https://pan.baidu.com/s/1K4fzjPiezVSMl5NjHoJCGQ) (s9ht)
- - [GoogleDrive](https://drive.google.com/drive/folders/1bayYIUMCSGmoFPyd4Uu2Uwn347RW-vl5?usp=sharing)

- The folder structure should be:
-
- .
- ├── checkpoints
- │   ├── facefh_dictionary
- │   │   └── latest_net_G.pth
- ├── weights
- │   └── vgg19.pth
- ├── DictionaryCenter512
- │   ├── right_eye_256_center.npy
- │   ├── right_eye_128_center.npy
- │   ├── right_eye_64_center.npy
- │   ├── right_eye_32_center.npy
- │   └── ...
- └── ...
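If it helps, the layout above can be spot-checked before running. The following is a sketch, not part of the repository: the paths mirror the tree above, and `DictionaryCenter512` contains more `.npy` files than the four listed here.

```python
import os

# Paths taken from the folder tree above; this only spot-checks a subset
# (DictionaryCenter512 holds many more component dictionaries).
REQUIRED = [
    "checkpoints/facefh_dictionary/latest_net_G.pth",
    "weights/vgg19.pth",
    "DictionaryCenter512/right_eye_256_center.npy",
    "DictionaryCenter512/right_eye_128_center.npy",
    "DictionaryCenter512/right_eye_64_center.npy",
    "DictionaryCenter512/right_eye_32_center.npy",
]

def missing_files(root="."):
    """Return the required files that are not present under `root`."""
    return [p for p in REQUIRED if not os.path.isfile(os.path.join(root, p))]

if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing downloads:", *missing, sep="\n  ")
    else:
        print("All pre-trained files found.")
```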

- ## Prerequisites
- >([Video Installation Tutorial](https://www.youtube.com/watch?v=OTqGYMSKGF4). Thanks to [bycloudump](https://www.youtube.com/channel/UCfg9ux4m8P0YDITTPptrmLg) for the tremendous help.)
- - PyTorch (≥1.1 recommended)
- - dlib
- - dominate
- - cv2 (opencv-python)
- - tqdm
- - [face-alignment](https://github.com/1adrianb/face-alignment)
- ```bash
- cd ./FaceLandmarkDetection
- python setup.py install
- cd ..
- ```
-
- ## Testing
- ```bash
- python test_FaceDict.py --test_path ./TestData/TestWhole --results_dir ./Results/TestWholeResults --upscale_factor 4 --gpu_ids 0
- ```
- #### __Four parameters can be changed for flexible usage:__
- ```
- --test_path       # test image path
- --results_dir     # path to save the results
- --upscale_factor  # upsample factor for the final result
- --gpu_ids         # GPU id; to use the CPU, set gpu_ids=-1
- ```
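These four flags can be sketched with `argparse`. The declaration below is hypothetical: the defaults mirror the example command above and are not necessarily the script's actual defaults.

```python
import argparse

# Hypothetical sketch of the four user-facing flags; defaults mirror the
# example command above, not necessarily the script's own values.
def build_parser():
    parser = argparse.ArgumentParser(description="DFDNet blind face restoration")
    parser.add_argument("--test_path", default="./TestData/TestWhole",
                        help="test image path")
    parser.add_argument("--results_dir", default="./Results/TestWholeResults",
                        help="path to save the results")
    parser.add_argument("--upscale_factor", type=int, default=4,
                        help="upsample factor for the final result")
    parser.add_argument("--gpu_ids", type=int, default=0,
                        help="GPU id; set to -1 to run on the CPU")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```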

- >Note: our DFDNet can only generate a 512×512 face result for any given face image.
-
- #### __The result path contains the following folders:__
- - Step0_Input: ```# Save the input image.```
- - Step1_AffineParam: ```# Save the crop-and-align parameters for copying the face result back to the original input.```
- - Step1_CropImg: ```# Save the cropped face images, resized to 512×512.```
- - Step2_Landmarks: ```# Save the facial landmarks for RoIAlign.```
- - Step3_RestoreCropFace: ```# Save the face restoration result (512×512).```
- - Step4_FinalResults: ```# Save the final restoration result obtained by putting the enhanced face back into the original input.```
-
- ## Some plausible restoration results on real low-quality images
-
- <table style="float:center" width=100%>
- <tr>
- <th><B>Input</B></th><th><B>Crop and Align</B></th><th><B>Restore Face</B></th><th><B>Final Results (UpScaleWhole=4)</B></th>
- </tr>
- <tr>
- <td>
- <img src='./Imgs/RealLR/n000056_0060_01.png'>
- </td>
- <td>
- <img src='./Imgs/RealLR/n000056_0060_01.png'>
- </td>
- <td>
- <img src='./Imgs/ShowResults/n000056_0060_01.png'>
- </td>
- <td>
- <img src='./Imgs/ShowResults/n000056_0060_01.png'>
- </td>
- </tr>
- <tr>
- <td>
- <img src='./Imgs/RealLR/n000184_0094_01.png'>
- </td>
- <td>
- <img src='./Imgs/RealLR/n000184_0094_01.png'>
- </td>
- <td>
- <img src='./Imgs/ShowResults/n000184_0094_01.png'>
- </td>
- <td>
- <img src='./Imgs/ShowResults/n000184_0094_01.png'>
- </td>
- </tr>
- <tr>
- <td>
- <img src='./Imgs/Whole/test1_0.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test1_1.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test1_2.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test1_3.jpg'>
- </td>
- </tr>
- <tr>
- <td>
- <img src='./Imgs/Whole/test2_0.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test2_1.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test2_2.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test2_3.jpg'>
- </td>
- </tr>
- <tr>
- <td>
- <img src='./Imgs/Whole/test5_0.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test5_1.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test5_2.jpg'>
- </td>
- <td>
- <img src='./Imgs/Whole/test5_3.jpg'>
- </td>
- </tr>
-
- </table>
-
- ## TO DO LIST (if possible)
- - [ ] Enhance all the faces in one image.
- - [ ] Enhance the background.
-
- ## Citation
-
- ```
- @InProceedings{Li_2020_ECCV,
- author = {Li, Xiaoming and Chen, Chaofeng and Zhou, Shangchen and Lin, Xianhui and Zuo, Wangmeng and Zhang, Lei},
- title = {Blind Face Restoration via Deep Multi-scale Component Dictionaries},
- booktitle = {ECCV},
- year = {2020}
- }
- ```
-
- <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
+ ---
+ title: Deep Multi Scale
+ emoji: 🦀
+ colorFrom: green
+ colorTo: green
+ sdk: gradio
+ app_file: app.py
+ pinned: false
+ ---

+ # Configuration

+ `title`: _string_
+ Display title for the Space.

+ `emoji`: _string_
+ Space emoji (emoji-only character allowed).

+ `colorFrom`: _string_
+ Color for the thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray).

+ `colorTo`: _string_
+ Color for the thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray).

+ `sdk`: _string_
+ Can be either `gradio`, `streamlit`, or `static`.

+ `sdk_version`: _string_
+ Only applicable for the `streamlit` SDK.
+ See the [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

+ `app_file`: _string_
+ Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` HTML code).
+ Path is relative to the root of the repository.

+ `models`: _List[string]_
+ HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
+ Will be parsed automatically from your code if not specified here.

+ `datasets`: _List[string]_
+ HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
+ Will be parsed automatically from your code if not specified here.

+ `pinned`: _boolean_
+ Whether the Space stays on top of your list.
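Putting the fields together: the configuration lives in a `---`-delimited YAML header at the top of README.md. A hand-rolled sketch of reading such a header is below (it handles only simple `key: value` lines; a real Space is parsed by the Hub itself, so this is purely illustrative).

```python
def parse_front_matter(text):
    """Parse simple `key: value` pairs from a `---`-delimited header."""
    fields = {}
    in_block = False
    for line in text.strip().splitlines():
        if line.strip() == "---":
            if in_block:        # closing marker: stop
                break
            in_block = True     # opening marker: start collecting
            continue
        if in_block and ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Example using the front matter shown in this commit.
readme = """---
title: Deep Multi Scale
emoji: 🦀
colorFrom: green
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
---

# Configuration
"""
config = parse_front_matter(readme)
print(config["sdk"], config["app_file"])  # gradio app.py
```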