Spaces: Running on Zero
Upload README.md
README.md CHANGED
@@ -1,147 +1,8 @@
### The whole framework

<div align="center">
<img src="./Imgs/pipeline.png" width="800px">
</div>
<!--
<p align="justify">Faithful text image super-resolution (SR) is challenging because each character has a unique structure and usually exhibits diverse font styles and layouts. While existing methods primarily focus on English text, less attention has been paid to more complex scripts like Chinese. In this paper, we introduce a high-quality text image SR framework designed to restore the precise strokes of low-resolution (LR) Chinese characters. Unlike methods that rely on character recognition priors to regularize the SR task, we propose a novel structure prior that offers structure-level guidance to enhance visual quality. Our framework incorporates this structure prior within a StyleGAN model, leveraging its generative capabilities for restoration. To maintain the integrity of character structures while accommodating various font styles and layouts, we implement a codebook-based mechanism that restricts the generative space of StyleGAN. Each code in the codebook represents the structure of a specific character, while the vector $w$ in StyleGAN controls the character's style, including typeface, orientation, and location. Through the collaborative interaction between the codebook and style, we generate a high-resolution structure prior that aligns with LR characters both spatially and structurally. Experiments demonstrate that this structure prior provides robust, character-specific guidance, enabling the accurate restoration of clear strokes in degraded characters, even for real-world LR Chinese text with irregular layouts.</p>
-->
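A rough sketch of the codebook-and-style idea: one structure code per character, modulated by a style vector $w$. This is a pure-Python toy; the sizes, names, and multiplicative modulation are illustrative stand-ins, not the paper's actual architecture:

```python
import random

random.seed(0)
DIM, NUM_CHARS = 8, 100  # toy sizes; the real codebook and style dims are larger

# One fixed structure code per character id (learned in the real model).
codebook = {c: [random.gauss(0.0, 1.0) for _ in range(DIM)] for c in range(NUM_CHARS)}

def structure_prior(char_id, w):
    """Combine a character's structure code with a style vector w.

    Toy channel-wise modulation standing in for the StyleGAN generator:
    the code fixes WHICH character is drawn, w fixes HOW it is drawn.
    """
    return [s * wi for s, wi in zip(codebook[char_id], w)]

identity_w = [1.0] * DIM
print(structure_prior(3, identity_w) == codebook[3])  # True
```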
### Character Structure Prior Pretraining

<div align="center">
<img src="./Imgs/prior.gif" width="800px">
</div>
## MARCONet vs. MARCONet++

> - MARCONet is designed for **regular character layouts** only. See details of [MARCONet](https://github.com/csxmli2016/MARCONet).
> - MARCONet++ achieves more accurate alignment between the character structural prior (green structure) and the degraded image.

<div align="center">
<img src="./Imgs/marconet_vs_marconetplus.jpg" width="800px">
</div>
## TODO

- [x] Release the inference code and model.
- [ ] Release the training code (no plans to release for now).
## Getting Started

```
git clone https://github.com/csxmli2016/MARCONetPlusPlus
cd MARCONetPlusPlus
conda create -n mplus python=3.8 -y
conda activate mplus
pip install -r requirements.txt
```
## Inference

Download the pre-trained models:

```
python utils/download_github.py
```

Then run to restore **text lines**:

```
CUDA_VISIBLE_DEVICES=0 python test_marconetplus.py -i ./Testsets/LR_TextLines -a -s
```

or to restore **a whole text image**:

```
CUDA_VISIBLE_DEVICES=0 python test_marconetplus.py -i ./Testsets/LR_Whole -b -s -f 2
```
```
# Parameters:
-i: --input_path, default: ./Testsets/LR_TextLines or ./Testsets/LR_TextWhole
-o: --output_path, default: None; the output directory is then created automatically as '[LR path]_TIME_MARCONetPlus'
-a: --aligned, use -a when the input consists of cropped text lines; omit it when the input is a whole text image that needs text line detection
-b: --bg_sr, when restoring whole text images, use -b to restore the background region with BSRGAN; without -b, the background stays the same as the input
-f: --factor_scale, default: 2; when restoring whole text images, use -f to set the scale factor of the output
-s: --save_text, use -s to save the details of prior alignment, predicted characters, and locations
```
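For reference, the flag handling above could be sketched with argparse like this. Option names are taken from the parameter list; the real `test_marconetplus.py` may wire them differently:

```python
import argparse

def build_parser():
    # Illustrative parser mirroring the documented flags; not the actual script.
    p = argparse.ArgumentParser(description="MARCONet++ inference (sketch)")
    p.add_argument("-i", "--input_path", default="./Testsets/LR_TextLines")
    p.add_argument("-o", "--output_path", default=None)
    p.add_argument("-a", "--aligned", action="store_true",
                   help="input is pre-cropped text lines (skip line detection)")
    p.add_argument("-b", "--bg_sr", action="store_true",
                   help="restore the background region with BSRGAN")
    p.add_argument("-f", "--factor_scale", type=int, default=2,
                   help="output scale factor for whole text images")
    p.add_argument("-s", "--save_text", action="store_true",
                   help="save prior alignment, predicted characters, locations")
    return p

# Whole-image invocation from the example above:
args = build_parser().parse_args(["-i", "./Testsets/LR_Whole", "-b", "-s", "-f", "2"])
print(args.input_path, args.factor_scale)  # ./Testsets/LR_Whole 2
```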
## Restoring Real-World Chinese Text Images

> - We use [BSRGAN](https://github.com/cszn/BSRGAN) to restore the background region.
> - The parameters were tested on an NVIDIA A100 GPU (40 GB).
> - ⚠️ If inference is slow, the cause is usually a large input text image or a large factor_scale; resize the input to fit your needs.

[<img src="Imgs/whole_1.jpg" height="270px"/>](https://imgsli.com/NDA2MDUw) [<img src="Imgs/whole_2.jpg" height="270px"/>](https://imgsli.com/NDA2MDYw)

[<img src="Imgs/whole_3.jpg" height="540px"/>](https://imgsli.com/NDA2MTE0) [<img src="Imgs/whole_4.jpg" height="540px" width="418px"/>](https://imgsli.com/NDA2MDYy)
## Restoring Detected Text Lines

<img src="Imgs/text_line_sr.jpg" width="800px"/>

<details><summary><h2>Style w interpolation from three characters with different styles</h2></summary>
<img src="./Imgs/w-interpolation.gif" width="400px">
</details>
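Numerically, such a style interpolation is a plain linear blend between style vectors. A toy sketch (4-D vectors stand in for the real higher-dimensional $w$; names are illustrative):

```python
def lerp_w(w1, w2, t):
    """Element-wise linear interpolation between two style vectors."""
    return [(1.0 - t) * a + t * b for a, b in zip(w1, w2)]

# Toy style vectors standing in for StyleGAN's w.
w_a = [1.0, 0.0, 2.0, -1.0]
w_b = [0.0, 1.0, 0.0, 1.0]

# Five evenly spaced steps from w_a to w_b, as in the animation.
steps = [lerp_w(w_a, w_b, t / 4) for t in range(5)]
print(steps[0] == w_a, steps[-1] == w_b)  # True True
```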
## ‼️ Failure Cases

Despite its high-fidelity performance, MARCONet++ still struggles in some real-world scenarios because it relies heavily on:

- Real-world character **recognition** on complex degraded text images
- Real-world character **detection** on complex degraded text images
- Text line detection and segmentation
- A small domain gap between our synthetic and real-world text images

<img src="./Imgs/failure_case.jpg" width="800px">

> Restoring complex characters with high fidelity under such conditions remains a significant challenge.

We have also explored various approaches, such as training OCR models with Transformers and using YOLO- or Transformer-based methods for character detection, but they generally encounter the same issues. We encourage potential collaborations to jointly tackle this challenge and advance robust, high-fidelity text restoration.
## RealCE-1K Benchmark

To quantitatively evaluate real-world Chinese text line images, we curated a benchmark by filtering the [RealCE](https://github.com/mjq11302010044/Real-CE) test set to exclude images containing multiple text lines or inaccurate annotations, thereby constructing a Chinese text SR benchmark (see Section IV.B of our paper). You can download the RealCE-1K benchmark from [here](https://github.com/csxmli2016/MARCONetPlusPlus/releases/download/v1/RealCE-1K.zip).
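The curation rule above can be expressed as a small predicate. The field names here are hypothetical stand-ins, not RealCE's actual annotation schema:

```python
def keep_sample(ann):
    """Keep only single-text-line samples with accurate annotations,
    per the RealCE-1K curation rule described above."""
    return ann["num_text_lines"] == 1 and ann["annotation_accurate"]

samples = [
    {"num_text_lines": 1, "annotation_accurate": True},   # kept
    {"num_text_lines": 2, "annotation_accurate": True},   # multiple lines -> dropped
    {"num_text_lines": 1, "annotation_accurate": False},  # inaccurate -> dropped
]
benchmark = [s for s in samples if keep_sample(s)]
print(len(benchmark))  # 1
```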
## Acknowledgement

This project is built on the excellent [KAIR](https://github.com/cszn/KAIR) and [RealCE](https://github.com/mjq11302010044/Real-CE).
## ©️ License

This project is licensed under the <a rel="license" href="https://github.com/csxmli2016/MARCONetPlusPlus/blob/main/LICENSE">NTU S-Lab License 1.0</a>. Redistribution and use should follow this license.
## Citation

```
@article{li2025marconetplus,
  author  = {Li, Xiaoming and Zuo, Wangmeng and Loy, Chen Change},
  title   = {Enhanced Generative Structure Prior for Chinese Text Image Super-Resolution},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2025}
}

@InProceedings{li2023marconet,
  author    = {Li, Xiaoming and Zuo, Wangmeng and Loy, Chen Change},
  title     = {Learning Generative Structure Prior for Blind Text Image Super-resolution},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2023}
}
```
title: MARCONet++
emoji: π
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 4.44.1
app_file: app.py
pinned: true