doi update
README.md
## Description
[MONET](https://doi.org/10.1038/s41591-024-02887-x) is a CLIP ViT-L/14 vision-language foundation model trained on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images, as verified by board-certified dermatologists, performing competitively with supervised models built on previously concept-annotated dermatology datasets of clinical images. MONET enables AI transparency across the entire AI system development pipeline, from building inherently interpretable models to dataset and model auditing.
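Concept annotation with a CLIP-style model like MONET typically amounts to comparing an image embedding against text embeddings of concept descriptions. As a minimal sketch of that scoring step, the snippet below uses toy NumPy vectors in place of real ViT-L/14 outputs; the function name and the 4-dimensional embeddings are illustrative, not part of the MONET API.

```python
import numpy as np

def concept_scores(image_emb, concept_embs):
    """Score how strongly an image expresses each concept via cosine
    similarity of L2-normalized embeddings (the zero-shot CLIP recipe)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    return txt @ img  # one cosine similarity per concept

# Toy 4-d embeddings standing in for MONET's image/text encoder outputs.
image = np.array([0.9, 0.1, 0.0, 0.2])
concepts = np.array([
    [1.0, 0.0, 0.0, 0.1],   # e.g. text embedding for "ulceration"
    [0.0, 1.0, 0.2, 0.0],   # e.g. text embedding for "pigmentation"
])
scores = concept_scores(image, concepts)
print(scores.argmax())  # index of the best-matching concept
```

In practice the embeddings would come from the model's image and text encoders (see the GitHub repository below for the actual interface), and the per-concept scores can then drive dataset auditing or interpretable downstream models.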
* [Paper](https://doi.org/10.1038/s41591-024-02887-x)
* [GitHub](https://github.com/suinleelab/MONET)

## Citation