tags:
- stable-diffusion
- text-to-image
---

# ControlNet v1.1 Models And Compatible Stable Diffusion v1.5 Type Models Converted To Apple CoreML Format

## For use with a Swift app or the SwiftCLI
The SD models are all "Original" (not "Split-Einsum") and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.
The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.
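
In a Swift app, that selection is driven simply by whether any ControlNet names are passed when the pipeline is created. Below is a minimal sketch, assuming the ControlNet-capable `StableDiffusionPipeline` API from the apple/ml-stable-diffusion Swift package; the resource path, ControlNet name, and exact argument labels are assumptions and may vary between package versions.

```swift
import Foundation
import CoreGraphics
import CoreML
import StableDiffusion   // apple/ml-stable-diffusion Swift package

/// Minimal sketch: generate one image from an unzipped "Original" model folder.
/// The resource URL, ControlNet name, and input image are placeholders.
func generate(resourcesAt resourceURL: URL,
              prompt: String,
              controlNetName: String? = nil,     // a model in <resources>/controlnet
              controlNetInput: CGImage? = nil) throws -> CGImage? {
    // "Original" models are built for CPU and GPU, so request those compute
    // units rather than .all (which would also try the Neural Engine).
    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndGPU

    // Passing a ControlNet name is what switches the pipeline from the
    // standard Unet to the ControlledUnet; with an empty list the standard
    // Unet is used, matching the automatic selection described above.
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resourceURL,
        controlNet: controlNetName.map { [$0] } ?? [],
        configuration: mlConfig)
    try pipeline.loadResources()

    var config = StableDiffusionPipeline.Configuration(prompt: prompt)
    config.stepCount = 25
    config.seed = 42
    config.guidanceScale = 7.5
    if let input = controlNetInput {
        config.controlNetInputs = [input]        // one input image per ControlNet
    }

    // Returns one CGImage? per requested image; return false from the
    // progress handler to cancel generation.
    let images = try pipeline.generateImages(configuration: config) { _ in true }
    return images.compactMap { $0 }.first
}
```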

The sizes are always meant to be WIDTH x HEIGHT. A 512x768 is "portrait" orientation.

**If you encounter any models that do not work fully with image2image and ControlNet using the current apple/ml-stable-diffusion SwiftCLI pipeline, Mochi Diffusion 3.2, or the Mochi Diffusion CN test build, please leave a report in the Community area here. If you would like to add models that you have converted, leave a message as well, and I'll try to figure out how to grant you access to this repo.**
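
For reference, a ControlNet + image2image run with the SwiftCLI looks roughly like the sketch below. The resource folder, ControlNet name, and input images are placeholders, and the ControlNet and image2image flags are assumptions based on the current apple/ml-stable-diffusion CLI; check `swift run StableDiffusionSample --help` for the exact options in your checkout.

```bash
# Placeholder paths and model names; flag names may differ between versions
# of apple/ml-stable-diffusion.
swift run StableDiffusionSample "a photo of a cat" \
  --resource-path ./DreamShaper5.0_original_512x512 \
  --compute-units cpuAndGPU \
  --controlnet LllyasvielSd15Canny \
  --controlnet-inputs ./canny-edges.png \
  --image ./start.png --strength 0.7 \
  --seed 42 \
  --output-path ./output
```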
## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet
Each folder contains 4 zipped model files, output size as indicated: 512x512, 512x768, 768x512 or 768x768
- DreamShaper v5.0, 1.5-type model, "Original"
- GhostMix v1.1, 1.5-type anime model, "Original"
- MeinaMix v9.0, 1.5-type anime model, "Original"
- MyMerge v1.0, 1.5-type NSFW model, "Original"
- Realistic Vision v2.0, 1.5-type model, "Original"
- Stable Diffusion v1.5, "Original"

## ControlNet Models - All Current SD-1.5-Type ControlNet Models
Each zip file contains a set of 4 resolutions: 512x512, 512x768, 768x512, 768x768