Run Stable Diffusion on Mac natively

Stable Diffusion 1.5 with ControlNet
When using a model for the very first time, the Neural Engine may take up to 2 minutes to compile a cached version. Subsequent generations will be much faster.
- CPU & Neural Engine provides a good balance between speed and low memory usage
- CPU & GPU may be faster on M1 Max, Ultra, and later, but will use more memory

Depending on the option chosen, you will need to use the correct model version (see the Models section for details).
You will need to convert or download Core ML models in order to use Mochi Diffusion.
- split_einsum version is compatible with all compute unit options, including Neural Engine
- original version is only compatible with the CPU & GPU option

<Home Directory>/
└── MochiDiffusion/
    └── models/
        ├── stable-diffusion-2-1_split-einsum_compiled/
        │   ├── merges.txt
        │   ├── TextEncoder.mlmodelc
        │   ├── Unet.mlmodelc
        │   ├── VAEDecoder.mlmodelc
        │   ├── VAEEncoder.mlmodelc
        │   └── vocab.json
        ├── ...
        └── ...
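As a sketch, setting up the layout above from the command line might look like the following. The conversion command is illustrative only and follows Apple's ml-stable-diffusion tool; the model folder name is taken from the example tree, and both commented-out steps should be adapted to your own model and paths.

```shell
# Illustrative only: convert a model with Apple's ml-stable-diffusion tool
# (see the apple/ml-stable-diffusion repository for the exact flags for your setup).
# python -m python_coreml_stable_diffusion.torch2coreml \
#     --convert-unet --convert-text-encoder --convert-vae-decoder --convert-vae-encoder \
#     --attention-implementation SPLIT_EINSUM --bundle-resources-for-swift-cli \
#     -o ~/Downloads/converted

# Create the folder Mochi Diffusion scans for models (path from the tree above)
mkdir -p "$HOME/MochiDiffusion/models"

# Move the converted or downloaded model folder into place
# ("stable-diffusion-2-1_split-einsum_compiled" is the example name from the tree)
# mv ~/Downloads/stable-diffusion-2-1_split-einsum_compiled "$HOME/MochiDiffusion/models/"
```

After the model folder is in place, it should appear in Mochi Diffusion's model list the next time the app scans the directory.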
All generation happens locally and absolutely nothing is sent to the cloud.
Mochi Diffusion is always looking for contributions, whether it's through bug reports, code, or new translations.
If you find a bug, or would like to suggest a new feature or enhancement, search for your problem first, as this helps avoid duplicates. If you can't find your issue, feel free to create a new one. Please don't open an issue for a question; issues are for bug reports and feature requests only.
If you're looking to contribute code, feel free to open a Pull Request. I recommend installing swift-format to catch lint issues.
If you'd like to translate Mochi Diffusion to your language, please visit the project page on Crowdin. You can create an account for free and start translating and/or approving.