By adding low-rank parameter-efficient fine-tuning to ControlNet, we introduce Control-LoRAs. This approach offers a more efficient and compact method for bringing model control to a wider variety of consumer GPUs.
For each model below, you'll find Rank 256 files (reducing the original 4.7GB ControlNet models down to ~738MB Control-LoRA models) and experimental Rank 128 files (reducing the model down to ~377MB).
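To make the rank and file-size numbers above concrete, here is a minimal sketch of the low-rank update idea behind a Control-LoRA: a frozen layer is augmented with two small trainable matrices whose inner dimension is the rank, and only those factors need to be stored. The `LoRALinear` class, layer sizes, and rank values are illustrative assumptions, not the actual ControlNet modules or training code.

```python
# Illustrative sketch of a low-rank (LoRA-style) update: W x + scale * B(A x).
# Names and shapes are hypothetical, not the real Control-LoRA implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 256, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # original weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)  # A: d_in -> r
        self.up = nn.Linear(rank, base.out_features, bias=False)   # B: r -> d_out
        nn.init.zeros_(self.up.weight)  # start as an identity-preserving update
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

# Only the low-rank factors are shipped, which is why a rank-128 file is
# roughly half the size of a rank-256 file covering the same layers.
layer = LoRALinear(nn.Linear(1280, 1280), rank=128)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 2 * 1280 * 128
```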
Each Control-LoRA has been trained on a diverse range of image concepts and aspect ratios.
MiDaS and ClipDrop Depth
This Control-LoRA utilizes a grayscale depth map for guided generation.
Depth estimation is an image processing technique that determines the distance of objects in a scene, providing a depth map that highlights variations in proximity.
The model was trained on the depth results of MiDaS dpt_beit_large_512.
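As a rough illustration of how such a grayscale depth map can be produced, the sketch below runs MiDaS through torch.hub and normalizes the prediction to an 8-bit image suitable as a conditioning input. The `DPT_BEiT_L_512` entry point and `beit512_transform` attribute follow the MiDaS v3.1 hub configuration; the exact names may vary between releases, and the file paths are placeholders.

```python
# Hedged sketch: compute a grayscale depth map with MiDaS (names assume MiDaS v3.1).
import cv2
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
midas = torch.hub.load("intel-isl/MiDaS", "DPT_BEiT_L_512").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.beit512_transform  # attribute name assumed from the v3.1 hubconf

img = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img).to(device))
    # Resize the prediction back to the input resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

# Normalize to an 8-bit grayscale depth map.
depth = prediction.cpu().numpy()
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)).astype(np.uint8)
cv2.imwrite("depth.png", depth)
```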