FLUX and DiffusionBee 2.5.3
In today's video, I dive into the latest update to DiffusionBee 2.5.3, a fast and innovative Stable Diffusion application.
This update to DiffusionBee lets you use Flux models and embeddings,
with everything running locally and offline on your computer.
0:10 Intro
1:04 Terminology
1:14 What is Stable Diffusion?
1:23 What is prompting?
1:40 What are embeddings?
2:51 What are image weights?
3:03 What are safetensors?
3:29 What are LoRAs?
4:00 What are Flux models?
4:13 What are Pony models?
4:47 What is safe-for-work mode?
4:49 XL LoRAs and XL base models?
5:08 What is negative prompting?
5:20 What are samplers?
6:35 What are step counts?
7:22 What are Flux models and step counts?
8:25 Computer specs and benchmarks
9:40 What's missing?
10:10 Downloading the app
12:35 New layout
14:11 Importing models
15:42 Embedding installation
17:13 Importing LoRAs
18:06 Compatible models on Civitai
18:33 Navigating the home screen
18:57 Text2img
27:27 Weighting
34:37 Styles drop-down menu
36:10 Flux and style demo
40:56 ControlNet
40:58 Depth map
43:05 Bodypose
43:56 Lineart
45:08 Scribble
47:00 Tile / img2img technique
51:40 LoRA
53:48 Trigger words
57:17 Img2img
1:04:00 AI Canvas
1:12:36 Illustration Generator
1:19:05 Inpainting
1:24:32 Upscaler
1:27:18 Upscayl (alternative upscaling tool)
1:29:10 History tab breakdown
1:31:06 FLUX Schnell vs Dev
1:50:54 Training
1:51:56 Resizing images
1:53:05 Image captioning
2:11:58 LoRA merge
2:18:38 Deform video
2:19:00 Interpolation
2:23:30 Closing thoughts
THE APP
https://github.com/divamgupta/diffusionbee-stable-diffusion-ui/releases
WEBSITE
https://diffusionbee.com
My models
https://civitai.com/user/theprotoartent677
https://huggingface.co/brushpenbob/DiffusionBee/tree/main
Resize images:
https://upscayl.org/ - for enlarging images
https://www.birme.net/ - for shrinking images for training
Clean up photos if you don't have Photoshop:
https://www.photopea.com/
AI model training service:
https://dreamlook.ai
You can also train on civitai.com
Tool to convert txt to JSON:
https://colab.research.google.com/drive/13s9cMduESF4Wzv8tVcajQPLjrdoH5hH3#scrollTo=jmHafIiwa9F6
Captioning tools:
https://docs.pinokio.computer/download/applemac.html
https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator
https://huggingface.co/spaces/hysts/DeepDanbooru
QR Code maker
http://34qr.com
Materials:
The needed Python script, a readme file (with all the instructions as seen in the video), and my batch text document installer from my previous video's captioning tutorial
https://drive.google.com/file/d/1xL8PBlbjCBXkkidoKok4B3IIhrUCnIkG/view?usp=sharing
Original GitHub source for code
https://github.com/Akegarasu/sd-model-converter
[to:when] - adds "to" to the prompt after a fixed number of steps (when)
[from::when] - removes "from" from the prompt after a fixed number of steps (when)
Example: a [fantasy:cyberpunk:16] landscape
At start, the model will be drawing a fantasy landscape.
After step 16, it will switch to drawing a “cyberpunk landscape”, continuing from where it stopped with fantasy.
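The removal syntax works the same way (an illustrative prompt of my own, not from the video):
Example: a landscape [with heavy fog::10]
Here "with heavy fog" stays in the prompt for the first 10 steps, then gets dropped for the rest of the generation.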
Article:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing
[object:X] - adds object after X steps
[object::X] - removes object after X steps
[object1:object2:0.X] - object1 for the first 0.X fraction of the steps (e.g. 0.5 = the first 50%), then swaps to object2
[object1:object2:X] - starts with object1, then changes to object2 at step X
[object1|object2] - alternates between object1 and object2 every step
[object1|object2|...|objectN] - alternates between object1 ... objectN, then loops back to object1
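Example of alternation (an illustrative prompt, assuming the same AUTOMATIC1111-style syntax):
a [cow|horse] in a field
The model alternates between drawing a cow and a horse on every step, which tends to blend the two subjects into one image.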
MORE articles to read:
https://nexttrain.io/2023/12/21/what-is-stable-diffusion-weights-in-prompt/
https://www.andyhtu.com/post/prompt-weights-punctuations-how-to-use-it-in-automatic1111-stable-diffusion
cheatsheet:
a (word) - increase attention to word by a factor of 1.1
a ((word)) - increase attention to word by a factor of 1.21 (= 1.1 * 1.1)
a [word] - decrease attention to word by a factor of 1.1
a (word:1.5) - increase attention to word by a factor of 1.5
a (word:0.25) - decrease attention to word by a factor of 4 (= 1 / 0.25)
a \(word\) - use literal ( ) characters in prompt
[word | word] - a different way to blend multiple prompts, weights can be used
(word | word) - a different way to blend multiple prompts, weights can be used
word AND word - a different way to blend multiple prompts, weights can be used
[word:to:word] - [lion:robot] (This blends lion and robot equally)
[word:word:step] - [lion:robot:20]
(This means that 20 steps in, it will change to the robot prompt.)
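Worked example of the weighting math (my own numbers, using the rules above):
a (((house))) on a hill
Three nested parentheses multiply to 1.1 × 1.1 × 1.1 ≈ 1.33, so this is roughly the same as writing a (house:1.33) on a hill.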
#theprotoart #diffusionbee #diffusionbeetutorial #howtousediffusionbee #howto #stablediffusionXL #stablediffusion #stablediffusiontutorial #art #artist #fluxschnell #DigitalArt #illusiondiffusion #ArtTips #ArtTutorials #fluxdev #DiffusionArt #DigitalIllustration #img2img #flux