📖 PuLID: Pure and Lightning ID Customization via Contrastive Alignment
Blip 3 / XGen-MM: answers questions about images ({blip3,xgen-mm}-phi3-mini-base-r-v1)
Make realistic images of real people instantly
⚡️FLUX PuLID: FLUX-dev based Pure and Lightning ID Customization via Contrastive Alignment🎭
Create song covers from audio files with any RVC v2-trained AI voice.
🎨 Fill in masked parts of images with FLUX.1-dev 🖌️
Age prediction using CLIP - a patched version of `https://replicate.com/andreasjansson/clip-age-predictor` that works with the new version of cog!
✍️✨Prompts to auto-magically relight your images
✨DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior
allenai/Molmo-7B-D-0924: answers questions about images and captions them
🎨 AnimateDiff (w/ MotionLoRAs for Panning, Zooming, etc): Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Add sound to video. An advanced AI model that synthesizes high-quality audio from video content, enabling seamless video-to-audio transformation
📽️ Increase Framerate 🎬 ST-MFNet: A Spatio-Temporal Multi-Flow Network for Frame Interpolation
FILM: Frame Interpolation for Large Motion (ECCV 2022).
Monster Labs' Controlnet QR Code Monster v2 For SD-1.5 on top of AnimateDiff Prompt Travel (Motion Module SD 1.5 v2)
🖼️✨Background images + prompts to auto-magically relights your images (+normal maps🗺️)
Real-Time Open-Vocabulary Object Detection
Text-to-Video + Image-to-Video: Pyramid Flow, an autoregressive video generation method based on Flow Matching
FLUX.1-dev Inpainting ControlNet model
Create your own Realistic Voice Cloning (RVC v2) dataset using a YouTube link
🎨AnimateDiff Prompt Travel🧭 Seamlessly Navigate and Animate Between Text-to-Image Prompts for Dynamic Visual Narratives
🎨 Fill in masked parts of images with FLUX.1-schnell 🖌️
AuraSR v2: Second-gen GAN-based Super-Resolution for real-world applications
Hunyuan-Video LoRA Explorer + Trainer
FlashFace: Human Image Personalization with High-fidelity Identity Preservation
Make realistic images of real people instantly (w/ ip-adapter-plus-face_sdxl_vit-h)
🖼️ Super fast 1.5B Image Captioning/VQA Multimodal LLM (Image-to-Text) 🖋️
MimicMotion: High-quality human motion video generation with pose-guided control
Idefics3-8B-Llama3: answers questions about images and captions them
AuraSR: GAN-based Super-Resolution for real-world applications
Qwen 2: A 7-billion-parameter language model from Alibaba Cloud, fine-tuned for chat completions
Cubiq's ComfyUI InstantID node running the `instantid_basic.json` example
✨Stable Diffusion 3 w/ ⚡InstantX's Canny, Pose, and Tile ControlNets🖼️
Jina-CLIP v2: 0.9B multimodal embedding model with 89-language multilingual support, 512x512 image resolution, and Matryoshka representations
Image tagger fine-tuned on WaifuDiffusion (w/ SwinV2, ConvNext, and ViT)
🫦 Realistic facial expression manipulation (lip-syncing) using audio or video
Unofficial Re-Trained AnimateAnyone (Image + DWPose Video → Animated Video of Image)
🎼FluxMusic Text-to-Music Generation with Rectified Flow Transformer🎶
A state-of-the-art text-to-video generation model capable of creating high-quality videos with realistic motion from text descriptions
🐲 DragGAN 🐉 - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold"
Surrealist digital art featuring whimsical, anthropomorphic characters with exaggerated textures and vibrant color blocking
Identifies NSFW images
Transform your text into a beautiful two-tone color gradient that represents your emotions.
Super High Quality Depth Maps 🗺️: An End-to-End Tile-Based Framework 🏗️ for High-Resolution Monocular Metric Depth Estimation 🔍📏
🎙️Hololive text-to-speech and voice-to-voice (Japanese🇯🇵 + English🇬🇧)
Dkamacho’s Scene Assembler
MEMO is a state-of-the-art open-weight model for audio-driven talking video generation.
Qwen 2: A 1.5-billion-parameter language model from Alibaba Cloud, fine-tuned for chat completions
Qwen 2: A 0.5-billion-parameter language model from Alibaba Cloud, fine-tuned for chat completions
SVFR: A Unified Framework for Generalized Video Face Restoration
Convert speech in audio to text w/ `tiny`, `small`, `base`, and `large-v3` models
Powerful text-to-video model that generates high-quality videos of up to 6 seconds at 15 FPS and 720p resolution from a simple text prompt
🗣️ TalkNet-ASD: Detect who is speaking in a video
SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
Logit Warping via Biases for Google's FLAN-T5-small
A "Hello World" model for me to get to grips with `cog` and Replicate
SAM 2: Segment Anything v2 (for Images + Videos)
Easily create video datasets with auto-captioning for Hunyuan-Video LoRA finetuning
Remove background from images using BRIA-RMBG-2.0
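
Most of these models are hosted on Replicate, so they can be invoked programmatically. Below is a minimal sketch using Replicate's Python client against a background-removal model like the one listed above; the `owner/bria-rmbg-2.0` slug and the `image` input field are placeholder assumptions, not confirmed identifiers — use the slug and input schema shown on the model's page.

```python
# Minimal sketch: calling a hosted model via Replicate's Python client.
# Assumes REPLICATE_API_TOKEN is set in the environment.
# "owner/bria-rmbg-2.0" and the "image" input name are hypothetical placeholders;
# substitute the actual slug and inputs from the model's Replicate page.
import replicate

output = replicate.run(
    "owner/bria-rmbg-2.0",
    input={"image": "https://example.com/portrait.jpg"},
)
print(output)  # typically a URL (or list of URLs) pointing to the generated file(s)
```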