BAAI / AltDiffusion-m9

text-to-image

Model Details of AltDiffusion-m9

AltDiffusion

| Name | Task | Language(s) | Model | Github |
|------|------|-------------|-------|--------|
| AltDiffusion-m9 | Multimodal | Multilingual | Stable Diffusion | FlagAI |

Gradio

We support a Gradio Web UI to run AltDiffusion-m9: Open In Spaces

Model Information

We used AltCLIP-m9 and trained a multilingual Diffusion model based on Stable Diffusion, with training data from the WuDao dataset and LAION.

Our model performs well at multilingual alignment and is the strongest open-source multilingual version available today. It retains most of the capabilities of the original Stable Diffusion and, in some cases, even outperforms the original model.

The AltDiffusion-m9 model is backed by a multilingual CLIP model named AltCLIP-m9, which is also accessible in FlagAI. You can read this tutorial for more information.
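For reference, here is a minimal sketch of querying the underlying multilingual encoder directly from Python. It assumes the BAAI/AltCLIP-m9 checkpoint loads with the AltCLIP classes shipped in recent transformers releases; if it does not, follow the FlagAI tutorial linked above instead.

import requests
import torch
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

# Assumption: the BAAI/AltCLIP-m9 checkpoint is compatible with transformers' AltCLIP classes.
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP-m9")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP-m9")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["一只猫的照片", "a photo of a dog"]  # prompts in different languages

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher probability = better image-text match, regardless of prompt language
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)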

Citation

We have released a report on AltCLIP-m9 with more details. If you find this work helpful, please consider citing:

@article{https://doi.org/10.48550/arxiv.2211.06679,
  doi = {10.48550/ARXIV.2211.06679},
  url = {https://arxiv.org/abs/2211.06679},
  author = {Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Model Weights

The following weights are downloaded automatically from Hugging Face the first time the AltDiffusion-m9 model is run:

| Model name | Size | Description |
|------------|------|-------------|
| StableDiffusionSafetyChecker | 1.13G | Safety checker for images |
| AltDiffusion-m9 | 8.0G | Supports English (En), Chinese (Zh), Spanish (Es), French (Fr), Russian (Ru), Japanese (Ja), Korean (Ko), Arabic (Ar), and Italian (It) |
| AltCLIP-m9 | 3.22G | Supports English (En), Chinese (Zh), Spanish (Es), French (Fr), Russian (Ru), Japanese (Ja), Korean (Ko), Arabic (Ar), and Italian (It) |
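If you prefer to fetch the weights ahead of time rather than relying on the automatic download (for example on a machine with restricted network access at run time), here is a minimal sketch using huggingface_hub:

# Pre-download the AltDiffusion-m9 repository into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="BAAI/AltDiffusion-m9")
print("weights cached at:", local_path)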

Example

🧨Diffusers Example

AltDiffusion-m9 has been added to 🧨Diffusers!

You can run our diffusers example on Colab here.

You can see the documentation page here.

The following example uses the fast DPM scheduler to generate an image in roughly 2 seconds on a V100.

First, install the diffusers main branch and some dependencies:

pip install git+https://github.com/huggingface/diffusers.git torch transformers accelerate sentencepiece

Then you can run the following example:

from diffusers import AltDiffusionPipeline, DPMSolverMultistepScheduler
import torch

# Load the AltDiffusion-m9 pipeline in half precision
pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")

# Swap in the fast DPM-Solver multistep scheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图"
# or in English:
# prompt = "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap"

# 25 DPM-Solver steps are enough for a good sample
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("./alt.png")


Transformers Example
import torch
import torch.nn as nn
from typing import Optional

from transformers import BertPreTrainedModel, XLMRobertaModel
from transformers.models.xlm_roberta.tokenization_xlm_roberta import XLMRobertaTokenizer
from transformers.models.xlm_roberta.configuration_xlm_roberta import XLMRobertaConfig
from diffusers import StableDiffusionPipeline


class RobertaSeriesConfig(XLMRobertaConfig):
    def __init__(self, pad_token_id=1, bos_token_id=0, eos_token_id=2, project_dim=768, pooler_fn='cls', learn_encoder=False, **kwargs):
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
        self.project_dim = project_dim
        self.pooler_fn = pooler_fn


class RobertaSeriesModelWithTransformation(BertPreTrainedModel):
    # XLM-RoBERTa text encoder with a linear projection into the embedding space expected by the diffusion UNet
    _keys_to_ignore_on_load_unexpected = [r"pooler"]
    _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
    base_model_prefix = 'roberta'
    config_class = XLMRobertaConfig

    def __init__(self, config):
        super().__init__(config)
        self.roberta = XLMRobertaModel(config)
        self.transformation = nn.Linear(config.hidden_size, config.project_dim)
        self.post_init()

    def set_tokenizer(self, tokenizer):
        self.tokenizer = tokenizer

    def forward(self, input_ids: Optional[torch.Tensor] = None):
        # Build the attention mask from the tokenizer's pad token id
        attention_mask = (input_ids != self.tokenizer.pad_token_id).to(torch.int64)
        outputs = self.base_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
        )
        # Project the token embeddings and return them as the text conditioning
        projection_state = self.transformation(outputs.last_hidden_state)
        return (projection_state,)


model_path_encoder = "BAAI/RobertaSeriesModelWithTransformation"
model_path_diffusion = "BAAI/AltDiffusion-m9"
device = "cuda"

tokenizer = XLMRobertaTokenizer.from_pretrained(model_path_encoder, use_auth_token=True)
tokenizer.model_max_length = 77

text_encoder = RobertaSeriesModelWithTransformation.from_pretrained(model_path_encoder, use_auth_token=True)
text_encoder.set_tokenizer(tokenizer)
print("text encoder loaded")

# Replace the default CLIP text encoder with the multilingual encoder defined above
pipe = StableDiffusionPipeline.from_pretrained(model_path_diffusion,
                                               tokenizer=tokenizer,
                                               text_encoder=text_encoder,
                                               use_auth_token=True,
                                               )
print("diffusion pipeline loaded")
pipe = pipe.to(device)

prompt = "Thirty years old lee evans as a sad 19th century postman. detailed, soft focus, candle light, interesting lights, realistic, oil canvas, character concept art by munkácsy mihály, csók istván, john everett millais, henry meynell rheam, and da vinci"
with torch.no_grad():
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("3.png")

You can adjust the settings by changing the parameters of the predict_generate_images function in FlagAI. The available parameters are listed below, followed by a usage sketch:

| Parameter | Type | Description |
|-----------|------|-------------|
| prompt | str | The prompt text |
| out_path | str | The output path for saving images |
| n_samples | int | Number of images to generate |
| skip_grid | bool | If True, the step that stitches all images into a single grid image is skipped |
| ddim_step | int | Number of steps of the DDIM sampler |
| plms | bool | If True, the PLMS sampler is used instead of the DDIM sampler |
| scale | float | Guidance scale: how strongly the prompt influences the generated images; larger values mean stronger influence |
| H | int | Height of the image |
| W | int | Width of the image |
| C | int | Number of channels of the generated images |
| seed | int | Random seed |
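For orientation, here is a sketch of how predict_generate_images is typically reached through FlagAI's loader and predictor. The AutoLoader arguments (task_name, model_name) follow the pattern of the FlagAI examples and are assumptions to verify against your installed FlagAI version; the keyword arguments mirror the table above.

import torch
from flagai.auto_model.auto_loader import AutoLoader
from flagai.model.predictor.predictor import Predictor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumption: these loader arguments match the FlagAI example pattern for AltDiffusion-m9.
loader = AutoLoader(task_name="text2img", model_name="AltDiffusion-m9")
model = loader.get_model()
model.eval()
model.to(device)
tokenizer = loader.get_tokenizer()

predictor = Predictor(model, tokenizer)
# Keyword arguments correspond to the parameter table above.
predictor.predict_generate_images(
    "一只带着帽子的小狗",  # "a puppy wearing a hat"
    n_samples=4,
    ddim_step=50,
    scale=7.5,
    H=512,
    W=512,
    seed=1234,
)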

Note: model inference requires a GPU with at least 10 GB of memory.
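If your GPU is close to that limit, the diffusers pipeline from the example above can usually be made to fit by combining half precision with attention slicing; a minimal sketch:

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
# Compute attention in slices instead of all at once to lower peak memory usage
pipe.enable_attention_slicing()

image = pipe("dark elf princess, highly detailed, digital painting, sharp focus, illustration").images[0]
image.save("./alt_low_mem.png")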

More Results

Multilingual examples

The same prompt generates different faces in different languages!

[image]

Chinese and English alignment ability

prompt: dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap

Generated results from the English prompt:

[image]

prompt: 黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图

Generated results from the Chinese prompt:

[image]

Performance on Chinese prompts

prompt: 带墨镜的男孩肖像,充满细节,8K高清 (portrait of a boy wearing sunglasses, full of detail, 8K HD)

[image]

prompt: 带墨镜的中国男孩肖像,充满细节,8K高清 (portrait of a Chinese boy wearing sunglasses, full of detail, 8K HD)

[image]

Ability to generate long images

prompt: 一只带着帽子的小狗 (a puppy wearing a hat)

Original Stable Diffusion:

[image]

Ours:

[image]

Note: The long-image generation technology here is provided by RightBrain AI (右脑科技).

Number of Model Parameters

| Module Name | Number of Parameters |
|-------------|----------------------|
| AutoEncoder | 83.7M |
| Unet | 865M |
| AltCLIP-m9 TextEncoder | 859M |
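These figures can be sanity-checked against the diffusers pipeline from the example above; a minimal sketch (vae, unet, and text_encoder are the standard diffusers pipeline components):

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16, revision="fp16")

def count_params(module: torch.nn.Module) -> str:
    # Report the total number of parameters in millions
    return f"{sum(p.numel() for p in module.parameters()) / 1e6:.1f}M"

print("AutoEncoder :", count_params(pipe.vae))
print("Unet        :", count_params(pipe.unet))
print("TextEncoder :", count_params(pipe.text_encoder))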

Citation

Please cite our paper if you find it helpful :)

@misc{ye2023altdiffusion,
      title={AltDiffusion: A Multilingual Text-to-Image Diffusion Model}, 
      author={Fulong Ye and Guang Liu and Xinya Wu and Ledell Wu},
      year={2023},
      eprint={2308.09991},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

The model is licensed under the CreativeML Open RAIL-M license. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any law, causes harm to a person, disseminates personal information intended to cause harm, spreads misinformation, or targets vulnerable groups. You can modify and use the model for commercial purposes, but a copy of the same use restrictions must be included. For the full list of restrictions, please read the license.
