T2I-Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.
This checkpoint provides conditioning on ZoeDepth depth estimation for the Stable Diffusion 1.5 checkpoint.
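As a toy sketch of the conditioning mechanism (shapes and values below are illustrative, not taken from the real SD 1.5 UNet): the adapter extracts multi-scale feature maps from the conditioning image, and these are added element-wise to the denoising UNet's intermediate features at matching resolutions.

```python
import numpy as np

# Illustrative only: channel counts and spatial sizes are made-up stand-ins.
unet_features = [np.zeros((1, 320, 64, 64)), np.zeros((1, 640, 32, 32))]
adapter_features = [np.ones((1, 320, 64, 64)), np.ones((1, 640, 32, 32))]

# Adapter features are added element-wise to the UNet's features per scale,
# steering generation without retraining the base diffusion model.
conditioned = [u + a for u, a in zip(unet_features, adapter_features)]
print([f.shape for f in conditioned])
```

Because the base model's weights are untouched, one adapter checkpoint per conditioning type (depth, sketch, pose, ...) can be swapped in against the same Stable Diffusion checkpoint.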
Model Details
Developed by: T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
Model type: Diffusion-based text-to-image generation model
from PIL import Image
import torch
import numpy as np
import matplotlib
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline


def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None,
             background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
    """Converts a depth map to a color image.

    Args:
        value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W)
            or (1, 1, H, W). All singular dimensions are squeezed.
        vmin (float, optional): vmin-valued entries are mapped to the start color of cmap.
            If None, the 2nd percentile of valid values is used. Defaults to None.
        vmax (float, optional): vmax-valued entries are mapped to the end color of cmap.
            If None, the 85th percentile of valid values is used. Defaults to None.
        cmap (str, optional): matplotlib colormap to use. Defaults to 'gray_r'.
        invalid_val (int, optional): Value of invalid pixels that should be colored with
            background_color. Defaults to -99.
        invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions.
            Defaults to None.
        background_color (tuple[int], optional): 4-tuple RGBA color for invalid pixels.
            Defaults to (128, 128, 128, 255).
        gamma_corrected (bool, optional): Apply gamma correction to the colored image.
            Defaults to False.
        value_transform (Callable, optional): Transform applied to valid pixels before
            coloring. Defaults to None.

    Returns:
        numpy.ndarray, dtype uint8: Colored depth map. Shape: (H, W, 4)
    """
    if isinstance(value, torch.Tensor):
        value = value.detach().cpu().numpy()

    value = value.squeeze()
    if invalid_mask is None:
        invalid_mask = value == invalid_val
    mask = np.logical_not(invalid_mask)

    # normalize
    vmin = np.percentile(value[mask], 2) if vmin is None else vmin
    vmax = np.percentile(value[mask], 85) if vmax is None else vmax
    if vmin != vmax:
        value = (value - vmin) / (vmax - vmin)  # vmin..vmax
    else:
        # Avoid 0-division
        value = value * 0.

    # grey out the invalid values
    value[invalid_mask] = np.nan
    cmapper = matplotlib.cm.get_cmap(cmap)
    if value_transform:
        value = value_transform(value)
    value = cmapper(value, bytes=True)  # (H, W, 4)

    img = value[...]
    img[invalid_mask] = background_color

    if gamma_corrected:
        # gamma correction
        img = img / 255
        img = np.power(img, 2.2)
        img = img * 255
        img = img.astype(np.uint8)
    return img
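The key step in colorize() is percentile-based normalization (2nd to 85th percentile of the valid pixels) rather than a plain min/max rescale, which keeps a handful of extreme depth values from washing out the rest of the map. A minimal NumPy-only sketch of that normalization, using a synthetic stand-in for a ZoeDepth prediction:

```python
import numpy as np

# Synthetic stand-in for a depth prediction: a 4x4 metric depth map
depth = np.linspace(1.0, 10.0, 16).reshape(4, 4)
depth[0, 0] = -99.0  # one invalid pixel, matching colorize()'s invalid_val default

invalid = depth == -99.0
vmin = np.percentile(depth[~invalid], 2)
vmax = np.percentile(depth[~invalid], 85)
norm = np.clip((depth - vmin) / (vmax - vmin), 0.0, 1.0)

# 'gray_r' inverts intensity, so emulate that: near pixels bright, far pixels dark
gray = ((1.0 - norm) * 255).astype(np.uint8)
gray[invalid] = 128  # grey out invalid pixels, as colorize() does

print(gray.shape, gray.dtype)
```

The 4-channel RGBA output and gamma correction in colorize() are cosmetic on top of this; the normalized map above is what carries the depth signal to the adapter.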
# Load ZoeDepth via torch.hub and estimate depth for the input image
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
img = Image.open('./images/zoedepth_in.png')
out = model.infer_pil(img)
zoedepth_image = Image.fromarray(colorize(out)).convert('RGB')
zoedepth_image.save('images/zoedepth.png')

# Load the adapter and the Stable Diffusion 1.5 base pipeline
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16"
)
pipe.to('cuda')

# Generate an image conditioned on the colorized depth map
zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0]
zoedepth_image_out.save('images/zoedepth_out.png')
Runs of TencentARC t2iadapter_zoedepth_sd15v1 on huggingface.co
Total runs: 2.0K
24-hour runs: 0
3-day runs: -78
7-day runs: -64
30-day runs: 210