TencentARC / t2iadapter_zoedepth_sd15v1

huggingface.co
Total runs: 2.0K
24-hour runs: 0
7-day runs: -64
30-day runs: 210
Model's Last Updated: July 31 2023
image-to-image

Introduction of t2iadapter_zoedepth_sd15v1

Model Details of t2iadapter_zoedepth_sd15v1

T2I Adapter - ZoeDepth

T2I Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.

This checkpoint provides conditioning on ZoeDepth depth estimation for the Stable Diffusion 1.5 checkpoint.
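A minimal sketch of the pattern, using the same diffusers classes that appear in the Example section below: the adapter is loaded on its own and then passed to the Stable Diffusion 1.5 pipeline, which accepts a depth image as additional conditioning.

import torch
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# Load the ZoeDepth adapter and attach it to the Stable Diffusion 1.5 base checkpoint.
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# depth_image would be an RGB rendering of a ZoeDepth depth map (see the Example section below).
# result = pipe(prompt="motorcycle", image=depth_image).images[0]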

Model Details
  • Developed by: T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models

  • Model type: Diffusion-based text-to-image generation model

  • Language(s): English

  • License: Apache 2.0

  • Resources for more information: GitHub Repository, Paper.

  • Cite as:

    @misc{mou2023t2iadapter,
      title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
      author={Chong Mou and Xintao Wang and Liangbin Xie and Yanze Wu and Jian Zhang and Zhongang Qi and Ying Shan and Xiaohu Qie},
      year={2023},
      eprint={2302.08453},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

Checkpoints
Model Name | Control Image Overview | Control Image Example
TencentARC/t2iadapter_color_sd14v1 | Trained with spatial color palette | An image with an 8x8 color palette.
TencentARC/t2iadapter_canny_sd14v1 | Trained with Canny edge detection | A monochrome image with white edges on a black background.
TencentARC/t2iadapter_sketch_sd14v1 | Trained with PidiNet edge detection | A hand-drawn monochrome image with white outlines on a black background.
TencentARC/t2iadapter_depth_sd14v1 | Trained with MiDaS depth estimation | A grayscale image with black representing deep areas and white representing shallow areas.
TencentARC/t2iadapter_openpose_sd14v1 | Trained with OpenPose bone image | An OpenPose bone image.
TencentARC/t2iadapter_keypose_sd14v1 | Trained with mmpose skeleton image | An mmpose skeleton image.
TencentARC/t2iadapter_seg_sd14v1 | Trained with semantic segmentation | A custom segmentation protocol image.
TencentARC/t2iadapter_canny_sd15v2
TencentARC/t2iadapter_depth_sd15v2
TencentARC/t2iadapter_sketch_sd15v2
TencentARC/t2iadapter_zoedepth_sd15v1
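All of the checkpoints above are loaded the same way; only the adapter repository id and the matching base model change (sd14 adapters pair with a Stable Diffusion 1.4 base, sd15 adapters with a 1.5 base). A minimal sketch, assuming CompVis/stable-diffusion-v1-4 as the Stable Diffusion 1.4 base checkpoint:

import torch
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# Swap the adapter by changing the repo id; CompVis/stable-diffusion-v1-4 is assumed
# here as the matching Stable Diffusion 1.4 base for the *_sd14v1 adapters.
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
)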
Example
  1. Dependencies
pip install diffusers transformers matplotlib
  2. Run code:
from PIL import Image
import torch
import numpy as np
import matplotlib.cm
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
    """Converts a depth map to a color image.

    Args:
        value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed
        vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None.
        vmax (float, optional):  vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None.
        cmap (str, optional): matplotlib colormap to use. Defaults to 'gray_r'.
        invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99.
        invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None.
        background_color (tuple[int], optional): 4-tuple RGBA color to give to invalid pixels. Defaults to (128, 128, 128, 255).
        gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False.
        value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None.

    Returns:
        numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4)
    """
    if isinstance(value, torch.Tensor):
        value = value.detach().cpu().numpy()

    value = value.squeeze()
    if invalid_mask is None:
        invalid_mask = value == invalid_val
    mask = np.logical_not(invalid_mask)

    # normalize
    vmin = np.percentile(value[mask],2) if vmin is None else vmin
    vmax = np.percentile(value[mask],85) if vmax is None else vmax
    if vmin != vmax:
        value = (value - vmin) / (vmax - vmin)  # vmin..vmax
    else:
        # Avoid 0-division
        value = value * 0.

    # grey out the invalid values

    value[invalid_mask] = np.nan
    try:
        cmapper = matplotlib.colormaps[cmap]  # matplotlib >= 3.6
    except AttributeError:
        cmapper = matplotlib.cm.get_cmap(cmap)  # older matplotlib releases
    if value_transform:
        value = value_transform(value)
        # value = value / value.max()
    value = cmapper(value, bytes=True)  # (nxmx4)

    img = value[...]
    img[invalid_mask] = background_color

    if gamma_corrected:
        img = img / 255
        img = np.power(img, 2.2)
        img = img * 255
        img = img.astype(np.uint8)
    return img

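# Estimate depth with the ZoeDepth ZoeD_N model loaded via torch.hub.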
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)

img = Image.open('./images/zoedepth_in.png')

out = model.infer_pil(img)

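# Colorize the raw depth map into an RGB image the adapter pipeline can consume.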
zoedepth_image = Image.fromarray(colorize(out)).convert('RGB')

zoedepth_image.save('images/zoedepth.png')

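# Load the ZoeDepth T2I-Adapter and attach it to the Stable Diffusion 1.5 pipeline.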
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16"
)

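# Move the pipeline to the GPU and generate an image conditioned on the colorized depth map.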
pipe.to('cuda')
zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0]

zoedepth_image_out.save('images/zoedepth_out.png')
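Note that torch.hub fetches the ZoeDepth code from the isl-org/ZoeDepth repository, which may need additional packages (for example timm) beyond the pip line above.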

Images: zoedepth_in (input photo), zoedepth (colorized depth map), zoedepth_out (generated image).

Runs of TencentARC t2iadapter_zoedepth_sd15v1 on huggingface.co

Total runs: 2.0K
24-hour runs: 0
3-day runs: -78
7-day runs: -64
30-day runs: 210

More Information About t2iadapter_zoedepth_sd15v1 huggingface.co Model

For more on the t2iadapter_zoedepth_sd15v1 license, visit:

https://choosealicense.com/licenses/apache-2.0

t2iadapter_zoedepth_sd15v1 huggingface.co

t2iadapter_zoedepth_sd15v1 is an AI model hosted on huggingface.co that can be used directly from the TencentARC t2iadapter_zoedepth_sd15v1 model page. huggingface.co supports a free trial of the model and also offers paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.

t2iadapter_zoedepth_sd15v1 huggingface.co Url

https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1

TencentARC t2iadapter_zoedepth_sd15v1 online free

huggingface.co is an online trial and API platform that integrates t2iadapter_zoedepth_sd15v1, including its API services, and provides a free online trial of the model; you can try t2iadapter_zoedepth_sd15v1 online for free via the link below.

TencentARC t2iadapter_zoedepth_sd15v1 free online trial URL on huggingface.co:

https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1

t2iadapter_zoedepth_sd15v1 install

t2iadapter_zoedepth_sd15v1 is an open-source model whose code is available on GitHub, where any user can find and install it. huggingface.co also hosts the model, so users can try and debug t2iadapter_zoedepth_sd15v1 directly on huggingface.co, including free use through the API.

t2iadapter_zoedepth_sd15v1 installation URL on huggingface.co:

https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1

Url of t2iadapter_zoedepth_sd15v1

t2iadapter_zoedepth_sd15v1 huggingface.co Url

https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1

Provider of t2iadapter_zoedepth_sd15v1 huggingface.co

TencentARC
