The A.I Era [ft. Ajith Kumar] ~ தமிழ்
[All the backgrounds used in the video were generated by MidJourney]
Chapters
0:00 - Intro
02:10 - Chatbots
07:21 - TXT2IMG AIs
14:24 - TXT2VID AIs
18:27 - Image/Video Editing
25:37 - Audio Editing & Voice Cloning
37:45 - AI Cloning
41:31 - AI Tools I Use
47:18 - More Random AI Tools
48:28 - Conclusion
~ALL THE LINKS~
The Eye Contact effect used in the intro comes from NVIDIA Maxine (https://developer.nvidia.com/maxine)
➽ Chatbots
-ChatGPT [https://chat.openai.com]
-GPTZero [https://gptzero.me]
-Conch [https://getconch.ai]
➽ TXT2IMG
-DALL.E 2 [https://labs.openai.com]
-Playground A.I [https://playgroundai.com]
-MidJourney [https://midjourney.com]
-StableDiffusion [https://stablediffusionweb.com]
SD WebUI [The version I used - www.youtube.com/watch?v=vg8-NSbaWZI]
➽ TXT2VIDEO
-ImagenVideo [https://imagen.research.google/video]
-Make-A-Video [https://ai.facebook.com/blog/generative-ai-text-to-video]
-Kaiber [https://kaiber.ai]
➽ TextTo3D [https://shunsukesaito.github.io]
➽ A.I Tools for Image/Video Editing
-Palette [https://palette.fm]
-Nvidia Canvas [https://nvidia.com/en-us/studio/canvas]
-MagicStudio [https://magicstudio.com]
-Runwayml [https://runwayml.com]
-Clipdrop (relight feature) [https://clipdrop.co]
-Cut through the silent parts of an audio track [https://autocut.fr]
-Deep translation with A.I [https://useblanc.com]
➽ Audio Editing & Voice Cloning
-NVIDIA Broadcast [https://nvidia.com/en-us/geforce/broadcasting/broadcast-app]
-Adobe Enhancer [https://podcast.adobe.com/enhance]
-Riffusion [https://riffusion.com]
-Beatoven [https://beatoven.ai]
-SV2TTS Toolbox [https://github.com/CorentinJ/Real-Time-Voice-Cloning]
-FakeYou [https://fakeyou.com]
-11Labs [https://elevenlabs.io]
-Voice.ai [https://voice.ai]
➽ A.I Cloning
-Synthesia [https://synthesia.com]
-D-ID [https://d-id.com]
-Movio [https://movio.la]
-BHuman [https://app.bhuman.ai]
-DeepFaceLab [https://github.com/iperov/DeepFaceLab]
➽ A.I Tools I Use
-Remini [https://play.google.com/store/apps/details?id=com.bigwinepot.nwdn.international]
-DALL.E 2 [https://labs.openai.com]
-VocalRemover [https://vocalremover.org]
-Topaz Video Enhance [https://topazlabs.com/topaz-video-ai]
-Flowframes [https://nmkd.itch.io/flowframes]
-EBSynth [https://ebsynth.com]
➽ More A.I Tools
[https://beta.tome.app]
[https://sketch.metademolab.com]
[https://pictory.ai]
[https://donotpay.com]
[https://ranked.ai]
[https://trendingsounds.io]
[https://browse.ai]
BEFORE ALL OF THIS, CLICK THIS LINK. INSTALL THIS FIRST BEFORE USING ANY A.I TOOL ➽ https://bit.ly/3ZYsVak
[Compilation] 9 AI Video Generation Tools Explained for Beginners, from Setup to Example Outputs
This video introduces nine tools and services for creating video with AI. It is a compilation of the tools covered in earlier videos: Hey Gen, Creative Reality™ Studio, mov2mov, Ebsynth, AnimateDiff, Stable Video Diffusion, Runway, Pika, and SadTalker.
▼ Video generation AIs covered in this video
1. Hey Gen (with invite code): https://app.heygen.com/guest/templates?cid=78b240f0
2. Creative Reality™ Studio: https://www.d-id.com/creative-reality-studio/
3. mov2mov: https://github.com/Scholar01/sd-webui-mov2mov.git
A Stable Diffusion extension that creates video from video.
Motion Elements (stock footage): https://www.motionelements.com/ja/ (royalty-free clips from this site were used)
References used when making the mov2mov segment:
[Stable Diffusion][mov2mov] Making video from video & ControlNet: https://www.youtube.com/watch?v=08VuMtHw3Ts
How to make an "AI dance video" with Stable Diffusion, a simple guide to Mov2Mov: https://www.youtube.com/watch?v=m2wQsXC8AXM&t
4. Ebsynth: https://ebsynth.com/
s9roll7/ebsynth_utility: https://github.com/s9roll7/ebsynth_utility
[Beginners] How to download and install FFmpeg on Windows/Mac/Linux: https://jp.videoproc.com/edit-convert/how-to-download-and-install-ffmpeg.htm
Making an AI V2V rotoscope video with EBsynth: https://www.youtube.com/watch?v=F0Vg5jdYys0&t
5. AnimateDiff
AnimateDiff for Stable Diffusion Webui: https://github.com/continue-revolution/sd-webui-animatediff
Motion modules (Hugging Face): https://huggingface.co/guoyww/animatediff/tree/main
A1111 extension of AnimateDiff is available: https://www.reddit.com/r/StableDiffusion/comments/152n2cr/a1111_extension_of_animatediff_is_available/
Easy Prompt Anime: https://github.com/Zuntan03/EasyPromptAnime
Note: in the on-screen footage, Setup-EasyPromptAnime.bat was downloaded from the "How to use" section rather than the "Setup" section, but the .bat file itself is the same, so this is not a problem.
AnimateDiff prompt travel: https://github.com/s9roll7/animatediff-cli-prompt-travel
6. Stable Video Diffusion
■ Installing ComfyUI
1. Download ComfyUI (extract with 7-Zip): https://github.com/comfyanonymous/ComfyUI/releases
2. Place a model file in \ComfyUI\models\checkpoints
3. Run run_nvidia_gpu.bat to launch
■ Installing Stable Video Diffusion
1. Install ComfyUI-Manager: open a command prompt in \ComfyUI\custom_nodes and run git clone https://github.com/ltdrdata/ComfyUI-Manager.git
2. Run run_nvidia_gpu.bat to launch
3. From the Manager menu, choose Install Custom Nodes
4. Search for "Stable Video Diffusion" and install it
5. Restart (close the console and the browser tab, then run run_nvidia_gpu.bat again)
6. The models are not downloaded automatically, so download them manually: svd.safetensors and svd_image_decoder.safetensors from https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/tree/main, and svd_xt.safetensors and svd_xt_image_decoder.safetensors from https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/tree/main
7. Place the SVD models in \ComfyUI\models\svd and \ComfyUI\models\checkpoints
8. Run run_nvidia_gpu.bat to launch
9. Download a workflow: https://comfyanonymous.github.io/ComfyUI_examples/video/
Introducing Stable Video Diffusion: https://stability.ai/news/stable-video-diffusion-open-ai-video-model
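For quick reference, here is a minimal command-prompt sketch of the ComfyUI-Manager and SVD model steps above. It assumes a Windows ComfyUI install extracted to C:\ComfyUI and the checkpoints already downloaded to your Downloads folder; both paths and the folder layout are assumptions, so adjust them to wherever you unpacked the release.
:: Step 1 of the SVD install: clone ComfyUI-Manager into custom_nodes (path is an assumption).
cd C:\ComfyUI\custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
:: Steps 6-7: copy the manually downloaded SVD checkpoints into both model folders.
if not exist C:\ComfyUI\models\svd mkdir C:\ComfyUI\models\svd
copy %USERPROFILE%\Downloads\svd*.safetensors C:\ComfyUI\models\svd\
copy %USERPROFILE%\Downloads\svd*.safetensors C:\ComfyUI\models\checkpoints\
:: Relaunch ComfyUI, install the "Stable Video Diffusion" custom node from the Manager menu,
:: then load an example workflow from https://comfyanonymous.github.io/ComfyUI_examples/video/
:: (run_nvidia_gpu.bat may sit in a different folder depending on the extracted release layout).
cd C:\ComfyUI
run_nvidia_gpu.bat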
[AI Anime] Getting started with AnimateDiff in ComfyUI: https://note.com/bakushu/n/n1024f94c6c73
7. Runway: https://app.runwayml.com/
8. Pika: https://pika.art/
9. SadTalker
How to install SadTalker: in the WebUI, go to Extensions > Install from URL and install from https://github.com/OpenTalker/SadTalker.git
Model file downloads:
GitHub: https://github.com/OpenTalker/SadTalker/releases
Google Drive: https://drive.google.com/file/d/1gwWh45pF7aelNP_P78uDJL8Sycep-K7j/view
Add the following to webui_user.bat:
set COMMANDLINE_ARGS=--no-gradio-queue --disable-safe-unpickle
set SADTALKER_CHECKPOINTS=[path to the model files]
BD's article was used as a reference: https://br-d.fanbox.cc/posts/5685086
Merachakka (Merebu), The Complete Yusha Yoshihiko Guide - TV Tokyo: https://sgttx-sp.mobile.tv-tokyo.co.jp/static/html/bangumi/yoshihiko-p/jumon/20.php
▼ Voice: VOICEPEAK 6-Narrator Set (https://www.ah-soft.com/voice/6nare/)
▼ Music used: Soon no Nai Sekai https://www.youtube.com/channel/UC2KNOBqzElEs8TA7SR2Hm2w ("Gradient", "Hare no Hi no Watashi", "Nagagutsu to Raincoat", "Owari no Nai Monogatari", "Natsu no Mahou", "Peace Men")
▼ Towya's X (Twitter), with posts about the AI illustrations and AI techniques featured on this channel: https://twitter.com/towya_aillust
Prompts are published on Chichipui (an AI illustration posting site): https://www.chichi-pui.com/users/user_txz5bKfZZx/
[AI Video] A Revolutionary Breakthrough! The Most Complete Flicker-Free AI Video Tutorial, Real Productivity: Stable Diffusion + EbSynth + ControlNet
AI animation has seen a revolutionary breakthrough, one that turns AI animation from an entertainment toy into a real productivity tool: flicker-free video made with the AI tool EbSynth. Like, follow, and favorite to grab the links in this description.
EbSynth official site: https://ebsynth.com/
FFmpeg: https://ffmpeg.org/download.html
Transparent-background tool download: https://pypi.org/project/transparent-background/
EbSynth Utility extension download: https://github.com/s9roll7/ebsynth_utility
Model: https://civitai.com/models/7240/meinamix
------
Shakker AI, an advanced AI community where Stable Diffusion text-to-image can be used online for free: https://www.shakker.ai/ (supports the latest SD 3 models; generated images can be used commercially; tokens refresh daily)
Popular videos on this channel:
[Making smooth short videos] https://www.youtube.com/watch?v=kV7IK3MXeiw
[Lossless image upscaling with StableSR] https://www.youtube.com/watch?time_continue=1&v=MDwGHDm-4t0
[Controlling the CFG value] https://www.youtube.com/watch?v=tefhQe7s0v4
[Changing a model's outfit with AI] https://www.youtube.com/watch?v=l6F5rsIMmAU
[Using blended prompts in SD] https://www.youtube.com/watch?v=a4dz7FOLdyg
[SD installation tutorial] https://www.youtube.com/watch?v=kL1e-URJNoU
01:00 Preparation
04:24 Video production
08:47 Showcase
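The prerequisites linked above can be pulled in from the command line. This is only a minimal sketch, assuming git and Python are installed and the WebUI lives in a standard stable-diffusion-webui folder (that path is an assumption; adjust it to your install).
:: Transparent-background package, from the PyPI link above.
pip install transparent-background
:: Clone the EbSynth Utility extension into the WebUI's extensions folder (path is an assumption).
cd C:\stable-diffusion-webui\extensions
git clone https://github.com/s9roll7/ebsynth_utility
:: FFmpeg is a separate download (https://ffmpeg.org/download.html) and must be on PATH;
:: EbSynth itself is installed from https://ebsynth.com/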