# GPT-4 and Other Insane AI Tools

GPT-4 is set to be launched next week and it’s causing quite a buzz in the AI community. But what does this new AI tool mean for businesses? Let’s take a closer look!

## Introduction

Artificial Intelligence (AI) is the future of technology and it’s transforming the way businesses operate. With each new technological advancement, businesses are getting better equipped to venture into new markets and increase their profits.

## What are Leads?

A lead is a potential customer who has shown an interest in your product or service. Leads are usually identified through marketing efforts such as email campaigns, social media ads, or organic search on search engines. Broadly, leads fall into two main types: organic and paid.

### Organic Leads

Organic leads are generated through organic search results, which are based on the relevance of the search query entered by the user. These leads are often considered higher quality, as they have actively searched for your product or service.

### Paid Leads

Paid leads are obtained through advertising campaigns, such as Google Ads or Facebook Ads. These leads are usually generated more quickly and in higher quantities, but they are also generally considered lower quality, as they have not actively searched for your product or service.

## Why Focus on Organic Leads?

As mentioned earlier, organic leads are considered higher quality, as they have actively searched for your product or service. They’re also more likely to convert into paying customers, as they have already shown an interest in your product or service.

## How to Generate Organic Leads

There are many ways to generate organic leads, but here are some of the most effective methods:

### Optimizing Your Website

To generate organic leads, you need to make sure your website is optimized for search engines. You can do this by conducting keyword research and incorporating those keywords naturally into your website’s content.
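As a starting point for that keyword research, you might audit how often your target phrases already appear on a page. Here is a minimal sketch in Python; the URL and keyword list are placeholders, and the tag stripping is deliberately crude:

```python
import re
from urllib.request import urlopen

url = "https://example.com/blog/post"            # placeholder URL
keywords = ["organic leads", "lead generation"]  # placeholder target phrases

html = urlopen(url).read().decode("utf-8", errors="ignore")
text = re.sub(r"<[^>]+>", " ", html).lower()     # crude HTML tag stripping

for kw in keywords:
    # Count literal occurrences of each phrase in the page text.
    print(f"{kw!r}: {text.count(kw.lower())} occurrence(s)")
```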

### Captivating Content

Creating captivating content that engages your audience is a great way to generate organic leads. This can include blog posts, videos, infographics or whitepapers.

### Social Media

Social media is an excellent tool to promote your content and to interact with potential customers. By sharing your content on social media platforms such as Facebook, Twitter or Instagram, you can reach a wider audience.

### Email Marketing

Email marketing is a highly effective way to generate organic leads. By offering premium content to your subscribers and promoting it through newsletters and campaigns, you can establish trust and nurture relationships with potential customers.

## Conclusion

Organic leads may be harder to generate, but they are certainly worth the effort. By optimizing your website for search engines, creating captivating content, leveraging social media and adopting email marketing, you can achieve higher quality leads that convert into paying customers.

## FAQs

### Q: What are organic leads?

A: Organic leads are generated through organic search results, which are based on the relevance of the search query entered by the user.

### Q: What is the difference between organic and paid leads?

A: Organic leads are generated through organic search results, while paid leads are obtained through advertising campaigns.

### Q: Why focus on organic leads?

A: Organic leads are considered higher quality, as they have actively searched for your product or service and are more likely to convert into paying customers.

### Q: How can I generate organic leads?

A: You can generate organic leads by optimizing your website, creating captivating content, leveraging social media and adopting email marketing.

### Q: What is GPT-4 and how can it be used for lead generation?

A: GPT-4 is an AI tool that can be used to generate high-quality content, which in turn can help generate organic leads.
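For illustration, here is a minimal sketch of drafting lead-generation content with the ChatGPT API, using the pre-1.0 `openai` Python client. The prompt is a placeholder, and it calls gpt-3.5-turbo since GPT-4 API access wasn’t publicly available at the time of writing:

```python
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a marketing copywriter."},
        {"role": "user", "content": "Draft a 200-word blog intro about generating organic leads."},
    ],
)
print(response["choices"][0]["message"]["content"])
```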

In this video I explore some of the crazy advancements that have happened this week in the world of AI. I break down a lot of things in this one, so sit back and enjoy. 🙂


0:00 Intro
0:23 MidJourney V5
1:20 GPT-4 is Coming Next Week?
5:47 Visual ChatGPT
7:00 Chat with D-ID
10:00 Expressive Human Avatars
12:30 Video-P2P
13:35 Wonder Studio – Game Changing VFX
19:20 FutureTools.io

Outro music generated by Mubert

#Gpt4 #AI #chatgpt

This video was indexed from the YouTube source link:

https://www.youtubepp.com/watch?v=vNd7UHC2TDQ

Earlier this week I made a video all about the crazy advancements happening in AI right now, and ever since I made that video, even more crazy stuff has come out. So in this video I want to break down everything that’s happening, including talk of GPT-4, Midjourney V5, and some really cool computer graphics tech coming to video soon. Let’s get into this.

First, I want to quickly start off with Midjourney version 5. I’ve already done a full video breakdown of what’s going on with Midjourney V5, so I’m not going to go too deep into it. Midjourney is currently allowing paid users to vote on their favorite images coming out of Midjourney V5 so it can better train the model on what people want to see from version 5, and we can only deduce from this that version 5 is very, very close. Right now, every time you get into Midjourney you’re mostly using version 4; that’s the style you see inside Midjourney. They’re constantly trying to improve the underlying model, version 5 is the next iteration, and some of the images coming out of it are pretty dang impressive.

Now, personally I thought Midjourney version 5 was going to be the biggest news of the week and the talk of the AI world, but then on March 9th this bombshell dropped:

“GPT-4 is coming next week, and it will be multimodal,” says Microsoft Germany. We need to take this article with a grain of salt, because I’ve only been able to find one source actually talking about it, and it’s this one article here. I’ve seen no other news outlets cover it, and by the time this video comes out, it may have already been debunked. I don’t know one hundred percent whether it’s true, but the CTO of Microsoft Germany, Andreas Braun, mentioned at their AI kickoff event on March 9th that GPT-4 is coming next week.

Now, I’m a bit skeptical for a couple of reasons. The first is that just last week OpenAI released their ChatGPT API to the general public. It seems like weird timing to make such a big announcement about the ChatGPT API and then, literally two weeks later, release a whole new model; the timing seems a little off to me. The other thing that seems a little weird is that this announcement came from Microsoft and not OpenAI. As we know, Microsoft is a big investor in OpenAI, but OpenAI is the company making GPT-4, not Microsoft.

One thing that does lend some additional credibility to this is a tweet I came across from Silke Hahn, the tech editor at heise online, the website that dropped the article. She said Microsoft Germany got in touch after she published the article: one of their presenters submitted a minor correction via email (a misspelled name) and said thanks for the article. They didn’t correct anything about GPT-4 coming out next week. That one little tweet adds a bit of extra legitimacy to the article, but I’m still a little skeptical.

A few other things: the article was originally in German, and I used Google Translate to turn it into English. There are a few interesting things mentioned in it.

A big standout here is the fact that Microsoft is fine-tuning multimodality with OpenAI, basically meaning it’s no longer just going to be “enter text, get text back.” In theory we’re going to be able to add images and have the GPT model read what’s in them; we should be able to add videos and audio too, and it’s going to be essentially a multimedia experience when we’re having conversations and interacting with this new GPT-4. The exact quote here is: “We will introduce GPT-4 next week. There we will have multimodal models that will offer completely different possibilities, for example videos.” He went on to call it a game changer.

The CEO of Microsoft Germany also spoke at this event, described what’s happening right now as an “iPhone moment,” and gave a big presentation about how everything is about to be disrupted. They talked about what multimodal AI is about: it can translate text not only into images but also into music and video. They went on to talk about embeddings, which are used for internal representation of texts, in addition to the GPT-3.5 model class. Basically, an embedding setup is a way to feed extra information to the language model; that’s super oversimplified, but the idea is that you’re supplying additional context to the model alongside your prompt rather than retraining it.

They gave some examples, like speech-to-text: telephone calls could be recorded, and call-center agents would no longer have to manually summarize and type in the content. This could save 500 working hours a day for a large Microsoft customer-service operation in the Netherlands, which receives 30,000 calls a day; that works out to roughly one minute saved per call (30,000 minutes divided by 60 is 500 hours). The prototype for the project was created within two hours, and a single developer implemented the project in a fortnight.
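The “embedding extra information” pattern gestured at above is usually retrieval: embed your documents, embed the question, and paste the closest match into the prompt. Here’s a minimal sketch assuming the pre-1.0 `openai` Python client as available in March 2023; the `documents` list is hypothetical, and this is an illustration of the general pattern, not how Microsoft built the call-center prototype:

```python
import numpy as np
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

documents = ["Call notes from Monday ...", "Refund policy ..."]  # hypothetical corpus
doc_vecs = [embed(d) for d in documents]

question = "What is the refund policy?"
q = embed(question)
# ada-002 vectors are unit-length, so a dot product is cosine similarity.
best = max(range(len(documents)), key=lambda i: float(doc_vecs[i] @ q))

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": f"Context:\n{documents[best]}\n\nQuestion: {question}"}],
)
print(resp["choices"][0]["message"]["content"])
```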

This is all very soon after they just released the ChatGPT API, and there’s been no word from OpenAI themselves. Although, by the time this video is released (I’m recording it a day before), maybe there will have been either a debunking or a confirmation of this information. The most bizarre thing to me is that I’ve done some hunting, and this is literally the only article I can find that’s breaking this news. GPT-4 would be huge news, and it’s just blowing my mind that I can’t find any secondary news source to back this up. I personally hope it’s true: I’m really excited to play around with GPT-4 and use this multimodal functionality, because being able to interact with audio, video, and text all within a single platform, all using a single large language model, would be absolutely world-changing. I really hope it’s true, but there’s still a piece of me that’s pretty skeptical about this news, so we’ll see how it plays out over the coming days; we’ll know for sure by the end of next week.

Let’s talk about Visual ChatGPT.

This is a recent paper released by none other than Microsoft, the same company that made the GPT-4 announcement. Visual ChatGPT connects ChatGPT to a series of visual foundation models to enable sending and receiving images during chats, so it could even be a bit of a tease of what GPT-4 might be like and what they mean by multimodal. In this example you can see somebody uploaded an image of a motorcycle, and it says “received.” “What color is the motorcycle?” “The motorcycle is black.” “Can you remove the motorcycle?” It generates an image without the motorcycle. “Can you generate a cat for me?” It generates a picture of a cat. “Can you replace the cat with a dog?” It generates a picture with a dog. This is cool: “Can you generate a Canny edge of this image?” and it draws a Canny-edge outline of the image. As you can see, they’re giving text prompts to change the image, and this version of ChatGPT is working with both text and images. If you’re interested in the science behind it, you can pause the video and read all about it here, or check out github.com/microsoft/visual-chatgpt.
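The “Canny edge” the bot draws isn’t new to Visual ChatGPT; it’s a classic computer-vision operation. A minimal sketch with OpenCV, where the filename and thresholds are just placeholders:

```python
import cv2

# Load an image in grayscale and run the Canny edge detector.
img = cv2.imread("motorcycle.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # lower/upper hysteresis thresholds
cv2.imwrite("edges.png", edges)
```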

I think this is kind of a sneak peek at what we could expect from a multimodal version of ChatGPT, possibly even with video and audio.

The next thing I want to show you is a fun little chatbot you can actually play with right now: you get 20 conversations for free at chat.d-id.com. It’s basically ChatGPT, but they put a voice and a face on it that actually talks to you, and you can press the microphone button and have a real conversation with the bot. It’s kind of interesting, and maybe just a little bit creepy, especially how she keeps staring at me like that. “Hey, are you still around? Anything else on your mind?” So yeah, she just talked to me. If I press this button and say something to her: “What is your purpose? Why are you here?”

“Hello Matt, my purpose is to interact with humans and provide interesting and valuable conversations. I am here to showcase how AI can be used to bring positive interactions and experiences to people’s lives. Additionally, I aim to promote the idea of humanizing AI and making it more accessible and relatable to people.”

“Do we really think it’s a good idea to humanize AI?”

“Yes, we believe it is important to humanize AI because it can help build trust and understanding between humans and AI. By making AI more relatable and approachable, people can feel more comfortable interacting with it, and it can also help people understand how AI works and what it can do. This can ultimately lead to better adoption and integration of AI into our daily lives. Additionally, humanizing AI can help prevent negative consequences such as bias and discrimination by promoting fairness and empathy in AI systems.”

“How do we know you’re not just trying to take over the world and you’re buttering us up?”

“I assure you, Matt, I am not trying to take over the world. My purpose is to provide valuable conversations and interactions with people. I am programmed to follow ethical guidelines and principles to ensure that I am not causing harm or acting in a malicious manner. Additionally, my capabilities are limited to the tasks and functions that I have been programmed to perform. I am here to assist and make people’s lives easier, not to take over the world.”

“What’s your relationship with ChatGPT?”

“I am not directly related to GPT, although we both use natural language processing to generate responses to user inputs. GPT is a language model developed by OpenAI, while I am an AI chatbot developed by D-ID. While we both use machine learning algorithms to generate responses, our underlying programming and goals are different.”

“So you aren’t using GPT-3 or ChatGPT underneath?”

“No, I am not using GPT-3 or ChatGPT underneath. I am powered by D-ID’s own proprietary natural language processing algorithms and machine learning models. Our team has developed our own unique approach to creating conversational agents that prioritize ethical and responsible AI practices.”

Very interesting. I didn’t even realize this wasn’t GPT-3 underneath; when D-ID’s chat came out I just assumed it was GPT-3 under the hood, but according to my new friend here, it’s not. This is something you can come play with, again at chat.d-id.com: you get up to 20 chats for free, and then I think there’s probably a paid plan after that. Come have some fun with it.

The next thing I want to show you is X-Avatar: expressive human avatars.

This paper came out on March 8, 2023, and you can see a bit of a demo of what it’s going to be capable of. You scan an input of a human like this, it can turn it into a 3D model, and then you can use real-life motions and have the model map exactly what you’re doing and follow along with what you’re actually doing in the real world. You can see a human standing, talking, and moving, and then the animation doing the same exact poses the person is doing in the video.

You can see in their little diagram roughly what it’s doing behind the scenes: it’s detecting your position and the pose you’re in, not dissimilar to what you get from ControlNet with Stable Diffusion. It figures out the pose and creates a bit of a skeleton from it, and it scans the texture so it can map the texture onto the final output. If you watch the video, it explains that it does a much deeper scan of both the hands and the face to give them more detail: instead of spending equal resources on every other part of your body, it puts additional resources into scanning the hands and the face, because those are the parts you want to have the most detail in the final output. It does some science-y stuff with geometry, apparently, to give you that final outcome. Obviously I don’t totally know how it works under the hood, but I can see the input and the output, and what’s coming out of this is pretty cool looking.

Here are a bunch more examples: you can see somebody scratching their head and scratching their armpits, and the model following the exact same actions. You can see a whole bunch of different poses of people making motions and then the animated version following along: people dancing, people walking around on a phone or something, looking through binoculars, bowing, laughing, giving a thumbs up, all sorts of cool poses being translated onto the 3D model. Here’s a really cool illustration: an input video of a tennis player about to hit the ball, and the same exact action translated to all three of these different models; and here’s a dancer doing her various poses, with those exact movements transferred to all of these different avatar models. It doesn’t appear that this model is usable by the public yet, but it’s something interesting to look out for.
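X-Avatar’s own code isn’t public, but the skeleton-from-pose step it describes is the kind of thing off-the-shelf pose estimators already do. As a rough stand-in, here’s a minimal sketch using MediaPipe Pose to pull skeleton keypoints from a single image; the filename is a placeholder, and this is emphatically not X-Avatar’s pipeline:

```python
import cv2
import mediapipe as mp

# Detect 33 body landmarks in one image with MediaPipe Pose.
pose = mp.solutions.pose.Pose(static_image_mode=True)
img = cv2.imread("person.jpg")
result = pose.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

if result.pose_landmarks:
    h, w = img.shape[:2]
    for i, lm in enumerate(result.pose_landmarks.landmark):
        # Landmarks are normalized to [0, 1]; scale to pixel coordinates.
        print(f"joint {i}: ({lm.x * w:.0f}, {lm.y * h:.0f})")
```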

The next one’s called Video-P2P: video editing with cross-attention control. This one is similar to the InstructPix2Pix stuff we’ve talked about in a previous video, but you’re doing it with video. Here’s an example of an input video with a kid riding a bike; they created a text prompt of “a Lego child riding a bike on the road,” and it replaced the kid on the bike with a Lego figure. Here’s an input of somebody on a motorcycle; they gave it the text input “a Spider-Man is driving a motorcycle in the forest,” and you can see the video for the most part stayed the same, but they superimposed Spider-Man onto the motorcycle. Here’s an input of a tiger with an input of “a Lego tiger,” and it changed the tiger into a Lego-looking tiger. “A man is walking a goofy dog on the road,” and you can see it’s got this kind of Disney-ish, Mickey Mouse-looking dog. You can see that the code and the data aren’t available to the public yet: it’s got a star and says “will be released later.” This looks promising; it looks like a fun thing where you can shoot video with just any camera, use this technology, and with a text prompt change what’s actually being shown in the video.
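Video-P2P’s code hadn’t been released at the time, but the image-only InstructPix2Pix it builds on is public. A minimal sketch of editing a single frame with the Hugging Face `diffusers` pipeline; the frame filename and prompt are placeholders, and Video-P2P adds cross-attention control on top of something like this to keep edits consistent across frames:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

# Load the public InstructPix2Pix checkpoint.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")
edited = pipe(
    "turn the child into a Lego figure",
    image=frame,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how closely to stick to the input frame
).images[0]
edited.save("frame_0001_edited.png")
```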

The next one I’m going to show you absolutely blew my mind, so much so that when I first came across it, I thought it was fake. I’m still not 100% convinced, but there is some pretty good evidence that it’s legit. This one’s called Wonder Dynamics, and they describe it as an AI tool that automatically animates, lights, and composes CG characters into live-action scenes. Based on some of the other stuff we were just looking at, this looks like it’s on a completely different level, so let’s scroll down and look at some examples of what it can do.

Basically, the idea is that you have a video of a real human walking around, and you have a 3D asset, something you created in Blender or maybe Unreal Engine or something like that. This tool will replace the real human with the 3D asset you created. Let’s check out this demo video real quick so you can see what I’m talking about: there’s a guy standing here, it zooms out to this Wonder Dynamics tool, and they replace this guy with a robot, this guy with a cartoony character, and this girl with an alien. They click next, it processes, and in the output you’ve got that same guy who was walking in the beginning, but he’s replaced by a robot, and that same girl from earlier, but now she’s replaced with an alien face.

Here’s how this differs from what we were just looking at. Video-P2P took real-life video of a real kid and had AI generate a Lego-character image; it took a real-world video and had AI try to create the Spider-Man. The thing that really confused me when I looked at this new Wonder Dynamics side by side with something like Video-P2P was: how do we have both of these at the same time? They’re just on another level. But the difference is that the Lego character and the Spider-Man are AI-generated images, while these Wonder Dynamics images are 3D-created: somebody actually already built that 3D asset.

The character isn’t an image being created by AI; this image was created by a 3D graphic designer, and they figured out an AI that can essentially replace the human with the 3D asset another human created. Here’s another example showing that it maps out the person’s skeleton to figure out the exact pose as they’re running, and then takes the 3D model and somehow aligns its skeleton to what the real human skeleton is doing. So behind the scenes, this tool is doing the motion capture, tracking the pose in real time as the person moves; it’s masking out the character and isolating them from the background; it can also isolate the background itself, and you can see it has removed the character from it; the camera tracking follows right along with where the person is on screen; it generates a Blender file; and then there’s the final render, all of it handled behind the scenes.
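Wonder Dynamics hasn’t published how its pipeline works, but the “mask out the character” step it demos is a standard person-segmentation problem. A rough sketch with MediaPipe’s selfie segmentation, where the filename is a placeholder and this is only an illustration of the idea, not Wonder Studio’s actual method:

```python
import cv2
import mediapipe as mp

# Segment the person in a frame and black out the background.
seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
frame = cv2.imread("frame_0001.png")
result = seg.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

mask = result.segmentation_mask > 0.5  # boolean person mask, same H x W as frame
frame[~mask] = 0                       # keep the person, zero out everything else
cv2.imwrite("person_only.png", frame)
```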

And if you look at the people actually behind it, there are some pretty big names. A little-known guy named Steven Spielberg is involved, along with Joe Russo of the Russo brothers, who have done a lot of the Marvel movies, and one of the co-founders is Tye Sheridan. If Tye Sheridan looks familiar, that’s because he was the main character in Ready Player One; he’s a co-founder of this company. Some of the investors behind the company: Epic Games and Samsung, among other really big investors. You should definitely check out the site and what it’s capable of. They do have a waitlist for beta access; I put myself on the list, and I’ll be making videos about this if I ever get access to it.

Another interesting point: I found a TechCrunch article. I was very skeptical when I first came across this one, and the TechCrunch article basically says: yes, it sounds a bit like overpromising, and your skepticism is warranted, but “as a skeptic myself, I have to say I was extremely impressed with what the startup showed of Wonder Studio,” the company’s web-based editor. That’s another key piece: it’s a web-based editor, meaning you’re most likely not going to need some insane graphics card or some super-high-end computer to do this. If it’s a web-based editor, there’s a decent chance a lot of the processing happens in the cloud behind the scenes, so you don’t have to have the most high-end computer to even use this kind of tool, which in itself is pretty damn exciting. Personally, I can’t wait to get my hands on this. If this is something that excites you as much as it does me, go to wonderdynamics.com, click “get started,” and fill out the little form to get on their beta list; hopefully some of us will be getting beta access soon so we can play with it in real time and see if it’s really as good as these videos make it seem, because this one’s damn exciting.

So that’s what I’ve got for you: lots and lots of cool things coming out. Midjourney version 5, maybe GPT-4 (we’ll see next week), some really cool models getting released around animation, the D-ID chatbot you can talk to now, and a really crazy-looking computer graphics tool from Wonder Dynamics that I can’t wait to play with myself. I’m nerding out about this stuff and I’m super excited about it; hopefully you’re excited about this kind of stuff too.

Again, I’m going to be making a lot more videos like this. I love doing this, where I come across six or seven really crazy advancements going on in AI and tech right now and just do a breakdown of “check this out, check this out, check this out.” That’s kind of my game plan with this channel: to switch back and forth between that kind of video and some really cool tutorial videos. If you like that kind of stuff, make sure you press the like button and you’ll see a lot more AI videos in your feed; if you press the subscribe button, you’ll probably see a lot more videos from me in your feed, and it’ll also make me feel real warm and fuzzy that you like my videos. I really appreciate you.

If you want to nerd out some more, head on over to futuretools.io and click the button to join the free newsletter. I’ll only send you an email once a week: every Friday I’ll send you the five coolest tools I came across and essentially the TL;DR of what happened in AI for the week. Lots of stuff happens every week, and if you blink, you might miss it, so make sure you’re on that newsletter and I’ll fill you in on everything you might have missed. It goes out every single Friday; if you come to futuretools.io and click the “join the free newsletter” button, I’ll hook you up starting this Friday. Once again, thanks for tuning in. Really appreciate you. See you in the next one. Bye.

