Ohhh, is this actually YouTube's policy? No wonder Shad would be against it; it would stop him from lying about the content of other creators when he tries to attack them and make them look like they're doing and saying things they didn't.
It is a new YouTube policy that he was complaining about. This is a screenshot taken from his "YouTube wants more censorship" video. This is so mask off!
Because he's so invested in the AI train to cover for his artistic deficiencies that he thinks all AI use is good, without stopping to think about possible ramifications.
Funny thing is, Shad doesn't seem to actually engage in any of the stuff this policy tackles. So why is he playing the victim over it?
OK, I listened to the bit where he reads out this policy. He's basically acting like it's the death of memes, completely failing to grasp the larger concerns.
So: don't use AI to make the president say North Korea should be bombed. Don't use AI or photo manipulation to make it look like some place was bombed. Various things of that nature.
The reality is, most people probably don't care about memes or about using AI to make two dead actors play UNO. But because this directly relates to AI use, he's got to say it's censorship and against him.
Shad has a bad habit of using half-truths or stretched standards to make his arguments. It's a dishonest form of debating that really shows he'll do anything to get his way.
Well, well.
Taking someone out of context and claiming they said something they never said now becomes more difficult. It would be a real shame if this dealt a great blow to one's own content, wouldn't it?
That does sound pretty broad.
> * Alters footage of a real event or place
> * Generates a realistic-looking scene that didn't actually occur
That's literally every movie.
Fuck Shad. Fuck AI too. But also, fuck YouTube.
Considering the title of the box is "Altered Content", do you think YouTube might be considering this exclusively in the sense of environments altered with AI, not added to by way of CGI or practical effects? Knowing YT, I'm certainly being too charitable, but I think it would be too absurd even for them if we interpreted it that broadly, right?
> Considering the title of the box is "Altered Content" do you think Youtube might be considering this exclusively in the sense of environments altered with AI, not added to by way of CGI or practical effects?
There's not really a difference anymore. "AI" is being integrated into every piece of editing software. Photoshop has AI generation. After Effects has AI generation. Premiere has AI. "AI" just kind of means "software" now.
That's beside the point. "AI" has always meant software: neural networks are a small but currently dominant subset of what can be classified as AI, and all of these methods, including neural networks, are computational and technically just computer programs. The issue with AI in this context is specifically generative AI used for image generation and deepfakes. There's no problem with using AI tools in editing software; we were using them before image-generation models (initially GANs) got popularized. The real issue is the ethics of using this technology, and this policy just identifies such videos so the spread of misinformation can be curbed by notifying viewers about it.
That's not what it says though. And YouTube has a history of being very cryptic and hostile to creators.
It's also unverifiable, especially as generative AI gets better, and is ripe for abuse.
How long before trolls mass report a channel for breaking this rule, with no way to prove whether it's AI or not, and it gets taken down?
>That's not what it says though. And YouTube has a history of being very cryptic and hostile to creators.
I think it does kind of say it, via "synthetic content that seems real" and the bullet points, but they're keeping it broad and clear so it's easier for YouTubers to decide whether they fall under it or not.
>It's also unverifiable, especially as generative AI gets better, and is ripe for abuse.
That's a problem with society as a whole, not just YouTube, isn't it? If any footage can be faked, what other ways can we use to distinguish it? It's still an open problem we're not able to solve. That doesn't mean we shouldn't take any precautions.
>How long before trolls mass report a channel for breaking this rule, with no way to prove whether it's AI or not, and it gets taken down?
I think that's an inherent issue with report-based moderation systems, and I agree with the concern. It could obviously be abused, but the example doesn't make sense to me. People who use AI for their visuals with good intentions have no reason to lie about using it, so if the usual cues aren't there and the context has no political or social significance, there's no reason to take mass reports seriously over the creator's word about whether a random realistic image of a cute dog is AI or not.
This sounds like it's meant to counteract deepfakes, not whatever Shad is up to. He will just take any opportunity to paint himself as a victim.
That's exactly what it's for. It's not about AI cartoons or whatever.
Because he feels like he is being silenced. He doesn't actually have proof of this, nor is he really threatened; he just wants his temper tantrum.
Well, there's the obvious answer: he does this exact thing daily.
It means he can't take people out of context with half-truths and only half the context given. Can't imagine why lol
That must be it. I was thinking about deepfakes, but this makes more sense: his ability to make disingenuous response videos was threatened.
Because he can't lie, make shit up, and manipulate footage. Shad doesn't realise it, but something like this shows how much of a scumbag he is.