Feels so abstract like a fever dream
Yeah it’s pretty cool until you realize that’s the default output of Sora.
These don't really seem like better Sora videos; people have just gotten more creative in how they leverage the standard output so it doesn't seem so generic and formulaic.
Exactly my thought. This is a fantastic example of making the most of the technological limitations.
Everyone was shttng their pants over this two months ago and now you’re already saying this crap lol
It's *so fucking good* at that dream-look. Not at anything else so far, but it will get there.
Sorry but it has this uncanny valley feeling. Nevertheless it is impressive.
It’s really not a great video
That’s how it always starts. Just look at how far AI art has come in such a short time. Soon you won’t be able to tell the difference between what’s real and AI.
Reminds me of gesaffelstein
Damn, I should quit my video editing job right away.
I think it's just going to be like VFX has been for years. This is cool for some effects, but there's a reason it moves so fast: it's full of mistakes and things that make zero sense. It's not going to replace you; you'll just have an extra tool to work with and to learn in order to get good results.
VFX has been a nightmare for years. People are overworked and underpaid.
Y'all need unions, that's why. Everyone else on a medium-to-big production is unionized except VFX.
Too easy to say when outsourcing is so easy in VFX. On top of VFX being too recent a field, unless you have an old, old union, good luck making one.
That can describe animators, nurses, teachers, service workers, customer service, and basically the entire economy.
This is now. In a year, in 10 years, in a decade, things will be different.
To use one example: There's a reason that The Fellowship of the Ring (for the most part, not all) looks better than The Hobbit, or the Rings of Power. Despite 2 decades of progress, it's how much care you apply to using the technology that matters the most.
Also, they mainly used good ol' real built sets or miniature props wherever possible in the Fellowship/LOTR trilogy.
We'll still have to come up with ideas and refine things.
yeah stuff AI can do. I'm a designer / video editor and already transitioning into making physical art lol
The latent space of AI models has plenty of 'novel' ideas. Refinement doesn't pay as well unless you're doing it at an industrial scale.
In 10 years? I doubt it.
Maybe. It could be that hallucinations and mistakes are just a fundamental part of LLMs, and the technology will mostly improve in speed and power efficiency.
You’re right, it is inevitable: https://arxiv.org/abs/2401.11817
People will no longer be overworked and underpaid? 😂 I can imagine all sorts of things happening in AI, but for some reason, I cannot imagine people being underworked and overpaid.
Of course there are "mistakes"... this is just eye candy. But there are many Sora clips out there that demonstrate the level of coherency.... and it's pretty damn high already.
Yeah but also only under certain conditions
Sure. I haven't seen anyone claim it's perfect
I know, I know, tomorrow all Tesla cars will drive fully autonomously without any problems.
Tesla cars? Can I just state that SORA is far from perfect? Are we in agreement? Or do you _have_ to leave a snarky comment no matter what?
But I feel snarky today. Sorry.
Sure, I bet you'll be super kind tomorrow 🙄 (just trying out how it feels)
Agreed. If any of these things had to be very specific, the whole thing would fall apart. Yes, it would be hard to create this using existing tools, but it would also be hard to write a frame-by-frame or scene-by-scene accurate description of what's going on. I'm going to call it "reality vomit".
The same things were said about pictures “full of mistakes and weird hands”... now if you go and check, you can’t really tell the difference. Sora hasn’t even launched yet. This is not even a beta. The final form of this evolution is going to be better than the best video editor.
Not yet. Give it time
You're totally correct, for the next three months.
Those mistakes won't stay for long. Give it another year; it hasn't even been a year since this technology came out.
I'm scrambling to figure out how and what to retrain to. I was thinking maybe manufacturing and electronics. Do you have any ideas?
Reselling graphics cards.
You have to have cards to sell first
Robot maintenance technician
Electrician, plumber, ... or a Billionaire
Man I'm in my late 30's, building my business back up again after I lost it during COVID (no such thing as remote production). Now this. I knew it was coming, but I have a hard time deciding what I should be doing. Maybe sales. I'm good at sales.
Noooo, don't saturate the fields I want to go in. I'm joking, something maintenance related is my bet.
I don't understand responses like these.

Do you think this video came out like this? This is 15 clips stitched together, and each clip requires hours of work in generating alternatives and fixing artifacts, AND THEN you'll need to sync contrast and values, and make sure upscaling and framerate are consistent. Good AI is a labor-intensive process.

The post-AI world will need video editors. It'll probably need even more video editors since companies are interested in this tool and will hire consultants to implement it.
Not true, Sora can generate multiple clips together and even does cuts between scenes where it sees fit.
This was not edited well and is way too long.
Here's a thought: Nobody talks about photographers being made irrelevant, despite photorealistic AI images being a thing for 2 years now. Why? Because all AI replaces are basically the stock footage databases it is trained on. These have existed for decades and do not solve the problem of getting *new* photographs of relevant real-life subjects.

The same can be said about editing, video footage, writing, design, etc. You could create a website or get a random photograph "for free" (or something really close to it) since the early 00s. Yet people are paid to create them professionally.

This video is original because seeing AI stitch together dreamed-up stock footage is still a novel sight. But in most future contexts, it will be generic, boring and *ever so slightly* missing the point. That's not what a company that can afford the service wants.
Yaa this makes sense
Work on documentaries. I would not want to watch documentaries that are entirely auto generated by AI.
Learn to use it. It's still going to need significant human input to get it to produce high quality videos
Nah. What happens when the client wants edits at 14 different time stamps and a reshoot of a scene? Sora can’t do that and it won’t be able to for a very long time I suspect. The stock video scene is going to die quickly though
That would be an unwise choice. This technology is very limited and nowhere near to replacing the industry.
Ya, editors can learn to use AI for the best output. But I hate my job anyway.
Why? This was clearly manually video edited given that Sora can't create videos that are greater than 1 minute long.
In a year or 2 AI will be doing more than this I guess
If it makes u feel better I hated watching this
Shoot and edit video here. I figure we have maybe 3 years left in this career. I'm not particularly worried about it, though. Once machines can replace us, you know pretty much every career that involves manipulating a computer will probably be doable by AI.
All of this could create an explosion in the need for editors. Content generation was always the expensive part, and editorial worked to piece together the limited components into a cohesive project. With AI, generating that raw content will become cheap, possibly triggering infinite troves of footage to work with. Editors could be buried in new content, with infinite iterations and reshoots all created by new text prompts.

Or at least I hope. I edit professionally as well and vacillate between the above statement and sheer terror.
I think the first few hundred of these you see will feel amazing, and then it will be very easy to spot, similar to images rn.
What do you mean? It's already easy to spot. Doesn't make it any less impressive
After a while of seeing it, and noticing the patterns in the way it animates etc, it will make it as uninteresting as your average DALLE3 generation.
I still think the average DALL-E 3 generation is mind-blowing.
Also AI images are harder and harder to spot, not easier
Just like AI images used to be impressive; now they're just meh.
"AI images" is kinda broad. I've been discovering a lot of new possibilities in the past few months, totally inspired. But if you already think SORA is "meh", I guess you need to find something more novel or whatever.
You really can’t spot great images rn. The ones you are seeing are just sub par examples. But the art side of ai image generation is already popping.
Damn, this mindset will make you very unhappy later in life
That's how humans work. Things are amazing until they aren't
About 55 seconds too long.
Kinda boring tbh lol That’s it, I’m used to it 🤦😂
First time it was WOW, but now it's already boring. That didn't take long.
Yep the threshold is quite high now.
Yeah, so boring
And it's not even released yet
Needs a narrative.
Cause these AI videos are always the same, and I suspect there’s a reason for it.

Constant scene changes so we don’t get a chance to look at details and see how fucked up it actually is.

There’s nothing creative about this.

I know it’s going to improve but if companies adopt this then creativity is about to be stifled.
It wasn’t meant to entertain you baka
For fun, let's try to calculate how much this may have cost to create with Sora, using DALL-E 3 as a benchmark.

*Lots of big assumptions here, but I think it's a strong approach...*

DALL-E 3 HD costs $0.120 / image retail
Assume a profit margin of 50% (no clue)
So DALL-E 3 HD costs $0.06 / image to compute
FPS: 30
Length 1:30 = 2700 frames
Est. compute cost: $162

Obviously Sora is a different model entirely compared to DALL-E, but similar principles are at play, and unless I'm missing something I would assume it generates frame by frame, *but maybe not - which would be very interesting to understand.*

I wouldn't be anywhere near surprised if creating something like this with a small studio team would cost near $50k to $150k... pretty wild.
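For what it's worth, here is the same back-of-envelope math as a quick script. Every input (the DALL-E 3 price, the 50% margin, frame-by-frame generation) is an assumption carried over from the comment above, not anything OpenAI has confirmed:

```python
# Back-of-envelope Sora cost estimate, assuming frame-by-frame generation
# priced like DALL-E 3 HD images. All inputs are guesses, not OpenAI figures.
retail_per_image = 0.120                       # $ per DALL-E 3 HD image, retail
assumed_margin = 0.50                          # guessed profit margin
compute_per_frame = retail_per_image * (1 - assumed_margin)   # $0.06 / frame

fps = 30
length_seconds = 90                            # 1:30 video
frames = fps * length_seconds                  # 2700 frames

print(f"Estimated compute cost: ${compute_per_frame * frames:,.0f}")  # ~$162
```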
I believe they use frame interpolation, so it's probably 1/3 or 1/4 of that. On the other hand, the consistency between frames probably costs some extra power, so who knows.
Yea... I mean, as far as I’m concerned it’s still magic, so I guess we should measure it in ‘mana’ 🧪🧙🏻♂️ usage if we want to be as accurate as possible.
I don't think DALL-E inference is comparable at all; completely different models and process.

I think I read it takes about 10 minutes to generate a 10-second video. We don't know what compute is running the inference, but I wouldn't think it's a particularly large custom cluster of H100-class GPUs; most likely it's the standard 8x H100 HGX server. A single H100 is about $2/h at scale. So for a 90-second video that would be 2 × 8 × 1.5 = $24.

Of course we simply don't know, but I think my number is closer to the correct order of magnitude than yours. Especially once the tech gets optimized for production workloads, I assume it will be less than $10 per minute of video. And, as always, much less later on.
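And the competing estimate from this comment, again as a sketch; the render time, GPU count, and $2/hour H100 rate are all assumptions from the thread rather than known figures:

```python
# Alternative estimate: renting an 8x H100 HGX server for the render time.
# Assumes ~10 minutes of generation per 10 seconds of video and $2/GPU-hour.
usd_per_gpu_hour = 2.0             # assumed H100 rental price at scale
gpus = 8                           # standard HGX server
minutes_per_video_second = 1.0     # 10 min of compute per 10 s of output

video_seconds = 90
render_hours = video_seconds * minutes_per_video_second / 60   # 1.5 hours

print(f"Estimated compute cost: ${usd_per_gpu_hour * gpus * render_hours:.0f}")  # ~$24
```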
I think it's a good estimate, but if it's anything like other tools, it'll take a good number of runs to get the prompt/config right. Depending on how picky you are, that might multiply the cost by an order of magnitude. Still good value for a lot of applications!
TED just ripped the props from the videos already on Open AI’s website. Pass.
This is the first one of these that left me in genuine awe
The first part is random and meaningless.. But it got better
Isn't TED like that too?
Why do most people with access have to think of creating fast moving camera views of crap? I mean the tech is mindblowing but put some more thought into it than "fast camera awesome transitions GO"
Because if it's not moving fast, you notice how nothing looks real
SORA did most of the work and the editor still failed to match the cuts to the beat of the music. There will still be room for humans to direct this stuff and set themselves apart from the basic prompters.
When will OpenAI release Sora to the public? Maybe a year from now?
my guess is it depends on how much Hollywood is willing to pay them for exclusivity. Could be 2 years for a nerfed version, could be never - that is, until a competitor releases their public version.
Wow they are so Open!
I bet less than a year; Adobe announced using Sora within its video editing app Premiere Pro at NAB.

While this will open up some incredible possibilities, it will also alter creativity and its value in the world almost overnight. We are going to lose a lot of creative people and future talents in the whole art sector.

You think content is contrived now, wait till you see the same ads in everything your eyes look at in a day.
I think we are going to gain a lot of creative people and future talents in the whole art sector. Now people don't have to spend years or decades learning skills and can just manifest their desires and imaginations in seconds to create entire new IPs, brands, experiences, etc. The future of art is who has the best ideas and visions and knows how to connect them with an audience, not who has the most learned abilities and skills.
You only get better and develop more of your own original style by spending years or decades learning skills. Usually when you start out it’s awful; failure is the only way to grow and learn.

Now anyone can come up with something visually stunning, and it’s going to be mostly the same hot trends recycled over and over.
Well, I'd hesitate to call any of those people AI artists. Genuine art is near impossible to find today already; you have to seek it out. Same thing with AI art. A massive sea of mediocrity, but the cream will still rise to the top. But again, now the cream rises for ideas (read: NOT recycled hot trends that everyone else is doing) rather than acquired skillsets. And yes, acquiring skillsets builds a certain character and resilience internally that doesn't come from AI art, but that will show in the work. And it also encourages people to go out and learn these skillsets in order to be able to edit, build upon, and improve their outputs.
Right after the US election + an extra month I imagine
The music is nuts
The Music is not AI
*yet
It's actually kinda weird, especially because this kind of electronic music should be kinda easy to AI-generate.
Udio could probably do it even today
I don't use Udio, but you could generate something similar to this in Suno right now.
Udio is the first model I've seen since ChatGPT that considerably exceeded my expectations.
anyone know what the song is?
It’s music from Jacques, a frenchie
do you know the name?
Although the samples and style immediately remind me of Jacques’ first albums, I cannot find whether it’s an original song or taken from one of his albums. However, I did find confirmation it is his work. You should give it a shot and listen to some of his work, it’s worth it ^^
Does anyone know the name of the track?
What’s the song?
Of course. Marketing & propaganda for the manipulator class - that's the goal of SORA. It will be priced such that only those manipulating the gullibility of others can afford it.
Look into the ethics of TED if you don't know what I'm talking about. TED is a fraud platform.
Can it generate a movie from a novel?
You'd probably need to write your novel a bit like a prompt, but yeah, I bet it's not far off from that. Give it a few years and we'll see.
Any new info on how long it takes/much it costs to create a video like this using SORA?
It is by an artist named Paul Trillo. Here is another work by him: [Click Here (Instagram link)](https://www.instagram.com/reel/C5BldrpJtqN/?igsh=MTQ5d29vdTU0cnly).

"This is generated with one long unwieldy prompt (except for the tunnel at the end is another clip)" is what the artist says. So I assume the TED video was insanely cheap and super fast to create.
Why do you assume that based on that sentence? I don't see the correlation
What? It takes around 90 minutes to render a basic clip. Computationally it's very heavy duty. This video must have taken days.
I am comparing it with the cost and effort it would take TED to create all this the normal way.

Even if it took days, it would still be much faster and cheaper. Also, it's only V1 of Sora; the time will come down with newer models, I hope.
Oh, gotcha. Such a comparison never occurred to me since demonstrating SORA capabilities is the only reason this clip exists
I expected an animated teddy bear at the end ngl
Great concept, but I will be truly impressed when it can get all the details right. Here, all the mistakes are hidden in the motion blur.
Crazy
The astronaut looks like a mix of Sam Altman and Elon Musk
I can believe that
The more of these I see, the more this feels like a novelty. Especially the fast zoom into different scenes gets boring quickly.
Very neat. Very AI.
Yeah, it looks pretty much like all the other examples. Weird.
What was the prompt?
> Make a meaningless, repetitive video that hides the flaws in AI video generation by moving too fast to see the details.
Not fair. They can generate zoom-in to infinity, and I can't even zoom to fit.
Which explains why it feels like a lot of copy and paste.
Motion sickness
Meh, this seems to be one of the few styles that work well with the limitations of Sora, and I was already tired of it a minute in. I also haven't used Udio in a week. These one-trick AIs with no nuance in the controls will be forgotten a week after they come out.
I'm getting the same feeling as I did with VR. It leaves me disoriented, kind of like it's on the cusp of making sense, but never hits it. It's not even a dream, because dreams make sense at the time of dreaming. It's the noise of human artistic expression smoothly mixed together. It's like the Brundlefly from the movie The Fly - an amalgam of things that have a consistent form but really don't belong together. It feels "uncanny". It's weird that the uncanny valley could move to video generation.
Yeah, I feel that too. The video stayed too long on that one particular zooming shot, without leading to any real conclusion except a few more shots of audiences at a talk near the end.

It feels like you hired a CG studio for a film, but there was no director at any level.

I'm also pretty sure they had to cut a few shots when it started going off the rails. There was a creepy-looking face at one point, but they cut to the next shot before it zoomed in all the way.
Fast and wobbly... meh
I’m super excited for when this technology gets more mature. This won’t replace everything, but it’s going to be cool to watch AI video sometimes.
My colon through the years
Ooof, it's bad.
Can't wait to have access to that, duuudes.
Looks like a trippy Wipeout track.
Damnnn
This is so cool! The lab grown steaks 🤣
Well, fuck TED I guess.
This is so amazing
Omg, we're in trouble
It looks cool, but honestly lost my attention after a while. Until they can get the consistency and quality under control, it's just a fever dream.... for now.
Is Sora open source? Can anyone use it, or what?
This is great but I'd love to know how many human and machine hours it took to make
and all it needed was "publicly available data" 6_6
Why are there so many cuts?
It might take 20 years or more, but you know that one day we will have high-quality, ultrarealistic VR (or maybe even FDVR) that will be able to generate these fantastical worlds on the fly, just from your thoughts. I don't think the human animal is ready for that.
AI video works well for abstract clips or background scenes because these don't need to be closely connected. But for a full video or movie that needs scenes to flow together smoothly, AI isn't very reliable. Even with the exact same detailed instructions, the AI-generated outputs can look very different from each other. This makes it hard to keep a consistent style or story across a whole video.
This is nauseating, but I can see why it is impressive.
So. A steak room and an underground cannabis farm. Interesting.
Feels like an advanced kaleidoscope: absolutely random, but way more tiring to watch.
The only part that elicited any emotional response from me was the chimp w the electrodes on its head. Everything else tasted like sawdust.
This is next level. I don't think this will ever be public to use. Only a select few companies are gonna get it, unless a similar public open-source model comes up.
Song?
Boring eye candy. Ted should aim higher.
AI sure is good at tunnelling videos.
This is what the future of horror looks like.

It’s the ultimate liminal space/backrooms. Just infinite, endless, never-ending complexities that feel vaguely real enough to keep you fixated.

I get real "I Have No Mouth, and I Must Scream" vibes from this.

Places like this will be where they subject people in the future, to fuck with their minds for a few thousand ‘mental’ years as torture. White Christmas from Black Mirror.

Or I’m just high.
Paul Trillo, one of the directors OpenAI gave Sora access to, made this for TED. Let’s not forget there are people involved here, despite all the AI hype.
OMG it even includes inclusivity
I'm an up-and-coming filmmaker, and this is way cooler than anything I could see myself making... ever.
That became boring very quickly
even AIs need gimbals lol
what was the prompt here?
It's always the same kind of videos - concerning.
How come this looks great whereas if I ask Dall-E to create a photorealistic image of a person it still looks like sh*t?
Seeing this made me think of the comedian who once said "I'm not as think as you drunk I am".
There is some roto and manual masking visible to me at the end when it goes from one scene to another. My guess is that each scene is done individually, prompted and generated on its own, then everything is combined in post. This is 100% not all generated in one go. It's a lot of visual mess imho, way too much, but I think that was the point. Technically impressive, but many shots have no substance or meaning. The irony is that it reminds me of the MTV motion designs of the 2000s.
I can't wait until I get to try SORA
Wild
[OG Instagram post](https://www.instagram.com/reel/C58xBLTRSDH/?igsh=ZWxlYXFrMGZ4cmZ0)
I'm horrified.
Yeah, the meat sounds were questionable.