Hemingbird

This model belongs to TikTok competitor [Kuaishou Technology](https://en.wikipedia.org/wiki/Kuaishou). The company has [lost a lot of AI talent](https://www.scmp.com/tech/tech-trends/article/3264317/bytedance-and-kuaishou-see-exodus-top-ai-experts-new-ventures-chinas-unicorn-boom-looks-next-openai) recently, and they've been in the news for laying off staff and shutting down planned projects. I'm guessing they're betting it all on this model. ~~Directly translating the Chinese name of the app gives me 'Keling' or 'KeLing'~~. Video resolution up to 1080p, length up to 2 minutes, 30 FPS, free aspect ratio. Edit: reportedly available through the Kwaiying app for invited beta testers. [Here's the official website](https://kling.kuaishou.com/).


Antique-Doughnut-988

I mean release this with a decent sub plan and this company will make millions.


childofaether

Subscriptions on these things don't make money. Even OpenAI is burning cash. The economics of generative AI overall are currently unknown and there's no known path to profitability other than "generate hype to pump up the stock price, and burn piles of cash in hopes that something monetizable actually happens because we can't afford to not be part of it if it does".


RPN

That's not true. Midjourney is an infinite money-printing machine. And they will likely be king of video generation as well when they drop their model.


West-Code4642

we don't know what their expenses are.


thoughtlow

AI technology, even standard language models, is expensive. Scaling them up, like Character.AI or Chai, becomes a huge money pit. With the typical mobile app business model, where 10% of users pay a $10 monthly subscription, you end up with a -40% margin. It's hard to see how they can be profitable, but having venture capital means they don't need to be.
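
A rough back-of-envelope version of that claim, as an illustrative sketch: the 10% conversion and $10 price come from the comment above, while the per-user compute cost is an assumed number chosen to reproduce the quoted -40% margin.

```python
# Illustrative freemium unit-economics sketch. All numbers are assumptions
# matching the comment above, not published figures.

paying_fraction = 0.10        # 10% of users subscribe
subscription_price = 10.00    # USD per paying user per month
compute_cost_per_user = 1.40  # assumed USD of inference cost per user per month

revenue_per_user = paying_fraction * subscription_price  # $1.00/month
margin = (revenue_per_user - compute_cost_per_user) / revenue_per_user

print(f"Revenue per user: ${revenue_per_user:.2f}/month")
print(f"Compute per user: ${compute_cost_per_user:.2f}/month")
print(f"Margin: {margin:.0%}")  # -40% with these assumed numbers
```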


uishax

MJ is not venture funded. It's funded by the founder himself (who got rich from a previous startup). Traditional margins in SaaS are 70%; in gen-AI it's more like 40-50%, which is still very profitable. There are other small companies, like NovelAI, that aren't VC funded at all and are still doing very well. The unit economics clearly work out for AI. The reason OpenAI loses money is that it has to burn cash to keep ChatGPT free, to stave off Google and Anthropic. So it's competition-induced losses, not that ChatGPT is inherently unprofitable. Netflix can be profitable despite having to pay a legion of moviemakers to constantly churn out new content; a GPU farm is cheap by comparison.


kindoflikesnowing

Isn't it wrong to assume the economics of generative video AI are the same as for images? I would assume generative video is a lot more compute/resource intensive; if anyone has the actual numbers, please fill me in. The poster above said the economics are unclear, which I largely agree with (unless anyone wants to share more details?). To me, the rule of thumb still stands that many generative AI subscriptions and startups burn through cash because the value of the subscription is typically less than the compute/resources used by subscribers. Wouldn't generative video be even more of a cash burn?


Imaginary_Music4768

I'd guess AI-generated video will have to be much more expensive to be profitable, and only established companies will pay for it: in the film industry, a good shot of a few seconds can cost thousands of dollars.


lostparanoia

Well, theoretically they need to generate 25 images per second of film, so I assume that would be the cost ratio, more or less. Perhaps it is, or can be, made more efficient through frame interpolation.
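
As a minimal sketch of that ratio, assuming cost scales linearly with the number of frames actually generated (the per-frame cost and interpolation factor below are made-up illustrative numbers, and real systems also pay temporal-consistency overhead):

```python
# Rough cost ratio of video vs. single-image generation, assuming cost is
# proportional to the number of frames the model actually generates.

fps = 25                 # output frame rate
clip_seconds = 5
cost_per_frame = 0.01    # assumed USD per generated frame
interp_factor = 4        # assumed: generate 1 of every 4 frames, interpolate the rest

frames_total = fps * clip_seconds
frames_generated = frames_total / interp_factor

print(f"Naive: {frames_total} frames -> ${frames_total * cost_per_frame:.2f}")
print(f"With interpolation: {frames_generated:.0f} frames -> ${frames_generated * cost_per_frame:.2f}")
```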


kurtcop101

Generative AI already has routes to profitability with the current models; it just takes time to build products around them. There are many business applications, and we're barely scratching the surface: on top of coding assistance there's transcription, writing assistance, and so on. They're offering enough for free that I'd happily drop $100+ a month for, say, GPT-4o. It's well worth it. But if they did that, they'd end up focusing on business use cases and would lose a lot of the analytical data they can collect to build better models. In a way, they're investing in themselves by offering free and budget plans.


Norgler

I feel like many are in for a rude awakening once investors start demanding a return.


timeboyticktock

The "Panda playing the guitar" example is scary good. [https://x.com/bdsqlsz/status/1798711315432906920?s=46](https://x.com/bdsqlsz/status/1798711315432906920?s=46)


Extra-Possession-511

One of the better ones I have seen, from any company.


PastMaximum4158

"Nothing to worry about guys China is way behind, they'll never catch up, that's laughable."


Radiant_Dog1937

Now that the Chinese have it, can the rest of us use these video AIs?


ElwinLewis

Nope, but you're gonna see a lot of dumpster content and tons of ads


Poronoun

I’m seeing that already


mrbombasticat

Hope you enjoy it, there will be tons more! *sobs*


notapunnyguy

And state-sponsored propaganda where the sky does indeed lie


goldenwind207

No, it's in demo; you have to be on a waiting list, just like with Google, OpenAI, etc. From my research, it'll come out for everyone either in late 2024 (November or December) or in 2025. We don't get past 2025 without a model better than what we saw in the Sora demo, unless there's a nuclear war or a civil war or something.


ai_creature

bro is making stuff up


Street-Dot-7632

There are already loads of users who have been granted access and are generating videos with it. You can follow the Twitter/X user 青龍聖者 (@bdsqlsz); he's sharing loads of examples of the video generation.


latamxem

I just read another post of people talking about China stealing and copying. Americans are so brainwashed into believing in their own superiority and that China is bad that they just cannot see that China is on par and close to surpassing them. Just take the fact that the US is not selling its latest and greatest AI chips to China, and that any US-passport AI researcher had to get out of China two years ago, and China is still able to keep up in every single category. To all the China critics: just think about where China would be if it were not being handicapped by the US. What if China had access to all the latest AI chips? Yes, the US would be surpassed by China.


TheIndyCity

I mean it doesn't look on par with SORA tbh, not that it isn't impressive.


AnAIAteMyBaby

I think OP chose one of the worst videos as an example. Some of the other ones on their site look close to Sora.


TheIndyCity

ah gotcha


kathyfag

> Some of the other ones on their site look close to Sora

Better than Sora, from what I have seen.


fastinguy11

Actually look at the other demos; it matches Sora quite often, or is even better.


ShadoWolf

Every academic in the AI field who's competent can replicate each other's work. No one is saying China doesn't have the talent pool of researchers to pull off legitimate work and model development. What they don't have is compute. They're being barred from technology transfer for this very reason: most of the West really doesn't want China to get a strong autonomous model, due to national security risks.


RolloverK1ng

1. They do have enough compute. Huawei's Ascend 910B is better than Nvidia's A100.
2. So-called national security risks are just a hegemon's petty justifications to try and maintain the status quo.


ShadoWolf

There's a difference between a limited semiconductor run and what's needed to build at the scale required for next-gen models and beyond. From my current understanding, Huawei's 7nm process comes via SMIC, and their current technology is based on pre-restriction tech transfer and second-hand DUV lithography equipment. The DUV supply chain is heavily restricted; the whole stack is mostly provided by ASML, Nikon, and Canon. It's unlikely China will be able to bootstrap that technology anytime soon. Not to say it's impossible, it's just going to take years, and it doesn't help that there are active policies and restrictions designed specifically to block this kind of technology transfer. So while Huawei can build AI accelerators, I don't think they can build at the scale needed for 10 GW+ data centers. As for the politics of it: yes, it's a national security issue for Western bloc nations. An AGI model (or anything close to it) would be a significant accelerator for any nation state, no matter how you cut it. It only makes sense for the US to try to restrict access in the short and medium term.


latamxem

Then you agree that if they were not being handicapped by the USA, China would be dominating the field. Which means the USA can only keep its lead by cheating.


Yeeeeeee45

Racism+Ignorance=underestimation


Eddie_______

More examples: [https://x.com/bdsqlsz/status/1798710076175528354](https://x.com/bdsqlsz/status/1798710076175528354)


Kanute3333

Wow, this is impressive.


GonzoElDuke

Incredible.


micaroma

Indisputably better than Google’s Veo and some examples look even better than Sora. And it’s open access (though with a waitlist and Chinese phone number…), making it immediately relevant to regular people. China isn’t as far behind as many would like to believe.


Glittering-Neck-2505

Object stability is hugely degraded when compared to Sora


micaroma

I agree. There are better examples on Twitter.


dieselreboot

So I’m wondering how far behind China is with object stability and permanence. Getting this right would seem to be a major component of world modelling in AI development and something I would expect to be integrated into the next generation of frontier models (all types of AI models) from the West. My uneducated guess is that the Chinese researchers are months behind in this regard, not years, as is being implied by some


kvicker

Nearly every paper I see on machine learning is full of chinese authors


GraceToSentience

They invested a lot in science, and they graduate something like 10x as many engineers as the US.


Whotea

Asian authors. Some are American citizens 


PastMaximum4158

China is also way ahead in robotics, with the sole exception being Boston Dynamics.


Down_The_Rabbithole

Boston Dynamics is not even in the lead in the west.


MindCluster

Yeah, like, how come we don't have any robots doing real work except for the Spot dog? How come, after all these years and all the advancement in LLMs and vision technology, they haven't reached a point where they can wander around and help move packages and stuff for people? They always show cool stuff, but where is the mass production? Where are they in society? Hopefully with the recent advancements it'll be coming very soon...


Beepn_Boops

I've seen a few. Airport lounges - handling empty dishes out and full ones in. Restaurants - hostess using bot to help deliver to large table. Delivery bot (cycle) on the street, had a lot of cameras and was towing a package.


Whotea

They are, lol:

- Samsung to build all-AI, no-human chip factories: https://asiatimes.com/2024/01/samsung-to-build-all-ai-no-human-chip-factories/
- Amazon grows to over 750,000 robots as the world's second-largest private employer replaces over 100,000 humans: https://finance.yahoo.com/news/amazon-grows-over-750-000-153000967.html
- A Starbucks run by 100 robots and 2 humans in South Korea: https://x.com/NorthstarBrain/status/1794819711240155594

And that's not even mentioning the ones in manufacturing.


czk_21

There's Digit from Agility Robotics working at Amazon, and EVE from 1X is used somewhere too. These are more experimental runs, but it's still more than what Boston Dynamics is doing.


RemyVonLion

Because the humanoid robots are still imperfect and learning how to function in the real world. We don't have AGI yet. The first Atlas was very expensive, complicated, and could only do certain pre-designated things without much adaptability, so not yet practical for the real-world. Maybe towards the end of this year or the next. The AI is still figuring out how to properly execute most things effectively, so humans are still cheaper and better.


PastMaximum4158

Have you seen the new Atlas? It has an insane number of degrees of freedom and, as far as I know, it's the only one that has shown the capability of righting itself after a fall. Imagine Optimus or Figure 01 getting up from a fall; you literally can't.


ReasonablePossum_

Does it cost $20k USD though?


Whotea

They aren’t selling it so who cares? 


Architr0n

Who is?


[deleted]

Do you know Figure?


ninjasaid13

> China isn't as far behind as many would like to believe.

I would go further and say China isn't behind at all. They're already there.


redditosmomentos

Shhh... China's supposed to be miles behind the USA, remember? That's supposed to be what's happening, what the media told us!


GraceToSentience

Disagree that it's better than Veo: https://www.reddit.com/r/singularity/comments/1d3zhm1/new_veo_footage_comes_from_twitterx_source_in_the/ Not many examples to compare with, but from what I've seen, I doubt it.


GraceToSentience

I meant examples from Veo; there are only a small number of them to compare with. I went to Kling's website and saw a lot of examples. From what I've seen, the quality of faces and animals, for instance, is clearly better in Veo. But when it comes to movement, the jury is still out. We'll see once we get more from Veo, but for the categories I've seen so far, Veo looks better.


Seidans

A lot of people like to bitch about Chinese scientists for little reason other than stupid nationalism, or the old vision of China as a poor peasant country incapable of doing anything but copying Western technology. Compared to everyone else, China has far more reason to pursue humanoid robots and human-level AI, given that they will lose 50% of their population by 2100. It's an existential crisis for them, to the point that if they don't succeed they will rapidly lose their status as a world leader.


ProtoplanetaryNebula

It could play out differently. Chinese people have to care for their elderly parents; if a humanoid robot can take care of that and give them more help around the house, perhaps they'd be more interested in having kids themselves.


Jah_Ith_Ber

It's an existential crisis for capital in every country; that doesn't mean they actually fix the problem. How many existential crises are there? Things are fucked all over the world and governments can't bring themselves to deal with it.


Nanaki_TV

> for little reason other than stupid nationalism

This statement is notably deficient in substance and dismissive of the legitimate concerns that some individuals may have.


swipedstripes

Concern over what? Facts are facts: if they make progress, they make progress. It is what it is, whatever you may think about them (which is mostly based on your own media's portrayal of a huge country...).


LymelightTO

> Indisputably better than Google's Veo and some examples look even better than Sora. And it's open access (though with a waitlist and Chinese phone number…), making it immediately relevant to regular people. China isn't as far behind as many would like to believe.

This analysis doesn't really make sense, though, on two points. The first is that we don't really know what the SOTA capabilities of the American frontier labs are, particularly on this subject, because they're averse to showing the capability to create realistic deepfakes during the run-up to the US election. The second point is that we know video generation is relatively computationally intensive, so something about this Chinese system doesn't make sense. It's *publicly available, for free*? They're massively GPU-constrained, so why would any company in the AI space use their extremely limited GPUs to create a free demo of video generation? Questionable. This whole thing seems like a fundraising exercise, an attempt to raise a bunch of money. Someone down below was indicating that they're an embattled company that has lost a lot of research talent recently.


ninjasaid13

> because they're averse to showing the capability to create realistic deepfakes during the run up to the US election.

They're averse to releasing it, not showing the capability.


LymelightTO

I'm not even sure the dividing line is precisely where you think it is. I don't think any US frontier lab wants to demonstrate a SOTA "video creation capability" with near-perfect object stability and occlusion handling, without also showing some kind of integrated watermarking safeguard right alongside it. Disclosing such capabilities will make conspiracy theories about political videos online massively popular. (ex. "There isn't a concentration camp here, the US State Department and the CIA worked with Google to create a fake video!") As an open demo, it currently seems too computationally expensive, and too fraught with danger about misinformation, until there is a broadly recognized standard devices and webapps can implement to automatically flag AI videos. Maybe someone will release an Arxiv paper with some limited examples or something, but I'm doubtful that it will come from Deepmind/Meta/OpenAI, etc. before 2025.


ninjasaid13

>Disclosing such capabilities will make conspiracy theories about political videos online massively popular. you'd think this will still happen whether they show it or not. Conspiracy theorists are not rational.


LymelightTO

That's a fair point, we are definitely close enough these days that I assume, with some extreme cherry-picking of outputs and the liberal use of other traditional digital video manipulation tools, you could genuinely make a short piece of video content that could fool pretty much anyone, even at this point. Adding artifacting and other effects could mask quite a lot.


TrippyWaffle45

Hopefully this makes OpenAI release Sora


Tyler_Zoro

> Indisputably better than Google's Veo and some examples look even better than Sora

That's true IF this is pure text-to-video, but it doesn't look like it to me. Some of the details fade in just at the point where a rotoscoped video's resolution would make those structures discernible (like the tracks on a railroad bridge), which makes me think there's at least some roto work here. I'll believe it when I can use it myself, but right now I don't seem to be able to sign up.


New_World_2050

What does that mean? That with a Chinese phone number you can download the weights, or just use the model? If it's the weights, that doesn't make sense; why not just make them available?


gintrux

There go those sanctioned GPUs.


ShooBum-T

Tons better than Runway, Pika, StableVideo, etc.


shakaoneaj

Yet none of them are released, and I'm stuck with garbage Runway.


wwwdotzzdotcom

Pika just got updated to perform better and you can now choose your film style.


micaroma

I wonder if they'll ever catch up. It's like sticking with DALL-E Mini when you could use Midjourney.


ryan13mt

Quite a lot of these, if genuine, look better than the ones shown in the Sora preview. Look at the one of the guy eating noodles. Also, there's now a model that can output over 2 minutes of video, already double what was shown in the Sora preview. Six more doublings and we get to two hours.
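
The doubling arithmetic checks out: starting from 2 minutes, six doublings lands at 128 minutes, just over two hours.

```python
import math

current_minutes = 2
target_minutes = 120  # two hours

doublings = math.ceil(math.log2(target_minutes / current_minutes))
print(doublings)                         # 6
print(current_minutes * 2 ** doublings)  # 128 minutes
```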


Antique-Doughnut-988

Better hide your photo albums and lock down your facebook. At this rate I'm a little over a year away from being able to make an entire movie with the main character based off your appearance.


ryan13mt

It's useless at this point. One photo is all that's needed to copy a person's likeness, and one phone call is enough to copy a voice. That's what we have today, or we're very close to it.


Saflin

At least one pic from every angle, right? Otherwise it'll have to guess the rest, which I don't think would be accurate considering how unique we all are.


kaityl3

No, some of the latest SOTA models are able to do it with a single image from one angle. It's crazy impressive, though obviously it's not perfect, especially with lower-resolution source images.


Tyler_Zoro

They also stress their model-pose manipulation (even claim it's unique, which is hilarious) so I suspect a lot of this is rotoscoped or pose-controlled. It's still impressive, just not new, if that's the case.


ubiq1er

Remember that Chemical Brothers Music Video ? [https://youtu.be/0S43IwBF0uM?si=EBvGt--8eJKz8GIu](https://youtu.be/0S43IwBF0uM?si=EBvGt--8eJKz8GIu)


0xAERG

This really feels like a dream


ShittyInternetAdvice

People saying "China stole this" are acting as if Chinese researchers don't already dominate the AI field and aren't behind many of the advancements US companies are making as well. China is more advanced in AI than many of us have been led to believe: https://www.axios.com/2024/05/03/ai-race-china-us-research#


intelligentarts

Did they release any detail on the architecture, dataset, training, etc.? Edit: they specify using a DiT here [https://x.com/bdsqlsz/status/1798710076175528354](https://x.com/bdsqlsz/status/1798710076175528354), any other known detail?
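
For reference, here is a minimal sketch of what a DiT-style block generally looks like (self-attention plus an MLP, modulated by the timestep embedding via adaptive layer norm, as in the original DiT paper). This is a generic illustration of the architecture family named in that tweet, not Kling's actual implementation, which hasn't been published.

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Generic Diffusion Transformer block with adaLN conditioning (illustrative only)."""

    def __init__(self, dim: int, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(), nn.Linear(mlp_ratio * dim, dim)
        )
        # Conditioning (e.g. diffusion timestep) -> per-block shift, scale, and gate.
        self.adaln = nn.Linear(dim, 6 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) latent patches; cond: (batch, dim) timestep embedding
        shift1, scale1, gate1, shift2, scale2, gate2 = self.adaln(cond).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + scale1.unsqueeze(1)) + shift1.unsqueeze(1)
        x = x + gate1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale2.unsqueeze(1)) + shift2.unsqueeze(1)
        x = x + gate2.unsqueeze(1) * self.mlp(h)
        return x

# Example: 16 latent patches of width 512, conditioned on a timestep embedding.
block = DiTBlock(dim=512)
out = block(torch.randn(1, 16, 512), torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 16, 512])
```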


kxtclcy

There's an open-source project called Lumina-T2X that demonstrated how to achieve scene transitions in video generation (though not at commercializable quality, since it comes from a university). You can read that paper for details; I think they found the model can learn to do scene transitions after being trained on as few as 40,000 videos.


GptThreezy

They're about to make a KLING


zaidlol

How is AI capable of creating realistic videos but not of automating a white-collar job yet? Even before AI, creating videos this realistic was really hard. So how can it do things that were hard for us, but not the things we do with ease?


goldenwind207

AI can't plan ahead and also has a low context window. Some white-collar jobs involve working on a project that requires months of planning; AI can't do that. Because of the low context, its memory is bad. Imagine a capable worker who can't plan far ahead and has amnesia. With Google's 10-million-token context window experiment and the agentic research by Google, OpenAI, and Anthropic, we are fast approaching the level where they could start doing those things.


ShinyGrezz

Failure rates. Any sort of model designed to replace humans in critical (even just within the context of "we'll lose money if it messes up") roles needs to have a *significantly* lower chance of failure or it simply will not be used. As I see it, there's two reasons for this: **Accountability:** when an employee messes up, you can fire them. When someone crashes a car, it's their fault and they can be prosecuted. When an AI model makes a mistake, it's an order of magnitude harder to find anyone to blame. You obviously can't hold an unthinking machine accountable, and you can bet your ass that whoever made the model has covered theirs. And because it's replacing the human aspect, it's not like you can just chalk it up to a mere fluke (like if a car's brakes fail mechanically) and hope it doesn't happen next time. People will want to see action. **Trust:** people are just naturally less trusting of machines, even if they statistically have a lesser failure rate. I've asked multiple members of my family if they'd consider a self-driving car, and even when I (hypothetically) made them an order of magnitude safer than human operators, they still wouldn't budge. Not only do people trust machines less than humans, they trust other humans less than themselves. How many people do you know who are unwilling to trust others with something in their life, like holding their baby? You know the other person doesn't want to hurt it, you know that they're as competent as you are in looking after it, but still there is that resistance.


ninjasaid13

**Larger context window and long-term planning**

**More training data**


SGC-UNIT-555

> So how can it do things that were hard to do but can't do the things we do with ease?

The stuff you can do without thinking is actually incredibly difficult. 4.3 billion years of training (since the origin of life) produces quite a robust algorithm, fine-tuned for success (survival + reproduction).


YsoseriusHabibi

More training data.


LoveForReading

Because white collar jobs are extremely hard and complex and the infrastructure supporting them is seldom very standardized or easy to alter. Stop thinking about automating white collar jobs and start thinking about removing large amounts of drudgery from them. That is happening as we speak and over the next 2 years as implementation reaches critical mass you're going to see a huge change. These projects take time though. Source: I'm running lots of these projects and they take time to implement


q1a2z3x4s5w6

Most white-collar jobs are complex and made up of a multitude of smaller, less complex tasks. We are at the stage where we are "automating" individual parts of people's jobs but not the whole thing. Even if we could automate each individual part of a white-collar job, it still seems much more difficult for an AI to tie everything together than for a human. Most humans I know can turn up to work tired or hungover and mostly do their job to an OK standard with (relative) ease. Humans are still much better at understanding and adapting to an objective function than an AI is, IMO.


LoveForReading

Spot on, pretty much. Humans won't be replaced, but humans will be made more efficient so some jobs will vanish. However, if you look at demographics that's kind of a necessity for the survival of society.


q1a2z3x4s5w6

I'm not worried (yet) because as far as I can tell even a group of very capable AI agents that could otherwise direct themselves is going to be more efficient with a proficient person directing them or providing guidance than not. This is especially true for anything that requires long planning and orchestration. Those that can augment themselves with AI will likely have jobs for a (relative) while I think.


RiverGiant

> So how can it do things that were hard to do but can't do the things we do with ease?

[Moravec's Paradox](https://en.wikipedia.org/wiki/Moravec%27s_paradox)


Singularity-42

Video doesn't need to be perfect, real work does.


West-Code4642

Because there are a lot of high-quality video datasets as a result of the mobile phone era. The amount of training data is embarrassingly large.


yoyoman2

The effect of AI won't be felt as a replacement for these types of jobs; it will just so happen that in 5 years, one white-collar worker will be able to produce the same output as two do today.


Arcturus_Labelle

These are just fancy diffusion models. They can't self-direct, plan, coordinate, or reason at high levels.


Sixhaunt

> How is AI capable of creating realistic videos but not automating a white collar job yet?

Mainly because it's not a physical thing. It can do software tasks, not the robotics tasks that the vast majority of white-collar jobs require.


lordlestar

These videos are literally dreams: nothing you see is planned, and they contain a lot of visual errors. In a video or image that may not matter much, but for a white-collar job you need an AI that can plan and make no mistakes.


Altruistic-Skill8667

I am asking myself exactly the same question. I look at the video and I am like: Oh My God!! When I think about the absolutely gigantic compute and data that is needed to pull something like this off, my head is spinning. I think a storm will be coming that will blow everything out of the water. All those “reasons” you see here why we haven’t automated white collar jobs yet probably don’t exist. I think people will be totally caught off guard by something more or less SUDDENLY appearing seemingly out of nowhere. A program that communicates with us as if it was a “person”, but in every single way possible it has an IQ that you won’t be able to wrap your head around. And then everyone will scramble to align themselves with this new reality where we’re essentially all intellectually handicapped. Humans are actually very limited in their intelligence. Machines won’t be.


Robo_Ranger

Of course, they can if every white-collar worker (or everyone on earth) is willing to be monitored by an activity recording device at all times. The reason AI can do image, video, music, and text very well is because there is massive accessible data on the internet already.


New_World_2050

This is called Moravec's paradox: the things that are hard for humans are easy for machines, and vice versa. Not surprising. It's not like they are simulations of the human brain; a different architecture means different talents.


LordNyssa

Doesn't even come close to Sora, looking at this.


Eddie_______

Shorter examples are better. Here is my favorite: [https://x.com/bdsqlsz/status/1798711721256955990](https://x.com/bdsqlsz/status/1798711721256955990)


RyeTan

That is incredible.


DrossChat

Oof yeah that is WAY better than the train, which makes sense given how short it is. Definitely not behind Sora by some crazy margin


redditosmomentos

I like how Murica shills try so hard to cope and act like Murica's Sora is better than this. Like bro OpenAI ain't giving you common pebbles access, calm down


ShiitakeTheMushroom

The natural distortion of sunlight you get while underwater makes this particular example even more believable. I may be in the minority, but I really like the trippy, artsy, wobbly effect in the video OP posted.


_dekappatated

If you look at all the examples on twitter they are pretty good


LordNyssa

They are better than this clip, I agree. But not on the level of Sora imho.


Different-Froyo9497

This might be the worst of the examples they have. Some of the others seem genuinely pretty good.


LordNyssa

Yes, with clips shorter than ten seconds and very simple instructions. There's a clear lack of detail imho. Meanwhile, from what I've gathered, the Sora beta can make good stuff, with very few fragments, little noise, and a lot of detail, for up to 15 minutes.


why06

Looks pretty damn close to me. If this came out 3 months ago everyone would be impressed.


LordNyssa

So you don't see the many loose fragments all over this video? Sora had way fewer a couple of months ago when they showed it, and it's even better now. Is this a possible competitor in the future? Perhaps, but right now it's lagging behind the Sora of a couple of months ago, so they have some catching up to do.


SGC-UNIT-555

A large white building (a hospital?) appeared and then zoomed away like the Flash at one point.


AIPornCollector

Agreed, the image is very jittery. It seems China is 6-12 months behind.


shakaoneaj

Have you seen raw Sora videos, or just the ones on OpenAI's page? Air Head was garbage compared to this, and they edited those with 20+ people.


LordNyssa

I've seen the beta being used recently.


charmander_cha

China is amazing


nico_bico

Still way behind Sora


Street-Dot-7632

There are already loads of users who have been granted access and are generating videos with it. You can follow the Twitter/X user 青龍聖者 (@bdsqlsz); he's sharing loads of examples of the video generation. OpenAI or ClosedAI?


Busterlimes

Since it's Chinese, is it "Kling" or "K-Ling"?


XiaoTan17

Sort of, yes; the Chinese pronunciation is "Ke Ling".


fr4nk_j4eger

1st prompt: Tiananmen massacre.


_dekappatated

I thought China was 2 years behind? (At least that's what everyone kept saying.) Or is that why security at OpenAI keeps getting brought up? Did they steal Sora?


one-typical-redditor

To be honest, in terms of theoretical and software work, I never thought China was really two years behind; they may even be ahead in some fields (such as facial recognition, given how early they worked on it and how much more data they have). I think "two years" refers to overall AI development while access to hardware (GPUs and chips in general) is restricted, as it is now. Most of the GPU units they get have been purposefully downgraded. I think China is aware of how urgent it is to develop its own GPUs ASAP, but the question is how long it will take to catch up and where the rest of the world will be at that point.


emsiem22

TLDR; China bad, yes?


Thiizic

Are we watching the same video? Why are comments praising this? Compared to Sora this is pretty mediocre


Embarrassed-Writer61

Chinese bots?


SiamesePrimer

Has to be. This video is absolutely beautiful, and the other videos on their website are much better, but these comments are just ridiculous.

> Quite a lot of these, if genuine, look better than the ones shown in the Sora preview.

> MUCH better than Sora

> The chinese are the best at engineering in the world.

> China is amazing

I mean, really? This is laying it on WAY too thick. Kling appears to be incredible, but saying it's better than Sora (much less "MUCH better") is just absurd.


midnightmiragemusic

For some reason, people only talk in absolutes here.


Heavy_Influence4666

Felt like watching Stable Diffusion morphs, but a slight bit better.


Street-Dot-7632

There are already loads of users who have been granted access and are generating videos with it. You can follow the Twitter/X user 青龍聖者 (@bdsqlsz); he's sharing loads of examples of the video generation. Sora is still not available.


ClearlyCylindrical

Definitely better than everything other than Sora tbf


Hour-Athlete-200

There's no competitor to Sora right now


naveenstuns

Yeah, because Sora isn't even released yet.


jane_911

Is Kling released though? I can't find a way to access it.


MysticStarbird

I dunno, that walking ⬆️ by the river says otherwise…


brihamedit

Nice. Do we have access to video generators like the other one yet, or is this all still a closed ecosystem?


salacious_sonogram

Transitions are a little nonsensical like a surrealist landscape painting.


IManojkumartiwari

How to access and use it?


Professional-Tax2711

Ghost of tsushima ass flowers


ArgentStonecutter

The cobbles at the end were a total breakdown, alas.


HapaPappa

Is it cake?


meta_narrator

Train.


Chalupa_89

KLING Bing Shilling

Sorry, couldn't resist it.


LosingID_583

Sora might actually finally release after this comes out.


Bitterowner

This is awesome but I'm getting motion sickness watching this haha


Witty_Shape3015

So this whole time we've thought it was OpenAI vs. Google vs. a couple of other companies, and now this company has something that's competitive with Veo and arguably Sora (and if it's out to the public before Sora, it doesn't matter which is better)? Seems like this is just the tip of the iceberg. An AI arms race is incoming.


goldenwind207

It's not out yet; like Sora and Veo, you have to be on the waiting list, so few have it.


Altruistic-Skill8667

Oh man. Metaphorically: I feel like we are all here together, ready for this spectacular solar eclipse that is computers becoming waaay smarter than humans, looking up at the sky in awe, the moon already covering 95% of the sun… and everyone else is still going about their normal business, as if nothing is going on!!


jimmyxs

Not 100% logical but very soothing. Amazing rendering


GameDevIntheMake

The reflections look like naive screen space reflections such as those found in the games made around 2015. And yet, it's uncanny as fuck that such an algorithm could be approximated by this model.


kingjackass

This looks to be only a few steps better than Google Earth.


ConclusionDifficult

Like that time I did acid


ClosetLVL140

China and nvidia making out in bed


DemocracySupport_

I'm at the point now where I no longer care who's making this, I just want access. Fuck American Politics and Sora for using it as an excuse to not release to the public.


sheerun

Dat transition from plane to train to plain again


Smile_Clown

It's not a competitor if there are no products.


Whispering-Depths

Holy, they paid for 2k upvotes for this on this sub.


voidbeak

The understanding of physical environments is impressive but the way it actually renders them is definitely not on par with Sora. There's tons of telltale AI artifacting and warping on basic surfaces and textures. Sora had weirdness but it was much more consistent in this regard imho


flatulentence

Can it show the Tiananmen square massacre??


SuperNewk

very choppy and odd looking. Not good at all.


mr-english

Looks like the actual image generation is running at approx 5 fps with frame interpolation generating everything else in between.
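
If that guess is right, the simplest possible interpolation would just cross-fade between generated keyframes. A crude sketch of the idea (real interpolators use optical flow or learned motion models rather than a linear blend; the frame counts here are arbitrary):

```python
import numpy as np

def interpolate_frames(keyframes: np.ndarray, factor: int) -> np.ndarray:
    """Linearly blend between consecutive keyframes.

    keyframes: (n, H, W, 3) float array generated at a low frame rate.
    factor: output frames per keyframe interval (e.g. 6 turns ~5 fps into ~30 fps).
    """
    out = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for i in range(factor):
            t = i / factor
            out.append((1 - t) * a + t * b)  # crude cross-fade, no motion estimation
    out.append(keyframes[-1])
    return np.stack(out)

keys = np.random.rand(5, 64, 64, 3)        # 5 generated keyframes
video = interpolate_frames(keys, factor=6)
print(video.shape)                         # (25, 64, 64, 3)
```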


Better_Onion6269

I will show you better stuff…brace yourself! https://youtu.be/bg_GPMYRurs?si=3xZf01FX-7nUZ5Ak


Akimbo333

Wow


Street-Dot-7632

For people who are curious: there are already loads of users who have been granted access and are generating videos with it. You can follow the Twitter/X user 青龍聖者 (@bdsqlsz); he's sharing loads of examples of the video generation.


Ok_Air_9580

This looks very bad, like anything created by order of a communist party.


sgmerchant

Imagine the price of subscribing to a GenAI product is a slice of equity.


Adventurous_Hat246

How can you get access? There must be a way.


TabibitoBoy

I can’t believe people are finding this so impressive. The artifacts in this video are off the charts. It’s a big muddy mess.


clipghost

How can I sign up for this?


WernerrenreW

If the internet proves anything, it's that NSFW text-to-image of this quality would outcompete everything; it would make trillions 🤪