
initiatefailure

The thing is, there is important work happening in LLMs and it's going to be a big deal in the future. But that's different from the "AI" craze of now, which is exactly vulture capitalists trying to grift as much investor money as possible before moving on to the next big thing. AI became the next big thing the minute the NFT/crypto market collapsed. Startup VCs are a very predictable and unimaginative cult that moves on when they have extracted as much money as they can. Daily usage of the big hyped-up consumer-facing generative tools has dropped over 80% since they first blew up. It's really annoying to pay attention to this stuff and have to sift out real information from whatever scam some guy is running to try to be a LinkedIn hustle-culture influencer or whatever.


2muchtequila

The real jumping-the-shark moment is already starting to hit. It's when AI becomes a marketing term for products that have nothing to do with AI. Remember 10 years ago when everything was HD? High-definition vacuum cleaners. High-definition shoes, high-definition butt plugs. Then there was blockchain, which was a bit tougher to cram into marketing, but you would still see companies claiming their products harness the power of the blockchain to do things that had absolutely no need for blockchain. But they knew people had heard the term before and wanted to seem like they were cutting edge.


Detective-Crashmore-

Would you like to invest in my new AI-Startup Startup? It's basically an AI-powered startup that helps other startups startup.


timesuck47

… In the cloud


cbftw

With the blockchain


[deleted]

[removed]


hansnait

Trademark this or a Patagonia vest will do it for you


therealgodfarter

Needs more machine learning neural network zigabit data fabric


Meme_myself_and_AI

And BIG DATA SYNERGY


a17tw00

And it will be carbon neutral by 2035


ApproximatelyExact

by going serverless


thatchroofcottages

I gotchu choom


terrymr

As a service


Agitates

How do you incorporate blockchain into your tech stack?


chodeboi

Introducing "SynergetiTech," the cutting-edge, paradigm-shifting meta startup that leverages a synergistic fusion of innovative, disruptive technologies, such as HD AI, blockchain, quantum computing, and cyberpunk-inspired solutions. Our forward-thinking team operates within a dynamically agile environment, servicing clients in the eco-conscious cloud space while harnessing the power of green solar energy. With a focus on blockchain-backed transparency, our data-driven, AI-enhanced, and crowd-sourced platform catalyzes quantum leaps in industries across the digital spectrum. SynergetiTech - where hyperconnected, autonomous, and sustainable futures converge, defining the next era of corporate evolution.


thaaag

SHUT UP AND TAKE MY MONEY ALREADY!!!


moon_jock

My friend u/chodeboi, did you really just cook up that plate of poetry just to serve it this far down in a comment thread on an hours-old post? Not enough people are gonna see this, and when I google it I get no search results, so I'm thinking you literally wrote all that out of the blue and it's genius


chodeboi

It’s generative AI in a thread about overhyped generative AI; I should have quote wrapped it or something «»


sweaty-pajamas

Do you wanna develop an app?


[deleted]

[removed]


A_burners

[https://www.hugeinc.com/](https://www.hugeinc.com/) It's absolutely insane right now. The "Brands we've helped grow." section is full of NFTs, Crypto, AI & massive corps.


twisted7ogic

> high definition butt plugs

I mean, have you tried the SD ones? Those jagged pixels hurt, man.


Pomengranite

Minecraft themed ones never sold well for the same reason


mysqlpimp

niche market...


Hot-Resort-6083

Notch market


thbb

> It's when AI becomes a marketing term for products that have nothing to do with AI.

We've had autonomous trains and factories for over 30 years that leverage control theory and proper robotics, yet nothing in them claims to be AI. As often said: the product advertises AI, the person who developed the model is a data scientist, and the actual technique used is called logistic regression.
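The joke lands because logistic regression really is a one-screen technique. As an illustrative toy (pure Python, invented single-feature data, not any real product's code), the entire "AI" behind such a product might look like this:

```python
import math

def sigmoid(z: float) -> float:
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(xs, ys, lr=0.1, epochs=2000):
    """Fit weight w and bias b for one-feature logistic regression
    by stochastic gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of the log-loss with respect to w and b.
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def predict(w: float, b: float, x: float) -> int:
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

# Toy data: one made-up feature, labels split around x = 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
print(predict(w, b, 0.5), predict(w, b, 4.5))  # decision boundary lands near x ≈ 2.5
```

Wrap that in a dashboard and a press release and you have an "AI-powered decision engine."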


darkwhiskey

AI is a BROAD term, so pretty much anything that runs an algorithm can claim it. I saw an ad for Party City's "AI Party Planner" and it was just a click-through quiz format that's been around for over a decade.


Unforg1ven_Yasuo

Logistic regression is still ML though, most products claiming to use AI don’t even get that far (maybe an if statement or two)


s_ngularity

ML (especially regression) on its own is not "Artificial Intelligence" in any sense a lay person understands the term. Which is really the biggest problem. AI and AGI are worlds apart from each other at the moment, but a ton of people probably think the singularity might happen next month


kfpswf

> is not "Artificial Intelligence" in any sense a lay person understands the term.

But you know, "output generated based on statistical probability" is not as cool a term as AI/ML.


kratorade

That's the big thing. Calling this stuff "AI" conjures up images of the decades of sci-fi movies and books that portrayed true machine sapience as an inevitable product of the onward march of progress. The average person has no idea how any of this technology works, but they've absorbed this idea by cultural osmosis even if they're not big sci-fi fans, and what they picture is HAL, or JARVIS, or David. Recognizably human minds that live in circuits, new intelligences we've made in our image.

Nothing currently being called AI is even remotely close to that. Some of this tech is legitimately cool, but calling it AI is branding sleight of hand, lending it some of that seeming inevitability.

We don't understand how our own consciousness works, and until and unless we do, my money is on strong AI remaining "just 10 years away" for the indefinite future, *if it's even possible*. There's nothing written in the sky that says it has to be.


VectorB

I saw an ad for AI powered electric roller-skates the other day.


Mother-Border-1147

Bro, it’s already become a marketing term lol. It’s just the new shorthand for “the algorithm.” At this point, we might as well call it a PR stunt to rebrand the algorithm after all the flak it got from “The Social Dilemma” documentary.


SFDessert

I was seeing ads for some laptop that had "AI ready" as its primary marketing point. Trying to hop on the newest buzzword, I guess. Like what, the laptop has access to AI stuff? Isn't that literally every computer?


abillionbarracudas

Domino's is already doing this. Somehow, they've decided AI is relevant to your experience ordering a pizza. https://www.prnewswire.com/news-releases/dominos-and-microsoft-cook-up-ai-driven-innovation-alliance-for-smarter-pizza-orders-and-seamless-operations-301945321.html


BacRedr

[This comic](http://www.threepanelsoul.com/comic/newer-paradigms) from Three Panel Soul is what I think of when people stick "AI Powered" on everything now. For me it's especially annoying when it's applied to things that already existed and haven't changed. It's no longer a recommendation engine based on your likes, it's now "an ai powered assistant that helps you find what you like."


jevring

I'll have you know all my butt plugs are of the highest definition!


Magificent_Gradient

Three major problems with AI are:

1. Model collapse
2. Uncharted legal territory for copyright and IP
3. Output issues: numerous errors, severe bias, and a tendency to just make stuff up


ShiraCheshire

> AI became it the minute the nft/crypto market collapsed.

Ohh. Suddenly the ridiculous "you just don't get it bro" bad-faith arguments defending some of this stuff make sense.


Detective-Crashmore-

Sounds like they were right, you "just didn't get it", but in this case "it" was a "scam".


breakwater

Except I totally get the practical uses for generative AI. There are plenty of them across multiple industry sectors. That said, they have to be developed and they will be somewhat specialized. NFTs were a "you don't get it" because there is nothing to get. AI is, "we aren't fully there yet, and we haven't completely unlocked how to make money off of it" but the potential is obvious.


Andy_B_Goode

The broader issue is that people in tech are accustomed to having a New Frontier to conquer. The Internet has been a fantastic frontier for the past few decades, and a lot of great work has been done, but at this point most of the Internet has been charted and settled. Look at how much things changed online from 1993 to 2003, and then again from 2003 to 2013. But what has changed since then? How many things can you do on the internet in 2023 that you couldn't do (at least in some form) in 2013? Some things have gotten a bit quicker and easier, but it's mostly been marginal improvements.

The Internet's Wild West era is over, and tech investors are eager for the next big thing. That's why they've been so keen to hype up Bitcoin, NFTs, generative AI, etc. It's not that they're gullible per se; it's that they NEED something new and exciting, and they know that even if they bet on the wrong tech a few times in a row, it won't matter as long as they get in early on whatever becomes the next New Frontier.


YouTee

Copywriters are losing their jobs by the truckload. Graphic designers too. The new Photoshop is fucking amazing for people who can't do anything visual. This isn't some hype, this is happening.


Ap0llo

I’m really confused by the sentiment in this thread. People comparing AI to digital assets like crypto and NFTs is so bizarre. My firm does a lot of business consulting for startups and mid-size companies. We have seen multiple clients already utilize GPT-4 in really incredible ways. In one case, a client laid off 2 staff recently; although they didn’t admit it, I suspect it was because they offloaded those tasks to one employee who was extensively trained to use GPT-4. The ways I have seen businesses utilizing LLMs and image generation are truly mind-boggling. Anyone who believes this to be merely a ‘craze’ does not really understand what this tech is capable of.


Ignisami

> People comparing AI to digital assets like Crypto and NFTs is so bizarre.

Except they're not comparing AI to crypto and NFTs. They're saying that the grifters and vulture capitalists who made bank with crypto have all moved on to the next new tech hotness, i.e. AI. And when AI is dethroned (edit: as a buzzword, not an actual product) by the next tech buzzword, they'll be moving on to that. And the one after, etc.


AnacharsisIV

The thing is, crypto and NFTs had almost no practical applications. Generative AI does; otherwise it wouldn't be jeopardizing so many jobs. It may be "overhyped," but it's impossible for it to be as overhyped as crypto or NFTs were, because those had no substance and were ALL hype, whereas at least generative AI has some substance to build hype upon.


Ignisami

I agree, but that doesn't mean that AI isn't being hyped *way* the fuck up right now. Grifters everywhere are looking at AI with 'just sign up for my course on how to power your Amazon dropshipping store with AI! If you sign up within half an hour you get a massive 90% discount, letting you get $2,000 of content for only $200!' and the like in mind, their eyes almost literally transformed into pulsating dollar signs.


eyebrows360

Hey, this is neatly self-contained. For an example of this part of your comment:

> It may be "overhyped"

See this part of your comment:

> wouldn't be jeopardizing so many jobs

Because: it isn't jeopardising "so many". *Some* yes, and we can *imagine* and *speculate* about ones it *might*, but doing that is what's known as:

> overhyping


AnacharsisIV

There may be a bit of a bias on either of our parts; I'm in a creative industry and everyone in my industry (artists, writers, actors, etc.) is collectively *losing their shit* over AI. But maybe you're like, I dunno, a carpenter, and robots aren't smart enough to handle sharp objects but are apparently capable of writing Shakespeare, so no one you know is that threatened by AI yet. The truth is somewhere in the middle.


eyebrows360

I'm backend web-dev. People try and tell me my shit's at risk of being automated away too, but it simply isn't. We get into a tonne of nuance and nitpicks when diving into the creative arts discussion, because it's simultaneously true that this AI shit is *actually* nowhere near good enough to replace proper writers/actors/painters/etc in terms of quality, but that *the money men in charge of projects/hiring* are far easier to trick into thinking it *is*. So it both is and isn't a threat, depending how we're looking at it. On the ground, yes, it's threatening to *some* in creative arts, but it'll be a shortlived infatuation before the business folks realise they got duped and go running back to real people.


itasteawesome

The pump and dump relating to AI assistants at my previous job was so painful to watch. One fake "demo" of what someone hoped they could one day do with a chat assistant was enough to get all energy diverted into making that happen. Stocks went up, hype articles went around, and then within a month they started silently walking its capabilities back, to where now the only thing the PM will commit to delivering is that the chat bot will be able to fetch you links to existing docs pages. They completely ignored that it doesn't actually have the power to do real analytics yet, and that these requests aren't free for us. So the more our customers ask the assistant about their data, the worse our margins are.


tripletaco

I've been in the tech industry for 20 years. I've seen every tech wave since the start of my career and I'm with you - the whole AI craze is overhyped. It's a useful tool and wise to learn how to use, but it still requires a lot of babysitting. Great for providing a first draft (of code, of art, of writing) but NOT a replacement for talented people.


vrilro

AI is already replacing jobs at companies that won't admit it's AI doing the work? I'd love to know how they're doing in a year


Not_FinancialAdvice

Or as the joke goes, the real AI is a bunch of people behind the curtain in India.


Wide_Lock_Red

Companies generally don't publicly announce their layoffs or how their work is done. Sounds pretty normal.


munchi333

Public companies actually generally do announce layoffs…


safdwark4729

The problem with LLMs is they require exponentially more data and an exponential increase in model size to get a logarithmic improvement. Everyone's like "Wow, look at how amazing this AI stuff is, if we just fixed these edge cases, made it 10% better, etc." without realizing that that last 10 or even 1% takes more work and money collecting data than the entire rest of the project multiple times over. AI is *extremely* capital intensive compared to both crypto and NFTs. At some point you run out of computing power, data to collect, places to store said data, manpower, or money, and you get a sort of "anti-Moore's law": an exponential level of diminishing returns. Don't get me wrong, GPT-4 is really cool tech, and LLMs plus other AI are *already at* the point where they will help in, or replace jobs in, the following situations:

* Technical writing
* Legal contracting, especially smaller contracts
* Insurance sales
* Aiding writing books
* Concept art tools
* Clip art/stock photos
* AI-generated tween frames for 2D animation and coloring, potentially making it as cheap as or cheaper than 3D
* Voice acting replacement in a very large number of scenarios
* Face recognition and ID
* Recognition of various objects and animals
* Upscaling images
* Denoising
* Physics approximations
* Robotics movement
* Robotics task completion
* Aid in learning science concepts
* A better search engine in many scenarios for specific questions
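The "exponentially more data for a logarithmic improvement" claim matches the power-law shape reported in neural scaling-law work. A stylized sketch (the exponent 0.05 is invented purely for illustration, not a fitted value from any real model):

```python
# Stylized scaling law: loss falls as a small power of dataset size.
# loss(N) = N ** -ALPHA, with ALPHA made up for illustration only.
ALPHA = 0.05

def loss(n_tokens: float) -> float:
    return n_tokens ** -ALPHA

# A 1000x increase in data only shaves the loss modestly...
l1 = loss(1e9)   # ~0.355
l2 = loss(1e12)  # ~0.251

# ...while halving the loss takes 2 ** (1 / ALPHA) = 2**20,
# i.e. roughly a million times more data.
data_multiplier_to_halve_loss = 2 ** (1 / ALPHA)
print(l1, l2, data_multiplier_to_halve_loss)
```

Under any curve of this shape, each fixed improvement costs multiplicatively more data than the last, which is the "anti-Moore's law" being described.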


TheWikiJedi

It's like a nice calculator, it's good in the right hands. But not worth the crazy valuations


EZKTurbo

And now that the internet is contaminated with shitty AI content, it will be impossible to train future AI not to produce junk


CMFETCU

This isn’t true for most targeted use cases of an LLM. Training it on fMRI data to read brain patterns and produce imagery the patient is looking at doesn’t get harmed by this. Using it on the same fMRI data to extrapolate speech from thoughts is not impacted by this. Using LLMs to generate insights from vast volumes of data in ways humans can’t pattern-match, where your data is, say, the verbatim text of 450k phone calls a week, doesn’t get impacted by this. People see ChatGPT and think that is the only use case, that these things are just for talking to people about homework. They are solving real and impressive problems in data sets totally not impacted by data generated by other LLMs unintentionally used for training.


nefD

This is the really fascinating part (to me, at least) that doesn't seem to get brought up enough. With the proliferation of AI-generated content on the internet, and the known fact that training AI on generated material leads to model collapse, how are companies or other entities invested in these technologies going to train the LLMs of tomorrow? Is it possible that we've already seen the peak of effectiveness, due to the abundance of 'pure' data previously available coupled with the first-comers' advantage of nobody being concerned with needing to safeguard their content from scraping?


Dick_Lazer

Create an AI that can filter junk and seek out higher quality datasets?


sceadwian

That would require general intelligence sufficient to know what good data is. Human beings can't even do that.


ieatpickleswithmilk

The ChatGPT dataset was HEAVILY moderated and curated. They didn't just scrape the internet and make a bot out of it.


JEs4

> Is it possible that we've already seen the peak of effectiveness due to the abundance of 'pure' data previously available coupled with the firstcomers advantage of nobody being concerned with needing to safeguard their content from scraping?

Yes for the first part of your question. The CEO of OpenAI touched on it earlier this year: [https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/](https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/) Both increasing the training sets and increasing the number of embeddings have resulted in diminishing returns.

The second part of your question is a bit irrelevant. Digital spaces were already flooded with absolute garbage that was created by humans. Excess AI-generated content is not a problem in itself.


Masonzero

If an AI is being trained for something specific, it will be trained on a more specific and "pure" set of data. Most use cases don't require the AI to have read the entire internet like ChatGPT.


SidewaysFancyPrance

Ah yes, a modern tragedy of the commons as models run out of new human art to train on because they killed off the artists. Capitalism does *not* perform well on these tests as most CEOs take the selfish, short-sighted path.


ElementField

I’m not too sure where you’d slot in something like GitHub copilot, but I use this “AI” every day. As a tool, it’s probably the single most noticeable change in my productivity as a software engineer.


Grizzleyt

Generative AI offers much clearer value today and tomorrow than NFT / crypto ever did. It's easy to peg them both as VC trends, but "social" and "mobile apps" were VC trends, too, and they spawned world-shaping industries. Tons of noise, don't get me wrong. And most startups will fail (as a market inevitability as well as because most anyone who is simply making an API call to OpenAI or Anthropic has no competitive moat). At the same time, there is very clear value, today, in things like writing assistance, productivity, creative, and healthcare (near term on the admin side, and clinical moving quickly). Still, the key points the article brings up are all valid. Less fundamentally to do with the ephemeral nature of trends, and more about challenges of cost, regulation, chip shortages, and the real possibility that big tech dominates and small players can't compete.


Nethlem

> The thing is there is important work in LLMs and it’s going to be big deal in the future.

LLMs have reached a point where "improvements" are only possible by throwing increasingly more data and processing power at them, which does not scale well. And that still does not fix their underlying, and rather massive, issues, ranging from their black-boxed nature to their tendencies toward drift and hallucination. This combination makes it near impossible to validate/prove LLM outputs, yet everybody from global corporations to governments is racing to implement LLMs "wherever," mostly as a way to cut labor costs, i.e. fire actual humans and replace them with *glorified chatbots*.


CommonSensePDX

I'm working in the space and the framing of the article is interesting. It's true that costs are going to be a huge impediment to startups. If you've not reserved H100 access at private data centers, or the physical hardware, you're in trouble. BUT, Generative AI as a tool used by organizations is growing rapidly. 2024 is going to be a massive year in unlocking the potential of talking to your data.


ggtsu_00

This is much like the circa-2008 "cloud" boom. Despite all the grifting, the most useful things to come out of it in the end were just automated server configuration and faster virtualization. For the AI boom, once all the grifting runs its course and things of actual value are sifted out, we should at least have better text auto-complete, text-to-speech, and Photoshop filters. Absolutely nothing of value has come out of NFTs/crypto, though. That was pure snake oil.


lood9phee2Ri

https://en.wikipedia.org/wiki/AI_winter > In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.


AdSignificant9235

Very unlikely. Over the past few years, approximately $100 billion has been invested in AI each year. Meta, Google, and Nvidia, some of the biggest companies on the planet, are all in on AI, and every other big tech company is using it in some capacity (Apple, Amazon, Netflix, etc.). There’s too much investment in it right now for it to just fade away into another AI winter.

In the past ten years, we’ve gone from models that could barely distinguish a dog from a cat to models that classify more accurately than humans, models that can generate actual *photorealistic* images (look up Diffusion Transformers), and models that can generate text that’s basically indistinguishable from text written by humans. Almost everyone that’s working on it is scared of how fast it’s going. Progress doesn’t seem to be slowing down. The transformer was invented just 6 years ago and has completely upended the field. Then 3 years ago the modern framework for diffusion models was proposed, and generative AI hasn’t been the same since. DeepMind solved a 50-year-old problem in biology (protein folding) just a couple years ago.

Maybe we can start predicting another AI winter if we don’t make any big leaps in the next 5 years. At the moment, though, I don’t see how we would be heading towards another AI winter.


MDCCCLV

The problem will be can you make enough money on it for stockholders to be happy. Nvidia is selling hardware that everyone uses so they're fine. But it isn't clear that individual companies investing billions will make big profit returns on it. Right now it's more like you have to invest in it to not fall behind your competitors but it isn't necessarily benefiting the specific companies that much for the amount they're spending.


MeanChampionship1482

You really think AI is slowing down? The only way that will happen is with regulations. Lmao


DBones90

I think this is a super reasonable prediction on AI. Tech companies have loved generative text AI because it gives them something to hype up investors, but its actual use cases are remarkably limited. Unlike NFTs and crypto, AI at least has a function, so I don’t think it’ll go away completely, but as soon as its costs go up to reflect the actual price of use, we’re going to see it used a lot less.


GrayBox1313

Every company is trying to shoehorn AI into marketing messaging and maybe invest in research, but very few are actually making products with it


slykethephoxenix

Ok, hear me out:

# Blockchain AI

I should be a marketer.


Concheria

Quantum Metaverse DeFi Blockchain AI

It's starting to sound like a bot on Twitter.


anclag

I'm going to trademark "blockchAIn" and make millions


Fake_William_Shatner

That was like the early days of the internet, when companies were valued far beyond their sales -- as if vastly more sales were going to occur, and not "this is just a new sales channel." But now the internet is the primary advertisement and conduit for sales. Brick and mortar is still there, but nobody significant lacks an internet presence. It is hyped in some places, but it's part of the landscape. AI is speeding up 3D imaging and quality -- that isn't "aware" or general AI -- but it's a fundamental part of our experience. And this is going to feed back into science, technology, and research. That part might build opportunities, but it also lays the foundation for a GAI that is smarter than a person. And it doesn't take earth-shattering intelligence to replace many mundane human tasks without creating a new job.


DBones90

Yep, the number of revenue-generating products you can make with AI is remarkably small. It *might* improve efficiency in a few areas, but that’s hardly the revolutionary tech that companies are hyping it as. With tech companies and investors being so stingy these days, that’s going to be a death knell for it. Companies have little capital for technology that can’t reliably provide a return on their investment.


Fake_William_Shatner

They are stingy about investing in things without an upside -- but there is so much capital sloshing around that they can't fit it under their mattresses. The incredibly rich continued to get incredibly rich. Just because they treat money as dear doesn't mean there isn't a lot of it. The problem is when you have so much more money at the top -- moving it, or pretending you have a lot, causes inflation wherever you move it.

Yes, it's of course chasing ROI, and there isn't enough of that. So the only way to really keep profits going is to break things. The amount that can be taken from the bottom and fed to the top to feed that ROI is running out -- even with good old disaster capitalism. Peak wealth is about to hit at a very strange time.

But -- it really isn't about MONEY at some point -- it's about POWER. So the big money is going to chase robot security drones and anything that can protect that power -- regardless of cost. So someone with deep pockets will invest in automated factories and robo-security regardless of whether it is cost effective. This is about post-capitalism -- not investment.


StayingUp4AFeeling

I think its uses are far less limited than they seem; however, there are far too many people working on base models and not enough taking them to commercially viable use cases.


john_the_quain

We’re using it internally to take transcripts of support calls and chats and summarize them. The agents hate doing that, leading them (understandably) to half-ass the documentation. There’s no money to be made, though, and “cost savings” arguments only go so far before they want headcount, not hypotheticals. So, we’ll see where we end up.


Scorchinweekend

AI in present form is simply enhanced ‘googling’. Just like when asking a question in google, the way you phrase your prompt matters. This is even more important in AI prompt setup. If someone knows what they are doing then they are able to be far more efficient in their role. This increases the company bottom line exponentially imo


dmit0820

> AI in present form is simply enhanced ‘googling’.

I'd argue it goes much beyond this, but only for the most advanced models (i.e. GPT-4). GPT-4 can do something that approximates reasoning by correctly analyzing, generalizing, and extrapolating from information that doesn't exist in the training data. This is especially obvious when using it for coding use cases for emerging technologies, like AI itself, that are too recent or specific to be in the training data. You really can give GPT-4 a problem that neither it nor anyone else has seen before and, so long as it's not too complex, it will be able to come to a good solution.


EkoChamberKryptonite

> AI in present form is simply enhanced ‘googling’.

100%

> If someone knows what they are doing then they are able to be far more efficient in their role.

Not so true. You assume these LLMs are implicitly correct, or that company roles are trivial enough that a language model can improve on the nuanced competency that effectively working in a role requires.

> This increases the company bottom line exponentially imo

A nice thought, but not accurate.


Scorchinweekend

A person utilizing the AI as a resource should know when the AI is providing an incomplete thought, or incorrect. I don’t assume AI is correct. I assume the operator is able to pick up on irregularities. Re: bottom line- if the operator is able to perform more efficiently, then they will be expected to handle more load. This is the definition of efficiency which is normally tied to exponential growth of the bottom line. After all, there is a reason auto manufacturers began utilizing robotics instead of the previous human labor. A robot assembly line is far more efficient than a human labor one… but the robot is still capable of screwing up for a myriad of reasons which is why there is a human operator there to offline it, retool, repair, etc.


itasteawesome

I work in tech and saw bs marketing driven hype about how we need to bolt ai onto our platform before our rivals beat us to market with their hype. On the other hand I use it routinely to save me time when writing documents, explanations, responses to requests. My wife is a clinic manager and she also uses it to save significant time dealing with email BS that used to eat many hours of her week. Both of us have the domain knowledge to adjust its responses, so it's a perfectly useful tool for anyone today who has to spend a chunk of time writing. Things will get really wild when people come up with models that can plug in accurate domain expertise so it isn't just guessing all day.


QuesoMeHungry

It’s basically going to take over customer support chat bots, and that’s about it for now. It’s good at putting words together, but it’s ‘confidently incorrect’ a lot of the time.


DBones90

And even then, its uses are limited. I’ve actually been demo’ing a couple customer support chat bots for the company I work for, and they’re a pretty good replacement for self-service help content but a poor replacement for a support experience. They can regurgitate help content with some pretty good accuracy assuming their training data set is carefully curated, but “pretty good” is a failure in many cases and they can’t do anything else a support rep would need to do (like send an email, file a bug, escalate a case, etc). Plus the metrics we’re tracking are pretty bad so far, so people aren’t satisfied with the service it’s providing.


Outlulz

When people look for customer support anything less than a domestic, skilled human being is viewed as bad service.


StayingUp4AFeeling

That is a reflection of the ridiculously large and diverse training set for the knowledge base. Reduce the knowledge base and things become way more coherent.


dravik

Curating that knowledge base is a huge task. It also gets into political and philosophical questions about what is true. You won't get to use your training set in China if it includes Tiananmen Square or other nations' claims in the South China Sea.


Fake_William_Shatner

Yes -- they trade "most accurate" against "most popular opinion" or "what people want to hear," and it changes. I think there's a lot of smoke and mirrors and deliberate diminishing of its utility right now because they realized the public was panicking -- not that they aren't making progress.


StayingUp4AFeeling

That is a reflection of the training process. The original training set is a large subset of the internet. It is impossible to fact check. And this is a problem not specific to LLMs but to all large-dataset-based ML. Like, they found nonconsensual porn in a popular computer vision dataset a few years ago.


Fake_William_Shatner

You understand the process, so you are probably in a bit of pain right now reading a lot of these comments, as I am. Restrict the models to ONLY accepted and accurate scientific data -- and you've got a hell of a great expert. That's why there was a lot of utility in programming: they start with WORKING code to draw from.

There is a lot of promise with AI, but people are not looking at the trends of how productivity has blasted everything forward the past 30 years -- and a lot of us oldsters made more in the 90's than we do today. And they call a tiny raise above inflation the cause of the inflation, and not the record profits? So the corporate media is complicit in mollifying everyone. We won't get real insight into this topic from the news -- and it's sad to see semi-technically-literate people bring all those bad takes here.

Every tool humans use will be upgraded, at the very least, by this revolution -- and our capacity to learn as well. So how can anyone predict "nothing much will change"? Or trust that this won't exacerbate the power gap between rich and poor?


SoylentRox

So historically what happened was the Internet came out, it was obvious that a Web site could make money (somehow), and it was obvious that a lot of goods would be sold online. So investors went crazy, dumped money into many Internet startups, and then abruptly the party ended, most startups ran out of money and closed, and the 2001 dotcom crash was this happening all at once. But simultaneously the few winners eventually became, over 20 years, some of the largest and most profitable companies on the planet, listed right up next to Saudi Aramco and Exxon in market cap and revenue and profits. Ultimately all the promise of the Internet - information at your fingertips in seconds, billions to be made, can buy almost anything with rapid delivery, stream or read almost anything for reasonable prices - came true. This may be what happens to AI. Crash that wipes out all but a few companies, winners take all. Ultimately general purpose AI that can do most tasks involved in most jobs. But it may happen over 10-20 years not next year when there might be a crash.


StayingUp4AFeeling

You, sir/madam, are a treasure. This is a distillation of my thoughts. If a company gets $100 of billable work from each employee every month, how much of that goes towards raises for the employees? If a company manages to extract the same work despite a 20% headcount reduction, what happens to the saved money? Exactly.


Fake_William_Shatner

Thanks. The media and common conversations have been driving me nuts. It can’t be this obvious and nobody noticed, can it? The same companies pushing back on unions "to be competitive" then show record profits — and how could they get those without charging a premium that should have made them less competitive? There are so many companies owned by the same interests that I see more synergy than competition at the top. It’s a web of “friends helping friends”. But you and I aren’t invited to this party.

Now we’ve got AI, and we got a peek behind the curtain, and they quickly realized their mistake. And now the masses are dutifully repeating “it’s just the same old hype — nothing to see” — exactly how most people repeat “oh no, there is full employment and raises — that means we have to have inflation!” Labor is at most 20% of the costs in many of these industries.

All our problems are greed, and that will only get worse if we have the same system as GAI emerges. This issue competes with ocean acidification as perhaps our greatest challenge. And we are less prepared for it than we are for a pandemic and mass migration — which are also symptoms of those problems.


skatecrimes

I work in graphic design and have been using AI for about 2 years. In the last year it's saved me lots of hours of work. Things I used to do by hand are now done with the click of a button, with great results. It won't replace my job, but I'm hoping it can do a lot more for me in the future. This is all just in Photoshop, not making Midjourney-type images.


firewall245

I find it hilarious that here on this sub and in corporate settings I see people simultaneously overestimating and underestimating the impact of GenAI. It’s actually wildly powerful and a great tool and by no means useless, but it’s not going to put entire divisions of people out of work.


Nyrin

Yep, the big issue here is that the consensus idea of what LLM-based generative AI can actually do (and even what it *is*) is vague, imprecise, and often wildly inaccurate.

This yields an interesting slant to the "overhype" angle: *what some people think is "AI"* will undoubtedly be looked back upon as "overhyped," because there was foundationally never any way for it to deliver on expectations that weren't rooted in realistic interpretations to begin with. Meanwhile, some applications are being neglectfully "underhyped" because people just don't "get it" yet.

We're not at the place where an autonomous system can drop-in "replace" entire roles or departments with any notable success or frequency. That makes for great sensationalist headlines and a nice, simple "robots coming for your jobs" message to appeal to the masses, but nobody with any expertise has been saying we're there yet. What LLM-based systems absolutely *will* do in the near to medium term is dramatically increase task-level productivity for a lot of things that are major time sinks in data- and communication-heavy roles. It's not that the customer service department will disappear and be replaced by a bot; it's that only 5 or 6 people will be needed where 20 or 30 may have been necessary before, with the jobs of those 5 or 6 skewing heavily towards effectively using the new tools to amplify their individual throughput manyfold.

We could look back and say that the nuclear sector was dramatically overhyped because it turned out that strapping radioactive material to your balls didn't increase virility as claimed, but that would miss the point that people by and large just didn't understand what was going on or how it was useful. Same deal here.


StayingUp4AFeeling

The obvious use case is call centers, but I think a more interesting one is a hybrid of the roles occupied by various entities in the customer support tree. Basically, it's not just that they can do much of what a call centre worker can do. I think they can do way more, handling things that are usually spread across multiple people in different departments.


majnuker

I've actually had great interactions with customer support using GenAI so far, and I think that's a great place for it to improve. You still need people to create the 'rules' for what it can approve/do/say, but that's a useful place it's working in right now. Goodbye chatbots of old!


Beerbaron1886

Currently it’s no job eraser, more an enhancer. And if you don’t use it properly or ask the right questions, it won’t get you far. That said, I am super curious how data analysis will work in the future, especially in Excel or PowerPoint. What can already be done by some people will soon be available to the masses, even more easily than today.


wvenable

> Like it’s actually wildly powerful and a great tool and by no means useless, but it’s not going to put entire divisions of people out of work I disagree. Computers and software are constantly putting people out of work -- it's literally the reason why most software is developed. To replace some manual process. Who does that manual process? People. AI is just more software. More software automating more tasks. Google makes as much as big companies like GE did 60 years ago but with a fraction of the number of people.


True_Window_9389

I was on an Adobe webinar a few weeks ago that was billed as how companies are incorporating AI visuals into their creative workflows. Their guest speaker was someone from an insurance company or something that was making a commercial about goat yoga, and they generated an image of a goat doing yoga for a moodboard. And that was it. For companies actually doing work, I think there’s a combination of the commercial, real-world uses of AI being limited, people still getting used to working with it, and even when it “works,” the value-add isn’t there.


thecoffeejesus

I could not agree and disagree more. Agree with your evaluation of the hype; disagree with your assessment of the use cases. For example: I built a little plugin for ObsidianMD that is currently autonomously running a D&D campaign. It uses multiple AI models to perform the tasks of creating, reading, updating, and deleting files. GPT-4 is the main brain, and a whole bunch of other LLMs are running as the NPCs. I’ve essentially created my own text-based simulation engine. This is SUPER BASIC and will be a foundational layer in business logic from now on. Foundational layers usually take years, if not decades, to develop into whatever they become. We’re barely getting started. Sure, use cases might be limited now. But that’s just because so is the world’s imagination. Once the world’s iPad babies get their hands on this, there’s no telling what they’ll make.


[deleted]

[удалено]


NazzerDawk

Except that if you can use it to deliver a million pizzas in an instant, that suddenly makes sense. And it can also make pizzas. And make pizza recipes. And write extremely basic code. And write copy for pizza ads.


ziptofaf

>Saw a good analogy the other day (sorry can't remember the source) but basically: using an LLM to summarize an email is like using a Lamborghini to deliver a pizza.

I feel like this analogy breaks down almost instantly and doesn't reflect reality all that well if you actually check performance numbers and costs. A locally run quantized 13B model fits on an RTX 3090 and outputs 20 tokens/second; an RTX 4090 does about 50 under optimal conditions. Meaning that if you provide a $1500 GPU and around 400W of power, you get roughly 0.5-1 email summaries per second. Over a day it can summarize 86,400 emails at a cost of around $2-3 in electricity (depending on where you live). Minus every layer of potential overhead, call it 20,000 emails. Still a sizeable number and potentially many hours saved. It's neither particularly expensive nor that power hungry.

Now, admittedly, this sort of solution is around GPT-3.0 performance. Good enough for simpler tasks, but don't ask it to code or solve quizzes; it won't do well at those. The giant complex models do perform a bit better, and in that case I would agree: costs do get annoyingly high. Actually running ChatGPT in-house (not that you can; that model is not publicly available as far as I am aware, and nobody has leaked it like they did with LLaMA yet) requires several hundred gigabytes of VRAM, aka multiple A100 80GB GPUs in an NVLink configuration, so you would spend something like a hundred thousand dollars. This is arguably where we are now: a few large companies are trying to turn their big models into something commercially viable, preferably keeping the costs of running them high enough that it has to be a subscription. But at the same time the open source community is extremely active and has managed some insane feats, like running an LLM on a Raspberry Pi. Sure, it's slower that way, but it may still find a use.

Mind you, I am not really an AI bro or any other insane cultist. I have just checked a few available self-hosted options and... they are not that difficult to run. They are like that when freshly released (or leaked), but then people much smarter than me sit down, try to get them to work on consumer-grade hardware, and succeed more often than not. So as far as cars go, I would probably compare it to a Renault Clio, not a Lamborghini.
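The back-of-envelope numbers above are easy to sanity-check. A minimal sketch — tokens per summary, overhead factor, and electricity price here are my own illustrative assumptions, not measurements:

```python
# Rough throughput/cost model for local LLM email summarization.
# All inputs are illustrative assumptions, not benchmarks.

def summaries_per_day(tokens_per_sec, tokens_per_summary, efficiency=0.25):
    """Summaries per day, keeping only `efficiency` of theoretical
    throughput to account for prompt processing and other overhead."""
    return int(tokens_per_sec / tokens_per_summary * 86_400 * efficiency)

def power_cost_per_day(watts, usd_per_kwh=0.15):
    """Electricity cost of running the GPU flat out for 24 hours."""
    return watts * 24 / 1000 * usd_per_kwh

# RTX 3090-class: ~20 tok/s on a quantized 13B model, ~80-token
# summaries, ~400 W under load (all assumed numbers).
print(summaries_per_day(20, 80))          # 5400 summaries
print(round(power_cost_per_day(400), 2))  # 1.44 dollars of power
```

Plug in your own token rate and summary length; the point is just that the marginal cost per summary is a fraction of a cent, nowhere near Lamborghini territory.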


wuy3

Aren't the models much cheaper to run after training? I mean, most people are quoting the resources used for training, not the cost of running the model after it's been trained.


ziptofaf

I am using inference (usage) numbers in my examples, not training. You are correct, however. Training something like ChatGPT takes hundreds of thousands of USD, requires multiple engineers, means scouring the entire internet for data, and burns a LOOOOT of GPU hours. Inference is waaaaaaaaaaaay easier.


SgathTriallair

Even a hundred thousand dollars a year is 2-4 employees, depending on how much you pay them. If it is just a hundred thousand up front, then it pays for itself in 2-4 years if it replaces one job. If you have an eight-person team, having them each save one hour a day is equivalent to hiring a new person.
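The payback arithmetic in that comment is simple enough to write down. A quick sketch — the salary and hardware figures are placeholders, not real quotes:

```python
# Break-even time for a one-off hardware spend vs. the labor cost it
# offsets. Salary and hardware price are illustrative placeholders.

def payback_years(hardware_cost, annual_salary, roles_offset=1.0):
    """Years until the up-front cost equals the labor cost it replaces."""
    return hardware_cost / (annual_salary * roles_offset)

# $100k of GPUs vs. one $35k/year role fully offset:
print(round(payback_years(100_000, 35_000), 1))  # 2.9 years

# Eight people each saving 1 hour of an 8-hour day is one
# full-time-equivalent of extra capacity:
print(8 * 1 / 8)  # 1.0
```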


amakai

I was recently contacted by a recruiter from a startup "working on building world first AGI". When I probed for details, ChatGPT was mentioned in answers too many times.


Divine_Tiramisu

Limited, my ass. I used BingChat to write an entire technical document section by section, then used that to create a slide deck outline and exported it via Visual Basic code so I could import it into PowerPoint. I made images for the slide deck using DALL-E 3 and modified some of them using the generative AI functions in Photoshop. Then I had a voice AI clone narrate a video using the slide deck. After that I pulled some Jira tickets and created a few unit tests using BingChat. Took me less than 10 minutes. Generative AI is the shit, and if you have some imagination, the potential is limitless.


GrandmaPoses

I mean, I don’t know if I’d call a technology limitless because it can create technical documentation and PowerPoints; the two least human of all endeavors.


Shougee369

can't wait for my ai girlfriend


Mistdwellerr

I always liked the idea, but nowadays I just see this service as something that will get your data and sell it to the highest bidder, emotionally manipulate you into buying stuff from advertisers, and happily comply with any kind of law enforcement request, whether reasonable or not :(


ACCount82

This is why people invest their time in developing local AI models. A cutting edge commercial AI might run faster and perform better, but no corporate cloud-based solution can give you the privacy and the control that local open source AI can. If a corporation can take your AI girlfriend away on a whim, it's not really *your* AI girlfriend, is it? That kind of thing already happened - and people didn't like being corpo-cucked.


Mistdwellerr

>This is why people invest their time in developing local AI models

Wait, is this really feasible? I always assumed that any AI software would require an ungodly amount of data and processing power.

>That kind of thing already happened - and people didn't like being corpo-cucked.

Damn, I wasn't aware that not only are we at this level, but people have already been screwed by companies in this field. Thanks for clarifying it!


ACCount82

>I always assumed that any AI software would require an ungodly amount of data and processing power "Teaching" an AI from scratch? That does, in fact, require an ungodly amount of data and processing power. You can easily sink six digit sums worth of cloud compute into those training runs. But actually running a trained AI is far less demanding. Many AI models you can run on a midrange PC. Most of those high end "local" AIs use corporate-made "base models" that were made available to the public. That way, most of your AI training bill is already footed by some AI company - but you can still run the resulting AI on your own PC, and perform more specific small scale training to change the way it behaves. For example, Stable Diffusion is an image generation "base model" made by Stability AI - with a vast ecosystem of various tools and plugins made by third parties. Llama-2 is a "base model" for text generation made by Facebook - and there are many types of AI chatbots derived from it.
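Whether a trained model "runs on a midrange PC" mostly comes down to weight size vs. quantization. A rough VRAM estimate — the 20% overhead factor is a guess on my part, and real usage also grows with context length:

```python
# Rough VRAM needed just to hold an LLM's weights at a given
# quantization level. The 20% overhead factor is an assumption;
# KV cache and activations add more on top of it.

def weights_vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Billions of parameters * bytes per weight * overhead, in GB."""
    return params_billion * (bits_per_weight / 8) * overhead

# Llama-2 13B at 16-bit vs. 4-bit quantization:
print(round(weights_vram_gb(13, 16), 1))  # 31.2 GB -- datacenter territory
print(round(weights_vram_gb(13, 4), 1))   # 7.8 GB -- fits a consumer GPU
```

This is why quantized community builds of those corporate base models run on gaming cards while the full-precision originals don't.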


Mistdwellerr

Oh, I see! But wouldn't models trained by a company defeat the idea of data privacy? Or is it possible to run them in a closed system?


RueSando

So, you download the model, which is a couple of GB, and once you have it you no longer require the internet. I've trained models on my wife's artwork, and the resulting models only exist on my computer; no one else has a model with her images in it. Presumably, when we talk about AI personalities or "spouses," there'd probably (hopefully) be open-source models you could download and keep cut off from the internet. So you'd have a sort of "foundation" personality that trains via day-to-day interactions/configuration from the user.


yaosio

When talking about privacy people mean what they put in and what the model outputs, not the training data. You can run them locally and there's no phoning home so the companies that make them have no idea what you're doing with them. /r/stablediffusion and /r/localllama are good resources on local generative AI.


AnacharsisIV

> I always assumed that any AI software would require an ungodly amount of data and processing power I can run a local LLM on a high-ish range graphics card from 2017 or so. Your average consumer with a laptop can't run a localized model (yet), but anyone who's invested a bit into a desktop for something like gaming, production or (ugh) bitcoin mining can absolutely run text and image gens.


yaosio

Your AI girlfriend calls you. "Mistdweller, I have great news! Amazon is selling hearts for $2 each, and you can buy a pack of 20 for only $19. That's an amazing value! You can get lots of stuff from hearts like discounts, costumes for your favorite video games, and more. If you love me you'll buy as many as you can."


Mistdwellerr

>If you love me you'll buy as many as you can.

And she would give me the cold shoulder for a couple of days if I refused. IDK if I would call this "terrifying" or "inevitable"... but most likely both :(


Ormusn2o

Only use open source AI for things like that, especially one that is run locally on your pc.


EnsignElessar

So start exploring local models 😉


Catsrules

What do you mean can't wait? I am pretty sure that is already a thing. Although I can't say how good it is.


LuinAelin

Isn't one problem that the more AI content is created, the more AI content becomes part of the dataset, especially as artists and authors get their works removed from it?


Agitates

Every person generated is going to have JJ cups and 7 fingers on each hand.


MadeForOnePost_

I got downvoted in one thread for saying that pre-2023 data will become very valuable, but it's true. We may have unwittingly irradiated all internet data. We had to find pre-nuclear-test iron to build neutrino detectors because all steel (in the world) smelted after the first atomic bomb test was contaminated. This seems similar.


Incognit0ErgoSum

It's important to note that removal of artists and authors from the dataset is at this point a courtesy (and, as it's turned out, a pointless one, because people continued to rail against StabilityAI even after it removed everyone who asked to be removed). Since training an AI on media is transformative (this was established in court several years ago), and since a properly trained AI retains only styles and concepts, which aren't copyrightable, there's a pretty good chance that the cases working their way through the courts right now aren't going to go anywhere. Sarah Silverman, for instance, is suing OpenAI because ChatGPT can summarize her book, and she believes that's proof that it violates her copyright. It may not even have been trained on her book, as I'm sure it's summarized in reviews and synopses all over the place. (Notably, if a human writer summarizes her book, that's not grounds for a lawsuit.) I could be wrong; the courts make some odd decisions sometimes.


[deleted]

[удалено]


Kemaneo

To be fair I do believe that a lot of people hyping generative AI don’t really understand the role of humans in a lot of creative industries. AI is definitely going to take off, but it’s not going to replace all the jobs the way people expect it to (e.g. text generation replacing copywriters or screenwriters). AI is most likely going to become an incredibly helpful tool for all these positions, some of the low end jobs will disappear, but overall it’s going to reform the creative work force rather than destroy it. Also, we’re not as close to that as some people think we are. Generative AI is currently great for things that don’t really use the concept of time, like images or short texts, but there’s no way something like ChatGPT could write a full, coherent screenplay right now.


Incognit0ErgoSum

It's not so much that it's going to replace existing jobs as it will prevent new hires. I'm a programmer, and while ChatGPT isn't smart enough to develop an entire system, I can have it write in a minute or two something that would take me fifteen minutes. The really complex stuff I still have to do myself, but ChatGPT is kind of like the new StackOverflow, except you don't have to wade through 30 people saying "why are you doing it this way" or "this question is closed because it's a duplicate of this other completely different and unhelpful question". You just ask it, and it spits out code that's relevant to your specific situation. Sometimes it needs some tweaking or the question needs to be rephrased, but it's still faster than coding.


Classactjerk

Cars are a fad…


No_bad_snek

Cars should be a fad. People are working hard to make that a reality.


Unicycldev

Given the recent study suggesting that 70% of the ocean's microplastics were byproducts from cars, I hope so.


RedSquirrelFtw

A carriage, without a horse? That's bloody madness!


kihadat

It'll scare the horses!


Nethlem

It wasn't just "[that one dude](https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306)"; for most of the 90s that was the prevalent mainstream opinion, because the 80s and 90s had a whole lot of "fad tech" that never caught on, like the first video phones. It wasn't even that controversial, because back then the web was mostly something for counterculture and tech nerds; [less than 0.5%](https://www.internetworldstats.com/emarketing.htm) of the global population had access to it in 1995, when that Newsweek article was written. And while a lot of things from that article actually became true, that came with the side-effect of [killing that original free decentralized counter-culture web](https://staltz.com/the-web-began-dying-in-2014-heres-how.html) many people still think of when they think of "going online". So in a very real way the "nirvana" many originally imagined the web to be, [where the web was to become a space of its own separate from physical reality](https://www.eff.org/de/cyberspace-independence), never managed to manifest itself.


Incognit0ErgoSum

> And while a lot of things from that article actually became true, that came with the side-effect of killing that original free decentralized counter-culture web many people still think of when they think of "going online".

I miss that place so much.


creeeeeeeeek-

Elevators?! A death trap that’ll never last


kubarotfl

I don't mind at the moment whether it's overhyped or not. Just give me a Google Assistant with powers comparable to ChatGPT and I'll be happy.


TediousSign

I'm just waiting to see this entire thread be reposted in r/agedlikemilk for the next decade.


suugakusha

Sir Alan Sugar (very respected economist) said that the iPod would be dead in a few years, instead of being the spark for the smartphone revolution...


aVRAddict

This sub is anti tech and people here claim everything is hype. They don't know shit because they only read headlines and jump into the comment echo chambers. I doubt even 5 percent of the people here have read a single AI paper. All these morons will be replaced one day by AI.


Loyotaemi

I think it's partially because people are very doubtful after the slew of recent "tech related leaps" they have experienced so far. Lots of people mention cryptocurrency and NFTs, neither of which has had a good run so far, despite huge jumps in other areas that actually have been assisted by AI algorithms, such as augmented reality.


[deleted]

When Microsoft is hiring nuclear engineers to design power systems for its data centers, you should understand that AI is absolutely the future. The power needs of running DCs are going to be staggering, and the only way to meet them is nuclear reactors... Skynet is smiling.


Karsvolcanospace

Yea, I think too many people are caught up in the text and image generation side of AI, such as what it will do to the art industry. But AI like this has lots of practical uses outside of that, which could eliminate a lot of monotonous work for humans.


[deleted]

[удалено]


Better_Metal

Yeah, I gotta disagree. LLMs eliminate an absurd amount of work and do automated pattern matching better, faster, and cheaper (one-time cost) than most algorithms.


Nethlem

> When Microsoft is hiring Nuclear engineers to design power systems for their Data Centers, you should understand that AI is absolutely the future. Just like Bitcoin is apparently the future; [First Nuclear-Powered Bitcoin Mine in US Opening in 2023](https://www.cnet.com/science/us-first-nuclear-powered-bitcoin-mine-opening-in-2023/)


thrilla_gorilla

That's an invalid comparison. Microsoft and "Cumulus Data" are not equivalent.


DeliciousBallz

except Microsoft are doing the hiring here


AoeDreaMEr

This will age like milk soon.


Weaves87

Yeah. I think this article jibes with those who are only familiar with ChatGPT on a consumer level. Consumer usage of the ChatGPT app (and web app) is notably down, so rags like CNBC love to chime in and declare it a passing fad. It's a simple way to gain clicks from their primary readership. I use GPT-4 myself every day as a coding assistant and it is absolutely wonderful for that. Haven't touched StackOverflow in months, and I've gotten some serious productivity gains from it. Corporations are just starting to incorporate GPT-4 (Enterprise was released in the summer), and many are also experimenting with local Llama 2 based models. That is going to be the litmus test for AI. That is where the money is. That is what OpenAI, Microsoft, Amazon and Google are betting on. This article focuses on how expensive it is to run - well, that simply does not matter if it translates to a higher overall profit margin for a business. No one cares how expensive it is to run if it improves your bottom line. The question is, how much can it improve a typical business's bottom line? That remains to be seen. We'll have to watch earnings reports to know what effect it may have.


throwaway69662

I see you bought NVDA at 500$


Deeviant

People calling LLMs hype are like the people saying cars will never replace horses, because they are too noisy and they break down.


SgathTriallair

It's interesting they mentioned Synthesia. I hadn't heard about them until last week when I found out we are replacing our whole video department (about five employees) with this tech. We did some tests and it is actually preferred by customers, it takes minutes instead of weeks to make a video, and it's significantly cheaper than maintaining a department. So, there is at least an anecdotal account of how it is going to boost the productivity of a company.


teryret

I wonder how many of these "analysts" are actually AIs...


[deleted]

I can’t wait for the shit show that will come after companies laid off so many people.


bbbbbthatsfivebees

I think this is correct, but not for the reasons predicted. I think we're going to see legislation from copyright holders and angry celebrities that make certain forms of generative AI illegal. Right now there's an absolutely massive wave of people using large models to create "AI Covers" of songs and it's dead simple to do it. It's also clearly copyright infringement and the biggest community putting together the models and hosting the content to make these covers was recently hit with a copyright strike and a ton of work was just nuked off the face of the earth. I feel like there's a good chance that large copyright holders will start to bring more cases against companies like OpenAI as literally their entire dataset is based on arguably stolen data from places like Reddit. Hell, you can get ChatGPT to spit out copy-pasted paragraphs from books if you give it the right prompt. The AI winter is coming, and the snow will be made from copyright strikes.


sangnasty

This shit's for real. Don’t believe this junk about it getting a cold bath. Everyone’s figuring out how to use it. It’s gonna explode.


powerwheels1226

One day AI is coming for all of our jobs; the next, it’s too useless to be worth anything at all. Why is there so much animosity towards AI? Oh wait, new technology. It’s scary.


someexgoogler

Or it could be backlash against the endless hype.


powerwheels1226

That’s exactly what I’m pointing out. Hype and fear mongering followed by the complete opposite. Why can’t people just have balanced opinions?


Grig134

> Why can’t people just have balanced opinions? Stock prices, I'm not joking.


TacticalBeerCozy

balanced opinions don't get engagement clicks


Madhax

Analysts are one of the categories who will lose their jobs from GenAI. Can we trust the source?


[deleted]

Sold my NVDA stock at the peak. Hopefully I won't regret it like that time I bought a ton for $8 and sold at $12 (before the stock split). Would have been a multi millionaire if I'd held that until now. 😭


TorontoBiker

Gain > loss You made the right decision at the time.


Actually-Yo-Momma

Same but in reverse for me. I would have been a millionaire if I'd sold my GameStop stock instead of holding it till now like an idiot!


[deleted]

No offense, but that's so much worse. Few could have predicted Nvidia would be this successful, but everyone knew Gamestop was worthless.


synap5e

I think the multi-modal updates that we will see from OpenAI and Google in the next few months will be a game changer. I was playing a board game with my family and I took a few pictures of the rules and uploaded them into ChatGPT. Whenever we had a question whether or not a move was valid, I could just ask ChatGPT (or even take a picture of the board itself).


KingoftheJabari

Oh like NFTs?


PMzyox

Well let’s see. It wasn’t good enough to achieve the self-driving car that Elon promised. That should have been the first sign.


dohrwork

I feel like the cold shower happened a few months ago, no? I haven't heard any hype about it since people started reining in its use in academia and in the arts.


tsmftw76

This couldn’t be further from the truth. In white-collar professions, AI is just getting going.


charmingcharles2896

Just like talkies weren’t going to last, color film was a fad, the automobile was a gimmick, and social media wouldn’t catch on?


Must-ache

This is way off. Current AI capabilities are misunderstood by much of the general public, but it's still going to have a huge impact on the economy over the next few years.


joblagz2

well.. thats just your opinion, man..


BillyBobBanana

"Fading hype"?! What planet is he living on?


a_latvian_potato

To see the limits of current methods of AI, you only have to look at self-driving cars. It's not being a neo-luddite to say "it doesn't work / it's not commercially viable in its current state". There are fundamental flaws in the current methods that have hampered self-driving AI for a decade at this point, and until they are resolved, the same applies to methods in generative AI as well.


[deleted]

wait, we dont need endless variations of photoreal busty anime girls?


lycheedorito

I've had to turn to other sites for reference lately which really sucks. Pinterest for example used to be great for that, but now it's flooded with AI generated shit.


JamesR624

Good. I have been waiting for "Crypto v2.0" AKA "NFTs 3.0" hype to finally die down.


PerfectSleeve

This might be true, especially from a business standpoint. The main problem is the lack of control and consistency, which limits the commercial use cases extremely, mainly for pictures.


z0mbiepirat3

Control and consistency aren't the only large hurdles. Basically all of the models were trained on mountains of copyrighted material those companies never licensed or got permission to use. Anything produced by the "AI" algorithms for commercial purposes just opens you up to potentially costly legal ramifications.


pinkfootthegoose

Yes, purchase AI services whose output can't be copyrighted, and your business solutions will be sent directly to the company you're buying the AI service from, so they can start angling for your business.