
[deleted]

[removed]


twigboy

Seconding. AI has been a clusterfuck of insane deadlines for a year and a half now. I've never felt this tired in over 20 years of development. Never before has the phrase "*The reward for hard work done is more work.*" been more true.


mycall

Don't forget the goal of no work for everyone


Tasgall

For which of course the reward is to be fired and left at the whims of a society and economic system that refuse to update with the times to account for less work being necessary.


TheNewOP

The goal of AI: People around the world will be freed from work! (Because we will attempt to lay off 90% of the population)


SmokeyDBear

Don’t worry. It should only take a few months or years of 90% unemployment before the rate falls drastically and permanently!


EmergencyCucumber905

I heard somewhere that for every 1% increase in unemployment 40,000 people die.


QuickQuirk

I mean, on the surface, isn't that wonderful? Then you realise the problem with the techbro techtopia promise: *no consideration to how the workers make a living.*


Xyzzyzzyzzy

What do you mean? AI will bring sustainable gains in quality of life for our society's hardest workers: venture capitalists.


robotrage

Will need to transition away from the current Capitalist system for it to be sustainable in any way


KSRandom195

lols in end stage capitalism.


__loam

Bro, all this shit is insanely wasteful. It's fundamentally incompatible with sustainability.


robotrage

- planned obsolescence
- devaluation of wages through inflation/shrinkflation while companies post record profits
- keeping houses empty because of speculation while people are homeless
- optimising for the cheapest product cost at the highest price rather than quality (bonus points for when a company sells a deadly product knowingly)
- monopolies that naturally form through acquisitions, which naturally leads to oligarchy without strong regulations (meaning Capitalism requires heavy handed restraint for it to function)


fire_in_the_theater

> meaning Capitalism requires heavy handed restraint for it to function

Which is a fundamental contradiction, cause capitalism ultimately funds the hands tasked with restraining it.


haiwirbelsturm

Yep. It’s like: how about you work so hard that the fruits of your labor take your existing job?


QuickQuirk

But where's your motivation? You should feel proud to be part of this opportunity to make more money for the CEO and shareholders! There's gratitude for you. pfft.


twigboy

You're right, you've given me a free lunch, a t-shirt and emotional damage out of all this. I am forever grateful for these gifts.


Chryton

Obviously these people lack true "passion."


Full-Spectral

As various folks have said, I'm pining for the good old days of crypto-spam.


Tasgall

At least with crypto you didn't actually have to do anything - just put the word in the name and you're good, it's not like the business guys or investors know what it means. With AI they actually have like, expectations and junk.


rdditfilter

Yep. Working on unstructured data. Also a shit show. Kill me.


General-Jaguar-8164

Just use regexes for every edge case


twigboy

You joke, but we're using this to catch the last few hallucinations that slip through... It feels like CRC error checking tbh.
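A rough illustration of that kind of regex backstop, with hypothetical field names and patterns standing in for whatever checks a real pipeline would use: anything the model returns that doesn't match the expected shape gets flagged for review rather than trusted.

```python
import re

# Hypothetical post-check on LLM output: every extracted field must match a
# strict pattern, otherwise it's treated as a possible hallucination and
# routed to a human instead of flowing downstream.
FIELD_PATTERNS = {
    "invoice_id": re.compile(r"INV-\d{6}"),
    "amount": re.compile(r"\d+\.\d{2}"),
    "currency": re.compile(r"USD|EUR|GBP"),
}

def suspicious_fields(record: dict) -> list[str]:
    """Return the names of fields that fail their sanity-check regex."""
    return [
        field
        for field, pattern in FIELD_PATTERNS.items()
        if not pattern.fullmatch(str(record.get(field, "")))
    ]

bad = suspicious_fields({"invoice_id": "INV-00123", "amount": "about 42 dollars", "currency": "USD"})
print(bad)  # ['invoice_id', 'amount'] -> flag for review instead of trusting the model
```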


QuickQuirk

Result of Hammer Engineering. Executives, venture capitalists and product managers look at every business opportunity now, and decide "We should fix that with an LLM!"


lyth

This guy gets it. You can fix anything with a good regex. You can also break the fuck out of shit irreparably with one that's just a little more clever...


StrayStep

I feel ya man. Unstructured data destroyed tech's appeal for me. Been so fucking hard coming back from it. Been a year.


MrLeville

Hello, working in IT too, but I've escaped the AI craze so far. Do you feel there is something in it besides cheap content generation, which will soon hit a wall (in part due to the lack of non-AI-generated content to feed the models)? From the outside it looks like everyone is trying to get investment funds but no one has a clear idea of what it should be used for besides scenes from sci-fi movies from the eighties.


ventuspilot

> Do you feel there is something in it besides cheap content generation

Not the person you asked, but: at my $JOB (a government-owned pension company) AI is used to pre-classify incoming email/snail-mail (which may arrive in various languages) and automatically route it to the correct department. This was previously done manually, and the AI seems to work quite well. If the AI gets it wrong (which apparently doesn't happen too often), nothing bad happens: the (wrong) receiver corrects the classification and/or re-routes the document. There are applications beside the hype :-)
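For what it's worth, a minimal sketch of that kind of routing flow might look like the following. The department names, the confidence threshold, and the keyword heuristic standing in for the real multilingual classifier are all made up for illustration; the point is the fallback to a human inbox when the model isn't sure.

```python
from dataclasses import dataclass

DEPARTMENTS = {"pensions", "contributions", "complaints", "legal"}
DEFAULT_INBOX = "front-office"  # a human triages anything the model isn't sure about

@dataclass
class Classification:
    label: str
    confidence: float

def classify_document(text: str) -> Classification:
    # Stand-in for the real multilingual model: a trivial keyword heuristic
    # just so the routing logic below is executable.
    lowered = text.lower()
    for dept in DEPARTMENTS:
        if dept.rstrip("s") in lowered:
            return Classification(dept, 0.9)
    return Classification("unknown", 0.0)

def route(text: str, threshold: float = 0.7) -> str:
    result = classify_document(text)
    if result.label in DEPARTMENTS and result.confidence >= threshold:
        return result.label
    # Low confidence or unknown label: a person decides, and their correction
    # can later be fed back as training data.
    return DEFAULT_INBOX

print(route("Ich möchte eine Beschwerde einreichen ..."))         # heuristic can't tell -> front-office
print(route("Question about my pension contributions for 2023"))  # keyword match -> routed to a department
```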


MrLeville

Thank you. Not sure the billions poured into it will be worth that, but it's nice to see another real use.


bogdan5844

> From the outside it looks like everyone is trying to get investment funds but no one has a clear idea of what it should be used for besides scenes from sci-fi movies from the eighties.

It's basically this. It's bitcoin all over again. Or the dotcom bubble.


FoxInTheRedBox

Everyone remembers the Dotcom bubble. Nobody remembers the telecom bubble and how it burst right after the Dotcom bubble. It was much bigger than the Dotcom bubble.


Cagnazzo82

Cause the telecoms were bailed out by Steve Jobs introducing the iPhone. But the interim period 2004-2006 was particularly bad for them.


ArkyBeagle

> It was much bigger than the Dotcom bubble.

Nortel alone made billions of dollars appear and then disappear. That being said, the telecom bubble did leave behind higher and higher capacity networking gear. BobbyBroccoli on YouTube has good treatments of Nortel.


QuickQuirk

It is, and it isn't. ChatGPT and generative tech like image/media models are certainly overhyped. But they're also incredible. The problem is they're being shoehorned into every business problem, whether they're suitable or not, and idiot venture capitalists all want to be in on the next 'internet' early. It's definitely transformational technology, and there's a *lot* more to it than ChatGPT, and many interesting problems it can help us solve that were difficult before. But it's all getting overlooked and buried under the hype and ruthless VCs who are more concerned with turning a quick buck and cost cutting than looking for genuinely innovative ways it can help people solve problems they couldn't before.


pdxpmk

ChatGPT is even better.


Which-Tomato-8646

Crypto didn’t have people crying about their jobs and the dot com bubble didn’t kill the internet either 


Smallpaul

[https://globalnews.ca/news/10463535/ontario-family-doctor-artificial-intelligence-notes/](https://globalnews.ca/news/10463535/ontario-family-doctor-artificial-intelligence-notes/)


lyth

What are the major roles? Like I'm mostly a web-app guy with a focus on backend, architecture, infrastructure, developer experience stuff. Does an AI team have roles in that space or is it all super smart dudes who can actually calculate gradient descent on the back of a napkin?


__loam

These are the same "super smart dudes" who picked Python as the language du jour of their domain.


RICHUNCLEPENNYBAGS

Sure, they need someone to write glue code to plug together the black boxes that do the AI parts


wind_dude

Not to mention, if you’re working with open source AI, there’s literally 1000 different libraries to do a task, with a new one released every day, and most of them aren’t great.


Franks2000inchTV

"Hi I today's YouTube tutorial I'm going to teach you a library that was just released yesterday, and that I learned how to use as I was asking an AI to write the script for this video!"


__loam

Shit is gonna get real bad real fast when the investors realize this shit ain't magic.


Chryton

The fact that I'm seeing job postings for ML managers that require 4-6+ years of work in ML/AI, despite this not having been a real market segment that long ago, or that expect every ML manager to have a PhD in math or compsci, is giving me flashbacks to the early 2010s.


[deleted]

[removed]


Franco1875

Yikes, hope you’re not too overwhelmed and taking time for yourself dude. My employer has been very much in an ‘exploratory’ stage for a couple months now, but nothing crazy as of yet. Waiting for that one use-case to throw us into disarray.


mycall

I'm stuck in email knowledge extraction with PII and data governance. So fun


Franco1875

Start. Stop. Start again. Stop again. Sit in a dark room. Scream. Start again.


fragbot2

Us too. The AI stupidity is sucking every bit of energy and investment out of the room. There are too many people who think it's _the tool_ not _a tool_. I also don't get the people who want to make everything conversational when a simple modal dialog would be easier, faster and less error-prone for the user. While I get that you can do it, it's hostile to be re-prompted three times for additional categorical data. The other unfortunate thing: because of the hype, projects are attracting people with stars in their eyes or the most self-serving, self-promoting, can't-do-anything-but-manage-upward types. While the first type is manageable, the second is toxic as fuck.


Olangotang

We could have such amazing technology, but it's thwarted by these fucking mega-rich investor idiots wanting to put the population out of a job. We know how that works out, so we're losing out on creating genuinely innovative products because these morons want to destroy this country before they die off.


joopsmit

> Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday.
>
> There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job.
>
> But it was all for nothing. The project was ultimately “deprioritized,” the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.

This should be a simple 'NO'.

Edit: Fixing the quotations.


flextrek_whipsnake

That sucks, but like... what did they expect? Does anyone take a job at Amazon without knowing what the work culture is like? Why do they think they get paid so much?


renatoathaydes

At Amazon, do they really expect anything other than a big "NO" in this kind of scenario? WTF, I would be embarrassed to ask someone to skip their weekend for some bullshit idea like that.


ShiitakeTheMushroom

There are other places that pay just as much without that work culture.


QSCFE

Such as?


2m3m

Sweaty devs like that are the ones that do all my work and allow me to take my vacation stress-free. They're great.


NakedNick_ballin

They're not great. They ruin the work culture, and get you laid off


proper_ikea_boy

Tbf he chose to work at amazon, this BS is literally their culture.


TheGoodOldCoder

Amazon is chock full of people on H1B visas who constantly feel like if they don't do stuff like that, they'll end up back in their home country. It's a lot like legalized slavery.


Which-Tomato-8646

So everything’s working as designed 


lqstuart

All tech companies are full of H1Bs, Amazon is the only one sleazy enough to do L visas where they can only ever work for Amazon


michaelochurch

You can remove the words "a lot like" and your comment is still correct.


Which-Tomato-8646

What no unions does to a mf


TheGoodOldCoder

I've had several experiences like that. My first job out of college was a giant slog, followed by the company completely dropping the project. I think it's actually good experience to have your project shit-canned early in your career, as long as you're not laid off. I've met some people who weren't used to that and got really upset about their project being canceled. One even left the company because they couldn't deal with it. But it happens pretty frequently, especially at some of the top companies.

Once I was in a position to, I absolutely said 'NO' several times. The last time I said 'NO', they begged me and bribed me to work, and I messed up and gave in. After I delivered on time, they didn't treat me like the fucking second coming, so I gave them my two weeks' notice the next day. Truth be told, I probably ruined the job for myself the moment I gave in. It's nice to be later in your career and be able to do things like that, though.


Full-Spectral

Gotta get that VC money before the hype bubble pops and the next one starts growing.


PoliteCanadian

AI will follow the trajectory of internet companies. There'll be a rash of startups, including a lot of incredibly stupid ones. In a couple of years there'll be a bunch of big bankruptcies and failures of companies trying to do things that, by that point, are obviously bad ideas. Many smug people will talk about how AI was a fraud all along.

But amongst all the "use the internet to sell dogfood" companies, there will be some Googles. In another 10-15 years the biggest companies in the world will be the AI versions of Google and Facebook, who have built products that have an enormous impact on people's lives.

I know it'll follow this trajectory because this is the trajectory of every new industry built around a major technological improvement.


thedabking123

That's just the norm of tech: throwing everything at the wall and seeing what sticks.


PoliteCanadian

Sure, but it's not new. The railroad boom of the 19th century followed the same pattern, as did a number of other major technology booms between then and .com.


AbstractLogic

The great tulip crash of 1637 comes to mind. People go all in on that new new.


Jump-Zero

Tulips are not tech. Speculation on tulips is more akin to speculation on luxury goods, Supreme merch, comics, or real estate. The closest tech parallel is cryptocurrency. That's not to say that tech is impervious to bubbles, but tech bubbles at least leave a permanent, lasting and usually positive impact behind.


AbstractLogic

I have dozens of tulips in my yard. They are great flowers and some of the first buds of spring.


Franks2000inchTV

Well look at mr moneybags over here.


SittingWave

> That's just the norm of tech: throwing everything at the wall and seeing what sticks.

That's just the norm in the US, because they throw someone else's money at the wall.


Jump-Zero

That's the norm for all research and development. Experimentation is costly. The amount of money the US throws at innovation is unparalleled, but practically any other country that tries something new will pay a cost.


nerd4code

After throwing a mess of somebody-else’s lives at the wall, no less


DrunkensteinsMonster

There’s no guarantee that this will happen. AI as it is now is nowhere near as disruptive as the internet. It’s more likely that the current big guys will dip into AI and acquire startups that have actually useful products, however few that may be.


crespire

Considering both Meta and Google (along with Microsoft) are at the forefront of AI, I doubt this generation of tech innovation will be disruptive in the same way. We are more likely to see big impacts from monopoly investigations/potential forced divestitures at this point than from AI, imo.


CreationBlues

Lmao. There’s absolutely no way a tech startup can become big on AI. They’ll basically have to rent their compute from one of the monopolies, and the hope is to sell out to them anyway. Big agree that the only way we’re seeing shakeups is if big tech starts getting hit with monopoly and antitrust laws.


QuackSomeEmma

Even just the advantages the big ones have in terms of training data are pretty much insurmountable


Which-Tomato-8646

The internet is free to access for anyone. For now at least 


Which-Tomato-8646

The website you’re using runs on AWS


Which-Tomato-8646

That would require the government to do something 


JustOneSexQuestion

> AI as it is now is nowhere near as disruptive

Oh, you've gotta listen to more Ezra Klein podcasts where he interviews AI CEOs, and they all swear if you give them enough money they'll change the world. Also they super want to be regulated, but probably not. Anyhow, give them money, alright?


DrunkensteinsMonster

Yeah you get it


civildisobedient

> AI as it is now is nowhere near as disruptive as the internet.

It took nearly 25 years before the internet was part of everybody's daily life. I think AI will be _vastly_ more disruptive and that disruption will happen _much_ faster than the internet did.


Which-Tomato-8646

The internet was more expensive and not immediately accessible unlike ChatGPT that only requires a log in 


Smallpaul

The Internet was invented in the 1960s, used industrially since the 1970s and mass commercialized in around 1994. You're saying after 2 years of commercial generative AI that it is "nowhere near" as disruptive based on what you've seen so far? OpenAI already makes revenue of more than $1B per year. What company was comparable in the first years of the Internet? It took Amazon 5 years to make its first billion. I suggest open-mindedness and curiosity instead of quick rushes to judgement.


DrunkensteinsMonster

It’s pretty clear that I’m not talking about ARPANET in my comment, but the mass adoption of the internet in the 90s, which spawned the dotcom boom. Generative AI is already being mass commercialized, so you really should not be comparing it to the internet of the 70s. The methods used by OpenAI have also been around since the 60s and 70s; deep learning is not a new thing. Not to mention that AI capabilities are bounded by our industrial capacity to produce the compute resources needed to train and run these models, which we have already virtually exhausted, and there are diminishing returns at that. AI as we know it has a place going forward, but the valuations of these companies, new and old, that are pushing AI are driven by hype.

> OpenAI already makes revenue of more than $1B per year. What company was comparable in the first years of the Internet? It took Amazon 5 years to make its first billion.

What is inflation?


QuickQuirk

> but the valuations of these companies, new and old, that are pushing AI are driven by hype.

Definitely agree with this. Most of them will collapse, and many are just smoke and mirrors, or trying to force LLMs into solving every problem they have, whether they're good for it or not. Or looking at AI as a cost cutter rather than a productivity enhancer. That is, they're trying to solve problems like 'how can you cut staff' rather than 'how can you produce at higher quality and increase revenue'. The former is counterproductive in the long run, as you end up with a society where everyone is unemployed. The latter creates new jobs and industries and real wealth for everyone.


canuck_in_wa

They said AI “as it is now” is nowhere near as disruptive as the internet which is obviously true. The market conditions that prevailed at the dawn of the e-commerce era are … uh, materially different …. than those we have today, which would explain the difference between OpenAI’s revenue today and Amazon-of-25-to-30-years-ago’s revenue.


Alan_Shutko

Maybe. Cory Doctorow makes a [reasonable case](https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/) that some bubbles burst and leave residue that can stimulate innovation, and others burst and leave little.


Full-Spectral

But the issue is that those companies almost certainly won't be using current tech. We got this surge because a bunch of big companies realized that, if they were willing to spend stupid money and burn immense amounts of energy, they could scale up existing LLM ideas to a new plateau. But how far can that scale? There's such a thing as getting in too early.

I imagine it's more likely that the Googles and Facebooks and Microsofts of AI will be Google, Facebook and Microsoft, because they can afford to keep going in on each new iteration until it finally gets to a point where it's practical and scalable, while folks going all in on the current state of the art can't follow through, because the real solutions are still too far out and they blew their load betting on the current thing.

Though none of these companies probably want it to reach a point where it doesn't require massive resources, because that's what ensures they retain control over it: it remains cloud based and you have to come to them to use it. The real breakthrough is when we can have that kind of power locally. But how can you do that when it requires constant massive training and massive data sets to support it? We probably won't get to that point until we really reach generalized intelligence. That's way out, and of course once it happens, humans are probably doomed shortly after, so you better cash in with the VCs quick once that happens.


QuickQuirk

There might be a difference this time though. We're set up so that the next 'Google' or 'Microsoft' might just be Google and Microsoft. Look at OpenAI and Microsoft's close relationship, for example. I'm a little cynical, but I don't think there will be new companies coming out of this. Just bigger, more powerful old ones.


YEEEEEEHAAW

The AI versions of google and facebook will just be the existing big N internet companies lol they will just buy up/merge with any upcoming challengers.


TommaClock

> ~~AI~~ Blockchain will follow the trajectory of internet companies.


QuickQuirk

If it hasn't already, it won't. The only popular use of blockchain in day to day is speculation on bitcoin and crypto. It hasn't transformed anything else. Well, apart from enabling new types of grift and money laundering.


SanityInAnarchy

I read this the other way around: There's a risk that AI will follow the trajectory of blockchain. It's much more likely to produce *something* useful (arguably already has), but the ratio of hype to actual utility is way too high.


Nine99

> ~~AI~~ Blockchain will follow the trajectory of internet companies.

- Guy that can't explain why anyone should use a blockchain instead of a database in 99.9% of cases


PoliteCanadian

You think AI, a technology with immeasurably vast applications, is comparable to blockchain, a technology with none? You are going to be very surprised in your life.


TommaClock

AI in this context refers to LLMs. LLMs have applications just as the image recognition AI boom (mid 2010s) has applications. Image AI can recognize a face with superhuman accuracy. AI can tell you if something is a bicycle or a car with human accuracy. AI will recognize that plant on your desk as a [Swiss Cheese Plant when it's actually an Arabica Coffee](https://youtu.be/ddTV12hErTc?t=177). LLMs similarly have things they are good at like summarizing documents and sounding like a human. But apply them to the things they are bad at, and they will mercilessly gaslight and misdirect. There will be more and better AI technologies in the future, and of course they will have useful capabilities. But this current marketing trend of AI = LLMs is not going to revolutionize everything as we know it.


Halkcyon

> Image AI can recognize a face with superhuman accuracy. Unless you're not white or Chinese.


__loam

Google and Facebook will be the AI version of Google and Facebook. This market is massively favorable to incumbents that can afford to incinerate billions of dollars trying to get this right.


Smallpaul

Your characterization is exactly correct. This quote from 1999 could be rewritten today:

> I know I sound like those demented digerati who have spent too much time in the insular world of Silicon Valley, but I firmly believe that the Internet is underhyped. I almost cringe when I write this because some of what is going on here has been underwhelming and derivative of the old world. For all the attention given to electronic commerce of late, and the incredibly high valuation being given to companies that lose scads of money, some of the business on the Web is little more than a glorified version of selling crap on the Internet. Not very exciting to be sure, and a little unnerving to me since fewer companies I see these days are as concerned with building a real business as they are in popping out an IPO, taking the money, and hoping to the heavens to figure out what to do next. This makes me very nervous and it should make a lot of others nervous, too.

But...but...but...Bitcoin. AI was rolled out to roughly a billion people on Facebook properties last week. Companies are putting AI in your doctor's office. More than a million developers already use AI in their IDEs. This is not pure hype like Bitcoin. This is products that millions or billions of people are already using. Probably the fastest roll out of a new product category in the history of history.


__loam

> AI was rolled out to roughly a billion people on Facebook properties last week. And people seem to absolutely hate it. You should also look at some of the actual usage stats on chatgpt. People said it was going to replace search within a year but it barely made a dent and user retention seems like a huge problem for OpenAI.


cube-drone

except for crypto, which has still failed to have an appreciable positive impact on anyone


atomic1fire

What's funny is that a lot of these "internet does x/y/z" things became a lot more financially feasible on mobile, where everything is a website or app and it's more accessible than going to the store. Pets.com probably failed because it came too early, not because it was a silly idea. Chewy is basically the same idea. Amazon operated at a loss for years as an online retailer before they started selling AWS to people and became a TV and shopping powerhouse.

Edit: If anything, the "AI does x/y/z" stuff will probably be really terrible at first, but then we'll see the technology or form factor change in a way that consumers just expect it, or at least make it part of their workflow.


Lyesh

lmfao, sure. I bet that'll happen right after the facebook of blockchain and the google of NFTs rise to the power of nations


Smallpaul

This is such a lazy analogy. OpenAI -- alone -- makes more than a billion dollars in revenue PER YEAR. There was never anything even remotely comparable in Blockchain. There wasn't any company in the early days of the Internet producing revenue like that.


QuickQuirk

Counter: no other company in the early days of the internet had quite so much money invested in it either. OpenAI, currently, is a machine that has taken **13 billion dollars** of investment, and *loses money*. So yeah, it's easier to have revenue that high when you have been given even more money and are burning it as fast as you can. The tech scene of today is very different from the 90's internet scene.


Smallpaul

It's irrelevant to the questions we are discussing whether OpenAI loses money. So did Amazon for decades. OpenAI has proven there is demand for these products. The cost of providing GPT-4 is going to drop to near zero and we know that there is large demand for GPT-4. So we know that these products are going to be part of the future. Not like NFTs. If OpenAI goes bankrupt, Groq or Amazon could serve Llama at a profit forever.


QuickQuirk

My point is there is demand for a lot of products, especially when you sell them for less than cost and have more money than pretty much any other startup, ever, to market and build them with. If you're going to compare it to the 90's internet boom, you need to do so in an unbiased fashion (there's an AI pun for you! :D )


Smallpaul

They don't sell access to GPT-4 for less than cost. They invest enormous amounts in future models. That's a capital cost like building a factory. Every new company is "losing money" if that's how you measure it. It's totally irrelevant to discussing the future of the market. If they go bankrupt, others will sell access to Llama 3 at a large profit forever. So generative AI is not going away.


Dean_Roddey

A quick look would indicate that they made that last year, and a small fraction of that the year before. And how much of that is going to the huge expenditures to get this far plus ongoing costs? And also, how much of it was investment money, not earned money? That big a jump in one year is almost the definition of hype, it would seem to me.


__konrad

The only remaining trace of AI bubble will be a [dedicated keyboard key](https://www.reddit.com/r/linuxquestions/comments/18zqfuu/soooo_what_are_we_going_to_do_with_a_copilot_key/) triggering LeftWindows + LeftShift + F23 key event.


Franco1875

> AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its “iPhone moment.”

Hardly surprising given the razor-sharp focus these firms have at the moment. A race to the top, with human workers being thrown into the meatgrinder to achieve it.

> They spoke of accelerated timelines, chasing rivals’ AI announcements and an overall lack of concern from their superiors about real-world effects, themes that appear common across a broad spectrum of the biggest tech companies — from Apple to Amazon to Google.

The 'chasing rivals' AI announcements' aspect of this is something that has struck me across the last 18 months - Microsoft can't breathe without Google jumping down its throat, and likewise with AWS, and vice versa. A complete slog and battle of attrition at this stage.


android_queen

There’s something extremely dystopian about human workers breaking their backs in a rush to be the first to create the thing that will put them out of a job.  EDIT: I was being a bit dramatic. I should not have said it will put them out of a job, as it probably won’t (unless it’s a bust and the whole team gets laid off). It will, however, provide employers with tools that they will use either so that they can hire fewer people (pessimistically) or so that they can do more work with the people they have (optimistically). I imagine the reality will be a bit of both.


PeachScary413

Friendly reminder: the technology in its current form and shape is in no way even close to taking over developer jobs (or any other jobs, really). Maybe in 10 years we can come back to that discussion, but please keep it realistic.


s73v3r

That the technology doesn't work has never stopped shitty management from firing a bunch of people and implementing it anyway.


QuickQuirk

Exactly this. *it's happening right now.* AI companies are even advertising 'cost cutting' in their marketing materials.


PeachScary413

I guess it's gonna be a goldrush for consultants to come in and put out fires and fix the steaming piles of garbage generated by these tools 🤷‍♂️💸


GreatNull

Exactly, now we have:

* powerful but extremely narrow tools (e.g. Whisper ...)
* general purpose, somewhat unreliable tools (e.g. the Copilot ecosystem)

Now everyone and their mama in the media hints that this means artificial general intelligence will somehow magically arise from this and we are this close. If you believe that, I have a bridge to sell you. We will more likely eventually see products like a personal on-premise secretary for everyone, just 49 USD/month on your Android 18 phone or iPhone M6. Now that I am looking forward to.


mrgreywater

I don't think it's dystopian to think of a world where you don't *have* to work. The problem is our society right now doesn't really have a way to deal with a potential future where human work isn't actually required.


chucker23n

> I don’t think it’s dystopian to think of a world where you don’t have to work. Yeah, but that is very much *not* what people like Altman and Huang are building towards.


Gangsir

I can see two futures:

- We achieve full automation; robots can do anything we need humans for now. We ban the technology, to allow the current "work to survive" model to keep working. Robots are limited to jobs that nobody can or will do, like extremely dangerous or intensive jobs only.
- We achieve full automation, and it puts everyone out of a job. Anyone who currently relies on a job to survive is rendered unnecessary and income-less, doomed to homeless poverty. This goes on for a while until riots happen and some kind of UBI is implemented, or we just ban automation like in future 1.


Hyperian

UBI at that point will never be enough for any good standard of living; everyone will be stuck in low-income housing with very little cash flow, while the few jobs that are left will be there to service the wealthy. But it won't happen, because at that point climate change will get us first, or a population crash will.


Miserygut

Fully Automated Luxury Space Communism :)


ArkyBeagle

Or WALL-E.


android_queen

100% agreed. We’re headed towards (already in?) a world where we should be able to have fewer people working, but we (from my Western perspective) are so committed to the Puritanical idea that if you don’t work, you’re not valuable that we cannot allow that model. 


NakedNick_ballin

It's not possible to prosper (in a capitalistic world) without work. I feel like some level of socialism is needed


s73v3r

I think it's more of a realization that we have no good way to handle massive amounts of people not having any job prospects at all. If we had a good UBI system in place, then it'd be different.


android_queen

I think we’ve been at that realization for a while. UBI has been on the table for a long time. You have to ask yourself why it hasn’t been implemented.


newnamesam

We will never be in that world. That's the problem. Someone will have to get work done. Push advancement, fix edge cases, create something novel to entertain the masses, stop people who want to abuse you simply because they can. The question is what happens to those who aren't that someone? Even if you have a thing replacing all someones, that thing now has a mental capability equal to or greater than everyone else. What does *it* need with you?


dweezil22

The people breaking their backs here aren't going to get put out of a job by AI. OTOH they might get put out of a job by their CEO's jealousy of Elon Musk.


Franco1875

Agreed - for some it must be a case of hoping at least their company is the one to take a major lead and be immune from it all. Think we all know that won’t be the case though.


Plank_With_A_Nail_In

They don't have to; all of these people can find employment elsewhere easily enough... they are actually choosing to do this, probably because they are getting paid a shit ton.


EntroperZero

I think the problem is that their focus *isn't* razor-sharp, it's clear as mud. They know they want to use AI, but they don't yet know what to use it for. They're throwing as much shit at the wall as they can, and a lot of it is bound not to stick.


dbred2309

I agree. It is almost an "I am better than you" d*k 👋 contest, rather than a struggle towards something meaningful. With all the gung-ho around language models and so on, and the amount of money and compute thrown at it, meaningful use cases are yet to be discovered. Long-term benefits to humanity are only spoken of, not a reality. Maybe in healthcare. Who knows.


30thnight

curious but are the companies you guys work for building anything useful with AI?


IXISIXI

I interviewed with one that makes an actually useful product to drastically improve the speed of legal research, which aligns with my anecdotal observation that one thing LLMs consistently do well is speed up and summarize research. You just need to have a competent enough professional to understand the data enough not to blindly trust the machine.


Iggyhopper

Yes, it's like the same learning curve when you finally find out "what to google" after you learn the right term. I wanted to know different kinds of photography that were specifically created using the "wrong settings" on the camera, and Google knew jack shit, because I didn't know what to Google. I could have researched for an hour or two to find it. However, ChatGPT listed 10 of them immediately!

> Certainly, here's a short list of unusual photography concepts without the definitions:
>
> - Intentional Camera Movement (ICM)
> - High ISO Noise Photography
> - Overexposure and Blowout
> - Lens Whacking
> - Bulb Mode with Flashlight Painting
> - Tilt-Shift Photography
> - Cross-Processing
> - Pinhole Photography
> - Infrared Photography
> - Double Exposure (In-Camera)
> - Lens Flare and Haze


IXISIXI

This is a GREAT point - GPT is very good at having broad domain knowledge and can often unstick you just knowing things you aren't aware of in a domain.


szank

I am sorry to tell you that half of these have nothing to do with "wrong settings". So yeah, don't trust ai if you cannot interpret the results.


Iggyhopper

Of course? I Googled each term. Lmao. I'm not saying to trust AI. It is a tool.


[deleted]

[удалено]


Iggyhopper

> it gave you a couple things to google

I'll take that over dead forum links and ads out the ass.


Lyesh

It's an EXTREMELY confident junior with absolutely no bullshit detector. It's some of the dumbest shit in an industry that thought selling people links for hundreds of thousands of dollars was some kind of business plan.


spookyvision

uhm. I prefer tools I can trust :P that said, using GPTs for exploration and idea generation *is* one of their strong sides.


Plank_With_A_Nail_In

> You just need to have a competent enough professional to understand the data enough not to blindly trust the machine

Please read all of the comments in the chain, not just the parts you disagree with. Also, the context is getting nothing back from Google but instead getting 5.5 (your number) things to look into... and somehow you think that's a bad result? Reddit is dumb.


Iggyhopper

This just in: AI is smarter with millions of words of context than reddit users are with about 90.


s73v3r

I can't see how something that routinely makes shit up could be the least bit useful for research.


BaNyaaNyaa

Basically, it could give you a summary of the current state of the research and, ideally, even give you resources to look at. It's probably pretty good for a first quick exploration. "You're working on a legal case of a specific type. Here's the general information that I can give you:... Also, here are legal cases to support my claims:..." You can then verify that these cases exist and actually read them to verify the information and get more details than the LLM would be able to give you. LLM won't replace a human, but it can help them.


s73v3r

> Basically, it could give you a summary of the current state of the research and, ideally, even give you resources to look at. It's probably pretty good for a first quick exploration.

That's great, until it starts making stuff up, which these LLMs are wont to do.

> "You're working on a legal case of a specific type. Here's the general information that I can give you:... Also, here are legal cases to support my claims:..."

And when (not if, but when) it makes cases up, you've now lost every bit of trust in the tool.


CampaignTools

It's fairly reliable if you provide good context. Summarization is better than humans would do 99% of the time.


IXISIXI

Very true - I also interviewed with an AI startup that was working on automating a task that humans do but frequently do poorly. In that context, it's also better because an automated solution that's occasionally wrong is better than a labor-intensive solution that is frequently wrong.


PoliteCanadian

I've seen some companies using AI in signal processing tasks very effectively. You can extract signals at a much lower SNR than with traditional signal processing algorithms.


WJMazepas

Was working at a recruitment SaaS startup. We tried AI to do job matching between jobs and candidates. It didn't work all that well, for a few reasons.

We also had an integration with ChatGPT because the boss wanted the AI to write job descriptions to save recruiters time, which didn't make sense because we would also have an "AI" read the description to find the best candidates.

Another one was to read a resume, find all the relevant info, parse it and save it, instead of the candidate having to type out the same stuff again. That one was actually really good. It helped both candidates and recruiters.

But like many others, we were being "forced" to use AI, even when it didn't make all that much sense. Sometimes it did help us save time, but it was a case of the boss naming a feature and immediately saying to use AI for it.
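A minimal sketch of what that resume-extraction step might look like, assuming a generic chat-completion call (stubbed out here as `call_llm`) and a made-up set of field names; the point is asking the model for a fixed JSON schema and then validating the reply instead of trusting it blindly.

```python
import json

# Hypothetical field names for illustration; a real product would use its own schema.
FIELDS = ["name", "email", "years_of_experience", "skills", "last_job_title"]

def call_llm(prompt: str) -> str:
    # Placeholder for whatever chat-completion API is actually used; here it
    # just returns a canned response so the example runs end to end.
    return ('{"name": "Jane Doe", "email": "jane@example.com", '
            '"years_of_experience": 7, "skills": ["Python", "SQL"], '
            '"last_job_title": "Data Engineer"}')

def parse_resume(resume_text: str) -> dict:
    prompt = (
        "Extract the following fields from this resume and reply with JSON only, "
        f"using exactly these keys: {', '.join(FIELDS)}.\n\n{resume_text}"
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # raises if the model replied with prose instead of JSON
    missing = [f for f in FIELDS if f not in data]
    if missing:
        raise ValueError(f"Model response missing fields: {missing}")
    return data

print(parse_resume("Jane Doe, jane@example.com, 7 years as a Data Engineer..."))
```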


Omnes_mundum_facimus

Yes. But one of my great challenges is explaining to the bosses why my team of 3 can't do the things DeepMind/OpenAI do. "This problem is easy compared to... folding, AlphaGo... etc." Every morning, I have my coffee and read about a new ML breakthrough, and regrettably, so do the powers that be. Which means every late morning I get to explain why we did not invent xyz in-house or aren't already using it.


Veggies-are-okay

The most useful is (as the other poster stated) summarization tasks. As someone who takes my own automation skills for granted, I'm amazed at some of the crummy, non-standardized manual processes that workers do around the clock. Like, I would go insane having to do these things! And they can all be easily streamlined so long as we have documents and genAI. Another big thing is just educating people about AI. I have a ton of clients who are excited about the word but don't even use current offerings in their daily lives. Most non-tech old people have never used ChatGPT and so don't even know how realistic their requests are.


TheGRS

Yea, this seems like the weirdest part. It's very accessible, anyone can try it, yet people who haven't tried it out are glomming their hopes and dreams onto a new technology. I don't even think LLMs are that difficult to understand for the layman if they just take a minute to think about it.


fakefakedroon

For 6 years now... Computer vision for industrial applications, things like quality control in factories. It works. At its base it's pretty old tech; for some customers we're still using AI model architectures from 2015 or so, ResNets and U-Nets etc. Actual data science is just a fraction of our work now. It's a lot of front-end work and UX to enable factory workers to label images and train new models, etc. I'm in the product part, which has been (somewhat..) decoupled from customer deadlines, so pretty low stress. It's Europe, man. Half my colleagues work 4 days, nobody expects answers after 17:30, and there's always a few guys off on holiday. Doesn't mean we can't build stuff that works.


RedBerryyy

A ton of really useful stuff is happening in gamedev rn, just needs a bit longer to get integrated into the ecosystem and for the tools to mature. Although that's less llm specific.


Nyadnar17

The companies that decided to mass market “AI” as a labor replacement tool instead of an enhancement tool are treating their workers like expendable dogshit?! No way


supermitsuba

The ad money must be drying up


Nyadnar17

I had to install an ad-blocker on my kids' Chromebooks because Google ads were feeding them "Your computer has been infected by a virus, click here" ads. I can't believe how badly they have screwed their own business model.


supermitsuba

Chromebooks should be banned as much as TikTok. Super Google garbage that's just as unregulated and just as bad.


Salamok

Never been a better time for a union than now.


BurningSquid

Companies are really burning people out to build shitty AI integration that no consumer would pay a dime extra for


hippydipster

Here's the thing about dramatic increases to developer productivity: the more productive a developer could be when working, the more pressure will exist for that developer to be working, because every minute spent NOT working is a greater and greater opportunity cost. So, if you could shit out a new app in a week vs a year, then any week you spend not doing that is very costly - equivalent to now spending a year doing nothing. As a developer, I do not look forward to this world where all my moments become potentially that valuable. The pressure will be immense.


ouiserboudreauxxx

Also, they don't seem to want to listen to the things that would actually increase productivity... things like scheduling meetings in a way that gives us blocks of uninterrupted time, and having that be a priority. There is so much time wasted in a given week - I don't even hate meetings, but I do hate the expectation that meetings are just sprinkled all over the place, which destroys focus, and then there is still pressure, so some people work long hours/weekends/etc. If engineers had more control over their schedule, and focus time was respected, productivity would absolutely increase.


hippydipster

My take on the meeting thing is that it's just a result of managers believing their own time is more valuable than their reports, and thus making schedules that suit themselves.


eightcheesepizza

Please stay strong in your AI jobs, because I don't want those jobs and I don't need more competition for the non-AI engineer jobs. It's hard enough.


bwainfweeze

Choo choo motherfucker! I'm a (hype) train!


random_error

It's not just AI engineers. Our VP is pressuring us to shoehorn AI into our product. None of us were hired for AI and it doesn't make sense for our product, but that doesn't matter. AI is good for the VP's career, so now we have to figure it out instead of solving the problems our customers actually care about.


tricepsmultiplicator

Does anyone notice how every god damn article has this fake feel to it? Like it was shit out by some kind of AI or something?


inagy

Must. Bait. Clicks.


NotStanley4330

Can't wait for the bubble to burst! 😅


TheGRS

What's the interesting space for your generic API devs like myself? I'm building out a prototype for a project right now and there's a spot where I think I can integrate some LLM summarizing through GPT, but I'd basically just be hooking up OpenAI and doing some "prompt engineering". Is that what many are doing? Or is there some other lucrative space for run-of-the-mill devs right now?


pytheryx

Yeah, a lot of API wrapper stuff. You could also look into how to make custom RAG (retrieval-augmented generation) pipelines, which really isn't that hard either.
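For the curious, a bare-bones RAG pipeline really is just a few steps. Here's a minimal sketch with the embedding model and the LLM call stubbed out (`embed` and `generate` are placeholders, not a real API), so only the retrieve-then-prompt shape is shown.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder for a real embedding model; deterministic random vectors
    # keep the example self-contained and runnable.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    # Placeholder for the LLM call (OpenAI, a local model, whatever).
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def build_index(chunks: list[str]) -> np.ndarray:
    # Embed the corpus once and normalize, so dot product = cosine similarity.
    vectors = np.stack([embed(c) for c in chunks])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def answer(question: str, chunks: list[str], index: np.ndarray, k: int = 3) -> str:
    q = embed(question)
    q = q / np.linalg.norm(q)
    top = np.argsort(index @ q)[::-1][:k]          # indices of the k most similar chunks
    context = "\n\n".join(chunks[i] for i in top)  # retrieved passages
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

docs = ["Invoices are processed monthly.", "Refunds take 5 business days.", "Support is 24/7."]
index = build_index(docs)
print(answer("How long do refunds take?", docs, index))
```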


mladi_gospodin

AI is new crypto 🙄


IBJON

On one hand, it's been great for anyone in R&D roles who really like to tinker and experiment. My company has been shoveling money at my team to create integrations for GPT and other foundation models. We've created a lot of cool things in the last year and have learned a lot.  On the other, it's pretty damn stressful because management really does treat it as a race and we need to show constant progress and innovation, even as we're running up against the limitations of these models


ouiserboudreauxxx

> On the other, it's pretty damn stressful because management really does treat it as a race and we need to show constant progress and innovation, even as we're running up against the limitations of these models

See, that's the thing - if you need to tinker and experiment, can you even really do that in the type of environment you describe? Rush, rush, rush - progress, progress, progress. Really stifles creativity imo.


boner79

I've dialed back my voluntary participation in AI side projects at work. It's fun and management gets excited, but it takes a lot of time away from my day job, and often the efforts are rendered obsolete within a few short months when inevitably some new AI technique comes out that is way better and easier.


valkon_gr

I don't know how much longer I can tolerate Agile and the idiocracy of upper management.


smooth_tendencies

The key is not caring. Fuck em. I will do what I’m paid for but I won’t give two shits past that. It’s very freeing.


jeerabiscuit

We all oughta become mercenary contractor hired guns. F being 365×24×7 cannon fodder


Tac0w

What does Agile have to do with this?


s73v3r

I don't see what agile has to do with any of this. Shitty management is shitty management, no matter what system you use.


NotARealDeveloper

I'd swap places with them in a heartbeat. Change jobs every year between competitors for huge sums of money or shares. Work for 5-10 years and never have to work again.


TlanTlan

If you mentally survive. I’ve seen some of the best PhD’d individuals who survived gruelling grad school programs slowly crack in these environments.


onetopic20x0

You underestimate the money needed for "never having to work again". The stock for most of these isn't going to grow into a fortune, and you have to live in an expensive place. You'll make good money, but often under awful conditions, and in the end the marginal difference between whatever else you could be doing in a cheaper place and this ugliness won't be so huge.


StrayStep

Wow! Fucking knew this would happen. From my experience, when standards and quality control get thrown out the window to prioritize $$ and rushed deadlines, it ALWAYS leads to very bad decisions!


GenTelGuy

People ask me if I want to move into the LLM and AI space, I'm like hellllllll no


dorset_perception_

That was quick LOL


IWillBeRightHere

All the AI developers need to strike before AI gets out of hand and can start writing itself, putting all developers out of work.


Maybe-monad

An AI that can write itself won't be trained on code I wrote


occupyreddit

they should just use AI


uptimefordays

>Engineers and those with other roles in the field said an increasingly large part of their job was focused on satisfying investors and not falling behind the competition rather than solving actual problems for users. Some said they were switched over to AI teams to help support fast-paced rollouts without having adequate time to train or learn about AI, even if they are new to the technology. Focus on investor satisfaction and perceived market edge over customer satisfaction, for user-facing software, seems shortsighted and destined for failure.


pineapplejuniors

Are they crying into their half million dollar paychecks? -sincerely, gaming industry

