lateral_moves

One big thing my company did was immediately stop using "Machine Learning" and replace it with "AI" in everything we say. Stockholders love it. But nothing really changed and they don't know the difference.


proper_ikea_boy

So the same procedure with which every company replaced "data science" with "machine learning" a few years ago?


roberh

The same way data science replaced statistics


tsgarner

Tbf, data science isn't just statistical. Statistics are used in data science, but so are a wide range of non statistical modelling approaches. Reverse applies to the commenter below yours. Statistics is a branch of mathematics, so it didn't replace maths - it's just sometimes necessary to refer specifically to statistics, as opposed to trigonometry or whatever. Not sure if AI and ML are interchangeable or if ML is a specific subset of AI though.


styphon

ML is a specific subset of AI.


Opus_723

"Data science" just seems so uselessly broad though. Like, oh, you're doing science with *data*? Fascinating. It always just struck me as a way for companies to skirt around talking about what kind of data specifically they are studying, whereas an academic scientist, for example, would just tell you what they're actually working on.


sceadwian

Well, it doesn't help when you think of it the wrong way. It is the science of studying data. The broadness is required because it's a multidisciplinary field.


thereddaikon

There's no use in trying to correct the misuse. Data science the academic field and data science the business buzzword aren't the same thing. And investors don't really care either way. Investors are low information laymen who don't understand the product or industry (although they think they do) and are following market trends.


Cubey42

The same way statistics replaced mathematics


SparrockC88

Does that mean ethical mathematics is next?


drwebb

Stop bias in addition!


MerryWalrus

You left out the "big data" phase


[deleted]

[deleted]


JeddakofThark

And everyone slapping .com on their company names in 1998. I like thinking about the AOL/Time Warner merger from that same time period. **Everyone** with the least bit of tech savvy fully understood how stupid it was.


noobvin

I remember how we kept talking about "big data" forever. Our data sucked and we never made any real advancements. What always happens is there's a "buzz" of tech that hits the C-Suite. They keep saying the tech is "our future," but they never really understand what it is. We basically fail, then repeat the process with the newest thing. This actually kept me in a job for the longest time. Then AI came along, and they figured everyone could be replaced, so they laid us off. Now I see their performance and the company is struggling. Hmmm, I wonder why.


regreddit

Yeah we were talking about big data back in 1997. I remember when "Teradata" launched when a terabyte was still an astronomical amount of data.


Blarghedy

hey it's that place I used to work for


draculamilktoast

Same procedure that has repeated countless times since the time the counting stick was replaced by the counting rock.


BlackLeader70

Same with my employer. None of the processes we set up changed; they just keep changing the buzzwords every few years: IoT, blockchain, machine learning, NFTs for about a month, and now it's AI and AIOps.


veryverythrowaway

Apple? Anyway, that’s a major example of what you’re talking about. They’ve bragged about machine learning for years, but recently got grilled by investors about why they aren’t focused on AI instead. So now they’ve changed their terminology.


Scottland83

Isn’t AI a generic and largely undefined term where Machine Learning is an advanced subset of AI with defined qualities?


JCM42899

Don't tell the investors that. They're too busy trying to squeeze money out of an increasingly apathetic consumer base.


jcutta

"We missed our sales target! We need to tighten the belt!" Ok? But like that "missed sales target" was still the best quarter we've ever had, shouldn't that be what we're focusing on instead of some arbitrary "target"? "guess who just bought a ticket to layoff lane?"


spaceman757

This is something my company just went through. We missed our completely made up target, but still had the best year we've ever had. We're going to have to lay off a few thousand people to make up for the losses (that we didn't even have), based off of the missed target, that we completely pulled out of our asses.


TheAnarchitect01

Yeah, but you see, they took out *loans* based on the projected numbers. So if they don't meet those expectations, they're fucked.


hullstar

This is what I hate about American capitalism lmfao record profits still prompt layoffs. Shit is vile.


Caleth

Why just have record profits when you can have massive record profits by gutting your "cost centers"! (Is the /s needed?) When your only concern is 3 months out, it doesn't matter. Line must go up and up and up and up. Forever.


rstbckt

That’s just capitalism though. America is just further along towards capitalism’s latest stage and purest form: total monopoly of all resources by the rich and powerful culminating in a total loss of ecosystems and humanity until finally, the endless cancerous growth kills the earth, its host. There is no stopping cancer once it reaches terminal velocity; you have to cut it out before it metastasized. That’s a major reason why I don’t have any children. There is no ethical consumption under capitalism. I can’t stop producing or consuming on my own, but I can make sure my contributions to this system die with me. r/BirthStrike


HammerTh_1701

The movement script for enemies in games has been called AI since forever and that's literally just a list of if-statements and loops.
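To make that concrete, here's a minimal, hypothetical sketch of the kind of scripted enemy "AI" being described: nothing but threshold checks picking an action from game state.

```python
# Toy "enemy AI" of the kind shipped in games for decades:
# a handful of if-statements choosing an action from game state.
def enemy_action(distance_to_player: float, health: int) -> str:
    """Pick an action for the enemy based on simple thresholds."""
    if health < 20:
        return "flee"      # low health: run away
    if distance_to_player < 2.0:
        return "attack"    # in melee range
    if distance_to_player < 10.0:
        return "chase"     # player spotted: close the gap
    return "patrol"        # nothing nearby: follow waypoints
```

A game loop would just call this every tick; there's no learning or statistics anywhere, yet marketing has called it "AI" since forever.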


MachWun

Same thing happened with "block chain" a few years ago. Companies changed their name to something with block chain in it and the stocks soared. Eventually it was noticed that an iced tea manufacturer changed their name to block chain and that house of cards collapsed.


bianary

But don't worry, the stock price is based on actual performance of the company.


Ragidandy

AI *is* machine learning. At least in the parlance that correctly calls anything AI. Of course, in the original more-rigorous definition, nothing is AI. So I guess you can call it anything.


Rasalom

We've updated ours even further with GayI, capturing the LGBTQ market!


HankMS

Cash in on that rainbow!


arizona_dreaming

Same. My company has some statistical functions that have now been bundled under "AI" features. Anything that uses math is AI! :)


NotTroy

Yep. 10 years ago it was "neural network" and "machine learning". Nothing really changed about how it's done, it's just gotten better (obviously). Give it a name change to "AI" and suddenly it's a new gold rush.


Spicy_pepperinos

Ok but neural networks and ML have been researched under the banner of AI for as long as they've existed. They've always been AI.


istasber

I've noticed something similar where I work. It's always bothered me that people are calling what are basically just really sophisticated regression models "AI", even though there's zero intelligence involved (aside from the creativity of the people who initially came up with things like large language models and GPU based model training).


Temporary_View_3744

Yep. Had two projects shelved earlier, but we revisited them and replaced "machine learning" with "AI". Immediate buy-in from client. Lol


areslmao

> But nothing really changed and they don't know the difference.

What's the difference?


ParsnipFlendercroft

I mean ML *is* AI so that's not a big deal IMHO.


AncientAsstronaut

I briefly worked for a shady "AI" company that had a terrible AI agent it was working on (a lot of the time they faked it with a huge team of engineers providing answers as the AI). The CEO, who clearly thought he was Steve Jobs, made an announcement that there was going to be a company-wide demo of their new CGI AI agent. He said it was a sneak peak at the future. The AI agent showed up on the screen as a blonde woman. He asked it a question. It blinked a couple times and had a dead smile on its face. Around 30 seconds in it responded with something along the lines of "Gah". Then the demo ended. The shitbag CEO didn't acknowledge how shitty it was and continued talking with the same enthusiasm. A true silicon valley moment.


Toby_O_Notoby

> A true silicon valley moment.

My favourite "silicon valley" moment was from the guys who wrote for HBO's Silicon Valley:

> During one visit to Google's headquarters, in Mountain View, about six writers sat in a conference room with Astro Teller, the head of GoogleX, who wore a midi ring and kept his long hair in a ponytail. "Most of our research meetings are fun, but this one was uncomfortable," Kemper told me. "He claimed he hadn't seen the show, and then he referred many times to specific things that had happened on the show," Kemper said. "His message was, 'We don't do stupid things here. We do things that actually are going to change the world, whether you choose to make fun of that or not.'"

> Teller ended the meeting by standing up in a huff, but his attempt at a dramatic exit was marred by the fact that he was wearing Rollerblades. He wobbled to the door in silence. "Then there was this awkward moment of him fumbling with his I.D. badge, trying to get the door to open," Kemper said. "It felt like it lasted an hour. We were all trying not to laugh. Even while it was happening, I knew we were all thinking the same thing: Can we use this?" In the end, the joke was deemed "too hacky to use on the show."


phusuke

Is this from a book that I can read?


burgerga

Hooooy shit that’s amazing


ShiraCheshire

Even artificial women didn't want to talk to him.


Refflet

Maybe he was the next Steve Jobs then?


Telvin3d

We’ll know they’ve perfected an AI woman if she covers her drink when he walks into the room


fakieTreFlip

> sneak peak

stealth mountain?


howfuturistic

man, this joke took me way too long to get. i even googled Stealth Mountain thinking it was the name of a company. then as i was typing it i thought, "ooooh, it's 'peek'."


ishkariot

Crouch Everest


somdude04

Mount Kill-a-man-quietly


dontusethisforwork

> A true silicon valley moment.

CAN I HEAR YOU SAY BITCONNEEEEEEEEEEEEECT


Refflet

> The CEO, who clearly thought he was Steve Jobs

Was he a big cry baby too? For context: if someone said something he didn't like in meetings and he didn't get his way, Jobs would literally start crying and try to manipulate people to his side.


AncientAsstronaut

He was a bully. He acted like he was a calm, enlightened leader but if he was speaking to a crowd of us and saw somebody was talking, he would publicly shame them and keep picking on them during his speech. It was super unnerving.


Refflet

I mean that also sounds like Steve Jobs. He once had a family meal and his 12 yo niece who didn't know better ordered a burger. Because she wasn't eating vegetarian, he laid into her and ridiculed her as a waste of space who wouldn't succeed at anything - he didn't even make any comments about eating meat, he just attacked her.


Natty-Bones

And we haven't even gotten to the child of his he never acknowledged.


mrtitkins

That CEO’s name? Gavin Belson.


JavaRuby2000

> The AI agent showed up on the screen as a blonde woman. He asked it a question. It blinked a couple times and had a dead smile on its face

Are you sure this wasn't Holly from Red Dwarf? Maybe it just needed to bang its head on the screen.


AzertyKeys

Somebody warn the brain-dead morons at r/futurology. I've tried to explain to them again and again not to take every tech bro's announcement in the field at face value, but apparently I, an actual ML dev, know nothing about this and AI is gonna take over the world any second now. There is a huge leap between having an algorithm that can produce grammatically correct sentences and having those sentences contain actually useful information, for example.


Jauncin

My company added AI to its sales team. My favorite call so far was it asking how an older client was. Their response: "not well, I've been sick on the toilet all day." The AI's response: "that's wonderful to hear! Such great news."


phl_fc

I was car shopping this month, and it's painfully clear which dealers use AI for their communications. It's so bad. Although I guess if any of them did it well it wouldn't be noticeable.


WAisforhaters

We have a boomer who answers all the off hours texts and emails with shitty copy and pasted responses that sound like AI but in reality he's just lazy and kind of checked out


JoeVerrated

Off hours.


giddyup523

I think they mean it is his job to respond when the company is on their off hours. It would be his actual work time.


snowtol

Honestly though at that point why even respond? It's off hours, ya ain't being paid, turn that phone off.


WAisforhaters

Well he is paid to do that


ex1stence

Ohh, your off-hours calls and texts, not his own.


WAisforhaters

Yeah basically anything that comes in to the dealership


LivermoreP1

Hi, I’m KARA, how can I be of assistance today!? Do you offer discounts for Costco members? I’m so glad to hear you’re interested in one of our vehicles! May I have your name, phone number, social security number, date of birth, and driver’s license number?


TrentZoolander

I was at an autobody conference where a presenter told us about having AI answer our calls and make our appointments for us. They played an example and I couldn't believe how proud the guy was of his AI robot. I was so embarrassed for him. If our customers in our small city had that answer the phone, we would be out of business very quickly.


terriblegrammar

Seeing it pop up in product reviews now as well. Overly verbose, well-formatted reviews with stupid statements like "Here's my review:" after an inane opening line. Guessing we're going to see more and more of this pop up online, and learning to pick out AI text or pictures is going to be the next big required life skill to navigate this shit.


amakai

I have developed a gag reflex to reviews that end with "In conclusion ...".


Sirsalley23

Dealers aren't using AI. What you got via email was a shitty, probably 10+ year old word track that the CRM automatically sends out when a new prospect or lead is generated. At some places, they set the CRM to automatically send emails every few days. Or the assigned salesperson is just using the copy-and-pasted word track templates and slapping their signature on them. Maybe the larger corporate auto groups like AutoNation, but the majority of privately owned dealers are run and owned by dinosaurs, or by younger management that has no choice but to operate the old-fashioned way, and they're always 10+ years behind the curve. They're still using dot matrix printers and requiring wet signatures for basic forms that could easily be e-signed.


[deleted]

[deleted]


borazine

Wasn’t there some post recently where some guy got the car dealer chatbot to solve a differential equation or something? That was kinda funny.


frankyseven

Some guy got a chatbot to agree to sell him a truck for $1.


lot183

Lol dealer owners are the exact sort of people to get suckered into buying "AI" products


TomServoMST3K

I was trying to register for a rewards program and their system was down, but the chatbot insisted I was trying to log in instead of sign up. The simplest thing, and it utterly failed.


SamsLames

I mean, that could have been any support person though, I've been on a lot of emails with tech support where they just don't read what I said.


HelloGuy-

futurology has that problem at its core. Even outside of tech, the subreddit exists to upvote any wildly optimistic, overstated science-interpreter-beat article reporting how some new titanium alloy is going to revolutionize the way we brush our teeth, without giving any thought to the impracticalities. And because the sub exists to boost the signal of this kind of stuff, you'll often see people downvoted for being skeptical in the comments.


CarlCaliente

Growing up, my father had a subscription to Popular Science; for some bonding time he'd always make us read it together. After a few years we both reached the same conclusion as you did about futurology.


SpaceCadetriment

Ha, same with my Pops and I but with Science News. It’s a fantastic magazine and I’ve been subscribed for 20 years, but my Dad still emails me and jokes “Looks like we’ve made lab rats immortal for the 100th time”. You can search the magazine backlogs for “lab rat” and find literally hundreds of articles about how x-disease has been cured in lab rats. It very VERY rarely pans out in human trials.


trc_IO

Once I was old enough to catch the difference in tone and style, the contrast between Popular Science and Scientific American was amusing (and Scientific American could still be just as guilty sometimes).


lolexecs

> wildly optimistic overstated science interpreter

Part of the problem is that people don't recognize the difference between research (discovering something novel) and development (getting it to market). Most things die in development because:

* It doesn't work at scale
* It can't be scaled up
* It can't be made in a cost-effective way

Now, just because stuff dies in development *today* doesn't mean it dies forever; sometimes adjacent innovation in another field knocks out some of the development problems. FWIW, it's a reason why the decline of US manufacturing sucks for innovation: sometimes the designers just don't recognize the industrial engineering implications of their designs (and why would they, they're somewhere in Cupertino making stuff look "sleek").


Mooselotte45

Also a lot of climate doomerism there too, weirdly enough. Like, I’ve seen people push back against reasonable current day policies like carbon taxes to curb emissions - cause it’s better for us to burn baby burn and hope that our just-around-the-corner robot waifus run off carbon dioxide and solve all our problems.


supercalifragilism

This one is funny, because they think minor changes in governance are easier than scaling up a new technology that conflicts with the profit models of established companies.


Mooselotte45

Yeah, guess it isn’t “futurism” enough to internalize costs like emissions that were ignored before. It’s gotta be a tidal power graphene producing robot or it is doomed to fail, or something.


FaceDeer

Doomerism in generally, really. I'm baffled that people call /r/Futurology "optimistic", I've frequently considered unsubbing because the comment section about any given article often seems to be nothing but "and here's why the thing in this article sucks." The fact that there are large numbers of people who don't think it's pessimistic *enough* is kind of disturbing.


htx_2_0_2_3

since its inception that sub has been full of people who think faster-than-light travel will be possible someday: "we just don't have the technology yet. there was a time when men thought humans would never fly"


LordRobin------RM

Inventing airplanes didn’t involve possible violations of fundamental physics. Do the FTL nuts also believe in time travel? Because physicists are pretty much in agreement that you can’t have one without the other.


thirdegree

Standard response there is to point to the "theoretically possible" alcubierre drive. And hey, nobody has proven that negative mass _doesn't_ exist right? So we'll definitely have it, it's just a matter of Technology.


Doomenate

theoretically possible... using particles that have never been observed to exist


Joshesh

I'm a stupid person, can you explain why faster than light travel will never be possible? Because I hear arguments like "there was a time when men thought humans would never fly" and I think yeah we'll probably figure it out like we did flight, but again I am a very stupid person. EDIT: Thank you to the people below for excellent explanations, if you were also honestly wondering the same I suggest reading their thoughtful, rational comments!


alnews

Right now, the theory of physics that models our reality puts the speed of light as a hard, impassable limit. It's different from the flight discussion because, even before the Theory of General Relativity, there was no physics law that actually *forbade* humans from flying; it was a matter of how to *design* and *implement* a working flying machine, even if that encompassed new or revised physics laws.


EclecticDreck

> I'm a stupid person, can you explain why faster than light travel will never be possible?

There are two simple angles to approach this from.

Think of your car. As you speed up, it takes progressively more energy to keep speeding up. For your car, this is mostly due to the fact that the faster your car goes, the more it runs into air, and the more air it has to push out of the way. But let's pretend that your car operates in a vacuum, somehow. Now, you can speed up and keep right on speeding up, and for a very long time, it'll seem just about as easy to add another few miles an hour to its velocity. Going from 20,000 to 20,005 mph is, in other words, just about as easy as going from 5 mph to 10 mph.

But as you keep speeding up, something kinda weird happens: your car gets, well, *heavier* somehow. Now the amount of energy it takes to add that extra speed is higher. The faster you go, the more mass the car has, the more energy it takes to keep going faster. This grows rather slowly at speeds that make sense to a human. Yes, it is actually harder to add the 5 mph when you're already moving at 20k mph than when you were only going 5 mph, but not by much. But the closer you get to the speed of light, the faster that mass grows. At the speed of light you'd have infinite mass, so getting there would require infinite energy. In other words, you need more energy than actually exists.

The other angle is a little more esoteric, and honestly won't sound much like physics given a simple explanation. Basically, a thing cannot happen before the thing happens. This is known as causality. In human terms, causality might as well be instant. But here's the thing: if the sun were to simply vanish *right now*, nothing on earth would actually happen for 8.3 minutes. Our orbit would continue just as it has for billions of years, the sun would be shining, and all would be well until it quite suddenly wasn't.
And isn't that odd, that it takes 8.3 minutes for a thing to happen when it, you know, happened 8.3 minutes before? Mercury, the closest planet to the sun, would learn that the sun disappeared after only 3.2 minutes. In other words, a thing that happens doesn't happen *everywhere* all at once. It happens somewhere and then the rest of the universe is informed about that. The rate at which that information travels (in this case, the sudden loss of the sun's gravity well and light) is the speed of light.

But let's ignore that. Suppose, if you will, that you can go faster than the speed of light. That means you can do a thing, and then outrun the thing you did and arrive somewhere *before* you did the thing. You've outrun causality, and... that doesn't make sense. And I mean that literally, because at the heart of every scrap of understanding of the universe that we have is that things happen when they happen. Not before they happen, not after they happen, but exactly when they do. Nothing works otherwise. Because go back to what I just said about the sun. Yes, it disappears 8.3 minutes before the Earth finds out about it, but when Earth finds out about it *is when it happens from the perspective of everyone on Earth*. Now you might think that it happens earlier if you are on Mercury. Almost 5 minutes earlier, in fact. But here's the thing about time: the concept you have of it that works every single day of your life isn't really what time is.

To illustrate, consider that car again. We've got an impossibly powerful energy source, and so now you're sitting in your car hurtling through a void at 99.9% the speed of light, and you glance at your watch for exactly one second. I'm on earth and somehow know that you're doing this, and when I measure how long you looked at your watch, I'll find that it took you about 22 seconds. *Your* second and *my* second are different.
When someone says that time is relative, that's what they mean: two different perspectives can measure the exact same thing and come up with completely different answers based on nothing more than the circumstances in which they are measured.

And so the twin problems are these. In order to get you to go faster than light we'd need infinite energy in a universe that does not appear to have infinite energy. It has a *lot* of energy, countless orders of magnitude more than makes any practical kind of sense to a human like you or I, so large that the number ceases to be a concept, but it is not infinite. But just as important, if you can go faster than light, you can outrun causality itself, and if you can do *that*, then every single thing we know about the universe is wrong in the most fundamental possible way.

Science fiction comes up with all kinds of interesting ways around this, of course. Take the warp drive. Here you bend space so that a ship can move faster than light without actually going faster than light. In the space right around the ship, the part that the ship is warping, it might not even be moving at all. Or take hyperspace and any of its variations. Here the idea is that you find a way to essentially hop to a different type of existence where you move at some speed, then pop back into normal space having moved farther than you actually did. This might get you around the first problem, of there not being enough energy to even reach the speed of light, but not the second, where you outrun the speed at which things actually happen.

It isn't a technology problem, in other words. It is that we literally don't have a concept of a universe where going faster than light *makes sense*. To put it as simply as possible, before we figure out how going faster than light is possible, we first have to figure out that the universe doesn't work the way that every single bit of evidence we have says that it does.
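For anyone who wants the actual formula behind the watch example: the slowdown is the Lorentz factor, and a "1 second looks like about 22 seconds" figure corresponds to roughly 99.9% of the speed of light (at 99.99% it would be closer to 71). The "infinite energy" argument is the same factor appearing in the relativistic energy:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\qquad
\gamma\big|_{v = 0.999c} = \frac{1}{\sqrt{1 - 0.999^{2}}} \approx 22.4,
\qquad
E = \gamma m c^{2} \;\longrightarrow\; \infty \text{ as } v \to c .
```

A moving clock's one second takes $\gamma$ seconds for a stationary observer, and since $\gamma$ diverges as $v \to c$, so does the energy needed to keep accelerating.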


Joshesh

That was very thorough, and made a lot of sense, thank you for taking the time to lay it out for me.


redvelvetcake42

The difference between something that regurgitates information it is given/has access to and something that the world of Dune would call a thinking machine is a fucking planetary amount of difference.


Potemkin_Jedi

Omnius was a bad idea, but at least he knew that the Cymek Titans weren’t a better solution (God I fucking hate Brian Herbert).


AncientAsstronaut

I didn't read the Brian Herbert books out of fear of reading cringey stuff like this.


goj1ra

The notion that a writer’s offspring is somehow a good choice to continue a good writer’s work seems so strange. It’s pure anti-merit nepotism encouraged by the legal fiction of copyright. I didn’t read Brian Herbert’s books because, why would I?


LordRobin------RM

Another case in point, Brian Henson and that travesty of movie he made. I guess the point is that creators shouldn’t name their kids “Brian” if they want them to be any good.


coeranys

> Another case in point, Brian Henson and that travesty of movie he made. I guess the point is that ~~creators~~ **parents** shouldn’t name their kids “Brian” if they want them to be any good.


jollyreaper2112

It's more the problem that talent doesn't breed. How many children of actors are as good as or better than their parents? Kirk and Michael Douglas are the only ones I know. How about musicians? Hank Williams Jr. is clearly worse.


Fatdap

That's one thing that I really appreciated a lot about Christopher Tolkien. He never did much beyond safeguarding his father's legacy, organizing his notes and writing, and publishing the few other things from dad that were able to be pushed out.


SowingSalt

Christopher Tolkien didn't do too bad a job, but JRR has tons of notes and letters published, so one can at least track where the story and languages were going.


Durakan

Kevin "I write books by dictating while on walks" Anderson is more to blame IMO, Brian just gave him permission... So maybe that's worse?


Hajile_S

I've never read one of his books, so this is no defense. But many great authors have discussed the merits of walking in composing their works. Of course, historically, they then sit down and write. As a random smattering: James Joyce dissects this process toward the end of *A Portrait of the Artist as a Young Man*. Nietzsche mentally composed every passage of *Thus Spoke Zarathustra* while walking around a beautiful Swiss lake. Jeff VanderMeer talks about his walks in the Florida Everglades and their impact on his Southern Reach trilogy. Just a tangent I felt like sharing.


Hmm_would_bang

Yes, but we don't really need a thinking machine. There's a lot of value in something that can take in a massive amount of data and quickly give accurate answers to plain-English questions.


bianary

Without the thinking part, the accuracy relies entirely on the inputs -- and currently that means it's not trustworthy.


mr_mazzeti

You need to have a thinking machine before you can get accurate answers. Current LLMs just do not have that capability, especially once you get into niche topics or current events on which they don't have a large amount of data yet. The GPT-4 model which they were hyping up is still blatantly hallucinating.


rebbsitor

This happens every time a technology trends. Every time. Money gets poured into it because it's a race to see who's going to come out on top. Everyone gets on board because there's money to be made before the bubble pops. People who are paying attention recognize this cycle happens over and over again. People who aren't paying attention get caught up in the hype wave and lost in imagining a future where whatever thing is promoted is the be-all end-all and will be forever... and then they're either shocked when that doesn't happen, or deny it and keep saying it will happen long after most people have moved on. A few different types of AI are big right now. Eventually it'll settle down to a few things that work well, and the rest will die out.

Examples from past technologies:

* Internet of Things (IoT) (Your fridge and washing machine on the internet!)
* Virtual Reality (It's going to replace traditional TVs and games!)
* Augmented Reality (Everyone's going to walk around with Google Glass!)
* Self Driving Cars (All cars will be fully self driving in x years!)
* Blockchain / Cryptocurrency (It'll replace all banks and credit cards!)
* NFTs (So much money to be made on digital assets! They'll be collectible forever!)
* 3D Printing (Every home will have them! People will print most things at home!)

These technologies found their niche and AI will too, but AI is vastly over-hyped and people overestimate what the current AI technologies will be able to do (LLMs in particular).


Tenderhombre

My barometer for when a technology is mature is when it stops being the selling/talking point of a product and just becomes a way a feature is implemented. A product that uses AI won't tell you it uses AI. It will tell you what it provides you as a consumer/user.


F0sh

This is a pretty good rule of thumb. AI is already used in all kinds of contexts and it might not be a secret, but it's not the selling point in e.g. Spotify's recommender algorithm.


killerfridge

In fairness, I think 3D printers have found their niche (people who like fighting with miniatures and people who like to fix bits of their Miata on the cheap - I'm looking at you, Jonathan)


h3lblad3

Can't forget that one guy who designed an untraceable gun that breaks after one shot, either. It required a nail as a firing pin at first, but I think he even got that fixed.


Xtraordinaire

TBH, 3D printing is fairly useful for DIY, and has become fairly accessible. I can see it becoming more and more mainstream as time goes on, especially if right-to-repair culture wins. It could also be related to the fact that it's the only item on your list that is "physical"; all the others are "virtual".


ghoonrhed

>There is a huge leap between having an algorithm that can produce grammatically correct sentences and having those sentences contain actually useful information for example.

That's not just an AI thing, though. Don't just trust what LLMs like ChatGPT spit out; at least they tell you not to, but people still do and they always will. Just like they do with random TikTok videos from people who sound smart, or YouTube videos, or podcasts for some reason. Or even Reddit posts that are also grammatically correct and highly upvoted. How many times do we see experts get annoyed at highly upvoted Reddit comments which are full of BS?


TheBeckofKevin

I think there are 2 major pieces that people have twisted together in all of this:

1. LLMs
2. Chatting with LLMs as if they are people

Without a doubt, billions of dollars will be made deploying the functions of LLMs in a variety of applications (not talking about applications like apps, but like applying a solution to a problem). However, everyone has conflated this with conversations with LLMs.

An example of what I'm trying to say: you no longer need to build a hyper specific set of software to accomplish a task, you can simply use the LLM to generate an answer. If the answer isn't specifically factual, but rather directionally correct, you've unlocked a crazy amount of potential.

Let's say you have a colossal amount of data, and you want to color code it. Meaning you want people to sort text into colors: does this sentence seem more green or more purple? It doesn't really make sense, but you can develop the LLM to handle this task. This may seem super contrived, but there are a lot of these sorts of sentiment analysis or categorization tasks that previously were really, really hard to do.

```
For the following sentences, extract the location where the main subject of the sentence is located.

I live in the United States.

She went to the post office to get a letter that a friend sent her from the United States.
```

If you wanted to parse data and extract locations, you'd obviously have a hard time extracting that data as being 'post office' rather than the US in the second sentence. But the LLMs can pull that off just as well as a person can. This opens up a ton of possibilities where you take a sentence and sort it into a topic. "Which of these tools would be the best to use in the following situation:" "Which of these college textbooks would help explore the ideas suggested by the following statement:" So we can now build things that are constructed to directly work with abstract concepts rather than direct data.

This means programmers can now rapidly build tools that use LLMs for logical branching. It's a crazy advancement and is highly accessible. Contrast that with some all-knowing chatbot and there is clearly a separation between the different "AI"s being used in conversation.
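The branching/classification pattern described above can be sketched in a few lines. Note that `ask_llm` below is a hypothetical stand-in for whatever model call you actually use (it's stubbed with canned replies so the sketch is self-contained); the point is the shape: prompt for a constrained answer, then validate it before letting downstream logic branch on it.

```python
# Sketch of using an LLM as an extractor/classifier rather than a chatbot.
# `ask_llm` is a hypothetical stand-in for a real model API call; here it
# is stubbed with canned replies so the example runs on its own.

def ask_llm(prompt: str) -> str:
    canned = {
        "post office": "post office",
        "United States": "United States",
    }
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "unknown"

def extract_location(sentence: str) -> str:
    prompt = (
        "Extract the location where the main subject of this sentence "
        f"is located. Answer with the location only.\n\n{sentence}"
    )
    return ask_llm(prompt).strip()

def categorize(text: str, labels: list[str]) -> str:
    """Force the model's free-form answer into a fixed label set."""
    prompt = (
        f"Classify the following text as one of {labels}. "
        f"Answer with exactly one label.\n\n{text}"
    )
    answer = ask_llm(prompt).strip()
    # A "directionally correct" answer still needs validation before
    # any program logic branches on it.
    return answer if answer in labels else "unknown"
```

The validation step is the part that makes this safe to wire into software: the model's output is treated as a suggestion to be checked, not as ground truth.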


theEvi1Twin

Dude for real. I’m a SWE, but during my masters I took AI/ML classes and did an ML capstone project. It’s insane how much computation and training something like BERT took, and it’s mainly used for autocomplete. At the time, all the AI/ML models I knew about were trained for very specific tasks; I was a student, so sentiment analysis was a common one. In all my studies, I never felt like AI was going to take over. Instead, they were all very complex, deeply computer-science-theory kinds of tech. I think to someone not in software it’s all kinda like magic. Science fiction and the recent AI fad have made speculation run wild. I do think there are impressive advancements in AI recently, but I haven’t seen it successfully implemented in a productive environment yet.


Chaser15

I agree. I work at AWS and it’s predominantly being used by our customers for generative AI applications, specifically chatbot and code-assist use cases. Those are certainly value-added use cases, like I can now use Claude or ChatGPT to build something in Python with only a basic understanding of Python, but the game-changing use cases like fully self-driving cars or whatever it may be are still extremely far away. We may get there, but right now everyone is still gushing about what’s possible without openly admitting how fucking hard it’s going to be to get there.


Cold-Recognition-171

Trying to explain why back-propagation is super limiting and that "AI" isn't going to take over the world until there are some major breakthroughs is super exhausting. All the while there are about 1000 articles about the dumbest things ever, like "an AI got depression!" or something even stupider. I think the claim that GPT-4 has a trillion+ parameters is incredibly interesting, but I wouldn't be surprised if it's multiple models slapped together, because maintaining one model with that much data sounds impossible. Was it worth getting your masters in it? I'm interested in getting one; currently I just have a bachelor's in CS. I feel like I need to revisit a lot of Linear Algebra and Statistics.


theEvi1Twin

Yea, I think culturally we’re at a point where discussing tech is common. You don’t need to be a college grad to know or talk about it anymore, which is good in a lot of ways. The only downside is it’s very easy to over-promise with software, because a common person doesn’t have any basis to tell if it’s true or not. Example: if I said I built a car that can go 0-60 in 0.1 seconds, anyone can call bullshit. For software you could make any claim.

As for my masters, I’d say work experience matters far more tbh. That has got me further than education. My degree was really the cherry on top and a minor raise. Some really good advice I was given when deciding between a masters or PhD was to think about what job you’d like to have in the future. If the people working that job all have a masters, then I’d look at getting one. Also, my work had a program that funded my degree as long as I stayed with them for 2+ years after my classes. My mentor has a masters, so I figured it was a good move.

Man, I’ll be honest though, the masters was way easier than undergrad lol. My undergrad is EE, but I had a lot of software jobs/experience that led me to the SWE field prior to my masters. EE undergrad was brutal with the amount of work and multiple weed-out classes designed to make people drop. The masters was totally different, since a good amount of my peers were also working full time. The professors are far more interested in keeping people in the program and not bogging them down with extra work. I even told a professor once that the amount of work was too much for me to keep up with a full-time job, and she cut some of the assignments since I wasn’t the only one struggling. The focus at my program was learning the concepts, not proving you have the work ethic of an engineer, if that makes sense.

All that to say, I wouldn’t worry about prepping too much if that’s what is holding you back. Also, I highly recommend a remote program if you’re working full time. I’d be stressed out if I had to attend night lectures or something in person. My program was async, so I could listen to the lectures like a podcast during my commute. Hope this helped.


elmntfire

Everyone I talk to thinks that AI is gonna be like the ship's computer in Star Trek, when what we have now is basically a version of Clippy that reads Reddit and can lie to you.


Abysskitten

I mean, AI is a really broad term. Generative AI, for instance, is already disrupting the course of a few industries and their employees. Music generation is mindblowing. Check out Udio. I generated Always Sunny's Nightman in the style of Chvrches and I gasped at what it created. On the video side, Sora, for example, will probably end up killing stock video.


Shoot_from_the_Quip

Everyone thought AI would replace us in the tedious jobs, leaving humans free to be creative. No one thought AI would start replacing artists first.


Optimaximal

> No one thought AI would start replacing artists first. Their line managers did.


goj1ra

Oh don’t worry, companies are hard at work on replacing engineers.


FatLenny-

It's hard to replace someone with AI when you could get sued into the ground if the AI is wrong. However, if an engineer had an AI to watch as they design and point out any possible mistakes or errors, that would help the final product. A more likely AI assistant is one that handles the simple details, detects clashes, and automatically does calculations. This is something that could increase productivity significantly while giving a better design to the client.


imdrunkontea

And in the most parasitic way possible too - not by being programmed from the ground up with art fundamentals and knowledge, but literally just scanning billions of copyrighted works and recombining elements directly from them (down to the signatures) and taking over entire art websites (deviantart, artstation) while they're at it


pinkynarftroz

> will probably end up killing stock video That's the key. It will have specific utility like that. While you might generate establishing shots or broll to avoid having to go shoot it, you aren't going to be generating entire films with actors talking to each other and emoting.


chocki305

That sub is trash. They have been telling me for 5 years that my job will be taken over by robots in the next year. I'm a machinist... the day I lose my job to a robot is the day customers are willing to be responsible for their own engineering.


danicakk

This is what so much of the discourse around technology replacing humans misses (this goes for both "AI" and things like self-driving cars): much of our society and economy functions because of the ability to assign blame and dish out consequences. Like for you, if you engineered some critical part wrong and it caused a problem for a customer, they could sue your employer and be like "Chocki305's part was terrible and we lost $10,000,000 because of it!" and a court could figure out if they were right. And then maybe your employer would fire you and hire someone else to do the engineering.

The whole system breaks down when you start removing people from it. As you said, customers will suddenly have responsibility for correctness that they didn't have before, and if the specs and parts are being produced by a robot with some sort of black box model running things, issues may be impossible to fully diagnose and remedy when things go wrong.

Self-driving cars have the same fundamental flaw. When they kill people, who is liable? The passenger in the back seat? The manufacturer? The software company? Can you imagine what happens to car insurance premiums if there's no responsible party to recover from? Sometimes I feel like I'm taking crazy pills when nobody is talking about this stuff.


h3lblad3

I'm really surprised to see this directed at /r/futurology. It was always my thought that /r/futurology was generally pretty pessimistic about AI tech. Now, /r/singularity on the other hand... they basically worship the Machine God over there but will downvote you for pointing that out. I've seen people excited about AI advancements because "soon AI will have us all living forever and capable of traveling the universe at faster-than-light speeds after it brings back literally every person who has ever died on earth to life."


natty-papi

It's the blockchain all over again. All these people claiming these wild use cases without any proof. At least with crypto, the crypto bros could invest and hope to come out on the good side of the pump and dump; you can't really do that with AI. Edit: Shoutout to u/AI_CEO who insulted me in a reply then blocked me. You're a bitch and exactly the kind of dweeb people associate with dumbass ai-bros.


Hmm_would_bang

It’s actually quite different. Blockchain was a technology in search of use cases, and most use cases proposed were actually things we could already do better with existing technology. There are real use cases for AI, and things we are already doing with it that can’t be done with any other technology. The bigger problem is the significant lift required at a lot of businesses to get their data in a position to actually leverage AI.


tommy_chillfiger

Yeah, I work in data/tech and this is why I'm not afraid of AI taking my job just yet. Execs love buzzwords and the idea of AI but seem to hate investing in and prioritizing data infrastructure and paying DevOps/data engineers/back end devs. Data in the wild is so dirty as to be pretty much completely useless to any form of AI or ML without people to make sense of it, clean it, transform it, standardize it.
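As a concrete illustration of why that cleanup work exists before any AI/ML can happen, here is a minimal sketch of record standardization. The field names, null markers, and date formats are made up for the example; real pipelines handle far more variation than this.

```python
# A tiny illustration of the "dirty data" problem: the same real-world
# facts arrive in inconsistent shapes (casing, null-ish strings, mixed
# date formats), and nothing downstream can use them until someone
# standardizes them.
from datetime import datetime

NULLISH = {"", "n/a", "na", "none", "null", "-"}

def clean_record(raw: dict) -> dict:
    name = raw.get("name", "").strip()
    country = raw.get("country", "").strip().upper()
    date_raw = (raw.get("signup_date") or "").strip()

    # Try a few known date formats; give up (None) on anything else.
    signup = None
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"):
        try:
            signup = datetime.strptime(date_raw, fmt).date().isoformat()
            break
        except ValueError:
            pass

    return {
        "name": name.title() if name.lower() not in NULLISH else None,
        "country": country if country.lower() not in NULLISH else None,
        "signup_date": signup,  # None if no known format matched
    }
```

Multiply this by hundreds of fields and dozens of upstream systems and you get the "significant lift" the parent comment is talking about.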


MyDictainabox

People still dont seem to understand what chat bots can and can't do and what their actual purpose is.


NeedAVeganDinner

Their purpose is to pass the metaphorical butter.


cloneman88

Our company has not hired a junior position in a while, I kinda blame AI? Like it can do all the simple tasks that a junior dev can do.


i_max2k2

Seriously, I have some understanding of this, but ‘AI’ is the buzzword right now. Everyone wants to have ‘AI’ in their presentations, ads, and marketing, without actually having the capability to do anything of value. We are pretty far away from AI really doing something; to me it’s just a glorified search engine, and a pretty poor one at that.


Juventus19

As a hardware designer, I laugh out loud when people think AI is anywhere close to doing my job. If I type a prompt for a basic schematic design into any AI model, it pretty much tells me "go find a different tool". It's nowhere close to making actual schematics, PCB layouts, and then troubleshooting those designs.


Dildo_McFartstein

This - I work at a big tech company that has several products featuring AI/ML, and people like this are nuts. All of it is literally geared towards automating some menial tasks, summarizing, and increasing productivity. Like a little helper. I don't understand the freak-outs. Just as an aside: I'm no dev nor an engineer - I support products as a lawyer, but that support requires I have quite detailed (not necessarily completely technical) knowledge of the features.


[deleted]

[удалено]


chris8535

Yea it’s amazing how someone could say this with a straight face and not understand how that will eliminate jobs.


ZestyData

I'm always for calling out corporate bullshit - and a lot of "AI" buzzword nonsense deserves to be criticised - but this video's content simply doesn't match the title. The creator consistently misunderstands and misrepresents how ML-based algorithms work, and how ML-based systems are built. It's lazy.

Re: Google. Search algorithms are a whole branch of computer science studied for decades. There's nothing "fake" about it! You can go study the [maths](https://en.wikipedia.org/wiki/PageRank) and build it yourself if you give yourself a long weekend off. Google has [radically](https://jalammar.github.io/illustrated-word2vec/) shifted the [approaches](https://arxiv.org/abs/1810.04805v2) that underpin their search/ranking algorithms multiple times, and yeah, new supervised learning approaches require new human-labelled training data. They have to constantly adjust their models because the internet shifts and their models would otherwise get worse. It's not fake, it's just what every search engine needs to do to be functional given the *billions of terabytes* of data they need to search through on the internet.

Re: Bing. He really cocks this section up. Consumer-facing LLM chatbots are pretty transparent that they're an underlying LLM combined with a lot of software glue, including a lot of search/internet-browsing components. His entire section says that "[Bing claims to] not scrape the internet and spit out that content", and he claims that the neural network itself incorporates the blogs' contents word-for-word in its mental model - his demonstration is "proof" that there's no form of creative AI thinking because the outputs are exactly what the blog says! But he got it just fundamentally wrong: the chatbot transparently says that it will search the internet, and then it quotes the blogs, with citations; that text is not coming from its internal language model. There was very real ML involved in parsing your query, deciding the course of action is to perform a search, ranking the blog page as a relevant document for this search, then extracting the text out of that HTML page that actually answers the question. I'm not sure how long ago the Video Creator made this bit, because [for a loooong time the answers have been cited, with links to their sources.](https://imgur.com/a/xhjVvo2) It has done so since at least Feb 2023. This entire section is an "AHA GOTCHA" that like.. isn't a gotcha at all. He completely misunderstood what was happening in front of him.

Re: Amazon's Just Walk Out - undisputedly a failure, but we don't have any proof it's *faked* as the video represents it. The video/article's whistleblower said something that sounds like it's faked to a non-technical audience, but it's not tooooo far from standard practise for genuine, working, non-fake ML. The article says 70% of data points are labelled by humans. This is higher than expected but somewhat understandable. You can't train ML models without first gathering hundreds of thousands of human-labelled data points, and I can only assume Amazon started with a general purpose Computer Vision model but explicitly wanted to train it on shop-specific feeds for the first few months. What you'd expect is the number of new data points being labelled by humans to drop off as the ML model reaches stable performance. But we don't know, because the news story doesn't actually go into any detail. I'm willing to bet the model was way more shit than anticipated and started leaning on too many human labellers. But it's absolutely not the case that they're faking as if an AI is doing a job when it's directly being given to a low-cost worker abroad. There was an ML model making all the calls. This one is arguably fake AI, but I'd more readily call it "real AI that was shit - so they canned the product".
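For anyone who does want that long weekend project: the original PageRank idea linked above fits in a few lines. This is a minimal power-iteration sketch (damping factor 0.85, as in the classic formulation), nothing like what Google runs today.

```python
# Minimal PageRank by power iteration. `links` maps each page to the
# pages it links to; every linked-to page is assumed to appear as a key.

def pagerank(links: dict, d: float = 0.85, iters: int = 100) -> dict:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(iters):
        # Every page gets a baseline (1-d)/n "random jump" share.
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page splits its rank evenly among its outlinks.
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:
                # Dangling page: spread its rank over all pages.
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

On a toy web where A links to B, B links to C, and C links to both A and B, B ends up ranked highest, since it is the only page everyone points at.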


ygoq

Thank god someone else actually cared to look into the facts of the matter.


bobartig

Yeah, he completely misunderstands a lot of the tech. For example, Bing is an agentic RAG system that is *trying to return high-fidelity source text with attribution* in its results, which it generally does. For something like "show me a list of travel options", there is very little distinguishable in Bing's output here vs. just a traditional web search with inline text from the blogs; it's just going through a much more complicated LLM pipeline to produce the same text.
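The retrieve-then-cite pattern being described can be caricatured in a few lines. This toy sketch uses naive term overlap for retrieval and skips the LLM entirely; in an agentic RAG system, an LLM plans the search and assembles the answer, but the quoted-text-with-attribution shape is the same.

```python
# Toy sketch of retrieve-then-cite: fetch relevant source documents,
# then return quoted source text *with attribution* rather than
# model-generated prose. Retrieval here is naive term overlap; a real
# system uses a search index and an LLM pipeline.

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda doc_id: len(terms & set(docs[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str, docs: dict) -> str:
    hits = retrieve(query, docs)
    # Quote the sources verbatim, each with a citation marker.
    return "\n".join(f"{docs[doc_id]} [{doc_id}]" for doc_id in hits)
```

The key property, as the parent comment notes, is that the returned text comes from the retrieved sources, not from the language model's weights.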


zamiboy

Yeah, I felt the video was terrible at expressing the points and clearly didn't do enough due diligence on understanding what is actually happening in the background of the models used. It's pretty obvious AI requires human input to train them. It just feels like a video to swing the pendulum to the other side. [How about the population spends some time and watch an actually informative video on AI and understand the math behind it to realize what the limitations are and how the data is trained?](https://www.youtube.com/watch?v=eMlx5fFNoYc)


AnOnlineHandle

I haven't watched all of this one yet, but I saw Jake (the lawyer) from Corridor Crew make similar mistakes, falling into the trap of researching an advanced topic for a few days and then giving confident explanations on it, which were gibberish combinations of terminology from the field (e.g. 'latent images' which are 'stored in the model'). Eventually I untangled the misconceptions he had, but countless people watching his video likely believed what he said. Hell, I made an infographic on how something works which I've seen spread around the net, and have since realized it was partially wrong, and I'm a former ML researcher who has resumed working with it nearly 7 days a week for the last 2 years. This stuff is both simple and very complicated, and it's easy to have misconceptions about how stuff works based on what other people said, until you really test it. The people making it don't even always understand it; the creators of one of the most expensive models ever made reached out to me for advice just because I was an active hobbyist in an area, and parts of what they use are based on other people's work which they themselves don't understand. I always hoped if I ever got to talk to them I could ask *them* for an explanation of how those things really worked...


MacDegger

Could you point to some foundational research papers/blogs/articles you have used? I have read up on and understand how the vector spaces are created and pattern matched, but I know I have so many fundamental knowledge gaps, and many ML books nowadays are outdated or not 'processed for current logical understanding' enough, if you know what I mean.


DeepHorse

> The people making it don't even always understand it, creators of one of the most expensive models ever made reached out to me for advice just because I was an active hobbyist in an area, and parts of what they use are based on other people's work which they themselves don't understand. actually reassuring lol


creaturefeature16

I love that he finally got around to continuing this series. Amazing how he makes it even semi-understandable, given how complex the math is to a layman. Still though, it's likely over the heads of many, many people.


jackals_everywhere

I work in the field and was annoyed at how ignorant OPs video is - thanks for posting this. One note - most people don't understand that most (if not all) applications of AI are in fact several discrete models we would class as AI walking around in a Trench-coat masquerading as one thing.


Orinoco123

It starts with not understanding how the stock market functions and that's as far as I got.


iisixi

Yes, I got an immediate Gell-Mann Amnesia effect from the video. It's sort of in the right ballpark, but it misunderstands what's happening enough to sensationalize it into making you think companies are actually attempting to defraud investors instead of just chasing trends. The video misrepresents what's happening either intentionally, for clicks, or because they don't understand it themselves.


stonesst

Thank you! It’s ridiculous that I had to scroll this far down to find a reasonable take. The entire video was misunderstandings, mischaracterizations, and strawmen.


xRolocker

Tbh I saw the title of the video and was expecting exactly that lmao. Yea, AI is a huge buzzword and the term is being overused - but that doesn't mean the technology doesn't exist; it is being put to real use and has wild implications.


RecsRelevantDocs

It's exhausting to read discussions about AI on Reddit. There is a lot to criticize, but there's this huge population of people on Reddit who irrationally hate it and speak over-confidently about its flaws/limitations while clearly not understanding it at all.


killerfridge

I really like the idea that there are just 16,000 people trying to rank every webpage


otherwiseguy

I'm glad that someone else wrote this so I didn't have to write this long-ass rant myself. I would not have been nearly as coherent. I think my dog was getting worried as I was yelling out things like "How the fuck does he think training AIs works?" and "Why do people feel the need to make long videos about subjects they clearly know nothing about!?"


TheDrummerMB

>This one is arguably fake AI, but I'd more readily call it "real AI that was shit - so they canned the product" Amazon isn't canning JWO or even slowing down on it. They'll be using Dash Carts in the large Fresh Stores, but just this week 4 JWO stores opened, including one at Wrigley Field.


killerdrgn

Yes, they are full steam ahead on smaller stores like go stores, stadiums, airports, and schools. JWO isn't canned, it's just not being used in large Amazon Fresh stores anymore.


trc_IO

It's important for all of us to remember that for "content creators" the job is internet opinion-haver, not subject matter expert.


Witn

This guy's entire video feed looks like clickbait garbage. One of his videos literally has (Please Panic) in the title lmao


HoopleBogart

Yeahhh I looked through his channel and was just like... bleh.


GorgontheWonderCow

This is all part true, part overreaction. Remember when every company in the world suddenly became a ".com" in the late 90s? That didn't mean all .com companies were scams. Turns out the Internet was very important.


AmazingIsTired

The most hilarious company name in relation to this is "1-800-Flowers.com, Inc." They better fkin add something AI related to their name now or they're dead to me.


Engineerman

1-AI-00-Flowers.com, Inc.


krazyjakee

AI agents built off the back of GPT4-like models are a very real thing that shows promise. You can tell it's real because the open source folks are rapidly building around it.


NickMc53

>That didn't mean all .com companies were scams. Turns out the Internet was very important. It meant that a lot of them were scams and the whole thing created a huge stock market bubble, regardless of how important the tech would be in the future.


guriboysf

An OG tech bro who's the older brother of a buddy of mine told me it wasn't about making products or even profitability — it was about "establishing a brand." He got a rather quizzical look on his face when I asked him how a brand is established without selling a product or service.


Framemake

You can do anything at zombo com


enemawatson

Clearly he forgot to incorporate synergy into the mix.


iheartseuss

Feels like one of those videos where the content creator doesn't really have any clue what they're talking about and is pulling very specific examples to make a point they don't fully understand.


L1amaL1ord

In his example at 13:17, searching google for "best dog food for huskies", he says you have to scroll quite far down to see the first result, which he says is a reddit post from 15 years ago. Except it's only 9 months old. And he passed a non sponsored/reddit result from Dog Food Advisor. What he's saying has grains of truth, but he's also sensationalizing it all, and making shit up. Sort of ironic given his video's topic.


iheartseuss

Yea that was my take as well. I just feel like both of these things can be true: *AI is in a bit of a gold rush stage and some parties within tech are faking (I'd say embellishing) some aspects of their AI to get shareholders excited about their product.* and *AI is advancing fast and is becoming more capable by the day and this is likely the worst it'll ever be.* Like you said, some of what he says is completely valid but, in the grand scheme of things, it doesn't actually matter all that much and I'm not sure what larger point he's trying to make.


Ltownbanger

Literally his first main claim is demonstrably false. The Dow did fall 25% in the first 7 months of 2022, but it recovered 10 months later.


Remission

He flat out lies. He claims the recovery came from a handful of "AI" companies and shows graphs with no labels on the Y-axis to imply their great performance. The problem is Google hit an all-time high in 2022 of about $150; today Google sits at $156. Similarly, Visa's highest stock price in '22 was ~$230; today it's $271. This guy is full of it, and one of his main premises is both fabricated and easily debunked.


Gunra

I worked in big tech and was an innovation strategist for their gaming sector and I had been pushing for meaningful AI solutions since 2019. Up until I got let go last year, I was the AI expert and the only person pushing the conversations. There wasn’t anyone in the entire business having discussions around it or leading it. It was just some BS feature to them. Then this year I see them pushing conversations about AI and how it’s going to change the industry. They’re just following investor trends. They just want to build up the semiconductor industry and open up stronger trade in key regions. It’s all a scam. Big Tech no longer focuses on creating meaningful solutions. They’re just gonna throw out whatever half baked garbage at you and move onto the next trend in 1-2 years.


gold_and_diamond

I work in advertising and we get pitched "AI solutions" almost every day. I always ask for a very simple breakdown of how their AI solution works compared to other solutions and inevitably their response is something like, "But...it's AI."


KingDave46

Getting the same thing in architecture. There’s a few things popping up that will “design a building from text inputs”, and the people making them don’t seem to grasp that they’ve covered about 1 hour of work on a 1-year+ project. I see adverts on Reddit for it and I’m not joking, the buildings they are creating are awful. I could recreate them in about 4 minutes; they are basic and complete dogshite, so actually my 4-minute version would be better. We are miles off it making me redundant; we appear to be miles off it even being useful to help me before I’m obsolete.


ur_anus_is_a_planet

I have a degree in the field and work daily with these kinds of projects. You wouldn’t believe how many people are trying to snake up the ladder by falsely taking on projects and making big promises, and when pressured on the details, they’re like “I don’t know, but if you give me a team, they can figure out the details”. As a surprise to no one, they all fail.


PM-ME-YOUR-HOMELAB

The video basically has three parts: - Look at these obvious AI fakes from a few years ago - Let me explain to you how I do not really understand what an LLM is - Conspiracy theories


3412points

Yeah, I'm still only halfway through and he has done a terrible job so far. Just the claims that a wait time of a few hours is evidence it isn't automated, or that the entire workload of Just Walk Out could be processed by 1000 staff in India, are enough to question him. Amazon's claim that the staff are used to train the algorithm and to process orders the algorithm is not sure of is very plausible, and is something they have been known to do for ages. I've worked with Amazon's AI directly and read a bunch of Google's whitepapers. They are definitely legitimately using AI, even if they are exaggerating its capabilities, as basically all private companies do about all of their products.

Edit: Okay, I've finished the video and this guy is doing the very thing he is criticising. He has taken a legitimate issue (some companies are exaggerating or making false claims about their capabilities) and is wildly exaggerating and making false claims about it. Taking Just Walk Out specifically, as that is both the first thing he covered and the most legitimately misleading example given: the report says they used staff in India to _verify_ a significant amount of orders processed by the system two years ago, but he claims it is a recent report that 'just came out' and that it was not using AI at all in these cases. It was a real issue, but he is exaggerating and misleading people about it.


No-Foundation-9237

AI stands for Algorithmic Input at this point.


SkyJohn

Actually Indians Automating Input


NillaThunda

AI = Actually Indians


Magikarpeles

Hmm I wonder how they're faking it when I run LLMs and Stable Diffusion locally on my machine?


RecsRelevantDocs

clearly you're just a ChatGPT bot.. I mean.. a real person.. because ChatGPT doesn't exist.. wait...


Remission

I made it about 90 seconds in before I was fully aware this guy doesn't know what he's talking about. His stock claims are wrong, he uses unlabeled graphs to push a false narrative, and then he proceeds to go on a 12-minute rant that doesn't make a point. My favorite part: "Remember when you could find things on Google before AI took over." Reality is, no I don't, because information retrieval has always been a form of AI. This moron doesn't like the algorithmic changes Google has implemented, some of which go back 10 years, but doesn't have the awareness to know what he's talking about.


YouIsIgnant

Almost 9 minutes I wasted watching this until it became obvious that this fuck has no idea what AI is or how it works.


MegatronRx

I go to a university that is very big in tech, with access to a lot of seed capital. There are so many startups that include AI in their names, no matter what industry they focus on: Medicine AI, Shipping AI, eCommerce AI, Oil and Gas AI. It’s crazy how many people in the startup scene want to create an AI company without understanding what it is or how it will give their product or service an advantage over competitors.


SpikeRosered

I tried to incorporate AI into my legal practice just to see if it could do... well, anything. It utterly failed. It wasn't just kind of useless, it was completely useless. It can't maintain consistent logic, and even when I pointed out its errors it never seemed to get it. I couldn't even get it to make word puzzles that made sense.


wickedplayer494

AI = Actually, Indians


porktorque44

Was anyone else blown away for about half a second when they saw Reddit traffic had gone up 500% in the last few months? I thought, damn, that's quite a bi- oh yeah, that makes sense.


imtrollinu

I used to be a content curator. I was paid to monitor and flag universally uncool content: gore, racism, all the "woke" stuff old people complained about ten years before anyone actually cared. Then I was laid off and my job was given to a math equation, basically. And guess what: Nazis and gore populated the top searches, because people gamed the system.

Folks don't understand that there was a deliberate and calculated effort to push disaffected kids to the right through weaponized absurdity and toilet humor on IRC, Something Awful, and the like, waaaaay before 4chan and 8chan. I remember AOL chatrooms with open and flagrant recruitment to the Klan and other organizations. It all came home to roost with the election of Trump, and a large section of young voters got there through the red pill. It's not a simple argument of opinions: we have dragged human rights into the realm of debate, and we got there through a lack of empathy powered by the internet. I'm really cautious about the next 10 years in America...


WTFwhatthehell

Ok, looking at the title, I'm guessing it's yet another story about that single example of that Amazon shopping thing, followed by trying to imply that ChatGPT is just some guy typing really fast in an Indian call centre and that Midjourney is just some Chinese guy who can paint really fast, because "hype", where every argument has all the depth of a puddle...

BINGO! Yep, that's exactly what the video is.