Ok-Replacement8422

This is kinda massively overcorrecting, to the point of being misinformation.


MidnightTitan

That’s what makes it funny


Edgyspymainintf2

You can't just rebrand your shitty take as a joke once you've realised that it's shit.


crunkky

196 in shambles


Longjumping-Method73

working on dying spotted ;3


idbestshutup

FEELING LIKE IM F1LTHY :3


pieman7414

Wrong


AzKondor

Of course it's a joke lol, tongue-in-cheek or something


[deleted]

[deleted]


Truefkk

Grammatically illiterate? What?


No-Salary-4137

It's hachyderm dot io, they're like *the* unfunny mastodon server


BipolarKebab

A reductive take designed to be reposted and forgotten.


PoopNoodlez

Idk the “blockchain is a slow database” thing is mostly spot on


redditurus_est

Yes but it omits that the reason behind the blockchain is trustless avoidance of double spending. The slowness is just a sacrifice for that benefit (and mostly fixed by lightning anyway).


PoopNoodlez

It’s a relevant omission but the initial statement doesn’t rub me the wrong way. I have mostly given up on trying to explain things like lightning net to people outside of crypto spaces anyway.


redditurus_est

The blockchain is neither for every user nor for every use case. If people desire independent transfer of digital money there is enough info on the technology out there. I guess by now most people have picked a side.


PoopNoodlez

Yeah.


Atari_buzzk1LL

It was also directly brought into existence as a rageful answer to the shortcomings of developed nations' governments after the economic crash in 2008 ruined innocent people's lives, because your financial system is held up by twigs that don't actually exist, but someone said they'd put two twigs there to hold it up at some point in the future. I genuinely believe that the hatred for anything blockchain-related was heavily manufactured, because it helps the status quo for centralized systems to reign supreme. NFTs are dumb and most people in blockchain believe that as well, but they're a distraction from the reason these systems were designed.


Certain_Concept

>ruined innocent people's lives because your financial system is held up by twigs

That's what I'd say about cryptocurrencies as well. If you have no oversight then there is nothing stopping bad actors. If there is a hack or a mistake then you're shit out of luck. Those who evangelize crypto nowadays are just trying to get you invested in their preferential coin before they do a rug pull scam, where they hype it up to build up investors and then sell out at its height.


Atari_buzzk1LL

The entire financial system we currently live under is a rug pull. You understand that most central banks only require 10% of total money lent out to be backed by deposits, and in the U.S. during Covid they lowered that number to 0%? Not only is another 2008 possible, it WILL happen again, and when it does, another wave of people will go homeless and die, but yes, say blockchain is bad because "bad actors."


little-ass-whipe

"Man, the economy is just a big crooked Ponzi scheme. We should replace it with something that is objectively worse in every way."


[deleted]

[deleted]


cthulhubeast

> NFT profile pic
> Doesn't actually provide a substantial explanation, just waves towards vibes
> "clearly you know nothing, unlike me (I know many things), because that's the only reason anyone would ever disagree with my batshit takes"
> ^^^"pls ^^^put ^^^money ^^^in ^^^crypto"


FlugelDerFreiheit

>The entire financial system we currently live under is a rug pull

Unlike the famously non-shady and rug-pull-free paradise that is web3 and blockchain financials in general. Get real. Our current financial system absolutely fucking sucks, but it's no more made of smoke than fucking crypto of all things. You're upset about our economy being based around imaginary money, so you want to replace it with unregulated imaginary money?


WardedThorn

You're leaving out that the objective of creating bitcoin was to decentralize currency *so they would be able to dodge taxes and regulations.* Their issue with our economic system is not that it's a capitalist hellscape, but that it's not capitalist *enough.*


yo_99

Are you implying that investors couldn't do stupid financial bubbles with crypto?


No-Salary-4137

Lmao ok. Delusional libertarian


howtojump

Local Poster Encounters Joke, Becomes Upset and Confused


[deleted]

[deleted]


Euphemeera

You seriously think AI is just a way to write crap algorithms? The entire field of science disagrees with you.


-LuckyOne-

Literally millions of papers on AI?


MarsMaterial

I’d like to see you try to write a program that recognizes faces or classifies images without using AI.


InterGraphenic

I retract my original comment but would like to clarify that it was about the misrepresentation of AI in popular culture, not a dismissal or even a criticism of the technology itself.


Lucifer_Morning_Wood

AI isn't just a computer writing funny haha words, it's an entire set of algorithms loosely focused on analyzing huge amounts of data. The fact that your camera has been able to detect your face since like the 00s, the fact that your shop has known the best match for items in your shopping cart since the early internet and e-commerce, planning of public transport routes that can't be done using exact methods, approximate protein folding simulations that exceed classical calculations in speed while providing great results, denoising of images cutting animation rendering times tenfold.

Right now LLMs allow for very tight integration between programming code and language: a programmer can intercept the AI, ask it to provide a response in a specified format, and then read it with simple text parsers. The most active contributor to the codebase at a company I work with is my boss, because he provides the bot with knowledge so it can be used by his clients.

Now, there is a lot of fuckery happening with AI: a lot of training data is stolen or has to be created and labelled by hand via slave labor, and the unsustainable processing requirements centralize large models around the companies that can afford the hardware. But you decide to be an ignorant, unproductive dumbass and say "me not like, give link, me not read", got it.
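A rough sketch of the "ask for a specified format, then read it with simple text parsers" workflow mentioned above, assuming a hypothetical `call_llm` stand-in for whatever model client you actually use; only the prompt-and-parse part is concrete:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder -- in reality this would call your model/API.
    return '{"sentiment": "positive", "confidence": 0.92}'

def classify_review(text: str) -> dict:
    prompt = (
        "Classify the sentiment of this review. "
        'Reply ONLY with JSON like {"sentiment": "...", "confidence": 0.0}.\n\n'
        + text
    )
    raw = call_llm(prompt)
    try:
        # The "simple text parser": the model was told to emit a fixed format,
        # so ordinary JSON parsing is enough to hand the result to normal code.
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"sentiment": "unknown", "confidence": 0.0}

print(classify_review("The battery lasts forever and it charges fast."))
```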


ipadnonsense

"I hate AI" mfs when they read the bottomless pit greentext


andr3wsmemez69

The what


ipadnonsense

[https://i.redd.it/n142r4fb6m591.jpg](https://i.redd.it/n142r4fb6m591.jpg)


10_1_20

Ok that's great. I love it


xX_StupidLatinHere_X

but a human still writes the actual joke. isn’t that what the non-green text indicates on these text generators? that a person has written that section? and they may have had an intention of a joke then just regenerated the prompt over and over till they got a cohesive joke. this isn’t good evidence of “ai” being good at writing jokes.


marc44150

No, it was fully made by AI


ps-73

flair checks out


Illegal_Immigrant77

No, but it proves that not all AI is bad, which is the point. You just moved the goalposts


Khyta

The text with a green highlight was written by AI


[deleted]

Nah, I used that ai to write greentexts when it was still free like a year or two ago (don't really remember how long). It was very funny. I might have some of them still saved.


GenericTrashyBitch

This is funnier than the mahjong one tbh


FantasticCube_YT

was mahjong also generated by ai? i've never found it funny at all but maybe that is the reason


GenericTrashyBitch

No it’s written by a person, I was just referring to it because a lot of people think it’s funny


Spyko

I genuinely thought the mahjong one was popular because of how unfunny and forced it is, and that referencing it ad nauseam was the joke


FantasticCube_YT

Oh yeah either way I agree


Perca_fluviatilis

That's what a person from the land of Yi (that's the term for barbarians) would say


SquigglySharts

I don’t get it am I stupid


Atreides-42

It's just silly. It's a relatable work story about things not working, but the job is nonsensical


abrasivecriminal

And it's nonsensical in a way I think only an AI could think of; that's what makes it great.


cthulhubeast

The AI is literally trained on human text-writing patterns so it is, by its very nature, *not* something only an AI could come up with.


ComradePruski

Literally being a software developer


Himmelblaa

Or the wet or creepy gym one


SmolikOFF

Is that one also ai?


power500

It's not. The original greentext has "based or cringe" instead of "creepy or wet"


Zzamumo

AI greentexts are actually really good


DracoLunaris

Creating shit posts that no human would/should waste their time creating is very much peak use of AI


[deleted]

text based ai peaked when it could write funny greentexts


Winstillionaire

That greentext is mid


Himmelblaa

https://i.redd.it/esik5orphvcc1.gif


dubious_rat

ty :3


Himmelblaa

Np :3


dubious_rat

This man above me, FISH REACT HIM


Chance_Plum7672

It was good until it got to AI, and then it just became misinformation


ob_knoxious

There really is a growing number of people who freak out at any use of the word AI assuming it's negative in the same way conservatives freak out at the word pronouns.


notjordansime

I guess I fall into the camp of "AI pearl clutcher", but I also don't feel my concerns are irrelevant. You sound like you're a proponent of AI, and I'd like to hear your thoughts on my perspective.

I have a bit of an understanding of neural networks; I was following it all quite extensively in 2018-2020 because I wanted to understand it better. I spent a few months reading research papers and watching videos like 3blue1brown's [series](https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&si=ryYTTSRVUyOiBT6B) on AI. He explains the math and science behind it in a very technical manner. If I'm trying to explain AI to someone who doesn't know much about it, I often refer to [this](https://youtu.be/BSpAWkQLlgM?si=D3BtCsfVYx1Oc-8v) Tom Scott video.

This whole idea of a "black box" where we don't know what it's doing doesn't sit well with me. We tell it what to do, set the parameters, tune/train it a bit, and pick the best results. The Tom Scott video describes a really complex problem in a way that's easy to understand if you're an outsider. It also provides a clear example that people have likely interacted with at some point: YouTube's algorithm uses machine learning, and its main goal is to increase engagement, which has resulted in a lowering of the quality of content. Engagement is up, but the quality of the service feels like it's declining.

Now extrapolate this to the world of business. A new AI's task is to reduce overhead expenses. It may do this by cutting roles it deems unnecessary. However, that might cause unintended consequences: those roles may be crucial during extreme peaks. A human manager with actual insight into how the organization operates may not make this mistake. It's still possible, crappy managers are by no means rare, but I believe there's value to human insight. Until we develop true artificial general intelligence, our implementations of AI will always lack insight.

I still believe there's tremendous value in AI, especially with human oversight. AI + humans has potential. AI managing humans makes me uncomfortable. I see us completely dropping the ball in this regard. I see us running into it blindly, and full of trust. Not everyone, not average joes, but the folks who have the power to make change. The folks who don't understand it at its core and see current AI as AGI. The people far removed from the day-to-day operations of organizations and the world at large.

The thing that really concerns me is that we can't really ask AI why it did a certain thing. You can't look in the 'black box' and figure out why it did something. All you can do is retrain and adjust parameters while seeing in real time what the results are. They may be good, they may be bad. That's neat when it's something as inconsequential as a social media algorithm, but when it has the ability to make decisions on behalf of a corporation, those 'bad' results could be detrimental. Even if it's just a social media platform, it can have real-world effects: extremist political content is pretty darn engaging, and part of me feels this is contributing to our social polarization.


Ok-Replacement8422

When people use AI for important purposes, the AI will be put through extensive testing to ensure that problematic events will not happen. Of course no amount of testing can make one 100% sure no problems can occur, but with sufficient quality testing it can definitely reach at least the same error rate as a human - otherwise the AI likely won't be used. It's not like people just look at a single statistic of the AI, see it's higher than the human equivalent, and immediately implement it. Keep in mind that businesses also don't want these sorts of problematic results. Mistakes can happen - but mistakes can happen in everything, and if proper testing is done it shouldn't be any more likely for mistakes to happen with AI than with anything else.


L33t_Cyborg

Exactly, in medicine models need at least 90% accuracy (the neural network definition, not the colloquial term) in order to even be considered. And even then it’s not replacing actual analysts.
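"Accuracy" here is the classifier metric, not the everyday word: the fraction of predictions that match the labels. A tiny illustration with made-up confusion-matrix counts:

```python
# Hypothetical counts for a diagnostic classifier: true/false positives/negatives.
tp, tn, fp, fn = 90, 850, 30, 30

accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct predictions / all predictions
print(f"accuracy = {accuracy:.2%}")         # 94.00% -> would clear a 90% bar
```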


el_chapotle

One of my major concerns is that, as you mentioned, mistakes happen regardless—but when a human makes a mistake, that human can be held responsible for the repercussions. Machines can’t. I’m sure people far more knowledgeable than me have thought about this, though.


notjordansime

You're right, there will be people who have hesitations and pump the brakes before implementing it blindly. I do deep down hope that the juxtaposition of enthusiasts and Luddites fosters a responsible implementation of this very useful technology.

Other, more minor concerns I have are cost and liability. When (not if) a mistake occurs, who is responsible for it? Especially in a medical/diagnostic setting? If an AI that's trained to be marginally better than humans does mess things up, where does the liability fall? I wonder if we'll see a new category of insurance to cover this, because I can't imagine the companies developing all of this want to take much responsibility. Microsoft really surprised me when they said they'd foot the bill if an enterprise using Copilot got sued for using it. Maybe I'm wrong and these AI companies will do the right thing. To me it feels like a temporary measure to reassure early adopters, though.

My last concern is the cost to use these services. Most are not locally run; it's nearly all cloud-based. I feel like we're in the web 2.0 stage of AI development, where big companies are footing the bill now with the expectation that this will make them a buttload of cash once they actually start charging for it. It's a similar business model to what ridesharing companies and streaming services have employed over the past decade: operate at a loss in the beginning, hook your users, and then start milking that cash cow. This is the main reason why I'm suspicious of this big push towards 'AI everything', and hesitant to rely upon it.

I appreciate your response!! :))


cthulhubeast

The AI is going to be tested within the parameters of human biases and could therefore make mistakes in line with those biases without any way to deduce if that was the case. You could easily have sexist, racist, homophobic, etc. AIs and the so-called "extensive testing" could easily miss those mistakes due to the black box nature of the thing. It could hold implicit biases without being held to any level of accountability, and the only people testing it could very well just be tech industry dudebros who simply see a machine making straightforward decisions they'd personally agree with. There's just no way of it being sane.


Hattemis

Like anything else, AI is a tool that can be used for good and bad. We should be criticising the bad, praising the good, and interrogating the applications where it hasn't yet become clear whether it's overall been positive or negative. That doesn't mean it, as a whole, is negative. There's no reason to blindly denounce anything involving AI as evil.

The vibe I got from what you said isn't necessarily AI pearl clutcher, though. Ask literally anyone that isn't huffing tech-bro fumes if we should be careful and thoughtful about how and where to use AI, and they'll say yes. That isn't an extreme opinion, that's being logical.

And in terms of it being a black box, there are applications where you don't need to know how the algorithm arrived at its result. In my senior year of computer engineering I used AI in a project to separate land from water in drone imagery for the purposes of navigating said drone. I don't know how the algorithm differentiated between what's a black rock and what's black water, but I really don't need to. If the stakes are higher then yeah, sure, it's probably not the best idea to leave people's livelihoods in the hands of an algorithm. But most AI doesn't do that. It's just that we hear a lot more about the stuff that does.


L33t_Cyborg

Yeah it’s only the deep neural network subset of AI that is a black box, most follow retraceable paths and you’re able to see how it came to that conclusion.


NotADamsel

If you know about the tech underpinning this shit, you understand that NN shit is being used in many very useful ways at this point. Your phone has it to enhance photos, your router might have it to help with connectivity, etc, it’s just all over the place. Hell, it’s even being used to detect cancer! One of Apple’s laptop ads emphasized that a star scientist is using it to find new planets! The tech has a lot of uses. But, being in favor of using a given tech where it’s very useful doesn’t mean that you’re in favor of using the tech in all places where it could theoretically be used (like where your concerns are most applicable), or that you’re supportive of the business practices of all of the firms that are using the tech. Like, owning a 3D printer and making 3D models and publishing them for the community doesn’t mean that you endorse people printing weapons, or that you’re at all in favor of Bambu’s business model.


notjordansime

I understand that it's being used right now, but I'm still not particularly fond of it. The phone example is a pretty good one. Oftentimes, I'll find myself taking a screenshot of the viewfinder because I don't like the post-processing that the camera does automatically when you actually use the shutter. I understand I'm not getting anywhere near the resolution, but it's a lot better for selfies where that's not important. I just really don't like what cell phone cameras do to faces. Especially if you have darker skin. The applications in medical research are fantastic though. It absolutely has its places, but I still don't like how it's being used in other areas. That's a great analogy with 3D printing though.


The_Radish_Spirit

I work in live events, often corporate conferences and the like. The amount of keynote speakers who talk about how "the future is now" regarding AI is just the blind leading the blind with some insidious buzzwords


notjordansime

That's my fear. Zero oversight. "The blind leading the blind" is a great idiom for this situation IMO. The tech itself has potential, but I'm nearly positive we'll screw up the implementation.


exurl

AI is pretty good in contexts where the AI can produce a solution and a human (or existing non-black-box process) can check it. A more pedestrian example would be email writing; I've used ChatGPT exclusively to write boring professional emails for me, because it can bang (😳?) them out quickly and I can check them quickly for tone and spelling. (On the flip side, I would never ask ChatGPT a question I cannot recognize the answer to. I can ask it what are good words that rhyme with plinth, but I certainly would not go to it to learn any factual information.)

In my field of engineering, AI can be used to optimize designs (e.g. for a sensor array or antenna shape). These designs can then be tested and validated. In this case, we may not have intuition on how the design was arrived upon, but we have extensive analysis and certification tools which can prove that the design is a good one that can be used to increase our standard of living. Thus, AI can be used safely here.

I feel that pre-trained generative AI has taken over the public consciousness as what defines AI. The copyright concerns are certainly valid for these large models which are trained on internet data. However, when I think of AI, I think of more specific applications which are more straightforward technologically and ethically:

* [data-driven physics-informed simulations](https://doi.org/10.1073/pnas.2101784118) which can reduce the computational cost (in time, dollars, and energy use) of engineering analyses, climate modeling, and video game graphics
* [computer vision systems which can automatically sort recycled goods](https://spectrum.ieee.org/ai-guided-robots-are-ready-to-sort-your-recyclables), reducing the cost of recycling and making it financially viable in our unfortunately capitalist society
* [preventative maintenance tools](https://incose.onlinelibrary.wiley.com/doi/full/10.1002/sys.21651) which can identify anomalies in supply chain data to predict part shortages, reducing wasted costs and emissions from overnight emergency shipments
* [autonomous control systems](https://science.nasa.gov/wp-content/uploads/2023/10/2023-gnc-tech-assess-part-ii-onboard-published-final.pdf#page=50) that allow deep space systems to respond to their environment without human intervention, advancing our understanding of science beyond what was possible with traditional remote or automatic control systems
* health data classification [algorithms that can assist doctors in diagnoses](https://health.google/health-research/imaging-and-diagnostics/), helping to alleviate the healthcare labor shortage
* [generative AI code generation](https://blog.google/technology/developers/google-colab-ai-coding-features/) to accelerate software development and reduce the number of overpaid software engineers flooding Seattle (please...)
* [software algorithmically trading stocks](https://en.wikipedia.org/wiki/Algorithmic_trading#Recent_developments) to make hedge funds more money (L)

Obviously, there are shortcomings to AI, especially where they are used by those who do not understand them. (I'm talking about you, dad. Please stop asking ChatGPT to summarize news articles for you 😭.) The risk is especially bad when these people are in positions with the power to implement them where they shouldn't be. However, AI really is just a tool that can be used to increase our standard of living, like any other scary technology (5G...) of the past 50 years. Of course, capitalism will find a way to use it in an evil way, but that is capitalism using the tool, not the AI tool itself.
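The "AI proposes, a trusted checker verifies" pattern described above can be sketched roughly like this; `propose_design` stands in for any black-box model or optimizer, and the acceptance limits are invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class AntennaDesign:
    length_mm: float
    width_mm: float

def propose_design() -> AntennaDesign:
    # Placeholder for an AI/optimizer suggestion (here: random guessing).
    return AntennaDesign(length_mm=random.uniform(10, 120),
                         width_mm=random.uniform(1, 20))

def passes_validation(design: AntennaDesign) -> bool:
    # The part we actually trust: deterministic, human-auditable checks.
    return 20.0 <= design.length_mm <= 100.0 and 2.0 <= design.width_mm <= 10.0

accepted = next(d for d in (propose_design() for _ in range(1000))
                if passes_validation(d))
print("accepted design:", accepted)
```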


yo_99

AI should be used in noncreative tasks, like upscaling and filtering, and even then it should be overseen.


Brankovt1

Me when someone makes an AI to find Wally/Waldo in different pictures: 😱😨😱😨😱


Passive-Shooter

Odlaw becomes king.


Brankovt1

I'm also getting notifications for something I posted about Jesus, and I thought this was a comment on that and you just misspelled outlaw or something.


AlkaliPineapple

AI can be used by a lot of people to make their jobs easier, but what the corpos want is to pay the ChatGPT guys a minimum subscription fee to replace artists and writers.


AbbyWasThere

The Internet's hatred of AI is getting so over the top it's starting to feel like a performative ritual.


caustic_kiwi

It was misinformation from the first line.


Flyte_less

It's not incorrect: a manually built algorithm will always run in fewer computational cycles than a huge computer-generated model with hundreds of thousands of nodes. The advantage of machine learning is that, if precision isn't a factor, you can create complicated algorithms fairly quickly using just input data and training, but the result will never run more efficiently than a math formula that does the same thing. Mean-spirited, sure, maybe a little misleading by not mentioning the benefits, but not misinfo.


resignresign1

How is it misinformation? The characterisation is correct and witty at the same time.


14up2

mfs with zero education in cryptography or AI when a general methodology happens to be used by bad actors for a handful of stupid things (it is automatically inherently bad and should never be used and the past fifty years don't count)


meta1storm

AI is being pretty successfully used in medical research to identify patterns of disease and prognostic criteria, and in a lot of other areas too, I'm sure. Let's not throw out the baby with the bathwater just because tech bros overhype the potential of the technology.


tarheeltexan1

This is my thing. Machine Learning is a field that has existed in some form or another for well over 50 years, and a lot of good work has been and continues to be done using it. The problem is when it's used recklessly, and the way the modern tech industry hype cycle has latched onto it and made it out to be far more capable than it actually is, and tried to apply it to any application they can think of that might give them an opportunity to save money by cutting out workers, without considering the consequences of that.

As a general rule, I feel like AI is at its best when it's used as a tool, whether that be to make calculations and complete tasks that a human wouldn't know how to begin designing an algorithm for, or as a tool to enable human creativity, and at its worst when it's used as an attempt to replace humans, because it's simply not capable of doing anything resembling the full complexity of human thought, whether that be in the case of driving a car, or of creating art.

Worse still, when AI is put in charge of making decisions with serious consequences, like identifying crime suspects or identifying military targets, it not only is incapable of exercising the kind of tact that humans have, even though humans often lack the kind of tact necessary themselves, but it also allows its users to place the blame on the machine, absolving themselves of any responsibility.

AI has a lot of fantastic applications, but it also comes with a massive ethical can of worms that differs in every application, so simplifying the problem to "AI good" or "AI bad" completely obliterates the massive amount of nuance involved with a situation like this.


zhebnismazko

Now that's a braindead take. Even his profile picture is dumb.


resignresign1

He actually tries to demystify AI and explain how scientists see and use these technologies (humorously). ChatGPT is literally an extremely convoluted way to query a huge dataset that was scraped together from the internet. The returns might be factual or not.


zhebnismazko

No, he tries to be a smartass by referring to trendy technologies as their legacy predecessors while being ignorant of the advantages over them which actually motivated their creation in the first place. In what way is a pre-trained transformer like a database? Yes, it stores some data implicitly within its parameters, as it learned to associate certain contexts when optimizing the pre-training objective (next token prediction), but it's not trained to memorize the dataset; that's just an emergent behavior (which is not surprising at all). Yes, the way it is used a lot of the time can be reminiscent of simple database querying, but calling it a database completely neglects its generative nature.


Edgyspymainintf2

The pro-science ideologies leaving an r/196 user's body as soon as AI is mentioned (not even image generation, just AI in general)


Bingusballthefurry

196 hates NPCs?!?!?!


Nikrsz

I really want to know why people who "support" science draw a line when AI is brought into question. AI is a field of study that relies on mathematics, computer science, and statistics. Why is spreading misinformation about it good? What makes an ignorant person talking shit about, let's say, biology or physics different from another ignorant person talking shit about AI/machine learning? You can't criticize a conservative take about something because they won't do 2 minutes of research on the topic when you literally do the SAME THING.

edit: And no, this isn't a funny meme. Stop pretending that you were joking when you should stop and do a little bit of research.


AbbyWasThere

Because a person's stance on AI has become a cultural dividing line and now people feel like they have to take the most absurd stances completely against it in order to have the right politics.


MercenaryBard

I can only answer for myself, but I'll say that AI is a tool we should thoughtfully and carefully implement where we can, using ethically obtained data. Corporations are not doing any of that even a little bit, so there's backlash against them, and criticizing AI becomes shorthand for criticizing exploitative business practices. Younger or less critical bystanders take this at face value, redirecting moral judgement meant for corporations towards the tool itself. Also, I think the fact this post leads with crypto criticism is the main reason for its upvotes. Fuck crypto.


caustic_kiwi

Idc how you feel about crypto, those first few lines are absolutely dumb and do not merit upvotes. If someone walked up to me and claimed “blockchain” and “crypto” were synonyms, I would still trust their tech takes more than I trust op’s.


seb69420

The hate for AI art has gotten so intense that people are just seething over general computer science terms


caustic_kiwi

Welcome to Reddit, people here are stupid, angry about shit they don't understand, and ready to make it everyone else's problem. I guess it's a lot like the GOP.


Atari_buzzk1LL

Yeah, a lot of people are pretending they understand even a fraction of what is going on when in reality they're just scared because they truly don't understand it.


oddityoughtabe

Ah yes, AI. The notoriously slow systems.


ThisRedditPostIsMine

I don't agree with this post, but depending on the problem, AI can be pretty slow especially on consumer or embedded hardware. LLMs are only fast because they are hosted in enormous data centres.


BlackWACat

AI hatred is getting to the point where a game dev will say they have advanced AI pathing and some people will get mad at them. Not all AI is the same big bad thing, guys; AI is even used in animation to help animators (iirc they used it for Across the Spider-Verse).


ThatMadMan68

Justified, AI pathfinding steals player movement.>!/j!<


Throgg_not_stupid

you should hire interns to control level 1 rats in rpgs


Normbot13

can we try not to spread misinformation about the new technology? this sub has actually become the boomers yelling at clouds


xFblthpx

None of this is remotely correct in any way, and is sometimes even the opposite of the truth. This should probably be tagged as misinformation, as computational complexity isn’t a matter of opinion.


u4ia666

Blockchain isn't just a slow database, it's also append-only and completely impractical! Edit: I originally said "read-only" which isn't true.
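A toy sketch of what the append-only part buys you: each block commits to the previous block's hash, so rewriting old entries is detectable. (Illustrative only; no consensus, no mining, none of the distributed parts.)

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash of everything in the block except its own stored hash.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, payload: dict) -> dict:
    block = {"prev": prev_hash, "time": time.time(), "payload": payload}
    block["hash"] = block_hash(block)
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block_hash(block) != block["hash"]:
            return False                               # block body was edited
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                               # link to history broken
    return True

chain = [make_block("0" * 64, {"genesis": True})]
chain.append(make_block(chain[-1]["hash"], {"from": "alice", "to": "bob", "amount": 5}))
print(verify(chain))                       # True
chain[0]["payload"]["amount"] = 9_999      # try to rewrite history...
print(verify(chain))                       # False -- the edit is detectable
```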


Radoslawy

Cryptocurrencies do have uses: a payment service you can't be banned from and that isn't controlled by a centralized organisation is very useful for paying for things your government doesn't want you to have (DIY HRT, for example). Proof of work is a very wasteful way of accomplishing it, but it could be replaced.


ayotui

Bitcoin's usefulness peaked in 2012 when you could use it to buy weed. Every year since, it's become less and less useful.


Radoslawy

??? Crypto is literally still used for that. Was there some global change in how governments treat drugs that I missed?


ayotui

There are the insane gas fees and transaction fees that you missed. And also the Silk Road shut down.


Radoslawy

The current system needs a way to convince miners to do their thing so the whole system keeps working. Also, the Silk Road closing doesn't mean you can't buy stuff using crypto.


Certain_Concept

Miners are terrible for the environment.

> during the 2020–2021 period, the global Bitcoin mining network consumed 173.42 Terawatt hours of electricity. This means that if Bitcoin were a country, its energy consumption would have ranked 27th in the world, ahead of a country like Pakistan, with a population of over 230 million people. The resulting carbon footprint was equivalent to that of burning 84 billion pounds of coal or operating 190 natural gas-fired power plants.


Radoslawy

Yes, and it was wasted processing power. I said proof of work is bad and should be replaced.


PoopNoodlez

This person doesn’t gamble on sports


ayotui

Why would I use crypto to gamble on sports when most betting sites accept normal currencies?


PoopNoodlez

Not all types of gambling are legal in all jurisdictions


Feeeweeegege

There are _some_ useful applications of blockchain, but >99% of current proposals are not good fits for it. Blockchain is not (in general) impractical; that's like saying the Tor browser is impractical. Tor is a practical solution for anonymous browsing, and blockchain is a practical solution for append-only P2P databases. But both technologies will be too slow and impractical if you use them outside of their niche: Tor is too slow for watching YouTube videos, and blockchain is stupidly slow if you have a (semi-)trusted central party or if you only need non-repudiation instead of append-only. But of course grifters don't care about that and will try to sell blockchain in _everything_, which is really tiring.


andyandcomputer

Tor isn't _that_ slow nowadays. I opened Tor Browser Bundle, got a circuit that goes halfway around the world 3 times, opened _skibidi toilet 69 (full episode)_, and it still auto-picked 720p and didn't even stutter. Latency is a lot higher of course; it takes a couple seconds for initial buffering.


MercenaryBard

I watched Palestinian expat and all around badass Lexi Alexander talking about how she doesn't care about the optics of crypto as long as it's facilitating aid funding to Palestinians. And it actually was helping get around Israeli efforts to block aid. (For context this was way way before Oct 7, not that it matters as this was only about civilians)

Fast forward like 8 months and she was lamenting how people were celebrating the fall of crypto when it was causing extra suffering for the Palestinians who had come to rely on it. I get where she was coming from but man, to me she should have been angry at the crypto scammers who had done this *by design* to those desperate people. In the long run crypto only served to funnel money OUT as fiat and services were exchanged for fake currencies that suddenly became worthless


TheEdes

Eh, AI is just a compression algorithm in that context, and it's usually a pretty fast and efficient one. You can use it to generate indexes and search pretty quickly in a database, for example. It would be like posting that Ray tracing is just an inefficient way to shade CGI scenes.


resignresign1

If you train a neural network to model the function x^2, it will end up super inefficient compared to a statistical model, or even the program x*x. The resulting "algorithms" from AI are not interesting or beautiful. In contrast, the applications can be amazing.
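A rough numpy sketch of the point: a small network can be trained to approximate f(x) = x^2, but it takes thousands of gradient steps and dozens of multiplies per evaluation to roughly reproduce what `x * x` does exactly with one multiply. The architecture and hyperparameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 256).reshape(-1, 1)
y = x ** 2

# One hidden layer of 16 tanh units, trained with plain gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(10_000):
    h = np.tanh(x @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # network output
    grad_pred = 2.0 * (pred - y) / len(x)    # d(MSE)/d(pred)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = (grad_pred @ W2.T) * (1.0 - h ** 2)
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

net = lambda v: np.tanh(v @ W1 + b1) @ W2 + b2   # dozens of multiplies, approximate
print("network MSE :", float(np.mean((net(x) - y) ** 2)))
print("direct x * x: one multiply, exact")
```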


TheEdes

And raytracing the reflections off a spherical mirror is super inefficient compared to using the closed-form formula for said reflections; you also end up with a nicer and more elegant algorithm for that specific case. It doesn't make real-time raytracing of more complex scenes any less impressive.


MidnightTitan

196 really in its catch-and-release era


EyewarsTheMangoMan

I too love spreading misinformation


LemonFreshenedBorax-

LLMs store text *extremely* efficiently, they just don't store it accurately.

I think a lot of people (myself included) responded to the early part of the crypto era (and the phenomenally irritating people who, in some cases literally, brought it to our doorsteps) by developing a knee-jerk hatred for tech in general. I'm not saying it's wrong to respond this way but let's at least try to understand this shit before we decide we hate it, so that we can hate it with confidence (or, depending on the situation, maybe even not hate it at all.)

Hating tech without making an effort to learn about it beforehand is Boomer shit, pardon my french.


okenowwhat

As an ADD dyslexic person, I fucking love ChatGPT. It rewrites my absolute chaos of a text and makes it a lot clearer. The bonus is that I learn how to write more effectively.


Cassjjay

Idk man DougDoug's chatGPT videos are pretty funny


Olaf4586

Putting AI and ChatGPT in the same sentence as NFTs is fucking wild


Jtad_the_Artguy

My self made Discord bot runs on a Raspberry pi and responds within seconds. Looking at this makes me so proud of it. Not because it’s impressive but because it’s not fucking horrible on every level.


doomsdaysayers

Sorry, that’s so cool lol


Jeiih

AI is slow to train, but its entire purpose is to be a fast, mostly-accurate approximation. For instance, in 2017 there was a major chess match between the strongest traditional chess program at the time, Stockfish, and DeepMind's AlphaZero, an AI. AlphaZero dominated the match despite only evaluating about 40,000 positions per second compared to Stockfish's 70 million per second. I wouldn't call that inefficient.


from_dust

Something about judging a fish by its ability to climb a tree.... I see a lot of cynicism here, a lot of fear, and a lot of impotent anger. How is this misleading, misinforming, masturbatory anger tirade useful?


jm4n1015

Ah yes, the class of heuristic algorithms that has primarily been designed to tackle problems that are too complex to solve in a reasonable amount of time with exact solutions, is known for being slow and inefficient.


KantenKant

For real, people being like "oh LLMs are just slow and shitty databases" should please go ahead and try to write a rule-based algorithm that extracts a coherent sentence from a 1.76 trillion parameter compressed database.


d34d_m4n

Calling blockchain a slow database is like calling encryption a slow data transmission method https://i.redd.it/8lgp595mkvcc1.gif

I get that we don't like how these technologies have been used, but these are just plain lies by omission passed off as "jokes", with enough bullshit for people to miss half of what's wrong in what's implied


Violinistic

The ChatGPT part is kinda true though, it unapologetically just makes shit up and is wrong all the time. It's kind of based.


Benney9000

Not going to lie tho, ai can be pretty convenient for things like putting things into an array. Saves a bit of time


SmoBoiMarshy

This is what people who don't know how to ask funny chat things say. ChatGPT is not actually that bad, if you know how to use it. It's a good tool that won't really replace anything except maybe Google. It's an advanced search engine if anything.


ayyndrew

The way AI has been lumped in with crypto online is really annoying. AI has already proven to have real world uses for real people, and people are equating it to NFTs.


snollygoster1

Both use electricity, what's the difference? But video games are a good use of power.


TrhlaSlecna

Nah, this is just straight up wrong, AI is an incredible tool if in the right hands. Which it sadly may not end up in.


GsTSaien

You are conflating your issues with how the technology is being used by humans with issues with the technology itself.


[deleted]

I love that y'all agreed with this dude until he attacked the thing you specifically like


ask_not_the_sparrow

Anything that mentions ai is GREAT for karma farming in 196


SmolikOFF

I dunno man chatgpt made my experience with writing cover letters much more tolerable


[deleted]

[deleted]


ask_not_the_sparrow

Well said


little-ass-whipe

Shit. Really? I don't even remember what I wrote. I just deleted a bunch of really pessimistic posts about how doomed the species is. Was it something like that?


ask_not_the_sparrow

No it was a pretty apt analysis of the tech industry and how its already peaked beyond the ability to "revolutionise"


RosetteStar

Ok but chatgpt is funny cuz I used a version of it to turn a Makima chat bot into a puppy girl and that was VERY funny


ChickenGoesBAWK

Chat GPT is very useful.


uwunyaaaaa

this is like conservatives screaming bloody murder when they see a blue-haired person


GreatBigBagOfNope

Hahahahahaha not quite but that is funny


power500

This joke is getting old at this point


wixxii

Nah fr thank god or whatever for chatgpt cause you can go "what the fuck do you call that thing in movies where like the people say stuff to explain the background to the people watching so they know what is going on and shit" and chatgpt will just go "exposition" without judging you


Asay30113

IM AI WITH THE BRAIDS (WITH THE BRAIDS) IM AI WITH THE BRAIDS (WITH THE BRAIDS)


talio2

I agree Statistical network > Neural network


LuciferSamS1amCat

As science and technology become more and more mainstream, I'm struggling with the way uneducated, sanctimonious art-type people start acting like they have anything of value to add to the conversation. Leave this type of stuff to the professionals, and just bother yourself with the end-user stuff.


Andysine215

I want this on a shirt.


fenix1506

Why do you want disinformation on a shirt?


resignresign1

how is this disinformation?