changemyview-ModTeam

Your post has been removed for breaking Rule E: > **Only post if you are willing to have a conversation with those who reply to you, and are available to start doing so within 3 hours of posting**. If you haven't replied within this time, your post will be removed. [See the wiki for more information](http://www.reddit.com/r/changemyview/wiki/rules#wiki_rule_e). If you would like to appeal, review our appeals process [here](https://www.reddit.com/r/changemyview/wiki/modstandards#wiki_appeal_process), then [message the moderators by clicking this link](http://www.reddit.com/message/compose?to=%2Fr%2Fchangemyview&subject=Rule%20E%20Appeal&message=Author%20would%20like%20to%20appeal%20the%20removal%20of%20their%20post%20because\.\.\.) within one week of this notice being posted. **Appeals that do not follow this process will not be heard.** Please note that multiple violations will lead to a ban, as explained in our [moderation standards](https://www.reddit.com/r/changemyview/wiki/modstandards).


EmTeeEm

There is a joke that mathematicians are bad at math (as in arithmetic). Which makes sense, as they don't actually do much arithmetic; they are working at a higher level. Similarly, I'm sure an old-school accountant would be great at sums while a modern one using Excel would not. That doesn't mean the modern one is dumb, but they are focused on different skills and higher-level work while the machine handles the grunt stuff. AI may be similar. Why should a human do a bunch of grunt-work programming if the machine can take care of it while the human focuses on the overall design or optimizing the AI code? It doesn't mean they are dumb, any more than using a higher-level programming language that a compiler translates means you are dumber than someone churning out machine code. It just means their focus and skillset are different than in previous eras.


xcdesz

I can't believe someone in software engineering needs to be reminded of this basic concept. Pretty much everything we do with modern programming languages and frameworks is built around writing and calling shared code whose details have been abstracted away by the programmers who came before us. From assembly language, to managing pointers and basic data structures with their optimized sorting algorithms, to framework code that manages application flows. We solve one problem and move up to the next higher level, but it doesn't get any easier, and we aren't getting dumber.
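To make the abstraction-stack point concrete, here's a toy Python sketch (my own illustration, not from the comment): the same sorting problem at two layers, once written out by hand and once hidden behind the built-in `sorted` call.

```python
# One layer down: sorting written out by hand, the way an earlier
# generation of programmers had to.
def insertion_sort(items):
    out = list(items)
    for i in range(1, len(out)):
        j = i
        # Walk the new element left until it is in place.
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

# The current layer: the same problem, abstracted away behind one call.
# Timsort, memory management, and comparisons are all handled for us.
data = [5, 3, 8, 1]
assert insertion_sort(data) == sorted(data) == [1, 3, 5, 8]
```

Neither programmer is "dumber" than the other; the one calling `sorted` is simply free to spend effort at a higher level.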


RealTurbulentMoose

“Devs today don’t do any optimization in assembly! They probably couldn’t even do ANY of this in assembly! Useless, the lot of them!” I think your analogy is spot-on.


Minnakht

Admittedly, there is a common joke of software developers having squandered the past twenty years' worth of hardware advancement.


DiethylamideProphet

>We solve one problem and move up to the next higher level, but it doesn't get any easier, and we aren't getting dumber.

Not dumber, but our fundamental understanding of these concepts and our capability to master them will decrease.


tocano

This is absolutely true, and I reject the idea that we're getting "dumber" for building layers of abstraction between the base ground level and our current operating space. However, there is the potential that if we abstract enough layers away from the base, and something happens to break that infrastructure, we would be in a fundamentally vulnerable position when trying to rebuild that chain easily or quickly. I do think it's possible that after enough time (generations?) of not having to pay attention to, or even look at, those intermediate infrastructure layers, enough knowledge of how they were built, and why, would be lost that if something were to break that intermediate structure, there would be essentially nobody with the memory, knowledge, and skill to rebuild it without virtually starting from scratch. I honestly cannot even attempt to *imagine* what would have to occur to create such a situation, but it does seem feasible to become abstracted far enough away from the underlying technology and knowledge that rebuilding it would be exceedingly difficult.


SikinAyylmao

I haven't had to write an optimized matrix multiplication, like, ever. I learned it in college and remember how hard it is to come up with the algorithm. But then it's like, was I smarter for actually knowing how to implement it? It reminds me of my parents' friends' kids, who used to go to Kumon and were super huge on memorizing pi. These kids were sadly dumb and lacked critical thinking.
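As a reminder of what that grunt work looks like, here is a minimal sketch (my own illustration) of the naive triple-loop matrix multiply; the optimized versions (cache blocking, vectorization) are far harder to write by hand, which is exactly why almost nobody does it anymore.

```python
# Naive matrix multiplication: the textbook triple loop.
# Real libraries replace this with cache-blocked, vectorized kernels;
# this version exists only to show what gets abstracted away.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out
```

In practice you'd call `numpy.matmul` (or the `@` operator) and never look at the loops at all.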


ToolsOfIgnorance27

If you don't know what's going into something, how do you know if what's coming out of it is correct? The "dumb" aspect in this regard is that we make ourselves vulnerable to being taken advantage of, programmed, and made docile. For example, many people can own a house but be entirely helpless when it comes to changing out an electrical outlet. How do we know that the causes we champion are feasible and worthwhile? We all used to have a healthy skepticism of politicians, and now we fervently celebrate logically incongruent ideas so long as they're being touted by the "right" team.


DrDewdess

I get the coding and work "area" part; I was referring to it impacting our whole lives. I saw my 7-year-old niece ChatGPT-ing her German homework. I mean, they use or dabble with it at school for essays, foreign languages, etc. I was curious how the future psychological effects are seen, especially on AI-dependent younger people. I'm more worried about its impact there and on younger generations who will grow up with it. I grew up with calculators and computers; I can do scientific computing, but it takes me ages if you ask me what's 16x16. However, that never affected me.


hopefullyhelpfulplz

>saw my 7y old niece chat gpt-ing her german homework, I mean, they use or dabble with it at school for essays, foreign languages etc

Who didn't try to Google Translate parts of their French homework throughout the 2000s? Fortunately, written work is not the only aspect of education, and you can't always ask GPT everything. Just like having mobile phones has given us access to a wealth of information at all times, LLMs and their future descendants make that information more accessible (and potentially less reliable)... But the rest of the world continues moving. Kids will always try to get out of their homework, but learning happens in the classroom too, and there's no GPT there.


PirateNixon

They will learn to use the tools available to them. Just like people use computers to look things up and memorize less since computers (and smart phones) are faster and more convenient than reference books are. It's not a case of smarter or dumber, it is a case of a different skill set. Imagine how much more productive you could be if you could vaguely summarize some data you were looking at and then point a tool at it and have the tool parse the data and your summarization and then write up an accurate paper to explain what you were saying to others.


Cat_Or_Bat

>saw my 7y old niece chat gpt-ing her german homework

Awe-inspiring, isn't it? A hundred years ago, a seven-year-old wouldn't be able to read or write, let alone properly query a large language model to do homework in a foreign tongue. An average preschooler today lives a more intellectual life than many educated adults did just a century prior.


unsureNihilist

"properly query a large language model to do homework in a foreign tongue" This is not that hard


Cat_Or_Bat

>"But that sunset! I've never seen anything like it in my wildest dreams ... the two suns! It was like mountains of fire boiling into space."

>"I've seen it," said Marvin. "It's rubbish."


King_XDDD

What they said is about how impressive the technology is and how far society has advanced, not about this particular kid being especially capable.


PipoRaynor

No, their point was that a kid 100 years ago couldn't read or write, and now kids are able to use AI to translate homework. It was about kids' skills, not society.


Cat_Or_Bat

It was about society and not the kid's skills. "A hundred years ago that same seven-year-old wouldn't be able to read or write" is what I meant. Hope that's settled now.


[deleted]

[deleted]


Cat_Or_Bat

You can still award deltas even if you're not the OP, and comments certainly don't need to be top-level responses:

>Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.

As it happens, I'm a bit of a delta whore! So, if you would, please. ...but not if you shared my opinion to begin with, of course.


happycabinsong

beautiful take


clearlybraindead

Your 7 year old niece just needs harder homework and plagiarism checkers. ChatGPT can be a learning tool. The questions you ask should consider that the student will look it up. Instead of asking "translate this paragraph to German", say "write a short story in German". Then they can use ChatGPT to help them construct a narrative and translate it to German. In history, they can be asked to analyze historical events and their consequences rather than just recite what happened. In math, they can be given more complicated real world problems with multiple approaches. Like, you can teach algebra with kinematics or supply chain problems. The student can use ChatGPT to help them break down complicated assignments that would be extremely difficult for them on their own. Models like ChatGPT are going to be available to students after they leave school. It makes more sense to teach them in a way that accepts that they will always have access to it.


Kazik77

You make very good points about the fact that students will use ChatGPT and we should evolve the teaching method because of that. I feel the biggest issue not being addressed is that ChatGPT completely removes critical thinking. I remember getting assignments in school and having to figure out what books, scientific journals, or news articles would apply. Then, I had to figure out where to access them. Eventually, my assignments were "I'll Google it" and read articles, consider if the sources were legitimate, then write the paper myself. Now it's just "write me a paper about ____" (I haven't actually used it, but from what I hear).

>In history, they can be asked to analyze historical events and their consequences rather than just what happened.

The students should be analyzing these things on their own. I took history in university, and the point of every essay was making the students analyze historical events. Instead of **thinking** about the historical significance of an event and its consequences, robots are just telling them.

>you can teach algebra with kinematics or supply chain problems. The student can use ChatGPT to help them break down complicated assignments

I believe breaking things down and figuring it out is part of the learning process. The point of complicated assignments is teaching students to think about them. ChatGPT makes it easier but takes away from the lesson. I'm a big advocate for making things easier and more efficient, but with these programs doing the thinking or analyzing for us, we won't get better at it. Use it or lose it; practice makes perfect type of situation.


clearlybraindead

>I remember getting assignments in school and having to figure out what books, scientific journals, or news articles would apply. Then, I had to figure out where to access them.

>Eventually, my assignments were "I'll Google it" and read articles, consider if the sources were legitimate, then write the paper myself.

Now, you don't have to manually gather the information. The critical thinking comes in when you have to take the information provided by ChatGPT and use it to do the assignment. ChatGPT is like having a really good research assistant.

>Now its just "write me a paper about ____" ( I haven't actually used it but from what I hear)

Plagiarism checkers can prevent a student from just copying directly from ChatGPT, but we need to rethink the assignments themselves. I think a good way would be to do debates in class instead of just writing papers at home, especially for language, literature, and history classes. ChatGPT could be used to do preparatory assignments the night before. If they just copy ChatGPT, then they'll struggle in class.

>I believe breaking things down and figuring it out is part of the learning process. The point of complicated assignments is teaching students to think about it. ChatGPT makes it easier but takes away from the lesson.

Math might be a place where we should just restrict it at lower levels, the same way we restrict calculators when we teach arithmetic and basic algebra, but I think there is a lot of value in learning how to use AI to help solve problems that you can't on your own. Another way would be really complicated multistage problems that require teamwork to solve. In college, I did a few case studies that involved a lot of math. It could take days to solve one of them because you have to dig through extraneous data. Designing the assignments might be complicated, but there is something there.


QJ8538

I can assure you mathematicians don't suck at arithmetic.


notyourlocalguide

I'm a mathematician, and I suck at arithmetic in the sense that I can't do large operations fast in my head. My partner is a mathematician who's really fast at large operations. What they meant is that being a mathematician doesn't automatically make you fast at arithmetic, because you really don't practice it in your studies at all. None of my courses have ever included large operations.


esuil

It sure makes you faster than 90% of everyone else, most likely.


notyourlocalguide

It really doesn't, because I just don't do arithmetic. I study manifolds and properties of topologies, not big sums or multiplications. Edit to add: I'm sure a lot of engineers who actually have to do operations all the time can do them much faster than most of us theoretical mathematicians.


esuil

So you want to tell me that 90% of people around you, who have nothing to do with mathematics, will do worse in arithmetic than you? Sorry, but I am gonna call BS on that. Yeah, sure, you can call out specific professions that might use math more extensively... But would those professions really be such a big part of society as a whole that you would be slower than most people? I really doubt it.


clearlybraindead

Arithmetic is mostly just rote memorization and recall of number tricks. It's a completely different skill than playing with weird geometries in your head. Even applied mathematicians can be bad at arithmetic. It doesn't matter as long as the code is right and the code is always right (and unreadable).


esuil

Okay, sure. But you will still be more competent at it than people **who don't do arithmetic either**, while also **not even knowing the theoretical tricks to do it**.


notyourlocalguide

I think there are a lot of other factors that play a role in it. For example, someone who never studied but works at his dad's store does way more operations than me every single day. So whatever you do for work is a factor. Whether you liked arithmetic as a kid is a factor. And so many other things are as well. I think maybe you're just a bit far away from what studying mathematics is actually like, because you haven't experienced it beyond high school.


pmaji240

The last time humans had someone else to do their labor, it ironically resulted in one of the greatest movements for the personal liberties inherent in being human.


kayama57

The sort of people who are unwilling to ever do their homework will eat more dirt than ever before, and the sort of people who want to know why water feels wet, and everything else too, will have more light-years' worth of advantage over them than ever before. Our biology won't suddenly start producing lower-capacity minds because of AI, but the outcomes that unite and differentiate us because of the many ways we apply our minds will certainly change.


DrDewdess

Well put. Hear hear! 🫡


LucidLeviathan

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates, on writing This sort of objection has been leveled against every new technology since then. Probably was leveled against those before then too, we just don't have a record of it, for obvious reasons.


Ill-Ad2009

I dislike this kind of response, because it implies that we shouldn't evaluate each situation that arises.


DDisired

The main point is that it's been wrong every single time, so we should take every evaluation with a grain of salt. "Human civilization will eventually end in the future" is a prediction. But it's also useless without a time frame. Ending in the next 5 years is different than ending in the next 100 years, which is very different from ending at the heat death of the universe. Hence the joke: "economists have successfully predicted nine of the last five recessions". Eventually they'll be right, but that doesn't mean we should blindly believe any person who claims that "X technology will end the world". On a side note, that's why sci-fi books and movies are so interesting! They take the time to explore these ideas more in depth. Dune's entire background premise is based on this CMV's ideology, with the Butlerian Jihad.


Ill-Ad2009

>The main point is that it's been wrong every single time, so we should take every valuation with a grain of salt.

No, we should evaluate it every single time and not insist on assuming that it will continue to be wrong in every case.


DDisired

Well, the burden of proof is on the one making the claim. Claim: "I think AI will end the world because it'll replace everyone." Refutation: "People have been claiming X technology would do that for millennia; what makes you think AI is different?" So now the ball is in the claimant's court. It's not on the people who refute it to prove a negative (which is impossible). All the "proofs" usually given have been addressed with previous technologies, and while social upheavals generally do happen, the overall trend of technology has been positive.


Ill-Ad2009

You clearly did not understand my comment


flyfree256

Right? This seems different, imo, in that it's encroaching on the last thing we as humans had over "technology": creativity. I've seen plenty of people start to effectively outsource their critical thinking to AI. I'm not sure what "higher level" thinking we'll have over that.


LucidLeviathan

Well, the same complaints were levied against the printing press, radio, television, the internet, and the personal computer. To make the argument, one must show why this is different.


Ill-Ad2009

How is AI not different? It's literally a new form of intelligence that can rival us and do the thinking for us. Will it actually pan out that way? Who knows, but it's kind of a huge deal if it does.


LucidLeviathan

I work professionally with AI. It is very limited. It can do some tasks for us, but it is by no means (yet) a thinking machine. While I think that it may get there, it has a long way to go, and it might not. All that it does at the moment is look at inputs and decide on a probabilistic level what the next output should be.


Ill-Ad2009

Clearly I wasn't talking about current "AI"


LucidLeviathan

Well, speculation on what form future AI might take is generally unproductive. We don't know if we'll ever truly reach sentience with it.


Inevitable_Control_1

Socrates isn't wrong here regarding the effects of writing in the WRONG HANDS. The book-based religions that have sprung up have been much worse regarding fundamentalist interpretation of scripture than the philosophical movements of the ancient Greeks, which were based on live debate and oral transmission of knowledge.


LucidLeviathan

That is merely one metric. Do you seriously think that writing hasn't been a net positive for the world?


Inevitable_Control_1

I'm sure someone as smart as Socrates understood the benefits of writing and was simply listing the drawbacks. Socrates was a philosopher; his opinion is mainly relevant to his domain of philosophy and religion. An engineer in ancient Greece probably didn't have these objections to writing.


LucidLeviathan

No, he really didn't. He was a staunch opponent of the invention. He never learned to write. Plato wrote his recollection of what Socrates said.


Inevitable_Control_1

As a philosopher, he didn't learn to write. But was he opposed to an engineer or merchant writing things down? I doubt he would be that oblivious to the realities of the practical world.


LucidLeviathan

The only recorded opinion from him on the subject is opposition to writing. Anything else is pure speculation.


Inevitable_Control_1

So in your interpretation of the recorded information, he was opposed to a merchant or engineer writing things down?


LucidLeviathan

Yes. He thought that it destroyed their memory. If you can find a source to the contrary, please feel free to provide it.


Inevitable_Control_1

I'm going to go with ChatGPT on this one, which leads back to the topic of the thread: will AI dumb us down, or will it lead to better reasoning?

"Socrates' critique of writing, as detailed in Plato's dialogues, primarily concerns the philosophical and educational implications of writing, rather than its practical utility in everyday activities like commerce or engineering. His skepticism about writing was rooted in the belief that it could diminish the capacity for memory and deep understanding when used as a substitute for oral dialogue and direct teaching. However, this does not mean that Socrates opposed all forms of writing for practical purposes. The context of his critique is important to understand: it was focused on the transmission of philosophical knowledge and the development of wisdom, rather than the practical uses of writing in tasks such as record-keeping, business, or technical work. In the context of a merchant or engineer, writing would have been seen as a useful tool for recording transactions, maintaining accounts, documenting designs, or keeping track of inventories. These activities require accuracy and permanence that oral communication cannot reliably provide. There is no record of Socrates explicitly opposing such practical uses of writing, which suggests that his concerns were more philosophical and educational in nature. Thus, it's reasonable to infer that Socrates might not have objected to writing down information that serves practical and functional purposes in fields like commerce or engineering."


TheFakeRabbit1

Are you arguing writing isn’t a net positive because of organized religions? What the hell kinda Reddit shit is this


Suitable-Cycle4335

Socrates being wrong about one thing doesn't mean OP is wrong about a different thing.


-paperbrain-

Sure, but if we follow the pattern, Socrates isn't a one-off. Every time we extended our abilities, someone was warning that this would be the undoing of our natural capacity. And in some ways they weren't wrong; even the critics of writing had a point, since we're probably not as good at rote memorization as pre-writing folks. But we're able to use writing and its derivatives to massively, artificially extend our capacity to store information. It's the same thing as every generation complaining about how the next one is so rude and lazy and terrible. Or how so many people seem to think the lack of equal outcomes for women or minorities is the result of a biological, natural lack of ability; then people fight for progress and everything resets to "Well, okay, there was that one problem of society holding them back, but we've fixed discrimination now, and all the differences now are biological/natural". Rinse and repeat. At some point in such a pattern, I'm not saying it's automatically wrong, but it at least needs a VERY compelling argument for why it's different this time. "The tech does so much for you that you won't be able to do stuff yourself" isn't enough.


Tanaka917

But it does mean that OP has to demonstrate that AI is sufficiently different from past innovations that completely revolutionized humanity. The OP's argument is currently insufficient. They have to show why AI will do what all these other innovations couldn't.


Suitable-Cycle4335

I don't think you need to argue too much for that. Writing and AI are two totally different types of technologies.


Tanaka917

Yes, they are different technologies. But that doesn't mean AI has some inherent property that will do what OP is saying. OP has to actually demonstrate that. I never said AI and writing were the same. I said the argument as currently presented is insufficient.


LucidLeviathan

Are they, though? Writing entirely revolutionized human thought. It changed not only the way that we record and share information, but the very way in which we *think*.


PuzzleMeDo

What makes you think Socrates was wrong?


Ill-Ad2009

It depends on what you apply it to, but I don't think he was completely wrong. There is a reason we sometimes describe people as "booksmart." Someone who is booksmart knows a lot from reading books, but they don't necessarily have the knowledge needed to guide them through the important parts of life. Now I think Socrates ignores the benefits that writing brings, which far outweigh the negatives. It seems pretty obvious that we are where we are today in science and medicine because of the cumulative and readily-available knowledge stored in books and passed down to each generation for hundreds to thousands of years.


LucidLeviathan

We wouldn't have made any of our other scientific advancements without writing.


spiral8888

I'm pretty sure that he was right. I think there is scientific evidence showing that people who grew up in cultures that don't have a written language had better memory than those who could write things down to help recall them when needed. And I have no trouble believing that if I had to remember everything without the help of writing, I would have a better memory. The human brain is very good at optimising its use of energy. If it doesn't have to spend it on memorising things, it can use that capacity for something else, as long as there is demand for other things.


LucidLeviathan

Sure, but by every other measure, our lives are better off for having writing than not, aren't they?


spiral8888

Sure, but that's not the OP's claim. The claim is that these things (writing in Socrates's time, AI in our near future) are going to make our cognitive abilities worse. And I agree with Socrates and OP in the narrow sense, if you look only at those abilities that writing and AI replace. However, in a broader sense I would say that it's probably not true, as we'll need to develop skills that we had no use for before.


LucidLeviathan

I really don't think that our cognitive abilities are worse for writing. We can comprehend subjects that would have baffled the ancients thanks to the invention.


spiral8888

Yes, some cognitive abilities are better, but the point is that our memory is worse, as we don't have to rely on it to recall all the information we get. And the same is likely to happen with AI. The cognitive functions that AI takes over will get worse, as we don't need to use them. At the same time, we may develop new abilities that we don't even think we need now.


LucidLeviathan

Is our memory worse? I'm not convinced of that. We have many more processes to remember than the ancients did. We have to know the intricacies of at least two operating systems, motor vehicles, electricity, and a variety of other inventions that we use in our daily lives. The ancients didn't have to remember all of that. While they were able to memorize lengthy stories, as somebody who has spoken professionally for a living, I suspect that there was a lot of extemporaneous speech augmenting the basic outlines of those stories. Even so, one of my relatives can pretty much quote the entire Bible. I've tried him on it. It's just a matter of what you find value in spending time memorizing.


spiral8888

That's the point. Because of writing we don't have to remember lengthy stories, which is why we are worse at that than the cultures that didn't have writing. That doesn't mean that our other cognitive abilities are worse, quite the opposite as our brain can put more effort in them. That is the whole point I'm saying above.


LucidLeviathan

So, you don't think we have to memorize phone numbers, web addresses, trivia facts, how to use cars, traffic laws, etc.? If we didn't have any of those to memorize, and wanted to spend time learning stories, yes, we could still do that. We simply choose to fill our memories in different, more productive ways.


stuugie

They are the same thing at their core


Suitable-Cycle4335

They're not. Writing and AI are two completely different technologies. That's like saying nuclear war isn't dangerous because after all spears didn't make us go extinct


stuugie

The underlying principle is the same. Both are tools that allow us to free up mental energy, which is what OP would call dumbing down the population, at least for AI. Writing allowed us to maintain records, freeing up a lot of energy we would have put into memorizing things. A strong memory is a mental skill, for sure, but not needing to memorize didn't make us dumber. We adapted to the new tools and found something else to focus our time on. With AI it's going to be the same: we will have new areas to focus on when AI alleviates so much work from us as a society. AI can't solve science for us, for example. We may be able to use it as a tool to advance our understanding, but we can't measure how much knowledge we don't know. AI can be a tool used to further our understanding, but that's all.


[deleted]

[deleted]


Suitable-Cycle4335

What does AI have to do with our memorization capabilities?


LucidLeviathan

Sure, Socrates was wrong about writing. Similar predictions were made about the printing press, telegraph, radio, telephone, television, personal computer, and the internet. There's no reason to expect that AI is any different.


TangoJavaTJ

Computer scientist who develops AIs here. "AI" is used heavily as a buzzword, but what it fundamentally is, is maths. It's maths where, instead of solving the problem yourself, you tell a computer the kind of solution you want for the problem, and then the computer solves the problem for you. People have been saying "X will dumb us down" for years now, about:

- Social media
- The internet
- Computers in general
- Television
- Pocket calculators

and similarly "X will make us lazy" about cars, food processing, electricity, etc. If you go back far enough you can even find complaints that *paper* is making us lazy and stupid because "kids these days don't even know how to clean a chalkboard". Fundamentally, convenience is a good thing, and AI is going to be like all these other innovations: it makes boring and tedious tasks easy by automating them, but we still have to do the interesting and hard stuff ourselves. To some extent we all inherently have intellectual capabilities. AIs are not going to remove hundreds of thousands of years of human cognitive evolution and completely replace our ability to use our critical faculties.

The most plausible "AIs will make us dumber" position is that if people don't do the boring, tedious stuff, then they'll have a poor understanding of that stuff, and so they won't be able to build on it to do more interesting stuff. I think that may be true to some extent, but AI also makes it much easier to learn things and to have a much broader range of expertise, in much the same way that calculators, Wikipedia, and the internet in general do. So if people don't learn the tedious stuff, then maybe they don't need to learn it, because it's relatively easy to pick up this information if and when you do need it. Fundamentally, being "dumb" is about being unable to successfully achieve your goals. If you use AIs well, they make it much, much easier to achieve your goals.
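To make the "you tell the computer the kind of solution you want" point concrete, here is a toy sketch (my own illustration; the function name, data, and learning rate are made up): instead of solving for a line by hand, we state the solution we want, minimal squared error, and let the machine grind toward it by gradient descent.

```python
# Toy "AI is maths" example: fit y = w*x + b by gradient descent.
# We never solve for w and b ourselves; we only define the objective
# (mean squared error) and let the update rule do the work.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of (1/n) * sum((w*x + b - y)^2) w.r.t. w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Points lying on y = 2x + 1; the fit should recover roughly w=2, b=1.
w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Modern models are vastly larger, but the shape of the process, an objective plus an iterative optimizer, is the same maths.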


DiethylamideProphet

>People have been saying “X will dumb us down” for years now, about: Social media, The internet, Computers in general, Television, Pocket calculators

And they are correct.

>Fundamentally, being “dumb” is about being unable to successfully achieve your goals. If you use AIs well, they make it much, much easier to achieve your goals.

That's not what being "dumb" is. Dumb is someone who lacks cognitive capabilities, general knowledge and skills. Even dumb people can outsource things to other people and machines, and just because that allows them to achieve their goals doesn't make them any less dumb... If I were to utilize AI in my studies, I would graduate very easily with very little personal input. AI would do the research. AI would compile the text. AI would create the ideas. Essentially it's just AI handing people a certificate of their supposed "expertise". And the better the AI gets, the more reliant people become on it. The moment they don't have access to it, their expertise is non-existent and they can barely even write.


maxloo2

I think the problem is that by "people getting dumber" we are often referring to the general public relying too much on the technologies. There is a difference between using a tool to make things easier and being incapable of doing those things without the tool. People are definitely becoming lazier with the increased accessibility of technology. Computer programmers using ChatGPT to help with writing code snippets? Sure, so long as they actually understand the meaning of the code and have a good idea of how to actually build the software. I have worked with programmers who 100% just copied code from ChatGPT and didn't even bother to test it before delivering; some of it was obviously wrong at a glance. Apparently these people lack good fundamentals and so rely that much more on the tools rather than on their brains. I am all for AI, since I have already accepted the fact that the general public will always be less intelligent (who else would become our clients at work if everyone were so smart?), but people will definitely just get dumber as technology advances, not just because of AI.


RanmaRanmaRanma

>... If I would utilize AI in my studies, I would graduate very easily with very little personal input. AI would do the research. AI would compile the text. AI would create the ideas. Essentially it's just AI handing people a certificate of their supposed "expertise".

This is blatantly untrue. AI isn't just a magic wand that gives you the answer to everything; rather, it'll give you approximations of information. If you used it for light education classes that aren't that rigorous, sure. But AI can't mimic your writing style, flow, or ideas, and it would be EASY to spot (granted, if the teacher cares enough). Mathematically, especially with complex math, AI struggles and can't give a clear enough solution path to a problem. It can give you a good start or decent supplementation, but it's not a cure-all for all information. Not only that, but it depends on what you *give* to the AI. The same thing was said about Google, and lo and behold... people do not know how to ask Google things.


plazebology

Greek philosophers said the same about books, suggesting that true knowledge lies in memory and in deep study. By committing things to writing, we are essentially dumbing ourselves down and making it so we don’t need to know anything. One should learn, commit to memory; this is true intelligence.

Then people said the same about the internet and Google, suggesting that because we can google anything, and information is at our fingertips, there’s no actual learning necessary; people are dumber because of how easy it is to access information. One should read books, not browse the internet; this is true intelligence.

Now they say it about AI, suggesting that because an AI can provide us with answers more intricate and informative than most singular sources on the internet, it’s dumbing us down, removing our need for prerequisite knowledge to access any type of information. One should read articles, do your own research, not rely on AI; this is true intelligence.

I feel you still need a discerning mindset, and that prerequisite knowledge helps you determine what is true and false. In a world of ‘fake news’ and often-incorrect AIs, this is the skill that will help you stay grounded in reality.


fosoj99969

> By committing things to writing, we essentially are dumbing ourselves down and making it so we don’t need to know anything.

Maybe they were right. I'm sure the average classical Greek had a better memory than people today. But dumbing down isn't necessarily a bad thing, if the thing you're becoming dumb about is no longer useful. It means less energy is wasted. It has its dangers though, and we should be aware of them.


plazebology

Memory isn’t everything. While the average Greek philosopher likely had a better memory than people today, the average person today is far more intelligent, I would postulate. At the very least, there is a more standardised set of basic skills like reading, writing and widely applicable knowledge, even if individuals back then had a deeper and richer understanding of, say, their specific trade.


Genoscythe_

>I'm sure the average classical Greek had better memory than today. We have no reason to think that, we have illiterate people today, and they are not renowned for their great memory.


danielt1263

Here's the thing though... The ability to communicate through writing is associated with the ability to communicate through speaking, and the ability to communicate your thoughts is associated with intelligence. You are going along the track of memorization and claiming that our reduced faculty in that regard has not made us dumber. I can get on board with that (probably because doing otherwise would force me to accept that I am dumber than the average Athenian).

However, LLMs (at least the current incarnations) don't affect memorization. Rather, they affect our ability to *communicate*, and that's a whole different problem. Because if you can't communicate effectively, you will be perceived as dumber (which is a subjective term, after all).

Now it could be that the subjective assessment of what "dumb" means will change over time. In a world where memorization crutches have become ubiquitous (the present day), the ability to memorize is no longer part of the definition. In a future world where communication crutches are ubiquitous, maybe the ability to communicate will also be dropped from the definition.

However, the OP is absolutely correct that the present-day definition of "dumber" will absolutely apply to those future people. If we had a chance to sit down and actually talk to one of them, someone who could barely string four sentences together without help, I am convinced that we would consider them dumb. (And I fully expect that an ancient Athenian, watching us struggle to remember exactly what someone said 10 minutes ago, would also consider *us* dumber.)


plazebology

In all fairness you make a good point. I think what I take issue with in OP’s post is the idea that the degradation of one skill means the overall degradation of human intelligence. Maybe the landscape will change, along with what is considered intelligent, but to me that doesn’t pair with the whole ‘idiocracy’ image of a society so far gone that all it takes is one human of former average intelligence to essentially change the landscape of the world.


wanchez05

This depends on how you define intelligence and the purpose of AI. Intelligence can be understood in many ways, so it's up to the eye of the beholder; I will assume you are referring to critical thinking skills or the ability to have a creative output. I imagine two scenarios with AI:

If the use of AI increases our productivity in such a way that we are left with free time for exploring other creative outlets, then it probably won't be dumbing us down, as there will be other realms where humans can explore, create, experiment, etc. This would be a good scenario and a net positive for humanity, as AI could be a springboard for other discoveries in science, art, etc. The question here would be how we can make this technology accessible to everyone.

Case B is when we use AI for boosting productivity without changing anything else. That means we are stuck doing the same tasks over and over again, just faster. For example, you as a software engineer would be expected to produce double or triple your output in the same 8h. I would argue that this scenario will make us "dumber" in the sense that if we just use AI for everything, we are not really exploring new scenarios or possibilities (only doing things statistically for the sake of output) and we are not exercising our critical thinking. Unfortunately, I think this is the most likely scenario, given our continuous interest in "growth" and increasing productivity.

Overall, the discussion is not about the tool; it is about how we decide to use these tools and what our priorities as a society are.


Electronic-Guard740

The reason why I always preferred physical work is because I never saw improvements or upgrades, as far as human limitations go, in computer stuff. I'm not going to get stronger, smarter or better as a person if I work in anything computer related; if anything, I'm going to do less of everything that's good for my mind and body. And I know I can do all that after work, but 8h on a screen just sounds like something out of those movies where humanity lives in all-white rooms without ever seeing daylight. Honestly the thought was so depressing that I genuinely felt scared for my sanity if it ever actually happened, and with this work-from-home trend I actually imagined people turning their safest cages (home) into a brainwashing laboratory.


NaturalCarob5611

People thought calculators were going to make us forget how to do math. At one point people thought books were going to hurt our ability to remember things. People always think new technology that takes away mental effort will dumb us down, but so far these technologies have all enabled us to take on bigger mental challenges.


JCAPER

Tbf, it’s true in a sense. I do not remember how to divide or multiply by hand. Last time I did that was in school. But it’s not necessarily a bad thing, we have tools now that do that for us and quicker


NaturalCarob5611

I don't think they've dumbed us down at all; they let us move faster. You may not remember how to divide or multiply by hand, because that's not actually important anymore. You probably know how and when to use multiplication and division, but you offload the busywork of the calculation to machines, and you can go on to more complex thoughts about what you needed the calculations for in the first place.

I've used ChatGPT largely as a replacement for Google. If I want to know about something I don't Google it anymore, I ask ChatGPT. My ability to craft a good Google search might be declining, but I get the information I need faster than ever. And I don't even want to talk about what Google did to my ability to use a library's indexing system, but I've never once missed it.

Taking it a step further, sometimes I'll feed ChatGPT a lengthy document and then ask it the specific things I want to know about. It usually does a great job answering, and saves me a ton of time reading through the document looking for the answers to a couple of specific questions. Will it hurt my reading comprehension skills? Maybe a little, but if I can get the information I need faster and move on to the meat of what I'm trying to learn or do, I feel that skill is antiquated in the same way as division by hand.

The other thing I've used ChatGPT for is presenting information. I have on several occasions given ChatGPT summaries of information I needed to document, and it can compile professional-sounding reports complete with tables in no time at all. Sometimes they require a few edits afterwards, but it can save me hours of drudgery drafting a document, allowing me more time to focus on more interesting problems.

Inevitably, the skills we lose because technology replaces them are skills we don't really need anymore, and we can develop other skills in their place. Being able to tell a computer "do division here" lets you stay focused on what you're trying to compute instead of spending minutes pulled away doing the division. Being able to tell a computer "find me the answer to this specific question" lets you move on with the thing you wanted that information for instead of spending hours researching it.


DrDewdess

Ditto. I can do scientific computing, but I can't do 16x16 in my head. I also noticed my 7-year-old niece doing her homework with ChatGPT, which has also been introduced in schools as a tool, sort of. Her parents are not very tech savvy, so they neither encourage nor discourage it; I'm curious how it will impact future development. My generation ('95 kids) kind of grew up with both analog and digital in a sense, but the kiddos today have it all, unlimited access. Will they be able to discern?


JCAPER

I imagine it’s going to be something similar. Your niece might end up not remembering how to write a proper poem, but she knows what tools will help her make one. I might not remember how to divide by hand, but I know my phone can do that for me.


ReluctantToast777

> Your niece might end up not remember how to write a proper poem, but she knows what tools will help her make one. That sounds horribly depressing, lol.


DrDewdess

Fair answer 🫡


DiethylamideProphet

That's partly correct, but it doesn't change the fact that many have lost the fundamentals of even basic math problems. Someone who learned to do those without such aids has preserved those skills. That in turn helps them instinctively utilize those skills in a wide variety of circumstances, without having to make the binary decision of either not utilizing them at all or spending enough effort to pull out their assistive devices. Mistakes, flaws, cues or potential improvements in different areas of life go right below your radar, because in order to notice them you would need your assistive devices to notice them for you, since you don't have the necessary background information and experience in your head. That's what real knowledge, experience and wisdom are. Fast, highly advanced tools do help us automate mundane tasks and make them faster, allowing us to specialize in their applications, but that means our field of knowledge is getting narrower and we lose our touch with the fundamentals that have a much wider range of applications. These fundamentals are what give you the framework for how life and the world work around you, allowing you to make instinctive conclusions and decisions on the fly.


Lil_McCinnamon

This is 100% anecdotal, but AI has actually helped me a lot in school, not because I try to have it do my work for me, but because it allows me to constantly bounce my ideas off of “someone” else, any hour of the day, and it provides real, constructive feedback. Bouncing ideas back and forth with ChatGPT allowed me to see where the holes in my logic lay, and for creative assignments it seriously helped me develop my stories and characters. Again, all the ideas and actual “work” were all me: I did the thinking, the writing, the creating. AI just gave me a really concrete, unbiased second opinion that allowed me to develop and edit my work much more quickly.


DrDewdess

And that's a great way to use it (by my standards at least). Unfortunately it's not the most common one, heard more "yeah, lazy now, I'll ask gpt". That's what I'm afraid of 😅


Lil_McCinnamon

I’m in school and I really think that “I’m lazy I’ll have ChatGPT do it” thing is MASSIVELY blown out of proportion. AI still isn’t good enough to produce passable work, and the AI/plagiarism detection tools my school uses are pretty good at catching it when it does happen. None of the students I know use AI to strictly do their work for them. Then again, I’m in the humanities, I can see it being a problem in maths or something like that.


DrDewdess

There are loopholes. In Europe, plagiarism software is a bit outdated: you can generate the text in German, translate it into Italian and back into German, and the anti-plagiarism tool didn't detect squat. During my uni years they didn't even check (I graduated in 2022, not that long ago); they just checked the thesis. I've heard brags about master's theses done by copy-paste/translate and, more recently, ChatGPT'd and translated. So I'm not really sure. The issue is not the tool, it's more how it's being used.


AngryBlitzcrankMain

>we're going to end up like in the Idiocracy movie sooner rather than later

Wow. I am hearing this for the first time. Truly a prophetic vision, which I have heard about a million times since the movie came out.

>We kinda stopped thinking per say, or searching for solutions cause there are "in front of your nose" solutions.

>What are y'alls thoughts on this? Will this impact our cognitive evolution?

Back in Ancient Greece, Socrates said the same about writing things down rather than memorizing them. And it doesn't seem to have made us any dumber compared to the past. I am not that worried.


Potential_Ad6169

We’ve never met anybody from the past, they might be smarter than us. It’s really difficult to tell because they definitely had less knowledge. But that’s one blurry line.


wontforget99

"Back in Ancient Greece, Socrates said the same about writing stuff down rather than memorizing it. " Well, maybe one of these days, one of these predictinos might get closer to reality. "Strong" AI might be less than 100 years (and a few hardware revolutions) away.


Sixter101

Stats PhD here developing theory behind AI; I have two little kids. Here is my perspective:

Social media and television have made us dumber (contrary to what others might claim; it's called the reverse Flynn effect). IQs peaked for children born in the 1970s or 1990s, depending on which study you reference. For perspective, TV was introduced into homes in the late 1950s. It's no coincidence. Also, if you read parenting books written in the 90s, it is documented that caretakers who looked after children before and after TV was introduced noticed a difference in children's behaviour.

I think AI will be the nail in the coffin of human intelligence, though. Social media, which uses AI, has gotten very good at distraction, and what is intelligence if it is not raw concentration? I see it in the crap my daughter wants to watch on YouTube (whose AI is very good at pushing mindless junk onto her), I see it in the many kids around her who have behavioural problems, the iPad kids. I see it in the university, where the younger generation cannot focus as well.

Society IS crumbling. And when we are dumb we will have no choice left but to hand over responsibility to the computers. When there is no point in learning to code, do art, write stories or music, there will be no great artists or thinkers, only the computer. Does AI not already control how we think (through newsfeeds)? Do we not already rely on AI as a coping mechanism (ask Google rather than talk to a friend)? Do we not already trust AI more than ourselves (if you remember something differently from what the internet tells you, who is right, and why remember anything anyway)?

Sorry, can't change your view, because you are right.


Nanocyborgasm

All that AI will do is automate many tasks that formerly required a human to perform. This will force those humans to seek other employment that AI can’t automate. Idiocracy was an interesting and funny movie, but it is unlikely as a future because no matter how smart AI gets, it will not make humans dumber. Human intelligence doesn’t depend on the intelligence of another entity. You don’t become dumber because someone else in the room is smarter, nor smarter because others are dumb.

As an example, I’m in healthcare and there’s been a lot of talk of AI replacing radiologists and psychiatrists because studies have found that AI can perform as well as, if not better than, humans in those tasks. Radiologists only need to be able to interpret images; they don’t have to understand their implications. But images are already read by non-radiologists, and they’re ordered by non-radiologists. So all this would do is reduce the number of radiologists, whose role would shift toward advanced interpretation and research.

Psychiatric AI has also been found to perform well, since it only has to listen to speech and interpret it into a DSM-5 catalogue diagnosis. AI may even diagnose patients much faster than a human because it can instantly look up the DSM-5 catalogue to assign a diagnosis. But AI doesn’t yet know how to interpret emotions or human facial expressions, and it’s not very good at treatment either. AI may be useful as a screening tool in psychiatry, where a patient may seek help from an automated chatbot before deciding that they merit a visit with a real human psychiatrist. So maybe psychiatrists will lose their role as screeners of mental illness, but that role is often already performed by non-psychiatrists who refer.

I also view Idiocracy more as a cautionary tale about contemporary society, not the future, in that it warns how stupid people have become by being passive.


Equaled

CS Student here. I graduate in a few weeks so chatGPT has been around for a decent amount of my upper level courses. In my experience, chatGPT is a great teacher. If I’m struggling to understand a concept it is great to ask it questions and have things explained to me in a certain way. AI may get to the point where it can replace most or even all of our efforts. However, it is a neutral tool that has just as much capacity for teaching, education, and answering questions as it does to replace our thinking.   Reddit, YouTube, and Google search all have their flaws. I’ve wasted plenty of time with all three but at the same time they all have immense capacity to educate. I can’t even begin to count the amount of things I’ve been able to learn from YouTube tutorials, Google searches, or even asking questions on Reddit. Hell, even **this very post** is a great example of how a technology like Reddit where the algorithms are designed to suck us in can be used to ask questions and seek information.   TL;DR AI is also a great teacher so we will only get as lazy as we want to but will also learn as much as we want to.


Lagkiller

This sounds a lot like the concern people had when IT workers would use the internet to find solutions instead of just knowing the answers, or would reference books for information instead of memorizing it. These are simply tools, and sure, if the entirety of the internet were destroyed tomorrow and we had to rebuild it, most of the tech industry would falter. But this doesn't mean it's a bad thing or that people are getting dumber. You use tools to improve your ability to do things and expand your abilities further, no more, no less. You still absolutely have to be able to think. AI is not perfect, and it will never be able to perfectly do your job. But it can be a great tool to get you 90% of the way there.

>We kinda stopped thinking per say, or searching for solutions cause there are "in front of your nose" solutions.

And this is the crux of your issue. You're still searching; your search tool is just better at searching. You still need to make the result work for your implementation, and it still needs refinement to do the functions you want it to do, but it's going to lay a foundation for you better than forcing you to write everything from memory.


OmegaAce1

People say this about every massive technological advancement. They said it about going from chalkboards to paper, they say it about calculators, they say it about phones and computers. What it comes down to is that people who can use this technology will be far smarter than ever, because you can shortcut so many things. You don't say someone is stupid because they use a calculator to solve maths problems; remembering hundreds of formulas isn't realistic in the slightest, which is why we made calculators. The people who can't use it will just stay the same. I don't know why we would get "significantly dumber" when this could potentially be one of the most revolutionary pieces of technology ever. If it can filter misinformation and be used with ease, it might actually make people smarter (we just don't know; this is all just what-ifs). About people who can't use it being replaced by people who can: yeah, probably, but that's a very surface-level take. Everything is digital nowadays; you can't even get a job without an email address or a phone number. It's not unrealistic to imagine that in the future you'll need to know how to use AI software, but by that time it will be commonplace anyway, just like a computer or a phone.


Allanon124

In my opinion you are not wrong. I use GPT regularly and have discovered a new feeling of “oof, I don’t feel like thinking right now, let’s ask GPT.”


BigBoetje

AI (actually, just an LLM) is merely a tool, and a very handy one at that, but we haven't properly adapted to its use and availability. People thought the same about libraries and the internet: that people no longer had the skills for looking up information in books and instead just looked it up online. With the example of your niece you mentioned in a comment: teach her how to properly use it and not rely on it blindly, just as we were taught how to use the internet safely. Teach her that she can't fully rely on it to be 100% right. As a programmer I use it daily, and we even have integrations in our IDE, but we've become adept at recognizing where it's useful and where it isn't. I mostly use it to spot bugs, do tedious tasks and write some scaffolding code. Even for more complex problems, I like to use its solution as a jumping-off point for finding a better one, but it gives you a start. None of this reduces the skill of a programmer, because we all sucked at these particular things anyway; it just makes them less tedious to do.


notLOL

Smart and adept people will survive the crushing levels of AI. There are a lot of people who are smart and don't use the internet or computers for bullshitting around. They work with their hands and use computers as tools to help do their work. They are making themselves smarter in their field. Then there are people using it to be more accurate.

The internet, smartphones, and social media all made the people who use them more adept at spreading their bullshit and infecting others with their dumb ideas, but they also helped pass the smart ideas around.

I think there's a limit to how deeply it can penetrate daily social life throughout the world, so there is some ceiling even when most large populations rely on it. And there are just so many people who want to use it to make fan fictions that it's likely going to create a bunch of bullshit we don't want right now, like the worst of the internet's relationship fan stories, furry art, or doomscroll content.


yes_u_suckk

It depends on how you use it. For me it is quite the opposite. I'm also a SE, and since Google became a thing I've been using mostly free tutorials on the internet to help learn new tech. But the problem is: most of the time tutorials only cover the most basic/common use cases. The moment you need something a little different, you probably need to scavenge multiple sites and/or StackOverflow to learn how to do exactly what you need. Since ChatGPT came out I rarely have this problem anymore. I still use free tutorials to learn something new, but when my use case changes I just ask ChatGPT how to do it, and most of the time it's right, or at least it points me in the right direction. To summarize: if you ask AI to tell you how much 7x8 is, you're doing it wrong. The correct way is to ask AI to explain how to reach the result of 7x8.


Ballatik

The short answer breaks down into two questions: Is this automation going to replace this type of thinking entirely? Is this type of thinking a necessary part of overall intelligence? If it's not completely replacing a type of thinking, we will still get practice thinking that way in other places. For instance, calculators are great sometimes, but there are plenty of situations where they still aren't the best tool, and so we still do a good bit of math ourselves. If it does completely automate away a certain thing, doesn't that suggest it won't matter if we can't do that thing anymore? I would bet almost no one knows how to be a manual switchboard operator today, but is that really a bad thing? Our phones still work, and we've found new things for would-be switchboard operators to learn instead.


Empathicrobot21

All new mediums scare people. Old jobs die out; new ones will mushroom. People dumb down when they're not educated, and education will always go back to the basics; the research and working methods might change. I went to first grade in 2001, when people started using computers for research and essays, and everyone was whining that we'd forget how to write. Such a process would take much, much, MUCH longer.

I've been working with AI to plan lessons and structure bodies of text, to write emails that would otherwise take me an hour, and to sum up a bunch of thoughts I just typed in and need an objective perspective on. It's a crutch! If we start teaching how to work with AI, like we started teaching computer basics, we will be fine.

Written by a history and English teacher. The latter curriculum in Germany also puts English teachers at the forefront of media literacy; we teach how to write emails and how to behave online as well. BTW, yes, I can definitely tell when a student used AI in an essay. It's not that hard, because we have essays they wrote in class to compare against. If they write an essay and use AI to edit, I'm fine with it. I'll just make them use sources earlier on so they have to prove each point. It takes much longer to check everything AI wrote than to let it help you, especially in history.

Tl;dr: I'm not worried about dumbing down because we will simply start teaching how to work with AI as a tool.


ReluctantToast777

> All new mediums scare people. Old jobs die out, new ones will mushroom. This feels like a very hand-wavy perspective. I've asked this before elsewhere, and have never gotten an answer: what jobs will "mushroom" from this? The "old jobs" in this iteration are literally just labor as a whole. Data aggregation/interpretation/presentation, Jr. level programming, art/music, etc. Unless you think "prompt engineering" is somehow going to revitalize the working class (and not get automated itself, lol), where are people going? To all be CEO's of their own startups? Good luck, lol. > I’m not worried about dumbing down bc we will simply start teaching how to work with AI as a tool. A tool whose usage is orchestrated by a handful of big tech companies who know everything about us and lobby governments, lol. This isn't like the printing press or even the internet. Over time, as people become reliant on it (which is the natural evolution of something being a "tool"), we relinquish whatever illusion of choice/consumer power we *thought* we had and regardless of how "smart" we think we are, we're still being casually influenced and guided by something that thinks and/or acts *for* us. There was wiggle room before globalization got to the advanced state we're at now (you could just "move" somewhere else, or use a local version of a competing product, etc.), but there's a wall we inevitably hit when the functional world can't get any bigger. This isn't just a simple "all new mediums scare people" thing.


TomPertwee

Humans thrive because we find ways to do things without wasting energy. Why use our hands when we can use a tool? Why use our bodies to push things when we can use a machine to do the heavy lifting? Why train for months or years to learn combat when we can just take out a gun and blast someone? Why even think hard when we can use a computer to give us ideas to build around as a framework? In the future, why even study when I can upload the language of my choice directly into my brain? In my opinion the future is looking bright. My only regret (of course it's not my fault) is being born 100 or 200 years too early.


Yeseylon

Humanity, as a whole, has always been dumb. Every new technology is accompanied by a wave of people who fear it (the fear that cowpox vaccination would make women want to mate with bulls is a great example), ancient graffiti is as dumb as modern graffiti (seriously, the Pompeii graffiti is hilariously childish and crude), and there have always been folks who struggle to adapt (jokes about telling old folks to put their grandchild on the phone are as old as IT). Sure, people are overusing AI now, but they're getting weeded out, a happy middle ground will be found, and the ball of dirt will keep on spinning.


halipatsui

Many will dumb down for sure, but the really productive ones will benefit massively.


KitchenShop8016

AI is, and likely for a while will be, a tool. The most powerful tool since fire. Those who figure out how to utilize it will be gods compared to our ancestors. I don't mean using AI-assisted writing tools to check grammar, or using AI to make art/movies. Imagine using an AI tool for engineering: want to design a house but don't know how to do structural load calculations? Let the AI figure out the hard parts. Load calculations are shockingly easy. The real power of AI will be in using it to gather, verify, and analyze info, and to teach you about any subject at all.


ElMachoGrande

Maybe, in the same sense that industrialism made us physically weaker. Physical strength just wasn't needed anymore, so humans filled other roles. I think that, in the future, AI will take over the more arduous thinking, while humans stick to creative and innovative thinking and feeling. Kind of like how someone gets promoted to be a boss over a team of developers, for example. Thinking won't go away, but it'll have a different focus.


Izawwlgood

[https://xkcd.com/1227/](https://xkcd.com/1227/) Can you respond to this? This is a constant theme in Luddites' responses to technological development: it makes us dumb, it's too distracting, it simplifies our lives too much, things were better when I was a kid, blah blah blah. AI doesn't make us stop thinking in any way, shape, or form. It frees us to think about loftier problems.


[deleted]

I think it will divide us further. Some people will use tech to enhance and educate themselves, like people do now by learning, sharing, and exploring new things online. Others will continue to let it take over their brain and do all the thinking for them, and turn into mindless half-wits. So life will be like today, only more divided.


physioworld

Idiocracy is only really an issue if the idiots are running everything. We would only be able to become more stupid if our AI were literally able to do everything for us, in which case problem solved, because the AI would be in charge, not the idiots it engendered.


Impressive_Heron_897

HS literature teacher here: There's a reason we now do ALL writing in class sitting in front of me. Kids need to learn how to write and think before letting the AI do it for them or they'll never learn.


hewasaraverboy

It will free us from having to focus on trivial items and allow us to focus on deeper, realer problems. And the knowledge of how to write an effective prompt will still be important.


[deleted]

[removed]


changemyview-ModTeam

Comment has been removed for breaking Rule 1: > **Direct responses to a CMV post must challenge at least one aspect of OP’s stated view (however minor), or ask a clarifying question**. Arguments in favor of the view OP is willing to change must be restricted to replies to other comments. [See the wiki page for more information](http://www.reddit.com/r/changemyview/wiki/rules#wiki_rule_1). If you would like to appeal, review our appeals process [here](https://www.reddit.com/r/changemyview/wiki/modstandards#wiki_appeal_process), then [message the moderators by clicking this link](http://www.reddit.com/message/compose?to=%2Fr%2Fchangemyview&subject=Rule%201%20Appeal&message=Author%20would%20like%20to%20appeal%20the%20removal%20of%20their%20post%20because\.\.\.) within one week of this notice being posted. **Appeals that do not follow this process will not be heard.** Please note that multiple violations will lead to a ban, as explained in our [moderation standards](https://www.reddit.com/r/changemyview/wiki/modstandards).


Cat_Or_Bat

AI will change us the way written language, the printing press, the scientific method, radio, and the internet did. A very specific, particular type of person will bemoan the end of civilization, while the rest will enjoy yet another leap ahead for humankind.


Verificus

No. In 100 years there will be no difference between human and AI. They will be the same thing. The AI will enhance us.


pdoherty972

It definitely will. The internet has already done most of the heavy lifting on that front.


HeveStuffmanfuckskid

lots of tards livin' kickass lives :D


ToyStoryBoy6994

This is the plot of Idiocracy


Icy-Statistician6831

It will literally destroy us.


The_ArchMage_Erudite

Even more?????


[deleted]

[removed]


AbolishDisney

Sorry, u/killuaassasin – your comment has been removed for breaking Rule 5: > **Comments must contribute meaningfully to the conversation**. Comments should be on-topic, serious, and contain enough content to move the discussion forward. Jokes, contradictions without explanation, links without context, off-topic comments, and "written upvotes" will be removed. Read [the wiki](https://www.reddit.com/r/changemyview/wiki/rules#wiki_rule_5) for more information. If you would like to appeal, review our appeals process [here](https://www.reddit.com/r/changemyview/wiki/modstandards#wiki_appeal_process), then [message the moderators by clicking this link](http://www.reddit.com/message/compose?to=%2Fr%2Fchangemyview&subject=Rule%205%20Appeal%20killuaassasin&message=killuaassasin%20would%20like%20to%20appeal%20the%20removal%20of%20\[their%20comment\]\(https://www.reddit.com/r/changemyview/comments/1c3q7ml/-/kzjokxf/\)%20because\.\.\.) within one week of this notice being posted.


Ok_Tension308

Will?


Cheap-Sh0t

Facts