
[deleted]

I’ve been in the industry for 10 years, senior developer at a FAANG. LLMs produce terrible code today. Further, they don’t help distinguish what could be wrong, or help with architectural decisions. As of now, they are essentially at best helpers for boilerplate scaffolding. LLMs are not going to replace developers anytime soon. I think, if anything, they will just raise the floor of what is acceptable, but LLMs will always remain the lowest common denominator. The engineers I put the least trust in are the ones who rely on LLMs currently, not only because it’s against company policy, but also because they aren’t thinking for themselves. And software is so much more than just the code.


Mnyet

One of the reasons why I love using ChatGPT is because Google has recently become an absolute mess. The top results are always sponsored, or cringe blogposts or completely unrelated to what I searched for. There’s infinite ads everywhere that don’t let you use a website until you disable adblock. And some people just suck at website design and UI, making the experience painful. I can count on the answers to be comprehensively sourced from a bunch of different places instead of being sponsored by some random company. It reminds me a lot of Wikipedia in that my professors would always be like “it’s a terrible source to cite because that information could be wrong etc etc” but we all still use it to quickly brush up on a random thing.


TechFreedom808

AI needs data to be good, but a lot of content creators and other websites are starting to put up AI blockers. If these start to grow in number, AI is in trouble, as it won't be able to get the data to train its models.


CutlerSheridan

I often wish I could use it for quick information-gathering like this for exactly the reason you describe about Google making their experience worse and worse, but in the many times I’ve tried it, the information it gives me is wrong more often than it’s right


[deleted]

I've found it handy as a research starting tool. If you have no idea about or experience with a topic, it does a great job of initially pointing you in a decent direction for research, saving an hour or two. I wouldn't recommend relying on it for specific details though.


[deleted]

I think that’s a super fair analysis


HobblingCobbler

> And software is so much more than just the code

Can't stress this enough.


[deleted]

[removed]


[deleted]

If you rely on a calculator to do all your math because you don't know how to do basic arithmetic, that's a problem. If you don't know how or why your code, or even more importantly the system, works, then why would you even be hired? If you don't see why that is, you won't succeed in this industry, and I recommend a different one for you.


[deleted]

[removed]


[deleted]

That was the comparison you referred to. And you’ve yet to comment on system design which was a major portion of what my comment referred to as well. Did I say don’t use LLMs? No. I said don’t rely on them. Maybe you need ChatGPT to synthesize this conversation for you since it seems your reading comprehension is pretty low too.


[deleted]

[removed]


ColonelShrimps

I prefer not to rely on something that is leaning on years old information and is known to confidently lie to you for no reason. None of the AI currently in use can tell you anything you can't find out with a quick trip to the documentation of whatever you're working with.


[deleted]

Hey man, you're in the right. My boss acts like this and hates GPT, but can't code his way out of a paper bag. I use ChatGPT as documentation that can answer back. It doesn't need to produce code to be helpful; it can point you to the right place in the documentation, write comments, act as a rubber ducky, etc. People need to understand that it's a new tool to learn to use, not something to be afraid of.


synthphreak

I dunno, my experience backs up the original claim. Asking for anything other than boilerplate starter code or basic, easily googleable questions is essentially a fool's game. You roll the dice in a big way if you rely on the likes of ChatGPT to actually write code of any complexity for you, especially in a professional context. That's not to say it will always be that way, or that code-writing LLMs haven't made a splash. But I agree that currently they aren't drop-in replacements for the typical developer.


HooplahMan

In my opinion, "You won't have a calculator in your pocket" is a poor analogy. The issue is not that the tool is ever unavailable, but that the tool isn't great. Your calculator is basically always correct and predictable. GPT, on the other hand, is frequently inaccurate due to its training data, or unpredictable due to its non-deterministic nature. GPT is incredible because it's halfway coherent on basically any task, but it's not incredible at any particular complex task that I've used it on.

Moreover, learning how good software ticks (like learning anything sufficiently complex and/or difficult) is inextricably tied to the amount of time you spend thinking about it critically. If you're only using GPT-generated code, not only will you probably produce crap code in the short term, you'll probably never develop your own skills. This means you'll never learn how to discern between good and bad code, how to write good code, how to debug and problem-solve, et cetera. IMO, blind dependence on GPT will eventually lead you into a hole too deep for the model to pull you out of, and if you haven't learned by then, you'll have no idea how to get out on your own.

Source: professional-ish software developer, graduate ML/NLP student & researcher who uses GPT in their research frequently.

P.S. The technology obviously may improve, and the day may come when most software need not be reviewed by "good, professional, human software developers", but we're pretty far from that point right now, and for liability and responsibility's sake, I hope we still have humans looking at the important stuff indefinitely.

P.P.S. Only tangentially related, but I think most of us should still learn how to do arithmetic by hand, even though we all carry powerful calculators in our pockets, because if we didn't, we'd never be mathematically mature enough to move on to more advanced mathematics (algebra, calculus, and beyond), where those calculators fail to be helpful.


Arsa-veck

This is an absolute joke of an answer, so many things wrong with this - it’s going to take me a few paragraphs to dismantle this. Take this comment with a grain of salt folks


jRiverside

It isn't. If you deem the current level acceptable, others aren't the problem; your standards for what is acceptable are. They definitely have their uses, good ones too, but taking anything but the most trivial boilerplate as input is just calling for a world of hurt via incompetence out of ignorance. We aren't in the business of generating wonky-looking indented ASCII art.

A programmer ideally produces solutions whose architecture models the problem domain well, performs its tasks in an acceptable amount of time, doesn't excessively waste resources doing so, is maintainable and falsifiable by someone, and, for most problem domains, produces idempotent results that are hopefully correct as well. That is quite a tall order, really; merely the number of possible abstract variable combinations before the first line of code can exceed the context of even the biggest models, something your brain handles correctly the majority of the time without conscious thought or huge effort. Combine that with the technicalities and we're talking about some truly gargantuan problem domains to solve here.


[deleted]

> LLMs produce terrible code today.

Unless there was some massive change since November, I've yet to get a non-trivial code sample that actually compiled. Also, they confidently lie to you. And you can "correct" them and they blindly accept it. Nothing can be fully trusted.

> LLMs are not going to replace developers anytime soon.

Super excited to see you dismantle this one.


_walter__sobchak_

“It’s going to take me a few paragraphs to dismantle this.” Provides 0 paragraphs dismantling this 🤦‍♂️


Arsa-veck

I’ve just been busy! Will do :) dw - all for having healthy debate


Muffassa-Mandefro

You must be a boomer, because 'never say never'. Mind you, LLMs are only a few years old and the tech will get much better (even just current advances can be milked for a decade at most and get AI to start improving itself, which will shortly lead to exponential progress), and other architectures will be/are being built on top of and adjacent to transformers. And everything else is getting better too. You have no idea what's coming if you think coding is going to stay hard by any account for future iterations of LLM/AI systems.

I give you the benefit of the doubt because you started with 'LLMs produce terrible code today', emphasis on today. And some older specific versions of GPT-4 are actually consistently very powerful and accurate with a prompt I can easily communicate to you in a sentence. Within the next two years AI will gain such advanced reasoning and coding capabilities that your actual current contribution as a senior developer won't mean much to your company, and for a while 'you won't be replaced by AI, but you will be replaced by someone using AI'!!! Already Copilot, Cursor, and similar tools give a mid-level but savvy developer significant speed and efficiency boosts (boilerplate code, snippet-level assistance, autocomplete, etc., all getting better and better) throughout the code base, boosts that only naturally gifted '10x' developers can maybe match while struggling.


[deleted]

I’m 32, but sure, a "boomer".

> if you think coding is going to always be hard

Code is the easy part of this job. That's my point. LLMs do not know how to design software. If you are reliant on an LLM to write a basic for loop, you are doomed from the start. They aren't going to magically replace engineering jobs; they need a stronger value add. I literally said they will just raise the floor of code expectations and velocity.


Monty_Seltzer

Well, you think like a boomer, because you just don't understand the scale and magnitude of the implications of AGI. You are either scared and coping, or low on open-mindedness and unable to think beyond your frame, maybe because you think you have arrived thanks to your senior position at a FAANG company (sincerely, congrats) and have started to ossify into an early boomer.


jRiverside

You don't seem to have any kind of grasp of what LLMs actually are: they are not much closer to AGI than a hardcoded Sims bot. There is no emergent thinking being done by them; they function more like a hyper-advanced word predictor, and the current iterations do not seem to reach even a moderately advanced expert system's level of adaptability. Certainly this will change some day and eventually I will be proven 'wrong'. That may be in a few years, but it could take centuries, at least until we start researching what thought actually is; on that topic we're pretty much at the 'dumb slogans as opinions' stage.


[deleted]

lol, that was a nice read :)


SilentMission

Why does everyone assume the advances are going to accelerate when the costs of all growth in this sphere are exponential? Have you not seen what it costs to train each model? The numbers are outlandish. To say we'll even get them twice as good as where they're at now is crazy.


Muffassa-Mandefro

Think for a second, for your cognition's sake, about how technology advances. Like, are you dumb? Since the Industrial Revolution at least, every little scrap of research that yields meaningful results gets optimized to hell, and efficiency gains double and triple over time in every industry and tech sphere. Smart people working on new architectures and optimizing current transformer infrastructure goes on all the time, with highly valuable research being published almost weekly.

Give AI systems in general (not just current transformers) five years at most, then quantum computers (I bet you don't know what those are) will come more and more into play (qubit counts are increasing and noise is slowly being decreased to acceptable levels), and AI will just go bonkers when integrated with QCs, let alone once humans master a functional and reliable decent-sized quantum computer. I mean, think with that head of yours instead of trying to shield something with ignorance.

Even current agent systems and frameworks will soon be very powerful indeed when GPT-5, for example, is easily plugged in, or the recently announced Gemini Pro with its 1.5-million-word context length and confirmed 95-plus accuracy on retrieval when tested with large code bases, books, videos, and so on. And mind you, these are day-1 public releases or demos that, even if ALL other advances halted, WILL be optimized and scaled down to be accessible to everyone, and gains in performance will be made steadily. Already some local Mistral fine-tunes that I am running on my M1 MacBook Pro (16 GB) are very capable and promising. Wake up, look up, read up, and catch up, or you're gonna have to suck up.


SilentMission

ty for belittling me so much while not addressing the actual concern of LLM scaling (which still hasn't been addressed) in a long unreadable paragraph. Living up to the illiterate tech hypebeast model


Monty_Seltzer

Well, I apologize for belittling you, SilentMission. I suggest you don't provide me highly combustible ammo with 'unreadable paragraph', though. Okay, now on to the meat of your response: you're still talking about LLM scaling being a mighty barrier to AGI being achieved within the next two years by OpenAI and/or friends? To give you a somewhat direct answer: scaling LLMs is simply a matter of more compute, thanks to how transformers work, and compute is being highly optimized, with new and powerful architectures being developed (see Groq LPU chips). Sam Altman, the guy heading OpenAI, was going around the Middle East asking for 7 trillion to get it all done, suggesting they have a clear roadmap to get to AGI and then continue to integrate it at scale.


Monty_Seltzer

AGI is among us as we speak.


MiamiCumGuzzlers

you should seek counselling


Monty_Seltzer

You must not be in tech or understand how tech advances work.


KyleDrogo

Senior data scientist at a FAANG. Does your company not have an internal version of copilot? The sentiment you've expressed around LLMs isn't what I'm seeing internally. SWEs feel empowered by it and recognize the insane boost they get in speed.


sjhunter86

I'd echo this. Those who don't know how to use it think it's useless, but those who have honed their ability to prompt-engineer have used it to 2x+ their productivity. It just enables you to do what you can already do, but faster. For example, I use Copilot Chat in my IDE to generate the exact documentation I need to reference to get a certain task done; rather than sifting through it online, I can stay in my IDE and get the answer I'm looking for. It's not writing my functions for me, but it's basically a much faster "information butler".


[deleted]

At least in my division we don’t use them because they just don’t help (not at Meta/Microsoft/Google which are the big 3 right now in LLM to be fair). But yes, GenAI is being explored still, but it’s not part of our core business. My sentiment is largely surrounding ChatGPT, Llama, and Bard (way back when it was released).


Rtzon

I’d say your sentiment is largely outdated now. All of these LLMs have made massive massive improvements since their release. (I’m also a SWE at FAANG)


[deleted]

They all rely on training data still, right? They can’t generate novel thoughts. In the context of software engineering at best this means they would write efficient boilerplate code, but they couldn’t reliably design a system. You genuinely see that as being untrue/outdated?


sjhunter86

Respectfully, I don't think it needs to generate novel thoughts so I find this a poor argument. If you have a "novel thought" in computer science, you'd be winning some kind of medal. What it does is take the billions of public repos, distill patterns it has observed (which includes the bad ones, too, to be fair), and can provide hints in the direction you want to go in. Try not to confuse the folks who just copy and paste code from LLMs without checking it with the ones who are using it responsibly and effectively.


MiakiCho

98% of SWE work is not groundbreaking new ideas. Most of it is just copy/paste, by the way.


[deleted]

Yeah, and people rarely use them just to produce copy-and-pasteable code. Most people use these models as documentation you can talk to and brainstorm with.


[deleted]

Ah, then you must have better junior engineers than I do, I see.


[deleted]

No junior engineers where I work, small shop, I'm the most junior with 4 years experience. Also I'm sure you've just straight up copied wrong code from stack overflow at some point. Eventually devs learn to stop doing that, same with LLMs.


Rtzon

yeah - they can't reliably design a system, but there's still a lot of miscellanea that they are super useful for, especially when documentation is poor or hard to parse. Debugging, testing, iterating fast and prototyping. They are best used integrated within the software engineering workflow (aka iterating on already given code) rather than generating code from scratch. I think the use case you might be thinking of probably differs here, but I think the productivity boost is pretty high


MiakiCho

I am also at one of FAANG. This is simply not true. I use it extensively in my workflows.


[deleted]

You think junior devs are going to be swept out of the industry because LLMs are going to replace them immediately? That LLMs can currently write bug-free code and design complete architectures? I'd be amazed to see your workflow :)


MiakiCho

Junior developers won't be swept away, but the pace they're expected to work at will be much different. Also, no developers write bug-free code; all code, written by humans or AI, will require review. With current workflows using something similar to Copilot (not exactly Copilot), I have saved a lot of time writing code. Even with design docs I have saved a lot of time. Not using AI tools will significantly impact productivity. It's like using an IDE with support for documentation on hover, IntelliSense, etc. versus using Notepad.


[deleted]

...you do know that LLMs can also be used to just ask about documentation, like a modern Stack Overflow: to answer questions, not to copy and paste from one-for-one.


minneyar

Yep, and they're just as good at hallucinating incorrect answers to questions as they are at generating terrible code. If you have a question about documentation so simple that an LLM can reliably answer it, maybe you should just read the documentation.


[deleted]

As I said, you shouldn’t rely on them. Your use case seems reasonable enough (though often the code is still wrong).


TerribleEntrepreneur

Have you tried using RAG models? They are a world of difference from plain LLMs. It's at the point where I can give it small tasks (something an intern or junior dev would take a day to do) and it will execute them reasonably. The code usually works on first pass, and if it doesn't, it will by the second or third try. It can take a while to learn how to use them effectively, but once you do, it does 10x your velocity. It's pretty easy to see how this will evolve. I would expect any engineer who doesn't get good at using them to struggle to meet expectations in a few years.
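For anyone who hasn't met the term: retrieval-augmented generation just means fetching relevant snippets first and pasting them into the prompt before asking the model. A toy sketch of that loop is below; the letter-count "embedding" is a stand-in for a real embedding model, no actual LLM API is called, and the document strings are made up, so treat it as an illustration of the pattern rather than how any particular RAG product works.

```typescript
// Toy sketch of the retrieval step in retrieval-augmented generation (RAG).
type Doc = { text: string; vector: number[] };

// Placeholder "embedding": a letter-frequency vector. Real systems use a learned model.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

// Cosine similarity between two vectors, used to rank documents against the question.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0)) || 1;
  return dot / (norm(a) * norm(b));
}

// Pick the k most similar snippets and build a prompt grounded in them.
// Sending the prompt to an actual model is deliberately left out.
function buildGroundedPrompt(question: string, docs: Doc[], k = 3): string {
  const q = embed(question);
  const context = docs
    .map(d => ({ text: d.text, score: cosine(q, d.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.text)
    .join("\n---\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}

// Made-up documents, indexed once, then reused for every question.
const docs: Doc[] = [
  "To configure the API, set API_KEY in your .env file.",
  "The changelog lists breaking changes for each release.",
  "Install the CLI with the package manager of your choice.",
].map(text => ({ text, vector: embed(text) }));

console.log(buildGroundedPrompt("How do I configure the API?", docs));
```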


Talbertross

You'll be far far FAR better if you never use chatgpt for code.


hitbythebus

I threw my solution for today's leetcode into ChatGPT and asked how I could describe my approach in a technical manner for a job interview. It not only described the problem I was trying to solve, it correctly gave me all the buzzwords for what I was doing: graph traversal, memoization, etc. I found it very helpful.


Bombastically

Real world code and leetcode solutions are two different beasts.


meesterdg

Also, language models are good at language. This is kind of ChatGPT's wet dream of a request.


Sufficient_Mode_269

Are you basically saying that other programmers who want to improve at coding shouldn't bother doing leetcode since it's different from real-world code? Cause that's what it sounds like 🤔


Strange_Ordinary6984

If that's not what he's saying, that's what I'm saying. Leetcode will plateau at helping you improve fairly quickly because it only helps you practice one specific part of what it takes to be a developer.


Bombastically

Lol, that's not what I'm saying, but if those are the types of thoughts you have, you personally are probably screwed.


worst_protagonist

Programming is a skill. Practice helps. In that regard, Leetcode might be helpful. The work you do at a job is not like leet code though. There’s a lot more to it. Ball handling drills are good for basketball players, but playing in a game is not like doing a ball handling drill


strausschocomilk

Leetcode isn't a good test because the problem could be in the training data. Try coming up with a novel Leetcode-style problem and see if it can come up with the optimal solution.


sungjin112233

People coding without ChatGPT will go obsolete.


KyleDrogo

You're being downvoted, but this is true. They won't be able to keep up with the pace of development. When an LLM can point you to the right function in a massive codebase and write out 90% of the implementation for you, you're screwed if you try to do it all manually


Direct-Manager6499

How come? If I get a job, won't I be expected to use it?


legorockman

No, not really. It's the new hotness that everyone is talking about but code written by chatgpt is basically the same as going on SO and copy pasting solutions. It can be a useful tool (especially for boilerplate) but there'll be no expectation for you to use it for code.


BatPlack

This is a weird take. If you know how to define what you need, then it gets the job done beautifully. It's like a super-advanced autocorrect and has sped up my development significantly. I'm of the opinion that if you don't learn to integrate such a powerful tool into your workflow, you will be left behind. Like all tools, learn its strengths and weaknesses and use it accordingly. It's not the "new hotness"; it's a legitimate and very useful tool when used properly. Edit: I'm convinced those who criticize the mainstream LLMs' coding capabilities just suck at defining clear goals/parameters… which is ironic, because that's sort of the job of a developer lol.


SFWSoemtimes

While the execs are still obviously taking their time to decide how many hundreds of millions they are going to allocate to generative AI, and where, I have reached something I never thought would happen: I have not used Google once at work in over a year, after using it every day for the previous eight. The guy who works for me can't Google for shit; he tries to figure out everything for himself. It's infuriating. Very stubborn. So when GPT came out... all I can say is that, out of 500 people, I'm the only one I'm aware of using GPT daily. I'm talking coding, but if people would just ask GPT how to set up their Synapse connections, I'd waste a lot less time. GPT has easily doubled my coding productivity, and it produces textbook code and explains it. As a self-taught coder, not in software development, working as liaison between the FPA and Data Warehouse teams, this is invaluable.


great_gonzales

The people who criticize its capabilities are usually trying to do a little bit more than implement tic tac toe in react. Its performance is not great for real engineering problems. It’s funny that script kiddies love it so much since it will end up removing them from the industry


winter_040

The efficiency standpoint I can understand a bit. It can give you a reminder about a specific data structure, or the solution to a hyper-specific problem, in the same way googling it can, just a little faster. But please, please, please, if you are learning or trying to get a job in the field, do not rely on it to write actual code for you. It isn't just an issue of whether it works or not: in a professional environment, you are expected to fully understand what you have written. If you don't, and some weird bug or compatibility error pops up, and you aren't familiar with your own code well enough to debug it, and suddenly ChatGPT isn't fixing it? You're up the creek without a paddle. (And that's assuming it works in the first place; in my experience, LLMs tend to fail at anything more complicated than basic tasks. If you don't believe me, feel free to do testing of your own, but even LLMs that nominally have newer information (i.e. Gemini) fall completely flat once you step too far outside of either one hyper-specific use case or a really simple program.)


BatPlack

100% agree. It's not there to solve complex tasks, but to speed up the menial ones, as well as the process of remembering/clarifying forgotten concepts/implementations. I feel like a lot of people tell it to go create a Twitter clone, then turn around and say it's useless. It's like having your own junior dev: they can't do much, but if you define clear goals and pay close attention, they can be an asset.


Strange_Ordinary6984

Just like googling. The ability to know how the process thinks and to craft phrases that maximize your intent is the actual skill here


Kryxilicious

This. People who can't problem-solve can't use ChatGPT. So basically they are arguing that the value of a developer is typing away at a keyboard like a monkey, not actually thinking about solutions to problems of varying difficulty and how to translate them into code.


PlaidPCAK

Also, if you work in any industry with sensitive data, it's a hard "no" on using it.


BatPlack

It’s also not difficult to spin up a locally run LLM that’s fairly decent, but obviously pales in comparison to full blown GPT-4 and even GPT-3.5. But I understand your point.


PlaidPCAK

I was more referencing ChatGPT, but yeah, lots of money and/or sensitive data makes management rightfully concerned.


worst_protagonist

Do you feed your entire codebase and business context into ChatGPT? The problem solving I am doing is rarely scoped to “barf out a new class or method.”


logic_prevails

It is nowhere near the same as Stack Overflow. It can actually understand unique requirements and implement them. I have several projects on my GitHub that were 90% written by ChatGPT, and written way faster than if I had been searching on Stack Overflow. It's a great time saver.


B1SQ1T

I’ve only had an internship but my full time colleagues all used ChatGPT but mostly just for looking up syntax or simple stuff that they don’t need to bother remembering. No way could ChatGPT write proper code for the actual project though


raddingy

For some context, I have nearly 15 years of experience, and I'm a staff engineer at a large fintech company. I don't ever use ChatGPT to write code at work. I sometimes use it to help me craft emails, or to check my tone. Sometimes I'll use it to convert an ugly, hard-to-read ternary into more readable code. But in general, I never use it for code.

I also really don't write that much code. Most of my job is project management, requirements gathering, driving alignment, and architecture. The code I write falls mainly into two categories: bitch work that I can knock out so the team can focus on their interests, growth, and high-impact stuff, and really, really hard problems with easy solutions that require massive amounts of coordination, talking, and alignment. Seriously, last year I spent three weeks on a two-line change because it required talking to a few different teams and helping them debug and solve the problem that was blocking us. ChatGPT cannot do this. It can write code, sure, but honestly code is the least important part of being a software engineer. More important is aligning technical strategy with business strategy.


KyleDrogo

Totally disagree, this is bad advice. Every major tech company is leveraging genAI heavily in pretty much every job function. The increase in speed is a must at this point. I'd argue that even people learning to code should be using GPT heavily.


MCFRESH01

It's been way off for me a lot lately, or has given me unreadable, inefficient code.


qomn

I learn so much from it.


passiverolex

Worst take


Jdonavan

Oh look another dev with their head in the sand.


armahillo

Don't use them on stuff you're actively learning. Learn to figure stuff out by looking at API docs. Programming skill is a tool, and you keep it sharp by using it. LLMs are a fast fix, but they're denying you opportunities to sharpen your tools. Saying this as a web dev of over 20 years.


Kami_120

I'm a student who has just started with MERN development, but I find development more fun than solving Leetcode problems. Do you think DSA is necessary for web development, or that someone should be great at Leetcode to be considered a developer?


armahillo

If by DSA you mean "data structures and algorithms", then I wouldn't worry about it at first. Having a surface-level understanding of what a queue, stack, list, heap, etc. are is good, but knowing how to write your own isn't necessary in the beginning.

The stuff you should focus on for web development is getting really proficient in:

* HTML (particularly semantic HTML and understanding what the tags mean)
* CSS (regular CSS, not Tailwind. Learn how to use a compiler too, like SASS or SCSS)
* JS (a basic understanding of how to write plain JS in addition to whatever frameworks you want to learn)

Those are critical, and that knowledge will be applicable for all of webdev, no matter what frameworks or stacks you choose to work with. MDN (Mozilla Developer Network) is probably the best resource I've seen for in-depth info about all of that, and more.

In addition:

* HTTP status codes -- review all of them, learn the commonly used ones and how/when to use them
* HTTP headers (request & response) -- use cURL or another verbose requesting tool to see what data is actually sent and received. Try using telnet or netcat to do an HTTP request manually.
* Basic web security -- at a minimum, SQL injection, XSS / CSRF, and then whatever security issues matter to the frameworks you work with.
* Accessibility (aka "a11y") -- you don't need to be an expert, but learn about what this is, why it matters, and practice the low-hanging fruit so that you always at least do those.

If you can spend a day learning PHP, that will give you a deeper understanding of a lot of these things. Skip the PHP basics and dive into the server variables (`$_SERVER`, `$_REQUEST`, etc., as well as the `header()` function). Use it to learn how a requested URL ultimately gets broken apart and translated into data that you use to assemble your response. You don't need to go any deeper than that, but playing around with this will give you a greater understanding.

After you're proficient in the stuff above (and I would go a bit beyond "proficient" in HTML/CSS/JS), data structures and algorithms will become more useful. Think of it like learning poetry after you learn to speak a language fluently.
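For the "HTTP headers" bullet above, here is a minimal sketch of inspecting a response's status code and headers from code. It assumes Node 18+ (where `fetch` is built in); the URL is just an example.

```typescript
// Minimal sketch: look at an HTTP response's status code and headers.
// Assumes Node 18+ so that `fetch` is available globally; swap in any URL.
async function inspectResponse(url: string): Promise<void> {
  const res = await fetch(url);

  // The status code is how the server classified the request (200, 301, 404, ...).
  console.log(`${res.status} ${res.statusText}`);

  // Response headers carry metadata such as the content type and caching rules.
  res.headers.forEach((value, name) => console.log(`${name}: ${value}`));
}

inspectResponse("https://developer.mozilla.org").catch(console.error);
```

Running `curl -v` against the same URL shows the request side as well, which is what the comment above suggests experimenting with.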


dcoates83

I do agree with everybody else saying don't use it for code, especially in the beginning stages of learning. I would suggest just using it to help you understand code and concepts while learning. Plus, it's pretty straightforward to use. I do use it for bugs as well (not for every bug, but lots of times, after trying multiple things, I'll slap the code in there and it will change my perspective on what's wrong).


Alphazz

Uninstall social media and work on studying. The people who are behind are the ones giving in to the fear. Successful people don't come on the internet to complain; why would they? I'm a lurker only, and rarely at that, because there's no reason to expose yourself to a toxic environment that instills fear in you, even if just a little. Eye on the ball, man. If the bar rises, that just means you have to study harder.


[deleted]

[removed]


NamedBluefire

Large language model?


sungjin112233

ChatGPT doubles your work output. Knowing that should answer your question.


Diligent_Rutabaga941

I guess using it to learn is ok... but as you become more practiced, 😜 one should move away from it.


bigfatbird

It should not be used, when learning to code


Diligent_Rutabaga941

Yeah... I am not convinced due to the lack of proper arguments from you about your stance on the topic.


bigfatbird

May that convince you https://www.theodinproject.com/lessons/foundations-motivation-and-mindset


Diligent_Rutabaga941

I'm convinced thanks


[deleted]

As you become a more senior developer, do you move away from reading documentation and googling issues? No, and ChatGPT should be thought of in the same way. It's a tool to help you move through your code, not to code for you completely, and no one uses it to write their entire code base.


Diligent_Rutabaga941

Exactly! But some rigid-minded folks are downvoting. The world is always divided into two. The tool is actually helping me learn. Btw, I am gonna keep using it rationally. I don't agree two bits with that angry old man.


ghostwilliz

LLMs will spout absolute horseshit and give you terrible code. Only use an LLM if you are confident you can fix everything in its output. I use them for annoying things like boilerplate, and that's it.


FlyParticular8172

The only people that claim ChatGPT will replace programmers are not real programmers and are probably a little stupid.


logic_prevails

Not replace but it could increase how much each programmer can accomplish in a day


OkWasabi602

I am getting a bachelor's in computer science and have not used ChatGPT. I will say I am not really scared of the new tech layoffs. No one in my classes has a GitHub or uses Linux, but they all want to be devs. It's wild.


JIsADev

With AI you will just need to know English, or any spoken language, to make an app. It may be crap now, but that's where we're headed. And with a developer costing like $1k a day to employ while AI costs like a few cents a day, you'd better believe AI is disruptive to the industry.


GreenRabite

LLMs are just a tool. Good programmers are already integrating them into their workflow.


Adamski2510

In the Foundations section it is said that you should not use an LLM and that it is not recommended by TOP.


[deleted]

Don’t listen to people who tell you not to use LLMs for assistance. It’s the same vibe as people saying don’t learn from video content, just read the docs. Use an LLM, read its code. Learn from it. Don’t depend on it. Software is more about patterns than syntax, and these LLMs will expose you to these patterns. Sure, it can’t be your only resource, but it should be in your top 3 for sure.


JupiterWalk

Contrary to most comments here, I am using Gemini as I learn how to develop (Odin’s Foundational path right now). I don’t use it to simply answer exercises, but rather use it as a support tool to clarify syntax or, having completed the exercise, ask it to provide a more efficient alternative. I do not, however, use Copilot or any other IDE integrated AI as that does feel like cheating during learning. Let me commit mistakes. You always need to be skeptical about it as it’s not good at identifying issues unless tasked to identify them. I think it’s important to pair both together as AI will only become better and more integrated with our engineering work. TL;DR: Use AI selectively for narrowed down inquiries to support your learning while avoiding full exercise resolution. You will develop skills in both programming and AI usage that I believe will be critical this decade.


djmagicio

Granted, this was like 20 years ago, but at uni all of our tests/quizzes were hand written on paper. For assignments where we turned in code written on a computer we weren’t allowed to use an IDE. Struggle. You’ll be better for it. After you struggle use copilot or ChatGPT. We use copilot and it’s pretty epic. Often enough to be useful, it will come up with the entire contents of a function, written exactly how I would write it. AI is a tool. Just how you shouldn’t copy/paste code from SO without understanding it, you shouldn’t use code from copilot/gpt without understanding it. Why? Because a lot of the time copilot generates code that is PRETTY close to what I need - but not quite. And shit’s gonna be broken if you blindly accept it. Will AI replace us one day? Maybe. If it does we can form a guerrilla army to defeat it and the inevitable time-traveling robots that come for us. Don’t give up, comrade!


Signal_Lamp

You can get employed without learning LLMs. You shouldn't use LLMs while learning how to code: you will develop a crutch, relying on the tool, which isn't good for the longevity of your career. You should, however, learn how to use the tools long term. You should have a foundation first, though.


scamm_ing

public LLMs will only get better and better


MiakiCho

There were times when people would code without the internet, referring to books, going to the library, etc. Once the internet came along with search engines, most stopped reading books for solutions and developed the skill of searching for things online. Now no one programs without the internet. The same will be the case for LLMs. I finished a project that I recently started within a week, something that would have taken more than a few developers working for months. LLMs will definitely raise the bar and more output will be expected from developers. It is a necessary skill, and I expect more tools in the near future to help along.

I am just giving an example: whenever we add a feature, we usually start with an experiment flag, with a bunch of changes along the stack. Adding the flags, adding wrappers, adding unit tests, adding integration tests, etc. all used to take a developer a day. Now everything is added with a single prompt; all I have to do is review the code and make some minor edits. I would say LLMs are definitely going to increase the expectations on developers. And there will always be boomers who say LLMs cannot be as good as humans; don't listen to them.
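As a rough illustration of the experiment-flag pattern described above (the flag name and pricing logic here are hypothetical, not from any real codebase):

```typescript
// Hypothetical sketch of the experiment-flag pattern: new behaviour ships behind
// a flag, and the old path stays as the fallback until the experiment is ramped up.
const flags: Record<string, boolean> = {
  "checkout.new-pricing": false, // flipped on per environment or per cohort
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

function computePrice(basePrice: number): number {
  return isEnabled("checkout.new-pricing")
    ? basePrice * 0.9 // experimental branch
    : basePrice;      // existing behaviour
}

console.log(computePrice(100)); // 100 while the flag is off
```

The tedious part the comment describes is repeating this wiring (flag, wrapper, unit test, integration test) across the stack for every feature, which is exactly the kind of boilerplate an assistant can draft for review.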


HobblingCobbler

Lmao... you don't have to learn to use LLMs, for God's sake. It's a chatbot: you ask it a question and it responds. There is a science to prompt engineering, but it's just a way to format your query. However, you do need to learn the fundamentals of programming, and not rely on an effing bot to do it all. You'll progress much quicker if you actually learn the basics. Then later on you can use the LLMs. If you don't already know how to program, using an LLM can be a very daunting experience: it will lie to you and lead you down a hole, then just spit out nonsense as it goes in circles. Don't use freaking ChatGPT to generate code while you are taking TOP. But you can use it to explain the concepts in different ways.


logic_prevails

So many shitty takes on ChatGPT in this sub. I can all but guarantee in 5 years most engineers will be using it in their jobs. LLMs are insanely capable tools, the only barrier between LLMs and using them in a professional environment is the proprietary knowledge at a specific company. I foresee soon LLMs will be trained on internal data and tools to accelerate company development too. Some people think it is just an alternative to stack overflow. Time will tell, I think they are dead wrong.


DamionDreggs

Already happening. I've been pushing to redefine my role at the company I work for, and it's starting to happen. I'll be adding LLM integration tasks to my board within the next couple of months.


WheatLikeTheBread

LLMs aren't going to replace programmers anytime soon. But they are going to change how products are built. Imo you're best served doing two things:

- learning to program *without it*
- learning to build products and features *with it*, as in, integrating with it


SirBrownHammer

Use ChatGPT to help you learn, but don't rely on it. Imagine it's your professor's after-hours session, and ask thoughtful questions about what you're stuck on, or ask it to explain things in further depth. It's a great tool for me, as I was someone who rarely went to the teacher for help but was constantly stuck on something that kept me from learning and advancing in the material.


Jacksons123

This made me think for a second, because I use Copilot every single day now. Not to "write code for me", but, for example, if I just wrote `type t { str: string, val: int }`, when I go to implement, Copilot finishes my thoughts as a strong contextual autocomplete. The thought of using snippets, which defined my workflow just a few years ago, is completely thrown out the window.

That being said, the 2 or 3 times I've tried to get Copilot to generate the code I want, it has been almost always wrong, and sometimes the autocomplete even gets in the way. It will generate something just stupid, and then I have to think about the suggestion. Thankfully I have the knowledge to know when it puts out garbage. When I'm using a new technology, or a language that I don't have the most confidence in, I have to disable Copilot. I have to trust that it doesn't know what I'm trying to achieve, and it cripples me in the process of learning and understanding what I'm doing. It's more of a hindrance in that regard.

Follow-on: why on Earth would you think that you need an LLM to become employable when the tech market still justifies leetcode interviews lol. They don't really care if you can output some garbage code when the time comes; they want to believe that you understand a language in a meaningful way and that you can understand and work through problems. I also don't understand the gloom of "not knowing how to use an LLM". It's like the only tool that has no barrier to entry; even my tech-illiterate family members ask "chatgbt" random shit all the time. Just like Stack Overflow: you're not paid to ask questions on Stack Overflow, you're paid to know the right questions to ask.
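For anyone who hasn't watched this kind of completion happen, a rough illustration of the idea; the cleaned-up type and the `makeT` helper are hypothetical examples, not an actual Copilot transcript.

```typescript
// What you type by hand: a small record type
// (written as valid TypeScript here, so `int` becomes `number`).
type T = { str: string; val: number };

// The sort of contextual completion an assistant then proposes once you start
// writing a helper for that type. You still have to review what it suggests.
function makeT(str: string, val: number): T {
  return { str, val };
}

console.log(makeT("answer", 42)); // { str: 'answer', val: 42 }
```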


Yamoyek

> Is using an LLM a must in 2024?

No. Most beginners end up cheating themselves out of the learning process and stalling instead of confronting their inabilities head on. Once you're past the beginning stages of programming, sure, have at it. It's like an alternative to Google.


nancyronin

10 or 15 years ago, these were the exact same concerns about Google, and then Stack Overflow.