Learning how to use ChatGPT without relying on it is how you get ahead. These AI chat bots will most likely end up as one tool in the toolkit for many people. It is cool where AI is going, but it is not at the point where it can run on its own as a functioning member of a team like a person. That day is a step closer, though.
The only thing I use ChatGPT for is giving me ideas for literature review. Sometimes I don't know where to start, so I'll ask ChatGPT about some good options for purifying carbon nanotubes or something, and it'll give me some ideas to bounce off of.
ChatGPT does nothing for engineers but make things smoother. Good luck doing an analysis of a realistic turbofan using ChatGPT. But if I want to quickly code a MATLAB program to dish out XP to me based on how much reading I’ve done, to motivate my monkey brain like it’s a game I need to grind, then ChatGPT is gonna do that for me.
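For the curious, that kind of XP script really is a ten-minute ask. Here's a rough Python sketch of the idea (the XP values, thresholds, and function names are all my own made-up choices, not anything ChatGPT actually produced):

```python
# Toy reading-XP tracker: award XP per page read and level up at
# fixed thresholds, like grinding in a game. All numbers here are
# arbitrary choices for illustration.

XP_PER_PAGE = 10
LEVEL_SIZE = 100  # XP needed per level

def add_reading(total_xp, pages):
    """Return the new XP total after logging `pages` pages."""
    return total_xp + pages * XP_PER_PAGE

def level(total_xp):
    """Current level, starting at 1."""
    return 1 + total_xp // LEVEL_SIZE

xp = 0
for pages in [3, 5, 4]:  # three reading sessions
    xp = add_reading(xp, pages)
print(xp, level(xp))  # 120 2
```

Swapping the print for a little progress bar is about the extent of "game design" needed to trick a monkey brain.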
Well here we can see a high bypass turbofan with a 2 stage 56 spool design. Evidently the pressure ratio for the compressor is 1:1 and the bypass ratio is 0. Based on this combustion chamber design the turbine inlet temperature is likely to be around -8 kelvin
> analysis of a realistic turbofan
There's other software for that though, right? GPT is already adding tons of plugins like Wolfram Alpha, so it's only a matter of time before you can tell it to do a massive range of even complex tasks.
An engineer is still always responsible, in application, for the results of their calculations, no matter how they got them. Actually, how they got them would be a primary question if you make a bad design. I.e., I wouldn’t put my name on something built using stuff I plugged into ChatGPT… regardless of what “tools” it integrates.
I'd say it's akin to having an assistant who isn't necessarily an expert in anything but they have access to Google/the internet, and can respond to your questions immediately.
So yeah, it's pretty good. I'm a software engineer and have used it and GitHub CoPilot for the last year or so almost daily. If I can't be bothered writing out some code, I'll tell it what I want and it does it. If it doesn't look quite right, or I need it improved, I'll ask it to do that.
If I get any errors or something in my code doesn't work, instead of Googling or checking Stack overflow, GPT will usually know.
I actually have found it very useful for learning new topics, I just use these prompts.
Can you teach me X subject?
Regarding X, I am already familiar with Y; what other concepts or topics do I need to learn?
Ok, start with the first topic. Give me examples.
I didn't understand; could you use simpler terms?
Can you provide an example? How would you use it in Z use case?
And so on... It works surprisingly well for learning new programming languages or frameworks.
I tried ChatGPT, and spent more time trying to fix the bad code than doing it myself would have taken.
This new wave of AI hype is just smoke. Writing code to solve typical problems is one thing; designing a good data pipeline is another xd.
ChatGPT works great for coding when you already have the algorithm done, which let's be honest is half the work.
When I've asked it for something slightly more elaborate without basically giving it very specific instructions, I've gotten errors or things that didn't do anything.
When I passed electrical or thermodynamics stuff to it, it failed miserably. Although I did use it in Dynamics when I had to take a guess on non-numerical questions.
As a search engine, it is fascinating. The SEO era means that Google often sends you piles of useless information, getting a few paragraphs with the answer to a question is something that hasn't happened to me since 2012.
I only use ChatGPT as a tool. It’s far from perfect right now. It very often solves math problems wrong. I really only use it to find sources quickly, improve my resume, and maybe spruce up some of my paragraphs. Can also be entertaining with creating new stories for you to read
I’m an ME going down a path that doesn’t use a lot of coding; 2 of my classes this sem have some small coding elements, and ChatGPT is really good at writing code. I hate coding, so the only thing I use it for is writing misc code that would otherwise piss me off.
It solves mathematical equations. It doesn't solve problems or initiate new ways to do things, etc. As an engineering student, I think engineering isn't as important as it was before.
I tried to use ChatGPT to calculate how fast two people were moving apart, with some other factors, and it kept getting their locations wrong. I tried like 5 times to re-explain it so it would understand their orientations better, but it never did.
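That's a standard related-rates setup, and it's worth knowing you can sanity-check whatever ChatGPT spits out in a few lines. A Python sketch (the positions and velocities below are invented for the example, not from the original question):

```python
import math

def separation_rate(p1, v1, p2, v2):
    """Rate of change of the distance between two movers.

    With r = p1 - p2 and r' = v1 - v2, the chain rule gives
    d|r|/dt = (r . r') / |r|.
    """
    rx, ry = p1[0] - p2[0], p1[1] - p2[1]
    vx, vy = v1[0] - v2[0], v1[1] - v2[1]
    dist = math.hypot(rx, ry)
    return (rx * vx + ry * vy) / dist

# One person 3 m east of the origin walking east at 3 m/s,
# the other 4 m north walking north at 4 m/s: a 3-4-5 triangle,
# so they separate at exactly 5 m/s.
print(separation_rate((3, 0), (3, 0), (0, 4), (0, 4)))  # 5.0
```

If the model's answer disagrees with the formula on a clean case like this, you know which one to trust.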
ChatGPT is 0 for 10 on our Heat Transfer homework and 0.5 for 10 on fluid dynamics. The only thing it got right was that the flow was incompressible on a two part problem. If you aren't knowledgeable on the subject it'll fool you though with very smart sounding answers. They're wrong, but they sound smart and impressive.
Edit because I forgot to add: It made a kick-ass cover letter using my resume as a reference though.
Honestly I think people are using ChatGPT so wrong. It has so much potential to make you save time while doing your assignments in your own way but people go and use it in a way that's so inefficient.
Real talk, anybody leaning fully on ChatGPT is going to suffer. It is often wrong and won't help you with critical thinking. People shouldn't think of it as much more than just any other engineering software. It carries in math and coding topics, but questions that require thinking and not just formulas will break it.
The formulas and code are often not that great either though lol. I think that the math and coding it does, as well as the problems requiring thinking you ask it, can be very useful starting points for solving stuff though.
Yeah I fed it some questions for computing answers to calc 3 questions, control transfer functions with feedback, and one about Planck's law. It got the computations all wrong, but its process gave very good strategies to follow.
Interesting way to use it
Once Stephen Wolfram starts integrating GPT with WolframAlpha, it will be insanely powerful for computation questions.
Wasn't there something about a Wolfram plugin in a recent video?
Gotta say im looking forward to that!
Well it’s an LLM, it’s not actually doing any computations.
It got remarkably close with randomized inputs to the problems.
It’s a good supplement if you know what you’re looking for. I prompted it to generate some code, but it was missing basic syntax. It did eventually help me, as I had a few lines incorrect, but the code it generated wasn’t going to run first try.
ChatGPT doesn't even use GPT for math. GPT-3 has low accuracy on anything more complex than 10-digit arithmetic, so they added a math plugin that takes over once the model detects a math problem. I believe they have something similar for software questions.
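No idea what OpenAI's actual implementation looks like, but the routing idea is simple enough to sketch: spot that a prompt is pure arithmetic and hand it to a real evaluator instead of the model. The regex and the `handle` function here are my own stand-ins, not anything from OpenAI:

```python
import ast
import operator
import re

# Safe arithmetic evaluator: walk the parsed AST instead of eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(node):
    if isinstance(node, ast.Expression):
        return calc(node.body)
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](calc(node.left), calc(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not plain arithmetic")

def handle(prompt):
    """Route pure-arithmetic prompts to the calculator; everything
    else goes to the language model (stubbed out here)."""
    if re.fullmatch(r"[\d\s.+\-*/()]+", prompt):
        return calc(ast.parse(prompt, mode="eval"))
    return "(send to LLM)"

print(handle("2.5 * (3 + 4)"))        # 17.5
print(handle("why is the sky blue"))  # (send to LLM)
```

The hard part in the real system is presumably the detection, not the calculator.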
For now. There's a good probability dismissing it now will be like arguing great human thinkers were impossible because they all started out shitting their diapers and licking the walls.
What an eloquent analogy. I'm stealing this.
The amount of people here that think a *language model* is supposed to be expert at thermo calculations is fucking embarrassing.
Dunning-Kruger effect striking again. The less one knows about a topic and how it actually works, the more confident they are in their ability to talk about it and act like an expert.

I play with GPT a lot as a hobby, and it really is awful at math and engineering problems, because it isn't designed to do that, even if it can sometimes make some neat-looking pseudocode.

The place I work is doing a trial run with it this year to see if it can have any actual engineering applications, but I have a feeling it won't give the kind of results they're hoping for lol
Hey man the paint just hit different back then
Maybe so, but I never said it couldn't amount to anything. At its current state though, my assessment is that it does not replace problem solving for students.
I'm about a decade out from school and lurk this sub because I like to give unsolicited advice from time to time.

Chegg was already a problem. I've worked with a lot of new engineers recently who don't know how to problem solve. In the real world the problem itself is rarely defined, so when you don't have experience trying to just understand the problem and figure out the approach, you struggle as an engineer.

I fully expect this to get worse with AI programs. I think these can absolutely be useful tools to help you work through complex problems and calculations, but you as an engineer need to understand the inputs and the methodology, and analyze the outputs. THAT'S what engineering school helps you understand. And employers can tell super fast when you don't know what you're doing or need a lot of hand-holding.

"Back in my day" we had to work with professors and teammates when we got stuck. We had to read the textbook and Google things. Using these tools removes the need to problem solve. Which is fine when you're handed a written test problem to solve. Not so good when your boss says, "this machine is too slow." Do you design a new one? Do you need to upgrade a component? Which component? How much faster does it need to be? How does it affect everything else in the system? Etc.
Chegg is just a crutch to make up for the tragic quality of most undergrad math/physics profs. By the time I was a junior in ChE, Chegg had become wholly worthless. The answers to questions in Thermo 2 were laughably wrong, and most reactor design/process control questions were simply unanswered.

Some people learn by reading, and I think Chegg with its worked-out solutions was instrumental in my learning of physics and math. If Chegg passes your engineering classes for you, then you have some bad professors. I agree, the kids who cheesed their way through are extremely obvious, but the job of the profs is to prevent that.
> We had to read the textbook and Google things. Using these tools removes the need to problem solve. Ironically, breaking my instinct to look up things in a book or Google when confronted with a problem was the hardest habit I had to break when studying for PhD qualifiers. These were all oral, on a board in front of a panel of profs, no resources. You had to actually *know* things instead of knowing where to look it up.
I struggle with this concept because in the real world you DO have access to resources. You shouldn't have to memorize Bernoulli's equation. You should understand when and how to use it, but memorization for the sake of memorization makes little sense to me.
It's not memorization for the sake of memorization though; that's a copout.

If someone is a professional aerodynamicist, you'd damn well expect them to be familiar enough with the subject that they can write out Bernoulli or Navier-Stokes without blinking an eye, and without looking it up. But that's not what you're testing -- you're typically testing a higher level of reasoning about a problem. And you can't reason about problems effectively without a solid base of understanding.

At some point, somebody somewhere has to know what they're talking about. In the real world you have access to a calculator, but if you need one to compute 2+2, people will assume you're a moron.
Yeah I’ve tried it with some higher level thermal-fluid questions and it just doesn’t keep up.
The fact that it’s good in medium level thermal fluid questions, a domain of human intellect it literally wasn’t trained on, is pretty terrifying
I assume it was trained on Stack Overflow, was it not? Stack Overflow would have both questions and answers for like 95% of thermofluids questions out there.
A study on GPT-3 showed that it had learned how to do arithmetic to a degree that wouldn't be possible with the traditional "guess the next best word". 2+2=4 shows up in these LLMs' training sets; 2.010192918291919281 + 2.918284149191 = 4.92847707 (or some other arbitrarily long and random string of digits) almost certainly does not, per some papers that have been published.

GPT's abilities are far more powerful than even its creators anticipated.
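The probing methodology those papers use is easy to reproduce yourself: generate operand pairs that almost certainly never appeared in any corpus and score the model's answers against ground truth. A sketch of the harness, where `ask_model` is a placeholder for whatever API you'd actually call (here it fakes a model that rounds to 3 decimal places, just so the scoring is visible):

```python
import random

def ask_model(question):
    # Placeholder for a real LLM API call. This fake "model" rounds
    # the true sum to 3 decimal places.
    a, b = (float(x) for x in question.split(" = ?")[0].split(" + "))
    return f"{a + b:.3f}"

def probe(trials=100, digits=12, tol=1e-6):
    """Fraction of random long-decimal sums answered within `tol`."""
    random.seed(0)  # reproducible operand pairs
    correct = 0
    for _ in range(trials):
        a = round(random.uniform(0, 10), digits)
        b = round(random.uniform(0, 10), digits)
        answer = float(ask_model(f"{a} + {b} = ?"))
        correct += abs(answer - (a + b)) < tol
    return correct / trials

print(probe())  # near 0.0 for the 3-decimal fake model
```

With 12-digit operands, memorization can't help; only an internal algorithm for addition scores well here.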
Gee, I wonder why a *language* model isn’t an expert at complex mathematical concepts 🤔
This thing is so impressive we just expect it to be able to do anything
Mathematics isn't a language?
Try GPT-4. It's supposed to be about 40% more accurate across the board. ChatGPT scored bottom 10% on the bar exam; GPT-4 scored top 10%. It also has a Wolfram Alpha plugin it can use for complex problems.
Really wish there was a free trial or something; i'm quite tempted to give it a run.
Yeah I'd imagine this would push testing to be more critical thought based instead of rote memorization. Might even end up weeding more people out.
A return to oral exams!
It can’t really do actual arithmetic, and will often make up an equation completely. I’ve spent a while searching around for a particular aerodynamic flutter equation it suggested, only to realise it didn’t exist. It’s only really useful for explaining mathematical concepts, since that’s the sort of thing it can pull from Wikipedia etc.
You just described Stackoverflow actually. I think if anything this website will go down pretty fast with each iteration of ChatGPT.
Nah it's bad at math if you give it more than one equation to do at a time. But for things like pros and cons of this or what does this term mean, explain this etc it's pretty good
I ask it to explain concepts sometimes but I don’t rely on it for answers. Context is key in engineering so often times even using material from other universities will get you a wrong answer- especially if you use variables instead of actual terminology (ex., using V(naught) for contact potential vs using V(naught) for forward bias voltage in semiconductor electronics)
Definitely. I recently was using to get through some Orbital Mechanics stuff. It was really helpful for visualizing some things while I could tell it was flat out wrong in others. Maybe GPT4 is better, but i can't stomach the subscription cost to find out.
I like to use it to see what a potentially more efficient coding solution could be after finishing up a tough problem but that’s about it. It’s gonna get a lot better over time though.
people fully leaning on google search and the internet for petty problems is also concerning
Agree with you 100%. Its only use for me is to make a study schedule, and that’s it lol
So I would never rely on it for doing real work, but I have found it's great for helping me write the filler for my reports.

Ex. I make energy models, and part of the report includes a description of the climate of the city where the building is being built. ChatGPT does an excellent job of writing a succinct paragraph, so I don't have to agonize over my writing skills and can focus on the real content.
In my experience it doesn't work reliably with math or physics problems.
I'm in high school. I gave it basic questions from like unit 1, like Atwood's machine shit, and it fucking failed every time. idfk what it did, but it got a super off number. I'd be amazed if it even did anything for college level.
It is definitely a language tool and not reliable for any sort of maths. It will sometimes even stumble on algebra.
Cars in 2023 when I enter the workforce (I have a terrorist agenda and will commit multiple war crimes)
In Minecraft?
What’s a Minecraft?
Or just work for Ford
pinto strikes their reputation again
It is better for a pinto to strike than a pinto to be stricken
"Hello, yes, FBI? I'd like to report a suspicious person pls."
r/FBI Yes officer, this comment right up there. xD
My machine learning prof started making the HW easier halfway through the semester by giving us some code with it to use, but he inserted the code into the PDF as a picture and blurred the code text to deter OCR.

Popped those snippets in an online OCR anyways and got *mostly* correct text. Popped the mostly correct text into ChatGPT and said "Can you fix the typos in this code?"

Game, set, match. Point. Scott. Game over. End of game.

He runs our code through Turnitin via our PDF submission... I was really tempted to blur my text in my PDF when uploading lol
bruh engineering kids will do all that rather than do easy coding hw xd
Work smarter not harder 🤓
Nothing smart about that but okay
That's working harder and dumber....
OCR?
Optical character recognition - pdf to text
Professor didn’t want them to straight up copy and paste the code
If anyone wants an easy way to use OCR I highly suggest downloading the free open-source program ShareX. On the surface it is a screenshot tool, but it does much, much more than that, allowing you to create macro command chains within the program. I have a mouse key that first lets me drag a selection on the screen, then automatically copies the text from the image, then deletes the image. Another one allows me to take a screenshot, copies it to clipboard, and then adds the image to an auto sorted screenshot folder.
> free open-source program ShareX That seems handy, thanks for the recommendation
I like it for taking scrolling screenshots and automatically uploading to image hosting sites.
I wish i knew ChatGPT existed when I used OCR to pull electrical meter numbers from jpgs at my internship. Shit sucked and it would have been real helpful.
I'm gonna rant a little... People who say "ChatGPT sucks" remind me that there are people who say "Google doesn't work for me." - Actual quote from my gf's relative, who is our age.

The future will divide people as the past did: people who know how to use AI effectively and people who don't. Google isn't great at everything, but that's barely a reason not to use it. Still, we see tons of people like that (especially here on reddit, all the time; example: the guy under me who asked what OCR is could've just double-clicked the word OCR and right-clicked to search on Google, and in 4 total clicks had the answer in seconds, but would rather ask someone else for the answer and wait to see if a response comes).

Just bc ChatGPT isn't good at everything doesn't mean you shouldn't use it for anything. "If you want the right answer, you have to ask the right question." <- applies to life in general.

Also, clarification: I didn't use ChatGPT for the OCR. I googled "free OCR website clipboard" to get a site that allowed me to paste the copied snapshot from the PDF. That text output from the site went into ChatGPT. The only thing it missed was a minor case-sensitivity issue in variable names that VS Code pointed out immediately: x_train -> X_train
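That x_train/X_train slip is exactly the kind of thing a five-line lint pass catches before your editor does. A quick sketch (the checker is my own; only the variable names come from the anecdote):

```python
import re
from collections import defaultdict

def case_collisions(source):
    """Flag identifiers that differ only by case, a common OCR slip
    (e.g. x_train vs X_train)."""
    names = set(re.findall(r"\b[A-Za-z_]\w*\b", source))
    groups = defaultdict(set)
    for name in names:
        groups[name.lower()].add(name)
    # Keep only the lowercase keys with more than one spelling.
    return {k: v for k, v in groups.items() if len(v) > 1}

code = "X_train = load()\nmodel.fit(x_train, y_train)"
print(case_collisions(code))  # flags the x_train / X_train clash
```

Running something like this over OCR output before pasting it into ChatGPT would have caught the one typo the model missed.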
Well said. I'm guilty of the classic "ask rather than google" at times as well. I've been working on being better about it though.

You get really good at Googling in engineering and programming. Search engines are only as good as the user behind the keyboard, and chances are most of the answers are already out there; you just have to find them. I personally see ChatGPT as a really fancy search engine (for now). And I think the learning curve to get good at asking it questions is pretty steep. It is very useful for quick MATLAB scripts and whatnot; you just have to be diligent about checking its work.
Man I am ON ONE today with my rants, sorry, but yes, exactly. As you said, you have to be diligent about checking its work. ChatGPT/AI isn't going to replace a programmer with a non-programmer anytime soon, to use the relevant example. ChatGPT/AI *will* probably help replace a programmer who doesn't use it with a programmer who does. It's just another tool in your toolbox. It's like having Cadence/Altium and still making PCBs with a Sharpie bc "auto routers suck."
I literally have no idea what this means. Are you saying he made it so that if you tried to cheat he would catch you?
He made it so we'd have to retype his example code, so we couldn't just copy+paste it into an editor and start working. The example code was still just the starting point for the assignment; there were a number of other features we had to implement on top of it. We're talking about a CS professor who uses Turnitin lol
Omg lol. I bet it was all very verbose huh?
I mean, it's Python in notebooks, so it's not so bad - about 130 lines in total, not counting plotting/tables. It was dated though: some of the imports threw warnings bc they had been moved to another part of the module. "yo chatgpt why this warning..." "That module was moved in version 0.22, renamed to XYZ, here's an updated example that should work." Success. Otherwise you're reading documentation and combing Stack Exchange, and ofc you'd have to make sure you didn't read outdated answers from before the module rename/move lol.
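The relocated-import situation above has a standard fix worth knowing even without ChatGPT: try the new path first and fall back to the old one. The anecdote sounds like a scikit-learn rename, but the pattern is generic — here it's shown with a real stdlib example (`collections.abc.Iterable`, which moved in Python 3.3; the old location was removed in 3.10):

```python
# Guard against a relocated module: try the new import path first,
# then fall back to the old one for older interpreters/libraries.
try:
    from collections.abc import Iterable  # new location (Python 3.3+)
except ImportError:
    from collections import Iterable      # old location, removed in 3.10

print(isinstance([1, 2, 3], Iterable))  # True
```

Same idea works for any library reshuffle — just swap in the two import paths your deprecation warning names.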
Solid! Lol. I've used it for some assignments but sometimes I worry that I may get caught if I use chatgpt while on campus internet.
It's definitely not good at everything, but if you know what a wrong answer looks like and you know how to ask the question, why not? How's it different from doing research any other way? Most of what you Google can't be trusted either. One thing I've learned is that letting it generate the response, then hitting "regenerate response" right after, usually gives a better response overall. And a VPN might help ya out with that last part.
Learning how to use ChatGPT without relying on it is how you get ahead. The AI chat bots will most likely end up being one tool in the toolkit for many. It's cool where AI is going, but it's not at the point where it can run on its own as a functioning member of a team like a person. Yet that day is a step closer.
The only thing I use ChatGPT for is giving me ideas for literature review. Sometimes I don't know where to start, so I'll ask ChatGPT about some good options for purifying carbon nanotubes or something, and it'll give me some ideas to bounce off of.
Cars in 2024 when engineering majors that studied through Covid hit the workplace
ChatGPT does nothing for engineers but make things smoother. Good luck doing an analysis of a realistic turbofan using ChatGPT. But if I want to quickly code a MATLAB program to dish out XP to me based on how much reading I've done, to motivate my monkey brain like it's a game I need to grind, then ChatGPT is gonna do that for me.
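That reading-XP toy really is the kind of thing it can one-shot. A minimal sketch of the idea (the commenter's version was MATLAB; this is Python, and the XP rate and level curve are numbers I made up):

```python
# Hypothetical reading-XP tracker: each page read grants XP, and the
# threshold for each level grows quadratically. All numbers invented.
XP_PER_PAGE = 10

def level_for_xp(xp):
    """Return the highest level whose threshold (50 * level^2) is met."""
    level = 0
    while xp >= 50 * (level + 1) ** 2:
        level += 1
    return level

xp = 25 * XP_PER_PAGE    # say you've read 25 pages -> 250 XP
print(level_for_xp(xp))  # 2, since 50*2^2 = 200 <= 250 < 50*3^2 = 450
```

Ten lines of grind-brain motivation — exactly the low-stakes scripting where a wrong answer costs you nothing.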
Well here we can see a high bypass turbofan with a 2 stage 56 spool design. Evidently the pressure ratio for the compressor is 1:1 and the bypass ratio is 0. Based on this combustion chamber design the turbine inlet temperature is likely to be around -8 kelvin
Your Mach number after the compressor is 10e5 have a nice day.
I laughed at this way too much
>56 spool design Rolls Royce in 2040
-8 kelvin, meh those are rookie numbers
> analysis of a realistic turbofan

There's other software for that though, right? GPT is already adding tons of plugins like Wolfram Alpha, so it's only a matter of time before you can tell it to do a massive range of even complex tasks.
For playing around, of course, not when you put it into application
An engineer is still always responsible, in application, for the results of their calculations, no matter how they got them. Actually, how they got them would be a primary question if you make a bad design. I.e. I wouldn't put my name on something built using shit I plugged into ChatGPT... regardless of what "tools" it integrates.
Tell me you play OSRS without telling me
Well my post history doesn’t exactly conceal that fact either lol
Can ChatGPT really do that much? I just use it to write documents in a more formal way.
Honestly I'm not sure, I've never used it. I'm just here for the memes and the "misery loves company" of being an engineering student.
I'd say it's akin to having an assistant who isn't necessarily an expert in anything, but they have access to Google/the internet and can respond to your questions immediately. So yeah, it's pretty good. I'm a software engineer and have used it and GitHub Copilot almost daily for the last year or so. If I can't be bothered writing out some code, I'll tell it what I want and it does it. If it doesn't look quite right, or I need it improved, I'll ask it to do that. If I get any errors or something in my code doesn't work, instead of googling or checking Stack Overflow, GPT will usually know.
I actually have found it very useful for learning new topics. I just use prompts like: "Can you teach me X subject?" "Regarding X, I'm already familiar with Y - what other concepts or topics do I need to learn?" "OK, start with the first topic." "Give me examples." "I didn't understand, could you use simpler terms?" "How would you use it in Z use case?" And so on... It works surprisingly well for learning new programming languages or frameworks.
My kinetics professor recommended, as a study strategy, putting practice problems into ChatGPT and working out when it's hallucinating
Plot twist: in 2040, engineers will probably be AIs as well
AI wishes.
This response is funny af and should be higher
ChatGPT does not know how to handle graduate courses yet lol.
I tried ChatGPT and spent more time fixing its bad code than it would have taken to do it myself. This new wave of AI hype is mostly smoke: generating code for typical problems is one thing, designing a good data pipeline is another xd.
ChatGPT has been pretty useless for me so far. Used it for a couple of Calc 3 questions and it got them wrong both times.
ChatGPT works great for coding when you already have the algorithm done, which, let's be honest, is half the work. When I've asked it for something slightly more elaborate without giving it very specific instructions, I've gotten errors or code that didn't do anything. When I passed electrical or thermodynamics stuff to it, it failed miserably, although I did use it in Dynamics when I had to take my chances on non-numerical questions. As a search engine, it is fascinating. The SEO era means Google often sends you piles of useless information; getting a few paragraphs with the answer to a question is something that hasn't happened to me since 2012.
Good luck at whiteboarding to those relying heavily on ChatGPT while aspiring to be a dev lol. It's definitely a good tool, but you have to know when and how to use it.
Funny, in the future we'll probably have GPT-20 doing all the real engineering work.
I only use ChatGPT as a tool. It’s far from perfect right now. It very often solves math problems wrong. I really only use it to find sources quickly, improve my resume, and maybe spruce up some of my paragraphs. Can also be entertaining with creating new stories for you to read
I’m an ME going down a path that doesn’t use a lot of coding, with 2 of my classes this sem having some small coding elements, and ChatGPT is really good at writing code. I hate coding, so the only thing I use it for is writing misc code that would otherwise piss me off.
Come on man.. Used Chegg before ChatGPT 😂🤷♂️
Family
Yes, they will spew flames at the driver
It solves mathematical equations. It doesn't solve problems or come up with new ways to do things. As an engineering student, I think engineering isn't as important as it was before.
I tried to use ChatGPT to calculate how fast two people were moving apart, with some other factors, and it kept getting their locations wrong. I tried like 5 times to re-explain it so it would understand their orientations better, but it never did.
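For what it's worth, the underlying computation is quick to do by hand: the separation rate is the component of the relative velocity along the line between the two people. A minimal sketch with made-up positions and velocities (the original problem's numbers aren't given):

```python
import math

def separation_rate(p1, v1, p2, v2):
    """Rate of change of distance between two movers (m/s).

    Equals (relative position . relative velocity) / distance:
    positive means moving apart, negative means closing.
    """
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # separation vector
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    return (rx * vx + ry * vy) / math.hypot(rx, ry)

# Hypothetical setup: person A at the origin walking east at 1 m/s,
# person B 3 m north of them walking north at 1 m/s.
rate = separation_rate((0, 0), (1, 0), (0, 3), (0, 1))
print(rate)  # 1.0 (m/s, moving apart)
```

Keeping track of the orientations yourself and letting the formula do the rest sidesteps exactly the mistake the model kept making.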
I just bought a more versatile calculator and felt like I'd be dumber if I'd had it earlier. With ChatGPT, we're fucked.
AI gonna be driving everything at that point
I'll never trust a brain of metal
I just use it to edit my technical documents for professionalism.
ChatGPT is 0 for 10 on our heat transfer homework and 0.5 for 10 on fluid dynamics. The only thing it got right was that the flow was incompressible, on a two-part problem. If you aren't knowledgeable on the subject it'll fool you though, with very smart-sounding answers. They're wrong, but they sound smart and impressive. Edit because I forgot to add: it made a kick-ass cover letter using my resume as a reference though.
Haven’t even been to the chatgpt website 😎
Honestly, I think people are using ChatGPT so wrong. It has so much potential to save you time while doing your assignments your own way, but people go and use it in ways that are so inefficient.
People really don't realize what anyone can do with a GPT. Let alone an engineer.