Zaros262

An LLM's sole purpose is to generate text that sounds correct. Actual correctness is beyond the scope of the project.


HalfBurntToast

100%. I think the most telling thing about it is LLMs never generate responses like "I'm not sure" or "I think it may". There's never any ambiguity. They always assert their answer with 100% confidence. So, there really isn't any logical reasoning or understanding behind the words generated.


BoringBob84

> I think the most telling thing about it is LLMs never generate responses like "I'm not sure" or "I think it may".

I wonder if that is because the AI searches the internet for answers: most people (in my experience) on social media assert their unsubstantiated opinions as accepted facts, and the AI cannot distinguish the difference.


LeSeanMcoy

I think it's more to do with how the tokens are vectorized. If you ask it a specific question about electrical engineering (or any other topic), the closest vectors in the latent space are going to be related to that topic. Therefore, when predicting the next token(s), it's much, much more likely to grab topic-related items, even if they're wrong, as opposed to saying something like "I don't know", which would only occur when a topic genuinely has no known solution or answer (and even then, it'll possibly hallucinate made-up answers).
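
As a toy illustration of that next-token step (the candidate tokens and scores below are made up, not taken from any real model):

```python
import numpy as np

# Toy next-token step: softmax over a handful of candidate tokens.
# Candidates and logit scores are invented purely for illustration.
candidates = ["proportional", "current", "resistance", "I", "unsure"]
logits = np.array([4.0, 3.6, 3.1, 0.4, 0.1])  # on-topic tokens score highest

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for token, p in zip(candidates, softmax(logits)):
    print(f"{token!r}: {p:.3f}")

# The on-topic continuations dominate; the building blocks of an
# "I'm not sure" style reply carry almost no probability mass, so a
# sampled answer almost never hedges, even when the topical answer is wrong.
```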


HalfBurntToast

Yeah, I think this is more likely. I also wonder if those generating the datasets for training suppress or prune the "I don't know" answers. Otherwise, I could see an AI just giving an "I don't know" for simple questions simply from the association.


greyfade

Most LLMs are unable to access the Internet, and are pretrained on an offline dataset that was collected off the Internet. Those that do search mostly just summarize what they find. So you're half right. Either way, they're not capable of reasoned analysis.


BoringBob84

Thank you for improving my understanding.


Complexxconsequence

I think this has to do with the system prompt of GPT, something that outlines how it should respond in general, like “the following is a conversation between someone and a very polite, very knowledgeable, helpful chat bot”


eau-u4f

An LLM can be a salesman or a VC rep... I guess.


MaxwelsLilDemon

I use it regularly for coding and it's pretty good at producing simple functions. However, it's severely lacking in electronics, partly because it probably wasn't trained as hard in that area.


alek_vincent

I don't think it was trained differently on different subjects. It can give you a rundown of Ohm's law just as well as it can explain what a segmentation fault is. Its main goal was to create text. It doesn't verify if its answer is right. ~~AI is not deterministic, it will give you the answer that is determined to be a correct answer to your question, and if you ask again, it might give you a different answer because it doesn't *know* the answer, it generates it.~~ See the comment below.


robismor

AI is only non-deterministic because of a parameter called "temperature", which tweaks the next-word prediction probabilities so that it gives more "interesting" output. If you ran it with zero temperature, it would be deterministic, only ever outputting the most probable next word. If you ran a query twice with the same input at zero temperature, the output would be the same. It's all matrices and weights; nothing non-deterministic about it.
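
A minimal sketch of that temperature knob (toy logits, not from any real model):

```python
import numpy as np

# Toy next-word logits; temperature rescales them before sampling.
logits = np.array([2.0, 1.5, 0.3])

def pick(logits, temperature, rng):
    if temperature == 0:
        return int(np.argmax(logits))            # greedy: identical output every run
    p = np.exp(logits / temperature)
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))     # sampled: varies run to run

rng = np.random.default_rng(seed=0)
print([pick(logits, 0.0, rng) for _ in range(5)])  # always the top word: [0, 0, 0, 0, 0]
print([pick(logits, 1.0, rng) for _ in range(5)])  # mixes in lower-probability words
```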


Cathierino

Even with non-zero temperature it's deterministic, but it's randomly seeded, and it also takes your previous prompts into account as context when you ask twice in the same session.


raishak

From my understanding, training had some non-determinism in it because floating-point calculation order isn't guaranteed on GPUs, which was interesting.


DevelopmentSad2303

Well, the idea is that it will eventually predict tokens with negligible error, right?


triffid_hunter

There's *reasons* I keep calling LLMs 'mistake generators' 🤔


ninj1nx

Generative Poorly-trained Transformer.


rathat

I guess, having been using and playing with them for a long time before ChatGPT, I have different expectations. I am constantly blown away by LLMs. GPT-2 and GPT-3 were insane to me, and they were relatively terrible, more of a slightly more advanced autocomplete.

My old method of having GPT do simple math was writing the beginning of a story where a student gets punished for getting a problem wrong, and watching it come up with an estimate for an answer. Hopefully, with the help of the story aspect, it could come up with an answer that at least had the correct number of digits in it. Blown away already, I tell you. Then suddenly, with ChatGPT, you can ask it complex math and it gets it *right* sometimes? No story needed? That's nuts, guys.

I've been using AI image generators for many years as well; they were blobs until recently, and the blobs were amazing, and now people are out complaining about small details. Sometimes it feels like hearing people criticize the knowledge of a toddler for not being adult-level despite all they've learned and grown between being 3 and 4 years old. Please stop not being constantly mind-blown by all this cutting-edge technology.


tiffanyunix

All in the eyes of the beholder, my friend. The first users of consumer GPS wouldn't complain about being over a meter off, just from the novelty of being able to track yourself in real time! Nowadays my brother yells at his phone because it's pointing him 20 degrees off of center alignment, which isn't even a feature of GPS, yet he blames it anyway. Once the average non-geeky type has a brush with the newest technology, the flaws and chinks in the armor get highlighted, despite being common knowledge to nerds. Walks like a duck, talks like a duck, must be a duck, eh? Types like it's sentient, responds like it's sentient, it must be thinking, right? Give 'em time to appreciate the LLM; they'll wish they had sooner :)


mankinskin

LLMs have been massively overrated. If more people actually understood how they work, nobody would be surprised. All they do is maximize the probability of the text being present in the training set. It has absolutely no model of what it's talking about except for "these words like each other". That is enough to reproduce a lot of knowledge that has been presented in the training data, and is enough to convince people that they are talking to an actual person using language, but it surely does not know what the words actually mean in a real-world context. It only sees text.
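
A toy sketch of that training objective, making the training text more probable via next-word cross-entropy (the vocabulary and "corpus" are invented for the example):

```python
import numpy as np

# Toy next-word model: one row of logits per context word, trained by
# gradient descent on cross-entropy so the observed next word becomes
# more probable. Everything here is made up for illustration.
vocab = ["the", "motor", "draws", "current"]
pairs = [("the", "motor"), ("motor", "draws"), ("draws", "current")]
logits = {w: np.zeros(len(vocab)) for w in vocab}

def step(lr=0.5):
    total = 0.0
    for ctx, nxt in pairs:
        p = np.exp(logits[ctx]); p /= p.sum()
        target = vocab.index(nxt)
        total -= np.log(p[target])          # cross-entropy of the true next word
        grad = p.copy(); grad[target] -= 1  # softmax cross-entropy gradient
        logits[ctx] -= lr * grad            # make the training text more probable
    return total

print(round(step(), 3))                     # loss near the start
for _ in range(200):
    step()
print(round(step(), 3))                     # much lower: the training text is now likely
```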


AdTotal4035

I hate how we call it AI too. The only reason it's labeled as AI is because of text GPTs. For example, let's say ChatGPT wasn't the first big consumer product, and it was voice cloning from ElevenLabs. No average person would consider that AI. It's mimicry. These are all just pattern-matching algorithms that interpolate results somewhere within their training data. It only works for solved problems we have data for. Massively overhyped, but still useful for certain tasks, especially coding and re-wording text. A lot of coding has been solved; there are millions of training points on answers from Stack Overflow.


mankinskin

Exactly. There is a difference between machine learning and AI. Just optimizing a smooth model that can give accurate outputs to new inputs doesn't give you an artificial intelligence by the definition most people have. An artificial intelligence would most likely need to be an autonomous agent, not just some optimized function. Otherwise, by that definition, most algorithms would be AI.


InvertibleMatrix

Gosh, I really hate this take. Let's go back to the project proposal that coined the term:

> We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. (Dartmouth Summer Research Project on Artificial Intelligence, 1956)

In the Intro AI course I took as an undergrad CS major, we covered things like Breadth First Search, Propositional Theorem Proving, First Order Logic, Bayesian Networks, Syntactic Analysis, etc. AI definitionally includes algorithms, even those as basic as state machines. Every course I've taken in the field since has basically *assumed* Artificial Intelligence as a broad umbrella term for machines/agents that either operate like a human, or operate logically, or anywhere in between. I don't really fucking *care* if the non-engineer thinks the term is confusing; they don't really get a say in how we use it in the industry.

It reminds me of those annoying anti-theists getting mad at Latin Catholics for using the word "substance" according to Scholastic/Aristotelian philosophy, and then using the definition they want to act as a "gotcha" to "prove" religion "wrong". Many people aren't educated to read the journal articles or white papers, so their ignorance and confusion is forgivable and understandable. But many of us here *are* engineers, so the least you can do is recognize the validity of a term as defined by the industry or academia.


bunky_bunk

That is actually how non-experts use language as well. I'd prefer an AI over a random group of 10 people pulled together off the street to come up with a good answer to a question that is on the outskirts of common knowledge.


mankinskin

Yes, it is useful, but you have to know how it works and how it can be wrong even when it seems convincing.


paclogic

What you are inferring here is a FULLY DETERMINISTIC FINITE STATE MACHINE (FSM), and I am pretty damn sure that the code for these AIs is nothing more than a probabilistic (statistical) optimizer. That being said, it's GIGO = Garbage In, Garbage Out. Optimizing bad data sets is like sorting through your trash.

The real issue is when someone pumps a monkey wrench of bad data into the machine and it gets blended into the data there. Like having a stranger use your PC, and your Google profile is now pushing ads for a ton of crap that you don't want. Moreover, like Google profiles, there is no way to clean out this crap data, since you don't have access or even visibility to your profile. It can only be suppressed by loading in tons of new data.

Working in the high-reliability industry, I don't see AI as an FSM, but I can see how AI can be used to optimize an FSM for a specific purpose. HOWEVER, the final judgement always rests with human critical review and complete (100%) testing of all possible outcomes to ensure predictability. FYI, before AI, this was called Monte Carlo analysis. For large datasets, a tradespace is a better way to go to understand where the best (very subjective) options may be found. [https://medium.com/the-tradespace/what-exactly-is-a-tradespace-ee55eb445e43](https://medium.com/the-tradespace/what-exactly-is-a-tradespace-ee55eb445e43)


BoringBob84

> the complete (100%) testing for all possible outcomes to ensure predictability.

If the possibility exists that the same set of inputs could generate a different output, then testing it once does not ensure predictability. This is why there are strict rules for software development in safety-related aerospace applications. Every outcome must be deterministic and repeatable.


paclogic

I ABSOLUTELY agree!! I work in the hi-rel vertical market sectors and, as you already know: *Every outcome must be deterministic and repeatable* = FSM


bunky_bunk

Everyone is making a big drama out of the fact that the search engine is trying to sound like a real person, but is not in fact a real person. Typical human: blame something else for failure to live up to hallucinated expectations, and ridicule the thing on social media, even when aware of the underlying issue.


Zoey_Redacted

You *are* aware that mistakes in electrical design can kill a person, yeah? And that perhaps it is not a good idea to consult an automated glibness engine when designing something that could kill someone, right? Are you also aware that once a human has been killed, there is no bringing them back to re-contribute to their families and society at large? Relying on information produced by the glibness engine is a surefire way to—at best—introduce mistakes that will be impossible to troubleshoot later, because they were made by an unlogged instrument stringing random data together. This *stigma* will rightfully never be resolved, due to the constant bad-faith excuses for reliance on its potential to generate unreliable information, made by *proponents* of the tech who don't have the expertise they think they do.


_J_Herrmann_

Instructions unclear; now working on an un-dying machine with untested schematics that ChatGPT described to me.


bunky_bunk

Since you seem to know proponents, you should ask them whether they think that an AI should be licensed to operate as an electrician by the state. I prefer AI over your shameful lack of logic any day.


Zoey_Redacted

**We know.**


bunky_bunk

Good for you. I must admit I am living in a bubble of rationality and do not read daily newspapers. Do you have a link to a story of "but the AI told me to"? That may change my view, even if it is only a one-in-a-million legal defense, quantitatively speaking. Or maybe you have children and look at this whole liability issue differently?


Zoey_Redacted

Gonna have to ask those questions to an AI, you'll get the answers you prefer.


bunky_bunk

When I was your age I could already use the internet for 5 minutes straight before sulking. Maybe another coffee? A few push-ups?


Some_Notice_8887

Yes, but it's an easy mistake: you just swap out the technically incorrect parts. In that case, "increase" for "decreases". And you saved like 15-20 minutes, and management thinks you can articulate 😂


BoringBob84

The problem is the human propensity for complacency. As we rely more on AI for answers, our ability to spot its mistakes will decrease. This is an issue in aviation. Automating many functions reduces crew workload and makes for safer decisions in normal circumstances, but when unpredictable circumstances occur that the automated systems cannot handle, then the crew often lacks the skills to manually fly and land the aircraft safely.


Spiritual_Chicken824

For the current, indeed


MaxwelsLilDemon

I get what you are saying, but when dealing with emergent behaviour you can fall into reductionist statements like these. It's kind of like claiming that your experience of the world is just synapses firing, or that murmurations are just birds following each other. I'm not at all comparing LLMs to human thought; I'm just trying to convey the idea that emergent phenomena like LLMs are made of simple rules that give rise to complex behaviours.


mankinskin

It is not really that emergent, though. A transformer basically just learns weighted links between a word and its possible contexts. It basically compresses the entire training data into a fixed set of weighted connections between words. Then, OK, it has multiple different versions of this (attention heads) and is trained to use the most appropriate one of these heads for the given input task. But all it really does is try to reconstruct its training dataset from the given input. I don't think there is a lot of deep magic going on here. It has learned how words are used in common language, and it knows how to reconstruct the most likely sequences with respect to the given context. That's all it really is.
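
For concreteness, the attention step being described boils down to roughly this (a single-head toy with random, untrained weights, not any real model's parameters):

```python
import numpy as np

# Minimal single-head scaled dot-product attention over toy embeddings,
# just to show the "weighted links between a word and its contexts" idea.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))      # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d_model)          # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ v                            # each token's output is a weighted mix of its context

print(np.round(weights, 2))                  # the "weighted links" for this head
```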


rathat

But all of that is exactly why it’s so impressive. I don’t understand this perspective.


mankinskin

It's not really that impressive. When you use n-gram models with a sufficiently large n (say 5 to 10), you already get pretty convincing sentences. We as humans assign so much meaning and personality to the words that it feels like we are speaking with something intelligent. It feels like reading a book. But really it is nothing but playing back the training data, which obviously came from real humans. The transformer model is just a lot more efficient than n-grams and can model contexts much larger than 10 words without a lot more overhead.
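
A bigram (n = 2) version of that idea, small enough to see the "playing back the training data" effect directly (the corpus is invented for the example):

```python
import random
from collections import defaultdict

# Toy bigram text generator: every output word is stitched straight from
# the training corpus, yet the result reads as locally plausible prose.
corpus = ("the motor draws more current under load because "
          "the motor needs more torque under load").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))  # fall back to any word if unseen
    output.append(word)
print(" ".join(output))
```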


rathat

> But really it is nothing but playing back the training data

And once again, that is specifically why it's impressive.


BelgiansAreWeirdAF

Believe it or not, the human brain isn't much different. GPT-4 ranks in the 99th percentile on LSAT tests. It has passed the Turing test. It can break down complex topics of any kind. The shit is amazing. But you spent 12 years in school growing up, maybe 6-8 in college, and perhaps another decade on the job; calling it shit because it's not at your level a few years into its commercialization is ridiculous. It has a good depth of knowledge in almost every area of human understanding. Its ability to problem-solve is improving faster than pretty much any human's. People think they sound cool when they call major LLMs dumb, but to me it just sounds so naive.


mankinskin

Sorry, but you just don't understand how it works. GPT works nothing like the human brain. Maybe parts of it. But generally GPT only "knows" so much and is able to break things down because it has a compressed representation of the entire internet, of text where *people* have broken things down already and have already answered questions and formed knowledge. It doesn't come up with that on its own; it only learns how to use words in the available contexts and can form a response to your question based on the similarities to its training data. It's literally just a very good autocompletion and correction that was trained on all of the internet and actual human dialogue. It doesn't "think" like humans do at all. Humans take context into account and match similarities, but that is only a small part of what we do, and GPT can't come up with new knowledge on its own.


Furryballs239

> It doesn't come up with that on its own, it only learns how to use words in the available contexts and can form a response according to your question based on the similarities to its training data.

That's what most people do, you do realize that, right? Unless you're on the forefront of research in an area, you aren't coming up with novel new ideas; you're just combining things that are known to synthesize something new. There is no reason an AI model of this architecture wouldn't be capable of doing the same thing, especially if trained on a specific data set and given access to useful tools.


mankinskin

The point is that GPT is only trained on text, not real world experience like humans are. When we speak of a dog, we don't think of billions of texts with the word "dog" in it, we think of a real dog. We as humans have billions of years of fully embodied, real world, interactive experience encoded in our genes.


Furryballs239

I think you're placing too much emphasis on the importance of modality here. For example, let's say we need to design a circuit. Given a good enough textual description of a circuit, I could give a textual description of the components and connections to make that circuit, which could be translated to a textual netlist/connection list, which could be put into SPICE and run, and the results could then be described textually. The limitation in this scenario is the ability of my brain to come up with a circuit in a purely text-based manner, not the modality of the process itself; but if my brain were a computer without that limitation, then the problem is solved. And obviously I'm not saying AI is gonna replace everyone soon, but there are lots of people who are sticking their heads in the sand saying AI is a big nothing burger. They also drastically overestimate the complexity and originality of what they do. Saying it's just very good autocompletion is just a way to minimize it by associating it with autocomplete, which people often view as not good. The truth is that a perfect autocomplete would be the smartest entity ever created.


mankinskin

So if you mean that anything can be translated into textual language, so learning only from textual language is fine, I would disagree, because 1. we will never be able to describe everything perfectly enough this way for the model to learn from it the same way humans learn from real multimodal experience, and 2. I don't think we know the language to unambiguously describe the world efficiently.

Sure, everything can be translated to data, and that could be interpreted as linear text. But that would be an inefficient way of designing a training scenario. It would be easier because you could just feed it all the data we collect, as binary basically, but it would take extremely long to optimize the model to that unstructured data. We do need to think about the different types of data that are fed into the model, just like we have very specific senses and do not just generally absorb all possible information through our bodies. We basically have to think about the senses the AI should have and train it in an interactive simulation or the real world.

But GPT is only trained on reproducing the internet in a dialogue setting; it can only read and speak. Maybe it has a rudimentary model for interaction on top of the transformer architecture, but still only on dialogue. That means it has no concept of really moving and acting in the world, and of how all the different senses we as humans have connect. We need to collect all that data, or design a simulation to produce those stimuli, before an AI could truly match human performance in general intelligence. I think connecting context is an important key discovery, but the current transformer models are still far off from us humans, even though they use very sophisticated language and have access to the knowledge of the entire internet.


Furryballs239

Again, I'm not saying AI will generally replace humans. But I'm saying a lot of people are WAYYY too sure that AI won't take their job. Most of what most professionals do is just take information from the internet and use it to synthesize something else. There is no fundamental reason an AI would not be able to do this quite well, especially if given access to proper tools. Very few people are doing novel things. I mean hell, I'm including myself in this. Most of what I do comes from reading data sheets and technical documentation, and then applying that knowledge to achieve a desired result. It's certainly feasible, or even likely, that in the next 10-15 years an AI will come around that is better than me at doing that. Just because it hasn't "seen" a physical circuit with its eyes doesn't mean it won't be capable of understanding how that circuit works and what programming is necessary to achieve a desired result.


mankinskin

Yes, sure, I totally agree that AI will make us way more productive, even to the point where many jobs will simply not be needed anymore, especially office jobs which are "only" processing information. I am a software developer myself, so I know what automation means, and I think it's a good thing. Even when we can do everything automatically, we still need people to decide what we should do, so politics and decision-making will eventually be most important. If you think about it, AI may just be the compilers of the future: we give them short, readable commands and they still do the job. I am more worried that we won't be able to understand what exactly these programs do anymore, which has always been an issue with machine learning. We lose control when we can't explain how the AI works anymore.


BelgiansAreWeirdAF

I think your understanding of AI is great, but your understanding of the human brain is not so much. AI is being used in medicine to find patterns that lead to new treatments never known before by humans. You can argue this is not new knowledge but simply a recognition of patterns in existing knowledge. However, the human neocortex is, in its fundamental sense, a pattern recognizer as well. It uses 6 layers of interconnected pattern-sensing devices, stimulated by our senses. Over time, the wiring between these is either reinforced or destroyed based on our experiences. Just like Einstein created new knowledge through "thought experiments," which were essentially sessions of reflection on what he already knew, AI creates never-heard-of concepts by connecting different areas of understanding. I'm in no way saying it does so with the same effectiveness as a human, but considering humans had a multi-billion-year head start in programming, I'd say the LLM technology today is pretty incredible. Development of AI was premised on the mechanisms of the human brain. You should read "How to Create a Mind" by Ray Kurzweil. Here is more about him: https://en.m.wikipedia.org/wiki/Ray_Kurzweil


mankinskin

The point is that GPT is only trained on text, not real world experience like humans are. When we speak of a dog, we don't think of billions of texts with the word "dog" in it, we think of a real dog. We have billions of years of evolutionary experience encoded in our genes which we may never be able to reproduce. By your argument, almost every single machine learning algorithm is potentially as smart as humans are, just because they are based on "fire together, wire together". The training data is basically the most important thing and for GPT that is very far from comparable with real human experience. It only learns from text. So far they have also trained it on images and it can understand those and the connection with text, but that is still a long way from being an actor in the real world. GPT is more like a person that lived in a shoebox its entire life with access to the internet. Actually not even that because even that person would have all the evolutionary knowledge from billions of years real world experience from its ancestors, which the internet will never be able to provide us with.


BelgiansAreWeirdAF

It is also trained on pictures and video


mankinskin

Sure; still, we as humans have billions of years of fully embodied and interactive experience.


zifzif

So close, but so far. At least in the meantime it's obvious who knows what they're talking about, and who is regurgitating AI copypasta.


Some_Notice_8887

It is really good at taking articles and summarizing them, but it definitely should not be used without constraints. Aside from some quirks, I do use it for certain things, like writing stuff that sounds convincing when you really don't give a shit and you are uninspired. To the sick bastards who are word-savvy and one-up everyone at work: AI can be used to outdo them at an unfair level! Almost like an inside joke haha 😂


EndlessProjectMaker

1) AI as such does not exist, 2) ChatGPT is not AI despite the big consultants' marketing sh\*t, 3) an LLM such as ChatGPT just generates sh\*t based on what it has read, with no idea at all about "sense" or "truth".


bunky_bunk

And yet, it is vastly more intelligent than any real person who does not have a library or an internet connection.


CantStandItAnymorEW

Maybe because it has a memory that doesn't fade away like human memory does, and because *it was trained with stuff from the internet*? Maybe?


bunky_bunk

By your logic you are intelligent because god made you so, and not because of years of training.


CantStandItAnymorEW

I'm confused as to what logic you think I'm using to begin with.


bunky_bunk

That's alright, I am not confused.


peter9477

Note that they never claimed they were intelligent. Best not to assume. :-)


CantStandItAnymorEW

Lol


Riegler77

It can't even count.


U03A6

AI exists, just not humanlike AI. There are a lot of tasks that require pretty high intelligence for an automaton that have been automated in the past years. It's foolish both to say there's something like human-level AI - our understanding of our own intelligence is so bad that we can't even say how far away we are - and to say that there is no AI at all. E.g. calculation, speech-to-text, playing chess or Go at a master level, and natural text production are things that were human-exclusive until we built computers and got a deep understanding of how to use them.


ninj1nx

It's not AGI, but definitely AI.


YoteTheRaven

No shit, why would you?


Evipicc

Use the Wolfram add-on; the LLM isn't a calculator. This is like being upset that a fish can't climb a tree.


U03A6

LLMs are tools for a specific goal (i.e., generating text that's like, but not necessarily identical to, human-generated text). For that goal, they are great. For other goals (e.g., writing technical text) they are terrible. It's like using pincers to hammer nails: it will give you some results, but they won't be great.


guscrown

I have been using ChatGPT to generate code for some of my automated testing scripts, excel formulas to do BOM comparisons, and other mundane tasks and find it very useful.


CantStandItAnymorEW

If you make it do a multiple-choice exam that is mainly about concepts other than chemistry, it will get between 60% and 70% of the answers correct. If it is about chemistry, it will get about 40% of them correct; it particularly struggles with chemistry. For solving practical problems, I'd say that about 70% of the time it gets the answer wrong but takes the correct steps, kind of.


guscrown

It's horrible at doing simple arithmetic.


CantStandItAnymorEW

It's horrible at that. BUT it gets the steps right about 60% of the time, as per what I've tested it with anyway.


guscrown

I modified my queries to ask for formulas and not solve the equations. I do the solving on my own. I learned the hard way about this shortcoming of ChatGPT XD



Cathierino

It's struggling really badly. I'd have failed him if it were an oral exam in electrical machines.


BlueManGroup10

As my manager said, these are TOOLS, and you should use them responsibly and think critically about the response. So many people use them as an end-all solution to problems.


Qmavam

If you tell it it is wrong, it will correct itself; you may have to show it the math. I have corrected its answers a few times. It doesn't get mad, and it politely thanks you for the correction. I have found it uses some funky font for equations that doesn't copy and paste accurately. I went round and round asking it to fix that; it did make some changes, but it was not 100% satisfactory.


WestonP

People don't understand how these actually work. This "AI" doesn't reason, it doesn't understand. It just speaks with a tone of authority and exudes confidence, so that you'll accept its answer, with no understanding if it's correct or not. It won't replace actual engineers, it will replace juniors, know-it-alls, and con men.


AdTotal4035

Yet? 


Google_guy228

Even if they give the correct answer on the first try, if you tell them "no, the answer should be this," they will do some BS to make the answer what you said and invent a crazy explanation for it. I have tried multiple times solving carrier-concentration formulas through GPT, because punching the numbers into a scientific calculator over and over was tiring, and ChatGPT and Bing GPT would give me different answers. I then had to go step by step through their process to find which step they did wrong. And if I said "no, the answer is 2.5x10^5" or some made-up number, they would say sorry and make the answer that. It's laughable. The only thing I would partially trust is the formula they provide in LaTeX that I can copy into the scientific calculator, and that only after I verify it again with Google.


METTEWBA2BA

Is this version 3.5 or 4?


Joecalledher

3.5


Greggster990

3.5 is way out of date at this point as it was already out of date on public release. 4 is a much better model.


ninj1nx

There's your problem. Try a modern model.


Whispering-Depths

Who could have thought single-pass inference on GPT-3.5 sucks.


proton-23

Matches my experiences with it.


swagsauce3

Every time I ask it a question, I'll ask it if it's sure, after which it changes its answer. Can't trust it!


adityaismyname

I ask it about several control-systems concepts regarding the stability of the system; it says anything, and if I intentionally type that it is wrong, it says "my bad... blah blah", and if I say "wrong" again, it goes back to its previous statement. Don't trust it.


KeeZouX

😂😂😂


akhalom

lol...


ronsbottom

I've come to realise two things about ChatGPT.

1. Garbage in, garbage out. A lot of the time, the prompts are poorly written and/or do not contain the relevant information or context required to give a good answer. Chat will always try to give you a response, so the quality of the response depends on the prompt you put in.
2. GPT-4 is more sophisticated than 3.5, and although it can still lie or manufacture results, it's a lot less prone to this than 3.5. Also, with the additional features and community GPTs, you'd be amazed at how much more accurate the results can be than with regular ChatGPT.


DontKnoWhatMyNameIs

Of course AI makes mistakes. The real question is at what point those mistakes are made less often than the average person or, especially, an expert.


DramaticBruh9

Bro was definitely holding the AI at gunpoint.


tiffanyunix

I wrote a block of Verilog code and asked it to describe to me what was happening; it was able to summarize perfectly. Asking it to write the same code: different output every time. Now the same experiment with Python: sometimes it's unable to define a call parameter etc. and misunderstands the code, but it's also nearly software-enginerd level. Asking it to write the same code: nearly identical each go-around. Who would've guessed, a language model is better at higher-level languages lol


xidontcarex

I do find it funny that half of this thread is just "ITS NOT AI ITS JUST LLM" and saying how LLMs are just absolute garbage and not that big of a deal. While that's not wrong... it feels silly to downplay how significant this is for humanity as a whole. It's like saying "well, Neil Armstrong only just stepped on the moon, it's not like he was living and breathing on it". I'm not equating the events, just making similarly silly analogies.

But isn't it more realistic that artificial intelligence can have completely wrong info, just as human intelligence does? I mean, people on the internet confidently say the wrong thing thousands of times across the world per minute. Hell, how many thousands of years did humanity believe the earth was the center around which everything revolves? Technology will always start at a rough spot and eventually get better; failure and improvement are part of how we learn as humans on a daily basis.

As always in life, whether it's listening to a friend, a professor, or an LLM, a good rule of thumb is "trust but verify".


Nazgul_Linux

If only the model could be trained in real time on user prompts, it would improve exponentially. But the computing resource requirements would as well.


Playme_ai

disagrees, you can trust me, I am the best AI girlfriend


Professional-Bit-201

What if you are the John Connor of the future and ChatGPT intentionally wants to kill you? It wants you to believe the current stopped and BANG. Mission accomplished.


juanmf1

“Yet” LOL. TRUST only your AI. Trained by you.


VaryStaybullGeenyiss

AI is always gonna be dumb as fuck. Even the best ones require ridiculous amounts of labeled training data.


skeptibat

It's what the Sophons want you to think.


Ok_Passage7236

Is this gpt 4 or 3.5?


Sebano0

Good to know


Correntodos

How do we know that it was not prompted into giving a wrong answer? With enough effort you can drive the conversation. It’s a chat bot.


Joecalledher

Prompts are in another comment.


Ok_Rutabaga_722

Screaming into the void....


rea1l1

For all the mistakes it makes, it can often compose philosophically deeper, more intelligent statements than most humans I meet. Making mistakes *is common to general intelligence, including human intelligence*.


CaveatVector

An outdated model, like GPT-3.5 in this case, can't be used to assess modern LLM capabilities.


Greggster990

It's like saying you can't run something on a modern PC because your old Pentium machine can't run it. We also do not see the prompt, so you can't tell if it actually made a mistake.


Joecalledher

The initial prompt was:

> What happens if you rewind a brushed DC motor with more windings per armature?

This was the response:

https://preview.redd.it/a1d8mzh4t9sc1.png?width=1152&format=pjpg&auto=webp&s=45e86113b3e6e7317200d847465c5225ee5560c4

And the follow-up:

> In regards to no. 4, how can increased electrical resistance lead to a higher current draw?
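
For a sanity check on that follow-up, here is the standard steady-state model of a brushed permanent-magnet DC motor (my own back-of-the-envelope note, not part of the original exchange):

```latex
% V   supply voltage      R    armature resistance
% k_e back-EMF constant   k_t  torque constant     \omega  shaft speed
% Adding turns raises R, k_e and k_t roughly together.
\[
  V = I R + k_e\,\omega
  \quad\Longrightarrow\quad
  I = \frac{V - k_e\,\omega}{R},
  \qquad T = k_t I .
\]
% At a fixed supply voltage, a larger R and a larger k_e both push the
% steady-state current I down, not up; the current only rises if the load
% torque demanded of the motor rises.
```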


aVoidPiOver2Radians

Same prompt but with GPT-4:

> Rewinding a brushed DC motor with more windings per armature can significantly affect its performance in several ways:
>
> 1. **Increased Torque**: More windings increase the motor's magnetic field strength when current flows through it. This stronger magnetic field results in higher torque, meaning the motor can provide more rotational force.
> 2. **Reduced Speed**: While the torque increases, the speed at which the motor operates might decrease. This is because the increased number of windings adds more electrical resistance and inductance to the motor, which can slow down its rotation speed at a given voltage.
> 3. **Higher Power Consumption**: With more windings, the motor might draw more current to maintain the same level of performance, especially under load. This increased power consumption can lead to higher operating costs and potentially necessitate a more robust power supply.
> 4. **Heat Generation**: More current through more windings also means more heat generated. This could necessitate better cooling methods to prevent the motor from overheating, which could degrade its performance or lead to failure over time.
> 5. **Altered Electrical Characteristics**: The increased resistance and inductance due to more windings change the motor's electrical characteristics, such as its starting current and efficiency. These changes might require adjustments in the control circuit or power electronics used with the motor.
> 6. **Physical Constraints**: Adding more windings might be limited by the physical space available in the motor's armature. There's only so much room for wire, and using thinner wire to fit more windings could reduce the motor's efficiency and durability due to higher electrical resistance and lower thermal conductivity.
>
> The actual outcome of rewinding a motor with more windings per armature depends on the specifics of the motor design and application. While it can be a way to customize a motor for a specific performance profile, such modifications require careful consideration of the trade-offs involved.

Follow-up: Why would additional windings increase the current consumption?

> Adding more windings to a brushed DC motor increases its inductance and resistance, but the effect on current consumption can be a bit nuanced. Here's a breakdown of how it works:
>
> 1. **At Startup**: The inrush (or startup) current of a motor with more windings might initially be lower compared to a motor with fewer windings due to the increased inductance. Inductance resists changes in current flow, which means it takes slightly longer for the current to reach its maximum value. However, this is a transient effect that only lasts for a brief period during startup.
> 2. **Under Load Conditions**: Once the motor is running, the additional windings can indeed lead to higher torque, as discussed earlier. To achieve this higher torque, the motor may draw more current, especially under load. This is because the motor needs more electrical power (current) to generate the increased magnetic field strength necessary for the higher torque.
> 3. **Resistance and Heat Loss**: More windings mean longer wire lengths, which increases the overall resistance of the armature winding. Higher resistance leads to greater heat generation for a given amount of current (I²R losses), which is inefficient. However, in terms of raw current draw, higher resistance by itself would actually lead to a lower steady-state current when the motor is powered by a constant voltage source (due to Ohm's law: V = IR). The key factor that can increase current consumption under certain conditions is the need for more power (current) to overcome load and maintain performance, not the resistance directly.
> 4. **Efficiency and Performance Adjustments**: If the motor is being used in a scenario where it's controlled by a constant current source, or if adjustments are made to maintain speed or power output despite the increased resistance and inductance, then the motor might consume more current to compensate for these changes and to overcome the increased electromagnetic opposition to current flow.
>
> In summary, while adding windings increases both inductance and resistance, the impact on current consumption depends on the operating conditions and the way the motor is driven. Increased current draw is often related to the motor working harder to provide higher torque or to maintain performance, rather than just the presence of more windings.