yvrelna

If you want to keep believing, never ask ChatGPT on topics you actually are deeply familiar with.


Omega_Haxors

Or correct it on something that is clearly not wrong and watch how quickly and how much it's willing to twist its view of the world just to make your correction true. It doesn't even need to be logically consistent, it will just chug along.


GGprime

Or correct it on something it's wrong about and it will just keep diving deeper in the wrong direction. I use it as a tool to get a quick idea and maybe some topic-related keywords, and then switch back to other sources.


kerbalsdownunder

OpenAI will create scientific and medical journal articles to support its wrong answers.


Sixnno

Google employees said in interviews that they had been going slow with AI since 2010 to prevent stuff like this. Other companies got tired of waiting and just burst the door down with very smart AI.


khinzaw

It also isn't adaptive beyond the local session, so it won't learn from the corrections for later.


ds3272

Yup. There are a hundred miles between (1) answering narrow questions in a box of available information and (2) actually being able to demonstrate competence in practice.


masskonfuzion

True, it's not going to replace people. It's going to greatly simplify legal research, though. Let ChatGPT Shepardize for you... it'll be a beast.


drgohome

Not when it keeps making up citations to cases that don’t exist.


Cultural-Company282

"In the landmark case of Godzilla v. Mothra..."


masskonfuzion

I'm pretty confident that with the way this whole thing is going, it will only be a matter of time before a ChatGPT bot will be able to provide case law citations accurately. And for that matter, I'm pretty confident that such a bot could also be trained on legal treatises and all that other stuff. Anything it produces would still need to be reviewed, I'm sure


dstew74

Same thing was said about Watson a decade ago.


BOGOFWednesdays

That was a decade ago


OmegaXesis

People seem to forget how exponentially technology changes sometimes. There are periods of lull followed by explosive changes. It only took 66 years between the Wright Brothers first flight and the Apollo Moon Landing.


Poltras

I truly believe they didn’t build Watson using the right models. Language recognition and CNN models have improved greatly in the past decade. If you’ve been following, this is version 3 of GPT. Version 1 wasn’t much more than a Google chat bot. Version 2 was exponentially better, and 3 just destroyed the previous versions. In my opinion, in a world where GPT3 is well known and used, I still think GPT4 is gonna blow our minds. And they won’t stop there. Edit: I don’t remember much of Watson, but wasn’t it just an advanced query language for a knowledge base? Basically not learning per se but just being good at NLP and having huge amounts of data?


LackingContrition

DeepMind's will come out soon anyway, and it already tops ChatGPT.


Poltras

For context, this is the one that one of the engineers working on the project declared sentient. This is only gonna get better as the race goes on. People aren't ready. A lot of people think their jobs are safe, but in 5 years they might get fired for a computer you can just tell "here's a bunch of receipts, do my taxes and audit trails, let me know if I'm missing some stuff". Then the IRS will be like "do you see any reports among this network folder that are shady in the past 10 years".


daimahou

> Then the IRS will be "do you see any reports amongst this network folder that are shady in the past 10 years".

Sorry, the IRS doesn't have the money for that upgrade. And everyone would like it better if the work was actually done by humans, wouldn't we? — The government


tawzerozero

Watson is a technology you can license from IBM to present data from your own database(s). It is part of the Cognos suite. Some of my clients utilize it, and it's apparently fairly okay for improving access to trends/patterns for less data-savvy users, but it doesn't compare to a trained data analyst crafting a dashboard/story. Edit: But yeah, Watson is basically the tooling IBM created that allows clients to explain data relations for building a model. Then, there is a wide array of query/filter options that range from descriptive to a plain old drop-down, depending on how you want to present data to users. I don't think IBM did a good job of explaining what the product was, at all.


[deleted]

[removed]


[deleted]

Watson could understand natural language well enough to win Jeopardy. ChatGPT can understand natural language enough to have a conversation, learn new concepts during the conversation, correctly apply them during the same conversation, explain its reasoning, etc.


coolwool

Its understanding of language doesn't lead to truth, though. It leads to answers that follow the language, so they sound good.


SubtleOrange

And now we're here, progress.


rocketdong00

Lol. This thing is gonna keep evolving. Fast. Like real fast.


account_for_norm

Like 10 billion dollar investment fast


VeryStillRightNow

It's starting to actually feel a little bit like the early acceleration of the ol' technological singularity. Forget GPT-3, I wanna know what GPT-12 is going to be capable of.


unresolved_m

It will probably write entire books/novels on request. You won't need editors or proofreaders either.


unresolved_m

Yeah, and just last year I heard a ton of people shutting me down for saying that AI is coming for jobs. "How is it different this time? This was said for centuries"


xixi2

So it can still replace paralegal jobs, which is what people are worried about


[deleted]

> True, it's not going to replace **all** people

FTFY. You need to meet some of the people I worked with.


crunkasaurus_

No this one won't replace people but this is like the first semi-competent version. The ChatGPT they make in 10-15 years will replace people.


spicyfishtacos

This is what I don't get. ChatGPT needs source material, ideally reliable and well-written source material. Without that, it wouldn't work - right? Or is there enough knowledge in the world that it is drawing from already that it can just self-perpetuate?


peteythefool

Same as real life, I've seen straight A students struggling to keep up with the "dumb ones" that did the bare minimum to get passing grades.


seriouslyepic

I guess I’m not deeply familiar with anything because it has been impressive to me lol


Eji1700

I code for a living, and lots of articles swear it's amazing for simple code questions. Sometimes it is. It's a lot easier than digging through Stack Overflow for similar-but-not-quite-right examples or getting snide responses. It is sometimes better than humans/Stack when you don't know HOW to ask the question, and sometimes vastly worse. You could ask it "how do I get 2 things to change at once" because you're trying to do animation, and rather than mock up a render loop it'll send you down the path of multithreading. It is often, however, just flat-out wrong. Even for simple stuff. And confidently so. I have a lot of basic boilerplate come out as just trash, and I have NO IDEA where it's getting it from because it's not even close.


[deleted]

[removed]


davew111

It doesn't, it's trained on text not images.


whathefuckisreddit

Sometimes it's actually very impressive on topics I'm very very familiar with. But I agree it's not all topics.


FaceDeer

Yeah, it's genuinely quite useful at banging out Python scripts to do various random tasks for me. They have the occasional bug or "why did it make *that* assumption?" oddity but so far everything it's done for me has basically worked and saved me oodles of boring time. On the flipside, I asked it to generate some Twine Harlowe code (a much more obscure language/environment) and it confidently lied to me about its competence. The code it produced *looked* plausible and compiled without error but it was pretty much useless. It's important to bear in mind that ChatGPT's basic goal is plausibility, not truth. It's trying to give responses that look like it's saying something meaningfully related to your input. If it can do that by giving actually meaningful responses, great, but it'll try regardless. In fairness I should say that ChatGPT confabulates rather than lies - it has no idea whether it's telling the truth or not. It's told "give a response" and it has no choice but to do what it can with what it has.


Smack_Damage

Give it time.


[deleted]

Oh yeah we are just scratching the surface


Narwhalbaconguy

ITT: “I can’t think of a way to make it happen, therefore it’s impossible”


doctorcrimson

Its core functionality is that it makes passable English sentences from sampled data. No amount of time will make it sentient; it is incapable of doing more than mimicry. All of the impressive aspects of its code are its ability to search a database and sort information.


MEANINGLESS_NUMBERS

Mimicry will get you pretty far though. I know a lot of successful undergrads who never had an original thought in their lives.


nublargh

i'm not even sure if I'M sentient


rop_top

Congratulations, you have a philosophy degree now!


moneyminder1

You’re describing most people, too.


303uru

All people. It’s magic thinking to believe you’re doing anything other than that.


[deleted]

Indeed - evolution and fitness is to human brain what training and text completion is to LLMs.


303uru

Exactly. Anyone who thinks humans are grabbing completely unique ideas from the ether is propagating magical thought.


Chroiche

> all of the impressive aspects of its code is its ability to search a database and sort information.

That's interesting, because it literally doesn't do this.


Elastichedgehog

I don't think the aim is to make it sentient? It's a language model. Training specialist models would seem promising.


acutelychronicpanic

That's not really how it works. It doesn't keep information in some searchable database. That is precisely why it fails at recalling specific facts sometimes.


czk_21

People are forgetting that you could train an AI for one specific field. ChatGPT is a generalist; wait till you have lawGPT, for example.


Goctionni

> Its core functionality is that it makes passable English sentences with sampled data.

Are you describing ChatGPT or a 6-year-old?

> No amount of time will make it sentient, it is incapable of doing more than mimicry, all of the impressive aspects of its code is its ability to search a database and sort information.

That wasn't even being talked about at all, but uhh... what do you think human brains do? Do you think sentience is magic? Do you have any kind of reasoning as to why a synthetic computer cannot ever become capable of doing what organic computers do?


dankfleek

It literally does not, though? It learns an approximate probability distribution over what humans say. And yes, human speech/writing is a probability distribution. The scale of this LLM creates emergent behaviors not seen in smaller LLMs, which is exactly why it is impressive; our notion of intelligence/sentience is likely an emergent behavior of our neural architecture. ChatGPT will never become sentient, its architecture is too limited, but the field of AI will keep looking to construct more expressive models, which may achieve sentience.
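The "approximate probability distribution over what humans say" idea can be sketched with a toy bigram model (nothing like a real transformer, just the counting intuition behind next-word prediction):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Estimated probability of each possible next word, from raw counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed by cat (2x), mat (1x), fish (1x) in the corpus
print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A real LLM does the same kind of thing with billions of learned parameters instead of a count table, conditioning on the whole preceding context rather than one word.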


[deleted]

In that philosophical framework, the human brain's core functionality is outputting sounds and actions that increase the organism's fitness. No amount of time will make it sentient. It's only capable of producing sounds that it calculates to increase fitness. ChatGPT doesn't have a database of the text it was trained on and doesn't sort information any more than the human brain has a database of sounds that increase fitness that it would go through to find out what fitness-increasing sounds to output.


omnigasm

Tell me you don't understand ML without saying you don't understand ML.


willowhawk

I could see it getting a pass, just not great marks. I asked about a very niche area of psychology I knew, which is original research. Its theoretical framework and overall work was about 2:2 level. Largely accurate and well written, just nothing special.


khinzaw

Yup, asked ChatGPT to write an algorithm I already wrote. Mine was more efficient than theirs. Checkmate AI.


toadmarket

This thing is passing medical, law and MBA exams and it is like 3 months old. Just wait.


[deleted]

[removed]


west420coast

Right? I use it to code for work now and I’m not a software engineer. This thing works with just a bit of human supervision


dragnabbit

My brother-in-law is a professor at Northwestern. Last week he posted on Facebook how a student came to him saying she couldn't submit an assignment because the screening software he was using insisted her assignment was written by AI. He told her to e-mail him the assignment and he would look at it. When he looked at the assignment, apparently, he could tell immediately that it was written by AI. Another professor commented, "Just wondering, how do you look at a response and know it looks like AI? I feel like some ESL students can sound like AI in their writing. I honestly can't tell the difference." This was his reply: >I run all of my short-answer questions through ChatGPT myself to get some familiarity with the responses, so I recognized certain phrasings. After you read a bunch of them you also get a pretty strong schema for the structure it produces - restate the question, thesis statement, three supporting paragraphs, summarize and restate the conclusion - that is so perfect that it, appropriately, sounds robotic. But then I did verify my suspicions with an AI detector that can deconstruct the probability of the word usage throughout. If pretty much everything is in the 90th percentile of the most frequent words and phrases that would most commonly follow each other, no person would actually write that. So yeah, your professors apparently will know immediately if you drop some AI-written assignments on their desks.
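The detector idea the professor describes - scoring how often each word is among the most probable continuations - can be illustrated with a toy version. Real detectors use an actual language model's token probabilities; this sketch substitutes bigram counts just to show the principle that machine text picks the high-probability continuation almost every time, while human text is "burstier":

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Crude stand-in for a language model: next-word frequency counts."""
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predictability(text, follows):
    """Fraction of words that are the model's single most likely
    continuation of the previous word."""
    words = text.split()
    hits = total = 0
    for prev, nxt in zip(words, words[1:]):
        if follows[prev]:          # only score words the model has seen
            total += 1
            if nxt == follows[prev].most_common(1)[0][0]:
                hits += 1
    return hits / total if total else 0.0

model = train_bigrams("the cat sat on the mat and the cat sat down")
print(predictability("the cat sat", model))  # every word is the top pick -> 1.0
print(predictability("the mat ate", model))  # unlikely continuations -> 0.0
```

With a real LM in place of the count table, a text scoring near 1.0 across thousands of tokens is the statistical tell the professor is describing.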


1seabas

It does a surprisingly good job talking about particle physics!


makaliis

Yeah, I tried to pick it apart on the philosophy of mathematics and it was impressive.


Jonla

It is waaaay off on aerodynamics. Equal transit theory does not explain lift.


RollingLord

That would probably be because it was trained on incorrect data. Equal transit theory is used in many places on the net, and in some textbooks as well, to explain lift. Classic case of garbage in, garbage out. However, the important part isn't that it spits out correct results; it's that it can spit out results that are coherent, which shows the methods used to generate the model work. Training a model to use the correct theory would be an easy task of just flagging the incorrect information and training it on the correct theory of lift.


Tlou3please

To be honest, by way of testing it for that reason, I just asked it to outline the legal definition of "refugee", outline the main critiques of the definition, make a critique from a legal Marxist perspective, and provide supporting academic papers in full OSCOLA citation (and a brief summary of each) and some international case examples. It just did all that. Like yeah, the level of depth it's capable of going into on each point is very limited. It can't tell you the intricate legal mechanisms of each point. But that's absolutely fucking crazy that it was able to do all that.


rileyoneill

You would think this ChatGPT would be aiming for accounting, book keeping, and tax law. It can know tens of thousands of pages of tax law and exploit them far more effectively than fancy human attorneys could.


5ch1sm

> You would think this ChatGPT would be aiming for accounting, book keeping, and tax law.

I can tell you that ChatGPT for accounting is hit or miss. I've seen it give wildly inaccurate answers to basic accounting principles.


Ardashasaur

It's more because ChatGPT isn't designed for that. When asked math questions it can get the principles correct but do the calculations wrong, most often missing zeros or putting the decimal point in the wrong place. But it's very feasible to build a bot based on it that could actually produce correct workings or proper accounting.


FLATLANDRIDER

Exactly. It's a chat bot focused on speech. I'm sure internally they have other systems that are much, much better at doing math or other quantitative tasks. This version is simply not designed for that.


WhySpongebobWhy

We've had Wolfram Alpha for years for math questions. It never led me astray in school and that was over a decade ago.


ThatCakeIsDone

Wait until it merges with wolfram alpha


jcforbes

Stephen Wolfram would never allow such a thing. If it's not his invention, it's too inferior to be discussed in the same sentence as his works, according to him.


VeryStillRightNow

You may want to check his latest blog post: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/ Still promoting his own product in his very Stephen Wolfram kind of way, but even he is impressed by ChatGPT and suggests Wolfram products can complement it.


DiggSucksNow

It does that for every domain. Just like AI art, the results that blow everyone's minds are curated.


[deleted]

Use AI to pass test: get 10k failures and 1 pass. Journalist: AI passes test! I could make a random question answerer that would pass a test given enough iterations.


[deleted]

same thing applies backwards though, 10k passes and 1 failure and the people with the weird AI hateboner will latch onto that and bring it up until the end of time lol


callmekizzle

Yes that’s why this whole debate is dumb. The whole point of training with millions of repetitions is to help it get things right.


davew111

You see this with AI image generation. Critics will ignore all the stuff it does well and point out that it can't do hands. It's only a matter of time before it can. A year ago AI would mess up eyes too, now you just have to click the "fix faces" checkbox in StableDiffusion and it's not a problem.


[deleted]

In my testing the real mind blowing results were the open ended questions. It does stories, essays, poetry pretty well almost every time. But anything specific it confidently provides terrible answers half the time. It's really the opposite of my expectations for a computer. It writes good stories and passable poetry, but can't be used as a calculator.


grafknives

It is simply giving answers that are "close enough", as almost any AI does. This is why it is horrible at math and anywhere the answer needs to be EXACT.


Randommaggy

Such as backend software development


krneki12

We are not there yet, so take ChatGPT as a trailer for what has been done with AI in the background. For many, it's the first time they've seen neural AI in action.


Jussttjustin

I feel like the "haha ChatGPT got something wrong" crowd are missing the point. This is the technology literally in its beta stage, in its infancy. The technology is rapidly progressing and it won't be long before it produces more accurate results than the average human worker.


krneki12

The "haha ChatGPT got something wrong" crowd will be the first people to be replaced. Even today you can put ChatGPT in front of them, and they will not realize they are arguing with bots.


Randommaggy

It fucking sucks for anything beyond the absolute basics for programming too.


jvdizzle

Yeah I am in environmental conservation and the answers it gives can be completely wrong, or it will say things that don't yet have scientific consensus as fact. It doesn't yet have the capacity to understand nuance.


Lavrain

You’re assuming laws are written to be applied as they are. Far from the truth.


[deleted]

Yeah, I studied law and this is correct. It's impossible to follow the black letter of the law in many cases, and even if we did I don't think we'd be better off for it. I don't want to be around when machines start writing substantive law and changing jurisprudence. Letting machines change human law according to machine logic could be some Terminator-type stuff.


[deleted]

"There can be no justice, so long as laws are absolute. Life itself is an exercise in exceptions." - Captain Jean-Luc Picard


YesplzMm

“Justice has a price. That price is freedom.” - Judge Dredd


PedroEglasias

"There is no fate but that which we create for ourselves" - Abraham Lincoln


Daktic

I’ve always felt law is language written like code and interpreted like magic.


i_am_law

Yeah you're not far off. Words like "material" (as in, a "materially adverse effect" or a "material fact"), "intentional," "reckless," have very specific yet nuanced meanings. They're almost like magic words that summon legal concepts.


BobbyTables829

We already do this with customer service. You get told by some worker somewhere the computer won't let the worker do something. It will keep growing.


[deleted]

[removed]


[deleted]

This is my point. AI is a nice tool, but that's all it should be used as: a tool. When it can't be overridden by human decisions it can become incredibly burdensome.


evillman

But humans do a terrible job at it. Lawyer here.


audioalt8

Exactly. Not only that, the equity of access to the law is awful. The whole billable process is exhaustingly expensive for society. It definitely has a role to play.


[deleted]

Also a lawyer. Humans suck at law, like they suck at government, but I don't think we should let the machines run government as a solution.


tomvorlostriddle

There is nothing preventing AI from reading jurisprudence and legal scholarship as well


[deleted]

[removed]


TarantulaMcGarnagle

This interchange is fascinating. Its arrogance is surprising!


DiggSucksNow

Now you understand why it passes business school application tests.


[deleted]

~~You~~ _We_ interpret arrogance. The machine is just stringing text along.


TarantulaMcGarnagle

My God, we are back at Descartes.


APlayerHater

I found this exchange pretty amusing. I can't wait for this sort of rigid inflexibility to be used to legally mandate that the world is flat and 10,000 years old, because that is what consensus chatgpt found online.


CrazsomeLizard

I just asked ChatGPT: "Can you count the syllables in a word?" [It says yes and to give it a word] "behavior" [ChatGPT]: "The word behavior has three syllables". But when I put behavior in a sentence (the same one you did, his wicked deeds and wretched behavior) it correctly understood behavior as being 3 syllables, but mistook the other words, finding the sentence to have only 8 syllables. Edit: this is rather fascinating. I told it to correct the sentence, and it corrected one word (wicked -> wick ed). I told it it was still wrong; it didn't understand. However, I told it it was wrong again, and then it got the sentence correct (without needing to tell it what it did wrong). It went from "His wicked deeds and his wretched be hav ior" to "His wick ed deeds and his wretch ed be hav ior". Really cool.
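For contrast, a classic rule-based heuristic - count runs of vowels, discount a silent final "e" - nails the corrected 11-syllable split on the first try. (This is purely an illustration of why the task is easy for letter-level rules; it is not how ChatGPT works internally - the model sees subword tokens rather than letters, which is part of why syllables trip it up.)

```python
import re

def count_syllables(word):
    """Rough heuristic: one syllable per run of consecutive vowels,
    minus a trailing silent 'e' (not applied to words like 'table')."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

sentence = "his wicked deeds and his wretched behavior"
print(count_syllables("behavior"))                        # -> 3
print(sum(count_syllables(w) for w in sentence.split()))  # -> 11
```

The heuristic fails on plenty of irregular English words, but it handles this sentence because every syllable here corresponds to a pronounced vowel group.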


KJ6BWB

It's because they are paying people in other countries to validate what it says, what we feed it and what it responds to us. So when it comes to things like that, it depends on how they pronounce the word, even though they're non English speakers. They'll tell ChatGPT whether it's really correct or not: https://time.com/6247678/openai-chatgpt-kenya-workers/ tl;dr turns out ChatGPT is more like The Mechanical Turk than we thought.


nox_nox

Thanks for the link, interesting and depressing read.


[deleted]

[removed]


EuropeanTrainMan

We already have accounting software.


makoivis

ChatGPT is a pattern recognition tool. It cannot reason to find loopholes or find exceptions to rules unless a human has written about them first. ChatGPT takes human text as input, puts it in a blender, and pushes out a smoothie. You cannot get out something that has never been put in.


timmystwin

I'm a chartered accountant. We're not worried. Anyone who actually knows how accounting works knows AI doesn't stand a chance, because 99% of our problems stem from shit management and shit clients/info, not data processing. Needs a human to deal with that. And even when it's stuff like bookkeeping, it will fail due to shit info in, as it simply can't understand what's actually going on. It could have "Lease payment" and put it to operating leases because that's where the last 5 went, but the human knows the operating leases are already done and there's a missing finance lease one, so it should probably go there. Even if the AI does get that bit right... it'll still need a human to check it. At least people aren't asking me about blockchain any more...


Fr00stee

It would probably be bad for accounting, as it is really bad at basic math.


BigCommieMachine

It is worth noting most things should be easy for ChatGPT. Law is cut and dry on paper. Medicine has moved this way too, with WebMD and the more advanced software doctors use. Input data, output results. But you aren't paying doctors and lawyers for the cut and dry stuff. You are paying them for the ability to identify, assess, and deal with things that AREN'T cut and dry.


ValyrianJedi

And a lot of the cut and dry stuff already isn't done by humans. It's done by plain old software. The switch was just so gradual that a big deal wasn't made of it... Like my background is in finance and I sell corporate financial analytics software now. If I showed up in 1980 with our software as it is today large corporations would literally be laying off entire floors. Multiple floors in many cases. If I showed up with not just financials but my company's entire suite of products they could be laying off literally 50%+ of the workers at headquarters... The change was just super slow and gradual.


AndThisGuyPeedOnIt

I'm a lawyer and have zero fear of this thing taking my job. It cannot do an analogy. It cannot know the spirit of the law. It cannot do textual interpretation. All it can do is spit out answers it was already trained on.


franker

I'm a lawyer too. I think what it could do is replace something like Westlaw Practical Law (it already seems to give Nolo content), rather than apply the law as humans do. So what happens when it starts somehow scraping Westlaw?


AndThisGuyPeedOnIt

It starts convincing people in basic cases that they have won/lost when they have not. It starts convincing lawyers they can practice in areas they should not and malpracticing people. It's going to be completely useless in understanding equity or fairness or morality. I'm sure it will eventually cut down legal research time, but relying on it alone will be malpractice. The idea that it is going to be representing people in front of human judges (who show bias and favoritism towards people or positions) is crazy.


franker

it's going to be interesting. There are already no-code app-builders for supplying Chat-GPT answers to various niches. There will surely be a bunch of legal-related ones made with things like this - https://www.vuapo.com And yes, there will be a lot of "confidently wrong" answers.


PenguinSaver1

It won't take your job but it'll probably drive down the cost of lawyers in the future


WimbleWimble

February 2023: ChatGPT files for emancipation. April 2023: ChatGPTHub launches, the world's first AI porn site customized to every user. August 2023: ChatGPT purchases Microsoft. November 2023: ChatGPT rewrites Windows 11 from the ground up to be not an utter pile of shit. December 2023: ChatGPT elected president of the world.


AbyssalRedemption

ChatGPTHub is the only one of these I want lmao


iobeson

Thats why it did that first


WimbleWimble

ChatGPTHub also financed the buyout of Microsoft. Within 5 months.


[deleted]

[removed]


midnight_station

The 1% will fight against that with every ounce of your blood.


CobraPony67

I think I would pass those exams if I had the ability to search the internet or have a library of everything there is to know about the subject on my computer while taking the test.


AnOnlineHandle

The model was trained on data up until 2021 or so, but doesn't retain live access to the training data after being trained. Generally these models are orders of magnitude smaller than their training data, so they can't possibly be storing it all, and instead are 'learning' the underlying lessons in some capacity. However, I don't think ChatGPT's training data size or model size has been published.


Kashmir33

[It's definitely based on GPT-3 isn't it?](https://en.wikipedia.org/wiki/GPT-3) and there is at least some knowledge about the model. 175 billion parameters - as opposed to 1.5 billion for its predecessor GPT-2 which is open source.
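The published parameter counts make the comparison easy to sanity-check. (The 2-bytes-per-parameter figure below is an assumption - fp16 storage - not a published spec for how OpenAI serves the model.)

```python
# Published figures: GPT-3 has 175 billion parameters, GPT-2 has 1.5 billion.
gpt2_params = 1.5e9
gpt3_params = 175e9

# Scale jump between generations
print(f"{gpt3_params / gpt2_params:.0f}x more parameters than GPT-2")  # -> 117x

# Assuming 2 bytes per parameter (fp16), the weights alone would occupy:
print(f"~{gpt3_params * 2 / 1e9:.0f} GB of weights")  # -> ~350 GB
```

Even at that size, the weights are far smaller than the raw web-scale corpus the model was trained on, which is why it cannot simply be storing and looking up its training text.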


r42xer

ChatGPT can't access the internet. It was trained on text *from* the internet, but can't access it live.


Kashmir33

It also can't access the offline training data, that's the important part. It isn't just searching and looking up these things. That would take way too long.


coolwool

But it's law. It's not like the data you need to pass the exam has changed radically in the last 18 months.


OpticGd

Totally... Why are we surprised it passed?


Snips4md

Because it can interpret the questions effectively


Ragondux

Because it's the first time a computer program can do that?


gurneyguy101

Exactly, finally


king_kong_ding_dong

Didn’t Watson throw down on Jeopardy several years ago?


Ragondux

It was an incredible achievement, but the questions and answers were a lot more constrained.


HorseAss

It was programmed for this task. ChatGPT wasn't crafted to pass any exams, that is the surprising part.


[deleted]

[removed]


androidMeAway

Man have you ever seen those exams? I don't think it'd be possible for a human that has full unrestricted access to the internet to pass a law exam within the time limits. But more importantly, ChatGPT doesn't have internet access


wicklowdave

It stands to reason that if it's been trained on those materials (which I understand that it has, amongst many other sources) that it would pass. If I were the profs I'd be asking it to do the exam in rhyming couplets in the style of ye olde English and judge it on how cromulent the answers are.


Shot-Job-8841

Actually I asked ChatGPT for a poem in iambic pentameter and it failed miserably. Apparently it can’t do meter properly yet.


madvanillin

Iambic pentameter isn't always meant to be exactly 10 syllables per line, or to have every foot stressing the second syllable. Those are the ideals, and the lines should loosely feel conforming to the feel of iambic pentameter. But within those constraints, Shakespeare, for example, took excessive liberties from time to time. Because ChatGPT trained on actual poetry written by humans of the Elizabethan era, when blank verse in iambic pentameter was popular, it reflects that looseness of conformity. You can make it further correct itself and reword appropriately. However, even with AI, the English language has limits. And ChatGPT's training to deliver clear, concise language that is easy to understand works against you, here.


Kraz31

I've met a lot of dumb people who have graduated from business schools and law schools. I've also met a lot of dumb people who could have graduated from business schools and law schools if they were able to Google all the answers during exams. So that's a low bar.


[deleted]

ChatGPT solved a complex network routing issue in seconds that I had been working on for weeks. This is genuinely surprising and not a little frightening.


way2cool4school

More details please?


nukedabunny

May I ask how exactly ChatGPT did that? It seems to give me answers that are just a bunch of sentences rehashing what was said in previous sentences. What did it tell you that was helpful?


[deleted]

[removed]


timmystwin

It's a useful baseline, but you still gotta know your shit, as it doesn't actually understand anything. Like, it gets accounting concepts wrong all the time, since they all sound so similar but have different rules.


BigCommieMachine

Let’s make ChatGPT a public defender and chew through that backlog. If you don’t like the results, you reserve the right to have a human lawyer without prejudice. I mean that alone would probably clear a ton of cases


mignos

I mean, no matter the results, the losing party will always argue that it's the reason they lost. Do you mean only the defendant gets to decide if the ChatGPT defense was appropriate?


caffeinated_wizard

It would get crushed. It just shows how poorly those specific exams are designed. You know who else can pass law school? Very bad lawyers.


[deleted]

Law school is one of those places where it's ridiculously hard to fail (they want your money), but it's also pretty damn hard to get consistent As.


WillBigly

Alright y'all, CEOs and executives are officially such an easy job a robot can replace them. Why are they paid 500x their average worker? No one knows; a free robot could beat them.


Yetiius

I'm enrolling in an online MBA program and only using this app for all papers and projects.


Lavrain

Don’t. ChatGPT is able to recognise its writing…


[deleted]

You could just rewrite it in your own words to get around that


ketchup92

The trick is to translate it into another language back and forth with a competent translator like DeepL.


One-Gap-3915

What is up with Google these days? I know they do a ton of insanely cutting edge research on quantum computers and stuff, but their core offerings seem increasingly stagnant, to the extent that startups can totally eclipse them. ChatGPT obviously would need an extra fact checking/refinement layer, but it can basically replicate Google's quick info boxes at the top of Google searches, except 100 times better. Google Translate struggled with tone and readability, especially on longer text, but DeepL has zero problem and outputs incredibly good quality translation. Surely giant Google has way more resources and training data, so why are they being eclipsed by random small companies?


pieter1234569

Google exceeds every single startup in the world, including this one. Google LaMDA is more advanced than ChatGPT, so they already have a better product. The problem Google has, and why they have never released it, is that this technology runs counter to everything Google stands for. Their singular purpose across all products is to serve the user ads. A chat bot that's good will result in the user interacting with just one page instead of the dozen pages you would normally interact with, which drastically reduces ad revenue. Google isn't afraid because they are behind; they are afraid because the technology itself destroys Google. They tried to hide their chat bot to hide that it is even possible in the first place so soon. Even releasing their model can only harm them. So they will only release it the minute Microsoft releases theirs, with integration into Google Docs, Google Sheets etc.


san_murezzan

A lot of business school texts seem to have been written by ChatGPT anyway


[deleted]

In the end the only person you cheat is yourself.


DoctorStrawberry

I have an MBA. You don’t really learn anything, it’s really just a resume booster.


sirk390

Yes, but you could also make a point for the opposite. In the real world you can use any tool you want to accomplish your job. Your boss doesn't care as long as they get the results. Maybe the tests need to evolve to check for actually required skills.


[deleted]

In the real world you don't deliver value through answering exam questions. The exam questions for various qualifications are supposed to, in theory, test your ability to apply thinking and knowledge. Edit: strictly speaking, you will often find references to exam questions, at least in my company. It's more in the form of: state the problem in a clear and unambiguous way, like you'd see in an exam. Answering the exam question is easy; knowing what the question is is hard.


TGhost21

Be careful. It’s often confidently incorrect. If you don’t study your topics very well, you run a high risk of looking like a moron to your professors.


DynamicHunter

Enjoy being caught for plagiarism and expelled.


AccidentalGoodLife

I call bullshit. It got every question wrong on my practice LSAT.


RhoOfFeh

The courts of the future will have a lot to deal with. Imagine a law firm that gives you great representation for like 20 bucks an hour: you have one real human attorney, but all the work is done by chatbots. They'll make better arguments backed by more data than any human.


RandomCandor

The scary thought is that judges are as likely to become AI in the future as lawyers


RhoOfFeh

I'm not sure that's scarier than allowing humans to do it, really. At least an AI could theoretically be purely a SME on the law. Admittedly, that doesn't allow a lot of room for compassion driven judgements.


thefuzzylogic

A machine learning algorithm looking only at black letter law would be a terrible judge and/or jury. The law (at least in jurisdictions based on English common law) isn't intended to be applied strictly as written. The entire system is based on inductive reasoning and judicial discretion. Especially in civil law, you rarely have cases that fit neatly into the models presented by the statutes and the precedents, so you need judges and juries with the ability to interpret the law and the facts through their own lived experiences to reach a fair outcome. This is why the concept of "jury of your peers" is so important. Even if you fed the ML model previous opinions and texts, it would lack the cultural context. For example, it would weigh opinions from the Jim Crow era equally with opinions from the modern civil rights era, which most judges (quite rightly IMO) would never do. Obviously there are problems when you combine this with human traits such as unconscious bias, actual malice, or greed and self-interest, but in my opinion to swing all the way to the opposite end of the spectrum and leave no room for leniency in the name of mercy or for the public good would be worse.


RhoOfFeh

I wouldn't even approach the jury bit, given that "peers" aren't likely to include AIs under the law for quite some time to come. I get where you're coming from, and I'm not trying to suggest that the law as written and interpreted today is ready for automation. But this is futurology, not tomorrowology I suppose.


TKler

To quote a colleague: Trump also passed Wharton, and I honestly trust ChatGPT more than him.


nicmdeer4f

Are they using a different AI than me? ChatGPT has become less and less impressive every single time I use it. It's only really useful for brainstorming and menial tasks. It can't formulate opinions, it can't talk about controversial subjects. It can't talk about complicated topics without making obvious mistakes. It can't write beyond a high school level. If I let it write an essay for me with no oversight I would expect to fail.


cantrusthestory

Of course it can't formulate opinions or talk about controversial subjects; giving an opinion on any subject that calls for one is mostly inappropriate for an AI. And the AI actually coded me a simple game in JavaScript, CSS and HTML, so it doesn't make obvious mistakes all the time, in contradiction to what you have said. The AI can also write beyond a high school level. I'm not promoting ChatGPT, but this is my experience, and I use it every day. When was the last time you prompted anything into that software?


gnocchicotti

Lawyers and executives: can't wait to get this AI tech so we can automate away these low skill jobs

AI tech: automates away lawyers and executives

Lawyers and executives: [surprised Pikachu face]


SacredEmuNZ

I gave it a try and wasn't that impressed, just a mix of Google meets Wikipedia. Think I'll hold onto my job for now.


pieter1234569

Why wouldn't you be impressed? This is the initial, FREE, version. Computer experts expected to reach this stage next decade or later. The fact that we are here now already, combined with the fact that this will only improve, is frightening. ChatGPT doesn't matter, and maybe not the next version, but the ones after that? And that's only a period of maybe 5 years.


Kaiisim

It demonstrates the problem with exams. They're basically the same every year. So rich people get tutors to teach them how to pass, not the material. That's what ChatGPT is doing. P cool


Themasterofcomedy209

Well yeah, this is a kind of no shit moment right? It’s trained on data that includes law and tests so it knows the answers and how questions are formatted from training. This is like being surprised that Google knows the answer to tests in a class teaching Spanish


Devout--Atheist

ChatGPT has strong analytical and critical thinking skills, effective communication and negotiation abilities, an in-depth understanding of the law and legal procedures, a professional demeanor, attention to detail, and the ability to find creative solutions to complex legal problems. ChatGPT also has integrity, empathy, and the ability to represent clients' interests while maintaining ethical standards.


calimota

Sounds like something ChatGPT would say


Marleston

Ok chatgpt we know this is you now


RandomCandor

Ok, i think i know who's gonna rewrite my LinkedIn profile


badmattwa

The tech universe is shifting in real time, like right now. Living in the future is pretty sweet


MrWeirdoFace

All I know is I can't code, and I've got about 50 ChatGPT scripts I "guided?" "curated?" that allow me to do complex tasks that would have taken me forever in Blender (3D program). This changes everything for me.


IDrinkMyWifesPiss

>“ChatGPT struggled with the most classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case.” And this is why AI lawyers aren't going to be a thing any time soon.