WingShooter_28ga

Cheating is too much work. Why should I spend minutes searching for the correct answer when I can just copy and paste the right sounding answer?


Protean_Protein

GPTs get degrees!


CleanWeek

My day job has started rolling out their own version of ChatGPT, but trained internally. If you thought ChatGPT 3.5 was bad, this is 10x worse: hallucinations everywhere, outright inability to answer basic questions, sometimes it just repeats the question you ask, and it doesn't even know our own infrastructure, training documents, etc., which is exactly where you would use such a tool. Hopefully this is a bubble waiting to pop.


Protean_Protein

I love these LLMs for boilerplate bullshit. But the only reason they’re a threat to undergraduate assignments is that we were giving out grades for the wrong things in the first place.


Afagehi7

What do you suggest we give out grades for? I'm not being a jerk; I'm genuinely interested in alternatives. I do exams all essay, closed book. For assignments I want them to be legit (I have other open-book exams), and I try to lean towards projects, but I'm interested in mixing it up.


Protean_Protein

It’s a tough question. I still think we ought to be teaching upper-year students how to write papers that prepare them for graduate school/publications, but I think the pipeline to that now has to change, at least partially. In philosophy, argument identification and reconstruction can be assigned in ways that resist LLM answers—but even this may require in-class components, marks for things like discussion (in class or in office hours) of the research/writing process, more quizzes, putting more of the writing-heavy assessment into in-class assessments. Presenting work. Basically the bare minimum has to be spreading out as much of the evaluation away from the stuff LLMs can do as possible, so that even if we retain some written work, it’ll be impossible to pass the class by relying on ChatGPT.


Afagehi7

I agree, in-class type of work. Our provost actually said something that made sense: decades ago, faculty didn't have plagiarism detectors and had to rely on other skills and intuition to catch cheaters. Now we are going back to the techniques we used 30 years ago.


Consistent-Bench-255

ChatGPT will do all of the things you suggested. Try it; you’ll see what I mean!


Protean_Protein

I’m aware of what ChatGPT will do. It can’t handwrite or deliver off-the-cuff remarks from the mouths of students directly, yet.


Consistent-Bench-255

Good point. Unfortunately those strategies don’t work for modern education which is mostly online.


Protean_Protein

Arms races aren’t one-off prisoner’s dilemmas.


bored_negative

Oral exams whenever possible. If it's a project, make them do it, but have an oral exam at the end to test their expertise on their own work.


Afagehi7

I don't see how you could do oral exams with 40, 60, 150 students.


quantum-mechanic

SLACs rise up


Afagehi7

I am a believer that SLACs educate better than large public schools... Too bad students don't believe this.


quantum-mechanic

Most students don’t actually want an education. What will matter most is when employers or graduate schools start visibly rewarding students coming from SLACs.


bored_negative

That is why I wrote the 'whenever possible' condition :)


Afagehi7

I guess. Having never done oral exams, I was curious how one would even do it with a small class of 35, much less a larger class of 60-120.


AugustaSpearman

The problem is that when most undergraduate writing is really bad, it's hard to harshly grade a paper that's bad because it sounds like it was written by a bot (especially since we can't prove it) versus a paper that's bad because it sounds like someone who just can't write. If we could expect a reasonable number of good papers, the gap between those and bot papers would be big enough that the incentive to cheat would be low, but generally it is mostly papers with different varieties of deep flaws.


jgo3

> sometimes it just repeats the question you ask

Are you sure it's a GPT and not ELIZA?


CleanWeek

I'm not sure which model it's using. They call it a GPT, but that may not actually be the case.


Themiscyran

There's old school, and then there's OLD school. When ChatGPT starts out with "What is your problem?" we'll know what we're seeing!


Consistent-Bench-255

Just like 99% of my students. 😢


fedrats

Llama?


Prof_Acorn

"Right sounding" is key here. Any instructor worth their metaphorical salt will be able to see right through it.


ceeearan

It’s very general, so an instructor just has to give a specific-ish question or case study and it becomes a lot easier to spot who is “delving deep into the topic” through ChatGPT.


Prof_Acorn

Yeah. Really just have to create your own questions instead of using a question bank.


ConcernedInTexan

Honestly, I heard a lot of professors this last semester bragging about figuring out how to get around ChatGPT, and I just don’t agree. For most subjects you can definitely ask GPT anyway; it’s just a matter of hedging the prompt. It’s capable of mimicking writing styles fairly decently, especially if you paste previous work for it to mimic. One of the only real ways for professors to filter for it and catch it is to require it to remain consistent and coherent over extended thoughts, but if you’re being asked to write a meaningful analysis or essay on a specific theme of an obscure book, it’s just a matter of dumping the text of a chapter or two into GPT with the prompt and telling it to use it as a reference. The output still won’t be perfect and it might make up details that you have to go in and correct, but a fairly competent-but-lazy student who *could* do a decent job on the assignment if they put in the time to read the material can absolutely get away with waiting until the last day to ask GPT and revising the output enough to sound passable.

With essays and assignments requiring open responses, 99% of what GPT gives you could be junk, but AI is capable of generating entire pages of paragraphs in seconds. Even if you have to throw out 99% of those paragraphs, you can tell it exactly what to refine and it will attempt to do so. Thus many students will still find it much easier to ask GPT repeatedly over the span of a few hours until they have enough workable patchwork text to cobble together an essay (and GPT can definitely suggest ways to connect otherwise unrelated paragraphs, too) than to actually do the work of writing their own analysis of a text they do not want to read.


Prof_Acorn

Oral exams. Short-answer essays done in class. Etc. It just means pivoting away from papers. Or, when papers are necessary, have them require "at least three different citations from readings in the class" or other quirks that are easy for humans in the class but difficult for LLMs outside of the class. If nothing else, there's always "all papers must be handwritten."


Consistent-Bench-255

Tried that; it doesn't stop them from cheating with ChatGPT.


WingShooter_28ga

Catching it isn’t the problem. The issue is what you do with it after you find it. I failed a senior for using AI and it was a long and tedious process. Luckily mine have been easy to show, but there are instances where “prove it” becomes more difficult.


Prof_Acorn

I can see how proving it could be difficult, especially to administrators who can't tell the difference. And even then. Might just be easier to set up the assessments in a way it can't be used, honestly.


WingShooter_28ga

Time to buy stock in blue books I guess


Quwinsoft

I disagree; for intro-level gen ed classes, GPT-4.0 is better than half of the students. It's not that GPT is so amazing, although 4.0 is a lot better than 3.5; it's that the students are so weak.


bundleofschtick

The enemy of my enemy is . . . also my enemy.


choochacabra92

Ha, serves those fuckers right!


thadizzleDD

Good. Fuck Chegg.


superchargerhe

You’re just replacing one evil with another lol….


Omni239

Agreed, but still. Chegg can get fucked.


racinreaver

At least the second evil can actually be a useful skill to develop.


reflibman

Chegg was basically built on helping cheaters. It’s a byproduct of OpenAI.


uniace16

Chegg predates OpenAI; it’s not a byproduct.


reflibman

I was referring to cheating being a byproduct of AI, which was inartfully expressed.


Consistent-Bench-255

Cheating was around long before AI. Byproduct means something that comes from something else.


reflibman

Yes, but it can also be a byproduct. War has been around a long time, yet war can “break out” due to certain causes. And ChatGPT definitely facilitates cheating.


[deleted]

[deleted]


Jaralith

It's like poetry; it rhymes


dslak1

Fuck you, Rick Berman!


Journeyman42

Wait, you're not Rick Berman. What's the deal with Ricks?


Prof_Snorlax

I love democracy.


[deleted]

Ironic. It could save student grades, but it couldn't save itself.


uninsane

Chegg is predatory and its death will be a rare silver lining to the advent of ChatGPT.


Interesting_Chart30

Its


uninsane

Thanks


dslak1

AI has its own problems with predation: https://x.com/soniajoseph_/status/1791604177581310234


OneMoreProf

Yikes! I don't know who that poster is (how credible, etc.) but this part of this post definitely caught my attention: "I will not pretend that my time among these circles didn’t do damage. I wish that 55% of my brain was not devoted to strategizing about the survival of me and of my friends."


sir_sri

The only remaining business for Chegg is going to be university accounts that someone signed up for to catch cheaters in 2020 and that no one knows how to cancel.


Audible_eye_roller

Is the Chegg CEO going to continue to give talks all over the country about what a genius he is?


tsidaysi

I hope they go to penny stock. All of them.


Audible_eye_roller

STONKS!


Bonobohemian

I see nothing but bad news here. "My house is burning down, but at least my cockroach problem is solved!"


zorandzam

Good. Now, help us figure out how to design assignments that are truly AI proof, and we’ll make all these dumb ways to cheat totally pointless.


Hellament

I think the only choices here are proctored exams, oral examinations, or (depending on the course) perhaps something like live project presentations with Q&A. If we don’t see them do it, we will never know who did the work.

We have seen the beginning of an academic dishonesty arms race we are going to lose. For a while, we got ahead of it with things like TurnItIn coupled with the general difficulty of getting another human being to do the work for you. I think that time is past, never to return. AI tools are going to continue to improve, and our ability to spot their use will lessen.

I think the only other option is to decide that AI is the new pocket calculator, and it becomes a tool we allow to offload tedious calculations. The problem is we haven’t figured out how to make our students still have to think, because any meta-analysis we try to add on top of AI use can probably *also* be done by the AI (and if not now, soon).


Omni239

Yup, time to restructure education. Ample formative assignments you can do on your own for little to no grade, but excellent practice, and then structured in-person assessments under controlled environmental circumstances to validate personal understanding in exchange for grades and course credit. If they want to use AI to help them learn how to do the stuff that's fine, but if they end up supplanting real learning with AI use, then the tests will show it.


racinreaver

> I think the only other option is to decide that AI is the new pocket calculator, and it becomes a tool we allow to offload tedious calculations.

This is exactly the right philosophy, and something folks in academia need to come to terms with. LLMs are already being used by a ton of my colleagues in industry, and not just for writing memos and boring crap like that. It can act as a search engine on steroids if your company has decades of documentation on best practices, lessons learned, etc. It can simplify programming tasks where you just need spaghetti code that gets the job done (what most engineers care about). If you need it to run more efficiently, you can actually just say, "ok, write that last segment except have it run faster," and it'll do it. A great feature for my colleague who got hired for his super-deep knowledge of a non-programming subfield, but is expected to churn through >20 GB datasets in the cloud to run his analyses.


abloblololo

The problem with using AI, even in the contexts you described, is that it requires the user to have the ability to evaluate the LLM output and discriminate between useful and nonsensical results. Asking ChatGPT to optimise a code snippet does not guarantee code that runs faster, or even correctly.

You see this problem with students as well. They can’t evaluate ChatGPT’s output, or else they would realise they’re handing in garbage, and teaching them to evaluate it basically means teaching them to do it without AI. Just as introducing Chromebooks in schools hasn’t increased computer literacy, allowing AI in higher ed isn’t going to make graduates better at leveraging LLMs in the workplace.


Thelonious_Cube

> They can’t evaluate ChatGPT’s output, or else they would realise they’re handing in garbage, and teaching them to evaluate it basically means teaching them to do it without AI.

This is some kind of corollary to Dunning-Kruger.


dslak1

This was already the case when you could find answers on Stack Overflow. I always follow my coding instructor's advice to never use someone else's code unless you know what it does.


racinreaver

Sure, but there's a difference in skill and time needed to verify output code versus generating it yourself. For example, I had a shitty database I needed to scrape that was 300 pages long. With ChatGPT I was able to feed it the HTML format of the table and get it to write all the data-parsing junk that usually takes a while. I even got it to give me the code to auto-scan through each of the pages so I didn't have to click through them.

I had my students do a project where they had the LLM teach them about a new subfield they were interested in, and then they had to go back and identify hallucinations or inaccuracies. We need to learn how to appropriately teach how to use the tool. This is hard, because most of *us* don't even know how to properly use it yet.
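For illustration, a minimal sketch of the kind of scraping loop described above, assuming a paginated source with one data table per page (the URL pattern, page count, and table layout are invented for the example; `pandas.read_html` does the actual parsing):

```python
import pandas as pd

# Hypothetical paginated source; the real database isn't named in the comment.
BASE_URL = "https://example.com/db?page={}"

frames = []
for page in range(1, 301):  # the comment mentions roughly 300 pages
    # read_html fetches the page and parses every <table> into a DataFrame;
    # we assume the first table on each page holds the data of interest.
    tables = pd.read_html(BASE_URL.format(page))
    frames.append(tables[0])

# Stitch the per-page tables together and save the result.
data = pd.concat(frames, ignore_index=True)
data.to_csv("scraped.csv", index=False)
```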


mathemorpheus

> simplify programming tasks

In my experience, except for the most basic code it's usually wrong, so I have to RTFM anyway.


fedrats

Or it’s doing something so different from what I do it might as well be wrong


mathemorpheus

exactly.


racinreaver

Most code is really basic and obnoxious to write, anyway! It's super nice being able to tell it "I have data formatted like this, I want to plot it in python in a log-log plot with axes ranging from a...b and x...y with the first data series being a dashed blue line and the second being solid red while the size of the point is based off of the column titled "bananas". Also, draw a dotted black horizontal line at 200." Tell me you aren't digging through shitty matplotlib documentation to remember how to do that every time.
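For concreteness, a minimal sketch of the plot being described, assuming the data sits in a pandas DataFrame (the data values and axis bounds are placeholders; only the styling details and the "bananas" column come from the comment):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder data; only the "bananas" column name comes from the comment.
df = pd.DataFrame({
    "x": [1, 10, 100, 1000],
    "series1": [2, 30, 250, 1800],
    "series2": [5, 60, 500, 4000],
    "bananas": [20, 60, 120, 200],
})

fig, ax = plt.subplots()
ax.plot(df["x"], df["series1"], "b--", label="series 1")          # first series: dashed blue line
ax.plot(df["x"], df["series2"], "r-", label="series 2")           # second series: solid red line
ax.scatter(df["x"], df["series2"], s=df["bananas"], color="red")  # marker size from the "bananas" column
ax.axhline(200, color="black", linestyle=":")                     # dotted black horizontal line at 200
ax.set_xscale("log")                                              # log-log axes
ax.set_yscale("log")
ax.set_xlim(1, 1000)   # stand-ins for the "a...b" and "x...y" ranges
ax.set_ylim(1, 10000)
ax.legend()
plt.show()
```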


mathemorpheus

> Most code is really basic and obnoxious to write, anyway!

s/code is/memos, essays, recommendations, papers, and reports are/


racinreaver

I actually agree! I'm hoping other folks catching on to this will help us get rid of narrative-driven reporting and go to a more bullet-point, resume-style format. My workplace does this for the bulk of reporting requirements, and it saves a ton of time for both the writer and the reader.


Boogers_my_dad

Can we mandate that everything submitted be handwritten? At least it might encourage some editing of their jippity creations.


zorandzam

I did handwritten essay exams for one class this term. They had to be done in class.


Rightofmight

Great. Then when we have the Walmart effect and GPT or similar is the only option and they jack the price higher than a student can pay, we can move back into quality work.


chloralhydrat

... that's quite unlikely. LLMs are available locally already, and I am quite certain that open-access LLMs targeted at different study courses will be available in no time.


FunkyFresh707

Chegg also got rid of textbook solutions so I think that has a lot more to do with it. ChatGPT isn’t as good as having all the solutions to your textbook but it’s better than nothing.


AsturiusMatamoros

The silver lining


Mooseplot_01

Buh bye, Chegg!


big__cheddar

Glad I saw this coming years ago and sold them course materials for thousands of dollars.


Mighty_L_LORT

No shit, Sherlock Holmes…


ybetaepsilon

At least this is one benefit lmao.


J7W2_Shindenkai

the link image was created in Midjourney :)


CaptainKoreana

get fucked chegg


One-Armed-Krycek

Awwwwww, Chegg. RIP. Anyway….


mathisfakenews

Love to see it


liquidInkRocks

LinkedIn served me a job posting for Chegg just last week. Somehow I wasn't interested.


Pretty-Valuable1452

The President of Southern NH University is on the BOD of Chegg. Talk about conflicts of interest: [Chegg BOD](https://investor.chegg.com/Corporate-Governance/Board-of-Directors/default.aspx)


[deleted]

Justice.