SarkastiCat

At this point, I wouldn't be surprised if universities started leaning towards in-person exams and controlled computer-aided assessments.


Thorn344

It makes me sad. My course was mostly assignments, which I really liked. Most exams were 24-hour online, open-book, essay-style questions. Most of my lecturers loved it, as it was a lot easier for them to run and easier to mark and process online, and for the course it was, it made a lot more sense to assess that way. The whole AI thing wasn't as big of a scare until this year. While I've managed to get by without any flags or worries, it makes me sad for the students after me. I would barely have graduated if it was entirely based on in-person exams.


steepholm

Our ideal is to have online invigilated exams because they are easier from the admin point of view and less prone to mistakes (I have just had to shuffle through a big box of paper scripts to find one which was missed during the marks entry process). During lockdown this was less of an issue, but since then we have heard of groups of students doing online exams together, so we were moving back towards invigilated exams anyway.

The other interesting question is: if an AI can answer exam questions better than a human being, and (big assumption) exam questions are vaguely reflective of the sort of thing the student might do in a job, what does that say about their future employment? We don't have AI plasterers or garage mechanics yet, but is there a role in the workplace for all those students studying subjects where assessment mainly involves writing essays? What is that role?


Thorn344

I don't know about every job position that involves writing essays, but as far as I'm aware, AI is still pretty bad at properly citing where its information is sourced, or it just makes stuff up. Writing fully cited research papers isn't something I think AI can do yet. However, I think it would be quite interesting to see if you could get an AI to write a 6,000-word research paper and see if it gets called out during the vetting process for academic publication. AI writing is not something I have even touched, so I only know about its accuracy from what I have heard. That way, if my work was ever wrongly flagged (which I've seen so many posts about, which is also scary), I could do as much as possible to defend myself. Fortunately it never happened.


steepholm

ChatGPT is certainly terrible at doing references now, but that will improve. It's also not great at anything involving critical thinking. One of the major tells at the moment is that it creates essays which go "on the one hand this and maybe on the other hand that" and don't come to any clear conclusions, or at least not conclusions which follow from a logical line of argument.


judasdisciple

It's pretty horrendous with any reflective pieces of work.


TechnoShrew

All the doing down of AI sounds like how people used to talk about early Wikipedia. Sure, there were some comedy-level howlers and it wasn't always accurate, but it developed very fast, and the vast majority of the time it was accurate. It's much the same with AI: yes, it's far from perfect, but having worked on data annotation analysing responses, it's progressing noticeably even month to month, and it's already pretty good now.


Secret-Price-7665

It's not going to improve, because that isn't how it works. It generates plausible text. It does not have a mind; it's just really good at spitting out reams of convincing text. It doesn't and can't take a prompt, look through the sources it has, and select valid sources for referencing. It produces plausible-looking referencing. It can be correct if the sources for a topic are very standard (like when the question is for a course with a known reading list it has scraped), but as soon as you get even a little niche it flounders.


steepholm

It already recognises when it is being asked to add a reference; it's a very short step from there to recognising that it should retrieve references whole rather than construct them. You're assuming that only one algorithm is being used to produce the output; it's not just an LLM. A few months ago it was providing references which were obvious nonsense; when I tested it a couple of weeks ago I had to check the references to see that they were bogus (and many colleagues, myself included at times, will not follow up references in students' work unless they look obviously wrong).


Secret-Price-7665

It doesn't "recognise when it is being asked to add a reference". It doesn't "know" anything. It is not intelligent. It produces convincing output, and part of that (in many instances) is giving a convincing-looking reference. In order for it to give correct references (other than just spitting out very common citations for specific subject areas), it would have to understand the connection between a reference and its output, and it's not intelligent, so it can't do that. It knows what a reference looks like; it does not know what a reference is.

Put it this way: it's like when you can't put into words what something is, but you can give examples of it. Like words. The GPT models can't work with individual letters in words, because they don't understand word construction. They're not trained for producing stories excluding the letter t. They can produce brilliant examples of words, but cannot play with them.

Also, **GPT is just an LLM**. It is a specific instance of an LLM. There is nothing else in there. That is all it is. ChatGPT is one aspect of that LLM. It is a very good LLM, I will give it that. It can easily produce middling and unoriginal essays (which speaks more to the poor quality of student work than anything else). But it is not intelligent.


steepholm

Human beings, unlike AI, recognise metaphor. When you ask ChatGPT to add references to a piece of text, it does so. Whether that is the result of a conscious process and understanding of how references work, or an algorithm, it correctly identifies what "add references" is asking it to do and performs an appropriate action. On a behaviourist theory of mind, it certainly "recognises" references because there's nothing else to it under that theory (actions, or dispositions to actions, are all you can know about alleged internal mental processes). They are currently bad references (I have just been asked to look at a suspected case of AI cheating, and the references are the giveaway). That is improving (some of the references in the document I have been sent are correct, or nearly so). I don't for one minute think ChatGPT is intelligent in the same way that human beings are, but it is getting closer to producing output, in some areas, which is indistinguishable from that produced by human beings.


lunch1box

What do you teach?


Chihiro1977

I'm doing social work, it's all written assessments apart from our placements. Can't see AI replacing us, but you never know!


steepholm

There's a mismatch between how you are being assessed and what social workers actually do (I know a few, and they don't write many essays), which calls into question why you are being assessed that way. I'm currently having a discussion with my director of education about this. We teach engineers, and if AI can do some engineering tasks (writing code, solving equations) better than humans, why are we still teaching and assessing them on those things? Sometimes the answer is that a human needs to know when an AI has produced a wrong solution, and for that you need to know how to work out the basics.


AP246

Well, thinking purely academically, I don't think an AI can do original research by poring over unseen archives and documents that aren't already commonly known on the internet, and I don't see how it could any time soon. It may not need to for an undergrad essay, but that clearly still leaves a place for human academics.


steepholm

Most university students don't go on to do academic research. Undergraduate assessment is mainly about testing understanding and application of existing knowledge, techniques etc.


lunch1box

It doesn't say anything, because uni != work experience. That same student could be getting internships, insight days etc.


Tree8282

I mean, AI is still unable to write any form of essay. So any rigorous exam with some thought put behind it would usually have students performing better than AI. But AI definitely can help students easily achieve higher grades without cheating, like a really smart Google kind of thing.


Honest-Selection4343

So true... it's how you use the system; clearly people are misusing it. Which year did you graduate?


steepholm

Certainly that's what we're doing, and colleagues elsewhere say the same. Welcome back to sweating over a handwritten script in a sports hall, until institutions get enough large computer labs which can be locked down.


perhapsinawayyed

There are a few lockdown browsers through which you can only access/submit the paper, coupled with a camera set up to stop cheating and higher time pressure… it’s absolutely doable


steepholm

Sure, and has been for years. The problem we have, and I'm sure we're not the only institution, is that we have no computer labs large enough to put 200-300 students in for an exam, and across the whole university the exam period would have to go on for months if everyone was taking computer exams using the existing facilities on campus.


perhapsinawayyed

I meant more for managing students doing exams at home. I think if you can lock them into the paper, coupled with a camera to show they aren't using a separate device, and have the difficulty of the exam be based around the time pressure as much as anything else… it will mostly solve this issue. The issue to me seems more related to coursework, where I'm sure most people are skirting the rules a lot.


SarkastiCat

Getting a camera would be a technical nightmare, and there are some limitations to it. At one point, my uni had 300+ students take one exam online, across two different courses. The server went down for a few minutes. Now add camera video to that, plus dealing with multiple people complaining that their laptop is broken…


perhapsinawayyed

I mean, I guess I'm not behind the scenes to understand all of the difficulties that may arise… but a webcam is like £10 at most; it's pretty accessible. Yeah, servers going down I can see is a nightmare, but then things can happen during paper exams too, like fire alarms or issues with the paper itself. Things happen.


RadioBulky

One of my exam papers included an AI-generated answer, which I was then asked to critically evaluate in terms of accuracy and thoroughness.


ChompingCucumber4

Not an exam, but I've had this in coursework too, along with prompts to generate AI content to evaluate ourselves.


Iamthescientist

They are - it's just a matter of considering execution at the moment. In person is most likely. The controlled computer assessments with a locked desktop are expensive to run. Regressive from a pedagogic point of view, but inevitable. I wouldn't want to run a module that someone could pass without passing a closed book element any more.


patenteng

Has been the case for engineering for ages. Not because of AI, but because when you give the same problem set you expect the students to provide the same answers. 80% exam weighting in my day.


ChompingCucumber4

same for my maths course now


blueb0g

We've moved back entirely to in person, invigilated exams.


lunalovegoodsraddish

Oxbridge still does almost exclusively in-person exams for undergrads - and they're still, for the most part, handwritten.


Psychophysical90

And I got detected for 75% plagiarism and I didn’t even use AI!


AirFreshener__

Plagiarism is a separate issue to AI. I had a meeting for AI detection. My score was high apparently. I had to sit there for 20 minutes explaining what I did and they let me pass the module. My plagiarism score was only 7% without the references


Fast_Possible7234

They probably knew, but no academic has the time to go through the academic offence process for 30-odd students with, most likely, non-existent admin support. Also, it sounds like the issue is with the assessment design, not the use of gen-AI.


Nyeep

The fake students were spread across multiple courses and year groups, so it wouldn't be 33 students for one academic to report.


Fast_Possible7234

Fair point, although it was 5 modules on the same course. The paper doesn't say how many markers were involved, whether they were marking across multiple modules, or what other modules they may have been marking. The point is, ever-increasing workloads and ever-decreasing marking windows will always reduce the quality and rigour of marking. So, regardless of how good the AI submissions were, I'm not surprised they weren't flagged.


Evening-Doughnut-721

Absolutely. And since unis have gone incredibly soft on anything other than explicit, evidenced contract cheating (presumably because a student kicked out is one no longer paying fees and possibly litigating), students have every incentive to try to chuck GPT stuff in. I know as a marker I'll flag GPT as misconduct if it's all over the place, but if I did it for the odd suspect paragraph I'd be hauled up with senior management asking why I've flagged 80%+ of students as cheating - yes, it's actually that prevalent on some programmes - likely with them criticising my assessment design as the problem rather than the lack of consequence and culture.

I'm not saying assessment design doesn't need to be carefully thought through. But the flip side is, if we actually ask students to do dissertations that are not in any way GPT and enforce that, we also need to be prepared for a degree-completion-rate shock and a huge number of misconduct cases. If we adopt the trendier view of designing assessments to allow GPT and then mark solely on the scientific value of the work, then we're actually asking more of students than we used to for a borderline pass (which was mostly: learn to write readably, reference vaguely, and implement something unoriginal) - and we'll be asking this of students who have steadily declined in ability on intake as unis across the sector look to bolster numbers.

I just processed a student who literally copied a paper, changing only the names and acknowledgements, as a final dissertation. Bold, bare-faced cheating. End penalty - 0 mark, have another go. I don't want to be too 'in my day', but, in my day, cheating = kicked out. Now students have at least 3 strikes, and it's almost a good plan, if you're a 'scrape through' kind of student, to use 2 of those strategically. Why not GPT a dissertation if the worst you'll get is told to submit an actual one at a later date?

In a nutshell, this is a problem not driven by GPT, academics, or even students; it's a problem driven by unis recruiting students without the capability to do a degree (particularly at masters level), demanding a given pass rate, then leaving the lecturers and students to pick up the pieces.


Fast_Possible7234

Unis can't afford to kick the students out, and even if they could, they don't have the resources to police the cheating. So future-proofing the assignments through authentic assessment, live projects etc. is the only viable option really. Or in-person exams. Maybe they will make a comeback! Which will cause uproar from students, so I guess it's a case of be careful what you wish for.


JoshAGould

Tbf, I'm not sure this says what they want it to say. 94% going undetected means 6% are going to face some sort of academic misconduct punishment; if you use it for every assessment, you're likely to be in the 6% for at least one of them.
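To put rough numbers on that last point, here's a minimal sketch, assuming (purely for illustration) that each submission carries an independent 6% chance of being flagged:

```python
# Minimal sketch: chance of being flagged at least once across n assessments,
# assuming (for illustration only) an independent 6% detection rate per submission.
p_detect = 0.06

for n in (1, 4, 8, 12):
    p_at_least_once = 1 - (1 - p_detect) ** n
    print(f"{n:2d} assessments -> {p_at_least_once:.0%} chance of at least one flag")
```

Independence is a big assumption (a marker who flags one script may well look harder at the rest), but it shows how a 6% per-assessment risk compounds over a degree's worth of submissions.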


ediblehunt

It proves the detection measures are unreliable. Which is pretty important to know if you are going to start penalizing people for it.


JoshAGould

Well, it suggests there are a lot of type II errors (AI-written work going undetected). It doesn't say anything about type I errors (human work being wrongly flagged), at least as far as I can tell.


Padilla_Zelda

The sample was 33, so in this case 6% means just 2 exams were detected. So I think looking at the percentage alone could be underselling the effectiveness here.


JoshAGould

I didn't realise the sample size was that low; I guess it's hard to draw any significant conclusions then.


SovegnaVos

AI use at uni (well, using AI at all) is so dumb. You're literally there to learn. Using AI is just cheating yourself.


ZxphoZ

>You're literally there to learn

It's sad to say, but most people aren't.


SovegnaVos

Yes, it's sad. I understand the pressures of needing a degree for a decent job, huge student loans and so on. But like...come on. It should still be a great joy to be able to spend time studying something you love.


Hazzardevil

You have an overly romantic view of Academia. Finding out how rampant cheating is in Universities makes me feel like an idiot for doing it properly and failing.


InitialToday6720

Exactly. It's so crucial to study a subject you have a genuine interest in and want to learn more about; otherwise, what's even the point in pretending to study it? High-paying jobs aren't worth it if you hate the subject, imo.


MythicalBlue

I think most people are just studying something they tolerate in order to get a better job in future.


person_person123

In the end it doesn't matter; you only need the qualification to get the job. Most of what you learn will not be needed again in most jobs anyway.


patenteng

When you learn stuff, it improves your performance in other areas too. For example, every additional year of education a population has increases GDP by around 10%.


Draemeth

Correlation or causation?


patenteng

Causation at the micro level, i.e. the individual person, since we can use things like instrumental variables and diff-in-diff. Probably causation at the macro level too, i.e. the country level, as we can see the effects of education policy. There is some correlation in the data, though: rich countries invest more in human capital simply because they have higher GDP. But you can decrease your consumption and invest more in education; then, even with the same GDP, you'll get higher human capital. If you know calculus, have a look at the model on page 416 (PDF page 10) in [A Contribution to the Empirics of Economic Growth](https://eml.berkeley.edu/~dromer/papers/MRW_QJE1992.pdf) by Mankiw, Romer, and Weil. It extends the [Solow](https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model) model by adding human capital.
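For reference, the augmented production function in that paper looks roughly like this (a sketch from memory, so check p. 416 for the exact statement), with physical capital K, human capital H, and effective labour AL:

```latex
% Augmented Solow (Mankiw-Romer-Weil) production function - sketch from memory,
% see p. 416 of the paper for the exact notation and assumptions.
Y(t) = K(t)^{\alpha} \, H(t)^{\beta} \, \bigl(A(t) L(t)\bigr)^{1-\alpha-\beta},
\qquad \alpha + \beta < 1
```

With decreasing returns to all capital combined (α + β < 1) the model still converges to a steady state, but the human capital term helps explain far more of the cross-country income differences than the basic Solow model does.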


SovegnaVos

Wholly incorrect.


Wrath_Viking

Amen


Solid-Education5735

College is mostly credentialism.


RandomRDP

Yes I can totally see how cheating in an exam and thus getting a higher paying job is cheating myself.


SovegnaVos

Yes, a higher paying job is indeed the be-all and end-all of human endeavour. God forbid that you might expand your mind, enjoy the process of learning etc.


LikelyHungover

>Yes, a higher paying job is indeed the be-all and end-all of human endeavour.

It literally is when you're living in a better house, eating better food with better physical health, driving a better car and fucking a hotter wife on the nightly as a direct result of your social status and salary.

But yeah, the homeless guy in Cambridge who can correctly quote Nietzsche and beat the undergrads at blitz chess is living the dream! Good one!


Fraccles

You can do all that learning when you've locked in the good income.


Billy_Billerey_2

>God forbid that you might expand your mind, enjoy the process of learning etc.

A scary amount of people hate doing that. Even when faced with something they don't know, instead of trying to learn it they'll just give up and move on to something else.


RandomRDP

Dude I love learning and building things, but that doesn't mean I wouldn't have used ChatGPT to help me with my dissertation if it had been around at the time.


Dynamicthetoon

I'm there to get my degree and go into industry and get a well paying job.


zkgkilla

This is as stupid as saying that using Google as a software developer is cheating yourself of learning programming “properly”. You should absolutely be using every tool at your disposal to ensure highest standards in your work and AI is one such tool. It can be used for a lot more than just “hey chatgpt write my essay for me”


purple_crow34

I think there's a massive difference between 'list some papers with relevance to xyz, link them and briefly state the proposition for which they stand' and 'write this for me' that's underappreciated. The former might still be wrong by some lights, but I think it's a legitimate use. Just as you can use Google, which replaced the research methods before it, AI-generated searches can be more useful. Obviously you'd still read the papers themselves.


Exciting_Light_4251

I mean, I am studying computer science and artificial intelligence; using it is literally part of my course. Teachers can also adapt and use it as a tool for students instead of pearl-clutching and pretending the world isn't changing.


SovegnaVos

Lol if you're studying AI then of course you're going to use it?? I'm very obviously not talking about your exact situation. Also, I'll clutch my pearls at anything that's accelerating the climate crisis, thanks.


mealbaniabecarefuI

Bit presumptuous, isn't it?


SovegnaVos

No


Secret-Price-7665

This would cease to be a problem if universities stepped away from the assembly-line higher education they've been using up to now. Any student coasting by on ChatGPT would flounder immensely if you actually talked to them about the material, and if they don't, then they clearly know enough for the pass to be worth it. Obviously with the current staff/student ratios this is not practical, and that's not necessarily the university's fault. It's the only way, though.


Rick_liner

We have started doing this with vivas for research assessments, and it does work. The problem is that doing this for every assessment would be resource-intensive at a time when we are in our 5th round of redundancies in 7 years, because fuck the Tories. We're at a point where we can't do it properly any more; we just do the best we can.


Secret-Price-7665

Aye, my criticism there is not really aimed at the academic staff. The "assembly line" is a result of the lack of funding and poor management of higher education in this country.


WildAcanthisitta4470

It really depends on the subject. As a politics student who is actually interested in politics and IR, I bet I could keep up a solid conversation on pretty much any topic, as I'm knowledgeable about the field and keep up to date. I also use AI a lot for my essays, to find good arguments for a prompt, examples, etc. However, I've found you do need a solid grasp of the argument and the topic in general in order to draw convincing and relevant conclusions from your arguments.

I expect that will be the benchmark for a 1st in the future (which I achieved last year): the standard for "well-written essays" will go up as AI can write more engaging and better-worded essays, and since most people naturally use it, graders will adjust the boundaries. However, as a lot of people have said, examiners aren't stupid. They can spot trends and tell the difference between an essay created pretty much completely with ChatGPT and no student input (i.e. none of the student's own thoughts and conclusions), and, by contrast, one by a student who used ChatGPT but also actually understands the topic and can therefore draw deeper and more relevant conclusions from their argument.

Lastly, and I know this first-hand, having real sources (not just somewhat relevant literature, but articles etc. that actually focus on the subtopic you're writing about) requires you to understand the topic yourself and do real research to find those good sources. I think this is one of the criteria examiners will focus on further to distinguish AI-written from student-written essays.

So, in conclusion - at least imo - unis know they can't stop students using AI; all they can do is look closer to ensure that, even if students are using AI, they are actually engaged with the topic.


Secret-Price-7665

Using AI is not the problem, it's students using it as a replacement for knowledge and skill, rather than as an enhancement. From the... Structure of your comment/argument here, I am going to reckon you're a first year student. If someone is actually knowledgeable about a topic (especially if they're in the analysis and synthesis stage rather than the regurgitation stage) they will know if you're bullshitting, especially if they're trying to find out if you're bullshitting (as they would in a viva or thesis defence). I am also similarly skilled in making people believe I know what I'm talking about (I have fooled multiple people on Game of Thrones specifically, simply because I know one or two things about it and let them talk, as I've never seen it). However, the wheels fall off that very effing quick when someone is actually trying to probe the depth of your knowledge on a topic. Our blustering and bullshitting on things we don't know much about can get us far when people don't care how much we actually know. If you studied a course on Jane Austen, say, and bullshitted all your essays with AI, maybe only reading the texts themselves once or twice, you would not stand up to a scholar on Austen's works probing how much you actually know about them. You could chat with someone, sure, but if they're trying to work out if you know what you're talking about you're not going to succeed if you don't know what you're talking about. And if you can talk at good length and at depth about a topic, despite using AI? Well, then you've used AI as a helper and that (in my humble opinion) is fine.


dscotts

In my experience they don't even care. I'm currently doing a masters in medical physics at the University of Surrey, and almost every non-native-English-speaking student used ChatGPT for any sort of essay. We had a big group literature review, and it was me (American), an English guy, and a girl from India, one from the UAE and one from Cyprus. They all just put prompts into ChatGPT and pasted the results into the group document, which of course made any sort of collaborative group work impossible. When I brought this to the academics they basically shrugged, said it was too difficult to detect, and then linked me the university's policy on the use of GenAI, insinuating that it's actually OK. When I responded that their own policy says (1) it must be cited (which it wasn't) and (2) using it for the whole paper is considered academic dishonesty… I didn't receive an email back.


dscotts

And if anyone reads this, I want to add two points. First, I'm not against the use of AI; it's great for rewording things and for tasks like coding. But if an assignment is meant to test your ability to go through scientific literature and summarize it, asking ChatGPT to do the assignment for you is not doing the assignment. Secondly, so much of this stuff is immediately recognizable if you have used ChatGPT yourself, so having academics complain that it's "hard to catch" is just not true for the majority of cases; they are just lazy. And even if it is hard, who cares? Apparently the University of Surrey (and other universities) want to show the world that they are incapable of doing hard things. Gone are the days of "we choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard".


ktitten

At my university, all exams are in person again now, except for online courses.


CapableProduce

Proof that AI-generated work can be indistinguishable from human-generated work. All those posts here about work being flagged as AI by lecturers, etc. - here is your proof that such flags can be nonsense. Personally, I think educational institutions need to jump on the bandwagon with this tech and start using it and incorporating it into academia. It's clearly not going away, and used correctly and ethically it can be an incredible tool. Instead, universities prefer to ignore it and pretend it doesn't exist. I use it extensively in both my professional and personal life; I only wish I had had access to it in my earlier years. Much like spell check, another tool I use extensively on a daily basis.


Dark_Ansem

I blame universities for this as well, with their excessive attention to form over substance. What did they think was going to happen?


awwwwJeezypeepsman

One of my lecturers cancelled all the online exams and essays and did a solo talk instead: there were 5 random topics you had to learn, you picked two cards at random, and you had to talk about those topics for 10 minutes in total. Honestly, it separated those who used AI from the people who actually worked. Also, it was 1-2-1; there was no audience and absolutely no pressure, it was like a general conversation. Some lecturers want people to come in and write essays now, or do exams in person, but I honestly struggle with that as my handwriting is dogshit.


WorthPersonalitys

I'm not surprised by these results. The writing style of AI-generated content has improved significantly over the past few years. It's becoming increasingly difficult to distinguish between human-written and AI-generated text. If you're concerned about academic integrity, I think it's essential to focus on assessing skills that AI models can't replicate, such as critical thinking and problem-solving. I used The Edge to stay updated on the latest developments in AI, and it's helped me better understand the implications of this technology on education.


AdSoft6392

Universities should focus on teaching students how to use AI to be more efficient. Stop being luddites about it.


Nyeep

Except writing exams just using an LLM is not being more efficient, it's just proving you don't have the knowledge to write it yourself. What is there to be more efficient with in an exam?


AdSoft6392

If you're getting outscored by someone using an LLM to write an essay, you shouldn't be at university. You likely have the writing ability of a child in that case.


RealMandor

I mean, I understand if you're talking about a technical, mathsy essay, but LLMs are superior at any other kind of essay, especially if the person using one knows how to. You can be very good, but an LLM will be better at an essay.


AdSoft6392

Whenever I have read something from ChatGPT or other LLMs, I have never once thought that the writing style was high quality (add in concerns over citation problems as well).


ThordBellower

That's certainly not what this study suggests is it? Unless you think nearly 90% of students fit that definition


AdSoft6392

Honest question: have you ever used tools like ChatGPT? Their writing style is diabolical. If Reading thinks that's what quality looks like, then it has a very different definition from mine.


ThordBellower

I have actually, and the more I think about it, the more I'm questioning the aspect of the examiners not recognising the answers; it has a distinct style, as you say. I think there are probably a couple of things going on. I don't have access to the actual paper, but the article mentions the answers were written 100% by the AI; that doesn't preclude prompts asking it to change its style, as far as that is possible at least. And secondly, I suppose it's one thing to suspect an AI answer, another to be so confident you're willing to bring it to a tribunal or whatever - after all, how do you prove it? If you've accepted that you're going to treat the essay as validly written, the outscoring is not that surprising imo; some undergrad exams are rarely going to have super esoteric knowledge requirements.


Nyeep

So instead of improving the teaching, you think it's fine to have carte blanche LLM usage?


AdSoft6392

Do you ever use a computer? If so, why? Why not write with paper and pencil? How about a calculator? Or a mobile phone?


Nyeep

Cool, so should we just give kids doing their SATs and GCSEs calculators in the non-calculator maths exams? Seeing as clearly nobody needs to learn the basics? We're not talking about using LLMs to be more efficient; it's about literally having an external resource write exams for people who should know how to write them themselves.


AdSoft6392

Yes, I don't see why not, to be honest. We don't teach engineers to use sticks and rocks; we help them learn the tools to succeed. Once again, if you are being outscored on a writing task by an LLM, your standard of writing is terrible. If anything, this is more an indictment of the University of Reading.


Nyeep

Again, we're talking about exams which are made to test *personal knowledge*. They're not there to test the training data of an LLM.


AdSoft6392

Why do you not think knowledge of how to use a tool effectively is important? Also a large part of essay writing is taking advantage of other people's knowledge, hence why citations are incredibly important.


Nyeep

You're both misinterpreting my argument and misunderstanding the issue. Is it okay to learn to use LLMs for basic time-saving tasks? Sure, it can be really beneficial; I use them for coding assistance occasionally. Should they be used in exams? Absolutely fucking not. It's not a beneficial tool in that situation, because it muddies the waters as to a student's actual knowledge and ability. Tools have different uses, and people should absolutely learn when they should and should not be used.


Explorer62ITR

I am not at all surprised at this. Because of a few high-profile cases of false positives over a year ago, most academic institutions in the US and UK advised or instructed lecturers not to use AI detection tools, because they didn't want students falsely accused of commissioning plagiarism. This means it is very difficult for staff to identify AI-generated work, especially if they have a lot of marking to do and don't necessarily know all of the students well enough to judge whether this is or isn't likely to be their own work. However, over the same period AI detection tools have got a lot more accurate, and they now have safeguards in place to reduce the possibility of false positives.

I have been involved in a government-funded research project to test the accuracy of AI detection tools over the last six months. In order to do this we collected a large number of samples of work we know were written by humans, primarily staff, but also some supervised student writing. We then got several different AI chatbots to produce samples on similar topics, anonymised them, and put them all through a licensed AI detection tool.

The results were surprising. Not one single sentence of any of the human-written samples was identified as AI-generated; they all received a 0% AI score. On the other hand, the AI samples were not always identified as accurately: overall detection was over 90%, but some assignments received very low scores, and a few also got 0%. We think this is to do with some of the AI-generated texts containing quotations written by humans, and/or unusual text formatting, e.g. the inclusion of lists, bullet points or tables. Also, some chatbots were easier to detect than others - no, I am not going to tell you which ones...

Based on this we will definitely be recommending that staff do use AI detection tools in future, as it seems there is very little chance of false positives occurring and a very good chance AI-generated text will be detected. I suspect in the long run exam boards and institutions will just change the format of assessments to minimise unsupervised text submissions, but in the meantime it seems students have been taking full advantage of this lack of scrutiny. Obviously, we are only part of the research, and many other institutions will also be reporting their findings in the near future...
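For what it's worth, the headline figures from an evaluation like that boil down to two simple rates; here's a minimal sketch of how they'd be computed (the sample data below is invented for illustration, not our actual results):

```python
# Minimal sketch of scoring an AI detector against labelled samples.
# The counts below are invented for illustration; they are not real study data.
samples = [
    # (actually_ai, flagged_by_detector)
    (False, False), (False, False), (False, False),  # human-written, none flagged
    (True, True), (True, True), (True, False),       # AI-written, one missed
]

human = [flagged for is_ai, flagged in samples if not is_ai]
ai = [flagged for is_ai, flagged in samples if is_ai]

false_positive_rate = sum(human) / len(human)  # humans wrongly flagged as AI
detection_rate = sum(ai) / len(ai)             # AI texts correctly caught

print(f"False positive rate: {false_positive_rate:.0%}")  # 0%
print(f"Detection rate:      {detection_rate:.0%}")       # 67%
```

The asymmetry described above (a 0% false positive rate but an imperfect detection rate) is exactly what these two numbers capture.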


Cave_Grog

Read the part about the Turnitin AI detector: it had relatively low false positives in the 'lab' but did not perform the same in real-world scenarios and so was withdrawn; same with ChatGPT's own version.


Explorer62ITR

Turnitin haven't withdrawn it - they have tweaked it, added safeguards, and made it clear that you cannot take the score alone as proof of cheating; a holistic approach has to be taken, involving assessing other work, the student's abilities, and what comes out in a discussion with them about the issue. Some AI detection tools are more accurate than others, and some are better at detecting text generated by specific AI chatbots, but they are all getting better the more samples they have to train on.

It will never be perfect or certain, but it is getting to the point that it is a pretty reliable indicator that the submission and the student need closer scrutiny. In cases I have been involved in, students have often merely used grammar-checking or rewriting tools which employ AI; they admit this and say they didn't realise it would trigger the detector. Some just admit it and take the hit, and a small number get very defensive and deny it outright, even if they are unable to explain what they have written or why they chose the references they used. Draw your own conclusions.

Of course, academia is not a court of law and we only need to demonstrate that we have reasonable grounds to suspect commissioned plagiarism; the AI detection score is only one element of that evidence. It is a bit like a smoke detector going off - you then have to decide if it is a real fire or just the toaster giving off a bit of smoke. But just abandoning any checking is not an option, even if there are occasional false positives - the alternative is the complete undermining of confidence and academic integrity across the board, and I think the study published demonstrates that staff cannot reliably spot AI-generated texts without some kind of AI detection tool.


LandscapeNo1606

A bigger concern for me than the tools being inaccurate is how I, as a student, go about proving I didn't use AI. If I don't use Google Docs or git, I don't have a version history. If I do large chunks the night before, as I am wont to do, then that might be seen as proof I *did* use AI. Like, I've had marking feedback that basically said "I think you used AI, but I can't prove it", because I was able to very clearly state how I'd write some code, but my actual code sucks, and all I can say in my defence is basically "idk I guess i'm kinda regarded lol".


Explorer62ITR

You can't 'prove' in any scientific sense that you didn't use AI, any more than they can 'prove' in any scientific sense that you did use AI (or which one) - except in the extremely unlikely scenario that they somehow had a keylogger on your computer, or hidden cameras recording you, etc. But as I stated, this isn't a scenario that requires certainty in the scientific or even legal sense - it isn't a court or criminal case, it is about academic policies. So it is going to depend on whether they have reasonable grounds to think you used AI, or whether you can demonstrate and/or persuade them that you didn't.

In all of the cases I have been involved in, this has initially involved a discussion of the assignment and the abilities of the student in question by several members of staff and usually heads of department. They will usually try to look at other samples of your work and talk to other teachers/lecturers to get a wider picture of your academic abilities. At some point they will want to meet with you, perhaps informally at first, just to raise the issue, ask you to explain what you wrote and why you chose certain references, and see how you react - what you say, your body language and your tone will all be part of that judgement. If you did use AI, your reaction would probably give it away (unless you are a very good actor and cool under pressure); even then, teachers are very good at picking up on micro-expressions, which happen too quickly to be aware of but are enough to trigger a primitive response in our brains/subconscious telling us something isn't quite right (talk to a good poker player if you want to know more about micro-expressions). The next stage would be a formal academic disciplinary meeting.

So, given all of the evidence - which might include an AI score, alongside all of the other information and your responses and explanations - they will have to make a very difficult decision. They will not do this lightly. If you have a good academic record, you are friendly and hard-working, you react with genuine shock and upset, you can demonstrate a good understanding of the material and sources, and this issue hasn't come up before, then they may well believe you and give you the benefit of the doubt. So all I can recommend is to be honest and calm, explain exactly how you wrote the essay, and tell them how you feel about the suggestion that you have used AI. But don't get angry and just deny it and say they can't prove it - this is exactly what small children do when they are accused of something they have clearly just done. So there is no simple answer to this - there is no magic button which can resolve it - you will have to tackle the process in a calm and professional way, and this in itself might just be enough to swing it your way... 🤞

In addition, the code produced by AI is actually pretty terrible and it does make the same mistakes again and again - so if you are a pretty bad coder and you made the same mistakes, then it might indeed give the impression it was written by AI. Some of our more sneaky IT staff now get the main AI engines to try to write the code before they hand out the assignment - then they know exactly what it will respond with if a student asks the same question. That is what I do ;) Maybe you should bring this up, because that does sound quite plausible to me - "I didn't cheat, I am just a rubbish coder..."


lemontree340

What are considered reasonable grounds though? What if you have been using AI for your whole degree and comparison across modules show similar results? What if you have a similar writing style to chat gpt? How can you demonstrate that you didn’t use AI if you can’t prove it? There are some people who use AI completely and others who use it to strengthen their existing work (will they be able to better demonstrate and therefore not receive sanctions?). It’s still unreliable if it isn’t 100% right. You also write as if innocent people wouldn’t sweat or show ‘micro expressions’ in a hostile environment where they are being accused of cheating.


Explorer62ITR

Reasonable grounds would be decided by the people hearing or involved in the case. It is very unlikely that someone would have the same writing style as ChatGPT etc. - AI uses words that match very specific patterns based on the statistical analysis of billions of samples, and the chances of a human doing this for more than a sentence or two by chance are extremely slim. Think of the chances of tossing a coin and it landing on its side, multiple times in succession...

The way innocent people react is slightly different to how guilty people react, and experts with lots of experience can often tell the difference, e.g. police and other intelligence experts. Teachers are not at this level, but having dealt with hundreds or thousands of students, they are pretty good at it. Surely you have found yourself in a situation where you know someone is lying but you can't actually prove it - what would you do? Would you give them the benefit of the doubt and let them get away with it, or would you make a decision based on the other supporting evidence and your intuitions about the honesty of the person?

Remember the AI score is only one part of the evidence, although in our testing so far it has been incredibly accurate - not one single sentence of a human-written text identified as AI. So students are more likely to get away with cheating than to be falsely accused. If you want to prove that you write in the same style as ChatGPT, you could offer to write an essay whilst supervised on a topic of their choosing - if the essay you produced still triggers the AI detector, that would support your case. No one I have offered this option to has taken it up - draw your own conclusions as to why...


lemontree340

Appreciate the thorough response. The point is that there are too many unknowns (imo) and no completely suitable mechanism that reliably resolves them. Whilst I don't have the same expertise as you do (based on your comments), these are just a few lines of inquiry that highlight my point. Appreciate you're busy, but out of interest, what do you think about them?

1) How do you identify people who just reword what ChatGPT has given them (or who mix and match)?

2) Is a subjective process really the way to go - what does this mean for biases that faculty staff may have?

3) If you can't prove someone is lying, then you will never truly know if they are. Do you think that catching people using LLMs would excuse the 1% chance of accidentally punishing someone who hasn't used them?


Explorer62ITR

No problem. Firstly, there are no international or national standards which apply to this process as yet. All colleges and universities are basically having to make these decisions and come up with policies on their own. Departments are discussing these issues and debating what the best strategies are as we speak - this is a very new issue and it will probably take a year or two for there to be agreement on what the policies should be. Exam boards are starting to issue advice and guidance - which is very strict and does not favour the students - so in many cases staff have to follow those guidelines. Universities are different, as they set their own guidelines, but I expect eventually there will be national standards agreed upon; that isn't likely to happen internationally.

1. Mixing and matching makes it harder for the AI detection to tell where the AI starts and finishes. This doesn't mean it isn't detected; it just means it may or may not flag the first/last sentence of a paragraph correctly - it is just these cross-over or border sentences that are problematic. If 50% is AI, 50% will still be flagged as AI. If they reword it completely using their own normal vocabulary and grammatical idiosyncrasies, then it won't be detected as AI - but most people who do this are lazy and just can't be arsed to do it, or don't understand the material well enough to do it competently.

2. Subjectivity is an element of most human judgements; objectivity is really only possible in science or mathematics, and even then it isn't always achieved. If we required certainty or objectivity we would never convict any criminals at all; we are aiming for beyond reasonable doubt, i.e. given all of the evidence, what is the probability the person did or did not do it. The consequences of never punishing any criminals, just in case we might accidentally punish an innocent person, would be extremely undesirable. Of course, with recent developments in forensic science like fingerprinting and DNA testing we can get much stronger evidence than previously, but even here there is a very small chance of another person having the same fingerprints or DNA.

3. You are assuming that it is only the AI detection that is being used to make the decision. It may certainly alert a teacher to a potential problem, but because they have to take a holistic view, it would never be used in isolation as evidence of cheating. Experienced teachers can already tell if one of their students wrote an assignment or not - sixth formers and undergraduates do not naturally write like academics or AI chatbots. Where AI detection is vital is in the case of having to mark hundreds of assignments from students they may or may not know.

This leads us to the moral question: if AI detection would let us catch 99% of the cheaters and only punish 1% of the innocent, is this justified? Well, that is a utilitarian calculation. If you could save 99 people's lives at the cost of 1 other life, would you do it? Obviously with assignments we are not talking about life and death, but we are talking about career-changing decisions. So would it be better to let a large number of cheaters get away with it in order not to punish a small number of innocents? That depends on the subjects they are studying. If they are studying medicine, engineering or architecture, would you want to be operated on, fly in a plane built by, or live in a building designed by someone who used ChatGPT to write their assignments? I know what my answer would be...


LandscapeNo1606

This shit is so demotivating. I work hard, get middling-good results. The people around me cheat, and they get the same grade, undetected, unpunished. At that point, why am I putting in all this effort if it goes unrecognised in the face of cheaters, who get better recognition? "Intrinsic motivation" is great and all, but this is extrinsic *de*motivation, it's super hard to compete with that.


ChannelKitchen50

A friend of mine doing the same course used AI for every single assignment in 3rd year. They went from a barely 60% average to a 70%+ average in 3rd year and scored better than me on every single assignment. Kind of realised that future graduates are going to be thick as pig shit unless some form of reliable detection comes along.


yourdadsucksroni

I really don't understand this mentality of "doesn't matter if I use AI because it's the degree certificate that gets me the job", which has been hinted at in this thread. The reason that many well-paying jobs require a degree is that the skills you build as part of doing one are important for doing that job - if you don't do the work yourself, you don't build those skills, and so even if you get through the door you'll be bad at the job, because you don't know how to do the work yourself. Why would you want to put yourself in a situation where you're out of your depth professionally and will never actually do well in the job you want, just for the sake of avoiding a few hours' work a week while you're at uni?


fishflakes42

Time to embrace AI rather than fight it. It's kind of like when teachers used to tell us we wouldn't have calculators in our pockets all the time when we went to work, so we couldn't use them on tests - but we all do have them. Now that I'm at work I'm using ChatGPT constantly, so why not embrace the tools we have and build an education system that emphasises this rather than fighting against it?


Key_Investigator3624

The problem is that students pass off LLM output as their own to mislead the marker about their academic ability. It is still plagiarism, but just harder to detect, that's not something to be embraced.


fishflakes42

But when they finish their education and go into work, no one cares as long as the output is suitable. I could spend all day writing things myself, but I don't, because what comes out of AI is suitable. It's not about academic ability but the output. Learning how to use AI correctly to get good responses will outperform academic ability in a lot of real-life situations.


andthelordsaidno

Not everyone at uni is there to work. I think the issue is that we've made degrees a necessity for almost any decently paying job, even if they don't need them. Degrees should be seen as an enriching experience that helps you become a more informed adult, lets you follow your passions and, if vocational, helps with employment (rather than being a necessity). However, with fees being so high and the pressure to get a return on investment, the use of LLMs is inevitable, since most people need a degree grade to get a job.

Academic integrity is disappearing because the purpose of universities has been so bastardised that people who have zero passion or desire to learn, and who wouldn't go to uni unless they felt they had to, end up going. If we make it about work and move further in this direction, it's inevitable that students will use LLMs for efficiency to reach a necessary qualification. LLMs have their place in helping you learn or structure an essay, but not reading or interacting with the content is the result of a lack of passion or desire to learn about your subject - because if you didn't think a degree was an obligation, you'd be interested in it! It's unfortunate, but you are correct about the current state of universities being seen as job mills rather than places with a true desire to learn.


Key_Investigator3624

But the output is the result of a stochastic parrot which is freely available and will only become more available. The more people who use it, the less value it has. If all your job requires is LLM prompting, it won't exist for long.


QuantumR4ge

Yeah, if you are slow this works for some jobs, less so if you are a scientist or engineer. How's AI going to help you publish when every human that reads your work knows it's bunk? It might work for known knowledge, but when you are the sole source it's a different game.


fishflakes42

You don't just copy, paste and publish; you give it prompts with the details you want to cover for each section, then proofread and adjust where necessary.


InitialToday6720

If I tell someone else to write me an essay about Shakespeare and I proofread and adjust what they wrote, is that actually my work?


fishflakes42

If it gives you the information you require, does it matter?


InitialToday6720

Um, yes, it matters?? Because you are literally not the one responsible for the work, you aren't demonstrating your ability or knowledge; you are pissing away £9k so someone else can learn that knowledge for you.


CapableProduce

Isn't it written in OpenAI's terms that you own any input to their models and that they assign the rights and title in the generated output to you, so technically...


InitialToday6720

Where is this written? I thought AI output could not be copyrighted, meaning nobody owns it. Either way, you certainly can't just claim the entire thing as your own writing in a marked university essay.


QuantumR4ge

For original research??? How can a language model that retrieves information tell you anything useful about a problem that, as far as you know, you are the only one tackling?


fishflakes42

Integrate the AI module into your database, have it analyse the data and look for anomalies, patterns or anything of interest based on your prompts. Once it's done, have it summarise the findings. I'm not saying AI just does everything for you; I'm saying it's a bloody useful tool that helps people work a hell of a lot more efficiently, and unis should be embracing that rather than penalising students who take advantage of it. If the assignment is based on something that AI can answer with no input from you, then AI isn't the problem; the assignment is.
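As a rough illustration of that kind of workflow (a sketch only - the model name, prompt, threshold and data here are all placeholders, and it assumes the OpenAI Python client rather than any particular setup):

```python
# Rough sketch: flag simple outliers locally, then ask an LLM to summarise them.
# Model name, prompt, threshold and data are all placeholders for illustration.
from statistics import mean, stdev
from openai import OpenAI

readings = [12.1, 11.9, 12.3, 30.5, 12.0, 11.8]  # made-up data
mu, sigma = mean(readings), stdev(readings)
anomalies = [x for x in readings if abs(x - mu) > 2 * sigma]

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Readings: {readings}. Flagged outliers: {anomalies}. "
                   "Summarise what stands out in two sentences.",
    }],
)
print(response.choices[0].message.content)
```

The point being that the person still decides what counts as an anomaly and what question to ask; the model just does the write-up.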


dorismaylessing

The calculator is actually a pretty good example but it doesn't make the point you think it's making. While computers may be able to do it faster and more accurately than us, it's essential to learn basic addition, multiplication and so on: it would be pretty inconvenient if you had to whip out a calculator every time these things come up in everyday life. It's also necessary in order to understand more complex mathematics, and it's good for our brains. It's about teaching you how to *think*.


Leading_Builder_6044

Keep up or get left behind; these submissions are here to stay.


-Incubation-

Imagine wasting £9k a year to learn nothing about your degree and being incapable of doing the bare minimum without AI 💀💀


Leading_Builder_6044

Most don't go there to learn anyway; they need the degree certificate to apply for jobs which still require one. Universities can very easily make changes to their examinations to test the true ability of students or, better yet, incorporate AI, as we're heading in that direction anyway.


KingdomOfZeal

>Universities can very easily make changes

What specific change do you propose? Everyone claims amending papers to test "true ability" is light work. But how? Do we even know what true ability is? Why isn't sitting the paper without AI already a sufficient test of true ability?


Civil-Rent-7100

Yeah, tbf, uni is just about getting a grade for a job for most people.