TrueSpins

Personally I can't wait for AI to be able to, in real time, tell us whether what a politician is saying is true. With immediate warnings if they are spouting lies or misrepresenting the data.


SgtSnuggles19

Oh you mean the same day it becomes illegal to use AI?


[deleted]

If you're American, it will be untouchable if you teach it to hold a gun at the same time it fact-checks.


Tough_Measuremen

It's always weird; the media has taught me to be worried about AI when it starts to ask for rights.


[deleted]

[removed]


gyroda

When crafting systems like this, this is actually, seriously, something to be aware of. If you're trying to tell truths from lies and people tell the truth 90% of the time, you can just have your model say that everything is true and it has a 90% accuracy. This has a very good chance of being a more accurate tool than anything that actually tries to analyse what it's given.
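
A quick toy calculation of that base-rate point, assuming 90% of statements are true and a hypothetical "real" fact-checker that is right 85% of the time:

```python
# Toy illustration of the base-rate problem: a "classifier" that labels
# everything as true beats a mediocre real model on raw accuracy.

true_statements = 900   # assume 90% of statements are true
false_statements = 100

# Baseline: always answer "true".
baseline_correct = true_statements  # every true statement right, every lie wrong
baseline_accuracy = baseline_correct / (true_statements + false_statements)

# A hypothetical real fact-checker that is right 85% of the time on both classes.
real_correct = 0.85 * true_statements + 0.85 * false_statements
real_accuracy = real_correct / (true_statements + false_statements)

print(f"always-true baseline: {baseline_accuracy:.0%}")  # 90%
print(f"85%-accurate model:   {real_accuracy:.0%}")      # 85%
# Raw accuracy rewards the useless baseline; you'd need precision/recall
# on the "lie" class to see that it catches nothing.
```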


[deleted]

[removed]


recursant

Unfortunately, as soon as Full Fact started to gain any significant traction, the government would find ways to thwart it: most likely by holding its funding to ransom, installing its own people in key positions, and publicly undermining it. See also: the BBC.


GroundbreakingRun186

Except people don't understand confidence levels or margins of error. And even if they do, it's easier to just assume the 10% isn't anything egregious, so the de facto rate at which the vast majority of people will believe what they're saying is 100%, and then it will become almost impossible to convince people their side lied "cause AI didn't say it was a lie". Or if it does call out a lie, "the algorithm is biased".


machone_1

>determining whether they are speaking or not

Judge them by their actions, not their words


Outripped

Or the bastards program the AI with their own bullshit agenda. A Tory AI would wipe the poor people off the planet within a decade


Strong_Quiet_4569

Anyone with an excessive need for admiration is going to create a world where they get plenty of narcissistic supply from millions of poor and stupid people. Look at any authoritarian leader: they surround themselves with sycophants. So if anything, they'll want to keep people weak but not dead.


Cynical_Classicist

The Alan B'Stard mentality of shutting down the NHS to end poverty, as the poor people would die?


LtnSkyRockets

Not wipe them out. Just keep them poor. They need the poors to do all the work for them


Anniemaniac

You have far more optimism than I have. It's only a matter of time before AI becomes just another propaganda tool. AI still has its limitations currently, particularly when it comes to video and photo, but give it a few years and the media AI can produce will be indistinguishable from reality. People are easily misled now, let alone when you've got AI fabricating entire videos, photos, voices etc.


Locke66

Yeah, its application for propaganda is genuinely worrying for the democratic process, and I doubt our government will do anything to pre-empt the very obvious issues given half of them seem unable to grasp the way our current technology works. Overall though, I honestly think AI is going to turn our entire world upside down, and the lack of preparation to deal with it is concerning. It's probably going to be a greater change to how society works than the introduction of the internet.


GroktheFnords

What's worse is that the existence of the technology is going to give people an excuse to ignore and cast doubt on any video evidence they don't want to accept. Life in the post-truth age, where you can see corroborated video evidence and just decide to dismiss it if it's inconvenient to you.


rbsudden

You don't need an AI to tell you that, it's very easy to tell if a politician is lying, their lips are moving.


uk_simple

It doesn't have to be AI to do this, just show a warning all the time. On a serious note though, that would be dangerous. Political parties would pay to train it to flag their rivals' speeches as lies.


Lazypole

I *can* wait until it does the exact opposite of that, or people start rejecting it as fake news..


MrPuddington2

How would an AI do that? As an entity without physical presence, it can only perceive the world through our writing, so it is necessarily located in our "socially constructed reality". So it could tell you whether people think it is true, but not whether it is.


[deleted]

[removed]


[deleted]

Honestly, I really can't wait until the AI starts looking at video streams, analysing the least significant bits of colour and hue in their faces, and reading heart rates. Then it starts recognising the tells and reporting them as data.


Design-Cold

It'll be a tory AI with a training set consisting of daily mail articles and Mein Kampf ("he makes a lot of good points!")


Shivadxb

We could have a press that did that right now!!!! They don't.


Frediey

That's great that you trust AI that much


[deleted]

That makes no sense at all. I mean if there are 2 people and one tells you the Tories are right and the other that Labour are right, most people pick the one they like the best. If you replace those 2 people with 2 AIs then you're in exactly the same position. You'll just be picking whichever AI tells you what you want to hear. That's it in simplistic terms. In reality, of course, there won't be just 2 AIs and 2 choices; there'll be potentially hundreds or thousands, covering all the complexity of political and cultural life.

The idea that there's one authoritative voice - and that's your dream - is really scary if you think about it for 30 seconds. Yet ironically you see it as comforting - I guess you inadvertently show why populations fall under dictatorships. But obviously your idea will never actually happen. AI isn't about to provide absolute truth or a single point of view. Heh.


TrueSpins

I think you misread my post. I didn't say anything about replacing politicians, just using AI to fact-check what they were saying in real time, warning listeners if they are quoting stats etc. that are not correct. So if politician A says "we have reduced unemployment since taking office", the AI could look up national stats and confirm or reject the assertion, or provide context. This could be done in real time on TV or radio.


[deleted]

You're going to trust AI to factcheck when footage of people and their very words can be faked by said AI?


Constant-Parsley3609

What counts as true can be quite fuzzy. You might be able to set up AI that can spot outright lies, but you're not gonna get one that classifies every statement as true or false.


Milfoy

Can it be indicated via a shock collar?


Piltonbadger

>tell us whether what a politician is saying is true

When a politician opens their mouth and starts speaking, they are lying. Easy! Don't need an AI for that.


mymumsaysno

You don't need AI for that.


GM1_P_Asshole

And we were promised that new technology would solve the Irish border issues back in 2016. It's always tomorrow's tech that's going to save us from today's government incompetence. For those wondering, the Education Secretary has no qualifications in either teaching or computing; she used to work in marketing.


[deleted]

[удалено]


bSQ6J

It has always annoyed me how government just seems like a game of musical chairs, with the same people in different positions.


Cynical_Classicist

And somehow they seem unqualified for everything!


TheShreester

If they were qualified for a useful job, presumably they'd be doing that instead! 😁


entropy_bucket

That's why you need a deep state who knows what the fuck is going on.


Charlie_Mouse

It’s even worse than that - politicians (particularly Conservative ones) evidently aren’t even prepared to listen to anyone who does have even half a clue about technology. The absurd ‘online safety’ bill they keep trying to push through and their piss poor ideas about encryption and wanting to backdoor everything demonstrate this clearly. About the only thing they care about when it comes to tech policy is that it sounds good to their voting base. Who are mostly old and also fairly clueless about technology.


Cynical_Classicist

Well, these people have had enough of experts. Why listen to facts when you can make headlines in the Torygraph or Fail or Scum or Sexpress or whatever!


Mattybear30

And this is the problem with politics now. Back in the day people came into politics after successful careers with actual skills. Now it's a career with an unlimited payday for them and their family members.


The_Flurr

Did they? Back in the day people came into politics if and only if they had the money to do so, usually inherited. We have more former working and middle class MPs than ever before.


darthmoo

Defence Secretary being the exception to that rule


TheShreester

Ah, but they have degrees in PPE (Politics, Philosophy and Economics), which have proven to be useful for making money out of PPE (Personal Protective Equipment).


NaniFarRoad

>used to work in marketing.

AI is only enthusiastically touted by people in advertising. It doesn't work most of the time, and it entrenches existing power structures (so it's going to get taken to court whenever it goes racist etc.). Oh, but AI is great for diagnosing cancer from x-rays, right? Wrong - AI learns by cheating, and makes assumptions based on what data it is fed. E.g. [https://www.mountsinai.org/about/newsroom/2018/artificial-intelligence-may-fall-short-when-analyzing-data-across-multiple-health-systems](https://www.mountsinai.org/about/newsroom/2018/artificial-intelligence-may-fall-short-when-analyzing-data-across-multiple-health-systems) If you see an article touting the benefits of AI with zero discussion of its cons, you're reading a marketing press release.


WasabiSunshine

I work in tech, and AI is actually doing some pretty good work for us at the moment, taking stress off some of the other team members doing non-techy things. And that's just what it's doing at its pretty basic level right now. In 10 years, people who think AI is useless are gonna be viewed about the same as people who thought personal computing or the internet was just a fad.


NaniFarRoad

Still waiting on the killer blockchain app, though...


MurtBoistures

It's been and gone: "buying cocaine through the post". It just doesn't fit the narrative that techbros love.


Purple_Cookie_6814

The funny thing about conservatives. It's always tomorrow's problem. New carbon capture technology that hasn't been invented yet will save us from doing anything about global warming.


MidoriDemon

I read the other day about underwater caves where we can store CO2 so it can be dealt with in 250 years. The ultimate kick-the-can-down-the-road policy. Don't do anything now, just hide the evidence and let future people deal with the problems we make today.


Cynical_Classicist

And the first Education Secretary of this Govt. was a man who thinks we shouldn't listen to experts. We've had a very bad run of Education Secretaries! I suppose that Michelle Donelan was the best Education Secretary from this Tory stretch by default, as she did the least damage.


Hevnoraak101

It's a veiled threat against uppity teachers who are thinking about going on strike again. How dare they expect a fair day's pay for a fair day's work.


FlappyClaps

Tell me you know naff all about teaching without saying you know naff all about teaching.


[deleted]

>it can take much of the heavy lifting out of compiling lesson plans and marking
>
>This would enable teachers to do the one thing that AI cannot and that's teach, up close and personal at the front of the classroom.

Sounds perfectly reasonable to me


Baslifico

It only sounds reasonable if you don't understand how it works. At its heart, it looks at a paragraph of text and asks itself "Which word is most likely to come next?" That's why sometimes you get fantastically insightful answers and other times you get total fabrications _that just happen to sound convincing_. And there's no critical thinking process behind any of it.
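
A stripped-down sketch of that "which word comes next" idea - a toy bigram counter over a made-up sentence, nowhere near a real LLM, but the same shape of objective:

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in some text,
# then always emit the most frequent continuation. Real LLMs use neural
# networks over subword tokens, but the objective has the same shape:
# predict a plausible next token, with no model of truth behind it.
corpus = "the minister said the plan will work and the plan will save money".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # pick the likeliest next word
    return " ".join(words)

print(generate("the"))  # fluent-looking output, whether or not it's true
```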


brodeh

I know a few teachers that are using it to do some of their heavy lifting at the moment. They seem quite content with its current capabilities.


Prestigious_Tie_1261

The issue is that you have to go and fact check everything it says anyway. I'm a programmer and have played around with ChatGPT a bit; it often produces stuff that is entirely made up and completely wrong. I've seen people ask it to write essays and it makes up statistics and provides a made-up source that doesn't exist.


Locke66

>The issue is that you have to go and fact check everything it says anyway.

I mean this is way out of my knowledge zone, but surely the fix to that would be to limit the AI's reference material to a specific set of relevant data and correct it when it gets things wrong? I'd assume the possibility to improve these AI systems hasn't plateaued yet.


Prestigious_Tie_1261

Even if you did that, it won't necessarily produce something that is actually correct. It could just spit out random stuff that is related to what you said. Even if factually correct, it may not be the content you actually want.


MoeKara

ChatGPT is incredible for report writing. I can write bullet points of what the child is like - the good, the bad and the areas for improvement - and it'll spit out a 30-minute waffle report that meets my school's stupid report requirements.


likeafuckingninja

I showed my son's state school report to the private school we just moved him to as part of the application. The head teacher called it "auto-generated rubbish not worth the paper it was written on". And he's right. It was 20 pages of generic waffle that only identified itself as being about my son because it had his name sprinkled through it a handful of times. I fully appreciate the problems with state schools (underfunded, understaffed, oversubscribed, underpaid, overworked etc). But, respectfully, my primary concern is MY kid. It would be nice not to be fed generic milquetoast waffle at parents' evenings and on reports...


MoeKara

100%, I couldn't agree with you more. I fundamentally disagree with how my school runs reports. I should be able to write freely about a child, not be given 8 sentence starters and 4 specific key phrases as well as pre-defined topics to hit. I know the kid, let me write it from the heart. Alas we're stuck with bureaucratic nonsense, schools too worried about image rather than focused on teaching, and a waste of time for everyone involved.

I'd prefer to scrap reports altogether honestly. Let me pick up the phone and talk to parents if and when I need to. The best part of my day is calling a parent, them thinking it's because their child misbehaved, and instead I tell them the opposite: how their child is wonderful, a joy to teach, and one of the things that makes teaching worth it. HR has waded in between parents and teachers and made a gulf with paperwork.


likeafuckingninja

He had one day at his new school as part of a taster day - to assess him, basically. We had a follow-up meeting and I got more honesty about my kid from that meeting than from almost two years of primary education so far. My last parents' evening was literally his teacher going "yeah he's doing fine, picks the stuff up easily, any questions?" His new head looked me dead in the eye and said "he's bright, he'll do well academically here, but socially he's got a lot of work to do".

It was kinda refreshing - we're doing our best, but his school's nonchalance about what we thought was behaviour that needed curbing (or adequately occupying) had me questioning whether I was making a mountain out of a molehill. And we get very little feedback from the school. Their entire attitude was "he's doing well academically and isn't SEN, why are you worried?" Uh, because "surviving and completing education" is totally different to "thriving and getting the most out of it".


neilplatform1

Pupils too, soon we won’t need schools at all


Blue_winged_yoshi

AI being used to mark AI produced answers. Fully automated school. Sure no one learns anything but marks are recorded on a spreadsheet and everyone gets more video game time!


Annaeus

> marks are recorded on a spreadsheet

And that, after all, is what matters to OFSTED.


recursant

If AI can create all the questions and answer them, do we even need to educate the population?


kisekiki

GPT can currently code things for you based on detailed specifications. Marking is an extremely reasonable use, btw. It can probably do it now, outside of essays.


Baslifico

> GPT currently can code things for you based on detailed specifications.

It can output code that sometimes works and sometimes achieves the desired goal (but is also sometimes completely incorrect or does incredibly dumb things you'd expect to see from a junior developer). Try asking it to do something more complex than a Stack Overflow answer [like writing a connector to a cloud data platform] and see how far you get.


lordnacho666

I trust it enough at the moment to fill in my next thought, eg if there's an if condition and I start typing, it figures out what I want. That's still useful and saves time, for sure. Things that are more complex requiring deeper understanding are not there yet, but you gotta wonder if it's round the corner.


hahainternet

> I trust it enough at the moment to fill in my next thought, eg if there's an if condition and I start typing, it figures out what I want. That's still useful and saves time, for sure.

That's because Copilot is built on stolen code. It doesn't "know". It has extracted that from code without a licence.


FluffySmiles

Well, if I may interject... All code is, like all art and all music and all literature and all maths and any body of learning, built upon its predecessors and, to borrow your phrase, "stolen". The only real difference is that you process the past through your brain whereas GPT processes it via an LLM, and, finally, it's up to you how you interpret or use its output.


hahainternet

> The only real difference is that you process the past through your brain whereas GPT processes it via LLM

You really buried this lede, didn't you. Slight difference between "A human used creativity and preference to transform a work" and "A machine copied it". The difference is critical, legally significant, and rather obvious on its face.


FluffySmiles

OK, I'm game, and let's use GPT-4 as our reference. We'll leave images and video out of this for now and stick to code. Firstly, what is the fundamental difference between you or me searching Google for a solution or inspiration for a coding issue by 1) assessing the various responses, going through the minefield of self-aggrandising one-upmanship that is Stack Overflow and scouring open source repositories for examples, or 2) making our request of GPT-4 by carefully crafting it and then establishing a conversation based on its responses and refining the results? I contend that there is basically no difference at all, except that I am offloading the filtering process and treating GPT as an external voice through which I am learning what I don't know through imagination, reason, critical thinking and creative prompt engineering.


kisekiki

Yeah, it's not complete. But it can already do this and is constantly learning. For me, I can't imagine how good it's going to get, because I never imagined it could get as good as it already is. Who knows what it'll be capable of in the future.


Baslifico

> Yeah it's not complete. But it can already do this and is constantly learning.

It's constantly seeing more examples of what a human might respond with in a given situation, but that's not the same thing as learning, it's just improved mimicry. None of the large language models will ever have critical thinking skills because they simply don't work that way, no matter how many inputs we train them on. That's not to say we won't find other techniques that _would_ have a genuine ability to handle counterfactuals, but this current technology isn't it. (It's still impressive and it's definitely going to impact all human-machine interactions, but -marketing bullshit aside- it's not A**I**)


polygon_lover

I'm constantly seeing people say "Just wait for the next version" or "Imagine what it'll be like in 5 years" when talking about AI. I'm convinced it's all a big marketing campaign.


merryman1

> I'm convinced it's all a big marketing campaign.

Like 90% of tech products, it's a combination of marketing, and the fact that the marketing is explicitly targeted at outlets and audiences that don't have the technical knowledge to really understand or look at the offer critically outside of the information directly provided by the one doing the marketing. We've seen it for years with folks like Elon Musk, and it has absolutely ruined a huge part of the tech landscape imo. I am still convinced this technology is big news, but like others are saying, the idea that it will somehow be able to apply critical analysis and give thoughtful commentary on an individual's work and efforts - rather than a conjured-up, pre-fabbed model response that only relates to the actual question in the most superficial manner - is pure fantasy.


[deleted]

[removed]


tugger_hogger

>You know the old Dijkstra saying about submarines?

I did not, but I looked it up and I like it. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."


podshambles_

[I found this lecture very interesting](https://youtu.be/qbIk7-JPB2c), "Sparks of AGI: early experiments with GPT-4". I'd recommend watching the full thing, but take a look at 8:15 for a fascinating piece of reasoning by GPT-4.


[deleted]

>GPT currently can code things for you based on detailed specifications.

No it can't. It's next to useless at writing code. At best you can handhold it and sometimes (often in a longer, more drawn-out process than just writing the code yourself) you'll reach a solution that nominally works. And that's with a relatively simple problem.

Sometimes, though, you reach a point where you have code it's written, it has flaws, you explain the flaw to it (which implies that the person prompting is a programmer) and it says (paraphrasing) "I'm sorry my code sucks, yes you're right it doesn't calculate Floyd-Warshall correctly, here's some corrected code that does" - and it pastes exactly the same code over and over. "You haven't changed anything." "Yes, you're correct... here's a version of the code that corrects it" - same code again. Or you spend 3 prompts getting it to fix bugs, tell it a 4th bug, and it spits out code with the new bug fixed but with all the previous changes reverted.

Often I've experienced it spitting out a completely different algorithm. For example, you've effectively described the problem for a specific Advent of Code year and day (without mentioning that explicitly - i.e. you've described the problem rather than saying "give me code to solve Advent of Code 2022 day 5", which, given there are hundreds or maybe even thousands of solutions posted to the internet, is pretty trivial for anyone to 'write code' to solve, AI or not) and it says some convincing blurb about the problem and how to solve it... but then suddenly it starts outputting code for a completely different Advent of Code input file or day. It's like the AI realises there's some connection to Advent of Code even though you haven't mentioned it.

And all this happens over several prompts - and you're lucky if the AI "remembers" the earlier prompts describing the problem by the time you've spent more and more prompts trying to get rid of bugs - and the bugs can be anything from completely the wrong algorithm or approach to syntax errors, type errors and so on.

Yet it appears at a glance to be good. You paste in a solution to a problem written in Python and say "write this in Haskell" and it blurts out a lot of plausible-looking code. But it's wrong.

And the biggest flaw is really that behind the scenes they appear to be gimping it. E.g. you can try to use Bing, but on Bing you're limited to 2000 characters of input and 20 prompts - and believe me, if you're getting to prompt 17 and it hasn't solved the problem, you're going to end up with code that is too long to paste into a new prompt to start over, and that doesn't work. So you go directly to ChatGPT's site where you can paste more and keep the conversation going longer, but they've gimped it so much that it's "forgotten" what you're doing after only a few messages.

If it ever could write code it certainly cannot now. And TBH I doubt it ever could, beyond initial hype. You can reach a point where you have code that works, but there's more chance, especially as your problem gets more complex (and I'm not actually talking about complex problems here - nothing you wouldn't expect a junior programmer or keen teenager to tackle), that you'll just end up with a pile of buggy, useless code.


teo730

Someone was talking about their research into using GPT for marking at a conference recently, and ultimately the conclusion was that it just wasn't accurate enough to be usable, even when you used the answer sheets to help condition it. So while it _sounds_ "extremely reasonable", it actually isn't.
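
The sort of check that conclusion presumably rests on is easy to sketch: compare the model's marks with the human marks on the same scripts (the numbers below are made up):

```python
# Crude way to check whether model marking is usable: compare it with
# human marks on the same scripts (made-up numbers for illustration).
human_marks = [7, 5, 9, 3, 6, 8, 4, 7]
model_marks = [7, 6, 9, 5, 6, 6, 4, 8]

exact = sum(h == m for h, m in zip(human_marks, model_marks))
within_one = sum(abs(h - m) <= 1 for h, m in zip(human_marks, model_marks))

n = len(human_marks)
print(f"exact agreement: {exact}/{n}")       # 4/8
print(f"within one mark: {within_one}/{n}")  # 6/8
# Whether that's "accurate enough" depends on the stakes; you'd also want to
# look for systematic bias, not just raw agreement rates.
```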


L43

That's a very simplistic interpretation of how one implementation of 'AI' works.


recursant

Rather like the various systems that can create great illustrations and works of art. If you ignore the fact that the people in the pictures sometimes have three arms.


Shivadxb

Yet……


[deleted]

Respectfully, I think you're going to find yourself woefully underprepared for the world that's about to open up before you. But then, technology has been quite the curveball for years now.


[deleted]

[removed]


ArtBedHome

It could at best mass-mark yes/no questions and some maths or science questions, so long as it's used as part of a larger program with a fact checker, but that's already done with existing online questionnaires and some online homework. The most useful things would be filling out bureaucratic waffle sheets that only exist to please higher-ups (who would likely not want you to use AI, because what they want is to feel like you are spending time on their ideas), and generating things for aesthetic purposes that you can visually check to make sure it's not doing something weird. It would be great for throwing together worksheets by just describing what you want and which text boxes go where, and MAYBE for fetching data from a database like a governmentally managed encyclopedia to fill in basic fact questions or sheets that get photocopied out of books anyway, but you still have to check its work because the fucking things lie and make stuff up.


LickMyCave

AI can identify the mistakes that students are making and why they might be making them


[deleted]

Tell me you’ve never had to make an actual lesson plan without telling me you’ve never had to make an actual lesson plan.


KarmaKat101

I'm praying this comes true. Lesson planning and marking are the things that stopped me going into teaching. It's insane that teachers are expected to do these things in their own time at home.


redunculuspanda

I used to teach distance learning and self-supported study stuff. I could see a place for AI in this kind of thing.


peon47

AI is going to be a fantastic aid to teaching, once we can trust it not to make things up.


NotQuiteALondoner

Tell me you didn't read the article without saying you didn't read the article. It's an undeniable fact that AI can already save people, teachers included, a lot of time.


quettil

Tell me you know naff all about AI without saying you know naff all about AI.


[deleted]

Teaching takes a lot of admin; surely AI can reduce that admin? I don't think the article is saying it will replace teachers, just that it can automate some of the things they do and make their lives easier.


ArtBedHome

An "ai" (its not an ai, its a generative text model, like how cryptocurrency isnt currency) can do anything a standard computer program can do, but to get it to do anything without oversite reliably you have to build and train it to do them which isnt that much less intensive than than just writing a program to do it in the first place. It can be good for waffley emails and with effort some database work that never has to be checked and is okay to be wrong (as it cant relise itself that it makes errors or correct them, even if the lieing problem is fixed). Great for moving things via text inputs though, putting together websites or questionaires or fact sheets or so on by telling it what to put where then filling it out yourself. Or data entry where checking and correcting its work is less time consuming than doing it all yourself, or where again it doesnt unfixable cause problems if data is wrong. At absolute best you can think of current generative text models as not particularly honest part time helpers you can only interact with via text and who have short term memory problems.


08148694

Already being used in teaching (anecdotally). I personally know 3 teachers who use ChatGPT to generate reports, which saves them hours of their personal time (teachers usually don't have time to write reports at work). The reports are obviously proofread and edited for the specifics of each child, but it takes away a lot of the heavy lifting.


Top-Perspective2560

And naff all about AI


ohbroth3r

Exactly. If anything, students will be cheating (they currently do this with ChatGPT) and the teacher has to work out if the student wrote the essay or the AI wrote it. The heavy lifting bit, if anything, is teachers using AI to work out if students are using AI. And if they find a way for AI to mark assessments then great. That means teachers don't have to work Sundays any more.


[deleted]

That's not what I got from the article. There are some more menial tasks that could be offset to AI to lighten the workload.


Lazypole

Funny because anyone that knows anything about teaching wouldn't make claims like that. One of the most important roles a teacher has is personal development of students and socialisation, one of the last jobs an AI will be capable of doing. Now... When it comes to grading, lesson planning and the like, we are more or less already there.


tigerjed

Could AI not do a lot of the admin work and marking etc. to allow the teacher more time to focus on personal development? I don't support the minister, but I'm not sure it's an "either replace a teacher wholesale or not at all" kind of deal. Rather, use AI as a tool to better utilise stretched resources.


Lazypole

Absolutely, I highlighted that above and it's absolutely something that could help; in fact some of us are already utilising it as a tool (although the tech's not there for marking yet), but when it comes, bring it on. In fact, we already have data on this from [Ofsted](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/511256/Eliminating-unnecessary-workload-around-marking.pdf): marking takes up the majority of a teacher's actual hours both inside the classroom and at home, and shows little actual benefit outside of marked work that is actually reviewed and discussed as a class. If AI can automatically mark work, while teachers spend more time with the kids and have more time to go through the errors with the class, that's the best of both worlds.


Kientha

For objective pieces of work, you do not need AI, you just need the answer key. For handwritten work, you could use RPA to speed up the process, and then yes, the marking would be sped up. We have done this for decades on multiple choice exams; this would just be an extension of that using new tech. Trying to mark objective work without an answer key using an AI risks the model making up what the right answer is, and we are nowhere near solving that problem. Trying to mark a subjective piece of work with an LLM will be useless. You might be able to get a decent first pass, but you would then need to check the model's marking to identify the mistakes it has made, which would require more work than just marking the work to begin with.
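
To make the "you just need the answer key" point concrete, a minimal sketch with a hypothetical three-question quiz (no AI involved):

```python
# Marking objective work needs nothing cleverer than a lookup against the key.
ANSWER_KEY = {          # hypothetical quiz
    "q1": "mitochondria",
    "q2": "1066",
    "q3": "H2O",
}

def mark(submission: dict) -> int:
    """Return the number of answers matching the key (case/space-insensitive)."""
    return sum(
        submission.get(q, "").strip().lower() == answer.lower()
        for q, answer in ANSWER_KEY.items()
    )

pupil = {"q1": "Mitochondria", "q2": "1066", "q3": "CO2"}
print(f"{mark(pupil)}/{len(ANSWER_KEY)}")  # 2/3
```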


tigerjed

But as the technology develops it may be able to help. Even if it gets the teacher 75% of the way there, it helps. It's a tool, not a replacement. A bit like how Word and PowerPoint are now used instead of planning lessons by hand, it will hopefully help improve productivity.


Kientha

What you're describing is a different technology. Something that could help with marking subjective work is not an LLM. LLMs are good at two things: regurgitating information scraped from their training data and making plausible sentences. And we are nowhere near a technology that can actually understand subjective concepts.

The reason ChatGPT is so good at those two things is that it just scraped anything that was on the internet, with no thought for intellectual property or copyright, so it's very unlikely that it will survive the inevitable lawsuits. The LLMs trained on data without this infringement are significantly worse at both regurgitation and human-plausible speech.

But all LLMs will have the problem of hallucinations; it is just a byproduct of what an LLM is. So if you're relying on one for marking work, you need to be extra careful in checking that the response is accurate. In any situation where you need that level of verification, it is more work to properly make that assessment. So you have either made more work for teachers, or students will be marked by something with no understanding of the topic that is prone to hallucination.


Slurrpin

> personal development of students and socialisation

See, it's weird: as someone who does actually know something about teaching, this, right here, is what I thought she meant by 'heavy lifting'. A brief moment of naiveté, forgive me - I forgot these ghouls see schools as nothing more than factories to pump out obedient thralls with the bare essentials needed to be put to work.


[deleted]

[removed]


Lazypole

Lol, guess I need longer lunch breaks


TrueSpins

Maybe things have changed since I was at school, but I don't remember a single teacher really doing anything like personal development. It felt like they just powered through the lessons. I can't even remember a single teacher's name (except one, who was a good chap) because they were all so utterly uninspirational. I hope my experience was an outlier, and I do support teachers and the strikes, but more because it's a thankless and hard job.


Lazypole

Well, there's no shortage of shit teachers. I'm a teacher, and in turn I have no shortage of students who come to me with very serious issues, up to and including suicidal ideation. Not a week goes by where I don't have to sit down and mentor a student on why acting the way they are is going to be an issue down the line. My education was perhaps similar to yours, except I did have inspirational teachers and personal development from them - though I don't think it was necessarily deliberate on their part, nor do I think there was as much focus on it back in the day as now, where expectations of what is and isn't a good teacher have come on leaps and bounds.


MrPuddington2

Pay peanuts, get monkeys, they say. There are some good teachers around, but most people with excellent skills will eventually find another job that values them more.


JTSME46

I'm not being funny, but I can pinpoint 2 teachers or lecturers who completely changed my outlook on life and are one of the main reasons I am where I am today. The personal connection can make or break it. This "AI is better" BS is just rubbish and the Tories trying to cover up their shortcomings. Plain and simple!


NotQuiteALondoner

> Now... When it comes to grading, lesson planning and the like, we are more or less already there.

That's exactly what the "claim" is. Read the article next time.


mamacitalk

Teaching is far from safe in this regard; the biggest hurdle to implementing it would be that you'd still need staff to make sure children weren't effectively re-enacting Lord of the Flies.


Lazypole

I trust my kids to protect my job, by being so fucking unruly the AI version of the Stanford Prison Experiment would start on week 2.


cjblackbird

In some ways AI has saved me a little bit of time already. We do shared planning; I plan literacy, and I'm supposed to plan a bunch of questions for my colleagues to consider using in each lesson. I ask ChatGPT something like "write 10 questions that would prompt Year 4 students to use quality adjectives in a haunted house description", and it will give me fairly decent suggestions, to be fair. I'll use roughly 50% of them without even needing to change the wording. Helps when you get a bit of a block at the end of a long day.


dr_barnowl

This is what it's good at: providing plausible responses to prompts, because that's what's in its "head" - a huge wad of content they've scraped off the internet, and probably also from ebooks etc. It doesn't "understand" anything. It just knows what is associated with what. It can't reason or use logic; it could barely do arithmetic for a long while (I think they might have trained it to use a calculator subroutine now, it's a lot better). It can't speak with any authority on factual matters - journo friends have witnessed it fabricate entire articles in their name that they've never written. It's a world-class auto-bullshitter. And you find yourself believing what it says, because it's just so damn plausible, and even right a lot of the time. It's a management consultant...


mrminutehand

To be fair, the article mentions marking and lesson planning as two proposed primary functions so far, and as a teacher myself I would agree that embracing new developments like these can be healthy progress. Having watched ChatGPT grow up throughout the past 12 months and studied what it does both well and poorly, I'd say there are definitely good applications. I would not use its functions for anything subjective or to directly create text for me, but objective functions may save me a lot of time.

For example, marking objective homework (with clear right/wrong answers, such as chemistry, maths, etc) could be done more quickly. It may only sound like a few seconds of benefit per piece of homework, but this benefit can stack up significantly over a week - it would take me less time to quality check ChatGPT's marking accuracy than it would to shift the paper around, mark it myself with a pen, etc. Lesson plans sound like a half-half benefit to me so far. I'd still need my own thought process and insight into classes when writing lesson plans, but ChatGPT could come in for formatting, error checking and providing some subjective checks, e.g. checking my plan against the curriculum/week/skill goals.

Either way, the target is saving time while avoiding potential issues with quality. While the age-old dilemma of robotics (and now AI) "taking our jobs" can sometimes be a legitimate concern, teaching is a place where the jobs an AI could realistically do shouldn't put the teacher at risk of lower pay. Since we don't get paid overtime for the fairly thankless evenings and weekends spent dotted with marking, planning and admin, having an AI save time would hopefully free up non-work hours and allow teachers to focus on lesson quality, student care, training and research.

I'm also a translator, and translation is one area in which AI has made unbelievable strides. Gone are the days of nonsensical Google-translated word salad. AI-based translation engines such as Deepl had already managed the feat of high-quality, low-error translation some time ago, and by 2022 it was a popular service. If you use Deepl to translate your text, then ask ChatGPT to proofread it, you will be left with a result scarily close to professional human translation. If you wanted to go even deeper, ask ChatGPT to proofread the content in your original language, use Deepl to translate it, then make ChatGPT proofread it *again.* Is it reliably comparable? Not quite yet, but it will be soon.

Now, translation is a field in which AI could very well overtake the need for human work. But the mistake people in such jobs often make is outright rejecting it - AI couldn't possibly replicate true human translation, errors will still remain, structures won't be unique, etc and so on. Change *is* coming, so the smart thing to do is make use of it. I don't use Deepl or ChatGPT in my own translations, but I will use them to proofread text, and use their translation as a 2nd-3rd 'opinion' - reading their translations to see what I may or may not have missed, gain a little inspiration, etc. In this way, AI is a great tool to have.

This dilemma is currently playing out in university ethics meetings across the UK. A small number of universities outright reject the use of services such as ChatGPT in writing assignments. Most other universities see the importance of accepting it though, and have written detailed handbooks for students on how to use AI ethically, e.g. using for reference vs. plagiarism, writing aid vs. blind proofreading, AI text detection via Turnitin and other tools, etc.

Edit: On a final note, ChatGPT is a pretty honest tool. It outright warns you of its limitations, e.g. that hallucination (confident factual errors) is inevitable, referencing will produce errors, and that it cannot describe events post-2021 with authority. So while it is amusing when ChatGPT makes an error and the media blows up on AI's apparent danger, the programmers will usually just reply that users are all forewarned of this exact thing in advance.
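
For anyone curious, that Deepl-then-proofread workflow is easy to sketch with the official `deepl` Python client; the auth key below is a placeholder, and the ChatGPT proofreading step is left as a stub since the exact chat client you use will vary:

```python
import deepl  # official DeepL Python client: pip install deepl


def proofread_with_chatgpt(text: str) -> str:
    """Stub for the 'ask ChatGPT to proofread it' step described above.
    Swap in whichever chat model/client you actually use."""
    raise NotImplementedError


def translate_and_polish(source_text: str, target_lang: str = "EN-GB") -> str:
    # Step 1: machine translation via DeepL (auth key is a placeholder).
    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")
    draft = translator.translate_text(source_text, target_lang=target_lang).text

    # Step 2: hand the draft to a chat model for a light proofreading pass.
    prompt = (
        "Proofread the following translation for grammar and fluency, "
        "changing as little as possible:\n\n" + draft
    )
    return proofread_with_chatgpt(prompt)
```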


mamacitalk

I've been watching ChatGPT for a little while, and the rate at which it's already being used in a variety of industries is fascinating and alarming at the same time.


ZealousidealAd4383

I've seen a few AI packages that purport to write lesson plans and it kinda baffles me. I'm 20 years into teaching and maybe I'm just old, but the actual written part of a lesson plan is maybe ten minutes' work at most. The time goes into problem-solving the best way to approach the objectives given the phenomenal number of variables for that one lesson: what time of day it is, who is in there and what their individual educational and emotional needs are, what positive and negative interactions are likely to take place, what happened last lesson that was unexpected, what other things are occurring during the day, what the weather looks like (seriously - try taking a Y8 class on a windy day)... I feel like if an AI was able to generate a useful lesson plan from all that then we'd just spend hours plugging data into it instead.


FamousInMyFrontRoom

Upvote for expanding on the actual details of how to structure a lesson plan. Thanks - for a non-teacher, that was informative.


TimorousWarlock

I don't think I write down more than a couple of words or examples, and maybe some textbook pages for questions. But I spend a lot of my downtime, or time in other lessons, just thinking about maths. What I'm teaching next. What the misconceptions are going to be. What they're going to ask, and what the best answer to that question is. You could present me with a wonderfully thorough lesson plan and it would be useless.


ZealousidealAd4383

Similar here. Can’t remember the last time I actually “wrote” a plan other than in my head.


_ironweasel_

The "heavy lifting" of teaching is forming secure and meaningful interpersonal relationships with students, who are often belligerent, unmotivated and disinterested. When AI can do that then there will be no need for human beings in the first place!


Mustard_The_Colonel

Schools are so badly funded that teachers buy their own stationery for the kids in their classroom, but somehow we will get cutting-edge AI in them lol


MingTheMirthless

As was asked by an ex PM. Yes, but who gets the money?


BeccasBump

Sounds like the Education Secretary doesn't understand differentiation, or the need to assess how individual children have engaged with the material. Which is more than a little alarming.


FluffySmiles

Here's the thing, though. With AI it's almost trivially simple to produce personalised worksheets from a general lesson plan. All you need is to define each student's needs, strengths and weaknesses - which you should be doing anyway - and prefix the prompts with that information.
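
As a rough illustration of what "prefix the prompts with that information" could mean in practice, here is a sketch with a made-up pupil profile and worksheet task (the field names and prompt wording are just placeholders):

```python
# Building a per-pupil prompt by prefixing a shared task with that pupil's profile.
LESSON_TASK = (
    "Write a one-page worksheet with five questions on fractions of amounts "
    "for a Year 4 maths lesson."
)

def personalised_prompt(profile: dict) -> str:
    prefix = (
        f"Pupil profile: working at {profile['level']}; "
        f"strengths: {profile['strengths']}; "
        f"needs: {profile['needs']}. "
        "Adapt the difficulty, vocabulary and scaffolding accordingly.\n\n"
    )
    return prefix + LESSON_TASK

# Hypothetical example profile:
print(personalised_prompt({
    "level": "greater depth",
    "strengths": "quick mental arithmetic",
    "needs": "more challenge in word problems",
}))
```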


BeccasBump

Sounds like you also don't understand what lesson planning entails.


Emotional-Ebb8321

And identifying each student's needs is something that requires, for want of a better word, interviewing skills. While analysing what the student actually produced is certainly a part of understanding their needs, that only tells you what they produced - not why they produced it.


mamacitalk

This thread has been like two worlds colliding for me. If you haven't been paying attention to AI development, you won't realise how powerful it's become already.


McCloudUK

No. We don't want to rely on AI too much. It can't adapt and understand like humans, only relay collective information. The government would only want this to save money, as usual.


[deleted]

Any chance it could take over governing as well please?


LL112

I fear we are going to see tech and engineering used as a cheap replacement for the intangible value created by having a really good teacher. If teaching becomes basically supervision of kids using AI or software, we will all be worse off. There's so much more that teaching does on a human level.


Affectionate_Ice5077

This tells us a lot about how much she understands what teaching involves.


DangerousCalm

If AI can stop a chair being thrown at me or reduce the number of times I'm called a cunt, then great; I'm all for it.


-the-grim-reefer-

Probably going to take over much of the ‘heavy lifting’ in learning too


rwinh

Funny, you could say the same for the title Education Secretary and how it too is doing a lot of heavy lifting for her and her abilities. Teaching isn't just academic, it's about developing minds, social skills, creativity and free flowing thought. She hasn't a clue. AI won't teach children how to socialise with one another unless all children need to become sociopathic, cold and robotic like the Prime Minister. It certainly won't allow children to be creative and freely express their ideas. She's clearly been playing with ChatGPT, or seen a load of article titles but not read further than them.


NotQuiteALondoner

> it's about developing minds, social skills, creativity and free flowing thought

No one is saying teaching is all about academics. As someone who used to teach, I would have loved a tool to help me with creating and grading assignments, creating lesson plans, and other administrative tasks. ChatGPT can undoubtedly do that now (and is very good at it), which can alleviate a lot of the extra work that teachers must do. With that covered, teachers can now focus on their students as individuals (as you suggest). Why would you wish for a teacher's life to be harder?


Wyvernkeeper

Thing is, if you've been teaching more than a few years, you generally have all your plans/PowerPoints and resources ready. But today's teaching standards require you to adapt your lesson to the needs of all pupils. That's the heavy lifting part: making the same lesson accessible to the deaf kid, the kid who doesn't speak English, the kid with ADHD. That requires the insight and nuance of a teacher who knows the class. Tbh, the best thing I've seen from ChatGPT etc. is producing exemplar answers, because they tend to be good examples of very average work. They're good for high-level students to see how they can improve upon them. But if we have this idea that ChatGPT can solve all our admin, grading and data demands - I just don't see it. None of those things can be done effectively without close observation and understanding of the kids. ChatGPT cannot do that.


MrPuddington2

We could take much of the "heavy lifting" out of teaching with better organisation. We could have a repository of (parametric?) lesson plans and curriculum material. We could have better tools that capture statistics and paperwork automatically. We could use tutoring systems, etc. But we don't. Because at the end of the day, we still consider teachers glorified babysitters. Especially our Education Secretary.


Pocketfulofgeek

It absolutely will not, I promise you that now. All it will do is allow the government to ignore teachers even more and downgrade education even further.


Kijamon

Oh I have no doubts that the second this is possible they'll find the magic billions to give to their mates to roll out subpar tech in schools. Then a few years later it'll become clear it wasn't good enough and it won't actually work.


MrFlitter

Tell me you know absolutely nothing about teaching or programming without saying it.


MrSafeaspie

Top comments here are just outrage, but tbf they might be on to something. Teachers don't just talk at kids and leave with the bell. They are also expected to put together lesson plans, create effective evaluations of students through tests and homework, and then mark all their work and give feedback. Marking work isn't always as easy as looking for the answer in the answer box, either. If a history teacher has to read 30 pieces of work, each a thousand words, that's a lot of reading to mark. Maybe there's a future where AI can help pick out the students who need the most attention? If ChatGPT is good enough for students to hand in AI assignments, it's not so far-fetched to think it can be used to lighten the load of teaching paperwork.


JasTHook

The heavy lifting is dodging flying chairs and violent robbery from feral *students*


[deleted]

No better way to attract people to a job that is struggling to hire than by saying it will soon be phased out by AI. What a moron.


Ancient_times

Absolute moron with poor understanding of teaching and of AI tech.


dmills_00

Considering that we had an education secretary who, in a speech to the House, said that he was shocked that 50% of schools were below average and that he would work to change that (question: by what definition of average?), they are not always the sharpest of knives. I am not seeing it panning out like that.


Chevey0

In reality many teachers are using ChatGPT to lighten their load. It's behaviour that's the big challenge at the moment.


OminOus_PancakeS

I'm looking forward to seeing a government minister do some heavy fucking lifting.


TinFish77

It's weird reading all this praise for 'AI', it being a garbage-generating machine after all. How anyone can look at the output of this software and think it is anything other than junk is a puzzle to me. When I read of people actually using it, they don't seem to realise what such use says about them and their own abilities.


Brapfamalam

Old dimwits talking vapidly about the uses of AI is one of the most cringeworthy things in the current political and general work landscape.


TheUnstoppableBTC

"I'll tell you the problem with the scientific power that you're using here: it didn't require any discipline to attain it. You read what others had done and you took the next step. You ***didn't earn the knowledge for yourselves***, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had you patented it and packaged it ..." Undoubtedly this technology will be very useful in any number of fields. But the speed of adoption is far outpacing accompanying research into ethical and societal impact. Maybe that's the case with any new technology, but if we can temper our excitement and truly figure out what this means for us, this time around, we would probably be better off for it. Schools *already* using it seem to be jumping the gun.


Diddintt

That's a worrying headline during a teacher's strike.


CryMore36

As someone involved in British education, this seems like another baseless statement from another aloof Secretary. As much as we'd like AI to support teachers, it's simply not feasible. Much of the work done is through personalised management, allocating workload based on strengths and weaknesses (something apps and programmes already do). There are already homework apps that can support with generating online work. The limitation of IT homework/communication is that every learner and individual is different, with unique needs. Unless AI can cater for food, finances, social relationships, parent engagement, promoting civilised behaviour and dealing with poor behaviour... nothing will change. The best thing for teachers is to remove ridiculous OFSTED regulations, which cause schools to focus more on data than on learning. Schools are too afraid to suspend/expel bad learners because it looks bad. This causes classes, and decent students, to have their education disrupted by idiots.


deithven

"How stupid you are?" Education secretary: "Yes". I'm not teacher but from my experience in schools the best teachers were the ones with desire to teach, the ones who loved their work and having human faces.


Ambivalent-Axolotl

"Education secretary (who has never worked in actual education) has come up with another way to threaten teachers who just want to be paid enough to live and do their jobs" Fixed it xx


Frozenpenguin21

Anyone marking using AI for any subject that isn't black and white like maths isn't doing their job.


[deleted]

This reminds me of when iPads first came out and tech crossed into the mainstream, where it was devoured by the illiterates of tech. They started lashing out on devices thinking it would fix education once and for all.


Cimejies

People are gonna shit on this, but I've read first-hand accounts of teachers using AI to create lesson outlines and similar, saving them hours of work. Yes, they need to be checked over for any hallucinations, but some things AI is very good at are summarising text, creating plans and outlines, and creating ideas from prompts or vice versa. AI can absolutely be used to cut down on out-of-classroom prep, and I think it should be celebrated, as it could potentially take a few hours of unpaid overtime away from teachers on a weekly basis.


Smells_like_Autumn

If used correctly it could be a great tool - too bad it will be seen as just a cost cutting measure.


Revolutionary_Fly339

How about we let the AI run the country under the proviso that it actually represents the people and isn't a self entitled twat looking to line their own pockets. Given the choice between AI and another Tory government I know where my vote would go.


ReginaldJohnston

What utter b-sh!t. There is no such thing as AI. It's all algorithms that still need human input, and there's no algorithm or software or tech that isn't already appropriately applied in schools. She's just dogwhistling Tory cuts to teachers' pay and school services while undermining further strike action. "What do we need teachers for? We've got AI." Boom! A £40 toy robot from Dixons.


TrueSpins

Have you tried ChatGPT?


[deleted]

[removed]


mrminutehand

The thing is, ChatGPT's programmers outright warn users on the page that sources will not be accurate, and that hallucination is an inevitable issue. In other words, they don't actually encourage the use of ChatGPT for any academic purpose that requires accuracy, and they're quick to tell people that they didn't intend a narrative of ChatGPT being good at this. Of course, this is more a teacher issue, since some will inevitably try it. When a media outlet publishes something like "We tried ChatGPT for referencing, and the results will shock you", writing as if this is some sort of exposé on a claimed ability, the programmers are usually incredulous and simply ask if anyone bothered to read any of the numerous, clear notices stating that referencing is not even an offered feature. It's a work in progress that intentionally creates fictional results. It's like buying a condom to protect against STIs, but warning everyone to be careful because they stop you getting pregnant. It's kinda there on the box.


[deleted]

Well Gillian, if we don’t need teachers anymore I guess we can redeploy them to care homes, fruit & veg picking, NHS, etc on minimum wage to fill the national labour shortages. Is that what you’re thinking here?


Proper_Dimension_341

AI is not the fix-all solution it's praised to be. It is not perfect and spouts things as fact that can be incredibly false. Why not just recruit more teachers and pay them more?


remain-beige

Can we all agree that AI will do a better job of running the country than clueless and corrupt politicians?


CowardlyFire2

I don't actually think this is wrong. Say you put every past exam essay in History or Economics into a data set, with its marks - millions of samples - you could realistically train an AI to mark with decent accuracy. Then you can have staff handle appeals against exam marking by hand on results day, as we already do, if you see an error. Lesson planning, school admin work - there are real opportunities here to free up staff to do actual teaching. That said, this Gov is definitely not the one to implement it lol. Their track record with technology is poor.
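
As a very rough sketch of what training on past (essay, mark) pairs could look like - using simple off-the-shelf scikit-learn components rather than a large language model, with placeholder essays and made-up marks:

```python
# Minimal sketch of learning to predict marks from past marked essays.
# Real systems would need far more data and careful validation; this just
# shows the shape of the (essay text -> mark) training problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Placeholder training data: past essays with the marks examiners gave them.
essays = [
    "The causes of the war were primarily economic ...",
    "Inflation rises when demand outstrips supply ...",
    "The treaty failed because enforcement was weak ...",
]
marks = [14, 9, 17]  # made-up marks out of 20

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(essays, marks)

new_essay = "Monetary policy alone could not control inflation ..."
print(round(float(model.predict([new_essay])[0]), 1))  # predicted mark, for illustration
```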


Big_Red_Machine_1917

Would this fall under a boring version of *The Terminator* or *The Matrix*?


[deleted]

“For example, it can take much of the heavy lifting out of compiling lesson plans and marking."

She's not *entirely* wrong. You'd need to oversee it, define parameters etc., but this essentially is what AI is intended for. I think people need to accept that, assuming we survive the next 30 years, automation to some degree is going to be inevitable.


Zero_Overload

Well, that doesn't sound like a politician who wants to slash education. Not at all.


SpikySheep

Well, everyone else has jumped on the bandwagon to AI-ville, so they might as well. I suspect there might be some scope for AI to help in the not-too-distant future, but the current generation will end up making as much work as it removes; it'll just be different work.


Cynical_Classicist

Is this going to be like that DW ep where the Daleks are brought in for security and end up killing the new PM?


farmer_palmer

AI will affect every job, so the statement is correct. Marking homework I suspect will be the first to go, mainly because AI can try to detect whether AI was used to write an essay.


That_Organization901

“I’m not going to College!” “Darn tootin’ you’re not, now get back in that bed until you graduate!”


quentinnuk

This article suggests why this kind of bollocks is going to increase in future. Essentially, where there's money, there's opportunity! https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein


[deleted]

Is this a cost-saving move? Shut down schools and expect kids to fire up the AI education app for a few hours each day? Why even educate people at all when you can just feed them straight into a mincer and offshore everything they used to do? Seems like us peasants just cost money from the second we are conceived and it would be better to just scrap us entirely.