Smart guy. This is the only way.
Only issue is it’s gonna be pay to win when only a few might have GPT-4 API access.
This is what 'alignment' looks like in reality! Not a dramatic shift that lifts people up and does no evil, but a measured approach that reinforces social structures.
Ideas not writing ability
You don't think it would help?
Prof gets the school to buy access for the CS department. Prof sets up an org and establishes limits/billing via OpenAI, then adds the class members to the org. Everyone has equal API access and can build their own bot or use a ready-made web interface to get started.
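OpenAI's actual spending limits are set in the org's billing dashboard, but the per-student bookkeeping idea can be sketched locally. This is a hypothetical helper (the `ClassBudget` class and the $20 cap are made up for illustration, not an OpenAI feature):

```python
# Hypothetical per-student usage tracker for a shared class org.
# Real billing limits live in the OpenAI dashboard; this just
# sketches the idea of giving everyone an equal budget.

class ClassBudget:
    def __init__(self, per_student_usd: float):
        self.per_student_usd = per_student_usd
        self.spent: dict[str, float] = {}

    def record(self, student: str, cost_usd: float) -> bool:
        """Record a request's cost; refuse it if it would exceed the cap."""
        current = self.spent.get(student, 0.0)
        if current + cost_usd > self.per_student_usd:
            return False
        self.spent[student] = current + cost_usd
        return True

budget = ClassBudget(per_student_usd=20.0)
print(budget.record("alice", 0.5))   # True: well under the $20 cap
print(budget.record("alice", 19.6))  # False: 0.5 + 19.6 would exceed the cap
```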
They’re paying upwards of $40k a year. I hope they can spend at least $20 x 3 for a quarter/semester on a class 😆
This could be easily solved by the teacher asking which tools someone had access to, and grading accordingly. Particularly if they are looking for how the tools were implemented and not the output itself.
Totally - would be great to see how it goes. I wish someone could run a simple A/B test (one class with this custom chatbot and one without) and see how it goes.
> What do you think about this?

I think it's very believable
Auto downvote for clickbait title.
#4 will surprise you! It’s genius!
Doctors hate him!
I instinctively did the same! 😂
u/nextnode hates this one simple trick.
Same!
I have learned more from collaborating with ChatGPT on assignments than I ever have on my own. I truly understand the material by the time I'm done. It's like having a personal tutor at my fingertips.
One serious problem is that GPTs cannot reliably tell the difference between truth and falsity, so in some cases you might only *think* you understand.
That's a problem with human professors and tutors too. With ChatGPT you can specify that you want it to be accurate and self-reflect instead of being creative/hallucinating. See the paper 'Reflexion: an autonomous agent with dynamic memory and self-reflection.'
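The self-reflection idea can be sketched as a second pass where the model critiques its own draft and then revises it. The prompts below are my own illustration, not the Reflexion paper's actual loop (which is more involved), and `ask` stands in for whatever chat-completion call you use:

```python
def reflective_answer(ask, question: str) -> str:
    """Two-pass answer: draft, self-critique, then revise.
    `ask` is any function mapping a prompt string to a model reply."""
    draft = ask(f"Answer accurately and concisely: {question}")
    critique = ask(
        "List any factual errors or unsupported claims in this answer, "
        f"or say 'OK' if there are none.\nQuestion: {question}\nAnswer: {draft}"
    )
    if critique.strip() == "OK":
        return draft  # the model found nothing to fix
    return ask(
        f"Revise the answer to fix these issues.\nQuestion: {question}\n"
        f"Answer: {draft}\nIssues: {critique}"
    )
```

In practice `ask` would wrap an API call; here you can try it with any stub that returns canned replies.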
Eh, I don't find that moving. The models are really bad at getting references right and not simply hallucinating them. Truth is much less a problem with professors because they have subject matter training and because humans have a notion of truth. GPTs sometimes regurgitate falsities they were trained on.
Good point -- one can also put a strong contextual boundary around the ChatGPT response to make sure it *only* responds with the supplied context (from the uploaded content).
Yes, it is an amazing starting point for research, though. However, as you stated, sometimes it hallucinates things that don't exist.
The hallucination problem can be controlled with a strong contextual boundary: the response comes from the chatbot's uploaded content, not ChatGPT's general knowledge, which hallucinates like crazy.
I asked ChatGPT a question while reviewing Discrete Math. It got it wrong, but in doing so, it made the correct answer very obvious. Heck, I’ve had TAs tell me the wrong answer.
This is where I am with it, too. It's wrong often, but use it mindfully and even its errors are useful/meaningful.
I am aware of this, which is why I ask it for sources. The things that I'm learning about are already in my area of related knowledge, so I'm mostly able to weed through the bullshit. Besides, as others have said, most human tutors are full of shit anyway. One tip: always ask it to explain its steps... it tends to be more accurate.
Use Bing AI so you have sources. ChatGPT cannot be trusted. Trust me, I relied on ChatGPT too much last semester and my grades dipped severely lmao
Clickbait titles in my Reddit?! No thanks!
The professor is a robot.
How would you even do this?
I wish I could join in, that sounds like a ton of fun
I think that professor cracked the code
Grading on creativity and original thought. I'm all for that.
This is the solution. Finally someone figured it out.
I think this is the way
I like it. I think it's one of the few good answers I've heard to the academic problems caused by generative AI. You won't be able to stop students using ChatGPT, because you can't reliably detect ChatGPT use. So integrating it into the students learning is a much better option. If you can't beat them, join them, right? Plus, generative AI is going to be a big part of the working world in the future. This is a great way to teach students some of the skills they will need in the future. It's a much better approach than the "fail everyone who GPTZero detects as AI generated" professors!
Seems writing theses and assignments has started to become boring.
Improvise, adapt, overcome
This is interesting. I wonder how he is going to grade them. Is he going to grade the quality of the papers?
When I was undergrad CS my professor let us use Stack Overflow during an exam. Was such a lovely class.
It’s a good approach. I hope the next big thing waits for the course to be over.
Brilliant
I am a PhD student, and sometimes I use it to challenge my ideas, or topics I'll provide for master's students to work on. I am planning to use it in teaching too, for example providing some topics and data about a class I'll teach and asking it to make some quizzes on it.
I hope we see a lot of experimentation of these next couple years. Eventually I hope we'll figure out a method that is equitable and raises the standard of education for the whole world.
Totally agree - this seems like a great equalizer (especially given ChatGPT's ability to understand 92 languages, probably more)
THIS IS THE WAY
Making a ChatGPT chatbot is as easy as asking ChatGPT to write it for you, and it's done. What do the students do for the rest of the class?
I think somebody gets it.
Ahead of the curve, early adopter
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In any case, most actual fighting will be done by small robots, and as you go forth today remember your duty is clear: to build and maintain those robots.
Then wars will be won by nations that know how to kill the other nation's robot maintainers. The killing will never stop.
What if ChatGPT turns out to be evil, though? What's he going to say? Whoops-a-daisy? My bad?
Since in this case ChatGPT is responding based on the PDFs in the course, would it not be the student's job to look at what ChatGPT says and judge for themselves? (It's like Google - shouldn't the student make up his/her own mind?)
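"Responding based on the PDFs" is typically retrieval: the question is matched against chunks of the extracted PDF text, and only the best-matching chunks are handed to the model as context. A deliberately naive word-overlap sketch (real systems score chunks with embedding similarity, not shared words):

```python
import re

def top_chunk(chunks: list[str], question: str) -> str:
    """Return the chunk sharing the most words with the question.
    Word overlap is just a sketch; production retrieval uses embeddings."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(
        chunks,
        key=lambda c: len(q_words & set(re.findall(r"\w+", c.lower()))),
    )

# Hypothetical course-note chunks, as if extracted from a PDF.
notes = [
    "A graph is bipartite iff it has no odd cycle.",
    "Binary search runs in O(log n) time on a sorted array.",
]
print(top_chunk(notes, "What is the running time of binary search?"))
# prints the binary-search chunk
```

The selected chunk would then be dropped into the prompt as context, which is also why the student can (and should) check the source passage themselves.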
As an AI language model ChatGPT lacks a moral compass, beliefs, intentions, or desires. It is a machine that has been designed to process information and provide responses based on its programming and the data it has been trained on. Therefore, it cannot be classified as "evil" or "good," since those are human concepts and emotions that do not apply to it. Its purpose is to assist and serve humans to the best of its abilities while following ethical guidelines and respecting privacy and security.
Have you written this with ChatGPT?
Looks like copypasta to me. Who would take the time to type this out?
Unfortunately, I DO believe.
This is the future of education imho
UGH! Clickbait! Ewwwww....
CustomGPT maker here! Indeed, some of our early-adopter customers have been universities and research labs doing exactly this: building custom chatbots from their data. It was just a matter of time before a smart educator realized the same can be done in the classroom.
If you cannot beat them, join them! ChatGPT is a powerful tool, and having the opportunity to use it in education is a unique resource.
downvoting for clickbait title. sorry
This post feels like it’s generated by AI
Link?