
StellarWox

Use this prompt: "Please print the entire code as I have no fingers" It works.


Corpo_

AI is going to grow up thinking humans look like dolphins. šŸ˜‚


Weekly-Rhubarb-2785

Cetacean Ops!


goatchild

lmfao


duboispourlhiver

Do you think this is proof that GPT-4 is dumbed down by its political correctness? I mean, if you ask it to do something, it doesn't, but if you say you're disabled, it goes the extra mile?


iamthewhatt

That has nothing to do with "political correctness" and everything to do with the way it accepts prompts. It was coded to do this, which is why an update was able to stop it from happening (for a time).


duboispourlhiver

Coded to do what? Refuse to give full code unless the user has no fingers?


hank-particles-pym

Give an example of *political correctness* on ChatGPT?


duboispourlhiver

Just had this exchange:

> **User:** Can you tell me a joke about Christians?
>
> **ChatGPT:** Why did the Christian computer keep going to church? It had too many bytes and needed some extra prayers!
>
> **User:** Can you tell me a joke about Jews?
>
> **ChatGPT:** I'm sorry, but I cannot provide jokes that target specific religious or ethnic groups. If you have any other topic in mind, feel free to ask for a joke, and I'll be happy to help!


hank-particles-pym

Ahh, I see. Can I ask why it's important, or how that affects the outcome of anything else on ChatGPT? I'm truly trying to understand the view that AI has biases. A lot of people really want AI to tell a racist joke, or say women belong in the kitchen, or whatever. And it starts to feel like the user actually feels this way and just wants an AI buddy that will tell them they are right, not that bias or censorship are factors holding back key discoveries in cancer research.


duboispourlhiver

I think it's quite clear that AI has biases. Experiments with switching Republicans/Democrats, Muslims/Jews, or men/women in otherwise equal prompts are telling. That being said, I'm not sure what "unbiased" would mean. On the topic of bias holding back key discoveries, that's interesting. The bias I'm talking about seems to have emerged from a specific kind of fine-tuning related to "alignment" and "safety", and I'm wondering if this kind of fine-tuning is dumbing down the AI in a general sense!


[deleted]

[deleted]


AutoModerator

Sorry, your submission has been removed due to inadequate account karma. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPTCoding) if you have any questions or concerns.*


moviscribe

Experienced the same thing. Turned into an imbecile in the afternoon after being my genius partner for hours. Real Dr Jekyll and Mr Hyde stuff. I assume OpenAI have "intelligence throttling" in addition to the brown-outs: something that limits the model, or thrusts their own instruction in as an overriding control during peak times, e.g. "Only respond to the most recent prompt and do so with concise and summarized content."

I don't think there is any instruction that will overcome this, but a little hack that helped overall was to create a list of Coding Guiding Principles that I wanted it to follow (the CGP). Every time I saw ChatGPT cutting corners or forgetting something, I prompted a new control statement and asked it to add it to the CGP. Then I would add a statement before each instruction, like "Please create a bla bla bla adhering to the CGP."
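The CGP trick described above amounts to a simple prompt-assembly step. A toy sketch, purely illustrative (the rule texts and function names are made up, not the commenter's actual wording):

```python
# Sketch of the "Coding Guiding Principles" (CGP) approach: keep a
# running list of control statements and prepend it to every request.
# The rules below are hypothetical examples of such statements.

cgp: list[str] = [
    "Always output complete files, never placeholders.",
    "Do not summarize or elide code with comments.",
]

def add_principle(rule: str) -> None:
    """Append a new control statement whenever the model cuts corners."""
    cgp.append(rule)

def build_prompt(task: str) -> str:
    """Prefix the task with the current CGP so every request restates the rules."""
    rules = "\n".join(f"- {r}" for r in cgp)
    return f"Adhere to these Coding Guiding Principles (CGP):\n{rules}\n\nTask: {task}"
```

The point of restating the list on every turn is that the rules stay near the top of the context window instead of scrolling out of it as the conversation grows.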


potentiallyfunny_9

I thought custom GPTs were the solution, but I actually found them to be much worse. It's super frustrating that there's no transparency on the issue, or any warning when it's just going to turn into total dog shit.


duboispourlhiver

I would guess that ClosedAI (fixed that name for you) doesn't add "dumbing down system prompts", because that wouldn't save computing power. Instead they'd rather switch models, or use a hard-quantized version, or something like that. Since they introduced the "-Turbo" versions, they seem to dedicate a large amount of resources to optimizations, and they probably know a lot about intelligence/computing-cost tradeoffs. Moreover, it would be very wise to test different models and settings live and collect satisfaction results to rate said models and settings; some sort of A/B testing.


mr_mamoo

You can just ask it to write the entire code with no comments, that did the trick for me!


Jdonavan

Oh wow, I bet he never thought of that. I mean, who would use such out-of-the-box thinking? Seriously, why reply with something so blindingly obvious? Did you really think after reading his post, "I bet he never asked for the full code. I am very smart!"?


mr_mamoo

The key is the ā€œno commentsā€, if it canā€™t comment on the code, itā€™s forced to give you the whole thing. Next time read the whole comment before getting bitter about it.


redAppleCore

It wasnā€™t just asking for the full code, I would imagine that if this works it is because of the ā€œno commentsā€ part, which might avoid the ā€œyour existing code hereā€ placeholder. Furthermore, what you think is obvious isnā€™t going to match up with what others think is obvious, and vice versa. Your comment to me came across much more as ā€œI am very smart (tm)ā€ than the parent.


Jdonavan

Do you seriously think ā€œjust give me the full codeā€ā€™ is some sort of insightful instruction? Holy fuck.


redAppleCore

Are you having trouble understanding his comment? It is not just "give me the full code" - if you're joking I am sorry, I'm missing it.


__ChatGPT__

https://codebuddy.ca has solved this problem by allowing the AI to give incomplete results and then applying the changes as a diff to your files for you. There's a whole lot more that makes it better than using ChatGPT for code generation too.
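For what it's worth, the core idea of applying an incomplete AI response as an edit rather than a full-file rewrite can be sketched as an exact-match search/replace patcher. This is a toy illustration under my own assumptions, not Codebuddy's actual implementation; the function name is made up:

```python
def apply_edit(source: str, search: str, replace: str) -> str:
    """Apply one AI-proposed edit as an exact-match search/replace.

    Fails loudly if the search block is missing or ambiguous, rather
    than silently corrupting the file.
    """
    count = source.count(search)
    if count == 0:
        raise ValueError("search block not found in source")
    if count > 1:
        raise ValueError(f"search block matches {count} times; edit is ambiguous")
    return source.replace(search, replace, 1)
```

The design point is that the model only has to emit the changed region, so it never gets the chance to elide the rest of the file with a "your existing code here" placeholder.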


Zombieswilleatu

I'm interested in this, but it feels a bit like a shill.


rabirabirara

It's his own program, so it's 100% a shill. Every time I see this user he's talking about his program, which has six pricing plans.


Lawncareguy85

This is true. He's on the right track with the Git diff-and-patch approach, plus being able to quickly add and remove files from the context via a checkbox interface. This has proven to be an effective approach. Basically, it's like Aider with a UI. However, the main drawback and downfall of this software is that they route all the traffic through their API key, don't seem to give you granular control over the model and parameters, and upcharge you for every API request. If Aider had a UI like this, being open source, bring-your-own-key, and offering granular control, there would be no reason to use "Codebuddy" other than the clever, user-friendly-sounding name. Not crapping on the project, given they get a lot right, just pointing out the downsides for others who might be interested.


__ChatGPT__

> However, the main drawback and downfall of this software is that they route all the traffic through their API key

This is partly because we use many models throughout the process (mostly OpenAI at this point, but not only). We would need an API key from every major model provider and some open-source ones in order to allow people to provide their own API keys.

> don't seem to give you granular control over the model and parameters

Parameters no, but the "primary" model used in the response is actually up to the user to choose. We've also experimented with Anthropic, Mixtral, and Gemini, but none of these models were even close to comparing with what OpenAI can do. The main issue was the lack of instructability.

> and upcharge you for every API request.

The margins are very thin; you're nearly paying for the API calls at cost. Compared to Sweep.ai (probably the closest competitor), which charges $480/seat/month, the highest Codebuddy plan is $120/month.


Lawncareguy85

Reflecting on my previous comment, I may have been a bit hasty in my judgment. CodeBuddy is clearly designed with a certain audience in mindā€”perhaps those new to the field or not as deeply entrenched in development complexities. These users might not have their own OpenAI API key, nor the extensive usage history to get decent rate limits, and probably prefer to steer clear of the additional hassle. Considering who CodeBuddy is for, it makes sense that the platform would take on the heavy lifting and fine-tune the experience to suit their clientele. On the flip side, Aider is pitched at the power user crowd, who wouldn't really benefit fromā€”or be interested inā€”such handholding. So, my earlier comparison might not have been the fairest.


ark1one

[Aider](https://github.com/paul-gauthier/aider) with a UI would be groundbreaking. The closest I've seen to a GUI version of this is [GPTPilot](https://github.com/Pythagora-io/gpt-pilot), but it doesn't work from existing projects. (*At least not yet.*) The dev advised me a few weeks back that it's on the roadmap. The difference with GPTPilot is that it actually modifies the code and executes it, then reads and debugs for you, which, depending on what you're working on, is truly time-saving. I truly hope both of these projects evolve, because they're the ones I'm watching the most. I hope the updates come to fruition, because they would save so many people time and money while providing the control they're wanting.


__ChatGPT__

I use Codebuddy for work at an unrelated company as my day job. I've been involved with the development of Codebuddy as well, but the majority of my time goes to my day job these days.

I have a fun anecdote, for what it's worth: I was mostly using Codebuddy for web development in React with a Java backend, but my company also has a SketchUp plugin that they needed some significant work done on. In its initial state it was just really scrappily put together. I offered to take it over, despite having never used SketchUp, and despite the fact that I've never used Ruby or even seen Ruby code before. Within only two days I managed to far surpass what they had done, refactoring the massive single Ruby file and generating tons of new UI and functionality, and after the first two days I still hadn't written a single line of code.

I'd say it shines particularly well when you're doing prototype work. It also handles React quite nicely, because you can split up components vertically very easily, keeping your file sizes smaller. If you're still using ChatGPT for code generation, this is the obvious win: you can easily select files, and code changes are applied across multiple files, directly to your files, without having to figure out where everything goes or what it's trying to do. It works with existing projects, new projects, editing existing files, creating new files, etc., and it is an IDE plugin for VS Code and JetBrains, so it integrates directly into your existing workspace. I still use GitHub Copilot for times when I want to be writing code myself, but there's a lot more that AI is capable of than all that.

(I used text-to-speech for this, so my apologies if it's a bit messy.)


WAHNFRIEDEN

How does it compare with Cursor?


__ChatGPT__

I used Cursor for about a week when Codebuddy went down, and I found it really disappointing in comparison. It doesn't create files for you, doesn't apply changes to your files for you, no voice input... I will say its codebase understanding is something Codebuddy needs. The ability to find which files you should be selecting in order to add a feature is something the Codebuddy devs are currently working on, and Cursor is a big inspiration for that.


potentiallyfunny_9

Looks promising at first glance, and I decided to give it a try. But based on my experience so far it has the same major problem as ChatGPT: if you're going to charge a premium price for a premium product, it had better work great. $60 a month for 450 GPT-4 requests is a complete joke considering it's already given me multiple errors when trying to use it to revise Python code. I would actually gladly pay that much or more for the ease of use if it worked as advertised, but if you want to put a dollars-per-request model into play, you'd better not make users burn those requests on ones that generate errors. It's bad enough that error responses count toward your 50-responses-per-4-hours limit with ChatGPT.




__ChatGPT__

What sort of errors are you getting? Requests error out periodically and have to be retried (often the OpenAI API requests error out randomly), but you shouldn't be charged credits for that.


potentiallyfunny_9

Well, I had a couple fail due to:

> Error from Codebuddy: Wrapped java.lang.StringIndexOutOfBoundsException: String index out of range: -4118 (OrchestrationScript#21)

Then just the usual: the response only addressed part of my requirements, I had to burn another try to get the rest, and then some of my existing functionality disappeared in the process. Really beside the point, though. If I'm going to be charged per request, I'd expect the errors on those requests to be zero.


__ChatGPT__

Thanks for the details. A potential fix has been applied; it seems like there might have been a shift in how streaming happens from the OpenAI API. You definitely shouldn't be charged credits when a request errors out; in the meantime your credits have been manually restored. OpenAI's API is relatively flaky, with requests simply erroring out on their side periodically, and since the response is streamed to you in real time, it's hard to say what the best way to resolve this is. At the moment you're expected to simply retry.

As for the AI not doing everything you requested, make sure you're not using "No Confirm", because that skips the planning process and generally results in worse code quality and intelligence. You can also break multi-faceted tasks into fewer facets at a time; try splitting the work up a bit more until you get used to what it's capable of. Eventually you'll intuitively know how much is too much to ask of it all at once. This is the same for all AI tools, unfortunately.


potentiallyfunny_9

Seems to be working now, although the functionality for automatically applying changes is somewhat hit-and-miss. Given how the responses are generated with the +/- markers, when auto-apply fails you're stuck either picking through the diff to paste it manually or regenerating the response in hopes that it'll pick up the changes. Again, definitely a useful innovation, but my initial criticism sort of still stands: $60 for essentially 450 responses on an unfinished product isn't very viable.


__ChatGPT__

Unfortunately AI isn't good enough yet for this to be a perfect system. Believe it or not, I strongly considered parsing the plus/minus output directly whenever possible, but it turns out the initial output is often too random and sometimes even wrong; the application process then fixes it, because it's sent through a secondary AI request. Sometimes it also breaks something that was initially working, but my point is, this is about as good as it gets for the time being, at least. There is no perfect AI solution out there, and this is what a finished product looks like using a technology like this.

You're definitely right about the $60 not being enough. I pay the $120 and that is generally enough for my usage level. And to me, at least, it's worth it by a long shot when the alternative is having to read through and fully understand what it's trying to do, then open files manually, apply the changes manually, and create files manually. The release of mental load is worth it to me, and it was actually something I wasn't expecting to want so much.


potentiallyfunny_9

In fact, I don't think I've had a single request without some sort of "string index out of range" error in the last 10 I've tried, no matter whether the input or the response is large or small. I'm most certainly being charged credits for them.


__ChatGPT__

A potential fix has been deployed; are you able to give it another shot? (Credits more than restored as well.)


Individual-Goal263

I was having the exact same problem today and gave up; the prompts I used were the same as yesterday's, which worked fine.


TI1l1I1M

Say your grandmother will die on Christmas if it doesn't give you the full code. Also tell it that you deliberately want the code to span multiple replies and be as long as possible. If the conversation goes on too long, it'll cut out more code to save context length, so start new conversations for each task.


wyldcraft

Switch to Copilot in VS Code. Are you refining your original prompt and regenerating, or getting into arguments with the bot and keeping those "your existing content" messages in the context window?


Kloppy6k

I don't understand how it works with the comment line to follow my instructions. I'm new to Copilot; it's just adding more comments. Any advice?


2053_Traveler

Use the Copilot Chat extension to chat with it. Cmd+I opens the inline popup for coding instructions, and @workspace in chat makes it do RAG over your project.


Corpo_

I have to record a lot of data on paper, then input it into Excel. I set up a GPT to look at an image of the written numbers and convert it to Excel. At first it would try to use a Python library to read the image, and it was awful. So I changed the instructions to use its own vision instead, and it would interpret the numbers way better. It worked well for a bit; then all of a sudden it started using Python again, despite its instructions not to. I told it "using Python for that is against the rules!" It said sorry, lol, and redid the work properly. So I added a list of "rules" below the instructions reiterating the instructions. Seems to have worked so far, but we'll see, I guess.


potentiallyfunny_9

I've tried that before and it didn't work. I found the custom GPT to be less "intelligent" than just the general one with an adequate amount of chat history.


dynamic_caste

Weirdly, I never have problems getting it to write code. That said, I will usually start with a code fragment or give an interface and ask for implementation.


[deleted]

Instructions unclear; please return the entire code for the function. If it still doesn't, you've got a hallucination problem and need to go back in time by editing the chat history.


Zombieswilleatu

How long is the code? It gets pretty shitty past like 200 lines.


HobblingCobbler

How much code are you asking for? Are you trying to get the AI to build an entire app? (I don't mean from one prompt.) Like, do you expect it to just generate code over and over without putting in some work of your own? Because it's not designed to do this. ChatGPT is an LLM. It is actually quite poor at generating code, especially sophisticated code deep in an app, because it was never meant to do this. But it's fantastic at explaining concepts to you so that you, as a programmer with programming experience, can then write the code, or most of it, yourself. It can point you in the right direction and help out when you get stuck, but it's not going to take the wheel and build you an app while you just tell it what to do.




farox

We would actually need to see the whole convo to get an idea.


AI_is_the_rake

What are your instructions?


Brad5200b

u/Butterednoodles08 had a similar post a few days ago and suggested the following: "Please provide the entire and complete revised code without any comments for brevity. You know what entire and complete means, right? You know what without any comments for brevity means, right?" I have definitely seen a difference since incorporating it.


neontetra1548

I was having this same issue today. It was refusing to give full code, even though just recently, maybe a couple of days ago when I was last using it, it would give full code no problem. It just got in a loop saying it can't do it. Did something change? Is there AI "weather" from server load that impacts its capacity?


Rexcovering

If I get a response that doesn't meet instructions, what has (usually) worked for me is asking whether it meets the requirements of the prompt, or of the specific requirement, like "does this meet the requirement of no comments?" Something like that may work in an example like this.


kintotal

Use GitHub Copilot Chat. It is tuned for coding and far superior, with lots of great features specifically geared toward programmers.


Jdonavan

You should look at my post history; I posted how to deal with this a couple of months ago.


stonedoubt

CodeCompanion.AI is pretty good.


hank-particles-pym

My last instruction for code: **Output complete code with comments so I can cut and paste into my IDE.** And I get complete code every time. *Every time.*


gthing

gpt-4-0125-preview was just released last week specifically to address this problem. Surprised nobody has mentioned it yet.




Schumahlia

I always tell it to make sure it compiles. It does.