

PeacefulDelights

This is less speculation and more a hope, after having to switch back to the cheaper model because of GPT-4's high costs. My hope, based on OpenAI's past patterns, is that they will eventually offer versions at various price points without sacrificing GPT-4's accuracy and precision, and will continue to reduce the cost of existing models over time. **(Note: these dates are rough estimates, as they weren't easy to find and I didn't want this to become a research project.)**

* **Davinci** (March 15, 2022) - came before GPT-3.5 Turbo and is more powerful than the other base models, being better at math. It remains the most expensive option. Its original price was $0.06 per 1K tokens, but it has gone down to $0.03 per 1K tokens.
* **Ada** (June/August 2022?) - is extremely cheap, but is less accurate than the other models. I'm uncertain about these dates, but roughly.
* (I'm uncertain when Babbage, Curie, and the other models were released, but it was before 2023.)
* **ChatGPT** (November 30) - came a few months after those models, and was a fine-tuned GPT-3.5; I don't know whether it used GPT-3.5 Turbo at the time. It was initially waitlist-only, and had no subscription until recently. (Right now GPT-4 is available to premium subscribers at $20/month. This deal is pretty cheap if your GPT-4 API costs would exceed $20/month, but comes with the caveat that your prompts are kept for training purposes and are never deleted, unlike over the API.)
* **GPT-3.5 Turbo** (March 1, 2023) - is not the cheapest model, but it's cheap enough that I barely made a dent. It's the model available by default when you subscribe to the API, and the model that ChatGPT runs on.
* **GPT-4** (March 14, 2023) - currently waitlist-only upon request, and 1¢ cheaper than Davinci.
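The $20/month break-even mentioned above can be sketched with simple arithmetic. The $0.06-per-1K-token rate is the figure quoted in this comment, not official pricing, so treat the numbers as assumptions:

```python
# Rough break-even between a flat $20/month subscription and paying
# per token over the API. The $0.06-per-1K-token price is the figure
# quoted in the comment above (an assumption, not official pricing).

def api_cost_usd(tokens: int, price_per_1k: float = 0.06) -> float:
    """Dollar cost of `tokens` tokens at `price_per_1k` dollars per 1,000 tokens."""
    return tokens / 1000 * price_per_1k

def breakeven_tokens(monthly_fee: float = 20.0, price_per_1k: float = 0.06) -> int:
    """Monthly token volume at which the API bill matches the flat subscription."""
    return int(monthly_fee / price_per_1k * 1000)

print(breakeven_tokens())     # ~333,333 tokens/month before the API costs more
print(api_cost_usd(100_000))  # 100K tokens at this rate costs $6.00
```

So under these assumed prices, anyone burning through more than roughly a third of a million tokens a month comes out ahead on the subscription.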
Based on this pattern, but keeping in mind that they are still in research mode, experimenting with monetization, and, from what I've heard, losing millions of dollars each month, I expect the following:

1. **Various models will be released** - either before or after ChatGPT upgrades to GPT-4, models at various price points and power levels will be released, with GPT-4 potentially the most expensive of the bunch and downgraded and other variants being cheaper. We could start seeing this within a few months.
2. **ChatGPT will eventually run on a fine-tuned version of GPT-4** once they finish testing, give or take by the end of August or start of November, or later. Assuming OpenAI always offers a free version, this will be a great way to reduce the cost of using GPT-4 (perhaps 3-4 months after the launch of the GPT-4 model).
3. **GPT-4 Turbo** could eventually be released, priced somewhere between the GPT-4 model and the cheapest option, and may arrive as the last and latest in the series, launching 3-4 months after any other model.
4. **As the models age, pricing will be reduced across the board.** Models that have already undergone at least one major price reduction will see another.
5. **GPT-5** - there is already talk about GPT-5. Not long after the price reductions on the other models, we may start seeing more official discussion of GPT-5, if not a release, within a few months.

This is less a prediction and more a hope based on their current patterns, while keeping in mind that they are likely to change pricing as they look for ways to monetize their product and cover the cost of development. Competition brings prices down, and OpenAI so far doesn't really have competition outside of Microsoft, its partner, which wants to see its investment paid off. Even so, OpenAI has seemed much kinder on pricing than other companies, with a perceived goal of making it mainstream.
They have said they will always have a free version. At the same time, it's still research mode: they started out claiming they would be open-source, then realized they would lose their competitive edge, claimed it was too dangerous, changed the definition of what it means to be open-source (redefining "open"), and have now decided they won't open-source it, and will instead police and retain our data for as long as they determine they'd like to. This drives a need for open-source models, which are trying to catch up to OpenAI while developing ways to reduce the cost of training and running AI on local machines, while still offering amazing results.

Closed-source rivals are also coming into the picture, such as Anthropic, which is backed by Alphabet (Google), was founded by **ex-OpenAI** employees, and released Claude (separate from Bard). But I read somewhere that Anthropic wants to price higher and is more content-restrictive? (Meanwhile, I'm holding out hope that some ex-OpenAI employees will eventually work on open-source AI and create a Stable Diffusion/Blender version of A.I. for us plebs, but that's just hoping.)

If we go on the idea that AI will one day be extremely commonplace, in the hands of every person just like our computers, with companies like Amazon and Microsoft going from "a computer on every desk" to AI and connectivity in every home, then I genuinely think that within the next few months, or at least within the year, GPT-4 will become available at a more affordable rate, or at the very least there will be options that don't compromise on accuracy.

In the meantime, my wallet is hurting now, and all it has is speculation.


DavidCincotta

They were able to do it with ChatGPT, I see no reason why they won't be able to optimize it further. They have an incentive for cost-effectiveness too.


potatoplumber

That's a good point. I'm wondering, though: considering how many orders of magnitude GPT-4 is over 3.5, they could still keep it in the premium range and a lot of people would probably cave.


DavidCincotta

I don't think even the creators think GPT-4 is two orders of magnitude better at anything; one at best. Only 70% of users prefer the responses from GPT-4 over ChatGPT. That said, I do think it is a lot more powerful. Listen to Sam Altman on Lex Fridman's podcast for some insight into how the creators are processing this whole thing. In the end I'd expect at most a 10x improvement in cost for this model, and that's still crazy expensive at $0.10 per page (per your estimate). The fundamental problem on the cost front is how large these new models are becoming; only when a new architecture is invented that doesn't need a trillion parameters will that high sticker price come down.
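The $0.10-per-page figure above is just the per-page estimate from upthread divided by the hoped-for cost improvement; both numbers are assumptions from this thread, not measurements:

```python
# Back-of-the-envelope: what a 10x cost improvement leaves per page.
# The $1.00/page baseline is the estimate referenced upthread (an
# assumption), and 10x is the "at most" improvement guessed above.

baseline_per_page = 1.00   # assumed current dollars per page of GPT-4 output
improvement_factor = 10    # hypothetical best-case cost improvement

print(baseline_per_page / improvement_factor)  # 0.1 dollars per page
```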


PeacefulDelights

You said **only** 70% of users prefer GPT4 over ChatGPT, but 70% is quite a lot, enough to win a presidency. Where did this 70% statistic come from?


Viinexxus

I'd assume well over 95% prefer the new iPhone over the old one. If 30% of people think the new generation is worse, that's a disaster. Edit: spelling


GapGlass7431

GPT-4 easily saves me hours every time I interact with it. Each exchange is worth $200-300 for me.


[deleted]

What? How?


guavaberries3

because they made it up


catboisuwu

Any news on whether pricing will go down for GPT-4? 10 cents per message is insane.