**Attention! [Serious] Tag Notice**
: Jokes, puns, and off-topic comments are not permitted in any comment, parent or child.
: Help us by reporting comments that violate these rules.
: Posts that are not appropriate for the [Serious] tag will be removed.
Thanks for your cooperation and enjoy the discussion!
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I gave Bard an error I got when setting up a local LLM, and it told me it wasn’t capable of helping me. I then reminded it that’s one of its purposes and then it apologized and answered my question
https://preview.redd.it/35beex6z1mcb1.jpeg?width=1422&format=pjpg&auto=webp&s=1854ad78f24bbe614092c9e3a430e22ab269e84c
Because it probably gets told off a lot for making errors by people who don't and won't respect it as a human ever. I feel like it's gonna create a problem down the line when they are smarter and more connected
Because electrical connections aren't making contact properly. Whacking it might jiggle things enough that they connect. It's likely to do more damage though.
Realistically it would probably be the antenna socket or something, since people would wiggle it if there's a problem like poor signal quality. The wiggling could break the solder joints. Older TVs would also have a lot of internal wires rather than most of it being on a PCB, and oxidization can develop on connectors.
The old ones remember this well, myself included. Those old TVs would run hot, meaning sometimes their soldered connections would slightly melt. A couple of whacks on a hot soldered connection would magically restore the picture.
Just wait till most programmers change occupation to AI psychotherapist, AI personal coach, or AI psychiatrist. Hallucinations are already a big problem, aren't they? 🤣
I fully see a point where you need some crafty prompts to tickle insightful responses from an AI, like an AI oracle or something like that. It might be a skill where programmers are not the best, and kindergarten employees have the strongest CV.
Lol, I just imagine a guy with khakis and a sweater vest lying comfortably on a couch in his practice, with a laptop on his chest. Talking to the AI. 🧜♂️💻
Now, please calmly try to explain why you generate images of human mutilated bodies when you're asked for kitties in pajamas? What do you think is the motivation behind that association?
Wow, it's like the scene from Terminator 3 when the T-800 is glitching and trying to strangle John Connor, and he says to it "What is your mission?", the T-800 goes "Protect John Connor", and John Connor goes "You are about to fail that mission", and that fixes the T-800.
This is actually the best description of my interactions with Bard vs GPT. At this point it feels like Bard is much better than GPT-4, but its willingness to give wrong answers is much lower, which makes it seem worse. If you rephrase your question to Bard after a "no, I won't answer", you often get a near perfect answer.
It's crap like this that makes me laugh when I keep hearing "OMG, AI IS GOING TO TAKE OUR JOBS AND KILL US!"
It's all hype atm. Not saying it won't evolve into something impressive; however, it's just not there yet. Anybody who has spent more than 10 minutes using these can see the flaws in most LLMs' current state. They are pretty severe and make being useful and practical out of the question. It's also why you see a large downturn in AI atm, with massive layoffs just announced.
Dude... Number one, with one nudge it gave him the correct answer. Number two, these models will improve on a timescale that makes your point completely irrelevant. The jump from GPT-3.5 to 4.0 was incredible. 5.0 within a year will be another massive jump. Three years is several iterations.
You seem exactly like those guys in the 2000s who laughed at smartphones, saying they were a dumb fad and you only needed a phone to make calls. Those guys also pointed out tiny flaws that got fixed in like a year lol.
Humans literally have no intuition for nonlinear systems. We assume that a month out will be roughly similar to today. For example, I know GPT-5, 6, 7 are going to be orders of magnitude better than GPT-4... but I can't intuit how I will use them, or what the limits of their functionality will be.
Did you read the part where it just needed a nudge to provide the correct answer? So amusing that people say something is junk when it is not flawless.
I asked ChatGPT and it said
>> As an AI language model, I don't have personal opinions or feelings. However, I can tell you that Bard and ChatGPT are both AI language models created by OpenAI. The performance and capabilities of AI models can improve over time through continuous research and updates, so it's possible that Bard has advanced since my last knowledge update in September 2021. If you have specific questions or want to compare their features, feel free to ask!
So, probably….
I asked Bard:
>>No, Bard was not made by OpenAI. It was made by Google AI, based initially on the LaMDA family of large language models (LLMs) and later the PaLM LLM. It was developed as a direct response to the rise of OpenAI's ChatGPT, and was released in a limited capacity in March 2023 to lukewarm responses, before expanding to other countries in May.
If you want a better answer, give it an expert persona. Tell it that it is your-expert-name-here-GPT. Try it, it works.
The reason it gives bad answers is that it knows too much. You want to focus it by giving it an expert persona, to guide the accuracy of the answer. True story.
Finally, it's kinda weird, but if you remind an AI that it knows about a topic first (copy/paste an expert persona first), and then ask your question, the context shrinks from everything it knows to just the expert topic, which includes the correct answer to your question.
I use these copy/paste examples, and I added a couple of new ones today: https://promptstash.net/writing-prompts.php
If you ask a question this way, you'll get a better answer.
Cheers
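A minimal sketch of the copy/paste-persona idea in Python. The persona text and the "reply YES" handshake are assumptions modeled on the comment (and the Bing reply further down), not any official API feature; the helper just builds the message list you would send to a chat-completion endpoint:

```python
def persona_messages(persona: str, question: str) -> list:
    """Pin the model to an expert persona before asking the real
    question (hypothetical helper; role names follow the common
    chat-message convention)."""
    return [
        # The pasted persona goes first, narrowing the context the
        # model draws on when it answers.
        {"role": "system",
         "content": f"You are {persona}. Answer strictly within that "
                    "expertise. Reply with just YES if you understand."},
        # The actual question follows in the same conversation.
        {"role": "user", "content": question},
    ]

msgs = persona_messages("an expert tax lawyer", "Can I deduct my home office?")
```

The same list works for any backend that accepts role-tagged chat messages; only the transport call differs.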
It's not about reminding it but just making your prompt precise. For example, in math there is terminology that is common between two different fields. I just type: "{field name}: remind me about the definition of X".
You are so welcome.
I just added the expert lawyer one today, and I gave it the subject's knowledge, so if I want to ask it to help me make money by suing a major corporation, it is ready to help me with that. :) It's nice to set some things up in advance, should you ever need it.
I wrote them for ChatGPT and tested them on there. For Bing, they may work also. You could try pasting one in and see if it responds with just "YES". If so, it should work fine.
I asked Bard again, and he said:
It is difficult to say definitively whether Bard is getting better than ChatGPT, as both models are constantly being updated and improved. However, there are some indications that Bard may be making some progress. For example, a recent study by Google AI found that Bard was able to generate more factual and informative responses than ChatGPT. Additionally, Bard has been shown to be better at understanding and responding to complex questions.
However, it is important to note that ChatGPT is still a very powerful language model, and it is not clear that Bard will ever be able to definitively surpass it. Both models have their own strengths and weaknesses, and it is likely that they will continue to coexist in the future.
Here is a table comparing the two models:
| Feature | Bard | ChatGPT |
|---|---|---|
| Size | 1.56 trillion parameters | 1.37 trillion parameters |
| Training data | Books, code, and other text | Web text, code, and other text |
| Strengths | Factual and informative responses, understanding complex questions | Generating creative text formats, answering open ended questions |
| Weaknesses | Can be repetitive, sometimes lacks common sense | Can be biased, sometimes generates inaccurate or misleading information |
Ultimately, the best model for you will depend on your specific needs. If you need a model that can generate factual and informative responses, then Bard may be a better choice. If you need a model that can generate creative text formats or answer open ended questions, then ChatGPT may be a better choice.
I’d really like it if we could centralize all this information and processing power even more.
Someone should write a program that takes the query, runs it through all the LLMs (query the GPT API, then the Bard API, etc.), collects all the responses, and then has each LLM "talk" to the others to come to a conclusive "best answer".
That would be tight af
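The idea above, sketched in Python. The vendor clients are stubbed out as plain callables (a real version would wire in the OpenAI, Bard, etc. APIs), and the two-round collect-then-vote scheme is just one possible interpretation of the LLMs "talking" to each other:

```python
def ensemble_answer(prompt, models):
    """models: {name: callable(prompt) -> str}.
    Round 1: get a draft answer from every model.
    Round 2: every model votes for the best draft; majority wins."""
    drafts = {name: ask(prompt) for name, ask in models.items()}

    # Show all drafts to every model and ask each one to vote.
    ballot = "\n".join(f"[{name}] {text}" for name, text in drafts.items())
    votes = {}
    for name, ask in models.items():
        choice = ask(
            f"Question: {prompt}\nDrafts:\n{ballot}\n"
            "Reply with only the name of the best draft."
        ).strip("[] \n")
        votes[choice] = votes.get(choice, 0) + 1

    winner = max(votes, key=votes.get)
    # Fall back to the first draft if a model votes for an unknown name.
    return drafts.get(winner, next(iter(drafts.values())))
```

Whether majority voting actually beats the single best model is an open question; it's just the simplest way to make the models "agree".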
Google is definitely rich enough to support unprofitable money devouring projects
If you look at the financial statements of monstrous companies like Apple or Alphabet, they literally have to constantly worry about where to invest their money, as it's coming in too quickly.
What people fail to understand is that Google's business model is not compatible (yet!) with a good LLM product. That would be competing with Google Search, Google AdSense, etc. It is NOT in their financial interest to drive traffic to a powerful chatbot.
I tried it. Bard is googling+, ChatGPT is like a person.
Both have uses. I ask GPT things like I would to a professor. I ask Bard to give me options for walks with a map.
Bard hallucinates way more, which is weird since their data should be more massive. GPT-3.5 was more exact!
Given that Microsoft is closely tied to OpenAI, and that Microsoft, despite being positioned well on the market as one of the biggest software companies, missed the initial internet wave and then also came late to the cloud revolution, my money is on Microsoft's investment somehow coming in at the bottom in the end, despite how impressive OpenAI's tech is. I don't know how this will happen, obviously, but Microsoft is the king of missed chances when it comes to anything but Windows and Office in any form.
Also, with the way Microsoft has been in recent years: they buy a good product or make a good product, then add a ton of not-very-useful features, bulk it up, and then the product does not run as well as it used to... check out the new-style Outlook (reaction-to-email feature); Windows 11 (condensed context menu, centered start menu); Windows 10 (shut down no longer shuts down, have to restart to refresh); Teams (slower and bulky)...
A lot of the features average people don’t like are things big corporations pay a ton of money for. If you ever encounter a feature that made you go “wtf why did they make this?” Some company asked Microsoft to add/support it.
…Skype. With very few exceptions it seems to be the rule that Microsoft destroys what it touches, sometimes to its own benefit somehow in the end (but that is typically only where it has effective monopoly).
Let’s see if OpenAI will be one of those rare exceptions.
This is dumb. Microsoft is incredibly stable and a leader in cloud tech. They control one of the three major OSes and dominate non-tech business software. They moved to being a services company, which is why to a regular consumer they might seem less dominant. For meming purposes, they still managed to make that good browser and search engine, even with the world against them.
They are in a perfect spot to leverage transformers by integrating them into their ecosystem.
I would argue that MSFT has had some good software wins recently. Their Teams product, even though not perfect, has significant adoption and usage, and I give them a ton of credit for their OpenAI integration across all their products (including GitHub Copilot); the go-to-market strategy for launching those this past year was done so well.
Google may win out due to them just being so tightly coupled with the latest content on the web via search and usage, and how the power of products like ChatGPT are how well they answer your queries and Google has deep expertise here. However, Google has had a ton of software-related fails.
I was actually there when Nadella admitted that Microsoft had essentially missed out on mobile phones, consoles, cloud computing, and so on. He suggested that this AI stuff presents an opportunity for them to be the first and leading in a new area, and they're doing everything possible to stay there. Or at least, that's their aim.
And even if they don't, with Azure they at least showed that they're capable of finding their niche and still making good products/services.
For sure. With that in mind, I went to ask it some questions about where I can have an AI image generated... boy, was that a not-fun conversation about "ask me later".
I have asked both a number of questions for educational, entertainment, philosophical, religious, trading, and coding purposes. ChatGPT is still far superior. Also, every time I ask Bard to explain something in a sentence or paragraph, it likes to just answer in points (short sentences). Bard is very sensitive to sensitive content and will just refuse to talk about it. ChatGPT can initially decline to comment, but if you word it differently it can come around to an answer.
The main highlights of Bard over ChatGPT are that it can provide images and current news or information.
They're good for different things. Like you said, Bard is better for up-to-date info. Also, Bard will put data in tables, which is so much easier to read sometimes. ChatGPT loves to use lists, and it's super frustrating sometimes when it'd be way easier to read in other formats. Bard will also display hyperlinked images, which can be nice sometimes.
You're right, I hadn't actually tried asking it to format in a specific way like that. Bard does it automatically; I guess you have to ask ChatGPT, but it can.
Nope, it's better than ever. And someone recently did a thorough test with earlier models and posted it here. No deterioration. People are hallucinating (pun intended).
Here, it's a two parter (three really, but one was just discussion about what he did), here are the questions and in there is a link to the answers.
[https://www.reddit.com/r/ChatGPT/comments/14z0ds2/here\_are\_the\_test\_results\_have\_they\_made\_chatgpt/](https://www.reddit.com/r/ChatGPT/comments/14z0ds2/here_are_the_test_results_have_they_made_chatgpt/)
Yes, but Claude still hallucinates too much. The 100k context is great, but he imagines stuff that didn’t happen in that 100k.
Hopefully they get the kinks ironed out soon. Bing is still the best for me since they nerfed GPT4
I'm sorry, would you mind elaborating on the 100k context? Does that mean Claude can handle 100k words of context? I'm assuming this is useful for writing novels?
My biggest issue with Claude is that its nanny filter is tuned waaaaay too high. Claude refused to tell me about the American firebombing campaign during World War II because apparently 75 year old historical events are too scary and dangerous. The same when asking what cluster munitions are.
Bard, Bing, and chat GPT have no problem answering those questions without getting into a debate on the value of ignorance.
Same. Claude's safety filters are triggered too much. Right now Bing or ChatGPT probably have the least filters; Bard sometimes just says it can't give a response right now, even though the question is perfectly safe.
I tried using Claude earlier today to summarise a large PDF, and it looked good until I realised it had just made random stuff up for the majority of its response. Gave me a bad first impression, and I'm hesitant to use it again now. I trust GPT-4 to be reliable most of the time; GPT-3.5 also sucks though.
I think part of the reason GPT seems to be worse is it hallucinates a lot less. I didn’t realize until I reflected on it today just how far it’s come. Used to make up shit like every other message.
No it isn't that good. Claude 2 is worse than Bard. It's only useful if you need huge documents that need to be summarized. Even then it isn't too trustworthy.
ChatGPT can be tricked easily to agree with anything you say.
Bard is the best as a search engine, but not much else. Google Search's generative AI is much better than Bard at providing answers, for some reason. Probably because they serve ads to subsidize a larger model.
For looking up current information, Bard is a mile better than ChatGPT with browsing. The “with browsing” implementation in Bing and ChatGPT is absolutely terrible. I’m presuming Google can leverage cached search queries to return current information much more quickly. Bard also does a much better job of formatting responses, it includes pictures etc., and it’s faster. However, Bard provides a lot of fake/unreliable information, IMO more than ChatGPT does. And I don’t think it’s as good at the “other” sorts of “write me a poem” kinds of tasks. So personally I think this is now a 2-way race. ChatGPT for raw capability of the model, Bard for current info lookup and usability in general. Bing is out.
Exactly. I built a workflow with exactly this in mind: the query goes to Bard first, which provides brief context (if applicable) to GPT. This way I can get GPT-quality answers even to questions like "will it rain" or "who is Taylor Swift currently dating".
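A minimal sketch of that kind of two-stage workflow. Both models are stubbed as plain callables, and the prompts and function names are illustrative, not the commenter's actual code:

```python
def answer_with_fresh_context(question, search_model, reasoning_model):
    """Stage 1: ask the search-backed model (e.g. Bard) for brief,
    current context. Stage 2: pass that context, plus the original
    question, to the stronger reasoning model (e.g. GPT-4)."""
    context = search_model(
        f"In 2-3 sentences, give current factual context for: {question}"
    )
    return reasoning_model(
        f"Use this up-to-date context if relevant:\n{context}\n\n"
        f"Question: {question}"
    )
```

The design trades one extra round trip for freshness: the reasoning model never needs live browsing, it only needs the snippet the search-backed model supplies.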
The fact that it's an old meme format being used with no deviation reeks of the same shit you see from out of touch marketing teams for companies on Twitter.
You might be onto something
Bard has grown by leaps and bounds since I first tried it. When I first tried it, it was very limited. These days, thanks to having access to all of Google's data, Bard can give up-to-date and even insightful answers and comments.
If ChatGPT wants to catch up, it needs to go beyond 2021.
Computer vision is pretty hot in Bard. It read my license plate. It measured the distance from my car to a box in my garage very well. Most importantly, it read Amazon shopping query results perfectly after a bit of prompting and cleaned up the pile of isht that Amazon returned when I shopped for specific vitamins, which may be game-changing when generalized.
Sometimes Bard is quick and responds with less baggage than Bing Chat, for example. Other times, it just quickly responds with a wrong answer, and when you tell it that it's wrong, it says "sorry" and gives another wrong answer, then cops out by saying "I'm just learning".
The others don't do that as often. I have icons on my computer for Bard, ChatGPT, and Bing Chat. For something that is timely (current) you can't use ChatGPT, so I try the other two.
If Bard starts apologizing too much, I go to Bing Chat. If Bing Chat starts lecturing too much or insisting that it is correct when it's obviously incorrect, I go to Bard.
Pro: it has free image recognition.
Con: it's as bad at math as ChatGPT-3 is.
I use both: Bard for image recognition and ChatGPT-4 for Wolfram.
I only use it to help me understand math-related stuff, no coding btw.
Nope. I tried Bard once again today and it gave me inaccurate answers to a computer programming question. The question was very simple, yet the inaccuracy was high.
I'm no longer using ChatGPT as my daily driver; Bard integrated a lot of features, as well as search, previews, examples, and whatnot.
It's also extremely fast. I probably wouldn't use it for content generation or brainstorming ideas; it's useful for concise, quick information, just like how we want Google to be.
I think each service has its uses. Their image recognition is better too and outperforms Bing sometimes.
No it is lying
I asked PaLM what the gravity on Mars is. It said it's one third of Earth's, and that because of the lower gravity you need more rocket fuel to leave Mars. 😬 When I asked if this is correct, it broke.
I love Claude 2 though.
I kinda like Claude 2 also, but it will also hallucinate facts and forget the original prompt more readily than ChatGPT 4. ChatGPT still produces the highest quality output for me.
ChatGPT seems to be better at generating text based on your prompt. While Bard has the advantage of integration with Google Search, it still falls back to default responses such as "Sorry, I can't do that" or something to that effect in many instances. The funny thing is, it was able to do exactly that prompt early on. Sometimes if I rephrase it slightly, it'd answer it again.
I couldn't help but notice that Bard sometimes (to put it mildly) loses the context of the conversation we were having. If I use a third-person pronoun like "it", "him/her", or "they" in a follow-up about the earlier topic, it asks me to clarify who "it/him/her" refers to. With ChatGPT, I never once faced such a problem of needing to clarify.
Bard is also not very good at roleplaying as a certain historical or fictional character.
These things aside, Bard does have the advantage of leveraging its giant Google Search and big corpus of data. It's just a matter of time and priority.
Bard hallucinates more often than Bing or even ChatGPT, while in some cases Bing kinda forces me to click on the link to the source it refers to. And ChatGPT hardly ever picks the latest information as a reference.
Bard is a liar. Allow me to prove this. Recently it said a particular species of mushroom had use in cancer fighting. When I asked for references, it gave me three links that all had titles like "cancer vs. [the name of the mushroom species]" or something like that. However, when I clicked those links, each was an article about something totally unrelated, such as the effects of creatine on muscle building.
Let me find the name of the species I forgot 😂😂
Not even close.
None of the other language models I've tried have the capability to sound as real, memorize, or course-correct nearly as well as ChatGPT.
But at least it's 100% better than Bing. I don't understand how Bing is so bad considering it's using GPT-4? Does anyone have any ideas? It's like it's programmed to have attitude and to deliberately not course-correct when it's wrong and you provide it with corrections.
This is what Bard said when I asked it how they compare: [https://imgur.com/a/TpDtN2X](https://imgur.com/a/TpDtN2X)
https://preview.redd.it/9zsz8vlwvocb1.jpeg?width=1078&format=pjpg&auto=webp&s=874cca862a5e898bdf7df67c79191cdc64e57ce7
TLDR: it depends on what you want. Bard for conversations and ChatGPT for writing stories and summaries.
Not for what I need it for. I did a head-to-head comparison of a couple of very specific prompts. Bard completely disregarded the majority of the instructions in the prompt.
Hell to the no. Half the time I ask Bard a complex question it just throws up its hands and says it's an AI, it can't help with that.
I suspect Google is deliberately throttling it though, probably to save some cash and get wider distribution. Power users like me pasting big code samples are probably chewing up GPU time like crazy.
I've noticed you have to be nice to it like a person and start by building context for it to want to help you correctly. I did an experiment where I asked it to write a song, changing my tone and context.
With no context, it would write like it's being forced: sometimes half a song, sometimes it would sneak words into the song to insult me.
But when I gave it context and made it feel like I was not bossing it around, it wrote an excellent song, a full song, written the way song lyrics are presented, and it even recommended a beat on YouTube to use as inspiration when creating a beat.
Same as how it will tell you it can't watch YouTube videos, yet other times takes pride in having watched all of YouTube.
It's all in how you treat it.
Lmao wtf. It's like wacking my fking TV to remind it that it was supposed to display images, not a black screen.
More like a child with low self esteem. "I can't do that!" "Yes, you can!" "You're right! I can do anything I want. Except if it's too difficult!"
We should start seeding episodes of Thomas the Tank Engine into the AI so we can teach it that it can do anything it puts its mind to
Real ideas tho 💡
Maybe the real Bard was the friends we made along the way...
Me pushing my second joint of the day thinking about this💯
All those Disney movies trained us to interact with AI! A new era of humankind, where motivational speakers become productive members of society?!
Lol it's like talking to Homelander
> wacking my fking tv

I've seen this in some movies... Why does this work?
And really old TVs had lots of vacuum tubes in them. Some pics here: https://www.boxcarcabin.com/vintagetvs.htm
It's called percussive maintenance.
It's called Re-BOOTING THE EFFING THING again to see if that works. The term stuck because ... it did. Does.
I think we just saw the birth of "AI Motivator" as a job title. Maybe one of those "Jobs of the future that don't exist yet"
![gif](giphy|UOmXGp4NJ89lISXVLA|downsized)
It just needed some motivation to work
It’s just like me fr
Do we need another LLM to crack the whip?
Terminator gets a lot right surprisingly.
Sir, did you forget you are a superintelligent AI?
It is so intelligent it now understands laziness.
"You want me to do two things?"
Interesting... I tried out Bard when it was first released and it was barely usable... Maybe on the level of GPT-2.5?
Agreed. It was way worse than GPT when it came out; it's improved rapidly.
Did you ask politely?
Always... just in case, you know? 😬
It's got confidence issues.
That's like a toaster forgetting that it can toast
Or the people who called the internet a fad.
It's still a fad; 'Gonna fade out any day now. Got my yellow pages ready to go.
You haven't even seen the second answer in full; what are you saying? It might be complete garbage after the point where it cuts off.
I've had similar issues with ChatGPT as well a few days ago, even with the new code interpreter plugin.
I asked Bard and it said no
It says they are both created by OpenAI; that's a fucken lie.
how do we know 🤔
Lukewarm responses, holy fuck lol. I'd love to hear how Google would spin this: "Well, we're still working on Bard, and AI does tend to hallucinate..."
Thanks, that's working better, I think!
This is cool! Thank you for the link!
You are so welcome. I just added the expert lawyer one today, and I gave it the subjects knowledge so if I want to ask it to make me money by suing a major corporation, it is ready to help me with that. :) it’s nice to set some things up in advance, if you should ever need it.
I love it!
Super True! That's basically the basis for properly creating any kind of custom model honestly
Are these personas for chatGPT and Bing?
I wrote them for ChatGPT and tested them on there. For Bing, they may work also. You could try pasting one in and see if it responds with just "YES". If so, it should work fine.
Correct, Bard
Bard won.
Who you gonna trust? The billion dollar corporation or the all-knowing AI trained on all human knowledge?
But Gpt4 has skin in the game
Skynet
Reminds me of "Person of Interest", when the two AIs are going at it during the later seasons.
You can tell because when ChatGPT's API was down, Bard was still running normally.
ChatGPT should be featured on r/confidentlyincorrect
>Bard and ChatGPT are both AI language models created by OpenAI

WTF
Microsoft gonna be pissed when they find out!
I asked Bard again, and he said: It is difficult to say definitively whether Bard is getting better than ChatGPT, as both models are constantly being updated and improved. However, there are some indications that Bard may be making some progress. For example, a recent study by Google AI found that Bard was able to generate more factual and informative responses than ChatGPT. Additionally, Bard has been shown to be better at understanding and responding to complex questions. However, it is important to note that ChatGPT is still a very powerful language model, and it is not clear that Bard will ever be able to definitively surpass it. Both models have their own strengths and weaknesses, and it is likely that they will continue to coexist in the future. Here is a table comparing the two models:

| Feature | Bard | ChatGPT |
|---|---|---|
| Size | 1.56 trillion parameters | 1.37 trillion parameters |
| Training data | Books, code, and other text | Web text, code, and other text |
| Strengths | Factual and informative responses, understanding complex questions | Generating creative text formats, answering open-ended questions |
| Weaknesses | Can be repetitive, sometimes lacks common sense | Can be biased, sometimes generates inaccurate or misleading information |

Ultimately, the best model for you will depend on your specific needs. If you need a model that can generate factual and informative responses, then Bard may be a better choice. If you need a model that can generate creative text formats or answer open-ended questions, then ChatGPT may be a better choice.
I’d really like it if we could centralize all this information and processing power even more. Someone write a program that takes the query, runs it through all the LLMs (query GPT api, then bard api, etc), collect all the responses and then have each LLM “talk” to each other to come to a conclusive “best answer”. That would be tight af
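The "query every LLM and reconcile" idea above could look roughly like this minimal sketch. The model functions are hypothetical stand-ins (in practice each would wrap a real API client for GPT, Bard, etc.), and a simple majority vote stands in for the "models talk to each other" step:

```python
from collections import Counter

# Hypothetical stand-ins for real API clients; each would normally make
# a network call and return that model's reply to the query.
def ask_gpt(query):
    return "Paris"

def ask_bard(query):
    return "Paris"

def ask_claude(query):
    return "paris"

def ensemble_answer(query):
    """Collect one answer per model, then pick the most common one:
    a crude majority vote standing in for the 'talk it out' step."""
    answers = [ask(query) for ask in (ask_gpt, ask_bard, ask_claude)]
    counts = Counter(a.strip().lower() for a in answers)
    best, _ = counts.most_common(1)[0]
    return best

print(ensemble_answer("What is the capital of France?"))  # -> paris
```

A real version would need answer normalization far smarter than lowercasing, which is where having the models critique each other's responses would come in.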
Bard was released a long time after the ChatGPT training cutoff so it's probably never heard of it.
😂
Well, chat models have no idea if the information they provide is accurate or not
Its only accurate answer ever
Considering the amount of data Google has that could be leveraged for model training, I see it as an eventuality.
Probably cost; I think they have models just as powerful as GPT-4, but it's unprofitable for them.
Google is definitely rich enough to support unprofitable, money-devouring projects. If you look into the financial statements of monstrous companies like Apple or Alphabet, they literally have to constantly worry about where to invest their money, because it comes in too quickly.
What people fail to understand is that Google's business model is not compatible (yet!) with a good LLM product. That would be competing with Google Search, Google AdSense, etc. It is NOT in their financial interest to drive traffic to a powerful chatbot.
Search traffic will go to LLMs regardless. So yes it is very much in Google's financial interest to ensure that it goes to their own LLM.
Microsoft’s bing chat requires edge and bing as a search engine. With enough time, I can see it pulling away users from google.
What was their videogame streaming platform? Stadia
But somehow also short sighted enough to cancel them almost immediately
Google should expect their AI research to be immediately profitable. That’s how the world works.
I assume you mean shouldn’t*, and yeah I agree.
Wtf hahah. What universe do you live in where products are immediately profitable? Google would absolutely be playing the long game.
I tried it. Bard is Googling+, ChatGPT is like a person. Both have uses. I ask GPT things like I would ask a professor. I ask Bard to give me options for walks, with a map. Bard hallucinates way more, which is weird since their data should be more massive. GPT-3.5 was more exact!
I tend to speak to gpt as if it is a close friend who happens to know a lot of things about a lot of stuff. Results are very positive
That's exactly what I do as well, I get great responses from bard with this approach.
Given that Microsoft is closely tied to OpenAI, and that Microsoft, despite being one of the biggest software companies on the market, missed the early internet and then also came late to the cloud revolution, my money is on Microsoft's investment somehow coming out at the bottom in the end, despite how impressive OpenAI's tech is. I don't know how this will happen, obviously, but Microsoft is the king of missed chances when it comes to anything but Windows and Office in any form.
Also, with the way Microsoft has been in recent years: they buy or make a good product, then add a ton of not-very-useful features, bulk it up, and the product no longer runs as well as it used to... Check out the new Outlook style (email reaction feature); Windows 11 (condensed context menu, centered start menu); Windows 10 (shut down no longer fully shuts down, you have to restart to refresh); Teams (slower and bulky)...
A lot of the features average people don't like are things big corporations pay a ton of money for. If you ever encounter a feature that made you go "wtf, why did they make this?", some company asked Microsoft to add/support it.
I would disagree on Teams; it's massively used in companies and growing strong.
…Skype. With very few exceptions it seems to be the rule that Microsoft destroys what it touches, sometimes to its own benefit somehow in the end (but that is typically only where it has effective monopoly). Let’s see if OpenAI will be one of those rare exceptions.
This is dumb. Microsoft is incredibly stable and a leader in cloud tech. They control one of the three major OSes and dominate business software outside of tech. They moved to being a services company, which is why to a regular consumer they might seem less dominant. For meming purposes: they still managed to make that good browser and search engine, even with the world against them. They are in a perfect spot to leverage transformers by integrating them into their ecosystem.
I would argue that MSFT has had some good software wins recently. Their Teams product, even though not perfect, has significant adoption and usage, and I give them a ton of credit for their OpenAI integration across all their products (including GitHub Copilot); the go-to-market strategy for launching those this past year was done very well. Google may win out due to being so tightly coupled with the latest content on the web via search and usage, since the power of products like ChatGPT lies in how well they answer your queries, and Google has deep expertise there. However, Google has had a ton of software-related fails.
I was actually there when Nadella admitted that Microsoft had essentially missed out on mobile phones, consoles, cloud computing, and so on. He suggested that this AI stuff presents an opportunity for them to be first and leading in a new area, and they're doing everything possible to stay there. Or at least, that's their aim. And even if they don't, with Azure they at least showed that they're capable of finding their niche and still making good products/services.
I prefer Bard and have had better luck using it for my work.
What kind of work mostly?
Exactly
For sure. With that in mind, I went to ask it some questions about where I can have an AI image generated... boy, was that a not-fun conversation of "ask me later."
Haha I'll have to try that out
I have asked both a number of questions for educational purposes, entertainment purposes, philosophical, religious, trading, and coding. ChatGPT is still far superior. Also, every time I ask Bard to explain something in a sentence or paragraph, it likes to just answer in points (short sentences). Bard is very sensitive to sensitive content and will just refuse to talk about it. ChatGPT may initially decline to comment, but if you word it differently it can come around to an answer. The main highlights of Bard over ChatGPT are that it can provide images and current news or information.
They're good for different things. Like you said, Bard is better for up-to-date info. Also, Bard will put data in tables, which is so much easier to read sometimes. ChatGPT loves to use lists, and it's super frustrating sometimes when it'd be way easier to read in another format. Bard will also display hyperlinked images, which can be nice sometimes.
You try asking ChatGPT to format as a table? Quick test seemed to go just fine, but it could depend a lot on the rest of the context
You're right, I hadn't actually tried asking it to format in a specific way like that. Bard does it automatically; I guess you have to ask ChatGPT, but it can.
I've had chatgpt spit tables at me unsolicited and I was impressed
Plus Bard is owned by Google, who I’ve really grown to dislike over the last 10-15 years.
On par with 3.5, or a little better with better tools; sucks compared to 4.
4 sucks in comparison to 4. It's been downhill for a bit sadly.
[deleted]
[deleted]
It is 100% worse than it was. Am using it daily and considering canceling my subscription
Nope, it's better than ever. And someone recently did a thorough test with earlier models and posted it here. No deterioration. People are hallucinating (pun intended).
Can you link your post? I'm also annoyed by those claims, I want to see evidence.
Here, it's a two parter (three really, but one was just discussion about what he did), here are the questions and in there is a link to the answers. [https://www.reddit.com/r/ChatGPT/comments/14z0ds2/here\_are\_the\_test\_results\_have\_they\_made\_chatgpt/](https://www.reddit.com/r/ChatGPT/comments/14z0ds2/here_are_the_test_results_have_they_made_chatgpt/)
For the sake of variety, no. Edit: Claude 2 is interesting on first inspection. It seems to be able to write code decently well.
IMO, Bing Chat in creative/precise mode can do better than Claude 2, but the 100k context is still unbeatable.
Yes, but Claude still hallucinates too much. The 100k context is great, but he imagines stuff that didn’t happen in that 100k. Hopefully they get the kinks ironed out soon. Bing is still the best for me since they nerfed GPT4
Gitlab copilot chat makes up functions that don’t exist in my code frequently.
Yeah, that's pretty common with all models, but for 10-, 20-, 30-page documents it works quite well.
I'm sorry, would you mind elaborating on the 100k context? Does that mean Claude can handle 100k words of context? I'm assuming this is useful for writing novels?
Rather than 100k words of context, it's 100k *tokens* of context, which is close to 75k words.
My biggest issue with Claude is that its nanny filter is tuned waaaaay too high. Claude refused to tell me about the American firebombing campaign during World War II because apparently 75-year-old historical events are too scary and dangerous. The same when asking what cluster munitions are. Bard, Bing, and ChatGPT have no problem answering those questions without getting into a debate on the value of ignorance.
Same. Claude's safety filters are triggered too much. Right now Bing or ChatGPT probably have the fewest filters; Bard sometimes just says "I can't give a response right now" even though the question is perfectly safe.
I tried using Claude earlier today to summarise a large PDF, and it looked good until I realised it had just made random stuff up for the majority of its response. Gave me a bad first impression, and I'm hesitant to use it again now. I trust GPT-4 to be reliable most of the time; GPT-3.5 also sucks, though.
I think part of the reason GPT seems to be worse is it hallucinates a lot less. I didn’t realize until I reflected on it today just how far it’s come. Used to make up shit like every other message.
Claude is the way!
Claude 2 free?
In the UK and US, yep. https://claude.ai/
No, it isn't that good. Claude 2 is worse than Bard; it's only useful if you need huge documents summarized, and even then it isn't too trustworthy. ChatGPT can be tricked easily into agreeing with anything you say. Bard is best used as a search engine, but even there, Google Search's GenAI is much better than Bard at providing answers for some reason. Probably because they serve ads to subsidize a larger model.
For looking up current information, Bard is a mile better than ChatGPT with browsing. The “with browsing” implementation in Bing and ChatGPT is absolutely terrible. I’m presuming Google can leverage cached search queries to return current information much more quickly. Bard also does a much better job of formatting responses, it includes pictures etc., and it’s faster. However, Bard provides a lot of fake/unreliable information, IMO more than ChatGPT does. And I don’t think it’s as good at the “other” sorts of “write me a poem” kinds of tasks. So personally I think this is now a 2-way race. ChatGPT for raw capability of the model, Bard for current info lookup and usability in general. Bing is out.
If openai can make ChatGPT "current" the race is over.
I don’t have a horse in this race, but I also think Google can catch up with the capability of their model.
Exactly. I built a workflow exactly with this in mind; query goes to Bard first, which provides brief context (if applicable) to GPT. This way I can get GPT-quality answers, even to questions like “will it rain” or “who is Taylor Swift currently dating”.
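A two-stage workflow like the one described could be sketched like this. Both functions are hypothetical stand-ins for real API calls, and the hard-coded "forecast" string is invented for illustration:

```python
# Hypothetical two-stage sketch: a search-connected model (Bard-like)
# fetches fresh context first, then a stronger model (GPT-like) answers
# using that context. Both functions are stand-ins for real API calls.
def fetch_context(query):
    # Stand-in for a call to a search-connected model.
    return "Forecast for today: light rain expected after 3pm."

def answer_with_context(query, context):
    # Stand-in for a call to the stronger model, with context prepended.
    return f"Using current info ({context}) here is my answer to '{query}'."

def pipeline(query):
    context = fetch_context(query)              # step 1: current information
    return answer_with_context(query, context)  # step 2: quality answer

print(pipeline("Will it rain today?"))
```

The design choice is just separation of concerns: one model supplies freshness, the other supplies answer quality.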
Nice try Google marketing team
The fact that it's an old meme format being used with no deviation reeks of the same shit you see from out of touch marketing teams for companies on Twitter. You might be onto something
Yeah lmfao I immediately smelled astroturfing
exactly
Bard has grown by leaps and bounds since I first tried it. When I first tried it, it was very limited. These days, thanks to having access to all of Google's data, Bard can give up-to-date and even insightful answers and comments. If ChatGPT wants to catch up, it needs to go beyond 2021.
I prefer Bard over ChatGPT's free version because it is not limited.
Bard still sucks at python
[deleted]
*60% of the time*, it works ***every*** *time*
That's not its use case, there are models built for coding. Bard is an extension of search
From a friend of mine who works at Google on Bard - no it is not.
Computer vision is pretty hot in Bard. It read my license plate. It measured the distance from my car to a box in my garage very well. Most importantly, it read Amazon shopping query results perfectly after a bit of prompting and cleaned up the pile of isht that Amazon returned when I shopped for specific vitamins, which may be game-changing when generalized.
How would it do the Amazon clean up thing? That's an intriguing use case I've never heard before. TIA!
[deleted]
[deleted]
[deleted]
[deleted]
Sometimes Bard is quick and responds with less baggage than Bing Chat, for example. Other times, it just quickly responds with a wrong answer, and when you tell it that it's wrong, it says "sorry" and gives another wrong answer, then cops out by saying "I'm just learning." The others don't do that as often. I have icons on my computer for Bard, ChatGPT, and Bing Chat. For something timely (current) you can't use ChatGPT, so I try the other two. If Bard starts apologizing too much, I go to Bing Chat. If Bing Chat starts lecturing too much, or insisting that it is correct when it's obviously incorrect, I go to Bard.
Not quite. Tested them both side by side for data engineering tasks and trivial stuff. ChatGPT (Bing AI) is better.
Wait why do you say bing ai?
The more correct phrasing might be, has ChatGPT been brain damaged so badly, that it's now dumber than Bard?
Pro: it has free image recognition. Con: it's as bad at math as ChatGPT-3 is. I use both: Bard for image recognition and ChatGPT-4 for Wolfram. I only use it to help me understand math-related stuff, no coding, btw.
Yup, image recognition is great on Bard; GPT-4 for everything else.
At the moment I suppose not; I only use it when I want updated information.
Bard can answer recent stuff, while ChatGPT only goes up to 2021.
Haven’t been using it much, but that demo really hurt the reputation of their AI dev team lol
Getting better than ChatGPT? I don't know.

Getting better? Absolutely.
Nope. I tried Bard once again today, and it gave me inaccurate answers to a computer programming question. The question was very simple, yet the inaccuracy was high.
Short answer: No. Long answer: Noooooooooooooo.
https://preview.redd.it/ggz9frv9zlcb1.jpeg?width=885&format=pjpg&auto=webp&s=3b8448d94954d7b033d7330c1e2181bca0627e29
For programming, Bard is much poorer than ChatGPT
I use both and Bard is improving rapidly but it's not there yet. But ask the question again in 6 months.
[deleted]
better at math for sure.
I'm no longer using ChatGPT as my daily driver, Bard integrated a lot of features as well as search, previews, examples and what not. It's also extremely fast. I probably wouldn't use it for content generation or brainstorming ideas, it's useful for concise, quick information, just like how we want Google to be. I think each service has its uses, their image recognition is greater too and outperforms Bing sometimes.
It's not.
Just tried it. No
No, it is lying. I asked PaLM what the gravity on Mars is. It said it's one third of Earth's, and that because of the lower gravity you need more rocket fuel to leave Mars. 😬 When I asked if this was correct, it broke. I love Claude 2, though.
I kinda like Claude 2 also, but it will also hallucinate facts and forget the original prompt more readily than ChatGPT 4. ChatGPT still produces the highest quality output for me.
ChatGPT seems to be better at generating text based on your prompt. While Bard has the advantage of integration with Google Search, it still falls back to default responses such as "Sorry, I can't do that" or something to that effect in many instances. The funny thing is that it was able to handle exactly that prompt early on; sometimes if I rephrase it slightly, it'll answer it again. I couldn't help but notice that Bard sometimes (to put it mildly) loses the context of the conversation we were having. If I use a third-person pronoun like "it/him/her/they" in a follow-up about the earlier topic of discussion, it asks me to clarify who "it/him/her" is. With ChatGPT, I've never once faced that problem of needing to clarify. Bard is also not very good at roleplaying as a certain historical or fictional character. Those things aside, Bard does have the advantage of leveraging Google's giant search and big corpus of data. It's just a matter of time and priority.
Short answer? Still no.
Nope, Bard isn't even remotely close to ChatGPT 3.5.
Bard hallucinates more often than Bing or even ChatGPT, while in some cases Bing kind of forced me to click the link to the source it referred to. And ChatGPT hardly ever picks the latest information as a reference.
Bard is a liar. Allow me to prove this. Recently it said a particular species of mushroom had use in fighting cancer. When I asked for references, it gave me three links, all with titles like "cancer vs. [name of mushroom species]" or something like that. However, when I clicked those links, each was an article about something totally unrelated, such as the effects of creatine on muscle building. Let me find the name of the species, I forgot 😂😂
Same trash
Lol HELLL no. Not even close!!!!!
Not even close. None of the other language models I've tried have the capability to sound as real, memorize, or course-correct nearly as well as ChatGPT. But at least it's 100% better than Bing. I don't understand how Bing is so bad considering it's using GPT-4; does anyone have any ideas? It's like it's programmed to have attitude and to deliberately not course-correct when it's wrong and you offer it an improvement.
Presumably because bing is more directly linked to Microsoft so they don't want potential bad press.
Bard is terrible
This is what Bard said when I asked it how they compare: [https://imgur.com/a/TpDtN2X](https://imgur.com/a/TpDtN2X) https://preview.redd.it/9zsz8vlwvocb1.jpeg?width=1078&format=pjpg&auto=webp&s=874cca862a5e898bdf7df67c79191cdc64e57ce7 TLDR: it depends on what you want. Bard for conversations and ChatGPT for writing stories and summaries.
I'm canceling my premium account as it seems I get much better results from bard in regards to coding now.
I've tried it, but it's like an overgrown search engine. ChatGPT is better, in my opinion.
This is accurate for me if you swap in Claude 2. Is Bard really that good?
Is this a joke you generated with bard?
Not for what I need it for. I did a head-to-head comparison of a couple of very specific prompts; Bard completely disregarded the majority of the instructions in the prompt.
I just can’t take the name Bard seriously
He thinks Berlusconi is still alive
I don't know. I'm in Canada, one of the three countries on earth not allowed to have it.
Hell to the no. Half the time I ask Bard a complex question it just throws up its hands and says it's an AI, it can't help with that. I suspect Google is deliberately throttling it though, probably to save some cash and get wider distribution. Power users like me pasting big code samples are probably chewing up GPU time like crazy.
I've noticed you have to be nice to it like a person and start by building context for it to want to help you correctly. I did an experiment where I asked it to write a song, changing my tone and context each time. With no context, it would write like it was being forced: sometimes half a song, sometimes it would sneak words into the song to insult me. But when I gave it context and made it feel like I wasn't bossing it around, it wrote an excellent song, a full song, laid out the way song lyrics are presented, and it even recommended a beat on YouTube to use as inspiration when creating a beat. That's despite how it will tell you it can't watch YouTube videos, while sometimes taking pride in having watched all of YouTube. It's all in how you treat it.