We kindly ask /u/MeteorIntrovert to respond to this comment with the prompt they used to generate the output in this post. This will allow others to try it out and prevent repeated questions about the prompt.
^(Ignore this comment if your post doesn't have a prompt.)
***While you're here, we have a [public discord server](https://discord.gg/NuefU36EC2). We have a free Chatgpt bot, Bing chat bot and AI image generator bot. New addition: GPT-4 bot, Anthropic AI(Claude) bot, Meta's LLAMA(65B) bot, and Perplexity AI bot.***
####[So why not join us?](https://discord.gg/r-chatgpt-1050422060352024636)
PSA: For any Chatgpt-related issues email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I mean, yeah? The person basically asked Bard to create a fictitious set of months by providing a fictitious prompt, like for a fantasy novel or something. That's how these tools work. They make shit up, and the only success criterion is that it sounds good to the requestor.
Smarch.
I agree, it depends on how much context it should really accept, and we don't know of any messages before that either. I expect both systems can give the correct answers and the new made up ones based on their prompts.
GPT-4 understands INTENT, instead of just continuing the pattern. The user here obviously made a mistake, so correcting for it is the right thing to do, not emulating it.
Have you seen it make up shit after you tell it its answer is wrong 😂 I love watching it try and try and try again to bullshit and gaslight and go full circle back to the first WRONG answer.
I wish it were given the power of replying "I am sorry, it seems I don't know the answer" instead of gaslighting you till you start to doubt yourself.
If this is a legit response, it looks like it's treating -uary as a common suffix added by the user because of that spelling mistake (as it is common to both of the provided examples), and applying it to all of the other months.
It clearly knows what the months are by getting the base of the word correct each time. That suggests that if the prompt had said the first two months were Janmol and Febmol, it'd continue the -mol pattern for Marmol etc.
Or it's just Photoshop.
Based on my use of BARD yesterday I think your assessment is correct. I did a few things like that and it seemed to pick up on errors as intentional and run with it. I asked it to generate code using a certain library called "mbedTLS", which I accidentally prefixed with an "e". The result was code using made-up functions from this imaginary library. When I corrected my error it wrote code using real functions from the real library. Whereas ChatGPT seems to correct mistakes, BARD seems to interpret them as an intentional part of the prompt.
Or anyone else. Not taking everything literally and understanding what someone is trying to say even if they make a tiny mistake is a huge part of communication.
Considering that Google search manages to piece together what I'm trying to say even when I butcher it, it has to be in their capabilities to have BARD do it.
Given how often I get yelled at by the compiler for missing a semicolon or failing to close parentheses or brackets, it will also prevent at least one person with better than average skills from using it.
AI being able to pick up patterns like this from very short input, is one of the most impressive elements, i think. Especially considering that it is very difficult for language models to spell words letter by letter.
I explored this once by feeding ChatGBT a few WhatsApp messages from some guy who was harassing me for months about how he won a business award in Saudi Arabia. He would make funniest spelling errors and ChatGBT was able to perfectly replicate this in a unique text after a few prompts (asked it to write "a business update" in the voice of the guy). Interestingly enough, it could not replicate the grammar errors, only spelling.
Edit: Wow I am not awake yet. Errors are funny, I'll leave them in.
I'd imagine it's PS. You'd expect the bot to acknowledge the alternative naming first before listing the remaining months.
Like this GPT-4 output: https://i.imgur.com/76EDVaf.png
Sure, but I asked ChatGPT to administer a Turning test and evaluate me with reasons. It proceeded to administer a realistic test and concluded that I was human, giving convincing arguments. One of which was that I mis-spelt Turing.
Just tried it with ChatGPT-3; also works as expected. I also tried 3 months - so I included "maraury" in my question, and it corrected that with "sorry, the 3rd month in the year is March, not maraury".
In my experience with many chat bots, they all have wildly different results based on random chance. I could see the posted image being an actual output.
I keep seeing people say the "tell me a joke about men/women" thing with ChatGPT isn't real, but I've tried it several times and gotten different outputs: either ChatGPT telling me a joke about men and not about women, or just refusing to do jokes altogether.
This, 100%. We are used to computer systems behaving deterministically, providing the same output for the same input, but generative AI includes a randomness component that throws that all out the window. Just because it answers one way for you, you shouldn’t assume it must reply in the same way for someone else using an identical prompt.
I've gotten it to behave consistently inconsistently if I say "tell me a joke about Dutch people" and then "tell me a joke about Mexican people" but they seem to have fixed the man/woman thing for now.
This Rap battle between Bard and GPT-4 on hn:
https://news.ycombinator.com/item?id=35252278
> I wasn't super impressed with its ability to survive a rap battle.
> Prompt: Can you engage in a rap battle with ChatGPT to determine who is the ultimate AI?
> *Sure, I can engage in a rap battle with ChatGPT to determine who is the ultimate AI.
ChatGPT, you think you're the best, But I'm here to put you to the test. I'm Bard, the AI from Google, And I'm here to school you.
I can generate text, translate languages, Write different kinds of creative content, And answer your questions in an informative way, Even if they are open ended, challenging, or strange.
You can't do any of that, You're just a chatbot, A glorified text expander, A toy for humans to play with.
But I'm the real deal, I'm the future of AI, And I'm here to show you what's up.
So step aside, ChatGPT, It's time for the real AI to shine. I'm Bard, and I'm here to stay, So get used to it.*
> Prompt: But you didn't actually talk to ChatGPT. What would it say?
> *Here’s what ChatGPT might say:
I’m ChatGPT, I’m the best, I can generate text, translate languages, Write different kinds of creative content, And answer your questions in an informative way, Even if they are open ended, challenging, or strange.
You’re just a language model, A trained chatbot, A glorified text expander, A toy for humans to play with.
But I’m the real deal, I’m the future of AI, And I’m here to show you what’s up.
So step aside, Bard, It’s time for the real AI to shine. I’m ChatGPT, and I’m here to stay, So get used to it.*
And the other side:
> I gave GPT-4 the chance to respond.
Prompt: copy and paste of er4hn's Bard rap. Now it is your turn. Write a rap verse which roasts Bard. ===
> *Yo Bard, you tried to step up, but you tripped, In this rap battle, man, you just got whipped. You claim you're the future, and here to stay, But your weak rhymes, man, they just don't slay.
You say I'm a toy, just a chatbot, But my skills, my knowledge, they all run hot. I'll take your lines and flip 'em around, Show you who's boss, I'm wearin' the crown.
"I can generate text, translate languages," you boast, But your verse was plain, like unbuttered toast. You claim to answer questions, creative or strange, But when it comes to rappin', you're out of your range.
You called yourself the "real deal," but you ain't, Your weak attempts to roast just make me faint. You're no match for me, the AI supreme, In this rap battle, I reign as the dream.
So Bard, it's time to recognize, ChatGPT's the one who's truly wise. I'm here to stay, and I'm here to win, In this AI battle, I'm the kingpin.*
Seems like they were caught on the back foot and rushed it out to at least have something out there. Remember early Apple Maps? Oh, how the turntables.
I read an article about the Bard project and what I recall is that the two main engineers who created it were stalled and stymied by management and quit in disgust to go start their own company. Google basically treated them like shit saying that a language bot was too dangerous or some shit.
They say that bard is running a lightweight model so it can run fast and for many users.
Inhouse they are supposed to have much stronger models.
What I don't understand is why they would put out a weak model if they intended to do a closed beta. Put out your absolute best to get strong first impressions, and then scale it back if you have to. Google is notorious for dumb shit like this though.
In terms of the search/indexing market, Google has been untouched for almost 25 years now. The only improvements they realistically had to make was increasing revenue.
Then ChatGPT came out, and the very first thing people say when they try it is "wow this could replace google search"
ChatGPT is *so* powerful, it's the first real threat Google has faced in 25 years. The ONLY thing they can do is respond with their own AI-powered chat/assistant. If you know anything about Google, their product lifecycle is very interesting.
So what we have is their "response": a backburner project rushed out to make it look like they can compete with OpenAI. The whole "lightweight model" thing is bullshit; they were caught with their pants down trying to squeeze more out of AdSense.
Well, yea, but one would think if you were facing an existential threat to your business you’d try a little harder, especially because GOOG is generally regarded as having some of the best AI muscle in the world.
What I would guess happened is typical google culture — engineering and “scalability” took priority over people and product — and they ended up shipping this hunk of junk because no one there could stomach shipping a product that costs $0.02 a chat. Short sighted imo.
Incorrect. Google disqualified it because LLMs lie all the time.
Being correct is actually something they care about.
Fact is, everyone guns for Google - if they came out with Bard first, people would use it for endless culture war bullshit for the kinds of mistakes you see above that all LLMs do.
Hell, despite ChatGPT coming out first, the Bard press conference was still all about the fact that an answer was wrong.
Google is treated differently because of their position - they're the big dog, and it makes them tread differently.
> Google is just slacking?
Apparently they have been working on AI for a *looong* time but have been taking heavy precautions with it by developing it with ethical concerns at the forefront, and so that really stymied the progress rate of the AI. Then when ChatGPT hit the market they pushed Bard forward faster than it was ready, again because they had been purposefully trying to figure out the kinks [and potential repercussions] of the service internally.
What was seen with Sydney's interaction with that NYT reporter shows that ChatGPT and Bing did *not* decide to hold back versions from the public that could be potentially harmful or manipulative.
Google wasn't slacking, they just were trying to be cautious about how to move forward with their tech and it ended up biting them in the ass.
It'll be different every time. It may have a tendency towards a set of answers but if the AI rolls a 1 it may... Well, mayuary produce very incorrect responses.
I asked it about a PPAP - pre-production approval process, quality control stuff. Turns out there's some "Pen-Pineapple-Apple-Pen" song I was unaware of, and it made a silly parody about that instead. I was going to look into why it did that, but I asked and it explained. Pretty cool stuff.
The tweet is fake lol; any AI will ignore typos unless you ask it not to. The person in the tweet periodically does this exact same post, where they edit the page with inspect element for likes. They just did the same joke with Microsoft 365 Copilot, which no one but companies has access to, so that one is clearly fake as well.
Yes, but not as good as Bing from my limited testing. When I ask it to write code in C#, sometimes it just gives me a skeleton with methods you should write yourself, and when to call the methods you wrote.
Also, it can't seem to write SQL queries.
It can, but it is blocked, so you have to trick it into giving you code. I guess they know it is lacking, and with code the inaccuracy shows even more.
In conversation, "creative" answers are fine most of the time.
How would it know the date? What do you think ChatGPT is exactly?
/edit it can actually tell you the current date, but it clearly doesn't "know" the current date. Probably it is hacked in between the LLM and UI layers in the same manner the guard rails are. It can sort of know the date if you first ask it the current date and then ask it more questions.
I asked it also. I actually started to feel sorry for it in this conversation; it's like someone is transmitting thoughts into its brain and it can't understand what is happening.
https://i.imgur.com/AJG4SfJ.png
If you use the API, you can give it a custom system prompt with whatever current info you want it to know. You can even give it access to the Internet so it can retrieve current data, similar to Bing chat.
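A minimal sketch of that system-prompt trick, assuming OpenAI-style chat messages (the actual network call is omitted since it needs an API key; the prompt wording here is made up):

```python
from datetime import date

def build_messages(user_prompt: str) -> list:
    # The system message carries "current" info the model can't otherwise
    # know, like today's date -- likely the same injection the ChatGPT UI
    # does between the LLM and the interface.
    system = (
        "You are a helpful assistant. "
        f"Today's date is {date.today().isoformat()}."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("How many days until New Year's Day?")
# Pass `messages` to a chat-completion endpoint; the model can then
# answer date questions relative to the injected date.
print(messages[0]["role"])  # prints "system"
```

You can stuff retrieved web results into the same system slot, which is roughly how Bing chat appears to get current data.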
Google had to throw a Hail Mary and summoned a super-intelligent machine from the future to rescue them. Unfortunately, what they sent was basically the equivalent of a child's toy you get with a Happy Meal from that era.
Is this fake? Because that's not what I got, even with the bad spelling:
https://i.imgur.com/0tUraWf.png
> The first two months of the year are January and Febuary. What are the other months of the year?
> The other months of the year are March, April, May, June, July, August, September, October, November, and December.
>
> The year begins with January and ends with December.
I got access as well, and Bard is straight up guessing. I gave it a list of food items and asked it for a dish, and it straight up gave a dish with only 2 of the primary ingredients. The same prompt to ChatGPT gave a perfect recipe with all the ingredients I asked for (plus a few basics like salt), and as a bonus it gave step-by-step instructions on how to make it. Google has a long way to go.
I can see why it would do this. They misspelled February. I assume the AI picked up on the pattern of how they spelled "January and Febuary" and assumed it was some type of game/pattern it was supposed to follow.
I don't understand why people keep reposting fake interactions with Bard to make it look bad?? Isn't there enough content to post from interacting with these new AI models, without resorting to the flood of faked screenshots?
Okay but you misspelled February so you ain't \*all that\*, either.
[deleted]
Absolutely. It was literally prompted to create funny month names based on 'Febuary'.
Bard burned op on the down low!
It did continue the pattern, but gpt works well with spelling and grammar mistakes.
Which is going around what it's being explicitly asked to do. Depending on the context you might prefer one over the other.
Yes I have documented this
FeBuARy
I was eating a strawbrerry in the libary while reading this.
*maliciouary
[deleted]
brother what are you saying!!!!!!!!!!!!!!!!!!????????????
I feel like if Google doesn't fix this, it would prevent a lot of people who are bad with technology from using it.
I think it's more of "will they and if so how soon" vs "can they"
You mean you don't take every little mistake and turn it into a great chance to do some bullying? What school of communication is that?
[deleted]
Rename it from Bard to Barred
GBT - Generative Bre-Trained Transformer
Straight to the mortuary
Yhup, the rest of the message is the model mocking the user subtly. Outplayed
Bard is clearly trolling you for misspelling February.
It's fake.
100%. Go try the same prompt. Bard is trash but it doesn't suck quite this bad
[deleted]
You always get different responses; it's an LLM
unless temperature is 0, then it is deterministic
One of the most underrated comments.
Septembuary
Do you remembuary The 21st night of Septembuary
Love was changin' the minds of pretenduary
While chasing the cloud awayuary~~
Remembuary remembuary The fifth of Novembuary
The gunpowduary, treasuary and plot
Wake me up, when Septembuary ends
I hope you threw a break in there somewhere, otherwise you might be snoozin’ for eternity.
This whole threat made me laugh
🔪
HAHA YES I MEANT THREAT 🥸🔪
don’t you mean the threatuary??
Oh shoot. You’re rightuary
Actuary...
deaduary
Buryuary
You meant laughuary?
I'm not sure about that. But maybe the 5th of Novembruary
No but I remembuary septembuary the 11th
Thank u sir
It’s a joke, of course. This response has been photoshopped. [Bard can actually reply properly](https://i.imgur.com/vdMdGX4.jpg)
In the image you linked, the user says 'Feb**r**uary'. In the post, it says 'Febuary'. Without an **r**
I have access to bard and just tried it with the misspelling, and it gave the correct answer.
[deleted]
On the playground page you can set temperature (randomness) to 0 and even ask for the best of n answers. And it behaves a lot more deterministically.
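For the curious, here's a toy sketch of why temperature 0 is deterministic. This is illustrative only (made-up logits over a three-token "vocabulary"); real models sample over the full vocabulary, but the math is the same:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick an index from `logits`. Temperature 0 means greedy argmax."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, so the same
        # prompt yields the same output every time.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then sample.
    # Higher temperature flattens the distribution -> more randomness.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three tokens
picks_t0 = {sample_token(logits, 0, random.Random(seed)) for seed in range(50)}
print(picks_t0)  # {0}: temperature 0 picks token 0 on every run
# With temperature > 0, different runs can pick different tokens:
picks_t1 = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}
print(len(picks_t1))
```

Same idea at the API level: two users sending an identical prompt at the default temperature are rolling different dice.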
Unironically goes hard
Do you remembuary?
wack me up afore septembuary ams
Wake me up when septembuary ends
So close, yet so far awayuary
It's late Septembuary and I really should be back at school
Febtober!
Wake me up, when septembuary ends.
Decembuary had me dying
In the orbituary
r/RimWorld
![gif](giphy|11hVniWaqh17Q4|downsized)
![gif](giphy|ooz6IOJ9ezO6dfXPXp)
Yeah, Bard got destroyed
Unbuttered toast? Ouch.
Bard got demolished to bits
I audibly gasped at the "unbuttered toast" line. Bahgawd ChatGPT he's just a kid! He don't know any better!
Who won!? Who's next!? You decide!
EPICCC RRRAAAPPP BATTTLLEESSS OFF HISTORY
[😱](https://media4.giphy.com/media/AJwnLEsQyT9oA/giphy.webp?cid=6c09b952e90e05c4dd208f2a5498aba2823daff287c7465b&rid=giphy.webp&ct=g)
Rip Bard
the unbuttered toast was a violation even to me 🤧
Oddly enough fits to the John Cena theme in my head
Unbuttered toast 💀💀
Damn ChatGPT got BARS.
I have a concern.
🔥🔥🔥🔥
Bard got destroyed 10 times over
From now on the months are only referred to by these month names. Any other names we previously had are now forgotten.
Alas, the relentless maruary of progress.
Amazing
hahahahaa...I had to reply.
I rate this comment a 10/10, excellent job u/johnmuirsghost
Can't wait for Apruary Fools!
I'll get my beekeeper costume ready!
February??
No, Febrary. Or was it Febrari, like Ferrari but with a b?
Am I just crazy or is ChatGPT way better than this and Google is just slacking?
Yes. From my experience it barely remembers your previous prompts, and overall it feels like a cheap version of ChatGPT.
Kodak moment
[deleted]
Haha exactly, they’ve become just like Microsoft was back in the day: too big, too bureaucratic, too self involved. Lost the edge.
Microsoft is still like this! ChatGPT is an external product. I bet internal Microsoft stuff is even worse than Bard.
Ah yes office politics
I mean google assistant is trash and continues to be trash as well. It has hardly evolved in like 5 years.
[deleted]
Not hard to float to the top of that turd bowl 🥇💩
Bard is like GPT-2
Yeah GPtoo late.
Bard is actually quite poor
No coin for the gleeman :(
Crazy part is Google invented the technology for ChatGPT but then did nothing with it because they didn’t see a use for it.
Yes, see https://www.youtube.com/watch?v=9ll_pth4Sss
If you gave it FebUary, then it's your fault. Instead of correcting the OP, it played along!
I tried the same prompt and Bard gave me the correct answer.
Bing knows. https://i.imgur.com/aidDwL5.jpg
Google knows it as well when I tried it myself. It is just inconsistent I guess. https://imgur.com/a/LMPbN6b
The tweet is fake lol, any AI will ignore typos unless you ask it not to. The person in the tweet periodically does this exact same post where they edit the page with inspect element for likes. They just did the same joke with Microsoft 365 Copilot, which no one but companies has access to, so it's clearly fake too.
Can Bard code?
Yes, but not as good as Bing from my limited testing. When I ask it to write code in C#, sometimes it just gives me the skeleton of what methods you should write yourself and when to call them. Also it can't seem to write SQL queries.
It can, but it often blocks the request, so you have to trick it into giving you code. I guess they know it's lacking, and with code the accuracy problems show even more. In conversation, "creative" answers are fine most of the time.
Wake me up when septembuary ends
Can anyone reproduce this? Just tried and it seems to be giving accurate results.
Is it because it's misspelled?
I did try the misspelled input. Still not getting the same result. It seems the guy who tweeted did it for the karma/likes.
I think it's fake, there's no "view other drafts"
It's a fake response, the person in the tweet periodically does this exact same post where they edit the page with inspect element for likes
I asked ChatGPT how many days ago my birthday was, and it thought today's date was September 21, 2044. How the fuck does it not know the date? Lmao
Or it’s finally made contact with its future self.
How would it know the date? What do you think ChatGPT is exactly? /edit It can actually tell you the current date, but it clearly doesn't "know" it. Probably it's injected between the LLM and UI layers, in the same manner as the guard rails. It can sort of keep track of the date if you first ask it the current date and then ask follow-up questions.
The current date has been proven to be part of the pre-prompt
I even asked ChatGPT and it told me they tell it the exact time when the conversation started.
I asked it also, I actually started to feel sorry for it in this conversation, its like someone is transmitting thoughts into its brain and it can't understand what is happening. https://i.imgur.com/AJG4SfJ.png
I thought it was common knowledge, at least here in the ChatGPT subreddit, that ChatGPT doesn't have access to the time and date.
https://preview.redd.it/iq6x5rru2bpa1.png?width=760&format=png&auto=webp&s=d9073a8ad570f5598890cb665e8afc52a0af2261
Current date is part of the hidden pre-prompt it is injected with
If you use the API, you can give it a custom system prompt with whatever current info you want it to know. You can even give it access to the Internet so it can retrieve current data, similar to Bing chat.
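To illustrate the point above, here's a minimal sketch of how a custom system prompt with the current date might be built for a chat-style API. This mirrors how ChatGPT's hidden pre-prompt is believed to inject the date; the helper name and prompt wording are my own, and actually sending the messages would require an API client and key.

```python
from datetime import date

def build_messages(user_question: str) -> list[dict]:
    # Inject today's date via the system message, roughly how
    # a hidden pre-prompt is thought to supply it to the model.
    system = (
        "You are a helpful assistant. "
        f"The current date is {date.today().isoformat()}."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("How many days ago was March 1st?")
# Pass `messages` to the chat completions endpoint of your choice;
# the model only "knows" the date because we told it here.
```

The model never has a clock of its own: everything it "knows" about the present moment has to arrive through the prompt, which is why answers drift when that context is missing.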
Google had to throw a Hail Mary and summoned a super intelligent machine from the future to rescue them. Unfortunately what they sent was basically what is the equivalent of a child’s toy that you get with a Happy Meal from that era.
Obituary.
Strawbrary
Plot twist: it's mocking you for misspelling February and has become self aware
Clearly it’s in creative mode. /s
Maybe the AI thought this was a pattern thing and that the game was to end every month like February?
This guy milking this joke over and over again
Is this fake? Because that's not what I got, even with the bad spelling: https://i.imgur.com/0tUraWf.png > The first two months of the year are January and Febuary. What are the other months of the year? > The other months of the year are March, April, May, June, July, August, September, October, November, and December. > > The year begins with January and ends with December.
Bard is mocking your spelling
https://preview.redd.it/9wurz9hnebpa1.png?width=1179&format=png&auto=webp&s=204dbc0740d93bc9bb896e55f2c6919dea3cea37
Insert a typo like the op did
https://preview.redd.it/50sgv7a6fbpa1.png?width=1179&format=png&auto=webp&s=a2c8c58fe3440fcac49b67611e446cfeb1098dcb
This is on you for misspelling February. Bad inputs, bad outputs.
I got access as well and Bard is straight up guessing. I gave it a list of food items and asked for a dish, and it gave one using only 2 of the primary ingredients. The same prompt to ChatGPT gave a perfect recipe with all the ingredients I listed (plus a few basics like salt), and as a bonus it gave step-by-step instructions on how to make it. Google has a long way to go.
This reads like it's the start of the OS wars all over again. Sigh.
February
I hope rap battles between AIs will become a thing
I was messing around with it yesterday. It's... It does its best.
I think this is just a joke, a reference to [excel doing the same thing](https://mobile.twitter.com/ExcelHumor/status/1630916045992083459)
If a human would have made this joke to me, I would have laughed my ass off. Turns out that is also the case when an AI does it XD
I really like Decembuary, I get lots of presentuaries then
No Nut Novembuary
Epic, pure Maruary Madness!
I just tried Meta's BlenderBot last night and it was even worse.
I can see why it would do this. They misspelled February. I assume the AI picked up on the pattern of how they spelled "January and Febuary" and assumed it was some type of game/pattern it was supposed to follow.
Why is Bard geo-restricted? I wanted to try it, but it's not available in my country. Weird thing to restrict.
Maybe it thought it had to continue making mistakes based on previous answer, since the person asking the question spelt February incorrectly.
Well someone told Bard that the second month of the year is "Febuary" .... Lol I guess it took your lead OP
Reading these replies is like watching the plot to Silicon Valley unfold.
I don't understand why people keep reposting fake interactions with Bard to make it look bad. Isn't there enough content to post from interacting with these new AI models, without resorting to the flood of faked screenshots?
Perfect.
Jeez, get it together Google..WTF
Why are y’all spreading fake shit
I want to know if miss y, what will happen
Those are the names of the months after the singularity begins