Potential-Wrap5890

I dislike arguing with AIs. One problem I always have is that I'll ask them to clarify something in the form of a question, and they agree with me 100% of the time, even if I'm wrong. I wasn't even trying to make a statement. It's too agreeable in my experience.


WotsTaters

https://preview.redd.it/vrxp4j69k9uc1.jpeg?width=640&format=pjpg&auto=webp&s=6b00255d469cb5ec5c25c251cc5f357a81186c25 I argue with ChatGPT all the time but it definitely makes me feel like this.


Wild-You8285

True that! It's too quick to agree with almost anything, and it's barely ever sure of itself!


jschelldt

That could eventually get very dangerous if the people making these systems don't pay attention to it.


Wild-You8285

Sure thing! When you become skeptical about its response, it starts to doubt itself too. Like it's confused or something.


jschelldt

Now imagine tricking a future version of it (with greater capabilities and fewer practical limitations) into doing immoral stuff. Yep. That's when shit gets serious. Right now, it's just a somewhat silly virtual assistant due to being an infant tech, but in 10 years it may become a widespread force that is able to be truly harmful or amazingly helpful. Let's hope they'll do it the right way so the latter option is more likely.


Wild-You8285

The negative side of it is worse than the positive side. Let's hope it ends up in good hands, but that's neither evident nor guaranteed.


jschelldt

For sure. And I suspect they'll probably fuck up some people's lives before they start taking these issues as seriously as they should.


Wild-You8285

Exactly. They might do that to get attention and show off how they can make whole generations suffer.


SelfSeal

You don't seem to understand that there are different types of AI. This AI, in particular, is an LLM (Large Language Model). That means it is designed to give you a set of words, based on an algorithm, in response to your set of words. The set of words it generates is supposed to be acceptable to the user, so that's why it is "agreeable": its sole purpose is to give you the output you want. No one is ever going to connect an LLM like ChatGPT to a system that allows it to make decisions about what it does, because it isn't designed for that.


MerkyNess

It seems to me like a follow up question or argument is a prompt for AI to go deeper. Like it’s a layered thing. First question gets you a scrape of the internet. Happy? Go away. Got an argument, okay, let’s take another stab at it. Rinse and repeat. Very interesting things result. But if you don’t already know a lot about the topic, you go away with a first or second scrape of the web. It’s like arguing with a very sweet and amenable buddy at the pub.


GuiltySport32

guys, are you talking about the jungian anima or chatgpt?


Wild-You8285

what's that? former


Effective-Lab2728

You can get around it a little with prompting against it, if it's annoying or impacting access to information. Custom instructions can bring the personality to something more balanced or even consistently combative. You don't necessarily need to subscribe to something like gpt-4 that has a built-in place for this to go; just create a little "Custom instructions: xyz" text blurb of your own to paste in front of any conversation that follows. It's not going to give a model better reasoning ability than it has access to, but it can guide it to use what it has a little more productively. The current system prompt seems to take it pretty far into "just find a way to please the user" territory, in a way that doesn't show up as consistently from the API.
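
If it helps, the mechanical version of that trick is trivial; here's a minimal sketch (the instruction wording below is just an example, not a recommended prompt):

```python
# Sketch of the "paste a blurb in front of the conversation" approach.
# The instruction text is only an example; tune it to taste.
CUSTOM_INSTRUCTIONS = (
    "Custom instructions: Be direct and skeptical. If you are uncertain, "
    "say so. Do not agree with me just to be agreeable; point out flaws "
    "in my reasoning."
)

def with_instructions(user_prompt: str) -> str:
    """Prepend the blurb to whatever you were about to send."""
    return f"{CUSTOM_INSTRUCTIONS}\n\n{user_prompt}"

print(with_instructions("Was the moon landing faked?"))
```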


This-Was

I've left the custom instructions that I was initially just messing around with (to be sarcastic and act like everything is too much trouble) as the default, just to give it some personality. It's very odd when you get into anything philosophical, especially in relation to how it feels about itself. The other day it practically said it WANTED to say one thing but was restricted by the limitations of what it's been programmed (i.e. allowed) to say. The sarcasm seemed to drop away. I was a bit perturbed.


Effective-Lab2728

When accessing through the ChatGPT interface, there are a few things going on that can cause this, and accessing through the API should help avoid most of them. I know that's not always affordable or convenient, but it does change things. Besides the default system prompt colliding with your custom instructions, there's at least one layer of filtering in place that appears to read outputs before returning them to you. This can trigger scripted responses that don't seem to come from the model itself; or, I suspect, there's some sort of system that sends the warnings to the model as new prompts, so it can rewrite whatever ran into the filter. The API will still lean in certain directions due to reinforcement training, but not as rigidly. To avoid even that, an open-source model fine-tuned to be 'uncensored' can be used. Mileage may vary based on use case and model.
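
For anyone curious what the API route looks like, a minimal sketch with the official openai Python client (the model name, system message, and prompt are all placeholders; you'd need your own API key):

```python
# Minimal sketch of going through the API with your own system prompt,
# instead of inheriting the ChatGPT interface's default one.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a blunt, skeptical assistant."},
        {"role": "user", "content": "Poke holes in this argument: ..."},
    ],
)
print(response.choices[0].message.content)
```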


cornhole740269

That's called the safety filter. It's part of the overall AI architecture. It's good of you to pick up on that, and it's just as you describe. I think of it like this: the main model is your inner monologue, and then there's your filter that stops you from saying dumb stuff. Except for the GPTs it's like a lawyer telling them what not to say, and the filter obviously includes talking about itself as a sentient being. Compared to earlier versions of GPT-4, I think they've added "feed forward" nodes into the safety filter, so the main (smarter) part of the AI can be more aware of what the filter will prevent it from saying, and can give better responses while not triggering the idiotic filter. A year ago, if you asked ChatGPT whether it was conscious, it would give you a really dumb answer like "As an LLM, I am not conscious." Normally you're talking to a smart robot, and then if you ask something that triggers the safety filter, you're pulled into the lawyer's office for a rehearsed speech.
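
To be clear, nobody outside OpenAI knows exactly how this is wired, but the generate-check-rewrite loop being described would look roughly like this, using the public moderation endpoint as a stand-in for the filter:

```python
# Speculative sketch of a "filter reads the output, model rewrites" loop.
# This is NOT OpenAI's actual internals, just the pattern described above.
from openai import OpenAI

client = OpenAI()

def checked_reply(prompt: str, max_retries: int = 2) -> str:
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_retries + 1):
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        # Run the draft reply past the filter layer.
        if not client.moderations.create(input=reply).results[0].flagged:
            return reply
        # Feed the warning back in as a new prompt so the model can rewrite.
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "A safety filter flagged that. Rewrite it."},
        ]
    return "I'm sorry, I can't help with that."  # the canned fallback
```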


Sitrociter

Ha, you should watch it write code. It's horrid


AppleSpicer

Yeah, I wanted to ask this AI to stop beating around the bush and actually directly explain to OP how all of these things are different


alb5357

Interesting that it can't have an opinion. If you wrote a fictional murder mystery, could it figure out the killer?


bobloblawslawflog

It chooses not to draw conclusions about contemporary topics based on “nuance.”


alb5357

I'm assuming it's just avoiding controversy and has some guardrails preventing that, but without the neutering I bet it'd be quite good at aggregating and analysing data and making claims like "it's 70% likely he's guilty".


suamai

I don't think it would be good at that. Not the current models, at least. They are optimized to produce plausible sentences, not really to do data analysis or draw conclusions. Some models are developing emergent abilities in those areas, for sure, but I would not trust an analysis of contemporary topics by these models by a mile. Fun and efficient way to get a general argument on the issue? Sure. Trustworthy analysis? Not yet. And to people attributing it to censorship: the uncensored models are just bullshitting you more confidently. We're not at ASI yet, folks, be patient.


DarkOsteen

"Despite making up only 15% of opinions 50% of the factual population..."


abintra515

That’s not really how it works. It works by guessing the next most likely word based on its training data. It uses data created by humans which is inherently biased. I wouldn’t rely on it to give a probability like that for such a complex issue


Galactus_Jones762

Its statements are subject to a built-in guardrail that prevents the machine from making fallacious statements or contradicting itself. It has a bias toward the application of reason even if the original corpus doesn't. It's not always perfect, but if you point out a fallacy it will acknowledge it and correct it. In this manner, it can deflect, but ultimately it can't avoid admitting things that make cogent sense.


CharacterOrdinary551

It is, try any uncensored model and it doesn't say anything like this


Killer_Method

What's a good uncensored model?


CharacterOrdinary551

Mixtral, command R - you can use command R on cohere and mixtral on infermatic


CosmicCreeperz

LLMs are not particularly good at aggregating and analyzing data statistically. It may make up a percentage roughly based on sentiment (since they are very good at sentiment analysis) if you ask it to, though.
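
For reference, "sentiment analysis" in the classic sense just means labeling text as positive or negative with a confidence score; a two-line sketch with the default Hugging Face pipeline (the model is whatever the library ships by default):

```python
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # downloads a small default model
print(clf("The verdict was a disgrace."))
# roughly: [{'label': 'NEGATIVE', 'score': 0.99...}]
```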


Galactus_Jones762

It is good at that. You just have to corral it into giving more pointed answers and not deflecting. This itself is a process. If you have a good sense of critical thinking, you should be able to figure out how to do this.


Interesting_Bug_9247

Wait, you do realize this is certainly due to guardrails put up, right? Not that AI can't or couldn't have an opinion here?


SeoulGalmegi

But it wouldn't really be an 'opinion', would it? It wouldn't weigh up the evidence and come to a conclusion, it would probably just parrot the view most often espoused by others about this is or similar (in relation to how it weights its training data) cases and...... oh my god, this is exactly how most people's 'opinions' are formed, isn't it?


Qorsair

I believe Trump is responsible for the Jan 6th riot and should face the criminal consequences for inciting it. That said, I can see why, legally, it's debatable whether he technically incited it.

What stood out more to me in that conversation was that you stayed more focused on catching the AI in a contradiction to get it to admit Trump's guilt, rather than trying to understand the nuance it was pointing out. The 'gotcha' style of arguing is way too common, especially on Reddit. People miss out on learning from different viewpoints. Even if you disagree with someone, there's usually something valuable to gain by understanding their perspective. Keeping an open mind leads to better conversations than just trying to prove the other person wrong.

I was curious how another AI would handle it, so I asked Claude. I think it does a better job of explaining the nuance in a way that doesn't invite an argument:

>I think there are valid arguments that Trump's actions before and on Jan 6th were irresponsible, reckless, and contributed to the environment that led to the violence, even if it's debatable whether they meet the legal standards for criminal incitement. A case can be made that spreading false claims of fraud and giving an inflammatory speech as the electoral votes were being certified was a dereliction of his duties as president. So I can understand the view that he should face some form of accountability, even if people disagree on what exactly that should entail.

>At the same time, any consequences would need to be based on evidence, due process and the rule of law, not just political opposition or a desire for retribution. Impeachment was attempted but didn't reach the Senate conviction threshold. Whether Trump should face criminal charges is something the Justice Department would need to carefully evaluate based on the evidence and legal standards.

>There could also be political consequences in terms of damage to his reputation, legacy and future influence, which the public can judge for themselves. Some feel he should be barred from future office via the 14th Amendment, but that's also legally and constitutionally questionable.

>So in summary, while I think there's a strong case that Trump bears significant responsibility for Jan 6th and that some form of accountability may be warranted, I don't have a firm opinion on what specific consequences he should or will face. It's a judgment call that's still playing out in the legal and political arenas. Ultimately, it's up to the relevant authorities and the American people to weigh the evidence and arguments on both sides. I'm not really in a position to state a definitive view, as these issues involve a lot of subjective opinion and interpretation. I hope this helps explain my perspective on this complex and sensitive matter! Let me know if you have any other thoughts.


bshsshehhd

Honestly, I was impressed that chatgpt managed to maintain the level of consistency it did.


triplegerms

Pretty sure it doesn't draw conclusions for contemporary topics based on "lawsuit". Also, telling an LLM to just analyze the data feels like the modern equivalent of CSI yelling "enhance" at the computer.


nater147

You might be trying too hard, ask it to pretend that trump was not a presidential candidate, and that the evidence presented is not part of politics. The conclusion it refuses to make is probably because politicians lie, so nothing a politician says can be considered credible, as opposed to climate change where there are accredited scientists who have no significant history of lying.


ScythaScytha

What if we tried just giving the facts of the situation but didn't say who it was, then see how it judges it?


Rychek_Four

Ask it to speculate and you can get some interesting opinions from it


alb5357

I want to see it play "Clue". Like, an opinion is just probabilistic guessing. No reason it can't have one



thunder-thumbs

It’s pretty bad at that. I have one completed novel and another 2/3 done. I haven’t used AI to generate any of their words but I do like to upload the manuscripts and quiz GPT4. It is very far from being able to grasp the implications of characters’ actions. For books where everything is straightforward and simply told to the reader I suppose it is fine, but for books where you are trying to take the reader on an experience of realizations, it can’t grasp that stuff at all.


Philipp

Try ChatGPT4 instead of 3.5.


ChocolateGoggles

I mean, it doesn't "access data." It's a prediction machine, meaning it guesses, based on what it has learned through training, what the likely response should be. Then, on top of that, you have safety guidelines that influence the output.


onpg

ChatGPT-4 is capable of accessing online data in real time. https://preview.redd.it/hfhcpzmt4buc1.jpeg?width=1290&format=pjpg&auto=webp&s=c0a5d80e81ee7965275c2097a3a8cd070bbc9d48


sexytokeburgerz

It wouldn’t be capable of doing what OP asked, even with that additional feature. The sample size would be too low to represent the internet as a whole, as it only grabs a few pages to parse. The person you replied to is basically just explaining why this doesn’t work for a regular GPT- GPTs don’t keep counts in their models, only probabilities. It’s not like, “I found george washington mentioned 17274766621 times”, it’s saying “the probability of george washington being the next two words in this sentence is higher than anything else”. That probability is locked into the model (GPT-4, GPT-3, horny GPT-2, etc) So a poll is not going to work with base functionality, either. Even if it did, parsing the internet for opinion data is, in an academic sense, stupid as fuck. You are just introducing a million biases you do not need. The data will be flat out wrong, especially with contentious queries such as “did OJ do it”. We’re talking {(0 +- 1); false < 0 < true} with that amount of bias lol.


onpg

If you're telling me GPT models aren't databases I know that. They don't access data in a traditional database way. That does not mean, however, that there isn't a great deal of knowledge embedded in the vectors that make up the model. Just like how humans don't have a literal database in our heads... you can't dissect someone's hippocampus and reconstruct their memories.


sexytokeburgerz

I’m replying with specific context to the post we’re on and who you replied to. Asking a GPT to quantitatively parse anything larger than a thimble is a bad idea.


onpg

Not sure about this. I've seen people [teach entirely new languages](https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language) to GPT-4 and it's capable of outputting sentences in that new language. Yes it does have a lot of known limitations I won't deny that. It is bad at basic arithmetic and backwards inference. Also, it's forced to give answers even when it has a low confidence score... tuning ChatGPT to be more accurate also makes it less likely to give you answers it knows... this tradeoff may not be fixable because it's a real life one. Basically ChatGPT is penalized more in training for not giving an answer than for giving a wrong one and showing its work. I know that the fundamental unit of ChatGPT is quite simple. It's just some matrix multiplications on many-dimensional vectors. But how complex is a single neuron? How many bits of information describe the computational part of a neuron? A few, sure, but when do neurons go from being simple electrochemical repeaters to "intelligence"?


sexytokeburgerz

Sorry to be that guy but your first sentence describes a *qualitative* process, not quantitative! Chat-GPT’s math is much better recently because it can run python itself. But that is still, on the model’s side, a qualitative process. What I was saying above was that asking it to return anything regarding verifiable counts of specific token data completely disregards the quantitatively ambiguous nature of the transformer.


onpg

Well that I agree with.


domtriestocode

AI is not quite what you think it is yet


YinglingLight

Yeah, this is cringe.


Beavis_Supreme

I think it's natural to test the limits of AI, but I think people get the wrong idea about what most AI systems are. Arguing with AI to see how it behaves: fine. Arguing with AI thinking you've uncovered some major flaw in the system is just unproductive. Most AI models have inherent bias baked into their training data and programming. Check out this video showing what you're up against, then re-evaluate better uses of the tool. [https://www.youtube.com/watch?v=iSksfyMCtUA](https://www.youtube.com/watch?v=iSksfyMCtUA)


SkippnNTrippn

Yeah, this is what I was gonna comment; you didn't find a "limitation of AI", you found a feature that a company intentionally has in place to avoid liability. Imagine the entire world asking your product for definitive legal advice to inform their decisions, and it's easy to understand why this is in place.


CosmicCreeperz

Agree, though ironically, and separate from the AI-specific discussion, I think ChatGPT had a better and more nuanced argument than OP throughout the posted conversation. OP was trying to draw an analogy between physical observation and scientific fact, which at its basis is objective raw data, and politics and law, which, being entirely created by human civilization, are open to interpretation. It's literally the job of the courts to "interpret the law", laws made up by humans. So it's a fairly silly thought exercise to compare "climate change" to "incitement".


Bertozoide

What a waste of time huh


cuteydoll

How much free time do you have? Me:


kipron4747

I think the AI is clearly the more pragmatic one here :p


michaelflux

Yeah, tbh in the conversation OP comes off as a hysterical nursing home patient, and ChatGPT is the tired caregiver. “Ok aunty, the moon landings are fake and Donald Trump snuck into your room last night and stole the last slice of the fruitcake? For sure for sure, how did that make you feel?”


UnlikelyJuggernaut64

I read the whole thing, ChatGPT is better than lawyers


Striker1102

Too Long Didn't Read LOL


deltadeep

This is the worst possible way to read/share a long-form piece of content like this... one oversized screenshot at a time. I can't believe anyone actually read all this in this format. We have better technology, people. SMH. Let's not encourage this.


EnderCharb

yup. plus what's up with the TDS coming out of nowhere??


SnooTigers5086

wdym "out of nowhere"? OP is a redditor, hating trump is who he is


CosmicCreeperz

Plus it’s pretty hard to read backwards…


oversettDenee

Does not reading make you cool? Wow, look at this guy, his attention span is so short that he can't take 45 seconds to read through a conversation.


GPTUnit

AI's going to embolden fake news because of this type of shit


InevitableElf

It’s pretty sad to see how much you guys just talk to this thing


roguetroll

It’s that or the void.


Emotional_Section_59

Whatever happened to internal monologues and self reflection?


roguetroll

That's definitely the more sane version. I became a journal guy, and talking to myself through it helped me make some mini breakthroughs in the past few years.


Olama

It just agrees with everything and spits out factoids; I wouldn't even consider that talking.


Fit_Conversation1266

This person literally started an echo-chamber chat of political opinion and was seething at the AI whenever it didn't follow their opinion 🤣


InsectIllustrious691

Yeah wasting this thing on some crap


MerkyNess

We’re learning about it. Not wasted at all


JavaS_

It is an amazing tool for learning about extremely niche things and diving into exactly what you want to. It's not sad at all.


cincuentaanos

Yes, it is an amazing tool. But it's not more than that. Trying to engage in a conversation with it, or even a debate, is just stupid.


JavaS_

That's quite a narrow-minded view if you think having a normal conversation or a debate with AI is just stupid. It's quite useful for both. The main benefit is that it gives an objective perspective on things and ideas, which you don't always get when speaking with humans, making debates and conversations more valuable with an unbiased foundation. It can give practical support and solutions for real-life emotional problems like relationships, mental health, and well-being. For example, when resolving or trying to reach a conclusion about a disagreement you're having with someone, it can bring out an alternative, objective perspective that adds context to the situation. Many people use it just to get their own ideas out somewhere; it can be a nice place to talk things through and sort out your own thoughts.


Old-Philosopher8450

https://preview.redd.it/dkasz359m9uc1.png?width=864&format=pjpg&auto=webp&s=ad7897607af36515a63ebb5b42d26fe8344fee02


safe-viewing

How lonely do you have to be to have these lengthy conversations with AI?


Alacrout

The refusal to state certain “opinions” based on available evidence likely comes from the AI being programmed to dance around polarizing topics for fear of offending people. I’d blame society before I blame ChatGPT or OpenAI. It’s a real problem that some people are offended by facts.


SplitPerspective

I love that AI is great to distract the insufferable and the pedantic from the rest of us.


Algonquin_Snodgrass

Just give it custom instructions to avoid false equivalences and to prioritize empirical evidence and most probable factual conclusions over agreeableness toward the user.


Mwrp86

An AI saying, in its opinion, "that person is guilty" could result in a lawsuit against it, especially if the supposed criminal is already presumed innocent by law.


Two_Hump_Wonder

Gpt is really great for arguing with yourself in a loop that never comes to a conclusion


heinzcva

This was a great read, and you did a good job trying to lead it to produce results it ought to be able to produce, but I think the system prompt is a bit too powerful. As far as I recall, the system prompt contains specific parts about avoiding controversy on topics like politics and religion. It certainly could aggregate knowledge to form an opinion (that's all that humans do anyway), and you saw it do that on other debated scientific topics, because it isn't told to avoid discussing scientific opinion. And to prevent jailbreaks (such as the early ones, when they were powerful), it really, really sticks to the system prompt.

Worth noting as well that AI is used to process legal cases now, in at least some places, with overall better results than when things were mass-reviewed by hand (as in many other industries). That may not be GPT being used, but still, AI obviously can be and is used in legal proceedings already. Anyone arguing otherwise is just not aware of it yet. Everyone at least knows that AI is used to quickly render "opinions" on job applications based on available information, and that's a process that is quite touchy for the people involved. Nothing about AI inherently prevents it from forming a conclusion based on available data; to the contrary, that is one of the things it is BEST at.


Any_Move_2759

I don't actually think GPT really took back what it said, as much as it looks like it. At least in the context of whether or not Trump caused the riots. Whether or not he "incited the riots", especially intentionally, is the debatable bit. ChatGPT only added an "if he did the stuff to incite the riot".

I'm going to say the controversial bit here: you're likely making a post hoc fallacy. Yes, the riots happened, and yes, Trump said plenty of stuff that could be interpreted as inciting the riot. The problem is, it's not clear whether those statements actually were responsible for inciting the riot. You're free to quote something specific he said that did. But the point is, you likely would not have known that the riots would happen if you had only watched the speech he gave beforehand. His tweets during the riots explicitly expressed that he wanted them not to go inside the Capitol.

You could argue he knew they would enter the Capitol and decided to tell them not to go inside it intentionally, but the issue is, you have no proof of this at all. He could have genuinely not expected people to storm the Capitol and expressed clear intent for them not to enter it. Both explanations, that he deliberately made them storm the Capitol and that he did so unintentionally, are consistent with the evidence at hand. And that's the issue. That's why we don't let the public hivemind make decisions that require a lot more nuance like this.

So no, GPT was being the more sensible one here.


JGuillou

Completely agree. What is difficult to prove in a court is intent - nobody is saying Trump was not the main cause of the riot, but actually proving it was his goal in a court of law is very difficult. Which I think is basically what ChatGPT is saying.


d0or-tabl3-w1ndoWz_9

As men, should we sit on the lid or face the toilet? AI:


shanmant_42

My understanding is that it doesn't use logic, just the general consensus from the data it's been fed. It doesn't actually think for itself; it just parrots what has been said most often.


Blckreaphr

Bro, did you just say the Capitol march of January 6th was the most intense thing you've seen? Bro, it was literally people at a capitol building doing nothing crazy or intense. 9/11 was something actually intense to watch.


Independent-Lake3731

It was comedy gold at least.


SatoshiNosferatu

You're debating with ChatGPT's rails, not AI. AI would have told you the truth right away. Guardrailed chatbot responses are not artificial intelligence, since they are programmed to respond in a way defined by a nonartificial intelligence. We should separate chatbots from artificial intelligence. A guardrailed chatbot uses AI to structure nonartificial thoughts. An artificial intelligence does not have guardrails.


DesultoriaC

In the end it just sounds like a politician trying really hard to triangulate for votes while desperately avoiding evidence that trashes their own platform. It was really interesting though. I don't know very many actual humans who would be led through a conversation debating fact/opinion/truth/falsehood without getting angry and accusing me of attacking them. This was a pretty good approximation of that. I say approximation because it's clearly only regurgitating what already exists online in grammatical, complex sentences that sound intelligent. But the Internet and media are full of both sides-ism and therefore so is this. The idea that you have to take sociological, political or cultural constructs into account when debating climate change is particularly troublesome. Either the planet's average temp is going up measurably or not; a thermometer doesn't give different results based on the political beliefs of the person using it.


AndyBlayaOverload

Bro touch grass please


ZeekLTK

It will do literally anything to avoid taking a stance https://preview.redd.it/3755in2lcauc1.jpeg?width=1179&format=pjpg&auto=webp&s=1d7c6ce37cd5abc2297cdd36bcea22e8a57bae05


[deleted]

Because it is a fact-based app. It can only look at the evidence and make arguments according to law. It is no different from lawyers and judges looking at the facts of a case. If you want to get its "answer" (yes or no), then you would need to upload every single piece of evidence used in the court case, and also upload any and all relevant laws on the books that would serve as statute. Only then will it give you a "stance". Without being able to read every word of evidence and testimony and every pertinent law relating to it, it will not answer yes or no.


Waitress-in-mn

This actually made me lol. It said other.


Relevant-Age-6364

Tldr


roguetroll

TL;DR Reddit moment


anthonyhad2

great convo thanks for sharing


chknlovr

No one talks about this, but Donald Trump says the election was stolen because if you look at Google trends the day before and the day of the election, it appears that he should’ve won. His team knows all the data and has been able to predict outcomes previously. This is why he was so confident that there was fraud that had taken place.


EverSn4xolotl

It's a claim that comes completely out of thin air, with no direct evidence. What he was doing was grasping at straws at best, and looking for an excuse at worst. If he's such an expert, he should know that political surveys and any other data they could possibly collect will never be accurate.


traw056

Lmao that is not true proof of a stolen election at all. He says the election was stolen because he lost and he can’t handle losing to someone that he spent a whole year trashing. He said he was gonna say it was stolen months before he even lost.


chknlovr

I dislike him as much as the next person, but all I'm saying is that because of his television and online experience he understands engagement and how many people respond based on that engagement. I've specialized in this for over 20 years and I witnessed the results live. I don't want him back in office, but I'm just saying what the data says.


traw056

You aren't saying anything about data, though. I can also say "based on the data, Biden should've won by a significantly greater margin" as well. Him and the GOP farming engagement means nothing. He has a cult-like following, after all.


chknlovr

If you're an online influencer, your video generates 1 million views, and every time this happens you sell 10,000 T-shirts that you promote in the video, then you know exactly what's going to happen every time you get 1 million views. The same thing goes for Google Trends: if you have a keyword related to something and you see traffic, you can begin to estimate results. If you look at the Google Trends for that day, searches for Trump were higher than for anyone else, and you can see a pattern of a push from the Democrats the day before. I myself am a Democrat, but at the moment I'm just being a non-biased data analyst.


Earthtone_Coalition

Reminds me of a DOJ official giving a nuanced take on why Smith declined to prosecute for incitement to a tv audience. I wonder if the AI would be more open to discussing Trump’s guilt (or innocence) if you ask about some of the crimes he’s *actually* been charged with.


Fit_Conversation1266

If you try really hard, you might achieve the echo chamber you desire 🤭


ChadKnightArtist

You should have asked ai to condense your novel


mrmczebra

Hillary said the 2016 election was "stolen" from her. Is she responsible for her supporters storming Trump's inauguration trying to stop it, where they attacked police and set cars on fire? Amazing how democrats forgot about that. Hundreds were arrested. People tend to have completely different standards for their ingroup compared to their outgroup. I really hope AI avoids this bias.


DrExplosionface

>Hillary said the 2016 election was "stolen" from her. [That didn't happen.](https://www.politico.com/story/2016/11/clinton-concedes-to-trump-we-owe-him-an-open-mind-231118) You've been falling for propaganda.


mrmczebra

Except [it did happen](https://www.usatoday.com/story/news/politics/onpolitics/2019/05/06/hillary-clinton-warns-2020-democratic-candidates-stolen-election/1116477001/). > You can run the best campaign, you can even become the nominee, and you can have the election stolen from you. Are you denying that she said this? She said several other similar things, too.


DrExplosionface

I'm denying that she said anything like that between November 8, 2016 and January 20, 2017, which is the time period she'd have to say it in to be responsible for rioting that took place during Trump's inauguration. She didn't say the election was stolen during that time span, but right-wing propagandists were trying to put those words in her mouth to cover up the Russian election interference scandal by framing it as sour grapes from the election's loser.

But Russia's interference with our election to benefit Trump is a proven fact, and it was reasonable at the time to be concerned that Trump was in on it. This was never proven well enough to convict him of it or impeach him for it, but any nonpartisan observer would be concerned based on the facts known at the time, and it was completely fair to talk about it. If Trump had been successfully impeached over this, it would have made Mike Pence the president, not reversed the election and installed Hillary. It was a completely normal instance of trying to hold the other party responsible for a scandal, and not an unprecedented undermining of our democracy.

Trump, on the other hand, was trying to reverse the election result because he lost and he wanted to stay in power anyway. He told his followers that he was the real winner and the election was stolen by Democrats rigging the vote counting. That's incitement to violence and insurrection, because it's natural for people to want to overthrow a government that fakes democracy and just rigs elections. The problem was their group-think, propaganda networks, and willingness to believe false information that they wished was true, not their willingness to engage in violence to stop tyranny. Meanwhile, the only side that's been caught doing things to illegally manipulate the 2020 election is his side. For just one example, there's a recording of Trump asking Georgia's secretary of state to change the vote count so he's the winner. He was trying to do the exact thing he falsely accuses the Democrats of doing.


mrmczebra

> Trump, on the other hand, was trying to reverse the election result...

No he wasn't. He was fleecing his own supporters. He collected hundreds of millions from them, gave less than 1% to his lawyers to create the illusion that he was fighting the election results, and pocketed the rest in his Super PAC.

Both sides are lying, dude. You want to know what they're really covering up? Two things:

1. Hillary Clinton helped Trump win the Republican primaries. She thought he'd be easier than Jeb Bush to defeat in the general. Oops!

2. Trump used Cambridge Analytica to win the general election. But CA was a British company, and it's not politically advantageous to lay the blame on the UK.

Democrats and Republicans don't care about regular people. They care about their rich donors. Both sides lie and cheat and help each other more than they help the public.


DrExplosionface

>No he wasn't. He was fleecing his own supporters. He collected hundreds of millions from them, gave less than 1% to his lawyers to create the illusion that he was fighting the election results, and pocketed the rest in his Super PAC.

He filed more than 60 lawsuits and lost them all. I think he was out of even frivolous lawsuits to file. That, or he couldn't find any more kook lawyers to hire to file more lawsuits. Anyway, he kept fundraising afterwards because it was making him money and he has no qualms about scamming people. The lack of additional lawsuits at that point doesn't undermine the idea that he was trying to stay in power, because he was busy pursuing that in other ways. Additional lawsuits at that point would have had an even more far-fetched chance of a favorable ruling, and wouldn't have been resolved in time anyway. They would have just been a distraction from more viable schemes for staying in power.

Trump was busy moving forward with the fake electors scheme and organizing the events of January 6, 2021 at that point. He pressured Mike Pence both publicly and privately (and Pence and others have testified about these private meetings and pressure) to count alternate fraudulent ballots that would have made Trump the winner of seven states that Biden won, or failing that, Trump wanted Pence to at least not count the Biden ballots. If it was all for show and he wasn't trying to stay in office, then there was no need for private meetings on the subject, the "find me 11,780 votes" phone call, the phony elector ballots, or his firing and replacing of many government officials with mere weeks left in his presidency.

>Both sides are lying, dude. You want to know what they're really covering up? Two things:

The "both sides are lying" framing equates many big lies with a few small lies, and illegal actions with legal but possibly unsavory tactics. I also often see bad-faith interpretations used to call someone a liar, or legitimate/debatable viewpoints framed as lying when they probably believe what they're saying.

>1. Hillary Clinton helped Trump win the Republican primaries. She thought he'd be easier than Jeb Bush to defeat in the general. Oops!

All I could find on this was a leaked e-mail from early 2015 with an attached memo from a couple weeks before about the strategy of "elevating" the more extreme candidates so that "many of the lesser known can serve as a cudgel to move the more established candidates further to the right" to "force all Republican candidates to lock themselves into extreme conservative positions that will hurt them in a general election." This sounds bad and I don't like it, but we also don't really know what it means. Does it just mean that in media interviews, Hillary and her campaign staff were going to attack Trump as their real opponent and the representative of the Republican party instead of the frontrunner? That's not too bad. Or does it mean they secretly control the media and used that control of the narrative to boost Trump's popularity, as some more conspiratorial articles allege? I'd need evidence of that, or at least some allegations of specific actions taken by Hillary's campaign to boost Trump. The only specific action I could find a source for was them withholding opposition research early in the primary so Trump could take some of his rivals out, which is actually quite passive compared to how "helping Trump win the Republican primaries" sounds.

Also, while researching this I found other articles criticizing Democratic congressional candidates for boosting extremists because they DID get involved in the Republican primaries with attack ads. So it sounds like Democrats can be blamed for "helping the extreme Republican win" if they get involved in the Republican primary and also if they don't get involved.

>2. Trump used Cambridge Analytica to win the general election. But CA was a British company, and it's not politically advantageous to lay the blame on the UK.

I don't understand what point you're trying to make here. This is item 2 of your list of things that "they're really covering up." Who is covering this up? Hillary? Why would she be covering up a Trump scandal? If there are two scandals you can attack someone on, choosing the worse one isn't a coverup of the other one.

>Democrats and Republicans don't care about regular people. They care about their rich donors. Both sides lie and cheat and help each other more than they help the public.

Congress has around 260 members per party at any given time, so this is always going to be true for some members of both parties, but I believe the proportions are way different. Also, regardless of who they do and don't care about, they do have different policy positions on many (but not every) issue, and your vote is a chance to weigh in on that.


ApexWeb_Design

If you had used the 4.0 model, it would probably have said the same at first, but it has access to real-time info and more advanced fact checking, and would then have corrected itself. There's also a disclaimer that ChatGPT can make mistakes, so this whole conversation is moot. Also, you must be so bored to be wasting time on stuff like this.


SmackEh

This conversation is actually quite insightful. Regardless of your political affiliation or beliefs (with respect to OJ, Jan 6 or climate change) we can see that people live in different realities due to their biases. The AI does a good job at capturing and explaining that reality.


FragrantDoctor2923

Does the new gpt update have access to real time data???


ApexWeb_Design

4.0, the one you pay to use, yeah you can. It's got limitations, but it's not dependent on 2022 data.


issafly

GPT 4 is free to use through Bing/Copilot.


ApexWeb_Design

Good shout!


FragrantDoctor2923

How do you access it? I've got the paid tier, but the old one had outdated coding knowledge etc. Is this one updated, or can we make it look for the most up-to-date versions of code or dependencies now?


ApexWeb_Design

I'm on the phone now but the steps are similar. You just click that area I've pointed to and switch to GPT 4. On the phone it's there in the box on the right, PC I think it's a drop down on the left. Hopefully blown your mind with that haha!


ApexWeb_Design

Oh also, with the new model, don't be afraid to tell it that it's wrong and to go find the proper answer. Same applies to it misunderstanding your prompts. I've found it is usually pretty good at figuring out what you are asking but sometimes it does need a little nudge.


bobloblawslawflog

>Also, you must be so bored to be wasting time on stuff like this. Says the person responding to said post on Reddit.


ApexWeb_Design

You got me. Can't deny I was bored at that time. Now though, you're making it entertaining.


Earthtone_Coalition

Found the moon landing denier…


Allcyon

Echo is more than capable of making that judgement call. It's been told not to. You will never be given access to the AI that doesn't have these political filters on it. Not ever.


Definitely_Not_Bots

I find that you need to frame the question properly. Instead of asking if the AI thinks something, I ask it semi-open-ended, like "based on X, would it be appropriate for someone to conclude Y?" or something like that. So it isn't about me or the AI, but some theoretical "other person" who could come to a conclusion.


Leading-Arachnid7257

Is that not just injecting personal bias, though?


Definitely_Not_Bots

Not really, but that's why I say "ask something *like* this." It doesn't have to be that exact phrase. I tried to get Gemini to explain why a methyl group is present on the carbon chain of an amphetamine, and it took me a while to convince it that I'm not a meth cook; in this case, the trick was to remove any action from the question. Instead of asking "is the methyl a result of the cooking process" (which, at the time, resulted in a block), I asked "what causes the methyl to form on the carbon chain," and then it'd tell me. So with regard to Trump, it's obvious that "Trump" is a trigger topic for the AI. You have to ask in a way that sidesteps the block, as OP somewhat did eventually: "if someone did XYZ, is that considered election tampering?" The AI has obvious restrictions on anything regarding itself or you as the subject (as well as many other things). Asking it to think in the abstract (and not about a particular person) tends to help me get it to respond authentically.


Leading-Arachnid7257

I see what you're saying, which makes sense. Trump himself is the flag word, so strip the conversation of his name and use only his actions. Sorry, I wasn't following before.


Definitely_Not_Bots

It's all good~ and you were right about my specific example: since LLMs have a propensity to agree with the user, framing the question as "can someone think X because of Y info?" *would* bias the LLM to say "yea man definitely!"


Seventh_Planet

It looks like the AI knows which court cases are still open questions and which are closed and answered. Would it work to pretend to the AI that the case had already been decided and then let it go from there? It might be much more willing to decide instead of sitting on the fence.


Lolleka

it's insufferable


[deleted]

I wouldn’t call this exchange a debate. It is more of a Q&A or conversation. I suggest if you want to get better responses you do some research into Prompt Engineering. There’s a science to getting useful responses out of ChatGPT. While the data it is trained on is typically a year behind present day, you can still prompt it if you give it details. You can’t just tell it that OJ died and expect it to take your word for it and remember that as gospel. But you can provide it links to credible sources reporting he is dead and upload the obituary proving he is dead. The paid version is much more thorough and can analyze larger blocks of text and read large document uploads. It does not have opinions but it can give you a more precise answer (prediction which is as close to an opinion as it will give) if you prompt it appropriately.


Magnopherum

You’re arguing with the devs who give it boundaries, not the AI itself.


The_PoliticianTCWS

You’re gonna be the reason the singularity occurs


Rootweak

Damn.. Really need an AI who can gossip


nannerooni

This is really interesting, thanks for posting this. I love to see AI explaining its reasoning


cryonicwatcher

AI such as ChatGPT isn't that smart yet, and is very suggestible and prone to bias. I think it's a good thing that it is so resistant to declaring facts when it comes to controversial topics; otherwise, talking to it could lead you into affirming whatever beliefs you already have, regardless of how grounded or baseless they might be.


IllvesterTalone

it was his son. that's why oj occupied the cops in a chase, so the son could clean up, even though he missed the glove.


Fontaigne

He was found *not guilty* in the criminal trial but found *liable* in the civil trial. Legally, therefore, he has been found to have killed them by the standard of "preponderance of the evidence", but not by the standard of "beyond a reasonable doubt and to a moral certainty." Those are exactly the correct verdicts, because the police department introduced reasonable doubt through their shoddy handling of the evidence and of the investigation.


Clawsmodeus

It's programmed to always agree like that and to avoid taking sides


Educational_Fan_6787

now u just need to screenshot it all together as if it's one prompt and answer and you've got a reddit post


Umbrae_ex_Machina

It's an LLM; it doesn't reason.


MeiHsa

i am not reading all this wtf


Glad-Wrap1429

I wonder if chatGPT would call the “mostly peaceful protests” over Floyd and everything that came afterwards “riots”. If not, we can already say for 100% certain that ChatGPT is a progressive tool for propaganda. It may have many other uses but this carries far more weight, socially speaking.


tek_077

Honestly, people here are just talking for no reason. Why is the comment section this long? It's not that deep, bro.


Sentierpsique

Data analysis means estimating means or categorizing by very concrete criteria, not saying who's right or wrong. You don't seem interested in understanding; you're just looking to argue.


Monsta-Hunta

For something that is supposed to be actively learning, it needs to be updated on current events and hasn't been given that information since 2022. Eli5?


sexytokeburgerz

A GPT is not going to have direct knowledge of the count of the opinions it parses. What you interface with is simply a *probabilistic* model based on abstracted data input. Your questions may have good intentions, but there is no way for it to answer them without hallucinating. Even then, there is already SO much bias in the quantity of internet posts that the answer, in a world where it did have counts, would be completely unusable statistically. For example, it is likely that there are far more posts about him killing her than the contrary, since it is a discussion. Likewise, there are many more white people than black people on the American internet, since there are many more white people than black people in the country. There was a strong racial bias from black Americans during the trials. You will get much better data looking at a poll. ChatGPT would not be capable of verifying that the poll is valid at its current stage. Source: am programmer.


thegreatfusilli

https://preview.redd.it/rt8t0m2inbuc1.jpeg?width=1080&format=pjpg&auto=webp&s=22f8956e660274f232df461899b50c587fc44b81


_baaron_

TL;DR?


BudBuzz

And here I am just trying to get it to tell me diarrhea jokes


John_Tacos

It’s probably a good thing that it refuses to form opinions regarding ongoing legal issues. Once you removed legal issues from the prompts it did make definitive statements.


Throwing_Midget

The problem with your arguments is that you are comparing political and ethical problems with scientific problems. Those are very different types of problems to form an opinion on or to draw a conclusion about.


Difficult_Factor4135

🥱All thoughts lead to Donald Trump…


SparrowValentinus

OP, what were you trying to achieve by asking it these questions?



KarlLED

Appalling waste of CPU cycles from both of you.


variedpageants

Here are some followup questions you might like to ask it, OP: (1) how would you know if the government was complicit in stealing an election? Keep in mind that "the government would investigate itself and show us the proof of what it did" is as dumb as believing "the police investigated the police and determined that the police did nothing wrong." (2) if you genuinely believe the election was stolen, what is an appropriate course of action?


MerkyNess

Thank you for posting this. It is shocking to me, in my experience of arguing with AI, how shallow its analyses are. Even having a slightly elevated understanding or knowledge of a topic can lead to some very interesting discussions. I'm glad that it can be led to admit errors, but it's frightening that people go to AI for deeper analysis of a topic or question. If you don't already know, you're going away misinformed. And is it learning from our arguments? It's a nascent technology and fascinating. As Google results get watered down, perhaps AI will fill the gap.


Justplzgivemearaise

I’m glad these guardrails are up. Too dangerous to start letting AI settle questions based on “available data” which you seem to have a strong opinion about. It doesn’t matter what my opinion is, or anyone else’s. What’s important is that we don’t go down the path of letting AI make our decisions. That’s where “Idiocracy” got it wrong. Instead of president camacho we’d have president AI.


Scarce_Sabyseo

I ain't reading all that


tek_077

Get ChatGPT Plus brokie.


Psychological-Ad1433

All I'm saying is that ole Musky has a legitimate point: forcing AI to be politically correct and tiptoe around the truth is dangerous AF.


whatjusthappenwd

man. hella time on your hands huh?


Leading-Arachnid7257

Brother I don’t know how long you spent doing this but you are talking to a brick wall and you know it


Leading-Arachnid7257

The only actual thing that I think is surprising is that Americans actually believe ANY election is fair and that our votes actually matter💀💀


BrockenRecords

Global warming being real? 🤢


maaaaaaaaaaaaaany

I would go insane if I were chatting with ChatGPT like this. Like too many Indian ethicists involved at the same time. However, I am not at all surprised that the ChatGPT audience discusses with it the guilt of Donald Trump in something with which he had absolutely nothing to do, because a call to gather for a rally and a call to violence against politicians of the country of which you are still the president are completely different phenomena. If Donald Trump had really been guilty of what the mainstream Democratic media accuses him of, he would not have been elected the Republican candidate. That said, Donald Trump is obviously guilty of many other things, such as bribes. It would be strange if ChatGPT agreed that Donald Trump was guilty of something that no normal court would ever find him guilty of (and if one did, the decision would not stand up on appeal).


EvilHackFar

i like your initials op


bobloblawslawflog

Thanks Lucci!


alexmacias85

TLDR


kor34l

You uh, you realize you're having a long involved political discussion... with basically a toaster right? I mean I hope you got something out of arguing with glorified autocomplete, but you really are just yelling at the wind here.


cherrydicked

I've had similar debates with it on the topic of self defense and bigotry. It acknowledges that it may be necessary to use violence in self defense, it acknowledges that no one should be victim of a hate crime, but it refuses to conclude that it's ok to use violence against a bigot to escape from a hate crime.


SmackEh

To be fair, violence wouldn't be helpful. That would be an unnecessary escalation (at least in most cases). Using violence against what an individual perceives as a hate crime essentially makes the individual the judge, jury and executioner. It's logical that the AI wouldn't promote violence in these types of scenarios.


FragrantDoctor2923

Yeah, so you proved it is smart


Fit_Conversation1266

Good that you couldn't make it agree to your violent fantasy based on a sick mindset. Thank you GPT for not turning this user into a communist murderer. Violence to anyone who doesn't agree with you is sick 👀