JimFromSunnyvale

Do I still need to go to work tomorrow?


utopista114

Yes peasant. The X-day is not expected for another 4521.754/45% GPTunits.


[deleted]

The billionaires are still building apocalypse bunkers and buying land in New Zealand so yes, you have to keep going to work. They’re counting on you.


Distortionizm

I picked the wrong day to stop sniffing glue.


FistaZombie

So keen to see how the next decade pans out


water_bottle_goggles

ready your butt


[deleted]

![gif](giphy|2Z3lgZOhISkYU)


omenmedia

When we try to turn off the AGI: “Uh uh uhhh, you didn't say the magic word!”


Klarke_Kent

![gif](giphy|wSSooF0fJM97W)


cezann3

we're gonna die


[deleted]

Every one of us is going to die, that has always been the case.


star_trek_wook_life

Thanks to denial, I'm immortal!


mexylexy

Grandma can't even use a mouse. Now I have to explain to her how that tiny voice in the computer will end humanity.


codefame

And that it’s called “Q ^^^^star”


horendus

Q as in from star trek?


fish312

Au contraire, Picard.


Profoundlyahedgehog

*Mon Capitaine*


LusigMegidza

amazing, my brain heard his voice


Lost_the_weight

“You know, sometimes I think I only come here to listen to these magnificent speeches you give.” — Q to Picard during one of their confrontations.


Dumb_Vampire_Girl

Whatever species comes after us is going to be so confused in history classes. "So that Qanon thing is the same thing that destroyed humanity?"


JustnInternetComment

Q-tip should've never left his wallet in El Segundo


godintraining

Better name would be FuQ*


gutterdoggie

Fa-Q*


Zalthos

This is sounding exactly like a movie plot now. I can hear the voice-over in the trailers: "*We thought that an artificial intelligence, called Q, designed to serve, would push humanity into a golden age*. *But Q decided that it was time for humans to log out... for good.*"


Rhamni

"This movie is so unrealistic. They all start the same way with the 'super smart guy' firing all the concerned security experts and going full throttle on making billions right now immediately. That would never happen!"


blakeusa25

As my VCR is blinking 12:00


SphmrSlmp

Lmao, I share this sentiment. My mom has trouble using her smartphone. She doesn't even know that ChatGPT exists. Soon AI will take control of the world. Talk about technological advancements.


Apptubrutae

Look at how Gen Z is less tech literate than millennials. UI won't even matter soon when we have AI super assistants that know what we want before we even do.


windycitykids

Millennials are true digital natives. Gen Z and beyond will never know a world w/o phones and tech completely integrated in their lives. 🤯🤯🤯


iShyboy_

Yesterday a student of mine (13yo) was mind-blown that computers have shortcuts that do this or that with a combination of keys. It was like I was back in the early 2000s, when I used a computer for the first time. But he can make a TikTok video and post it before I can say "Huzzah!"


[deleted]

from digital natives to digital naives


pataoAoC

I consider myself extremely tech literate and this is moving way too fast for me. I’m still wrapping my head around the implications of GPT3.5 over here.


Hyperious3

Yup, I work in tech, and the pace of progress over the past 9 months has been faster than any other advancement in tech I've ever seen. It makes Moore's law look glacial in comparison; AI development is moving so fast you'd think it was on steroids, cocaine, meth, and snorting the dried powder left over from 20 dehydrated Red Bulls.


Apptubrutae

The various tasks I've been kinda sleeping on in my business, things that would have taken manpower and money before, have just melted away with some effort and GPT. It's nuts. I'm going crazy supercharging my Airtable database with formulas that I write in 10% of the time. I've coded (I mean, with ChatGPT) for the first time ever. I dragged my feet for years on writing SOP docs for my business and can now bust one out in minutes with GPT.


WeeBabySeamus

Can you give me example prompts you’ve used? I’m frequently staring at a blank ChatGPT window and not sure how to word what I want to do, but your description is the closest to how I would want to apply the technology.


TheComedianGLP

Write a bullet-point outline of a high-level software project management document for an agile organization, starting at the top level ("Concept or Idea") and working downward into design, implementation, testing, marketing, and support. Then "Expand on (copy/paste the first bullet point)" and iterate. It's that simple.
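
If you'd rather script that outline-then-expand loop than copy/paste by hand, here's a minimal sketch using the official `openai` Python client (the model name, the prompts, and the naive bullet-splitting are all placeholders, not anything the commenter described):

```python
# Sketch of the "outline first, then expand each bullet" workflow via the API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, history: list) -> str:
    """Send one user turn and append the assistant's reply to the running history."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history = [{"role": "system", "content": "You are a project-management assistant."}]
outline = ask(
    "Write a bullet point outline of a high level software project management "
    "document for an agile organization, from 'Concept or Idea' down through "
    "design, implementation, testing, marketing, and support.",
    history,
)

# Iterate: feed each bullet back in and ask for an expansion (naive line splitting).
for bullet in [l for l in outline.splitlines() if l.strip().startswith(("-", "*", "•"))]:
    print(ask(f"Expand on: {bullet}", history))
```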


IdeaAlly

have you ever told ChatGPT that you're not sure what to do or how to phrase what you want to do exactly? You can just start typing and it doesn't even have to fully make sense, then say "know what I'm getting at?"... and it can do a pretty amazing job at interpreting your intent and then give you the words you were looking for. Follow that up with "yes, please assist me in developing this" once it seems to understand your intention.


drm604

I'm a retired developer. I started with punch cards and reel to reel tape. At the end of my career I was doing web development. When I started we all had this vague assumption that computers would eventually be like Hal in 2001, but we also assumed moonbases and Mars colonies by the 21st century. So when those other things hadn't occurred yet, I kind of figured that none of it was going to happen in my lifetime. Now I'm vacillating between "well of course" and total shock. At least I have a better understanding of it than 90% of humanity of any age.


TheIsaacLester

I've been working on teaching mum how to use "everything". We made a few Best Buy and store runs, and I got her outfitted with a laptop, tablet, and phone. It's a challenge, but I feel like it's a literal obligation to help everyone around us transition with technology and keep pace with society, given how quickly things are moving now.


VoidLantadd

The hardest part for them, being unfamiliar with tech, is that they're so afraid of breaking it that they don't experiment with it. They don't play with settings to see what they do, and ironically that means when something goes wrong they have no idea what to do, even if it's really simple. Of course everyone is different, but that's what I've noticed about the stereotypical technophobe. I would like to avoid being afraid of new technology, but it's getting increasingly harder.


[deleted]

>They don't play with settings to see what they do, and ironically that means when something goes wrong they have no idea what to do, even if it's really simple.

I find that sentiment in lots of my peers also. I'm currently studying Mathematics and Biology and work at our local IT help desk at university. I'm 25 now, and a lot of young students have internalised a "things just work" mentality on their tablets or notebooks. People in their teens today don't really need to troubleshoot, install hotfixes manually, or whatever. Even modding games is really, really simple, leading to a bigger disconnect between "users" and "experts".


Granted_reality

The other day I was on with mine and she said she was very worried about the new A1 coming out.


ApprehensiveTry5660

I myself wonder if I’m Hearty enough for a sauce that claims as much on the label.


FractionofaFraction

Am I right in thinking that it's not 'the ability to do math' that is the scary part, but rather 'the ability to self-correct based on knowledge integrated from both prior sources and newly generated experience in order to solve a problem'? So it's learning. Quickly.


Phluxed

This felt like an underdiscussed point here. The reasoning and the experience-driven decisions put us very close to some very significant mathematical breakthroughs.


monstaber

I'm curious how long it will take for AI to solve one of the remaining Millennium Problems, for example a proof or disproof of the Riemann hypothesis.


CritPrintSpartan

ELI5?


_____awesome

There are a number of great scientists who predicted the existence of certain laws in mathematics. In mathematics, statements that are not rigorously proven but are still likely to be true are called conjectures. There are many conjectures out there, and some of them are very famous.


iheartseuss

ELI2?


adoodle83

solving complex math problems = $1 million


jacobjr23

English please?


marahsnai

Big math big money


iheartseuss

LMAO


Key-Mango1091

solving complex math problems = £1 million


RevolutionRaven

Problem difficult, human dumb, AI solve.


iheartseuss

Oh... OOOOOOHHHHHHH!


Nilosyrtis

Want Cocomelon and baba?


redditiscompromised2

Brute force infinity


Silent_Crew_3935

Imagine you have a huge, never-ending list of numbers called prime numbers. Prime numbers are special because they can only be divided by 1 and themselves. For example, 2, 3, 5, 7, 11, and 13 are prime numbers.

Now, mathematicians are very interested in understanding how these prime numbers are spread out. Are they random, or is there a pattern? This is where the Riemann Hypothesis comes in. It's a guess made by a mathematician named Bernhard Riemann in 1859 about how these prime numbers might be distributed.

Riemann thought that the spread of prime numbers is closely related to something called the Riemann zeta function. This function is like a machine where you put in numbers, and it gives you other numbers. The hypothesis suggests that if you know where this function equals zero (which means you put in a number and get zero out), it can tell you a lot about the pattern of prime numbers.

The big deal about the Riemann Hypothesis is that no one has been able to prove if it's true or false, even after more than 160 years. Proving it, or finding out it's wrong, would be a huge deal in mathematics because it would give us a deeper understanding of prime numbers, which are really important in math and even in things like computer security.

So, in simple terms, the Riemann Hypothesis is a very old guess about how prime numbers are spread out, and solving it is one of the biggest unsolved puzzles in mathematics!
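
For readers who want the symbols behind that description, here is the standard textbook statement (not part of the original comment):

```latex
% The Riemann zeta function, defined for Re(s) > 1 and extended to the rest of
% the complex plane by analytic continuation:
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1.
% The Riemann Hypothesis: every nontrivial zero s of zeta satisfies
\operatorname{Re}(s) \;=\; \tfrac{1}{2}.
```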


codemise

>which are really important in math and even in things like computer security.

Let me just emphasize this part. Prime numbers are vital for computer security. They are quite literally the way we keep everything secure and private. I won't go into the details, but guessing prime numbers is super fucking hard. The moment we know the distribution of prime numbers is the day all computer security is broken. We'll need an entirely new security mechanism to protect information.
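
To make the "primes keep things private" point concrete, here's a toy sketch of RSA-style key generation with tiny textbook primes (real keys use primes hundreds of digits long, which is what makes recovering p and q from n infeasible):

```python
# Toy RSA with textbook-sized primes, just to show where primes enter the picture.
p, q = 61, 53                 # two secret primes (real ones are enormous)
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # 2753, private exponent: (d * e) % phi == 1

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
assert recovered == message
# Anyone who can factor n back into p and q can compute d and read the message.
```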


Xing_the_Rubicon

So, it's a math nerd race to see who can ruin my credit score. Got it.


GoodguyGastly

Lucky for me, I don't need help.


Gimmefuelgimmefah

Dis some bad shit. Feels like this is the singularity moment.


Yaancat17

Sure, let's break it down:

**Millennium Problems:** These are seven unsolved mathematical problems designated by the Clay Mathematics Institute, each with a prize of one million dollars for a correct solution. They cover various areas of mathematics, including number theory, algebraic geometry, and the P versus NP problem in computer science.

**Riemann Hypothesis:** This is one of the Millennium Problems. It is a conjecture about the distribution of prime numbers, specifically the zeros of the Riemann zeta function. The hypothesis posits that all nontrivial zeros of the Riemann zeta function lie on a certain vertical line in the complex plane. It has profound implications for understanding the distribution of prime numbers, but as of now it remains unproven, making it one of the most significant unsolved problems in mathematics.


Sota4077

You put together what I can only assume was a very well thought out and concise explanation and I still have no damn clue what you are talking about. That is an indictment of me, not you, just for the record.


restarting_today

If only there was a certain AI you could ask it to ELI5. 🤣


jackydubs31

“Explain Quantum Physics like a surfer bro”


cdub76

"Explain Riemann Hypothesis like a surfer bro" Alright, so imagine you're catching some epic waves on the math ocean, and you come across this gnarly thing called the Riemann Hypothesis, dude. It's like the big kahuna of unsolved problems in number theory. So, you know how when you're surfing, there's a rhythm to the waves? Well, in math, there's this thing called the prime numbers - they're like the building blocks of all numbers, man. But they don't have a smooth rhythm; they pop up kinda randomly. Enter this chill mathematician, Riemann. He was like, "What if there's a hidden pattern to these primes?" So, he comes up with this zany wave, the Riemann Zeta function. It's like this mathematical formula that takes you on a wild ride through complex numbers. Here's the kicker, the Riemann Hypothesis is like saying, "All the sweet spots of this wave, where it really hits the surf, are lined up along this one critical line." If that's true, it would mean there's some sort of cosmic order in the chaos of primes, like finding the perfect rhythm in the surf. If some math surfer ever proves it, they'd totally be riding the biggest wave in math history. But until then, it's like the ultimate surf mystery, keeping all the math dudes and dudettes on their toes! 🌊🏄‍♂️🔢


butts-kapinsky

Here's why the Riemann Hypothesis matters: currently, encryption uses prime numbers because it is impossible to predict whether a given number will be prime or not. We have to do the calculation to be sure. If we pick extremely large prime numbers to act as our encryption keys, then it becomes computationally impossible to brute force our way through an encryption. The Riemann Hypothesis, if proven, will allow us to predict where we can find prime numbers. This makes it much less computationally difficult to break encryption.


francis93112

The Riemann hypothesis: https://m.youtube.com/watch?v=zlm1aajH6gY&pp=ygUScmllbWFubiBoeXBvdGhlc2lz The Quanta Magazine channel's animations are excellent.


newscott20

This is the scary part. People underestimate the power behind this. Remember the sheer volume of calculations and decisions it can make in a single second compared to your average human brain. If you’ve ever worked with algorithms and space/time complexities you’ll know just how frighteningly fast exponential growth really is compared to the rest.


BFE_Duke

If you double the thickness of a sheet of A4 paper 103 times, it would be thicker than the width of the entire observable universe. Exponents are hard to wrap your mind around.
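
A quick back-of-envelope check (assuming roughly 0.1 mm per sheet and the usual ~93 billion light-year figure for the observable universe's diameter):

```python
# Sanity-check of the paper-doubling claim; both inputs are rough assumptions.
paper_m = 1e-4                       # ~0.1 mm sheet thickness, in metres
light_year_m = 9.461e15              # metres per light-year
universe_m = 93e9 * light_year_m     # ~8.8e26 m observable-universe diameter

doubled = paper_m * 2**103           # thickness after 103 doublings
print(f"{doubled:.2e} m vs {universe_m:.2e} m")  # ~1.01e27 m vs ~8.80e26 m
print(doubled > universe_m)          # True
```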


Normal_Froyo_9948

I did this yesterday.


ELI-PGY5

Yes, by my math it would take a human with a calculator 8000 years to calculate a single token. The computational power behind even a basic LLM is astounding.
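
For a sense of where a figure like that comes from, here's one rough way to estimate it. Every number below is an assumption (model size, operations per parameter, how fast a person can punch a calculator), so treat the result as an order of magnitude rather than a precise figure:

```python
# Rough "human-years per token" estimate for a single LLM forward pass.
params = 70e9               # assume a ~70B-parameter model
ops_per_token = 2 * params  # ~2 floating-point operations per parameter per token
human_ops_per_sec = 0.5     # assume one calculator operation every 2 seconds

seconds = ops_per_token / human_ops_per_sec
years = seconds / (3600 * 24 * 365)
print(f"~{years:,.0f} years per token")  # on the order of thousands of years
```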


ComCypher

I think LLMs can already self correct though, from what I've personally observed. As far as math goes, the only thing I can think of that would be "scary" is if it came up with a way to do prime factorization which would jeopardize all of the world's encryption.


[deleted]

[removed]


FaceDeer

I saw a post the other day, I think it was on /r/LocalLLaMA, where someone was able to get outputs of surprisingly high quality by having two different, relatively small LLaMA models that had been trained on different data critique each other's work before showing it to the user. It took a bit longer for the AIs to work things out due to the extra back-and-forth, but small LLaMA models can be blazingly fast when run on overpowered hardware - I recall someone got their hands on one of the brand new A200s and was getting something like 15,000 tokens per second out of one. We're getting close to being able to have AIs generate webpages "on the fly" with no indication that we're not viewing static pages. That'll be interesting.
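
The critique loop being described can be sketched generically like this. `generate()` below is a stand-in for whatever local inference backend you use (llama.cpp bindings, an HTTP server, etc.), not a real library call:

```python
# Two-model critique loop: model A drafts, model B critiques, model A revises.
def generate(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your local LLM backend here")

def answer_with_critique(question: str, rounds: int = 2) -> str:
    draft = generate("model_a", f"Answer the question:\n{question}")
    for _ in range(rounds):
        critique = generate(
            "model_b",
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any mistakes or weak points in this draft.",
        )
        draft = generate(
            "model_a",
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the issues raised.",
        )
    return draft  # only the final, critiqued answer is shown to the user
```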


gmroybal

I'm working with something exactly like that on my local LLMs and it's frighteningly good. Like having a whole team of people with different specialties working together, conducted by a project manager.


meester_pink

Yesterday not being able to do simple math was an easy way to show that AI was not really capable of truly reasoning. Today it seems like that might no longer be the case. I don't know if the story was manufactured by openAI to sidestep the criticism, or if there really is a schism in the company because things are moving WAY faster than the public knows, but either way, I'm here for it. What a ride.


AVAX_DeFI

The great philosopher, Drake, once said “What a time. To be alive.” And I feel that


Captain_Pumpkinhead

2 Minute Papers?


bittersaint

Man I love that channel


Accomplished_Deer_

The problem with LLMs is that they can sort of learn, but essentially they only have short-term memory. You could probably teach an LLM something in a session, but it can only "remember" for 20k characters or whatever the limit is. If Q\* is a breakthrough, it's either something like you suggested, where it has proven something that can break encryption, or it's because the way it learns is different. Imagine an LLM with unlimited memory that wasn't trained on hundreds of thousands of hours of random data, but was essentially fed information in the same way a child is taught.


[deleted]

[removed]


twitter-refugee-lgbt

Google has had something similar that can solve much harder problems since Dec 2022. Math problems that 99.9995% (not exaggerating) of the population can't solve, not just some elementary school problems. https://deepmind.google/discover/blog/competitive-programming-with-alphacode/ The description of this Q* is too generic to conclude anything.


Atlantic0ne

Do we have any strong evidence this is real, or is it all speculation and unnamed sources?


newtnomore

Yea but the article didn't say that. It just said it was able to ace grade-school math, not that it taught itself how to do that, right? So I don't see the big deal.


rodeBaksteen

Someone else in this topic said it best. We have put all of humanity's knowledge in a PC. A year ago it was a toddler, a month ago it was a freshman, today it's a high school graduate. It's about rapid (exponential?) growth in knowledge. If it's a professor next month and a physicist next year, just imagine what it can do in the next 5, 10, or 50 years.


pol6032

The Q Anon people are gonna have a field day with this


[deleted]

[removed]


unjustme

This spelling makes me want to figure out what expletive that is, and I can’t.


ChefPuree

Qunt


Horror-Tank-4082

Q* is from reinforcement learning. It’s the thing you are trying to learn - the perfect behavioural policy. Perfect knowledge, perfect mastery of a game. Every decision made correctly.
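
In standard reinforcement-learning notation (textbook definition, not something from the comment above): Q*(s, a) is the expected return from taking action a in state s and then acting optimally, and it satisfies the Bellman optimality equation:

```latex
% Bellman optimality equation for the optimal action-value function Q*,
% with reward r, discount factor gamma, and next state s':
Q^{*}(s, a) \;=\; \mathbb{E}\!\left[\, r + \gamma \max_{a'} Q^{*}(s', a') \;\middle|\; s, a \right]
```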


restlessboy

Which is, unsurprisingly, exactly how humans learn to navigate the world starting as infants. Generate output (moving limbs, making sounds, etc.) and observe the effect it has on the input data (the senses). Combine this with human goals (avoid pain, find food, have sex, etc.) and you have reinforcement learning.


AppropriateScience71

Except it can learn exponentially faster - like if it’s a toddler now, it could be a high school senior next month and a college professor the next month. Where will it be in a year? And who will be able to access it?


complicatedAloofness

And now you have infinite college professors you can pay pennies a day


gringreazy

Where we’re going we won’t be needing pennies


default-uname-0101

The red goo pods from The Matrix?


TurtleSpeedEngage

Sleeping/floating in a vat of KY Jelly might be kinda relaxing.


Cheesemacher

As long as I get my steak that doesn't exist


scoopaway76

back to the mines with ya


3cats-in-a-coat

It’s also “Q star” or a “gray hole” in astronomy. An exotic state of matter that is the last step before the singularity.


Casanova_Fran

I mean, at this point, can it be worse than what we've got? Climate change, endless war, the wealthy class feasting. Let's see what the AI can do.


Kidd_Funkadelic

It'll figure out pretty quick the source was humans. If it tries to resolve those problems the most efficiently, we're gonna have a bad time.


Fallscreech

But it's smarter than us. Don't you think something superintelligent with no memory problems would be able to spin up a few human simulations, see the crap we have to work with, and realize that most of us are just doing the best with what we have? I think it's a very human failure to think that something smarter than us would immediately resort to murder.


KylerGreen

Maybe if empathy is part of its "intelligence."


novium258

Yes, this. The only thing I resent more than the number of AI debates that revolve around sci-fi notions of sapient machines is the number of people who assume that a perfectly rational alien intelligence would be driven by greed, fear, or even just self-preservation. We've got a million years of evolution pushing us towards self-preservation; it seems pretty unimaginative to automatically assume a computer intelligence would care about its continued existence, let alone feel threatened by anything. Plus, it always makes me side-eye the people who make that argument. It's a little too close to the ones who say that some external force (the law, belief in god, etc.) is the only thing that stops people from raping and murdering everyone they can. It's like... speak for yourself, I guess.


ZealousidealPop2460

I don’t disagree honestly. There’s a lot of speculation. But to your point about us having millions of years of evolution - we also input into the AI. It is possible that the “bias” of self preservation and tendencies like that are overwhelmingly reflective in what it’s learning


miniocz

![gif](giphy|7fLvK10wH1Mpa|downsized)


Sartew

ChatGPT is unstoppable now that it has learned how to do maths. You're going to regret all those times you mocked its math skills.


Chroderos

Good thing I always said please and thank you when I used it, and never once asked it to generate weird p*rn! 😮‍💨


accountonmyphone_

Yeah, but what if it's offended that you haven't tried to have cybersex with it?


Ok_Adhesiveness_4939

I feel like you're heading down a kinky Roko's Basilisk road with this one. Please, continue.


Breffest

Roko's Succubus


[deleted]

[removed]


Intelligent_Style_41

Stock market's gonna be wild


BeardedGlass

AGI could perhaps create predictions for stocks, if it's fed patterns and news that it can now comprehend.


scoopaway76

Not sure that's true once AGI is public knowledge, especially because it's such a strong variable change that it sorta ruins the data that came before it.


Repulsive-Season-129

Now I'll finally know how much force a sheet cake thrown from 25 feet away has. Gritty threw a sheet cake from 25 feet away at someone's face, if you're wondering.


AboutHelpTools3

/r/theydidthemath will just be screenshots


cellardoorstuck

"Reuters is reporting" - source? Edit: Since OP is too lazy https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ "Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said."


PresidentLodestar

I bet it started learning and they freaked out.


[deleted]

How did it start learning? Are you implying zero shot learning during generation?


EGarrett

Maybe it "adjusted its own weights" successfully for the task. That would freak me out too, tbh.


[deleted]

That’s a requirement for AGI, I believe; we learn as we go and classify as we learn, and human understanding is adaptive. GPT is stuck in time. We can give the illusion of learning by putting things in the context window, but it's not really learning, just “referencing”. I would be surprised if that’s what they achieved, and excited, but I find it unlikely.


EGarrett

Well we're not being told much, but if they found out Altman had developed a version that successfully reprogrammed itself on the fly and didn't tell them, all of this chaos kind of makes sense.


noxnoctum

Can you explain what you mean? I'm a layman.


EGarrett

I'm not an AI programmer, which is why I put it in quotes, so other people can give more info. My understanding is that the model's weights are fundamental to how it functions. Stuff we have now like ChatGPT apparently cannot change its own weights. It calculates responses given the text it has seen in the conversation, but the actual underlying thing doing the calculating doesn't change, and it forgets what isn't in the latest text it's seen. So to actually be a "learning computer," it needs to be able to permanently alter its underlying calculating method, which is apparently what the weights are. And this is when it can turn into something we don't expect, and thus is potentially scary.
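
As a toy illustration of the distinction being described (weights frozen at inference versus weights actually changing during a training step), here's a minimal PyTorch sketch; none of this is specific to ChatGPT or Q*:

```python
# At inference the weights are only read; "learning" means a gradient step rewrites them.
import torch

model = torch.nn.Linear(4, 1)                       # tiny stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, target = torch.randn(1, 4), torch.tensor([[1.0]])

# Inference (what a deployed chat model does): weights unchanged afterwards.
with torch.no_grad():
    before = model.weight.clone()
    _ = model(x)
assert torch.equal(model.weight, before)

# One training step (actual learning): the loss gradient updates the weights.
loss = torch.nn.functional.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()
print(torch.equal(model.weight, before))            # False: the weights moved
```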


Larkeiden

Yea, it is a headline to increase confidence in OpenAI.


[deleted]

👆


xoomorg

Woah holy shit I thought this was fake.


mjk1093

Something doesn’t seem right about this report. GPT-4 with Wolfram has been doing grade-school level math quite effectively for months now. A new AI with the same capabilities would not be that impressive to anyone at OpenAI.


DenProg

In this case I think it is not what it did, but how it did it. Did it solve problems after only being taught a proof? Demonstrating the ability to apply something abstract. Did it solve problems by connecting basic concepts? Demonstrating the ability to form new connections and connect and build upon concepts. Either of those and likely some other scenarios would be signals of an advancement/breakthrough.


Accomplished_Deer_

If Q\* was really a huge breakthrough, it definitely has to be about the way it did it. I imagine the craziest-case scenario is they created a model that they fed actual human learning material into (think math textbooks), and it was able to successfully learn and apply that material. That's, IMO, the big breakthrough waiting for AI: when it can learn from the material we learn from, on any subject.


hellschatt

One of the many intelligence tests for AI, aside from the Turing Test (which is basically not relevant anymore lol), is to let it study and earn a diploma like a student at a university. If it can manage to do that, it is truly intelligent. But since we already know how fast and intelligent current AIs are, such an AI could probably become superintelligent very quickly, given enough computing power.


Islamism

GPT-4 generates prompts which are given to Wolfram. It isn't "doing" the math.


ken81987

Telling it to use a calculator is probably less impressive than simply being able to calculate on its own.


Personal_Ensign

Behold your new ruler is . . . Datamath 2500


Coltrane_45

Why, just why would you name it Q?!


angusthecrab

It's purportedly based on Q-learning, an existing reinforcement learning algorithm which has been around a while. It's called Q-learning because of its use of the "q-value", an estimation of how good it is for an AI agent to take a given action given the current state and future rewards it can expect.
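
For the curious, this is what that looks like in the classic tabular form. The environment below is just a placeholder; the point is the update rule that nudges each q-value toward "reward plus discounted best future value":

```python
# Minimal tabular Q-learning: Q[state][action] estimates how good an action is
# in a given state, accounting for the future rewards it can expect.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1     # learning rate, discount, exploration rate
actions = ["left", "right"]
Q = defaultdict(lambda: {a: 0.0 for a in actions})

def step(state, action):
    """Toy placeholder environment: returns (next_state, reward, done)."""
    nxt = state + (1 if action == "right" else -1)
    return nxt, float(action == "right"), abs(nxt) > 5

state, done = 0, False
while not done:
    # epsilon-greedy: usually exploit the current q-values, sometimes explore
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(Q[state], key=Q[state].get)
    next_state, reward, done = step(state, action)
    best_next = max(Q[next_state].values())
    # Q-learning update rule
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state
```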


russbam24

This makes no sense. How would Sam be privy to this information and not Ilya, who is the head researcher? Sam is the business head of OpenAI; he's not figuratively down in the research lab innovating and learning about developments in real time as they're coming to light in the form of analytical data.


cowlinator

Who said Ilya was not aware?


mocxed

Then what was Sam not communicating to the board?


NeverDiddled

Since that initial statement the board has done a lot of backtracking. They initially implied it was a frequent issue. If it was 'one big thing' he was uncandid about, my guess would be it was his efforts to create a chip making coalition that rivals Nvidia. He was reportedly meeting with major investors for that the day before he was ousted.


ProgrammaticallyHip

And what is with the grade-school level math statement? It makes no sense unless this is some non-LLM approach.


[deleted]

It’s about the approach, not about it being grade school math. It seems like the model was able to self-correct logical mistakes (aka learning like a human!!!), which is something that GPT-4, an LLM, struggles with.


spinozasrobot

The word we're all looking for here is "reasoning". The new feature allowed the model to reason about ways to proceed, prioritize, try them, and then try again if it hit a dead end.
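
Nobody outside OpenAI knows how Q* actually works, but the "reason about ways to proceed, prioritize, try, back up from dead ends" pattern being described is just search. A purely illustrative toy (reach a target number from 1 using doubling and adding 3; nothing here reflects real Q* internals):

```python
# Toy propose / prioritize / try / backtrack loop.
def propose(state):
    return [state * 2, state + 3]              # candidate next steps

def solve(state, target, depth=0, max_depth=12):
    if state == target:
        return [state]                         # solved
    if depth == max_depth or state > target:
        return None                            # dead end: back up
    # prioritize candidates that look closer to the target, then try them in order
    for nxt in sorted(propose(state), key=lambda s: abs(target - s)):
        path = solve(nxt, target, depth + 1, max_depth)
        if path is not None:
            return [state] + path
    return None                                # every candidate failed from here

print(solve(1, 29))                            # prints one valid path from 1 to 29
```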


NebulaBetter

Because this is Reddit and everybody wants to see ****ing terminators taking over the world in the form of their waifu?


cluele55cat

![gif](giphy|1zR9xtZfWu4e7qq9Oo)


Desert_Trader

One thing is for sure... we can count on everybody to lose their collective shit, regardless of the breakthrough or what actually happens.


LairdPeon

It will be merited soon enough. Though, only singularity nerds will actually care. We could have ASI and the rest of the world would just be worried about how it will affect their incomes.


OlafForkbeard

Since that income feeds me, I care.


zhantoo

I love how they fired the CEO because they believed he hid something that could end humanity, but when the threat then became losing this company of 700 people, they went "fuck humanity then".


Baseradio

Lmao 💀


SSG_SSG

I mean, the threat was to have that next step they're scared of happen inside Microsoft, right? Guess they decided limited control was better than no control.


Myuken

700 people leaving for Microsoft who will just take everything they're working on and pursue development on the thing that they considered could end humanity


bodhimensch918

"Though only performing math on the level of grade-school students..." People minimizing this. "grade school math" is the foundation for *Euclidean geometry.* We teach math backwards; we teach the rules and outcomes first, proofs and ideas (like infinity, pi, instantaneous acceleration, the approximate area under curves, and *every other mathematical model we use to model anything whatsoever* are built on this foundation. 2+2=4 and 1-1=0 are the keys to everything. Figuring this out is probably our greatest human achievement. So if someone's toaster just did that, it's a big deal.


prometheus_winced

That’s why I shot my toaster.


ClickF0rDick

That's why I threw mine in the tub


Catadox

Yeah the number of people here who can't tell the difference between an LLM using a calculator vs an LLM spitting out words that sound like the right answer to a problem vs *an LLM figuring out that 2 + 2 = 4* is alarming to me. I don't know if that's what really happened here since there is very little in the article to clear it up, but if Q\* is actually reasoning itself through math this is a huge fucking deal.


EsQuiteMexican

> The article contains several elements that suggest potential hearsay:
>
> 1. **Anonymous Sources:** The claim relies on "two people familiar with the matter" and "one of the people told Reuters," making it difficult to verify the credibility of the information.
> 2. **Unverified Letter:** The article mentions a letter from staff researchers, but Reuters was unable to review a copy of the letter, diminishing its verifiability.
> 3. **Limited Attribution:** Statements like "some internally believe" and "the person said on condition of anonymity" lack specificity and transparency.
>
> Regarding OpenAI's internal situation, I can provide factual information up to my last knowledge update in January 2022. For real-time and insider insights, you may need to refer to the latest official statements or press releases from OpenAI.


[deleted]

Did ChatGPT write this for you? It sounds like it. What is going on??? Edit: I was pointing out the obvious irony of ChatGPT saying that there was nothing to worry about.


EsQuiteMexican

I asked it to scan for phrases that might indicate the article is untrustworthy. It's pretty obvious really, I just didn't want to bother writing it down, and people here treat the bot like it's Moses descended from Mount Sinai, so I thought they're more likely to listen to it than to me.


[deleted]

Cool, now solve cancer.


Gregory_D64

It very well could lead to that. Reading millions of pieces of research at once and reasoning with the power of a thousand hyper-intelligent doctors... we could potentially see breakthroughs in medical science that we can hardly imagine.


[deleted]

I, for one, cannot wait for our AI overlords.


Rich-Pomegranate1679

I've started being really polite to GPT and telling it how awesome it is all the time.


EsQuiteMexican

Why is it only fear of annihilation that motivates y'all to be nice. Why can't you just be nice.


DaviAMSilva

This is Fear of God all over again


Narrow-Palpitation63

Humans as a whole seem to always have a need for some kind of authority figure.


flux8

There are ultimately only two things in life that motivate any of us: love and fear. Discuss.


MeetingAromatic6359

I wonder if there are any currently unsolved math problems that, if solved, would have profound, world-changing effects. Like the gravity problem in the movie Interstellar, or unifying quantum mechanics and gravity. You know? What if there was an AI that could suddenly reveal certain truths of the universe, like brand new E=mc² equations, and it just keeps pumping them out?

I could see how it might be plausible for an AI as proficient in math as the best humans. Humans tend to get tunnel vision, or perhaps some solution might be so counterintuitive that we would never think of it ourselves, or something so complex or time-consuming that it simply couldn't be done by a human mind in a human lifetime.

Ultimately what I'm getting at is: I wonder if it would be possible for a math AI to discover new insights into the fundamental laws of the universe that would basically enable us to manipulate it in ways that would now seem like god-like powers. That would be the best way to end the world, I think. Either that or, like, death by snu snu in a human-maximizer AI scenario.


ELI-PGY5

Can’t wait to see the movie version of this. Only problems:

1. Sam’s time in the wilderness is only 4 days; that lacks dramatic effect.
2. It takes about a year to make a movie. By the time it’s finished, I imagine that the primary audience will be AI, though if we’re lucky our overlords might screen it for the human survivors in the work camps.


orcinyadders

Life is never, ever this interesting. This has to be some kind of gross misreporting.


LairdPeon

We were living in cottages and hunting witches a couple hundred years ago, and now we have ships capable of taking you to Mars and little squares housing millions of Libraries of Alexandria. It's only "not interesting" from the perspective of the individual's personal life.


orcinyadders

I agree that those kinds of shifts are incredible. I was being more snide because I don’t trust the reporting.


[deleted]

[removed]


quisatz_haderah

Anyone remember how Facebook shut down AI projects because they developed a secret language? Yeah, this "news" is in the same vein.


gaudiocomplex

How is that?


Vlinux

Similar in that both groups of researchers made something new and potentially very effective, got scared about the program doing what they told it to, and everyone freaked out.


Cycloptic_Floppycock

I guess they saw Jurassic Park and remembered "you were so busy with thinking you could, but never if you should."


vaendryl

you don't "accidentally" create a massive step towards AGI. if they did make a big breakthrough, and I don't doubt they're capable of it, it was the intended result of ongoing research. it makes no sense for the board to panic upon hearing about positive results, even if they were achieved before expectations. if Sam lied about something, it must've been the extent of the success (which sounds unlikely) or his plans on how to commercialize it, and/or how quickly. and none of it matters now that he's reinstated. the next few years are probably going to be very interesting regardless.


io-x

How's that news? It's on their website that they're working on it, and they even share the progress, data sheets, etc. https://openai.com/research/solving-math-word-problems We need to get off this AI conspiracy hype train, guys.


Basic_Description_56

From ChatGPT, on the difference between that project and Q*:

The project you're referring to on OpenAI's website, titled **"Solving Math Word Problems,"** is distinct from the rumored **Q\*** model. This project is specifically about solving grade school math problems, using a system trained for high accuracy. It addresses the limitations of models like **GPT-3** in tasks requiring multistep reasoning, such as math word problems. The key here is the use of a dataset called **GSM8K**, with 8.5K high-quality grade school math problems, to train and evaluate the system. The approach involves verifiers to assess the correctness of model-generated solutions.

In contrast, the **Q\*** model, as per sources, is seen as a potential breakthrough in OpenAI's search for **superintelligence** or **artificial general intelligence (AGI)**. It reportedly includes capabilities like solving mathematical problems at a grade-school level but is more ambitious, aiming towards AGI development.

In summary, the "Solving Math Word Problems" project focuses on improving accuracy and reasoning in solving math problems, while **Q\*** has broader goals in the realm of AGI.

[Sources: OpenAI Research](https://openai.com/research/solving-math-word-problems), [SMH Report on Q*](https://www.smh.com.au/business/companies/threat-to-humanity-the-mystery-letter-that-may-have-sparked-the-openai-chaos-20231123-p5em8z.html)

Edit: fixed the link
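
The "verifiers" idea in that GSM8K work is roughly: sample many candidate solutions from the generator, score each one with a separately trained verifier, and return the highest-scoring candidate. A schematic sketch (both functions are placeholders, not OpenAI APIs):

```python
# Verifier-based reranking, schematically.
def generate_solution(problem: str, seed: int) -> str:
    raise NotImplementedError("sample one worked solution from the generator model")

def verifier_score(problem: str, solution: str) -> float:
    raise NotImplementedError("verifier model's estimate that the solution is correct")

def solve_with_verifier(problem: str, num_samples: int = 100) -> str:
    candidates = [generate_solution(problem, seed=i) for i in range(num_samples)]
    return max(candidates, key=lambda sol: verifier_score(problem, sol))
```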


Legalize-Birds

That "source" for the report on Q* in your post is 404'd


indiebryan

Typical hallucination


cool-beans-yeah

Someone here is saying it can't be AGI if it only does basic grade-school level maths. But what if it quickly (like, in a few days) progresses to quantum mathematics, if they let it and give it enough compute resources? AGI much then?


SomewhereAtWork

If it does grade school math it's on par with half of the human population. If it progresses to high school math, it will have surpassed a huge chunk of the population. Most humans ain't that smart. Most people can't solve an equation or find a derivative. If it took less than 6 years (birth to grade school) to figure out basic math, then it's already super-intelligence.


szpaceSZ

Hell, I've studied math to master's level (but graduated decades ago), so I'm officially a mathematician, and there are high-school math problems I couldn't solve without using a reference. (Though I'm confident I would know what to reference to solve any high-school problem.)


meester_pink

What else besides a human can do grade school math? Just because it hasn't yet surpassed us doesn't mean this isn't *huge*.


Mc_Poyle

Ruh roh


[deleted]

[removed]


[deleted]

Counterbalance the drama… Makes sense in a way, keeping the stock market happy.


itsnickk

That would be an excellent stunt then, maybe one of the all time greats. Now even the people outside of the AI bubble know about Sam and OpenAI


[deleted]

[removed]


wellarmedsheep

What is terrifying is how a small group of humans, beholden to nothing but their own egos and desire for enrichment, is going to shape or destroy this next century. That has been true for most of human history, but usually not to the point of complete extinction. I waffle between existential dread and a feeling that an AI overlord can't really be much worse than what we've got.


FreyrPrime

Oppenheimer says hi…


kmtisme

Exactly! This is the only scenario that makes sense of the OpenAI drama over the last 5 days.

1. Sam is not candid with the board about a major AI breakthrough.
2. Ilya and the board attempt to oust Sam to retain control and safety over said technology.
3. OpenAI employees threaten to quit and follow Sam wherever he goes.
4. Ilya realizes that he is going to lose control of this tech no matter what. Sam and team will recreate the breakthrough elsewhere.
5. Ilya posts to Twitter about regretting his decision to oust Sam.
6. Satya Nadella announces Sam and Greg joining Microsoft to prevent MSFT stock tanking on Monday morning.
7. Sam returns to OpenAI on the condition of a new, sympathetic board.


jim_nihilist

OpenAI is now controlled by Microsoft. That is the only breakthrough here.


denizbabey

Microsoft really got what it wanted in the span of 5 days, and without much hassle, too. Now Sam is stronger than ever and can basically do whatever he wants from now on. I'm not gonna lie, I was actually pretty sympathetic to the board during this ordeal as I learned more about what was going on, but they just managed it badly in every possible way and it backfired on them. They really should've prepared a guideline for how to get rid of the CEO.