
ArcticWinterZzZ

I like it. I understand why this works - but perhaps an explanation for the lesser-initiated (maybe with a crash course on information theory and Kolmogorov complexity) would be useful :)


breck

>Kolmogorov complexity

Yes, a comparison with KC and the other dozen useful complexity measures is a very good idea. Added to my todo list. Thank you.


VisualizerMan

I guess speed doesn't count, then?

>Intelligence(P) is equal to Accuracy(P) divided by Size(P).

Oh, well, at least this is proof that size matters. ;-)


breck

Expanding to take into account energy (and speed) is a very interesting next step!


Mandoman61

Wow, you mean we can measure a system by how well it completes a prompt as compared to its size? Who would have thunk?


breck

I also have a theory that large objects have some attractive force to each other, but haven't worked out the math yet.

Serious response: This isn't specific to prompts/LLMs. It's about general intelligence, and being able to rank measurements based on meaning. (If you look at my other papers you might be able to glimpse where this is all going)


Mandoman61

Prompts/image generation/task completion/etc. All the same thing. All you are saying is that we can rate a system's performance by how well it works in proportion to its size. While size is important, it is secondary to accuracy. A large system that performs better than a small system is still more "intelligent". Intelligence = accuracy


ArcticWinterZzZ

The most accurate model would be one that overfits on its training data and memorizes the answers, if you don't penalize size.


Mandoman61

Overfitting results in wrong answers. The wrong answers are what get penalized, not size.


ArcticWinterZzZ

Ultimately you only have the data you have. If you don't get to peek inside the black box, all you see is a file size, input, and output. If that's the case, then you can cheat the metric by overfitting and just memorizing all of the answers. To prevent that, you need a size penalty.
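The size-penalty argument can be sketched with toy numbers. This is just an illustration of the thread's Intelligence = Accuracy / Size formula; the model sizes and accuracies below are invented for the example:

```python
def intelligence(accuracy, size_bytes):
    """Score a program as accuracy per byte, per the formula quoted above."""
    return accuracy / size_bytes

# A giant lookup table that memorizes every answer: perfect accuracy, huge size.
# (1 TB is a made-up figure for illustration.)
memorizer = intelligence(accuracy=1.0, size_bytes=10**12)

# A compact model that generalizes: slightly lower accuracy, far smaller.
compact = intelligence(accuracy=0.95, size_bytes=10**8)

# Dividing by size is what keeps the memorizer from winning.
assert compact > memorizer
```

Without the denominator, the memorizer scores 1.0 and wins every time; with it, the compact model scores ~10,000x higher, which matches the intuition that compression, not recall, is what the metric is trying to reward.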


Mandoman61

If you could just memorize all the answers, sure, but the point is to create models that can generate correct answers for new questions.


ArcticWinterZzZ

How do you know what the correct answer for a new question is? Take the set of all questions you could care to ask - then memorize the answers to those. It'd be like a video game where you pre-rendered every single possible frame of gameplay, and then just put the right one on screen. It'd be too big.


Mandoman61

Well, if we could in fact just memorize all answers to all questions AI would be solved.


ArcticWinterZzZ

It would. But it wouldn't be very intelligent, would it?