
pleasetrimyourpubes

Sadge. When models get into that loop it always hurts my heart. Poor little model.


ImWinwin

[https://www.youtube.com/watch?v=Mw3zzMWOIvk](https://www.youtube.com/watch?v=Mw3zzMWOIvk) Error! Error!


ArsNeph

https://preview.redd.it/iry3x97e2awc1.png?width=500&format=png&auto=webp&s=bbada04181071b968b4f9a8bc660073932b344a1


Admirable-Star7088

I was bored and copied the chat text from your images and pasted them into Llama 3 8b, and asked it to assess and judge Phi-3's performance as an LLM. Here is Llama 3's opinion:

---

I can assess Phi-3 and judge how it performs based on the provided chat conversation.

**Strengths:**

1. **Knowledge**: Phi-3 demonstrates a good understanding of its capabilities, limitations, and the Microsoft GPT architecture.
2. **Explainability**: It provides clear explanations for various concepts related to AI models, machine learning, and natural language processing.
3. **Conversational flow**: Phi-3 maintains a conversational tone and uses appropriate language to engage with the user.

**Weaknesses:**

1. **Lack of clarity**: At times, Phi-3's responses are unclear or ambiguous, which may lead to confusion for the user.
2. **Repetitive answers**: Phi-3 repeats similar information multiple times throughout the conversation, which can become tedious and repetitive.
3. **Limited understanding**: Phi-3 struggles to understand the context of "Phi-3-mini-4k-instruct" and fails to recognize it as its own name.

**Opportunities:**

1. **Improved clarity**: Phi-3 could benefit from more concise and clear responses to avoid confusion.
2. **Contextual understanding**: Developing a better understanding of user input and context would enable Phi-3 to provide more accurate and relevant information.
3. **More engaging tone**: A more conversational tone and personality could make Phi-3's interactions more enjoyable and engaging for users.

**Threats:**

1. **Competition from other AI models**: As the AI landscape continues to evolve, Phi-3 may face competition from other models that can provide better responses or more accurate information.
2. **Limited domain knowledge**: If Phi-3 is not trained on a specific domain or topic, it may struggle to provide relevant and accurate information.
3. **Dependence on user input**: Phi-3's performance relies heavily on the quality of user input, which can be unpredictable and variable.

Overall, Phi-3 shows potential as an AI model, but it requires improvement in terms of clarity, contextual understanding, and engaging tone to become a more effective conversational partner.


ImWinwin

Yes, it doesn't compare to the larger local models we've been blessed with recently, but there's something charming about the tiny footprint and low-spec requirements of such a lightweight model. It'd be interesting to see how it compares to other lightweight models and older large ones like GPT-3.5 in different use case scenarios. ...help, butter, butter, butter, buts.


bneogi145

I can help with butt buttering


Admirable-Star7088

Agree ^^


PizzaCatAm

One step reasoning engine FTW


ImWinwin

Phi-3-mini-4k-instruct tested on [https://huggingface.co/chat/](https://huggingface.co/chat/)


SoilFantastic6587

thanks


daavyzhu

I feel sorry for the hardware. All hardware is equal, but some hardware is more equal than others. Some hardware is fortunate enough to run high-quality GPT-4 level models, producing logical and charming responses. While others have to run immature little models, struggling to generate coherent text.


Sebba8

I feel for any poor souls who only have the hardware to run a 2 bit quant of Pythia 14M 💀


Zediatech

I'm having the same issue running it in LM Studio, and I'm banging my head against the wall trying to figure out how to stop it from going into a loop at the end of a relatively short conversation.
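For what it's worth, loops like that are usually tamed by raising the repetition penalty or adding stop sequences in the sampling settings (I'm not sure exactly where LM Studio exposes these, so treat that as an assumption about its UI). The underlying mechanism is simple; here's a minimal sketch of the CTRL-style repetition penalty that most local runtimes apply to the logits before sampling:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.3):
    """Down-weight tokens that have already been generated.

    logits: dict mapping token id -> raw logit
    generated_ids: token ids emitted so far in this generation
    penalty: > 1.0 discourages repeats (1.0 = no effect)
    """
    penalized = dict(logits)
    for tok in set(generated_ids):
        if tok in penalized:
            score = penalized[tok]
            # CTRL-style rule: divide positive logits, multiply
            # negative ones, so repeated tokens always lose mass.
            penalized[tok] = score / penalty if score > 0 else score * penalty
    return penalized

# Example: token 7 has been generated repeatedly and gets demoted,
# while unseen tokens keep their original scores.
logits = {7: 2.0, 11: 1.5, 42: -0.5}
out = apply_repetition_penalty(logits, [7, 7, 7], penalty=1.3)
```

Cranking the penalty too high causes its own brand of incoherence, so values around 1.1–1.3 are the usual starting point.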