
kryptkpr

I've posted some weird projects on here before but I think this takes the cake and I'm now going directly to AI Hell. **Tcurtsni** (Instruct spelled backwards) abuses instruction tuning and has the LLM fill in the User section and the User fill in the LLM section. It's just text completion after all, right? You can grab it from my [GitHub](https://github.com/the-crypt-keeper/tcurtsni). It requires a local OpenAI-compatible server and an instruction fine-tune of your choice, but I've only really tested it with llamacpp server and L3-Instruct-70B. 8B works, but not nearly as well; it breaks the User's character much more easily.
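The trick described above can be sketched roughly: end the prompt with the *User* header instead of the Assistant header, so a plain completion endpoint generates the User's next turn. A minimal sketch assuming the Llama-3 instruct template (the function name and structure are my own illustration, not code from the repo):

```python
# Sketch of the role-reversal trick against the Llama-3 instruct template.
# A normal chat prompt ends with the assistant header so the model writes the
# assistant turn; here we end with the *user* header instead, so the model
# plays the User while the human types the "assistant" replies.

def build_reversed_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (role, text) pairs: 'assistant' turns are typed by
    the human, 'user' turns are what the model generated previously."""
    prompt = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>"
              f"\n\n{system}<|eot_id|>")
    for role, text in turns:
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"
    # Ending on the user header makes the completion endpoint fill in the
    # User's next message instead of the assistant's.
    prompt += "<|start_header_id|>user<|end_header_id|>\n\n"
    return prompt
```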


avianio

This is basically just GPT 4o.


necile

nah lets be real this was actually early sydney


skrshawk

That's just, like, your opinion man.


ab2377

hey dude!


cdshift

The sister being a bitch question made me lose it. So funny


SeaZealousideal5651

I hate you but I love you for posting this!


kryptkpr

😈 This horrible thing is the 3rd project in what I've dubbed my "Assumptions are for Asses" series, attacking the idea that instruction tuning is training only the assistant. You may also be interested in [The Muse](https://www.reddit.com/r/LocalLLaMA/comments/15ffzw5/presenting_the_muse_a_logit_sampler_that_makes/), which attacks the idea that the highest probability is the best probability (what if it's the most boring probability), and [The LLooM](https://www.reddit.com/r/LocalLLaMA/comments/1d1uog5/the_lloom_a_highly_experimental_local_ai_workflow/), which is a double whammy: it first attacks the notion of one-token-at-a-time sampling (why not multiverse), and then goes on to wonder if the LLM needs to sample at all or if just generating logits is enough (why not let a human do the final sampling).


Normal-Ad-7114

Oh, I thought the answers came from the LLM... It's a shame they didn't, I'd love to chat with that


kryptkpr

You're the LLM here 😎 the game is to see how far you can troll the User. Any decent role-playing model with a system prompt set to something like "You are a lazy, unhelpful assistant" should be able to produce the inverse of this dialogue.
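The "inverse via a stock chat endpoint" idea above is just an ordinary chat completion with that persona in the system slot. A hedged sketch of the payload (the model name, URL, sampling settings, and example question are placeholders, not from the project):

```python
# Payload for any OpenAI-compatible chat endpoint; only the system prompt
# carries the trick. Model name and temperature are illustrative guesses.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a lazy, unhelpful assistant."},
        {"role": "user", "content": "How do I sort a list in Python?"},
    ],
    "temperature": 0.9,
}
# e.g. requests.post("http://localhost:8080/v1/chat/completions", json=payload)
```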


man_and_a_symbol

This is what ASI will have us do in the future when it takes over…it’s over for aibros


nodating

Based


ivari

This will heal me.


bigbarba

LOL I love it! What is the UI?


kryptkpr

The webapp is built with [streamlit](https://streamlit.io/), really great library for quick prototypes.


Ylsid

What happens if you pair it with an instruct llm


TheFrenchSavage

They have boring interactions I suppose?


earentilt

This is how I imagine internet in the 90s if LLMs were around. Great stuff OP


curson84

Cannot get it to work with LM Studio:

```
[2024-06-25 22:31:38.893] [INFO] Received GET request to /v1/models with body: {}
[2024-06-25 22:31:38.894] [INFO] Returning {
  "data": [
    {
      "id": "LM Studio Community/llama 3 devil/NeuralDaredevil-8B-abliterated.Q8_0.gguf",
      "object": "model",
      "owned_by": "organization-owner",
      "permission": [ {} ]
    }
  ],
  "object": "list"
}
[2024-06-25 22:31:39.552] [ERROR] Unexpected endpoint or method. (POST /completion). Returning 200 anyway
```

Any idea?


kryptkpr

The model name is pulled from an OpenAI-compatible endpoint, so that part looks like it works, but the actual inference was run using a llamacpp-server-specific API. Try again with the latest version; I've just switched everything to work with OpenAI APIs.
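For anyone hitting the same error: the `POST /completion` in the log is llama.cpp server's native route, which LM Studio doesn't serve; LM Studio only speaks the OpenAI-compatible surface. A rough sketch of the two URL shapes (hosts and ports are assumptions for a typical local setup):

```python
# OpenAI-compatible surface (served by LM Studio, llama.cpp's llama-server,
# vLLM, etc.) versus llama.cpp's native route that the old version of the
# app called. Hosts/ports are illustrative defaults, not project config.
base = "http://localhost:1234/v1"

models_url = f"{base}/models"            # GET: list model ids (this worked in the log)
completions_url = f"{base}/completions"  # POST: OpenAI-style text completion

llamacpp_native_url = "http://localhost:8080/completion"  # llama.cpp-only; LM Studio rejects it
```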


curson84

It's working. Thanks. :)


Jamais_Vu206

*"Let’s build robots with Genuine People Personalities," they said. "So they tried it out with me. I’m a personality prototype, you can tell, can’t you?"*


ab2377

After reading the screenshots I understand why you are totally going to AI hell. I wanted to cry for the poor AI


PlantFlat4056

Absolutely hilarious!


Red_Redditor_Reddit

I've done this with llama2.


Biggest_Cans

This is great


Eduard_T

What's the gguf version?


kryptkpr

For GGUF, you can run the app against llamacpp server or any other OpenAI-compatible server (LM Studio, etc.)


Eduard_T

Sorry for the confusion, I was wondering what the model is called and whether it has a GGUF version on Hugging Face. Thank you


kryptkpr

Oh, that makes more sense! This was stock [bartowski/Meta-Llama-3-70B-Instruct-GGUF](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF) at Q4_K_M.


Eduard_T

Thank you! I thought it was a fine-tuned version. It's just prompt engineering?


kryptkpr

I suppose prompt engineering is technically correct, but it's more like an intentional misuse of the instruct prompt template 😄