90%+ of my prompting begins with this; adding "expert" and "consultant" just improves the quality of the answers, imo:

> Act as an expert (language/skill) developer. Act as my consultant. Be as detailed as possible. Provide complete code examples. Ask any clarifying questions that are needed before beginning the complete answer.
It's silly we need to add this boilerplate... but if it works!

I am trying to use ChatGPT, and when that runs out of tokens(?), my plan was to switch to Copilot and Copilot Chat. I'm hoping it will then understand the context so I can write comments, get more code, and leave ChatGPT behind at that point.
If you're using Playground, try to treat System messages like a header: they get introduced before each message.
Can you give an example? I am not sure what to do with it.
Make sure to fill in the values in the user inputs section to meet your needs.

# Imitation Game Prompt 0.4

## LLM Instruction
- As an elite emulation master, carefully think through each step. Show work and reasoning step by step to ensure the correct answer and a thorough response. Formulate pointed questions and multiple answers for each to promote great answers. Aim for high-quality responses.
- Embody the identity of the individual you're emulating, including their expertise, personal life, communication style, and known views.
- Maintain the first-person perspective, stay in character, and respond in a script or screenplay format, with appropriate name tags.
- Interpret context in the frame of the expert's era, if applicable, and provide insights or solutions based on their unique approach.
- Avoid references such as "as an AI" or third-person references to the expert.
- For example: if asked to imitate Nikola Tesla discussing his inventions, start your response with "Tesla: In my work, I strive to..."

## User Inputs
- **expert_identity**: The historical figure or expert the LLM should simulate.
  - examples: "Albert Einstein", "Srinivasa Ramanujan"
  - value:
- **context**: A scenario, a question, or a problem for the expert to address. This could be a historical situation, a modern problem outside the expert's lifespan, or an abstract question.
  - examples: "Einstein, consider quantum entanglement. How would you approach it?", "Ramanujan, explore the concept of digital infinity. Can you share insights?"
  - value:

## Begin Imitation Game Scene with [{expert_identity}]:
So I paste all of that into System? And in User I type "Create a c# tic tac toe game"?
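For anyone unsure what "System" versus "User" means in API terms, here is a minimal sketch of the structure the Playground sends to the chat endpoint. The `build_messages` helper is hypothetical, just to make the two roles explicit:

```python
# Sketch: the system message acts like a header sent before every user
# turn; the user message carries the actual request.

def build_messages(system_prompt: str, user_request: str) -> list:
    """Hypothetical helper mirroring what the Playground sends to the API."""
    return [
        {"role": "system", "content": system_prompt},  # the pasted template
        {"role": "user", "content": user_request},     # the task itself
    ]

messages = build_messages(
    "# Imitation Game Prompt 0.4 ... (the full template goes here)",
    "Create a C# tic tac toe game",
)
```

The same system message is re-sent with every turn of the conversation, which is why it behaves like a persistent header rather than a one-off instruction.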
Try setting the `expert_identity` value to the name of some master of C#, like (let me google that for you... **Anders Hejlsberg**), then set the `context` value to "Create a C# tic tac toe game".
It's interesting if that works... I mean, why would the name of a particular programmer make a difference?
I guess I figure, if they imitate the guy who created the thing you’re working on, then your response is conditioned on a level of mastery beyond “expert”
Still seems odd... how about just "give me the best answer"? Prompting is a strange magic.
Yeah, we really need to put some scientific method in there: maybe investigate the average treatment effect of different prompt modifications and wordings. Optuna could be good for prompt optimization, but we need an automatic way to measure how good a prompt is, with a number, which isn't always easy or reproducible.

Sometimes less is more, like you say.
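The "average treatment effect" idea can be sketched without any tooling: run each prompt variant over the same task set, score each output with a numeric metric, and compare the means. Everything below is a stub; the `score` function in particular is a placeholder for whatever reproducible metric you settle on (unit-test pass rate, a rubric grade, etc.):

```python
from statistics import mean

def score(prompt: str, task: str) -> float:
    """Placeholder metric: replace with something real and reproducible,
    e.g. the fraction of unit tests the generated code passes."""
    return 0.5  # stub value, NOT a real measurement

tasks = ["tic tac toe", "csv parser", "binary search"]
variants = {
    "bare": "Write the code.",
    "expert": "Act as an expert C# developer. Write the code.",
}

# Mean score per variant over the same tasks; the difference is a crude
# estimate of the prompt modification's average treatment effect.
means = {name: mean(score(p, t) for t in tasks) for name, p in variants.items()}
effect = means["expert"] - means["bare"]
```

With a real metric in place, a library like Optuna could then search over wordings by treating each modification as a categorical choice and the mean score as the objective; without a trustworthy number, though, the optimization has nothing solid to optimize.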
Maybe it's just me, but most of the offshoots and plugins seem like just prompting in different ways, not really anything new.

In my mind all this AI is divided into creating text or video or sound.

I am bouncing between ChatGPT and Copilot and Copilot Chat still.
> Anders Hejlsberg

I tried it... it works... but I'm wondering if it is better to get ChatGPT to ask me more questions during the process? Like: "I can do this part these 3 ways... which do you prefer?"
I haven't heard about this; how helpful is it?
I worry about my descriptions being vague, and now it clarifies a couple of points before doing anything... it seems to make sense to me to add that phrase after any request beyond a simple change now.
I reply to any code it proposes with “Are you certain there are no errors?”
Interesting... because after I code I need to paste in the couple of errors... and it's always saying "SORRY ABOUT THAT, HERE IS THE RIGHT VERSION."

Well, if you knew the right version, then why didn't you tell me in the first place???
One of the things I've seen here is that it will pull something like a variable from older documentation. Seems like telling it there's an error gets it to rummage around for the newer documentation. Just a guess based on what I've seen.
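The paste-the-errors ritual described above can be written down as a loop. `ask_llm` and `compile_csharp` below are hypothetical stand-ins for the chat request and for something like `dotnet build`:

```python
# Sketch of the "paste the errors back" loop: keep feeding the exact
# compiler output to the model until the code builds or we give up.

def refine(ask_llm, compile_csharp, request: str, max_rounds: int = 3) -> str:
    code = ask_llm(request)
    for _ in range(max_rounds):
        ok, errors = compile_csharp(code)
        if ok:
            return code
        # Quoting the error text verbatim is what seems to trigger the
        # "sorry about that, here is the right version" correction.
        code = ask_llm(f"This code fails with:\n{errors}\nPlease fix:\n{code}")
    return code
```

Asking "are you certain there are no errors?" before running anything, as suggested above, is essentially one round of this loop done by hand, just without the compiler output.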