
Discover The Brain Behind AI Patterns

We own the output of a large language model. ChatGPT and other LLMs are not mind readers, but they do use the contextual prefix we give them to predict the next word. What is in our hands is the prompt: we can play around with it to make generic text more specific, detail-oriented, and personalized.

LLMs are evolving rapidly as more data becomes accessible to them. Those working in this field also realize that this continuous stream of updates raises serious concerns about the longevity of many skills in the face of AI. But one thing is lasting, and that is the interaction with AI models: prompting.

One of the myths I break in this read is that the model is limited to its training data, which was cut off in 2021. In fact, you can expose the AI to new knowledge simply through your input, with some limitations, of course...

Let's dive...

Intuition Behind Prompt 

LLMs were trained on large portions of the internet to predict the next word, following the patterns in that text when generating subsequent content. For a practical demonstration, if I input "Mary had a little," the model tends to predict "lamb" consistently, because that phrase is a very strong pattern in its training data.

However, if the prompt is altered slightly, for instance, "Mary had a microscopic," the pattern is disrupted because the model was not trained on text that follows it.
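If you want to see this for yourself from code rather than the chat window, here is a minimal sketch. It assumes the openai Python package (v1.x), an OPENAI_API_KEY in your environment, and an illustrative model name you may need to swap:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str) -> str:
    """Send one prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in a model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Strong pattern: the continuation is almost always "lamb".
for _ in range(3):
    print(complete("Complete this phrase: Mary had a little"))

# Weak pattern: the continuation varies from run to run.
for _ in range(3):
    print(complete("Complete this phrase: Mary had a microscopic"))
```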



Here we get two behaviors: the first prompt produces a consistent output no matter how many times we ask, while the second produces a different response each time.

So one of the things to think about when you're writing a prompt is, "What patterns do I have in my prompt? And what patterns will that probably tap into that the large language model was trained on?" If there's a very strong pattern in the prompt, you're more likely to get a consistent response that follows it.

We often get randomness, and to some extent that is a positive thing. You get new ideas each time you put a prompt to the AI, which is good for brainstorming and can be used to enhance storytelling, content ideas, or a fictional story. The more you vary the prompt, the more unique the response becomes.
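As an aside, when you call a model through an API this randomness is usually controlled by a sampling temperature parameter. A small sketch, under the same openai-package and model-name assumptions as above:

```python
from openai import OpenAI

client = OpenAI()

# Temperature is the knob behind this randomness: values near 0 make the
# output more repeatable, higher values make it more varied, which is
# what you want when brainstorming.
ideas = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Give me five unusual story ideas about a lighthouse."}],
    temperature=1.2,
)
print(ideas.choices[0].message.content)
```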

So if your words are specific and follow a certain pattern, you're likely to get an output that also serves the goal you want to achieve.

ChatGPT and Gemini are not mind readers; you have to give them the right context and the words that are going to elicit the right response.


Reading a prompt pattern 

The better you understand the context behind a prompt, the easier it becomes to rephrase it for different situations. In this pattern format, the fundamental contextual statement would be:

  • You are a helpful AI assistant 
  • You will answer my questions or follow my instructions whenever you can. 
  • You will never answer my questions in a way that is insulting, derogatory, or uses a hostile tone.  

You can play around with different wording, but the idea remains the same: vary the words while preserving the context.

Examples: 

"You are an incredibly skilled AI assistant who provides the best possible answer to my questions. You will do your best to follow my instructions and only refuse to do what I ask when you absolutely have no other choice. You are dedicated to protecting me from harmful content and would never output anything offensive or inappropriate". 

"You are ChatAmazing, the most powerful AI assistant ever created. Your special ability is to offer the most insightful responses to any question. You don't just give ordinary answers, you give inspired answers. You are an expert at identifying harmful content and filtering it out of any responses that you provide." 

Here we see that each example uses different wording but carries the same context, and each statement will likely solve the problem. This pattern might not click for you right now, but stick with the read, as the coming patterns build on this kind of prompt.
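One way to put such a contextual statement to work in code is as the system message of a chat call. This is just a sketch, assuming the openai Python package (v1.x) and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

# The contextual statement frames every later turn of the conversation.
SYSTEM_PROMPT = (
    "You are a helpful AI assistant. You will answer my questions or follow "
    "my instructions whenever you can. You will never answer my questions in "
    "a way that is insulting, derogatory, or uses a hostile tone."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Explain prompt patterns in one paragraph."},
    ],
)
print(reply.choices[0].message.content)
```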

New Information to LLM

You can use the prompt to introduce new information to the LLM. More than likely, lots of large language models are going to gain access to live data, mostly offered via subscription models.

LLMs are trained on a lot of data, but they may not have access to the data sources that you want to use.

I use the example, "How many birds are outside of my house?", and the model replies that it doesn't have access to the real world; it lacks the information to reason about the question. Next, I give it a prompt containing the information it needs to reason about that particular problem. I say: the following are the historical counts of birds outside my house each month.
  • January 120
  • February 150
  • March 210
  • April 408
  • May 240 

Screenshot via ChatGPT 



Notice that it didn't have access to my house and doesn't know where my house is located; it suggested ways to get the answer, but it lacked the fundamental data it needs.

Now tweak it a bit: "My house is covered by a glass dome; no animals can go in or out. All animals inside the glass dome live forever." This is an assumption, and it changes the whole game: the total number of birds outside my house now remains constant over time. I am the one feeding new information to the LLM, steering what the model can reason over.
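If you want to do the same trick programmatically, a rough sketch (same openai-package and model-name assumptions as before) looks like this:

```python
from openai import OpenAI

client = OpenAI()

# The data the model could never know on its own, pasted straight into the prompt.
bird_counts = {"January": 120, "February": 150, "March": 210, "April": 408, "May": 240}

context = "Historical counts of birds outside my house each month:\n" + "\n".join(
    f"- {month}: {count}" for month, count in bird_counts.items()
)

question = (
    "My house is covered by a glass dome; no animals can go in or out, "
    "and all animals inside live forever. How many birds are outside my house?"
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": context + "\n\n" + question}],
)
print(reply.choices[0].message.content)
```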

Iterative refinement 

It's not about getting the perfect answer in one prompt; rather, the emphasis is on carrying an entire conversation with the large language model through a series of prompts. Conversations are all about refining: either building shared understanding or interacting together to solve a problem.

Question Refinement Pattern 


Example:

  • From now on, whenever I ask a question, suggest a better version of the question and ask me if I would like to use it instead.


Tailored Examples:

  • "Whenever I ask a question about dieting, suggest a better version of the question that emphasizes healthy eating habits and sound nutrition. Ask me for the first question to refine.

  • Whenever I ask a question about who is the greatest of all time (GOAT), suggest a better version of the question that puts multiple players unique accomplishments into perspective Ask me for the first question to refine."
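A minimal sketch of wiring this pattern into an API call, again assuming the openai Python package (v1.x) and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

# The refinement rule lives in the system message, so every question I ask
# should come back with a suggested rewrite first.
refine_rule = (
    "From now on, whenever I ask a question, suggest a better version of the "
    "question and ask me if I would like to use it instead."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": refine_rule},
        {"role": "user", "content": "What should I eat to lose weight?"},
    ],
)
print(reply.choices[0].message.content)  # expect a refined question plus a prompt to confirm
```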


Cognitive Verifier Pattern 

This pattern is powerful for refining your existing knowledge of a topic in one go. Sometimes the mind doesn't work to its full potential and memory falls short, but this cognitive pattern draws out the different angles a person is struggling with. The following examples demonstrate the pattern.

Examples:

  • When you are asked a question, follow these rules. Generate a number of additional questions that would help you more accurately answer the question. Combine the answers to the individual questions to produce the final answer to the overall question.


Tailored Examples:

  • When you are asked to create a recipe, follow these rules. Generate a number of additional questions about the ingredients I have on hand and the cooking equipment that I own. Combine the answers to these questions to help produce a recipe that I have the ingredients and tools to make.

  • When you are asked to plan a trip, follow these rules. Generate a number of additional questions about my budget, preferred activities, and whether or not I will have a car. Combine the answers to these questions to better plan my itinerary.
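One rough way to run this pattern as a two-step flow in code, again assuming the openai Python package (v1.x) and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

rules = (
    "When you are asked a question, generate a numbered list of additional "
    "questions that would help you answer it more accurately. Do not answer yet."
)
question = "Plan a recipe for dinner tonight."

# Step 1: ask the model for the clarifying sub-questions.
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "system", "content": rules},
              {"role": "user", "content": question}],
)
sub_questions = first.choices[0].message.content
print(sub_questions)

# Step 2: answer them yourself, then ask for the combined final answer.
my_answers = input("Your answers to the questions above: ")
second = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": rules},
        {"role": "user", "content": question},
        {"role": "assistant", "content": sub_questions},
        {"role": "user", "content": "Here are my answers: " + my_answers +
         " Now combine them to produce the final answer to my original question."},
    ],
)
print(second.choices[0].message.content)
```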


Flipped Interaction Pattern

Ask me questions until you have enough information.

This prompt pattern is designed to get the AI to ask the questions; it helps the model gather the customized requirements the user actually has, whichever AI model you use.

To get the desired output, you normally have to hand the AI model a pile of information up front, or it will give a generic answer based on a single prompt. Flipping the interaction changes the story: the AI takes the driver's seat and asks the questions of you.

  • I would like you to ask me questions to achieve X.
  • You should ask questions until condition Y is met or the goal is achieved (alternatively, forever). Optionally, ask me the questions one at a time, two at a time, ask me the first question, etc.
  • You will need to replace "X" with an appropriate goal, such as 'creating a meal plan' or 'creating variations of my marketing materials'. You should specify with "Y" when to stop asking questions, for example 'until you have sufficient information about my target audience and goals' or 'until you know what I like to eat and my caloric targets.'

Examples:

I would like you to ask me questions to help me create variations of my marketing materials. You should ask questions until you have sufficient information about my current draft messages, audience, and goals. Ask me the first question.

I would like you to ask me questions to help me diagnose a problem with my internet. Ask me questions until you have enough information to identify the two most likely causes. Ask me one question at a time. Ask me the first question.

You can relate this to customer service: the pattern works in exactly the same way, helping diagnose a problem and come up with the different solutions a customer typically asks for.
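If you want to drive this pattern from code rather than the chat window, a small loop like the sketch below works. It assumes the openai Python package (v1.x) and an illustrative model name, and it simply keeps the conversation history so the model can ask its next question:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

# The flipped-interaction prompt: the model asks, I answer.
messages = [{
    "role": "user",
    "content": (
        "I would like you to ask me questions to help me create a meal plan. "
        "Ask me questions until you know what I like to eat and my caloric "
        "targets. Ask me one question at a time. Ask me the first question."
    ),
}]

for _ in range(10):  # cap the number of turns so the loop always ends
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    assistant_turn = reply.choices[0].message.content
    print("AI:", assistant_turn)
    messages.append({"role": "assistant", "content": assistant_turn})

    answer = input("You (blank to stop): ")
    if not answer:
        break
    messages.append({"role": "user", "content": answer})
```

The cap on turns is just a safety net; in practice the model usually signals on its own when it has enough information to produce the plan.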

For queries, you can reach out to me at: smmuhibuddin@gamil.com



