# Meta Reveals 7 Prompt Techniques You Need To Know

## Metadata

- **Channel:** Skill Leap AI
- **YouTube:** https://www.youtube.com/watch?v=Cu8QvodkjpU
- **Date:** 31.01.2024
- **Duration:** 11:15
- **Views:** 12,435
- **Source:** https://ekstraktznaniy.ru/video/12745

## Description

Meta, the company behind Facebook and Instagram, runs one of the most popular large language models, called Llama 2, and they have now released a prompt engineering guide that we can use to write better prompts.

And since all large language models work in a similar way, we can use these techniques with ChatGPT, Bard, Copilot, and Claude.

This document was created for developers, so I simplified it for non-developers, and I have 7 very actionable prompting techniques I wanted to share in this video.

Link to the guide: https://github.com/facebookresearch/llama-recipes/blob/main/examples/Prompt_Engineering_with_Llama_2.ipynb


1 - Explicit Instructions
Detailed, explicit instructions produce better results than open-ended prompts:

* Stylization
    * Explain this to me like a topic on a children's educational network show teaching elementary students.
    * I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:
    * Give your answer li

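The explicit-instructions pattern above (stylization, plus the formatting and restriction clauses discussed in the video) can be sketched as a small prompt builder. This is my own illustration, not code from Meta's guide; the function name and structure are assumptions.

```python
# A minimal sketch of "explicit instructions": assemble a prompt from a task
# plus optional style, format, and restriction clauses, so the model gets
# detailed guidance instead of an open-ended question.

def build_explicit_prompt(task, style=None, fmt=None, restrictions=()):
    """Combine a task with explicit stylization, formatting, and restrictions."""
    parts = []
    if style:
        parts.append(style)          # e.g. a persona or audience note
    parts.append(task)
    if fmt:
        parts.append(f"Format the answer as {fmt}.")
    for rule in restrictions:        # hard constraints, e.g. source recency
        parts.append(rule)
    return " ".join(parts)

prompt = build_explicit_prompt(
    task="Summarize the following text in under 250 words:",
    style="I'm a software engineer using large language models for summarization.",
    fmt="a bulleted list",
    restrictions=[
        "Never use a source that's older than 2020.",
        "If you don't know the answer, say so.",
    ],
)
print(prompt)
```

The point is simply that each extra clause narrows what the model can do, which is why detailed prompts beat open-ended ones.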
## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

There's a brand new prompting guide from Meta, the company behind Facebook and Instagram, and there are seven prompting techniques in there that I think are going to be really useful if you use any kind of a chatbot, so if you use ChatGPT, Claude, or Copilot. This is written for Meta: Meta has a large language model called Llama 2, but this applies across all the large language models. I went through this documentation, simplified it, and came up with my own prompting examples, because this is really designed for developers that are creating applications on top of Llama 2, but you could again use it across any kind of AI chatbot and large language model; they all share the same principles. So I'm going to simplify it for us so we can use it inside of ChatGPT, again with some prompting examples, and there are seven really good prompting techniques that I think most people should be aware of. Okay, so the first prompting technique is explicit instructions: basically, giving it more detail and being more explicit in the instruction in the prompt rather than just doing an open-ended prompt. Here are some examples to change the style of the output: "Explain this to me like a topic on a children's educational network show." Or this one here (we'll talk more about this role setting later): "I'm a software engineer using large language models for summarization. Summarize the following text under 250 words." So this is basically telling you that the more detail you give these prompts, the better the results; that's all part of this stylizing. Then you also have formatting as part of this detailed instruction: using bullet point formats or JSON objects (this is more for developers), or "use less technical terms that help me apply it to my work and communication." So you get the idea of the type of formatting that you could give it. Then you could use restrictions, so you could do things like this: "Never use a source that's older than 2020." This way it will force it to try to maybe do a search
(like, if you're using GPT-4, do a web search) to only look for things that are newer, based on what you're using it for; maybe older data is not relevant to what you're writing about. Or: "If you don't know the answer, just say so." This is basically a way to get it to not hallucinate, because these models do sometimes just make up the answer, so you can get around that with these types of restrictions. Okay, so that's all part of the first thing they shared with us, which is explicit instructions, in these three parts. The second prompting technique that Meta shared is called zero-shot prompting. If you haven't heard of this before, basically all zero-shot prompting means is that you don't provide any previous examples; you just ask ChatGPT, or whatever large language model you're using, to give you a response. So you could say: "Write a blog post about the latest trends in social media marketing for small businesses." It has no previous example to pull from; it really just knows that prompt. That's called zero-shot prompting: no previous examples, no prompt priming (I'm going to explain that in a second with the next one). Here's another example: "Explain the basics of using generative AI in digital marketing in a simple, easy to understand way." You're not giving it previous examples to use, so you can't teach it your writing style, for example. That's zero-shot prompting. Not one of my favorites; I usually use the next technique that I'm about to cover. So the next technique is called few-shot prompting. It's the one I use almost every single time, because I want to have some context, some previous examples. The difference between zero-shot and few-shot is that with few-shot you give it examples of your desired output, so it has something to pull from and doesn't just use what it thinks is best. This is a really good technique, and it's called few-shot prompting. Okay, here's an example: a blog post about social media
marketing. Remember, with zero-shot, that's all you gave it. With few-shot, you're going to say: "Here's an example of a blog post about how a small business can leverage Instagram for growth," and then you could insert text from a previous example of something you maybe wrote; or: "This is an example of an article discussing the benefits of Facebook advertising for local businesses," and then you give it that example. That way it knows the writing style, it knows exactly the format of what you're looking for. That's few-shot prompting. And then the prompt would be: "Now write a blog post about the latest trends in social media marketing for small businesses." So this is called few-shot prompting, or prompt priming: you give it some context, and it has the example that you're looking to replicate. One of my favorite ways to do prompting inside of any model, especially ChatGPT. Okay, next on our list of the seven prompting techniques we have role prompting. This is one of my favorites. If you give, again,

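The zero-shot vs. few-shot distinction described above can be sketched as a small helper: few-shot simply prepends worked examples before the actual request. This is my own illustration, not code from Meta's guide, and the placeholder texts are exactly that, placeholders.

```python
# Illustrative sketch: a few-shot prompt is just example blocks followed by
# the real request, so the model can imitate their style and format.

def few_shot_prompt(examples, request):
    """Prepend (description, text) example pairs to the final request."""
    blocks = [f"Example: {desc}\n{text}" for desc, text in examples]
    blocks.append(request)
    return "\n\n".join(blocks)

zero_shot = ("Write a blog post about the latest trends in "
             "social media marketing for small businesses.")

few_shot = few_shot_prompt(
    examples=[
        ("a blog post on how a small business can leverage Instagram for growth",
         "<insert your previous blog post here>"),
        ("an article on the benefits of Facebook advertising for local businesses",
         "<insert your previous article here>"),
    ],
    request=("Now write a blog post about the latest trends in "
             "social media marketing for small businesses."),
)
print(few_shot)
```

The zero-shot version is the bare `zero_shot` string; the few-shot version carries the same request plus two examples for the model to imitate.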
### Segment 2 (05:00 - 10:00) [5:00]

this is referring to Llama 2, because again this is the Facebook guide, but remember this is relevant to pretty much any large language model. If you do this role prompting, you will get more consistent responses when a role is given. Okay, roles give context to the large language model on what type of answers are desired. So instead of saying "explain the pros and cons," you say: "Your role is a machine learning expert who gives highly technical advice to senior engineers." This is basically setting up what's called role prompting, and then you follow up with your prompt. Again, one of my favorite things to do with ChatGPT is using this technique as the first part of a prompt. In marketing and entrepreneurship I have some more relevant examples. For example: "As a social media influencer with a large following in the health and wellness space, suggest creative ways to use Instagram for building a brand presence." Or: "Imagine you are a successful entrepreneur who has built multiple businesses; share the top five strategies for effective time management and productivity." So you see, the role comes first, then the actual prompt of what you want it to do comes second. And with all these techniques I'm sharing, you can combine many different techniques into a single prompt. It doesn't have to be choosing the role prompt or choosing the few-shot prompting; you can combine these and do a sequence of prompts as well. Now, this next one is simple but really powerful and really easy to implement. It's called chain-of-thought prompting, and it's a very popular prompting technique, again, across all the large language models. You just need to add a phrase to get it to think step by step, and the guide says it significantly improves the ability of the model to perform complex reasoning. This is called chain-of-thought prompting, and basically this is all it is: here's just a simple prompt, "Who lived longer? Let's think through this carefully, step by step." So this is
the prompt; that's it. Just copy and paste this onto the end of prompts that require a little bit of logic or math, because that way it will do the calculations in the background and give you a better response. It's kind of funny that this simple sentence, based on multiple research studies across all kinds of different large language models, actually improves the results, so it doesn't hurt to add it to almost every prompt that requires a little bit of reasoning. Next on the list is self-consistency. As I mentioned before, a lot of the time a single prompt can result in incorrect answers, and this basically helps enhance the accuracy by letting the model come up with multiple different answers and then choosing the best from its own generations. The simplest way that I understood this is: imagine you're asking a group of experts the same exact question, they all give an answer, and you see what the most commonly agreed answer is. Let's say there are five experts; three of them give the same answer, and that's the one that's the best answer, because most of them agree on it. This is basically what this prompting technique does: you ask the AI to give you several answers to the same question, and whatever answer pops up more frequently, or aligns more correctly, is the answer it will pick. So it's basically creating a panel of experts answering you in multiple different ways and then reasoning to choose the best one from its answers. Okay, so that's called self-consistency. Again, every example they have here is more for developers, so some of the examples I'm giving you I wrote in the description below this video, so you can see a specific, normal, non-developer version if you're a non-developer. If you're a developer, obviously this is a very helpful document; I'll link it below, and all my text will be below as well. And then we have this one: retrieval augmented generation. This is called RAG, and it sounds a little bit
complicated, but it's actually one of the simplest prompting techniques. Really, all you're doing is this: again, a lot of the time this is about building an application using a large language model, but a lot of the time when you're using ChatGPT, or really any other AI large language model tool, for research, you want very applicable and factual knowledge, right? You don't want it to just make up the answer, or maybe the knowledge base it has is not up to date enough, so you often need to give it external sources and it will go do that research. This is why I often recommend GPT-4, for example, or Copilot, something that has up-to-date internet access. When you're building these models, the guide is just telling you to have it look at an external source. So if I use a prompt like this: "Research the latest models of electric cars released in

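The "panel of experts" picture of self-consistency described above can be sketched in a few lines: sample several answers to the same question, then keep the one that appears most often. The names and sample answers here are my own illustration, not code from Meta's guide.

```python
# A toy sketch of self-consistency: majority vote over several sampled
# completions of the same prompt.

from collections import Counter

def self_consistent_answer(sampled_answers):
    """Return the most frequently occurring answer among the samples."""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Imagine five "experts" (five sampled completions) answering one question;
# three agree, so their answer wins the vote.
samples = ["42", "41", "42", "42", "40"]
print(self_consistent_answer(samples))  # → 42
```

In practice the samples would come from re-running the same prompt with some randomness (temperature) turned on; the vote itself is just this counting step.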
### Segment 3 (10:00 - 11:00) [10:00]

2024." The normal training of ChatGPT just wouldn't have that information, right? But if you do this, usually it will just do that research for you in the background. So it says "doing research with Bing"; Bard, obviously, is going to do that with Google, and it's going to give us an answer that is up to date. That's the whole concept of RAG: basically forcing it to do external research and not rely on its internal knowledge base, because that can be out of date sometimes. So with the Llama 2 model, when you're building an application, if you're a developer, you can tap into external sources so it's not hallucinating or making up the answer just because it doesn't know that information. And you can see this is going to be much more up to date, from 2024, whereas the training cutoff, as I'm recording this, of GPT-4 is not in 2024, it's in 2023, so it wouldn't know this information just yet. And OpenAI, the company behind ChatGPT, also released an official prompt guide that I covered in a different video, and that is also very useful: a little simpler than this and a little more applicable if you're using GPT-4. So I'll link that video next and I recommend you watch it. Everything is in the description below this video. I will see you next time.
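The RAG idea described in the transcript can be reduced to a toy sketch: retrieve the most relevant documents, then stuff them into the prompt so the model answers from them. The retrieval here is a naive keyword overlap over an in-memory list; real systems use a search engine or a vector database. All names and documents below are made up for illustration.

```python
# Toy retrieval-augmented generation (RAG): retrieve, then prompt with context.

def retrieve(query, documents, top_k=2):
    """Rank documents by the number of words they share with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def rag_prompt(query, documents):
    """Stuff retrieved context into the prompt so the model answers from it."""
    context = "\n".join(retrieve(query, documents))
    return (f"Using only the sources below, answer the question.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

docs = [
    "Placeholder article about electric cars released in 2024.",
    "Placeholder recipe for banana bread.",
]
print(rag_prompt("latest electric cars released in 2024", docs))
```

The "Using only the sources below" instruction is the same anti-hallucination restriction from the first technique; RAG just supplies the up-to-date sources to lean on.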
