Stanford CS221 | Autumn 2025 | Lecture 15: Logic I


Table of contents (15 segments)

Segment 1 (00:00 - 05:00)

So we're going to do logic today, which is going to be the last technical topic of this course, before the AI and society section. This is going to be a bit different. I think most of you have encountered logic in your lives, and you probably do engage in logical thinking, but this is going to be a different perspective on logic, and hopefully a refreshing one. So remember the motivation of this class: we build agents that can perceive, act, reason, and learn. For the last while we've been really focused on reasoning. We've seen a lot of different approaches, starting with search, MDPs, and games, and then Bayesian networks, which we looked at last time, for probabilistic reasoning. Now we're going to focus on logical reasoning: propositional logic and first-order logic. In some ways logical reasoning is more primitive than probabilistic reasoning, because it doesn't account for uncertainty, but we'll see that this simplification allows us to do fancier things. So I'll start with a simple example. If a + b = 10 and a - b = 4, what is a? Someone shout out the answer. — Seven. Okay, great, that's correct. So what's the point of this? How did you solve it? You probably didn't go and check all possible solutions for a and b, so you're not doing search, which is the other way we've thought about reasoning: search through possibilities and find the optimal thing. Instead, you probably did algebra. You took the two equations, added them to get 2a = 14, and then divided both sides by two. You've all known algebra since you were practically babies. This is an example of logical, or symbolic, reasoning, and as you can see, it's very powerful.
We were able to find a solution to a problem with an infinite number of possibilities, and you can do it in your head. We'll see that this is an instance of something more general that we'll talk about here. On a historical note: logic, as I mentioned in the first lecture of the class, was the dominant paradigm in AI before the 1990s. Logic started out when John McCarthy coined the term AI; that's what people had in their heads when they imagined what AI was. And we saw multiple problems with logic. It's deterministic, so it doesn't handle uncertainty: something is either true or false, but the real world isn't so simple. It also didn't leverage data, so the amount of information you could put into a logical system was limited. Machine learning addressed these problems. So you might wonder why we're studying this, especially toward the end of the course, when we should be culminating in something like language models. But logic provides one strength that I think is irreplaceable, and that is expressivity in a compact way. So the way to think about logic is as a language. This might sound a little strange. The goals of a language are that you can represent knowledge about the world (think of textbooks as representing knowledge about the world) and that you can reason with that knowledge (think of thinking in your head: you're using some sort of language of thought there). So you might wonder: why not just use natural language? Why bother with logic, now that we have language models? Here's an example of doing inference directly in natural language. A dime is better than a nickel. A nickel is better than a penny. Therefore, a dime is better than a penny. Very sound logic; everyone agrees this is good.
So what about example two? A penny is better than nothing. Nothing is better than world peace. Therefore, a penny is better than world peace. Sounds good to me. As you can see, natural language is kind of slippery: you can't just substitute words and hope for the best. I'll let you think about why this doesn't work.

Segment 2 (05:00 - 10:00)

So what other languages do we have? We have natural languages, which are informal: things like English or German. You're already familiar with formal languages like Python and C++. What we'll study today are logical languages, which are also formal. We'll look at first-order logic in the next lecture, and there's also description logic, which we won't talk about, but I mention it as an example to show that there isn't just one logic. Logic is like natural language: there's a whole family of logics, each with its own language, each expressing different things and useful for different purposes. So how do you think about a logic? There are three ingredients, which I'll talk through. This might not make much sense right now, but hopefully by the end of this class it will. When we define a logic, you first have to think about the syntax, which defines the set of possible formulas. Formulas are the statements you make in a logic; think of them as sentences. Then semantics is about the meaning of those formulas. And inference rules let you derive new formulas from previous formulas, in a way that has something to do with what those sentences mean. The important thing is that there are three things: syntax, semantics, and inference rules, and I'm going to talk about each of them. One important distinction is syntax versus semantics. Syntax is about which expressions are valid in the language; semantics is about the meaning of those expressions. To illustrate why these are different: you might have two expressions with different syntax but the same semantics. 2 + 3 and 3 + 2 are different syntax, but they have the same semantics: they both represent the number five.
You can also have the same syntax with different semantics. Take 3 / 2: under Python 2.7 semantics this is 1, and under Python 3 semantics this is 1.5. This has caused plenty of bugs as people port their code, but the point is that when you write 3 / 2, what it means depends on the semantics: there's something not contained in the string "3 / 2" that gives the sentence its life. I'll make all of that precise a little later. A quick summary: logic enables very expressive representation of knowledge and reasoning with it. You should think of logic as a language, defined by syntax, semantics, and inference rules. Now we're going to talk about propositional logic for the remainder of this class, and next time we'll talk about first-order logic. Propositional logic is basically the simplest logic you can think of, and we're really going to use it as a vehicle to develop general concepts such as entailment and soundness. Some of you have probably heard these terms, but they have a very precise meaning in logic, and these general concepts are, I think, more important than propositional logic itself, which is fairly basic. So here's our picture: we're going to talk about syntax, then semantics, then inference rules as they relate to semantics. That's the plan. Remember, the question of syntax is: what are the valid formulas? So to define the syntax of propositional logic, all I have to do is tell you what the valid formulas are. We start by defining a bunch of propositional symbols, also known as atomic formulas. These can be any identifier.
Think of them as variable names: P, Q, Rain, Wet. These are just names, just symbols, and I'm deliberately

Segment 3 (10:00 - 15:00)

not telling you what they mean until later. Now I'm going to introduce the logical connectives: and, or, not, implies, and the biconditional. The way it works is that if f and g are any propositional formulas (for example, f = Rain and g = Wet), then the following are also propositional formulas. You can take any formula and negate it. You can take any two formulas and put an "and" between them, or an "or" between them, or an implication arrow between them, or a biconditional (equivalence) between them. Here I'm writing them in mathematical notation and also instantiating them in code. And nothing else is a formula, so the set of formulas is defined recursively. This gives rise to formulas beyond the atomic ones: P is a formula, and so are ¬P, ¬P ∨ Q, and (P → Q) ∧ ¬P. On the other hand, some strings are not formulas, at least not in propositional logic: two symbols with no connective between them aren't valid, "+" is not a valid connective, and certain other constructs don't exist in propositional logic. Those could be formulas in other logics, but not in this one. So now we know what the set of valid formulas is. What do all these formulas mean? Now let's talk about semantics. It's hard to look at these formulas without some prior over what they could possibly mean, because you're used to thinking about "and" and "or", but I could just as easily have called these foo and bar: they're just symbols, with no semantics yet. To define semantics, I'm going to proceed fairly carefully. There are a lot of concepts to introduce, so this lecture has a lot of terms.
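The recursive syntax just defined can be sketched in a few lines of code. This is a minimal sketch under one possible encoding, not the course's reference implementation: atomic formulas are plain strings, compound formulas are tagged tuples, and the constructor names (Not, And, Or, Implies, Equiv) are my own illustrative choices.

```python
# Hedged sketch: formulas as tagged tuples, atoms as strings.
# The constructor names here are illustrative, not the lecture's exact code.

def Not(f):        return ('not', f)
def And(f, g):     return ('and', f, g)
def Or(f, g):      return ('or', f, g)
def Implies(f, g): return ('implies', f, g)
def Equiv(f, g):   return ('equiv', f, g)

# Formulas are built up recursively, so they can be taken apart recursively:
f = Equiv(And(Not('A'), 'B'), 'C')   # ((not A) and B) <-> C
print(f)  # ('equiv', ('and', ('not', 'A'), 'B'), 'C')
```

Because each connective just wraps its subformulas, any formula is a tree, which is exactly the structure the interpretation function will recurse over.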
Each term isn't complicated on its own; you just have to remember them. The first concept is a model, in the logical sense. This is not to be confused with a machine learning model; that's unfortunately a clash of terminology. In logic, a model represents a state of the world. I'll also use the letter w, so whenever you see "model" you can also think "world". A model w in propositional logic is an assignment of truth values to the propositional symbols. Remember, propositional symbols are the atomic formulas, essentially named variables. So with three symbols there are 2^3 = 8 possible models: A false, B false, C false; then A true with the rest false; and so on. Each of these eight assignments of truth values to the propositional symbols is a particular model. The way to think about it: a model is a state of the world, and the set of all possible models is all the ways the world could be. Now I'm going to connect a formula, which is an idea in syntax, to a model, which is an idea in semantics. I'll use an interpretation function I, which takes a formula f and a model w and returns true if the model satisfies the formula and false if it doesn't. The picture to have in your head: here is the space of all models, and here's a formula that's sitting

Segment 4 (15:00 - 20:00)

outside it. Remember, the formula lives in the syntax world and the model lives in the semantics world, and the interpretation function takes an f and a w and returns whether the formula is true in that model. Let me work through an example. Suppose you have three propositional symbols and the formula ((¬A) ∧ B) ↔ C, and this particular world: A is true, B is true, C is false. The interpretation function is going to recursively work out the result, and the way to think about it is as a tree. If I evaluate the interpretation function on this formula and this model, I break the formula down into its two subexpressions. Remember, formulas are built up recursively, so I can take them apart recursively as well. I have (¬A) ∧ B, which breaks down into ¬A and B, and ¬A breaks down further into A. At the bottom, the interpretation of A against w is just a lookup of w(A), which gives me true, or 1. If I have "not" of anything, I negate the interpretation of the subexpression, so 1 becomes 0. Here I'm and-ing two things: I take the values 0 and 1, "and" them, and get 0. Over here, C is a base case: I look up C, which gives 0. And the biconditional returns whether the interpretation of the left subexpression equals that of the right; 0 does equal 0, so I get 1, or true. I've implemented this in code, so we can see how it looks. I takes a formula and a world, and it's basically a lot of cases. Here I have this formula, a biconditional between two subexpressions, and here's the world. Since the top connective is the biconditional, I call the interpretation function on the subexpressions G and H.
G and H are just the subexpressions here: G is ((not A) and B) and H is C. So I recurse on G; that's an "and", so I recurse again on (not A). That has one argument, so I recurse on A. That's the base case, and the result is true because w(A) is true. Then I go back up, keep recursing, and at the end of the day I get true. In short, the interpretation function takes a formula, which is this tree-structured object, and a possible world, and it's defined recursively using these rules for what the connectives mean. For any formula, however complicated, and any world, I can run the interpretation function and get a boolean. The recursive function just handles all the cases: if the connective is and, or, implies, or the biconditional, it recurses on the subexpressions and applies the corresponding operation. This might seem a little strange if you're not used to languages and compilers,

Segment 5 (20:00 - 25:00)

but it becomes less trivial when you realize that you can define any language you want, with different symbols, and as long as you define its interpretation function, you have a logic to work with. So recall: the interpretation function takes a formula, like Rain or Wet, and a model, and returns a truth value. The way we should really understand the semantics of a formula is via another function, which I'll call the models of a formula: M(f) is the set of models w such that the interpretation of f against w is true. The picture to have in your head: the interpretation function gives us, for every point in the space of models, a verdict of true or false according to the formula f, and if I simply collect all the worlds where the interpretation function returns true, I call that set M(f). Let me walk through an example. I have two propositional symbols, Rain and Wet, and the formula f1 = Rain ∨ Wet. I call get_models on f1, passing in the list of symbols so it knows how to enumerate the models, and it iterates over all possible worlds. Rain false, Wet false: I call the interpretation function on that world, and it doesn't pass. Rain false, Wet true: this one passes, so I add it to the models. Rain true, Wet false: passes. Rain true, Wet true: also passes. So the models M(f1) are the set of models, or worlds, where the interpretation function returns true. Intuitively, these are all the possible states of the world: it could be wet but not raining, raining but not wet, or both,
and those correspond to the states of the world that this formula represents. Here's another one: if f2 = Rain ∧ Wet and you get the models, only a single model survives. So you see that a formula, which is rather small, compactly represents a potentially very large set of models. I only have two propositional symbols here, so there are only four models total, but if you had 100 propositional symbols, there would be 2^100 different models, and a very simple propositional formula might represent trillions of them. To summarize: remember, a formula is just a piece of syntax; it has some ands and ors and propositional symbols in it. We defined an interpretation function, which is the core of how we start defining semantics, but the thing we're going to use from now on is the M function. And there's a very intuitive way to think about it. Think of f as a sentence: when I write Rain ∨ Wet, it's like saying the sentence "it is rainy or wet". The semantics of that sentence is the set of all possible worlds we could be in such that the sentence is true. So if I say it is rainy or wet, we could be in one of these situations.
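The recursive interpretation function and the model enumeration just walked through can be sketched together. This is a sketch under the tagged-tuple encoding assumed earlier (string atoms, tuples like ('or', f, g)); the function names `interpret` and `get_models` mirror the lecture's description but are not its exact code.

```python
from itertools import product

def interpret(f, w):
    """Recursively evaluate formula f against model w (a dict: symbol -> bool)."""
    if isinstance(f, str):                      # base case: look up the symbol
        return w[f]
    op, *args = f
    vals = [interpret(g, w) for g in args]      # recurse on subexpressions
    if op == 'not':     return not vals[0]
    if op == 'and':     return vals[0] and vals[1]
    if op == 'or':      return vals[0] or vals[1]
    if op == 'implies': return (not vals[0]) or vals[1]
    if op == 'equiv':   return vals[0] == vals[1]
    raise ValueError(f'unknown connective: {op}')

def get_models(f, symbols):
    """All assignments (out of 2^n) under which f is true."""
    return [w for values in product([False, True], repeat=len(symbols))
            for w in [dict(zip(symbols, values))]
            if interpret(f, w)]

f1 = ('or', 'Rain', 'Wet')
print(get_models(f1, ['Rain', 'Wet']))
# three models survive: every world except Rain=False, Wet=False
```

Note that get_models is exponential in the number of symbols; it's fine for illustration, but the whole point of the later inference-rule machinery is to avoid enumerating 2^n worlds.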

Segment 6 (25:00 - 30:00)

If I say it's raining and wet, we could only be in this one situation. So now we know what the models of a formula are; let's talk about the idea of a knowledge base. This is, in some sense, just syntactic sugar. A knowledge base is a set of formulas, and we should think of it as a set of facts that you add to over time: our list of things that we know to be true about the world. Again, a knowledge base is just a piece of syntax, a list of facts. What are the semantics? M(KB) is the set of all models that satisfy every formula in KB; these are all the possible worlds we could be in, given our knowledge base. Let me do an example. Suppose the knowledge base contains two formulas: Rain, and Rain → Wet. That corresponds to: we know it's raining, and we also know that whenever it rains, it's wet outside. One way to construct the models of KB is to take the intersection of the models of its formulas. We take M(Rain): it's rainy, but it could be wet or not wet. Then we look at the models of Rain → Wet, which is a little more interesting: it could be not raining and not wet, not raining and wet, or raining and wet. The only one missing is which? Raining and not wet. That's not consistent with "if it's raining, then it's wet". Notice that this formula doesn't tell us whether it's raining: we have models with Rain false and with Rain true. But it does say that whenever it's raining, it has to be wet; it can't be the case that it's not wet. That's the way in which the models here represent this formula's restriction.
Now when you intersect the two sets, you get just Rain true and Wet true, and this makes sense: if it's raining, and whenever it's raining it's wet, then it has to be both rainy and wet, which is exactly what we have here. Another way to think about this: a knowledge base is semantically equivalent to the conjunction of its formulas, which means you can take the knowledge base {Rain, Rain → Wet} and just put an "and" between all the formulas. That gives us the single formula Rain ∧ (Rain → Wet), and if you call get_models on that, you get the same thing. So you can either take the models of the individual formulas and intersect them, or take the conjunction and get the models of that single massive formula. We'll go back and forth between these two intuitions of a knowledge base: as a set of facts, or as a single formula representing the conjunction of those facts. Now I'm going to introduce some more terminology. The way to think about it: I have a knowledge base, and I want to either ask it questions or add to it. As I add more things to the knowledge base, the knowledge base in some sense grows: I have five facts, I add another fact, I get six facts. So KB together with the new formula is a superset of KB,

Segment 7 (30:00 - 35:00)

but notice what it does to the set of models: it shrinks. As I get more facts, I am intersecting those restrictions with my previous M(KB), and therefore that set shrinks over time. This makes sense, because at any point in time M(KB) is the set of all possible worlds we could be in: the larger it is, the more uncertain we are. As we add more facts, the more certain we become, because we rule out worlds that don't match our knowledge. So you can think of KB as representing a set of constraints on the possible worlds: the more constraints you have, the more constrained your space, and the smaller your set of models. As we gain more knowledge, we narrow down the possible worlds we might be in. With that in mind, let's go back to our example with Rain and Wet, and I'll define a helper function that takes a knowledge base, converts it into a single formula, and gets the models out. I'll call it M, to match the mathematical notation. The key question now is: how much does M(KB) shrink? We know it can't grow, so how much does it shrink? Depending on how much, there's a special name for what happens. First, there's the notion of entailment, and the idea behind entailment is that M(KB) doesn't shrink at all. KB entails f, written KB ⊨ f, if and only if the set of models of the augmented knowledge base is exactly the same as the set of models of the original knowledge base. Pictorially it looks like this:
you have the set of models of the KB; the models of the formula f might be something larger; but when you intersect M(f) with the models of the knowledge base, you get exactly the same thing. The intuition is that f adds no more information: it's not restricting your set. (Information, here, means you've restricted your M(KB).) Let's walk through an example. The knowledge base is {Rain, Rain → Wet}, and the formula is Rain. I compute the models of the original knowledge base, and remember, we did this before: it's just Rain true, Wet true. If I get the models of the new knowledge base, with the two combined, I get exactly the same thing, which means KB entails f. In other words, no matter what state of the world we are in that's consistent with the knowledge base, f is also true. This is the entailment symbol: whenever you see ⊨, read it as "entails". So that's entailment: adding f contributes no new information, and the models don't change. At the other end of the spectrum, we have contradiction. KB contradicts f if the new set of models, with KB and f together, is the empty set. The intuition is that f is just not compatible with KB: the set of models always shrinks, and this is the maximum it can shrink. There are no solutions, no possible worlds in which both f and KB are true. The picture to have in your head: the intersection between the two sets is empty. Here's a concrete example. Suppose the knowledge base says it's raining and it's wet, and I have a formula that says it's not wet. Let's go through it: the old models are Rain true, Wet true, and the new models are the empty set, because this formula only contains models

Segment 8 (35:00 - 40:00)

where Wet is false. So it is the case that this knowledge base contradicts this formula. Now there's a third case, which is everything else, and it's called contingency. In some sense this is the more interesting case, at least when you're adding information. f is contingent with respect to the knowledge base if the new set of models, after the shrinking, is neither the empty set nor the original set of models. The intuition is that f shrinks the set of models, but not completely. The picture to have in your head: here's the set of models of the knowledge base, here's M(f), and the intersection has shrunk to some set that is not empty (there's something there) and also not the original set. So you've learned that we are here instead of somewhere over there. Let's go through an example. Suppose the knowledge base just says it's raining, and I come along with the formula Wet. The old set of models: it's raining, and Wet could be true or false. The new set of models: just Rain and Wet both true. We can check that the new set of models is neither empty nor the same as the old models, so that's called contingent. As an aside, there's also a relationship between entailment and contradiction: KB contradicting f is the same as KB entailing ¬f. The way to see this: suppose KB entails f. The models of ¬f are everything outside the ellipse M(f), so the models of ¬f don't intersect the models of KB; that is, KB contradicts ¬f whenever KB entails f. To summarize the key picture: the semantics of the knowledge base is M(KB), the set of possible worlds we could be in given the knowledge that we have.
You're going to consider adding a formula, which will shrink the set of possible worlds to a new set, and based on how much it shrinks, we call it one of three things. If it's the same, no shrinking at all, that's entailment. If it shrinks to the empty set, that's contradiction. And anything in between is contingency. Now, a question: if there's a contradiction, who's at fault, the knowledge base or f? This logic is not opinionated about that; there's no notion of who's right or wrong. It's just sets of models; it could be either one. All right, we've built up quite a bit. We started by defining models, then we defined an interpretation function that connects formulas in syntax to models. Then we said we should really think of a formula as a set of models, of possible worlds. Then we defined a knowledge base, which is a set of formulas, and we thought about adding new formulas and the relationship between the knowledge base with and without the addition: entailment, contradiction, or contingency, the three options. Now we're going to use all of that machinery we've built up to actually do something interesting with a knowledge base, and that is to implement two operations: ask and tell. Let's go through our example with Rain and Wet, and suppose we have a knowledge base. What can we do with it? (This is the same helper function as before for getting the models of the knowledge base.) We can ask yes/no questions of the knowledge base, which we'll denote as ask with a formula, and we can also tell the knowledge base new statements: tell with a formula.
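The three-way classification just summarized can be sketched directly on top of model enumeration. A sketch under the same assumptions as before (tagged-tuple formulas, string atoms); the names `kb_models` and `classify` are mine, and the brute-force enumeration is only practical for small numbers of symbols.

```python
from itertools import product

def interpret(f, w):
    # same recursive evaluator over tagged-tuple formulas as sketched earlier
    if isinstance(f, str): return w[f]
    op, *args = f
    v = [interpret(g, w) for g in args]
    return {'not': lambda: not v[0], 'and': lambda: v[0] and v[1],
            'or': lambda: v[0] or v[1],
            'implies': lambda: (not v[0]) or v[1],
            'equiv': lambda: v[0] == v[1]}[op]()

def kb_models(kb, symbols):
    """Models satisfying every formula in KB (the KB acts as one conjunction)."""
    return [w for vals in product([False, True], repeat=len(symbols))
            for w in [dict(zip(symbols, vals))]
            if all(interpret(f, w) for f in kb)]

def classify(kb, f, symbols):
    old = kb_models(kb, symbols)
    new = [w for w in old if interpret(f, w)]   # intersect M(KB) with M(f)
    if new == old: return 'entailment'          # M(KB) didn't shrink at all
    if not new:    return 'contradiction'       # it shrank to the empty set
    return 'contingency'                        # it shrank, but not to empty

kb = ['Rain', ('implies', 'Rain', 'Wet')]
print(classify(kb, 'Wet', ['Rain', 'Wet']))     # 'entailment'
```

The single comparison of `old` against `new` is the whole story: ask and tell below are just two ways of reporting its outcome.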

Segment 9 (40:00 - 45:00)

Each of these is going to return a response, and that response is determined by the relationship between KB and f. Let's begin with ask. We only support yes/no questions in propositional logic, because propositions are boolean, and there are three possible answers: yes, no, or I don't know. Suppose the knowledge base is {Rain, Wet}, and I ask: is it rainy or wet? I know it's raining and I know it's wet, and I ask the question Rain ∨ Wet of the knowledge base. The way you answer this in general (this procedure works for any knowledge base and any formula) is: you compute the set of old models, you compute the set of new models with KB and f together, and you check whether they are equal. They are, so the answer to the question is yes, absolutely yes, because in all possible worlds in M(KB), f is true. Now suppose we know that it's not rainy but it is wet, and we ask: is it raining? We get our old models, we get our new models, and the new set is empty, so we say no: it's inconceivable that f is true. To answer such a question, we're assuming that the knowledge base is correct, and f is something we're testing, like a query. To answer the query, we pretend to add it to the KB and see what changes. If what changes is that we end up with the empty set, then we say no, that's impossible: of all the possible worlds consistent with the knowledge base, none is consistent with f, so it's a definite no. In the final case, suppose I know it's raining and I ask: is it wet? Given the old models and the new models, in this case I say I don't know, because depending on which world we're in, the answer could be false or it could be true.
Really, another way of wording it is "it depends", which is equivalent to "I don't know". So this is just using the exact same concepts as before, entailment, contradiction, and contingency, mapped onto question answering. If I ask a question f, I check which of the three cases holds and generate the appropriate response: yes, no, or I don't know. Now I'll talk about the other operation, tell, which is about adding information to a knowledge base. Here there are also three possible responses: "I already knew that", "I don't buy that", or "I learned something new", corresponding to entailment, contradiction, and contingency. Let's walk through the examples. Suppose I know it's raining, and I also know that if it's raining, it's wet, and I say: "Hey, knowledge base, it's wet." What is the knowledge base going to do? It's going to say: well, these are the possible worlds I could have been in before, and if I took your formula and added it to the knowledge base, this is what I would know. And actually, that's the same thing. So you didn't tell me anything new: I already knew that. In the second case, suppose the knowledge base knows it's raining and wet, and I tell it: "Hey, knowledge base, it's not raining." The knowledge base says: okay, here's what I know to be the possible states of the

Segment 10 (45:00 - 50:00)

world given my previous knowledge base KB. I'm going to try adding your formula in, and — oh, it's empty. Therefore it's a contradiction, and I say "I don't buy that." There's no possible world consistent with both; if you're telling me this, there's no way I can resolve the contradiction. And finally, if the knowledge base says it's raining and I tell it: hey, knowledge base, it's not wet today — the knowledge base says: okay, here are the possible worlds I could be in, here are the new models, and that's not the same as the old models, so I learned something. And it's not the empty set, so it's not a contradiction. So what I'm going to do is add it to the knowledge base and say: thank you, I learned something new. Now the new knowledge base contains rain and not-wet. Of course, it's important to remember that just because tell accepts a fact doesn't mean the fact is correct in any sense. You can tell the knowledge base that 1 + 3 is 5, and it will say "okay, I learned something new," as long as that doesn't contradict its existing information. If you start a knowledge base that knows nothing, you can add a lot of things and the knowledge base won't object — this is how fiction works, right? You just create a world. Okay, so the summary here: ask queries a knowledge base with a question, and tell adds new information to the knowledge base. Both essentially perform the same operation; the only differences are which string is returned and whether you actually update the knowledge base. The three possible outcomes — you're always comparing M(KB) with M(KB ∪ {F}). Entailment: ask returns yes, and tell says "I already knew that." For contradiction,
ask returns no, and tell says "I don't buy that." And for contingency, ask returns "I don't know," and tell returns "I learned something new." Okay, so this is an example of how we use entailment, contradiction, and contingency in this quote-unquote application of knowledge bases with asking and telling. So the first part of the question is: should I think about a knowledge base as a database? Absolutely, that's the right way to think about it — it's storing facts, like a database. Okay. The next question: suppose you tell the knowledge base a new fact and it's a contradiction. I think this goes back to the earlier question: it's possible that the knowledge base is just wrong and the new information is correct. You're right that the implementation of tell here assumes the knowledge base is always correct. If that's not the case, then you have some work to do, and it's actually a bit tricky, because you don't know which parts of the knowledge base to remove. You don't want to remove all of it, and sometimes there are multiple options for which parts to remove to resolve the contradiction. It just gets more complicated. Next question: is there a notion of the knowledge base being outdated? There's no notion of time here in propositional logic, or in logic in general. Usually, when you add formulas to a knowledge base, it's monotonic, meaning you can't remove anything: the set of models always shrinks. Interestingly, this is not true with probabilities — probabilistic updating is not monotonic; the probability of something can go up or down depending on what you add, whereas with logic the set of models can only shrink. I think the way you would deal with this is to have some external mechanism that tries to detect old facts that no longer make sense. If you were building some

Segment 11 (50:00 - 55:00)

interface here, it would probably say there's a contradiction and surface the formulas: if there's a small set of formulas that contradict, you show them to the user, and the user decides what to remove and what to keep. In general, these cases where you find a contradiction but don't know what is wrong show up in many settings. For example, if you're coding and your tests are failing, it could be that your code is wrong or that your tests are outdated. Or you have comments that don't match the code: the comments could be wrong, or the code could be. In general it's unknowable, and you just have to surface the contradiction and use external information to decide. Next question: if ask and tell are basically doing the same thing, why are they separate? The reason is that it's more of a user-interface distinction. You have a knowledge base; sometimes you want to ask it questions, and sometimes you want to tell it things. The single bit of difference is basically the line where, if the fact is contingent, tell actually adds it. All right. Speaking of probabilistic reasoning, let's make an interesting connection to Bayesian networks. You thought we were done with them, but they're actually pretty closely connected to propositional logic. Remember that a Bayesian network defines a joint distribution over a set of variables. Here I'm using rain and wet as the random variables, and for every assignment we have a probability between zero and one. So this is a joint table, and the way to map Bayesian networks onto propositional logic is as follows. The random variables are essentially the propositional symbols: before, we had variables such as rain and wet that could take on values true or false, and here we have propositional symbols that take on true or false.
And what we're calling models here are just assignments, in Bayesian network speak. An assignment is a row of this table — rain = 1, wet = 1, for example. So a joint distribution actually assigns a probability distribution over models; you can think about it like that. The other thing with Bayesian networks is that we did probabilistic inference: we computed the probability of some query given evidence. The way to map this on is to think of the evidence as your knowledge base — the things you know to be true about the world — and the query as the thing you're asking about. For example, before we asked: what is the probability of rain given that it's wet? In logic, this is: I tell the knowledge base that it's wet, and then I ask, is it rainy or not? So these are exactly analogous. One key difference, however, is that in Bayesian networks the query and evidence are just simple assignments to variables: when we condition on things, it's always of the form variable = value. In propositional logic, the query and evidence are arbitrary formulas. They could be "rain and wet," or "rain and not snow," or "not snow." That gives us a much more expressive language for reasoning, because sometimes we don't have the actual evidence — we don't know whether it's raining; maybe we only know that it's either raining or snowing — and in logic we can condition on "rain or snow," which you can't do natively in a Bayesian network. In a Bayesian network you would have to define a separate variable for "rain or snow"; you can do it, but it's not native. The second difference, of course, is that Bayesian networks have probabilities and logic does not. But the way we can draw this connection is to think of the probability distribution as being over models, or worlds.
So the probability of a knowledge base, in some sense, is the summation over all possible

Segment 12 (55:00 - 60:00)

models that are consistent with the knowledge base of the probability of that world. So, for example, the probability of rain is equal to the sum over the models in which it's raining — you add those two numbers and get 0.4. And if you look at the probability of KB ∪ {F}, that's simply the sum of the probabilities of the new models, and if you divide, that gives you the probability of F given KB. So this is the probabilistic generalization of the logical inference we've been doing. Because it's a probability, it's a number between 0 and 1, and it generalizes the yes / no / I don't know responses. Before, we just returned no, don't know, or yes; now we have a number in between. Normally we would say "I don't know," but with probabilities on everything we can give a number — we can say, hmm, 90% chance. Okay. So just to summarize, there are two ways in which they differ. One is that probability distributions generalize the idea of just having zeros and ones. And secondly, with logic, you can have formulas, which give you more expressive queries. There's a whole line of work on probability and logic, and on probabilistic logical languages, which we're not going to cover here. All right, moving on. The final topic before we finish our propositional logic and semantics unit is satisfiability. So far we've implemented ask and tell in terms of entailment, contradiction, and contingency, and the question is: how do you do this efficiently? So far we're just enumerating all possible models, and in general the number of models is exponentially large. So let's try to get a faster algorithm by first reducing everything to a single operation.
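To make the probabilistic view concrete before moving on to satisfiability, here is a sketch of P(F | KB) as summing and normalizing over models. The numbers in the joint table are illustrative (chosen so that P(rain) = 0.4, as in the lecture's figure); they are not taken from the slides.

```python
# Joint distribution over models (assignments to rain and wet).
joint = {
    (True, True): 0.3,    # rain = 1, wet = 1
    (True, False): 0.1,   # rain = 1, wet = 0
    (False, True): 0.2,   # rain = 0, wet = 1
    (False, False): 0.4,  # rain = 0, wet = 0
}

def prob(formulas):
    """P(KB): total probability of the models consistent with all formulas."""
    total = 0.0
    for (r, w), p in joint.items():
        model = {"rain": r, "wet": w}
        if all(f(model) for f in formulas):
            total += p
    return total

rain = lambda m: m["rain"]
wet = lambda m: m["wet"]

print(prob([rain]))  # 0.3 + 0.1: the two rows where it rains, i.e. 0.4

# P(rain | wet) = P({wet, rain}) / P({wet}) -- the graded version of ask,
# returning a number instead of yes / no / I don't know.
print(prob([wet, rain]) / prob([wet]))  # 0.3 / 0.5, i.e. 0.6
```

When KB entails F this ratio is 1 (yes), when KB contradicts F it is 0 (no), and contingency lands strictly in between.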
We're going to define a notion of satisfiability: a knowledge base is satisfiable if the set of models denoted by the knowledge base is not empty. This makes sense — there's at least one world consistent with the knowledge base. Now we can reduce entailment, contradiction, and contingency — everything — to satisfiability. Let's try to do this. Your first instinct might be to just call a satisfiability checker on KB ∪ {F}. We know that if it's not satisfiable, the set of models is empty and we can return contradiction, right? But if it is satisfiable, we don't know whether it's entailment or contingency, so we need another test. Recall the relationship between entailment and contradiction: KB entails F is the same as KB contradicts ¬F. Using that fact, we can call the satisfiability checker on KB ∪ {¬F}, and if that is not satisfiable, then KB entails F. If you're familiar with proof by contradiction, this is basically it: to prove that F follows from KB, you add ¬F and try to reason, and if you reach a contradiction, you know that KB entails F. So to summarize, we proceed as follows. First we check whether KB ∪ {¬F} is satisfiable, and if the answer is no, then KB entails F. If it's yes, then we

Segment 13 (60:00 - 65:00)

check whether KB ∪ {F} is satisfiable. If the answer is no, we say it's a contradiction; otherwise we return contingency. The reason we need two calls is that satisfiability returns yes or no, which is only one bit of information, but we have three outcomes here, so one call can't tell us which of the three it is. So that's satisfiability and its relationship to entailment, contradiction, and contingency. Model checking is the task of determining whether a KB is satisfiable. The reason we're expressing everything in terms of satisfiability is that people have been working on model checking for ages, and there are efficient ways of doing it: you're given a knowledge base and you check whether it is satisfiable. We've been using a package called Z3 — I didn't say this explicitly, but it's an SMT solver. I'll explain next time what that means exactly, but it's basically a tool for doing logical inference, and one of the things it can do is check satisfiability. We define a solver from Z3, we add all the formulas of the knowledge base, and then we ask it to check, and it returns SAT for satisfiable. If it's SAT, we can also get the model: whenever a satisfiability checker returns SAT, it should also give you an example — a sort of existence proof. The existence proof it gives us here is rain = true, wet = true, and that model is the justification for why this knowledge base is satisfiable. Under the hood, there have been decades of research on SAT solvers, and they've gotten extremely good. We know from complexity theory that SAT solving is NP-complete, so it's hard in the worst case, but that doesn't stop people from building heuristics and tools that actually solve SAT instances with thousands or hundreds of thousands of variables.
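The two-call reduction just described can be sketched as follows. To keep the sketch self-contained and runnable, the satisfiability checker here is brute force over models; in practice you would hand the formulas to a solver such as Z3 (via its Python bindings: `Solver()`, `add()`, `check()`, `model()`).

```python
from itertools import product

SYMBOLS = ["rain", "wet"]

def satisfiable(formulas):
    """True iff at least one model makes every formula true."""
    for values in product([False, True], repeat=len(SYMBOLS)):
        model = dict(zip(SYMBOLS, values))
        if all(f(model) for f in formulas):
            return True
    return False

def classify(kb, f):
    """Two satisfiability calls pin down which of the three cases holds."""
    if not satisfiable(kb + [lambda m: not f(m)]):
        return "entailment"     # KB + {not f} unsatisfiable => KB entails f
    if not satisfiable(kb + [f]):
        return "contradiction"  # KB + {f} unsatisfiable
    return "contingency"

rain = lambda m: m["rain"]
wet = lambda m: m["wet"]
rain_implies_wet = lambda m: (not m["rain"]) or m["wet"]

print(classify([rain, rain_implies_wet], wet))    # entailment
print(classify([rain], lambda m: not m["rain"]))  # contradiction
print(classify([rain], wet))                      # contingency
```

Swapping the brute-force `satisfiable` for a real SAT/SMT solver turns the exponential enumeration into something that scales to very large instances.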
There are some really sophisticated things there which are outside the scope of this class. So, to summarize: we've reduced entailment, contradiction, and contingency to at most two satisfiability calls, and once everything is reduced to satisfiability, we can invoke these model checkers, which are really fast. That's how we actually handle these queries in practice. Everything earlier, where we implemented ask and tell by enumerating over models, was just a proof of concept for understanding — it's not how you would actually implement this. Okay. Now, in the last five minutes, let's talk about inference rules. I'll say more about this next time, but just to give you a taste. We've done syntax — the space of valid formulas. We've done semantics, which connects the syntax to the space of models. And now we have inference rules. In some sense we don't need them, because we've already defined the syntax and semantics: we can just use model checking, a black-box solver, and it'll just work. Model checking goes through and finds models — possible worlds — that satisfy the formulas. But remember the problem I posed at the beginning? We didn't do any model checking there; we just manipulated symbols. And this is normally what you do when you're thinking as a human. When you're doing logical reasoning — math, for example — you hardly ever do model checking, which would be like enumerating cases; you derive things. In natural language, too, you derive things. So what's the logical mechanism here? The idea is logical inference. Here's an example: it's raining. If it's

Segment 14 (65:00 - 70:00)

raining, it's wet. Therefore, it is wet. In general, we have a set of premises, which are formulas — rain, and rain implies wet — and from these premises we can derive the conclusion. Rain and wet can be replaced with any propositional symbols, or actually any formulas. This rule is called modus ponens: if I have P, and I have P implies Q, then I derive Q. And this little symbol with one horizontal line instead of two means "proves," as opposed to "entails." The general idea of an inference rule is that you have a set of premises — a set of formulas f1 through fn — and you get a conclusion, which is some other formula, and these rules operate on the syntax, not the semantics. Given a set of rules, we can just iterate. I give you a set of rules and an initial knowledge base, and I loop while the knowledge base keeps changing: I take a set of formulas from the knowledge base, and if I find a matching rule — one shaped like "premises derive conclusion" — then I add the conclusion G to the knowledge base, and I iterate until I'm done. We say that KB derives, or proves, F — written with the single-bar symbol — if F eventually gets added to the knowledge base by this procedure. Here's an example. As a starting point, I have rain, rain implies wet, and wet implies slippery. I apply modus ponens to rain and rain-implies-wet, which are in my knowledge base, and derive wet — a conclusion I add to the knowledge base. Now I can take wet and wet-implies-slippery, which are formulas in the knowledge base, and derive the conclusion slippery. Then I converge, because I can't add any more facts to the knowledge base.
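The iterate-the-rules loop just described can be sketched for the special case of modus ponens. This is a sketch under simplifying assumptions: facts are propositional symbols represented as strings, and each pair `(p, q)` stands for the formula "p implies q".

```python
def forward_chain(facts, implications):
    """Apply modus ponens until the knowledge base stops changing."""
    kb = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # Modus ponens: from p and (p -> q), derive q.
            if premise in kb and conclusion not in kb:
                kb.add(conclusion)
                changed = True
    return kb

# rain, rain -> wet, wet -> slippery: derive wet, then slippery, then converge.
derived = forward_chain({"rain"}, [("rain", "wet"), ("wet", "slippery")])
print(sorted(derived))  # ['rain', 'slippery', 'wet']
```

General inference rules would match arbitrary formula shapes rather than bare symbols, but the fixed-point loop — add conclusions until nothing changes — is the same.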
In general, you take facts from the knowledge base, you apply some rule, you add new facts, and then you can use those new facts in conjunction with old facts to add more new facts, and you iterate. Now, there are some formulas you can't derive. Some of them are just false, so it's good that you can't add them. And some of them are true but a bit more advanced, so you don't end up adding them either. So now there's a question. This is all syntax — we're back in syntax land. This idea of proving, or deriving, is a syntactic operation: I have a set of facts in the knowledge base, which are just symbols, and I have a set of rules that operates on them and can generate more formulas, which are symbols. Earlier I introduced the notion of entailment, and remember, entailment means that the set of models of F is a superset of the set of models of KB. So what's the relationship here? In particular, how does the set of formulas derivable from KB — the syntax — relate to the set of formulas entailed by KB? The way to think about this is as follows. We have truth: semantics is what's true in the world, written with the two horizontal lines. You can think about truth as a glass, where the space inside is the space of true formulas — everything in the glass is true. When you do a modus ponens derivation, it's like filling the glass with water, and soundness means that anything you add via that syntactic derivation is a subset of the things that are true. So

Segment 15 (70:00 - 73:00)

you never go outside the glass. You might not fill the glass, but you don't go outside it. Soundness means that any time your derivation produces a formula, that formula is true — entailed by the knowledge base. The opposite is completeness, which means that the set of formulas you can derive is a superset of the set of formulas that are entailed. Remember, derivation is the water filling the glass, and completeness says you've filled the glass completely: every single fact that is true, you've also derived. But it's possible that you derived things outside the glass — false things — so such a system would not be sound, but it would be complete. Ideally, you want the truth, the whole truth, and nothing but the truth: soundness says you get nothing but the truth, and completeness says you get the whole truth. Hopefully the glass analogy is a good one to keep in mind. So, two quick examples. Is modus ponens sound? The way you check whether a rule is sound is you just look at the models. You have the models of rain and the models of rain-implies-wet — these three — you take the intersection between them, and you check whether that's a subset of the models of the conclusion. In this case it is, so modus ponens is sound. Whereas consider this other funky rule: whenever it's wet, and I have the rule rain implies wet, can I infer rain? Let's check: the set of models where it's wet is here, the set for rain-implies-wet is over here, the intersection is this dark red region, and that is not a subset of the models of rain. So this rule is unsound — never do this in practice. It's a kind of reverse implication, and it's not valid logical reasoning. Okay, so just to summarize: we did a lot in this class. We talked about syntax, semantics, and inference rules.
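The soundness check described above — intersect the premise models and test whether the result is a subset of the conclusion's models — can be sketched by enumeration (a minimal sketch; the helper name `is_sound` is our own):

```python
from itertools import product

def is_sound(premises, conclusion, symbols=("rain", "wet")):
    """A rule is sound iff every model of all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if all(p(m) for p in premises) and not conclusion(m):
            return False  # found a counterexample model
    return True

rain = lambda m: m["rain"]
wet = lambda m: m["wet"]
rain_implies_wet = lambda m: (not m["rain"]) or m["wet"]

# Modus ponens: from rain and rain -> wet, derive wet.
print(is_sound([rain, rain_implies_wet], wet))  # True: sound
# The funky reverse rule: from wet and rain -> wet, derive rain.
print(is_sound([wet, rain_implies_wet], rain))  # False: unsound
```

The counterexample the checker finds for the reverse rule is exactly the dark-red region from the lecture: a world where it's wet but not raining.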
On the semantics side, we introduced models and the interpretation function, which lets us induce a set of models for a formula. We introduced knowledge bases, and the notions of entailment, contradiction, contingency, and satisfiability. And finally, we connected syntax and semantics with the notions of soundness and completeness. These notions are not specific to propositional logic; they're general ideas in logic. Next time, we'll graduate to first-order logic, where we'll do some really interesting things.
