This article was originally posted on Medium and Asheux
In my previous article on knowledge representation using propositional logic, we looked at a simple algorithm called model-checking to show how to determine entailment by enumerating all possible worlds. Here, we turn our attention to a more efficient way to achieve the same goal: theorem proving, the idea of applying inference rules directly to the sentences in our knowledge base to build a proof, without ever needing to enumerate models.
Inference rules
Proofs in mathematics are valid arguments that establish the truth of the desired goal. Inference rules, therefore, are functions for building valid arguments: they take premises, analyse their syntax and return a conclusion.
Basic concepts:
- ≡: The equivalence connective
- Logical equivalence: two sentences are logically equivalent if they are true in the same set of possible models
- An argument is a sequence of sentences that end with a conclusion
- The validity of an argument is determined by its conclusion and its premises (the preceding sentences). If the conclusion follows from the truth of the premises, then the argument is valid.
These rules are;
- Modus ponens
- And-Elimination
- De Morgan's law
- Double negation elimination
- Implication elimination
- Bi-conditional elimination
- Distributive property
We will use the following structure to represent our inference rules; the horizontal line separates the premises from the conclusion.
Let's look at each of these rules with some examples;
Modus ponens
The way to interpret this is: if we have access to some piece of information, α ⇒ β, and we know that α is true, then the conclusion we reach is that β is true. Example;
Premises   | α ⇒ β: If Rey was free, then Rey would visit Ben
           | α: Rey is free
-----------|------------------------------------------------------
Conclusion | β: Rey would visit Ben
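To see the rule as a computation, here is a minimal Python sketch (my own illustration, not part of the original article): facts are plain strings and implications are (antecedent, consequent) pairs.

# Modus ponens as a function: from alpha => beta and alpha, conclude beta.
# The representation below (strings and pairs) is assumed for illustration only.
facts = {"ReyIsFree"}                            # alpha: Rey is free
implications = [("ReyIsFree", "ReyVisitsBen")]   # alpha => beta

def modus_ponens(facts, implications):
    """Return every conclusion licensed by modus ponens on the given facts."""
    return {beta for alpha, beta in implications if alpha in facts}

print(modus_ponens(facts, implications))         # {'ReyVisitsBen'}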
And-Elimination
This states that from a conjunction (a sentence whose connective is ∧), any of the conjuncts (sentences connected by ∧) can be inferred:
Example;
Premises   | α ∧ β: Rey is free on Wednesday and Friday
-----------|------------------------------------------------------
Conclusion | β: Rey is free on Friday
De Morgan's rule
¬(α ∧ β) ≡ (¬α ∨ ¬β), De Morgan(∧)
¬(α ∨ β) ≡ (¬α ∧ ¬β), De Morgan(∨)
Using And(∧);
Example;
Premises   | ¬(α ∧ β): It is not the case that Rey is free on
           | both Wednesday and Friday
-----------|------------------------------------------------------
Conclusion | (¬α ∨ ¬β): Rey is not free on Wednesday or Rey
           | is not free on Friday
Using Or(∨)
Example;
Premises   | ¬(α ∨ β): Rey is not free on Wednesday or Friday
-----------|------------------------------------------------------
Conclusion | (¬α ∧ ¬β): Rey is not free on Wednesday and Rey
           | is not free on Friday
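Since logical equivalence means agreeing in every possible model, both De Morgan equivalences can be verified by brute-force enumeration of models. The small helper below is a sketch added here for illustration; the name check_equivalent is my own, not from the article.

from itertools import product

def check_equivalent(f, g, n_symbols=2):
    """True if f and g have the same truth value in every model."""
    return all(f(*model) == g(*model)
               for model in product([True, False], repeat=n_symbols))

# De Morgan (And): not(a and b)  ==  (not a) or (not b)
print(check_equivalent(lambda a, b: not (a and b),
                       lambda a, b: (not a) or (not b)))   # True
# De Morgan (Or): not(a or b)  ==  (not a) and (not b)
print(check_equivalent(lambda a, b: not (a or b),
                       lambda a, b: (not a) and (not b)))  # True

The same helper can be used to verify the implication, bi-conditional and distributive equivalences that follow.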
Double negation elimination
¬(¬α) ≡ α, double-negation elimination
Example;
Premises   | ¬(¬α): It is not true that Rey did not succeed
-----------|------------------------------------------------------
Conclusion | α: Rey succeeded
Implication elimination
(α ⇒ β) ≡ (¬α ∨ β), implication elimination
Example;
Premises   | (α ⇒ β): If Rey was free, then Rey would visit Ben
-----------|------------------------------------------------------
Conclusion | (¬α ∨ β): Rey is not free or Rey would visit Ben
The way to think about this rule is: given an if-then statement,
(α ⇒ β): If Rey was free, then Rey would visit Ben
we can translate it into an or statement. Start from the sentence Rey is not free or Rey is free (¬α ∨ α), which is always true. But we know that whenever Rey is free (α), she would visit Ben (β), so in the second disjunct we can replace α with β, giving the final conclusion that Rey is not free or Rey would visit Ben (¬α ∨ β).
Bi-conditional elimination
(α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α)), bi-conditional elimination
The premise here can be translated as 'α if and only if β'. Example;
Premises   | (α ⇔ β): Rey is free if and only if it's Monday
-----------|------------------------------------------------------
Conclusion | (α ⇒ β) ∧ (β ⇒ α): If Rey is free, then it is Monday,
           | and if it is Monday, then Rey is free
Distributive property
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ)), distributivity of ∧ over ∨
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ)), distributivity of ∨ over ∧
Using And(∧) over Or(∨); for example, α ∧ (β ∨ γ): Rey is free and (it is Monday or Ben is sick), becomes (α ∧ β) ∨ (α ∧ γ): (Rey is free and it is Monday) or (Rey is free and Ben is sick).
Using Or(∨) over And(∧); for example, α ∨ (β ∧ γ): Rey is free or (it is Monday and Ben is sick), becomes (α ∨ β) ∧ (α ∨ γ): (Rey is free or it is Monday) and (Rey is free or Ben is sick).
Now let's use these inference rules to draw some conclusions, or to prove that a query is true. First, we define a proof problem as follows (a small code sketch of this framing appears after the list);
- Initial state: the beginning knowledge base
- Actions: All the inference rules
- Result or transition model: the new knowledge base
- Goal: The state containing the sentence we are trying to prove
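Seen this way, theorem proving is an ordinary search problem. The skeleton below is an assumed illustration of that framing (the class and field names are mine); it only fixes the shape of the problem, not any particular search strategy.

from dataclasses import dataclass

@dataclass(frozen=True)
class ProofProblem:
    """Theorem proving framed as search, following the four pieces above."""
    initial_kb: frozenset   # initial state: the beginning knowledge base
    rules: tuple            # actions: the available inference rules (functions)
    goal: str               # goal: the sentence we are trying to prove

    def result(self, kb, rule):
        # Transition model: applying a rule adds its conclusions to the KB.
        return kb | rule(kb)

    def goal_test(self, kb):
        return self.goal in kb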
Resolution rule
This is another type of inference rule that yields a complete inference algorithm.
Basic concepts;
- Literals: Propositional symbols or the negation of propositional symbols
- Clause: a disjunction of literals
Let's see some examples to help us understand this rule;
Given the clause A = (P ∨ Q ∨ R) and the literal ¬P, using this rule, the literal ¬P resolves with P in A. This results in the clause B = (Q ∨ R) as the resolvent. In simple terms, this can be interpreted as;
(P ∨ Q ∨ R): if Rey is in the house or Ben is sick or it is Wednesday
And we know that ¬P: Rey is not in the house; then we can conclude that;
(Q ∨ R): Ben is sick or it is Wednesday.
Here P and ¬P are complementary literals, literals that cancel each other out. This is a type of resolution called unit resolution (it takes a clause and a literal and produces a new clause).
Note that the resulting clause, or resolvent, should contain only one copy of each literal. For example, given (P ∨ Q) and (¬P ∨ Q), we get (Q ∨ Q).
But we need the literals to be unique, so we perform a process called factoring on the resulting clause to remove the literals appearing more than once. From the above example, we are left with (Q).
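Representing a clause as a set of literals makes factoring automatic, since a set keeps only one copy of each literal. The resolve sketch below is my own illustration (literals are plain strings, with '~' marking negation); it also shows the empty clause that the next sections rely on.

def negate(literal):
    """'P' -> '~P' and '~P' -> 'P'."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(ci, cj):
    """All clauses obtained by resolving ci with cj on one pair of
    complementary literals. Clauses are frozensets, so duplicate
    literals are factored out automatically."""
    resolvents = set()
    for lit in ci:
        if negate(lit) in cj:
            resolvents.add(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents

# (P or Q) with (~P or Q): the resolvent (Q or Q) is factored to (Q).
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "Q"})))  # {frozenset({'Q'})}
# P with ~P: the resolvent is the empty clause.
print(resolve(frozenset({"P"}), frozenset({"~P"})))            # {frozenset()}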
Now, to determine entailment in propositional logic, we use the resolution rule on knowledge bases expressed in conjunctive normal form (see below).
Conjunctive normal form(CNF)
This is the form in which a logical sentence is expressed as a conjunction of clauses. Resolution applied to sentences in CNF gives a complete inference procedure for all of propositional logic.
Conversion to CNF
- Eliminate bi-conditionals; turn (α ⇔ β) to (α ⇒ β) ∧ (β ⇒ α).
- Eliminate implication; turn (α ⇒ β) to ¬α ∨ β.
- Move the Not(¬) inwards by applying De Morgan's rule and double-negation elimination; ¬(α ∧ β) ≡ (¬α ∨ ¬β) and ¬(¬α) = α respectively.
- Use distributive law to distribute nested ∧ and ∨ operators applied to literals.
Example using the sentence A ⇔ (B ∨ C);
Bi-conditional elimination: (A ⇒ (B ∨ C)) ∧ ((B ∨ C) ⇒ A)
Implication elimination: (¬A ∨ B ∨ C) ∧ (¬(B ∨ C) ∨ A)
De Morgan's rule: (¬A ∨ B ∨ C) ∧ ((¬B ∧ ¬C) ∨ A)
Distributive law: (¬A ∨ B ∨ C) ∧ ((¬B ∨ A) ∧ (¬C ∨ A))
The converted CNF is (¬A ∨ B ∨ C) ∧ (¬B ∨ A) ∧ (¬C ∨ A).
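If the sympy library is available, the same conversion can be double-checked mechanically; this cross-check is an addition of mine, not part of the original walk-through.

from sympy import symbols
from sympy.logic.boolalg import Equivalent, to_cnf

A, B, C = symbols("A B C")
print(to_cnf(Equivalent(A, B | C)))
# Prints the same three clauses, e.g. (A | ~B) & (A | ~C) & (B | C | ~A)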
A point to note: when you resolve a literal with its negation, we get an empty clause, written (). The empty clause is always false, because a disjunction is true only if at least one of its disjuncts is true, and the empty clause has none. For example, resolving P with ¬P gives (), which is false.
Resolution algorithm
The idea of proof by contradiction is pretty famous in computer science and mathematics: given a proposition, to determine its validity or truth we first assume the proposition is false and show that this assumption leads to a contradiction.
Applying this concept to the resolution rule, here is how we determine the truth of the sentence KB |= α (where KB is the knowledge base). We start by assuming ¬α and show that (KB ∧ ¬α) leads to a contradiction; if it does, the sentence KB |= α is true, otherwise it is false. How?
Procedure:
- Convert the sentence (KB ∧ ¬α) to conjunctive normal form (CNF)
- Apply the resolution rule to the resulting clauses to produce new clauses, and keep doing this:
- If the produced clause is the empty clause () = false, there is a contradiction; therefore, KB |= α is true
- Otherwise, if no new clauses can be added, KB |= α is false, hence there is no entailment
Example;
Using the above CNF-converted sentence together with one additional known fact, C;
R0 = (¬A ∨ B ∨ C) ∧ (¬B ∨ A) ∧ (¬C ∨ A) ∧ C
We want to check that R0 entails the query A or (R0 |= A).
First thing is to assume our query is false, i.e ¬A, and add it to the clauses to form the following sentence;
R1 = (¬A ∨ B ∨ C) ∧ (¬B ∨ A) ∧ (¬C ∨ A) ∧ C ∧ ¬A
Then we show that R1 leads to a contradiction;
Step 1: Pick the two clauses (¬C ∨ A) and ¬A, because they contain the complementary literals A and ¬A, and resolve them. The resolvent is (¬C), which we add to our clauses;
R2 = (¬A ∨ B ∨ C) ∧ (¬B ∨ A) ∧ (¬C ∨ A) ∧ C ∧ ¬A ∧ ¬C
Step 2: Pick the clauses C and the newly formed (¬C), which are complementary literals, and resolve them.
Our resulting clause is the empty clause, which means it is false. We then conclude, based on proof by contradiction, that R1 is unsatisfiable and therefore the sentence (R0 |= A) is true. Note that the extra fact C matters: without it, the model in which A, B and C are all false would satisfy the first three clauses, so A would not be entailed.
Resolution programmatically
Pseudo-code:
function RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic

  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← { }
  loop do
      for each pair of clauses C[i], C[j] in clauses do
          resolvents ← RESOLVE(C[i], C[j])
          if resolvents contains the empty clause then return true
          new ← new ∪ resolvents
      if new ⊆ clauses then return false
      clauses ← clauses ∪ new
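A direct Python translation of this pseudo-code is sketched below; the frozenset-of-strings clause representation and the helper names (negate, resolve) are my own choices, repeated from the earlier sketch so the block is self-contained. It is run on the worked example above, both with and without the extra fact C.

from itertools import combinations

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of string literals)."""
    out = set()
    for lit in ci:
        if negate(lit) in cj:
            out.add(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return out

def resolution(kb_clauses, query_literal):
    """Return True if the KB (given as CNF clauses) entails the query literal:
    add the negated query and resolve until the empty clause appears (entailed)
    or no new clauses can be produced (not entailed)."""
    clauses = set(kb_clauses) | {frozenset({negate(query_literal)})}
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            resolvents = resolve(ci, cj)
            if frozenset() in resolvents:   # empty clause: contradiction found
                return True
            new |= resolvents
        if new <= clauses:                  # nothing new was derived
            return False
        clauses |= new

# The worked example: R0 together with the fact C entails A ...
r0 = [frozenset({"~A", "B", "C"}), frozenset({"~B", "A"}), frozenset({"~C", "A"})]
print(resolution(r0 + [frozenset({"C"})], "A"))   # True
# ... while R0 on its own does not entail A.
print(resolution(r0, "A"))                        # False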
References
- Introduction to Artificial Intelligence with Python
- Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition (Global Edition), Pearson Education (2016), p. 234