"

Extending our approach to sets of formulas

Extending our approach to sets of formulas

Students of logic are usually taught at first about tautologies, contradictions, and contingent propositions. However, these are not the notions you will encounter in more advanced, and more modern, works. This is at least partly because these notions are usually applied to individual formulas, while much of the time, we want to talk about sets of formulas instead.

One thing that we can immediately extend to sets of formulas is contingency. Sometimes, it is not just one formula that has a model, but a whole set of them. Indeed, perhaps an infinitely large set of them! In these cases, we sometimes say that the set $S$ of formulas is (zeroth-order) satisfiable or semantically consistent. This can be captured in a precise manner as follows:

Definition 5.3 (Satisfiability). A set $S$ of formulas of $\mathcal{L}_0$ is (zeroth-order) satisfiable or semantically consistent iff there is a structure $\mathbf{S}$ such that $\mathbf{S} \models X$ for each $X \in S$. We say $S$ is (zeroth-order) unsatisfiable or semantically inconsistent provided it is not semantically consistent.
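
To make the definition concrete, here is a minimal Python sketch (mine, not the book's) of a brute-force satisfiability check. The tuple encoding of formulas and the names `models` and `satisfiable` are illustrative assumptions; a structure is simply an assignment of truth values to the propositional letters occurring in the formulas.

```python
from itertools import product

# Formulas: a propositional letter is a string; compound formulas are tuples
# ('not', A), ('and', A, B), ('or', A, B), ('implies', A, B).

def letters(formula):
    """Collect the propositional letters occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(letters(part) for part in formula[1:]))

def models(structure, formula):
    """True iff the structure (a dict of truth values) makes the formula true."""
    if isinstance(formula, str):
        return structure[formula]
    op = formula[0]
    if op == 'not':
        return not models(structure, formula[1])
    if op == 'and':
        return models(structure, formula[1]) and models(structure, formula[2])
    if op == 'or':
        return models(structure, formula[1]) or models(structure, formula[2])
    if op == 'implies':
        return (not models(structure, formula[1])) or models(structure, formula[2])
    raise ValueError(f"unknown connective: {op}")

def satisfiable(formulas):
    """True iff some structure makes every formula in the (finite) set true."""
    formulas = list(formulas)
    atoms = sorted(set().union(*(letters(f) for f in formulas)))
    for values in product([True, False], repeat=len(atoms)):
        structure = dict(zip(atoms, values))
        if all(models(structure, f) for f in formulas):
            return True
    return False
```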

Remark 5.5. Note that the above definition is not the same as claiming that for each $X \in S$, there is a structure $\mathbf{S}$ such that $\mathbf{S} \models X$. This would not make the set consistent; it would only mean that each formula has a model individually. For example, if $X$ is contingent, then $\neg X$ is also contingent, so both $X$ and $\neg X$ have a model. But clearly, $\{X, \neg X\}$ is not consistent, since there is no structure in which both of these could be true.
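
The point is easy to check with the `satisfiable` sketch above, taking a bare propositional letter as our contingent formula:

```python
X = 'X'                                  # a propositional letter; contingent on its own
print(satisfiable([X]))                  # True: X has a model
print(satisfiable([('not', X)]))         # True: ¬X also has a model
print(satisfiable([X, ('not', X)]))      # False: no structure makes both true
```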

Here is a useful way to think about consistency and contingency:

Proposition 5.11. The formula $X$ is contingent iff the set $\{X\}$ is semantically consistent.

Exercise 5.4. Prove Proposition 5.11. Hint: show from left to right that if $X$ is contingent, $\{X\}$ is consistent, and from right to left that if $\{X\}$ is consistent, $X$ is contingent. To do this, use the definitions for these notions.

Here is an interesting fact about consistency and contradictions:

Proposition 5.12. If $S = \{X_1, \dots, X_n\}$, and some $X \in S$ is a contradiction, then $S$ cannot be semantically consistent.

Proof. In other words, a single contradiction can make a set of formulas inconsistent. This is easy to see, since semantic consistency demands that there be at least one structure $\mathbf{S}$ in which each member of $S$ is true. Now if $X$ is a contradiction, there is no structure in which it is true, and $X \in S$ by assumption. ◻
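
Continuing the illustrative sketch, a single contradiction such as $X \wedge \neg X$ indeed sinks an otherwise harmless set:

```python
X = 'X'
print(satisfiable([('and', X, ('not', X)), 'Y', ('or', 'Z', 'Q')]))   # False
```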

On the other hand, we have:

Proposition 5.13. If $S$ is a set of tautologies, $S$ is semantically consistent.

Exercise 5.5. The proof is left as an exercise.

Validity

Here comes something magical. So far, we talked about certain sets of formulas that are satisfiable, and certain sets of formulas that are unsatisfiable. And this boiled down to whether the formulas in the set have a common model or not. Now as it turns out, satisfiability and unsatisfiability are sufficient to define the central notion of logic: validity.

This may not be apparent at first. Take the usual definition of validity:

(A1) An argument is valid iff whenever the premises are true, the conclusion must also be.

It seems like this has absolutely nothing to do with whether sets of formulas are satisfiable. Not so fast!

What is an argument? We can define it as follows:

Definition 5.4. An argument is a set of formulas, of which one is designated the conclusion, and the rest are designated the premises.

This could be formalized with more structure, using ordered sets and whatnot, but what I want to emphasize here is that arguments are just sets of formulas with some designated members.

Let’s return to validity. Validity says that if an argument is valid, then whenever its premises are true, its conclusion must also be true. You may ask yourself: is it possible for a valid argument’s premises to be true, but its conclusion to be false? It seems not, since that is exactly what validity excludes. It says that if a valid argument’s premises happen to be true, then whatever else is the case, the conclusion must also be true. Thus, we can also characterize validity as follows:

(A2) An argument is valid iff it is impossible for its premises to be true, but its conclusion to be false.

Note the word ‘impossible’ in italics. What does it mean that it is impossible for the premises to be true and the conclusion to be false? Well, it just means: there cannot be a structure $\mathbf{S}$ such that $\mathbf{S}$ models the premises, but $\mathbf{S}$ does not model the conclusion. Let’s use some specific variables to make this more precise.

Suppose you have an argument with a (possibly infinite) set of premises $P = \{X_1, X_2, \dots\}$ and conclusion $Y$. Then, by the above, if this argument is valid, that means there is no structure $\mathbf{S}$ such that $\mathbf{S} \models X$ for all $X \in P$, but $\mathbf{S} \not\models Y$ (the conclusion). We also know that $\mathbf{S} \not\models Y$ iff $\mathbf{S} \models \neg Y$, so we can say: if an argument is valid, that means there is no structure $\mathbf{S}$ such that $\mathbf{S} \models X$ for all $X \in P$, and $\mathbf{S} \models \neg Y$. Again, this is just a formal way of saying it is impossible for the premises to be true but the conclusion to be false (or equivalently, it is impossible for the premises to be true, and the negation of the conclusion to also be true).

But note that now we really are talking about unsatisfiability! For what we just described may simply be put: the set $\{X_1, X_2, \dots, \neg Y\}$ is unsatisfiable. And this is, indeed, what validity boils down to. In particular, an argument is valid if, and only if, its premises, together with the negation of its conclusion, are unsatisfiable. And again, all this just means that it is impossible for a valid argument to have true premises and a false conclusion. Thus:

Definition 5.5 (Validity). Let $A = \{X_1, X_2, \dots\}$ be any set of formulas and let $Y$ be any formula of $\mathcal{L}_0$. Then, the argument from premises $A$ to the conclusion $Y$ is valid iff the set $\{X_1, X_2, \dots, \neg Y\}$ is not satisfiable. If an argument from $A$ to $Y$ is valid, we write $A \models Y$, or say that $Y$ is a semantic consequence of, or semantically entailed by, $A$.
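
In the illustrative Python sketch from above (still my own encoding, not the book's), the definition translates directly: to test semantic entailment, check whether the premises together with the negated conclusion form an unsatisfiable set.

```python
def semantically_entails(premises, conclusion):
    """A |= Y iff the premises together with the negated conclusion have no common model."""
    return not satisfiable(list(premises) + [('not', conclusion)])

# For example, {X, X → Y} semantically entails Y, while X ∨ Y does not entail X:
X, Y = 'X', 'Y'
print(semantically_entails([X, ('implies', X, Y)], Y))   # True
print(semantically_entails([('or', X, Y)], X))           # False
```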

Remark 5.6. Indeed, from this, we get an alternative definition of being a tautology for free. For note that the premise set $A$ was not required to be non-empty. So $A$ may be $\emptyset$. So by definition, $\emptyset \models Y$ iff $\{\neg Y\}$ is unsatisfiable. And $\{\neg Y\}$ is unsatisfiable iff $Y$ is a tautology. In such cases, we can just write $\models Y$, as we have been doing.
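
In the same illustrative sketch, the empty premise set therefore gives a tautology test for free:

```python
def is_tautology(formula):
    """|= Y iff the empty premise set entails Y, i.e. {¬Y} is unsatisfiable."""
    return semantically_entails([], formula)

X = 'X'
print(is_tautology(('or', X, ('not', X))))   # True: X ∨ ¬X is a tautology
print(is_tautology(X))                       # False: a bare letter is merely contingent
```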

Exercise 5.6. Show that the following hold in zeroth-order logic:

  1. $\neg\neg X \models X$ and $X \models \neg\neg X$;

  2. $\neg X \wedge \neg Y \models \neg(X \vee Y)$ and $\neg(X \vee Y) \models \neg X \wedge \neg Y$;

  3. $\neg X \vee \neg Y \models \neg(X \wedge Y)$ and $\neg(X \wedge Y) \models \neg X \vee \neg Y$;

  4. $\neg X \vee Y \models X \rightarrow Y$ and $X \rightarrow Y \models \neg X \vee Y$;

  5. $X \rightarrow Y \models \neg Y \rightarrow \neg X$ and $\neg Y \rightarrow \neg X \models X \rightarrow Y$.

Validity and tableau

From the above, you may already see how reasoning with sets of formulas is implemented in our system. With single formulas, we had that if every possible tree for $\neg X$ remains open, that must mean $\neg X$ is satisfiable (i.e., it is a tautology or contingent), which in turn ensures that $X$ is not a tautology. On the other hand, if there is a possible tree for $\neg X$ that closes (closes on all its branches), then $X$ is indeed a tautology, since its negation is not satisfiable. Similarly, but with sets of formulas, if some tree for $\{X_1, X_2, \dots, \neg Y\}$ closes (on all its branches), then $\{X_1, X_2, \dots, \neg Y\}$ is not satisfiable, which in turn means that $\{X_1, X_2, \dots\}$ entails $Y$. On the other hand, if no tree for $\{X_1, X_2, \dots, \neg Y\}$ closes, that means the set is satisfiable, and $\{X_1, X_2, \dots\}$ does not entail $Y$.

The only question is: how do we work with multiple formulas, when in the simpler case, we put the sole formula $\neg X$ at the root of the tree (when trying to prove $X$ is a tautology)? First, with arguments, we put the negation of the conclusion at the root of the tree. So if we are considering whether the set $S = \{X_1, X_2, \dots\}$ of premises is such that $S \vdash Y$, we put $\neg Y$ at the root of the tree.

The other change is more substantial. We introduce a very simple new rule into the system that enables us to inject any one of the premises at any part of our derivation. Thus:

[Figure: a two-line proof schema: 1. $X$; 2. $Y$ (Pr.)]

The premise rule

Note that $X$ and $Y$ here are completely independent of one another. $X$ may be any formula that you happen to consider at any one point of your proof, and $Y$ can be any formula as long as it is one of the premises under consideration. Accordingly, the premise rule is radically different from all the previous rules, since it does not ‘depend’ on any formula at any one point of the tree. In other words, you can put any premise down at any one point of your derivation, and in fact, as many times as you want.

The relationship between the premise rule and all the other rules of our tableau system is similar to the relationship between the base rule and the productive rule(s) in our syntactic derivations.

Suppose you want to show that $\{X, X \rightarrow Y\} \vdash Y$. This is usually called Modus Ponens, and it is a valid argument form. In other words, for any choice of $X$ and $Y$, it is true that $\{X, X \rightarrow Y\} \models Y$. So let’s show the corresponding fact syntactically.

As noted above, the first step here is to put the negation of the conclusion at the root of the tree derivation. Since $Y$ is the conclusion, the tree will start out like this:

[Tableau: 1. $\neg Y$ (Start)]

Now unlike with tautologies, just having the negation of the conclusion gets us nowhere (if the conclusion is not itself a tautology). In fact, in this case, there is no rule you can use on $\neg Y$ to get further. On the other hand, what you can do is use the premise rule. There are two premises, so you have a choice. It is not always the case that all the premises are used, and in fact, the order in which they are used is also not predetermined, so there is a degree of freedom here.

Let’s start with the more complex premise:

[Tableau: 1. $\neg Y$ (Start); 2. $X \rightarrow Y$ (Pr.)]

Great! You should now see that you can use one of your other rules on the new formula. Thus:

[Tableau: 1. $\neg Y$ (Start); 2. $X \rightarrow Y$ (Pr.); 3. the tree branches to $\neg X$ and $Y$ (from 2)]

You may have immediately noticed that the right side branch closes by the occurrence of $Y$ and $\neg Y$ on it. Thus, we have:

[Tableau: 1. $\neg Y$ (Start); 2. $X \rightarrow Y$ (Pr.); 3. branches to $\neg X$ and $Y$ (from 2); the $Y$ branch closes by lines 1 and 3]

On the other hand, you might also see that the left hand side branch of the tree remains open, and there are no formulas left on which we might apply a rule. Thus, we have to look for another premise. (Not) incidentally, our other premise, $X$, is just what we need. In particular:

[Tableau: 1. $\neg Y$ (Start); 2. $X \rightarrow Y$ (Pr.); 3. branches to $\neg X$ and $Y$ (from 2), the $Y$ branch closing by 1 and 3; 4. $X$ (Pr.) on the $\neg X$ branch, which then closes by 3 and 4]

Accordingly, we can conclude that $\{X, X \rightarrow Y\} \vdash Y$.

Definition 5.6. Let $A = \{X_1, X_2, \dots\}$ be any set of formulas and let $Y$ be any formula of $\mathcal{L}_0$. Then, the set of premises $A$ syntactically entails $Y$, or $Y$ is a syntactic consequence of $A$, iff there is a closed tree with $\neg Y$ at its origin, and with possible uses of the rule Pr. with any member of $A$. In such cases, we write $A \vdash Y$, and call the closed tree a proof of $Y$ from the premise set (or simply, premises) $A$.
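
To see the whole procedure at a glance, here is a minimal Python sketch of a propositional tableau test (my own illustrative code, not the book's official system; names like `tableau_closes` and `derives` are assumptions). For simplicity it places all the premises on the branch from the start, which has the same effect as injecting them with Pr. as needed, and it reports whether every branch of the resulting tree closes.

```python
# Formulas use the same illustrative encoding as before: letters are strings,
# compounds are tuples ('not', A), ('and', A, B), ('or', A, B), ('implies', A, B).

def tableau_closes(branch):
    """True iff every branch of a tableau grown from these formulas closes."""
    branch = list(branch)
    # A branch closes when it contains some formula together with its negation.
    if any(('not', f) in branch for f in branch):
        return True
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if not isinstance(f, tuple):
            continue                                    # a bare letter: no rule applies
        op = f[0]
        if op == 'and':                                 # A ∧ B: add both conjuncts
            return tableau_closes(rest + [f[1], f[2]])
        if op == 'or':                                  # A ∨ B: branch on the disjuncts
            return tableau_closes(rest + [f[1]]) and tableau_closes(rest + [f[2]])
        if op == 'implies':                             # A → B: branch to ¬A and B
            return (tableau_closes(rest + [('not', f[1])]) and
                    tableau_closes(rest + [f[2]]))
        g = f[1]                                        # here op == 'not'
        if not isinstance(g, tuple):
            continue                                    # a negated letter: no rule applies
        if g[0] == 'not':                               # ¬¬A: add A
            return tableau_closes(rest + [g[1]])
        if g[0] == 'and':                               # ¬(A ∧ B): branch to ¬A and ¬B
            return (tableau_closes(rest + [('not', g[1])]) and
                    tableau_closes(rest + [('not', g[2])]))
        if g[0] == 'or':                                # ¬(A ∨ B): add ¬A and ¬B
            return tableau_closes(rest + [('not', g[1]), ('not', g[2])])
        if g[0] == 'implies':                           # ¬(A → B): add A and ¬B
            return tableau_closes(rest + [g[1], ('not', g[2])])
    return False                                        # only literals left, no contradiction: open

def derives(premises, conclusion):
    """A ⊢ Y: the tree starting from ¬Y, with the premises available, closes."""
    return tableau_closes([('not', conclusion)] + list(premises))

# The Modus Ponens example from above: {X, X → Y} ⊢ Y.
X, Y = 'X', 'Y'
print(derives([X, ('implies', X, Y)], Y))   # True
```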

Let’s look at a more complicated proof before it is your turn. Suppose you want to evaluate the following argument:

(P1) $X \vee \neg Y$
(P2) $\neg\neg Y$
(C) $(Q \vee Z) \rightarrow X$

As a first step, like before, we put the negation of the conclusion at the root of our tree. Thus:

[Tableau: 1. $\neg((Q \vee Z) \rightarrow X)$ (Start)]

Note that we are negating the whole formula, so you have to provide the outer parentheses if they were omitted previously.

It is a good idea to break this down a bit more immediately, since it won’t branch.

[Tableau: 1. $\neg((Q \vee Z) \rightarrow X)$ (Start); 2. $Q \vee Z$ (from 1); 3. $\neg X$ (from 1)]

As before, there are various ways to proceed, some shorter than others. In particular, we could start breaking down $Q \vee Z$. However, this would make the proof longer than necessary. Once you are more comfortable with how the system works, you will be able to see this.

For now, let’s proceed by adding $\neg\neg Y$ with our premise rule. We could immediately simplify it further, but for now, we will leave it as is. So we have:

[Tableau: 1. $\neg((Q \vee Z) \rightarrow X)$ (Start); 2. $Q \vee Z$ (from 1); 3. $\neg X$ (from 1); 4. $\neg\neg Y$ (Pr.)]

Then, we can add the other premise to our tree, again, non-branching:

[Tableau: 1. $\neg((Q \vee Z) \rightarrow X)$ (Start); 2. $Q \vee Z$ (from 1); 3. $\neg X$ (from 1); 4. $\neg\neg Y$ (Pr.); 5. $X \vee \neg Y$ (Pr.)]

We now have a choice of where to branch. But doing the proof briefly in our head, we immediately see that if we branch to $X$ and $\neg Y$, our tree immediately closes by $\neg X$ and $\neg\neg Y$. On the other hand, if we branch to $Q$ and $Z$, nothing happens, and we still need to branch further to $X$ and $\neg Y$ to close the tree. And in fact, in the latter case, we have to do it twice, resulting in four branches in the end.

So the simplest way is to go immediately to:

[Figure: the resulting closed tableau, branching on line 5 to $X$ and $\neg Y$, with both branches closed; 7 formula occurrences in total]

Again, you can compare this to:

[Figure: a longer closed tableau for the same argument, with 11 formula occurrences]

And in fact, if you branched immediately at the beginning as many times as possible, and applied every rule as many times as possible, you would have had this:

[Figure: a fully expanded closed tableau for the same argument, with 22 formula occurrences]

Now this may not look that bad on paper, but notice that the first proof had 7 occurrences of formulas, the second had 11, while the last one had 22! On the other hand, none of these proofs are incorrect. It is just that they become less and less elegant.
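
For concreteness, the same argument can be fed to the illustrative `derives` sketch from above (same hypothetical encoding; the variable names are mine):

```python
Q, Z, X, Y = 'Q', 'Z', 'X', 'Y'
premises = [('or', X, ('not', Y)),         # (P1)  X ∨ ¬Y
            ('not', ('not', Y))]           # (P2)  ¬¬Y
conclusion = ('implies', ('or', Q, Z), X)  # (C)   (Q ∨ Z) → X
print(derives(premises, conclusion))       # True: the tableau closes
```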

On soundness and completeness, again

Note that we can now reformulate our soundness and completeness claims from above in a stronger form. Previously, we had:

$\models X$ iff $\vdash X$

Again, from left to right, this is completeness, and from right to left, this is soundness. But we also have a stronger version of this, with multiple formulas, as opposed to singular ones. In particular:

$A \models X$ iff $A \vdash X$

Or in words, $A$ semantically entails $X$ iff $A$ syntactically entails $X$. And once again, from left to right, this is (strong) completeness, and from right to left, this is (strong) soundness (for zeroth-order logic).

We shall not prove this biconditional in this book, as it involves more advanced techniques than we are ready for. However, we can note one of its very interesting consequences.

Let’s negate both sides of the biconditional again, just as before. Then, we get:

$A \not\models X$ iff $A \nvdash X$

Again, we can try and unpack the two sides, and see what we get. First, if $A \nvdash X$, that means that $\neg X$ cannot have a closed tableau, no matter which members of $A$ are used in any of the attempted trees. In other words, all the trees starting with $\neg X$ will remain open, no matter how long they might be.

On the other side, we have that $A \not\models X$, i.e., that $A$ does not semantically entail $X$. Now previously, we defined semantic entailment in terms of unsatisfiability. In particular, $A$ semantically entails $X$ provided the set $A$ together with $\neg X$ form a set (precisely, $A$ plus $\neg X$) that is unsatisfiable. Accordingly, $A$ does not semantically entail $X$ provided $A$ together with $\neg X$ is not unsatisfiable. Or in other words, $A$ together with $\neg X$ is satisfiable.

Putting the above two results together, we then get that $A$ together with $\neg X$ is satisfiable iff $\neg X$ does not have a closed tableau from premises $A$.
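
Since the two Python sketches above implement the two sides of this biconditional (under my illustrative encoding), they should agree on any example we throw at them; a quick spot check:

```python
X, Y, Z = 'X', 'Y', 'Z'
examples = [
    ([('implies', X, Y), ('implies', Y, Z), ('not', Z)], ('not', X)),   # a valid argument
    ([('or', X, Y)], X),                                                # an invalid one
]
for premises, conclusion in examples:
    assert semantically_entails(premises, conclusion) == derives(premises, conclusion)
print("semantic and syntactic checks agree on these examples")
```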

Exercise 5.7. For each set of formulas $A$ and formula $B$ of $\mathcal{L}_0$ below, show that $A \vdash B$ using the tableau method.

  1. $A = \{X \rightarrow Y, Y \rightarrow Z, \neg Z\}$ and $B = \neg X$;

  2. $A = \{X \vee (Z \rightarrow Q), R \wedge Z, \neg Q\}$ and $B = X$;

  3. $A = \{\neg((X \vee \neg Y) \wedge (Z \wedge \neg Q))\}$ and $B = (\neg X \wedge Y) \vee (\neg Z \vee Q)$;

  4. $A = \{Z \rightarrow (X \wedge \neg X), \neg Q \vee Z, X \rightarrow Q\}$, $B = \neg X$;

  5. $A = \{(\neg X \wedge Y) \vee (Z \wedge \neg Q), \neg X \rightarrow \neg Y, Z \rightarrow Q\}$, $B = R$;

  6. $A = \{X \mid X \text{ is a formula of } \mathcal{L}_0\}$, $B = \neg Y$.

License


Learning Logic Backwards Copyright © by Peter Susanszky is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.