Predicate Logic: The Logic of Quantifiers and Variables
If that title sounds like gibberish to you, you’re in good company.
It sounds like math.
But in many respects the key to understanding predicate logic is to understand its relationship to mathematics and mathematical reasoning. Too bad this relationship is almost entirely obscured in most introductory logic texts.
In the textbooks, predicate logic is presented as a synthesis and extension of previous developments in logic.
There’s some truth to this. Predicate logic combines elements of Aristotelian categorical logic and propositional logic in a way that creates a logical system that is far more expressive and powerful than either system separately.
Predicate logic resembles categorical logic in that it explicitly formalizes the internal parts of a proposition (subject terms and predicate terms). It includes all of categorical logic, but it is more general:
- It can handle all combinations of quantifiers (“all”, “some” and “none”)
- It can handle conjunctions (“and”), disjunctions (“or”), conditionals (“if-then”) and biconditionals (“if and only if”), the meat and potatoes of propositional logic
- It can represent predicates that express relations between two or more individuals (e.g. “x is the father of y”), whereas Aristotle’s system is limited to one-place predicates that apply to a single subject.
In predicate logic, for example, it’s easy to symbolize a sentence like “every boy loves some girl”, or even “every boy loves some girl who loves some boy who loves some girl”. Sentences like these are impossible to symbolize in Aristotelian categorical logic.
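To make this concrete, here is the textbook-standard symbolization of “every boy loves some girl” in modern notation (the predicate names are my own labels):

```latex
\forall x \, \bigl( \mathrm{Boy}(x) \rightarrow \exists y \, ( \mathrm{Girl}(y) \land \mathrm{Loves}(x, y) ) \bigr)
```

Read aloud: for every x, if x is a boy, then there is some y such that y is a girl and x loves y. The two-place relation Loves(x, y), nested inside a pair of quantifiers, is exactly the kind of structure categorical logic cannot express.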
Still, linguistics students may question the relevance of predicate logic for understanding language, since the formalization scheme that it uses doesn’t look like the grammar of any natural language they’re familiar with.
There’s a reason for that. It comes back to my second take-away point in this whole discussion about logic and its relationship to language: the development of formal logic was motivated by a desire to understand the nature of deductive reasoning and deductive proof, not the logical structure of natural language.
We see this most starkly in the development of predicate logic, which originates with the work of the German mathematician/logician/philosopher Gottlob Frege (1848-1925).
Frege wanted to show that mathematics could be derived from logic alone (a program now known as logicism). He invented a formal system that borrows the language of "functions" and "arguments" from mathematics (in the expression f(x) = y, f is the function and x is the argument that serves as input to the function).
Frege’s main insight was to show that you could model the compositionality of language — how the meanings of whole sentences are built up out of the meanings of the parts — by treating some expressions as functions that can apply to the referents of other expressions.
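The function-and-argument idea can be sketched concretely. Here is a minimal toy model in Python (the domain, names, and predicates are all invented for illustration, and the bookkeeping is modern model theory rather than Frege's own two-dimensional notation): predicates become functions from individuals to truth values, and quantifiers become higher-order functions over those predicate-functions.

```python
# A toy Fregean model: one-place predicates are functions from
# individuals to truth values, relations are functions from pairs
# of individuals to truth values, and quantifiers are functions
# that take a predicate-function and return a truth value.
# All names and facts here are invented for illustration.

domain = {"ann", "bob", "carl"}

# One-place predicate: Boy(x)
def boy(x):
    return x in {"bob", "carl"}

# Two-place relation: Loves(x, y)
loves_pairs = {("bob", "ann"), ("carl", "ann")}
def loves(x, y):
    return (x, y) in loves_pairs

# Quantifiers as higher-order functions over the domain
def forall(pred):
    return all(pred(x) for x in domain)

def exists(pred):
    return any(pred(x) for x in domain)

# "Every boy loves someone": forall x (Boy(x) -> exists y Loves(x, y))
# The conditional Boy(x) -> ... is rendered as (not Boy(x)) or ...
every_boy_loves_someone = forall(
    lambda x: (not boy(x)) or exists(lambda y: loves(x, y))
)
print(every_boy_loves_someone)  # True in this toy model
```

The point of the sketch is compositionality: the truth value of the whole sentence is computed by applying functions to the values of its parts, which is the core of Frege's insight.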
The details of this system don’t matter for us right now. What matters is that with this system Frege provided an analysis of quantified statements and formalized the notion of a “proof” in terms that are still accepted today.
With this formal apparatus in place, Frege was able to rewrite and analyze mathematical statements in terms of simpler logical and mathematical notions.
In Frege’s logical system, for example, it’s possible to symbolize statements of Euclidean geometry, like "the interior angles of a triangle sum to 180 degrees", and statements of number theory, like “there are an infinite number of prime numbers”. All of this is impossible with ordinary categorical logic.
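To give a flavor of this (my own symbolization, in modern notation rather than Frege's script), “there are an infinite number of primes” can be rendered as the claim that every number is exceeded by some prime:

```latex
\forall n \, \exists p \, \bigl( \mathrm{Prime}(p) \land p > n \bigr)
```

Note that this requires quantifying over one variable inside the scope of another, plus the relation p > n, neither of which categorical logic can represent.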
This was a Really Big Deal for foundational work in mathematics and logic. It revolutionized the field.
Frege also wrote a great deal about the nature of meaning and language, and in many respects his work laid the foundation for what we now call the “analytic approach” to philosophy, which really began as a response to Frege’s writings on meaning, reference and language.
This work had a huge impact on philosophy, but comparatively little impact on linguistics, precisely because it was so divorced from the empirical study of real-world natural languages.
Predicate logic was first and foremost a theory of formal languages. Its application to natural language has always been a subject of dispute. Some analytic philosophers ran with it, others criticized it, and linguists largely ignored it.
Well, they ignored it until Richard Montague’s pioneering work on formal semantics of natural language in the 1960s and 70s. Montague’s research program is well beyond the scope of what I want to talk about here, but it was a wake-up call to linguists that formal logic might be a useful tool after all. It inspired a lot of innovative work in formal semantics in linguistics departments (largely outside of philosophy departments, though in Europe this seems to be more integrated with logic and philosophy departments).
However, none of Montague’s work, or other work in formal semantics by linguists, shows up in the introductory logic classes that linguistics majors are forced to take as part of their undergraduate degree program. The details are certainly too advanced for such classes, but this material isn’t even mentioned in the textbooks. And probably only 1 in 50 instructors who teach those courses have even heard of it, since it falls outside the scope of the baseline instruction in logic that philosophy graduate students receive.
I’m convinced that the main reason most linguistics departments require a symbolic logic course for their majors is that it imparts just enough logic literacy that students who want to read further on the topic, or pursue formal semantics in graduate school, at least have some familiarity with logical vocabulary and the idea of a formal language.
But that’s little comfort to those majors who show up year after year in logic classes and leave just as confused about why they were there as when they started.