Even if we grant all the points we just made about the logical structure of language, a skeptical linguistics major can still object that most of what they’re actually taught in an introductory symbolic logic class isn’t directly relevant to understanding natural language.
It’s a reasonable objection. Many students finish a full course in symbolic logic and come away unsure of how the course was relevant to understanding language.
There’s a reason for this state of confusion; it’s not just a sign of bad teaching.
The reason is that the questions that formal logic was developed to answer are not primarily about the logical structure of natural language.
Let’s elaborate on this.
In a typical first course in symbolic logic you’re introduced to three different systems of logic: categorical (or Aristotelian) logic, propositional logic, and predicate logic.
In each system you learn how to translate natural language statements into a formal symbolic language, and (among other things) learn how to represent arguments and test for their validity within a given language.
For example, a natural language statement like “cows are mammals” would be represented as “All C are M” in categorical logic.
In propositional logic you would just use a single capital letter to represent the statement “cows are mammals”, because propositional logic is indifferent to the subject-predicate structure of language — it only deals with logical relationships between sentences taken as wholes.
In predicate logic the expression “cows are mammals” would be represented as “(x)(Cx ⊃ Mx)”. You can read this as “for any individual x, if x is a cow then x is a mammal”.
Most of your time in a symbolic logic class will be devoted to learning techniques for answering questions like these, working within each of these logical systems: How do you translate a given statement into the formal language? Is a given argument valid? Is a given set of statements consistent? Most of the pages in a symbolic logic textbook are devoted to teaching techniques for answering just these questions.
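To see what “testing for validity” amounts to in practice, here is a minimal sketch in Python of the truth-table method for propositional logic. The function name `is_valid` and the lambda encoding of statements are my own illustrative choices, not a standard from any textbook: an argument is valid just in case no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "a implies b" is false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, num_vars):
    """Return True iff no truth-value assignment makes every premise
    true while making the conclusion false (the truth-table test)."""
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid)
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q, 2))   # True

# Affirming the consequent: Q, P -> Q, therefore P  (invalid)
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p, 2))   # False
```

The brute-force enumeration over all 2^n rows is exactly what you do by hand when you fill in a truth table, which is why the method quickly becomes tedious in class and why courses move on to proof techniques.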
But these questions reflect the historical preoccupations of philosophers and logicians, not linguists.
Here is the reality: formal logic was developed as a tool for understanding the nature of deductive reasoning and deductive proof. Its subject matter is the nature of logical truth and logical inference, not the structure of natural language.
To be more specific, these are the sorts of questions that have driven the development of formal logic, especially over the past couple of hundred years: What is logical truth, and how does it differ from other kinds of truth? What makes a deductive argument valid? What counts as a rigorous proof, and can mathematics be given secure logical foundations? And how is the logical form of a statement related to its grammatical form in natural language?
We see a connection to language in that last question, but the connection is indirect, driven by the primary philosophical goal of acquiring a deep understanding of the nature of deductive reasoning.
Now, it turns out that formal logic is very useful for investigating the logical structure of natural language. But this good fortune is best viewed as a byproduct or application of formal logic.
The intentional use of formal methods to model the semantics of natural language — formal logic in the service of linguistics — is really a mid- to late-20th century development. Most philosophers who teach introductory symbolic logic aren’t familiar with formal semantics in linguistics, and introductory textbooks don’t feature these developments. Within philosophy it’s viewed as a specialized area that overlaps the philosophy of language and linguistics. More advanced courses in logic sometimes cover topics in this area, but formal semantics is usually taught only at the graduate level in more specialized programs.
Let’s turn now to the third and final point I wanted to make about the relationship between logic and language.