BRSQ, August 2005

What is Analysis?

John Ongley

Review of Michael Beaney, ‘Analysis’, Stanford Encyclopedia of Philosophy, 2003, http://plato.stanford.edu/entries/analysis/

Michael Beaney is writing a survey of philosophical analysis from ancient Greek philosophy through the 20th century. He has posted a first report of that work on the internet, in the form of an entry on analysis in the online Stanford Encyclopedia of Philosophy, and he will publish a book on the subject soon. Though Beaney’s survey covers the idea of philosophical analysis from Socrates to Soames, its main focus is on the types of analysis characteristic of 20th c. analytic philosophy. It is thus part of the movement for the history of early analytic philosophy that emerged in the late 1980s and early 1990s and is a major force on the philosophical scene today. That movement has only recently focused on the nature of analytic philosophy itself, that is, on the question of what analytic philosophy is and, in particular, of what philosophical analysis is. Beaney’s Stanford essay on ‘Analysis’ is at the forefront of this recent turn toward examining the nature of analytic philosophy historically; it has consequently drawn a great deal of attention from members of the new historical movement and is frequently cited by them on the subject. This review of Beaney’s online article will consider his account of philosophical analysis in each major historical period in philosophy.

I. SOME BASIC DEFINITIONS OF PHILOSOPHICAL ANALYSIS

Beaney groups the methods of analysis found throughout the history of philosophy into three major types: decompositional, regressive and interpretive. In general, he says, analysis breaks a concept or proposition down into elements that are used in synthesis to justify or explain it. Decompositional analysis breaks a concept or proposition down and resolves it into its components. Regressive analysis, which was invented by the ancient Greeks who modeled it on geometric methods of problem-solving, works back to first principles which can then be used in synthesis to demonstrate the truth of a proposition or meaning of a concept. Finally, interpretive analysis, which Beaney claims was used by Frege and Russell, first translates a statement or concept into correct logical form before resolving it into simple components. He claims that this interpretive form of analysis also has its roots in ancient Greek geometry and in medieval philosophy.

Beaney points out that several kinds of analysis are typically going on at once in any actual analysis; for example, a regressive analysis can also be a kind of decomposition and at the same time a kind of interpretive analysis. He also notes that philosophers often practiced some form of analysis without ever using the term, as in the case of Socratic analysis, even though the term ‘analysis’ never occurs in a Platonic dialogue, and that fields outside of philosophy have their own notions of analysis, such as cost-benefit analysis, functional analysis, systems analysis, and psychoanalysis, though he does not rule out the possibility that these may be related to philosophical analysis in some way.

II. GREEK AND MEDIEVAL PHILOSOPHICAL ANALYSIS

Among the ancient Greeks, the term ‘analysis’ was first used in the regressive sense to refer to the method of working backwards from a desired conclusion to first causes and principles. This method was modeled on the geometric method of solving problems or arriving at conclusions by breaking them down to known principles by which they can then be proved. Note that such analysis is also a decomposition into simpler parts, as well as a kind of interpretation that transforms what is being analyzed into different kinds of parts and concepts. Such geometric analysis influenced Plato and Aristotle, but Socrates’ concern with real definitions and essences is thought to have been a separate influence on Plato and Aristotle. In this latter method, Socrates typically asks for the definition of some concept, and then analyzes attempted definitions or examples of it or beliefs about it with a method of dialogue and questioning in order to arrive at its meaning. It is here, Beaney says, that the roots of modern conceptual analysis are to be found.

It is commonly claimed that philosophy has for most of its history been armchair theorizing – that it is apriori reasoning about the world. On this assumption, it is sometimes claimed without further argument or examination that methods of analysis used by these philosophers must likewise be apriori. Thus, analytic philosophers, who usually claim that their own method is apriori, will often also claim that what they are doing in analysis is simply what the ancient Greeks and all good philosophers since have done, namely, analyzing concepts with an apriori method. Plato himself helped foster this image of the Socratic method as apriori when he had Socrates say about the dialectical method in the Republic that “a person starts on the discovery of the absolute by the light of reason only, and without any assistance of sense, and perseveres until by pure intellect he arrives at the perception of the absolute good….”

To judge these claims that analysis is an apriori method, we must consider how those methods work in detail. Take, for example, the following argument from the Euthyphro where Socrates uses his dialectical method of analysis with Euthyphro to find a definition of ‘piety’:

SOCRATES: …what sort of difference creates enmity and anger? Suppose, for example, that you and I, my good friend, differ about a number; do differences of this sort make us enemies and set us at variance with one another? Do we not go at once to calculation and end them by a sum?
EUTHYPHRO: True.
SOCRATES: Or suppose that we differ about magnitudes, do we not quickly put an end to that difference by measuring?
EUTHYPHRO: That is true.
[…]
SOCRATES: But what differences are those which, because they cannot be decided, make us angry and set us at enmity with one another? … I will suggest that this happens when the matters of difference are the just and unjust, good and evil, honorable and dishonorable. Are not these the points about which, when differing, and unable satisfactorily to decide our differences, we quarrel, when we do quarrel, as you and I and all men experience?
EUTHYPHRO: Yes, Socrates, that is the nature of the differences about which we quarrel.2

Here, contrary to the common view, and even to Plato’s own stated view, we can see that Socrates uses many empirical assumptions in philosophical analysis, as when he appeals to experience to know which differences do and do not cause us to be angry; he even says that he knows these things by experience, so his method cannot be apriori. By applying the same scrutiny to 20th c. methods, when we get there, we can test similar claims made for them.

Beaney does not claim that any philosophical method is apriori—in fact, he does not consider whether they are apriori or not. What he does, for the most part, is describe various instances of analysis as ‘regressive’, ‘decompositional’, or ‘interpretive’. But by simply attaching one of these labels to a method of analysis, we do not learn the details of how the method works, and it is the details that will tell us such things as whether it is empirical or apriori, that is, whether or not empirical propositions must be assumed in order to analyze some concept or proposition. With his own approach to analysis, Beaney cannot answer such questions. This is the major limitation of his approach.

While methods of analysis in the medieval and renaissance periods tended to be mixes of earlier forms, with an emphasis on the geometrical concept of analysis and synthesis, Beaney claims that an original conception of an interpretive analysis emerged in the late medieval period that anticipated 20th c. forms of analysis.3 Beaney takes the process of minimizing, or at least revealing, our ontological commitments by transforming one concept into a set of other concepts to be a central form of 20th c. analysis. It is this method that was anticipated by medieval scholastics, in particular by Ockham, with his eponymous razor, and by Buridan, who practiced philosophy with Ockham’s razor.

Buridan’s notion of nominal definition, where expressions are clarified by explaining what the expression means, is one such anticipation of modern analysis. Medievals especially used this notion to explicate the logic of statements containing ambiguous quantifiers. The middle ages were thus both a reworking of ancient ideas and an anticipation of modern ones. During the renaissance, the general inclination was to repudiate scholastic logic, which led to a reduction of clarity among renaissance philosophers about the notion of analysis.

III. EARLY MODERN NOTIONS OF ANALYSIS

The major inspiration for early modern ideas of analysis was again the ancients’ geometrical notion of analysis, especially the Aristotelian version of it, which assimilated the process of going from theorems to axioms with the process of going from effects to causes. The early moderns thus viewed analysis “as a method of discovery, a working back from what is ordinarily known to the underlying reasons (demonstrating “the fact”), and synthesis as a method of proof, working forwards again from what is discovered to what needed explanation (demonstrating ‘the reason why’).”4 (Note the conflation of explanation and justification or proof in the account of synthesis; to some extent, this was typical of the early modern era, but Beaney also regularly conflates the two ideas of synthesis as proof and synthesis as explanation.) The Port Royal Logic, published in 1662 and probably the most influential work on methodology from then to the middle of the 19th c., supported this basic view of analysis as discovery and of synthesis as proof or explanation.

The authors of the Port Royal Logic claimed that their views on method were principally derived from Descartes’ Rules for the Direction of the Mind. Beaney thus devotes much of the discussion in the section on early modern philosophy to Descartes. Descartes relied mostly on the geometrical regressive model, with its emphasis on discovery and proof of principles and causes, but he also used decompositional analysis in his work, reducing something, especially a concept, to its simplest terms and dividing it into its smallest possible parts; this is then expressed in the form of a definition. Beaney sees a shift occurring during the early modern period of philosophy in general from the regressive model of analysis of finding principles and causes to the decompositional one of analyzing concepts and finding definitions.

Beaney devotes just one paragraph to Locke and does not mention Hume at all. This is unfortunate, because one of the major dramas in philosophy and psychology in the 19th century was the struggle between British associationists and Kantians over the correct nature of the analysis of concepts, with associationists following Locke and Hume in holding that all concepts are constructed from observations and experiences and can be analyzed entirely into these units, while Kantians argued that there are concepts that cannot be discovered in experience and that are necessary for the construction and definition of most other concepts (i.e., Kant’s categories, or Whewell’s “fundamental ideas”).

What Beaney does say about Locke is that Locke viewed all ideas as resolvable into simple ones that are copies of sense impressions, so that Locke’s method of analysis is ‘decompositional’: its aim is to provide an account of ideas by explaining how they arise, showing what simple ideas make up our complex ones, and distinguishing the various mental operations performed on them in generating what knowledge and beliefs we have. But again, we do not get the details of how this method is supposed to work. If we knew this, we could compare Lockean analysis to 20th c. analysis and see if they really are the same, or even similar, and thus test the claim of analytic philosophers such as A.J. Ayer and Richard Rorty that analytic philosophy is just a type of British empiricism. Beaney’s account does not go deep enough to answer such questions.

Leibniz, whose method of analysis Beaney also calls ‘decompositional’, is a major figure in Beaney’s history of analysis. Leibniz’s method of analysis rests on his principle of containment, the view that the predicate of every true affirmative proposition is contained in the subject whether the proposition is necessary, contingent, universal, or particular. Given this, the task of analysis for Leibniz is to make explicit the containment of the predicate in the subject of any proposed proposition. With such an analysis, the proposition can then be proved, by synthesis, to be true.

More specifically, Leibnizian analysis proceeds by using a series of definitions to analyze the subject of a proposition and reduce the proposition to an identity. Identities are, for Leibniz, self-evident truths. But again, we are not told how Leibniz thinks we know these definitions that reduce the proposition to an identity. Are they, at least in some cases, known empirically? For example, do we know the definition ‘a goose is a bird’ is true, while ‘a goose is a reptile’ is false, by examining geese? If so, then Leibniz’s method of analysis is an empirical one, at least sometimes, and a theory of meaning where we know the meanings of at least some concepts a posteriori is being presupposed. Are definitions of concepts all known apriori? If so, then the method is apriori and a theory of meaning where we know the meaning of terms apriori, as some philosophers claim, is being presupposed. Knowing how analysis works would in this way show us some of the presuppositions about language and meaning assumed by a philosopher or philosophical movement.

IV. KANT AND ANALYSIS

Kant’s method of analysis is likewise “decompositional” according to Beaney. I hope it is becoming apparent how limited these metaphorical labels are for describing types of analysis. Locke, Leibniz, and Kant are all called ‘decompositional’ analysts, yet the differences in their methods of analysis are at the center of major debates in philosophy throughout the 19th century and are important for understanding twentieth century analysis.

Kant, Beaney tells us, takes over Leibniz’s method of analysis with its principle of containment, but rejects Leibniz’s view that the predicate of every true affirmative proposition is contained in its subject, so that all truths are analytic. For Kant, as for Leibniz, “analytic” propositions have subjects that contain their predicates, but unlike Leibniz, Kant also recognizes a class of synthetic propositions whose subjects do not contain their predicates.

In his Critique of Pure Reason and Prolegomena to Any Future Metaphysics, Kant identifies analytic propositions as those whose negations are self-contradictory.5 Kantian analysis would thus show, according to Beaney, that a proposition is analytic by showing that its denial is self-contradictory, and this would show that the predicate of the proposition is contained in the subject, and so clarify the meaning of the subject. For Kant, Beaney tells us, analysis can at most clarify our concepts but cannot extend our knowledge.

It is odd, however, that Beaney assumes that the results of Kantian analysis are analytic statements. After all, Kant refers to the entire Prolegomena as a work of analysis, while calling the Critique of Pure Reason a “synthetic” work, and the results of the Prolegomena’s analysis are famously synthetic apriori statements, not analytic ones. Like most philosophers of the early modern period, Kant meant by ‘analysis’ a method of discovery that uncovers the self-evident presuppositions of some desired conclusion, and by ‘synthesis’ a chain of reasoning in the reverse direction, that is, a proof or explanation of the conclusion in terms of the self-evident presuppositions discovered by analysis.

For example, Kant says of the Prolegomena: “…I offer here [an outline of the first Critique] which is sketched out after an analytical method, while the Critique itself had to be executed in the synthetical style, in order that the science may present all its articulations [in this analytical sketch, i.e., in the Prolegomena], as the structure of a peculiar cognitive faculty, in all their natural combination.” (Kant, 1977, p. 8) So for Kant, the Prolegomena is an analysis and the first Critique is a synthesis.

Moreover, Kant’s analytic propositions are not to be confused with his method of analysis or the results of such analysis. As Kant says: “The analytical method, insofar as it is opposed to the synthetical, is very different from an aggregate of analytical propositions. It signifies only that we start from what is sought, as if it were given, and ascend to the only conditions under which it is possible [that is, show what is necessary for it to be true]. In this method, we often use nothing but synthetical propositions, as in mathematical analysis…” (Ibid., p. 21.)

Kant’s analytical method of the Prolegomena is thus meant to articulate the ideas of the first Critique, that is, show the synthetic apriori ideas that are necessary for knowledge. So the results of the analysis of the Prolegomena are, for Kant, synthetic apriori propositions, not analytic ones. In Beaney’s terminology, Kant’s analytic method is regressive (finding the necessary presuppositions for something to be known), not decompositional.

Why then does Beaney think that Kant’s method of analysis yields only analytic apriori propositions? Perhaps because analytic philosophers have frequently asserted that in doing analysis they are just doing what all great philosophers of the past have done, and they further assume that the results of their own analyses are analytic apriori, so that the results of all philosophical analyses must be analytic apriori. But for Kant (and for Plato, Locke, Hume, and probably Leibniz as well), this is not true.

For Kant, then, the Prolegomena is an analysis while the first Critique is a synthesis. But what does Kant think a synthesis is? Either a proof or an explanation, but which? Beaney is careless in distinguishing between synthesis as proof and synthesis as explanation throughout his essay on analysis. Here, a correct answer to this question is crucial for a proper understanding of the first Critique.

Most Kant scholars today view the argument in the first Critique, put forth in the Transcendental Deduction, as purporting to establish that objective and valid apriori categories are necessary for knowledge; that is, they view the argument of the first Critique as an analysis in Kant’s sense of the term. Beaney himself assumes that the first Critique is such an analysis when he says that Kant “recognizes a … class of … synthetic apriori truths, which it is the main task of the Critique of Pure Reason to elucidate”.

From the above discussion, however, we know that elucidation for Kant is what analysis does—in fact, this particular elucidation (establishing that objective and valid apriori categories are necessary for knowledge) is exactly the analysis that Kant says the Prolegomena performs—and that according to Kant, the first Critique is not an analysis, but a synthesis. Therefore, the argument of the first Critique cannot be an analysis of knowledge showing that it presupposes (valid and objective) categories. Instead, Kant must think the first Critique is an argument that either justifies or explains knowledge based on self-evident principles found out through analysis.

Understanding how the first Critique can be a synthesis, rather than the analysis it is now standardly viewed as being, is, in my view, the fundamental problem of Kant scholarship—one that comes before all others. If Kant is explaining how categories can be objective and valid apriori, he needn’t prove this point—explanations assume the truth of what they are explaining. In that case, the standard view of the first Critique is not particularly threatened, because an analysis doesn’t prove that the categories are apriori objective and valid either, except in a question-begging way. (If there is objective and valid knowledge, there are objective and valid apriori categories. But is there objective and valid knowledge? This is just what we want the first Critique to tell us.) But if Kant thinks he is justifying the apriori objectivity and validity of the categories with a view to eventually justifying knowledge, this calls for a radically different reading of the first Critique from the way we read it today.

V. THE 19TH CENTURY

Beaney devotes just one brief paragraph to 19th century philosophical analysis. He claims that many 19th c. concepts of analysis were responses to Kant’s so-called “decompositional” method of analysis, for example, those of German or British idealists, who viewed such analysis as trivial and “destructive and life-limiting” and thus took a negative attitude toward it. He claims that later Kantians, such as the neo-Kantians, took a more positive attitude toward analysis and used it to disclose the essential synthetic apriori structure of science. (Due to his abovementioned confusion between Kant’s analytic propositions and Kantian analysis, Beaney does not see that Kant used analysis to do this same thing.)

But it was the British empiricist forms of analysis, not Kantian analysis, that 19th century Kantians and other idealists took to be trivial and incapable of correctly analyzing concepts. British empiricists (“associationists”) followed Locke and Hume in claiming that concepts are constructed from and can be entirely analyzed in terms of associations of sense impressions. For Kantians and other idealists, Hume had shown that concepts such as ‘causality’ cannot be defined just in terms of sense impressions. They felt that Kant had then shown, on the basis of Hume’s arguments, that we must add metaphysical concepts (the transcendental categories) to our impressions in order to construct the concepts of science and everyday life, and that these categories can only be found in the mind, not in experience.

However, the kind of Kantian analysis preferred by the Germanic opponents of empiricism was one revised in the light of romanticism, which relied heavily upon intuition. For example, among the neo-Kantians, Wilhelm Windelband and Ernst Cassirer held a more romantic view that we come to these ideas that cannot be found in experience by a kind of artistic intuition, though some, such as Heinrich Rickert, rejected this romantic view and stuck to the more strictly Kantian one that it is by pure reason that we know of these categories. Husserl, of course, came down firmly on the side of intellectual intuitions of concepts.

There was, however, a more holistic strain in German and British idealism that did view analysis not just as trivial, but as “destructive and life-limiting”, as Beaney puts it, and also as a kind of falsification. The roots of this holism can be found in Goethe, parts of Kant, and of course in Hegel, among other places. It is this latter more holistic strain of idealism that early 20th century analysts like Russell and Moore attacked with their insistence that analysis is possible. But again, much of Russell and Moore’s attack was actually focused on Kant, who did not deny that analysis is possible, but only that empiricist analysis is. But Russell and Moore were not defending an empiricist form of analysis! So why attack Kant? As Peter Hylton has noted, this is a puzzle that needs solving.6

Though each of the other historical sections of Beaney’s essay has a lengthy supplement linked to it which elaborates on the ideas of analysis characteristic of that period, there is no supplement linked to Beaney’s one-paragraph section on the 19th century. The text says that it is not yet available. But without a careful study of 19th century analysis, basic questions such as “How much is 20th century analysis like 19th century analysis?” and “What in the world were the 20th century analysts rebelling against anyway?” cannot be answered. Beaney’s strategy of writing the history of 20th century analysis before doing the 19th century is unwise. As it is, we must now leap into a discussion of 20th century analysis without first understanding its background in the 19th century.

VI. ANALYTIC PHILOSOPHY AND PHILOSOPHICAL ANALYSIS

We often hear that what characterizes 20th century analytic philosophy is a kind of decompositional analysis, where we clarify concepts by breaking them down into more basic concepts. Because Kant played down this sort of analysis, and Kantians and other idealists after him explicitly attacked it, one might expect that if analytic philosophy is a reaction to idealist claims that empiricist analysis is impossible, it would be a swing back to the empiricist analysis that preceded it. This would support A.J. Ayer’s story that analytic philosophy is just “British empiricism plus logic”—a return to the methods of Locke and Hume.

In Brazil, after the bossa nova movement, which can crudely be described as a combination of samba and jazz, there was a “purist” reaction (the Tropicália movement) in which the jazz was taken back out. But what was left was not, as you might expect, samba again, but something quite different, and thus MPB, or música popular brasileira, was born. Similarly, the turn back to analysis by early 20th c. analytic philosophers did not yield anything like earlier, empiricist forms of it. The analytic philosophers were doing something quite different. But what was it? If we could get clear on this question, we would understand this philosophy better.

Examining the twentieth century, Beaney begins with a general characterization of 20th century philosophical analysis. “What characterizes analytic philosophy as it was founded by Frege and Russell,” he says, “is the role played by logical analysis, which depended on the development of modern logic. Although other and subsequent forms of analysis, such as linguistic analysis, were less wedded to systems of formal analysis, the central insight motivating logical analysis remained.” Beaney admits that this characterization does not fit Moore or one strand of analytic philosophy, but thinks that the tradition founded by Russell and Frege is analytic philosophy’s central strand.

What is characteristic of Russell and Frege’s sense of analysis, and thus of 20th century logical analysis in general, Beaney tells us, is that it is interpretive—we first interpret what we wish to analyze by transforming it according to some system of interpretation, so that we may then solve a particular problem. Analytic geometry, for example, transforms geometric problems into algebraic ones so it may then solve them. Similarly, Frege, Russell, and 20th c. analytic philosophers in general attempted to solve philosophical problems by translating natural language sentences into predicate logic, so that a possibly misleading grammatical form, which a purely decompositional analysis would take as given, is replaced with the sentence’s true logical form. If the sentence is decomposed into its components after we have translated it into its correct logical form, we will not then be misled by grammar as to what its components are. ‘On Denoting’ is thus Beaney’s model of 20th c. analysis.

This sort of analysis is the key to reducing mathematics to logic, and many would argue that the primary motive for developing it was to make possible just that reduction. We translate mathematical concepts like ‘number’ into logical ones, so that we can derive mathematical truths from logical truths and show mathematics to be pure logic. Applied to language more generally, the method may similarly solve many philosophical problems. For example, the statement ‘Unicorns do not exist’ can be understood as saying that ‘The concept unicorn has no instances’ (‘The class of unicorns is empty’, or ‘~ (∃x) Fx’). On the new translation, the subject is no longer unicorns but the concept ‘unicorn’. In this way, we do not need to think that non-existing objects like unicorns have some reality or “subsistence” in order for statements about them to be meaningful. This analysis is a strategy used by Russell in his theory of descriptions and by Wittgenstein in the Tractatus.
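Russell’s strategy can be sketched in modern notation. The following is a standard textbook rendering, not Beaney’s own formulation, and the predicate letters are merely illustrative:

```latex
% 'Unicorns do not exist', with $Ux$ for 'x is a unicorn':
\neg \exists x\, Ux

% Russell's analysis of a definite description, 'The F is G'
% (the pattern of 'On Denoting', in modern notation):
% there is exactly one F, and it is G.
\exists x \,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x) \wedge Gx\bigr)
```

On this rendering the misleading grammatical subject disappears entirely: the analyzed sentence contains only quantifiers, variables, and predicates, so no entity need be posited for ‘unicorn’ or ‘the F’ to make the sentence meaningful.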

What is crucial to this sort of analysis is the development of modern quantification logic. For Frege and Russell, it is predicate logic that statements are to be translated into. As Beaney notes, this introduced a divergence between grammatical and logical form, so that “the process of translation itself became an issue of philosophical concern”. Hence, the need for articles like ‘On Denoting’ arose.

But what of subsequent analysts in the 20th century? Beaney asserts that though later ordinary linguistic analysts questioned whether there could ever be a definitive logical analysis of typical statements, they retained the idea that ordinary language could mislead. For example, in his essay ‘Systematically Misleading Expressions’, Gilbert Ryle used such analysis to avoid attributing existence to concepts, and he retained this concept of analysis to solve problems in his later years as an ordinary language philosopher. This, then, is what Beaney finds common to 20th century analytic philosophy—a method of analysis that translates ordinary expressions into more philosophically and logically respectable expressions.

There are, however, important questions that Beaney’s characterization of 20th century analysis does not address. One—already touched upon in this review—is that of the presuppositions of the method in question. In the case of the analysis of mathematical concepts in terms of logical ones, we know we have correctly analyzed a mathematical concept when the logical construction does everything the mathematical one does. But then we must presuppose mathematical theory in order to know that our logical constructions adequately replace mathematics. Similarly, logical theorems meant to replace mathematical truths can only be known to be equivalent to the mathematical ones by the same sort of comparison.

Cases outside of mathematics and pure logic proceed similarly; when we reduce non-logical concepts to other non-logical concepts (for nowhere except in mathematics and logic itself are we going to reduce concepts to logical ones), we again know that we have correctly defined our concept when the definition functions identically to the original concept. But non-logical concepts are typically about the world and occur in theories about the world, so we can only know that a logical construction of such a concept is equivalent to the original one when it agrees with our best theories about the world. Such logical analysis would thus be a posteriori.

The important question, then, of whether analysis is apriori or a posteriori seems answered. Though analysis is commonly claimed to be apriori (though not by Beaney; his own classification of analysis into decompositional, regressive, and interpretive types does not ask such questions), the analysis of empirical concepts and propositions must presuppose empirical theories in order for us to know that the analysis is correct.

What, then, can people be thinking when they claim that philosophical analysis is apriori? Many of them seem to be assuming this: we know the meanings of words apriori, and thus can know apriori that the reconstruction of some original concept is correct by comparing it to meanings that we “just know”. (One way it is thought that we know the meanings of words apriori is by having apriori “intuitions” of meanings.) It seems to me that this idea, that meanings are the sorts of things we know apriori, is the major unstated presupposition of 20th c. ideas of philosophical analysis and 20th c. analytic philosophy.

But it is unlikely that we can know the meanings of words apriori, except perhaps in the case of stipulative definitions, which clearly do not represent the majority of cases. For example, the dictionary tells us that a whale is an ocean-going mammal that suckles its young. But for ‘mammal’ to become part of the meaning of ‘whale’, people had to go out and look at whales. Whales were formerly thought to be fish, and only when people looked more closely and saw, e.g., that they had no gills, were warm-blooded, had lungs, and had breasts that gave milk, did the meaning of the word change and ‘whale’ come to include the concept ‘mammal’. Words signifying empirical concepts thus get their meaning empirically. When we try to determine whether an analysis of them is correct, we must look to the world to determine that the new definition functions the same as the original term.

Analytic philosophy presents itself as apriori but is not; it presents itself as an innocent method of logical analysis that makes no controversial metaphysical assumptions when in fact it does make such assumptions; and it is likely that it makes such assumptions due to 19th century influences on it. Again, however, these are not issues addressed by Beaney.

VI. GOTTLOB FREGE AND THE ELEPHANT IN THE PARLOR

Fregean analysis translates a proposition into argument-function form rather than the subject-predicate form that decompositional (whole-parts) analyses provide. Thus, Frege analyzes ‘Socrates is mortal’ into an argument ‘Socrates’ and function ‘__ is mortal’ rather than into the grammatical form ‘S is P’. By developing a logic of functions and arguments, Frege was able to logically analyze complex mathematical statements and achieve much (if not complete) success in the logical analysis of mathematics. This was then taken as a model for the logical analysis of sentences and concepts in other domains of knowledge and common sense. However, what makes the new logic so suitable for analyzing mathematics, namely, mathematics’ own essentially argument-function structure, may well make it unsuitable for analyzing natural languages.
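The contrast can be put schematically (a sketch in modern logical notation, not Frege’s own two-dimensional Begriffsschrift):

```latex
% Traditional subject-predicate analysis:
%   'Socrates is mortal'  =  S is P
%   (subject 'Socrates', predicate 'mortal', joined by the copula)
% Fregean function-argument analysis: the function '__ is mortal'
% saturated by the argument 'Socrates':
\mathrm{Mortal}(\mathrm{Socrates})
% This form generalizes smoothly to mathematics, where functions
% and arguments are already the native structure, e.g.
% 'the successor of x':  s(x) = x + 1
```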

Take for example the statement ‘All horses are mammals’. Predicate logic would analyze this as ‘For all objects x, if x is a horse then x is a mammal’. Where is the copula, the verb “to be”, in this analysis, and how well does the analysis explain the copula’s meaning? Well, first of all the conditional connective ‘if-then’ connects the concepts ‘horse’ and ‘mammal’ instead of the copula. Here of course we already have a problem, since modern logic uses the material conditional in this analysis, and the material conditional of modern logic does not really capture the sense of ‘if-then’. We will return to this problem of the conditional in a moment.

In any case, the conditional connective does not entirely replace the copula, for we need the quantifier to specify that the same thing that is a horse is a mammal. Since the two quantifiers of modern logic, ‘all’ and ‘some’, can be defined in terms of each other (‘All x’s are F’ = ‘It is not the case that some x’s are not F’) and so reduced to one concept, let us take the existential quantifier (“some”, or “there exists an x such that”) as the primitive concept. So the quantifier (and the variables and apparatus of the scope of the quantifier), which is roughly the concept of ‘existence’, also does some of the work of the copula in our analysis.
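The interdefinability of the quantifiers, and the resulting form of the example sentence with the existential quantifier taken as primitive, can be written out as follows (standard modern notation, not a display from Beaney’s entry):

```latex
% Quantifier duality:
\forall x\, Fx \;\equiv\; \neg\exists x\, \neg Fx
% Applied to 'All horses are mammals' (H = horse, M = mammal):
\forall x\,(Hx \rightarrow Mx) \;\equiv\; \neg\exists x\,(Hx \wedge \neg Mx)
% i.e., there is no object that is a horse and not a mammal.
```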

But of course, the idea of a logical (not grammatical) predicate-as-function itself contains a copula, as when we say ‘x is a horse’ and ‘x is a mammal’, so it too does some of the work of the original grammatical copula.7 Rather than explaining and giving us some insight into this most basic concept of natural language, modern logic seems to spread the work of the copula around in a careless, unexamined way.

An even more serious shortcoming of predicate logic is that it doesn’t provide an analysis of conditional, “if-then” reasoning that works outside of mathematics. When modern logic uses the material conditional, where ‘if p then q’ is taken to mean ‘either not p or q’, to analyze mathematics, no problems arise for it. Outside of mathematics, however—in ordinary language or empirical science—numerous problems arise for the material conditional, especially in counterfactual cases, but there is to date no analysis of ‘if-then’ that works better. In other words, modern logic does not yet have an adequate translation of the conditional, although conditional reasoning is the backbone of all reasoning.
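The truth-functional definition at issue, and one of the standard “paradoxes of material implication” it generates outside mathematics, can be displayed as follows (a textbook illustration, not an example drawn from Beaney’s entry):

```latex
% The material conditional:
(p \rightarrow q) \;\equiv\; (\neg p \vee q)
% Any conditional with a false antecedent is thereby true:
% 'If the moon is made of cheese, then grass is purple'
% comes out true, since 'the moon is made of cheese' is false.
% This is not how 'if-then' behaves in ordinary counterfactual talk.
```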

The ineptness of quantification logic at analyzing English grammar or conditional reasoning as it occurs outside of mathematics suggests that other logics would better serve us in analyzing English sentences and describing everyday logic. And this suggests that the logic we now have is not the logic of our language or everyday reasoning, not a fundamental part of the universe or of our minds, but merely a convenient calculus that is especially good for describing mathematical logic. This provides us with further reason for caution about claims that a logical analysis of language can solve philosophical problems. If our current logic is not the last word in the subject but merely a conventionally convenient one that could be improved upon or even radically altered for the better, there is no reason to believe that in translating English sentences into this logic we are reducing them to a more fundamental, truer form.

VII. BEANEY ON RUSSELL, MOORE, WITTGENSTEIN, CARNAP, CAMBRIDGE ANALYTIC PHILOSOPHY, AND OXFORD ORDINARY LANGUAGE PHILOSOPHY

Although Beaney sees logical analysis as the major and unifying form of analysis for 20th century analytic philosophy, he acknowledges other kinds of analysis. However, Beaney sees the idea of interpretation that lies behind logical analysis as motivating these other kinds of analysis as well, and thus as being what is common to 20th c. philosophical analysis in general.

Bertrand Russell was never entirely clear on what he meant by ‘analysis’, his practice often did not match his words, and his clearest statement on the subject, in his 1913 manuscript Theory of Knowledge, defines ‘analysis’ in a decompositional sense as a “discovery of the constituents and the manner of combination of a given complex” (TK, 119). Beaney acknowledges all of this, but still thinks logical analysis of language into the new logic best exemplified the analytic philosophy that emerged from his work. This, for example, is the characteristic form of analysis in Russell’s essay ‘On Denoting’, where problems that emerge from a decompositional analysis of English sentences such as ‘The present King of France is bald’ disappear upon a logical analysis of them. However, as Beaney himself admits, Russell’s idea of analysis is not clearly or entirely interpretive.

Beaney finds G.E. Moore’s notion of analysis to be of a traditional decompositional sort, where complex concepts are analyzed into their constituents. This puzzles Beaney: while he admits that Moore influenced conceptions of analysis among analytic philosophers, Beaney does not address the fact that this means that his theory of 20th c. analysis as Fregean/Russellian logical analysis does not seem to work even for the major analysts. He simply ignores this problem and goes on to Wittgenstein.

Because Wittgenstein accepted Frege’s assumption that quantification logic was the logic of language, and because he utilized Russell’s method of logical analysis from ‘On Denoting’, Beaney places the early Wittgenstein in the Frege/Russell tradition of logical analysis. At the same time, Wittgenstein’s method was also decompositional, because he claimed that an analysis of language reduced it to its simple constituents. Beaney thus sees Wittgenstein’s notion of analysis as a combination of logical and decompositional methods. And although the emphasis in later Wittgenstein is on decompositional methods, Beaney claims that a role is left by Wittgenstein for logical analysis as well.

The Cambridge school—of Susan Stebbing, John Wisdom, and Max Black, and including Oxfordians Gilbert Ryle and C.C. Mace—who founded the journal Analysis, based their notions of analysis on Russell, Moore and Wittgenstein. While taking Russell’s theory of definite descriptions in ‘On Denoting’ as a “paradigm” of analysis, they emphasized the logical analysis found in the article and de-emphasized the metaphysical reduction of concepts to other ones.

Russell, Moore, and Wittgenstein similarly influenced Carnap, who developed a method of construction reminiscent of Russell that he first called quasi-analysis and later called logical analysis. Carnap used quasi-analysis in his 1928 Aufbau to construct simple qualities from individual experiences. At the same time Carnap developed a notion of explication he called ‘rational reconstruction’. This is a different kind of translation, where vague everyday concepts are replaced with more precise “scientific” ones.

Beaney finds Oxford linguistic “ordinary language” philosophers to be less like Frege and Russell and more like Wittgenstein in believing that the analysis of language can tell us about thought. Russell and Frege were dismissive of ordinary language as misguided and misguiding. Oxfordians believed that language pretty clearly reflects our concepts. Ryle and Austin are discussed as examples of this view, as are Strawson’s more Kantian analyses. Though Beaney recognizes that they are straying from the logical analysis he thinks unifies analytic philosophy, he thinks that like the earlier forms of analysis, they all seek to clarify concepts.

This may seem to be a rather thin comparison, but perhaps after reviewing 26 centuries of philosophical analysis, Beaney is simply running out of steam. However, it should be obvious even from this brief description of Beaney’s survey of the 20th c. that his model of 20th c. analysis as based on logical analysis does not fare well even on his own terms. In the end, Beaney changes tack and defines analytic philosophy as a set of interlocking subtraditions unified by a shared repertoire of conceptions of analysis that different philosophers drew on in different ways. But it is not clear that this definition is adequate to distinguish analytic philosophy from any other philosophy, for as Beaney himself has shown, all philosophies seem to draw on this shared repertoire of conceptions of analysis, each in its own way.

VIII. ASKING THE RIGHT QUESTIONS

Beaney’s essay on analysis is a historical one, a part of the recent history of early analytic philosophy movement. Early work in this new field seldom asked what analytic philosophy is, but assumed that we already know this and simply elaborated on the standard picture of analytic philosophy with new facts and greater detail, without presenting us with a different overall picture of it. Recently, there has been a revisionist turn in the field, a turn towards asking the initial questions of what analytic philosophy and analysis are. This new trend threatens to reject conventional answers and provide us with new ones, and Michael Beaney is out ahead of this pack, and to some extent leading it in this direction. His entry on analysis in the Stanford Encyclopedia of Philosophy has itself stimulated some of this activity, the April 2005 conference on Varieties of Analysis that he organized was the big event in the field for the year, and it is hoped that the publication of his book on analysis will push the field even further in this direction, and so push historians of analytic philosophy to a better understanding of their subject. For this reason alone ‘Analysis’ is a significant work. It is also significant for its ambition and scope, and, I must say, for its depth of analysis. Although I have criticized Beaney here for not digging deeply enough into methods of analysis to answer important questions about them, the amount of analysis he has done is impressive. Also impressive is his bibliography, which is an extensive survey of the literature on this subject. Anyone who likes books and has an interest in the history of philosophy, and especially in the history of analytic philosophy, will enjoy reading through it nearly as much as they will enjoy reading the article itself.

Philosophy Department
Edinboro University of PA
Edinboro, PA 16444
jongley@edinboro.edu

NOTES

1. Republic, 532.

2. Plato, Euthyphro, my emphasis.

3. This is reminiscent of Michael Dummett’s claim that analytic philosophy made a linguistic turn that set aside the epistemological concerns and methods of modern philosophy and returned to scholastic concerns and methods where philosophical logic is foundational rather than epistemology. (Dummett, Frege: Philosophy of Language, 1973, p. xxxiii)

4. Beaney 2003.

5. A150-1, B189-91; Prolegomena, Hackett, 1977, p. 12.

6. Peter Hylton, ‘Hegel and Analytic Philosophy’, Cambridge Companion to Hegel, Cambridge University Press, 1993.

7. Jaakko Hintikka, from whom I first heard this analysis, has made this same point in print.

8. This work in the history of analytic philosophy should not be confused with the parallel but separate movement in the history of philosophy of science, which has been exuberantly revisionist.