* This is the latest installment of “Problematica.” It has nothing to do with historical science, although the protagonist of the essay, Stephen Toulmin, did write a book on “the discovery of time” and influentially critiqued Thomas Kuhn using a comparison with the uniformitarian-catastrophist debate. Anyway, this essay isn’t going to discuss either of those things. Instead, it’s going to discuss one of the most interesting pieces of philosophy of science to come out of the 1950s: a genuinely practice-oriented account of the physical sciences that drips with freshness and insight. Please read on! Problematica is written by Max Dresow…
I’m not one for hyperbole, but if I were I might describe the middle of the twentieth century in philosophy of science as the most arid, lifeless, boring, myopic, over-technical, arrogant, and frankly mystifying period in the history of the subject. That’s the kind of thing I might say if I were prone to hyperbole. But since I’m not, let me just say that I’m not the biggest fan of what’s been called “the analytical project” in philosophy of science. This is the attempt to give formal analyses of theory structure, empirical confirmation, and scientific explanation; or, to use the language of the period, to formalize the “logical structure” of the sciences.
Once, this was the beating heart of the discipline, the only project that really mattered. All the big names worked on it, from Rudolf Carnap to Hans Reichenbach, Carl Hempel to Ernest Nagel. So it came to be widely accepted that theories are axiomatic systems; that laws of nature are unrestricted generalizations; that explanation involves deducing statements in logical arguments that have laws of nature as premises; and that confirmation can be analyzed as an abstract relation between sentences. All of which gave philosophers plenty of scope to demonstrate their technical wizardry in a series of debates that made virtually no contact with scientific practice.
I am obviously being somewhat simplistic. But on the whole I’m prepared to stand behind my simplifications. The middle of the twentieth century really was a ponderous time in philosophy of science: a kind of doldrums between the heady vitality of the interwar years and the spirited provocations of the 1960s. Perhaps this owed something to Joe McCarthy and the surveillance apparatus of the FBI; the 1950s were not an auspicious time to be doing politically-engaged philosophy of science (Reisch 2005). But even on matters of epistemology the lions of the period were narrow and formalistic. They believed that the techniques of logical analysis had come to solve their problems. Yet in their eagerness to put these techniques to use, they tended to idealize science in a way that flattered their narrowness. The result was a period of scholasticism that tipped sometimes into parody, as competing accounts met each other on battlegrounds constructed entirely out of philosophical abstractions.
It all came crashing down of course, for however good it may have been as philosophy, it made little sense of living, breathing science. Thomas Kuhn is sometimes credited with the demo job and it’s true that his work hastened the obsolescence of “logical empiricism.” Still, this claim needs to be taken with a grain of salt, and anyway, it isn’t what I want to talk about here (Friedman 2002). Instead, I want to discuss a failed attempt to reorient philosophy of science around scientific practice, launched almost a decade before the Kuhnian bombshell burst. This attempt belongs to Stephen Toulmin: in my opinion, the most insightful philosopher of science active during the fifties and early sixties.*
[* Surely this is hyperbole, you say. But no! I really like Stephen Toulmin and anyway, I'm not one for hyperbole.]
Stephen Toulmin was born in London in 1922, just four months before Thomas Kuhn was born in Ohio. He earned an undergraduate degree in mathematics and physics at King’s College and then joined the war effort, first at the Ministry of Aircraft Production (where he worked on radar technology) and later at the Supreme Headquarters of the Allied Expeditionary Force in Germany.* After the war he returned to Cambridge, where he took a PhD in philosophy with a dissertation on “the place of reason in ethics.” This would later be published as his first book, bringing him to the attention of the philosophical community. Yet by far the most important thing that happened to Toulmin in Cambridge was his encounter with the Austrian-born philosopher Ludwig Wittgenstein.
[* Kuhn also worked on research related to radar following his graduation from Harvard in 1943.]
Wittgenstein was a philosophical singularity: a person of overwhelming intensity and almost unlimited charismatic appeal. He was also the most important philosopher of the twentieth century, having produced not one but two philosophies that set the philosophical world on fire (Monk 1990). In the mid-1940s, when Toulmin attended his lectures, he was ailing. He would die, aged just 62, in 1951. Yet it’s clear that he’d lost none of his personal magnetism, and in a short time the young Toulmin was made a lifelong convert to his conception of philosophy. This was the basis for Toulmin’s distinctive contribution to philosophy of science: an area of study for which the mature Wittgenstein had little affinity.
Toulmin became a full-time philosopher of science in 1949, when he was hired as a lecturer at Oxford University. Four years later his second book appeared, bearing the unassuming (and arguably misleading) title The Philosophy of Science: An Introduction. As an introduction to the field, it was most unusual. A standard introductory volume provides a snapshot of an area— an image of consensus— edited for conciseness and clarity at an introductory level. Toulmin’s book, by contrast, was an entirely original work, vigorously argued and intended to challenge some of the major assumptions of the field. Even its central framing device was unusual. Toulmin complained that many works of popular science tend rather to obfuscate than to illuminate their subject matter. (Consider Eddington’s discussion of the two tables: one familiar, the other “scientific.”) He compared these works to “a searchlight in the darkness, which picks up here a pinnacle, here a chimney, and there an attic window,” dazzling what it touches but leaving the rest in darkness. It was to remedy this situation that Toulmin put pen to paper.*
[* Here it is useful to know that Toulmin was an avid consumer of popular books on cosmology as a youth. It was these, he would later recall, that steered him towards mathematical physics as an undergraduate.]
There was another aim too. As I’ve said, philosophy of science in the middle of the century had an air of inauthenticity, at least when posing as philosophy of science. Sometimes this was framed as a methodological necessity, as when Nelson Goodman (1955) excused his use of “commonplace and even trivial illustrations rather than more intriguing ones drawn from the sciences” on the grounds that the former “attract the least attention to themselves” and are “least likely to divert attention from the problem or principle being explained.” (“We have to isolate for study a few simple aspects of science just as science has to isolate a few simple aspects of the world; and we are at an even more rudimentary stage in philosophy than in science.”) More often, however, this inauthenticity went unaddressed. This had the effect of alienating philosophy of science from its subject matter: the sort of science actually practiced in the field and the laboratory. Again, Toulmin sought a remedy. His watchword was “practice.” Repeatedly, he urged readers to attend to “[the] actual practice of scientists” or else to their use of this or that tool in a particular context. He ended his introduction by quoting Einstein (1934): “If you want to find out anything from the theoretical physicists… don’t listen to their words, fix your attention on their deeds.”
All this was so much Wittgensteinian philosophy. By the time Toulmin knew him, Wittgenstein was every bit a philosopher of practice (Johannessen 1988). This represented a significant shift from his early work, in which his central concern was language itself. “The logical picture of the facts is the thought,” he wrote in a characteristic passage. Don’t worry too much about what this means. The thing to know is that it has to do with the inner workings of language, here conceptualized as a system of signs for representing states of affairs in the world (Ryle 1957). Later, he would repudiate this picture, focusing instead on the concrete practices and social contexts in which language is deployed and through which it gains its meaning. Practice and use. Also, change. Language “is not something fixed, given once and for all; but new types of language, new language-games, as we may say, come into existence, and others become obsolete and get forgotten.” By “language-game,” Wittgenstein meant a bit of language use along with “the actions into which it is woven.” For him, even the meaning of words had to be understood in terms of their use in language-games, which is to say, the rules governing their employment and co-employment in concrete situations. So far did Wittgenstein’s practice orientation extend.
Toulmin’s strategy was to apply the concept of a language-game to the physical sciences. This might have been expected to produce “talk about talk,” as the popular dig had it, only focused on scientific gab as opposed to ordinary speech. But for Wittgenstein, language-games include not just bits of language but all those actions into which they are woven in particular situations. As Toulmin put it elsewhere, “to characterize a specific language-use was, for him, to place it, not just in a linguistic context but in a behavioral one: to show the pattern of conduct against the background of which the concept has to be understood” (Toulmin 1960). Language-games were a way of describing social practice.*
[* Toulmin was also influenced by the writings of the physicist W. H. Watson, in particular, his 1938 book On Understanding Physics. These writings were in turn heavily indebted to Wittgenstein, and aimed to show the relevance of Wittgenstein’s later philosophy to physics. For a concise summary of Watson’s philosophical work, see Franco (2021), especially Section 6.]
Toulmin’s slim volume addressed four topics, which built on one another in a sequence. These were scientific discovery, laws of nature, scientific theories, and lastly, determinism. He opened with discovery. What, he asks, is it for something to be “discovered” in the physical sciences? More specifically, when certain very basic conclusions are arrived at, like the conclusion that light travels in straight lines, what kind of an achievement is this? How similar is it to the discovery that a particular constant takes such and such a value? (That light travels at 299,792,458 meters per second, say.) Or to the discovery that Greenland sharks are the longest-lived vertebrates on the planet?
Not at all similar, Toulmin answers. The findings of natural history presuppose a relatively settled conceptual background, much of it taken over from everyday life. Likewise the measurement of physical constants relies on a background consisting of physical theories, models, and bits of technical apparatus. Contained within these is a “way of regarding phenomena,” and perhaps also a way of representing them (visually or mathematically). But this is precisely what’s at stake in the “discovery” of light’s rectilinear propagation. Writes Toulmin: “The importance for physics of such a principle as the Rectilinear Propagation of Light comes from the fact that, over a wide range of circumstances, it has been found that one may confidently represent optical phenomena in this sort of way [i.e., by drawing the sorts of diagrams familiar in geometrical optics].”
To say ‘Light travels in straight lines’ is, therefore, not just to [state another fact about the world]: it is to put forward a new way of looking at the phenomena, with the help of which we can make sense of [optical phenomena].
It follows that the principle of rectilinear propagation cannot be regarded as an empirical generalization of the sort philosophers often identify with laws of nature. For one thing, if rectilinear propagation is regarded as a factual generalization “it would have to be qualified by some such clause as ‘in general’ or… ‘except when it doesn’t’.” Otherwise it would be false (and scientific principles presumably are not false). But a better reason for regarding it as an inference rule as opposed to a generalization concerns its use. The principle, Toulmin notes, is “so to speak, parasitic on the [explanatory techniques it underwrites]: separated from them it tells us nothing, and will be either unintelligible or else misleading.” Because of this,
one might almost as well call the principle a ‘law of our method of representation’ as a ‘law of nature’ [see Watson (1938)]: its role is to be the keystone of geometrical optics, holding together the phenomena which can be explained by that branch of science and the symbolism, which, when interpreted in the way suggested by the model, is used by physicists to account for these phenomena.
It is worth underscoring how Wittgensteinian this is. Want to understand what a principle like rectilinear propagation amounts to? Then ask how it is used in actual scientific practice. Or better yet, draw a ray diagram as a student of geometrical optics might.
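To see what “using” the principle looks like, here is a minimal sketch of my own (the function and the numbers are invented for illustration, not taken from Toulmin). Treating light as rays travelling in straight lines licenses a simple inference by similar triangles: from the geometry of a point source, an opaque disc, and a screen, we can read off the size of the shadow, exactly the kind of conclusion a ray diagram is used to draw.

```python
# A toy example (not Toulmin's) of rectilinear propagation functioning as an
# inference technique. A point source illuminates an opaque disc; a screen sits
# behind it. Because the edge rays are assumed to travel in straight lines, the
# shadow's size follows by similar triangles, just as a ray diagram would show.

def shadow_radius(source_to_object_m: float,
                  source_to_screen_m: float,
                  object_radius_m: float) -> float:
    """Radius of the shadow on the screen, assuming straight-line rays
    from a point source."""
    # A ray grazing the disc's edge continues in a straight line, so the
    # shadow scales linearly with distance from the source.
    return object_radius_m * (source_to_screen_m / source_to_object_m)

if __name__ == "__main__":
    # Illustrative numbers: a 5 cm disc, 1 m from the source, screen at 3 m.
    r = shadow_radius(source_to_object_m=1.0,
                      source_to_screen_m=3.0,
                      object_radius_m=0.05)
    print(f"Predicted shadow radius: {r:.2f} m")  # 0.15 m
```

Notice that the principle itself appears nowhere as a premise in the calculation; it is built into the way the situation is represented.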
Toulmin’s use of the word “model” in the passage quoted above merits a remark. The reason is that, for him, the “discovery” of light’s rectilinear propagation had two sides. One I have already discussed: it comprised “the development of a technique for representing optical phenomena which was found to fit a wide range of facts.” The other was “the adoption along with this technique of a new model, a new way of regarding these phenomena and of understanding why they are as they are.” Models in physics tend to be homely things: the pendulum, the billiard ball, the “light-ray.” These work by putting flesh on a formal skeleton, and so conferring intelligibility on physical phenomena. Trouble only begins when models are discussed in isolation from the inferring techniques that give them life, and treated as though they were the whole of a theory. Popularizers, for example, tend to focus exclusively on those parts of a theory that confer intelligibility. This has the effect of severing the connection between a model and the phenomena it’s supposed to explain, and so of lending a false solidity to the model, as if the content of the theory were exhausted by the intelligibility the model provides. (If you tell readers that atoms are mostly empty space, but you don’t say what phenomena the model renders intelligible, you haven’t conferred a real understanding of atomic theory.)
Toulmin’s next subject is laws. Having argued that “all major discoveries” in physics involve “the discovery of novel methods of representation,” his challenge is to show that this applies to honest-to-goodness laws of nature, not just to “principles” like rectilinear propagation. The example he discusses is Snell’s Law. This describes how light bends or “refracts” when it travels from one medium to another. Its great virtue, Toulmin thinks, is that it “allows us to extend the inferring techniques of geometrical optics” to a new set of situations, and so to explain a wide range of optical phenomena involving refraction. Logically speaking, its discovery involved “not so much the deduction of new corollaries or the discovery of new facts as the adoption of a new approach.” So, laws of nature are best construed as rules of inference in accordance with which conclusions about the world can be drawn. They are not statements about the world that function as premises in deductive arguments.*
[* Again: “As onlookers… we can regard the discovery of Snell’s Law as the discovery of how the optical phenomena encountered in a specifiable range of situations are to be represented, and so explained— to such-and-such a degree of precision, and with certain provisos… This may seem to be stated vaguely, but it is inevitable that it should be: if you try to say exactly and explicitly what was involved in the discovery,… to ‘make an honest fact of it’, you will only succeed in producing a tautology.”]
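Since Toulmin’s claim is that a law like Snell’s functions as a licence for drawing inferences rather than as a premise describing the world, a toy computation may help fix ideas. The sketch below is mine, not his, and the refractive indices are stock textbook values. It shows the law being used to infer a refraction angle in a concrete situation, and returning no answer at all where the technique gives out (total internal reflection), a limitation that connects to the points about scope which follow.

```python
import math

# A toy illustration (not from Toulmin's book) of Snell's Law used as a rule of
# inference: n1 * sin(theta1) = n2 * sin(theta2). Given the media and the angle
# of incidence, the law licenses a conclusion about the angle of refraction.

def refracted_angle_deg(n1: float, n2: float, incidence_deg: float):
    """Infer the refraction angle via Snell's Law, or return None where the
    inference cannot be drawn (total internal reflection)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # outside the law's scope of application: no refracted ray
    return math.degrees(math.asin(s))

if __name__ == "__main__":
    # Air (n ~ 1.00) into water (n ~ 1.33): the inference goes through.
    print(refracted_angle_deg(1.00, 1.33, 45.0))  # ~32.1 degrees
    # Water into air at a steep angle: the rule yields no refracted ray at all,
    # and that limitation itself becomes something calling for explanation.
    print(refracted_angle_deg(1.33, 1.00, 60.0))  # None
```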
So far this is more of the same. But then Toulmin makes a series of interesting observations. First, he notes that it is not settled in advance “how far and under what circumstances these techniques can be employed.” Laws of nature express the form of regularities, but they do not express their scope. As such, the right question to ask about a law-statement is not, “Is this true or not?” (As a statement that expresses the form of a regularity, it just is.) Rather, one should ask, “Under what circumstances does this hold?” When investigators ask the second question instead of the first, Toulmin says, they are treating a statement as a law. It then becomes part of the framework of theory in an area, and can be used to state and solve problems as part of an explanatory toolkit. At the same time,
Departures from the law and limitations on its scope… come to be spoken of as anomalies and thought of as things in need of explanation in a way in which ordinary [phenomena are] not; and at the same time, the statement of the law comes to be separated from statements about the scope and application of the law.
(This passage cannot help but remind us of Kuhn, only in Toulmin there is no indication that the accumulation of anomalies precipitates major changes in scientific thought and practice. Later in his career, Toulmin would criticize the Kuhnian account of scientific change, and seek to substitute an evolutionary model for Kuhn’s revolutionary one (Toulmin 1972, 1974).)
Second, and relatedly, laws of nature are used to introduce new terms into the language of physics, like “refractive index” in the case of Snell’s Law. This means that “questions about refractive index will have a meaning only in so far as Snell’s Law holds, so that in talking about refractive index we have to take the applicability of Snell’s Law for granted.” Physical theory is stratified, and this reflects the fact that, in science, earlier problem-solutions have to be taken for granted to state current problems. Perhaps every statement in science is ultimately open to falsification. But in any concrete investigation, a great many of these statements “[must] not actually be called in question, for by questioning some we deprive others of their very meaning.” Now and then we may have cause to doubt the very foundations of our knowledge. “[But] when this happens, and the lower courses [of the structure] have to be altered, the superstructure has to be knocked down, too, and a batch of concepts in terms of which the scientist’s working problems used to be stated— ‘phlogiston’ and the like— will be swept into the pages of history books.” Anyway, most of the time, “it is only the top course of bricks, the matters which are actively in question, which the scientist has to deal with.” (Laws, Toulmin thinks, are mid-level propositions in the structure of physics, with “principles” like rectilinear propagation beneath them and “hypotheses” above.)
Third, and turning now to the laws of motion, Toulmin suggests that these, by themselves, tell us nothing about the world (see also Watson 1938). The point was “put forcibly, and almost to the point of paradox,” by Wittgenstein in the Tractatus:
The fact that it can be described by Newtonian mechanics tells us nothing about the world; but this tells us something, namely, that it can be described in that particular way in which as a matter of fact it is described. (6.342)
In one sense, this merely restates what has already been said. The laws of motion are not empirical generalizations of the “all A’s are B’s” variety; what they provide is “a form of description to use in accounting for [motion].” Also, there is “a division of labour in physics” between laws themselves and statements about their scope. Laws of nature do not come with a handbook detailing their scope and applications; this has to be worked out through careful experimentation.
But there is a more basic point here, which is this. Laws of nature by themselves don’t do anything, including represent the world. “[It] is we who do things with them, and there are several different kinds of things we can do with their help.” Consequently,
there’s no need for us to be puzzled by the question whether Newton's Laws are descriptions, definitions, or assertions about methods of measurement: rather, it is up to us to see how in some application physicists use them to describe, say, the way a shell moves, in others to define some quantity as electromotive force, and in others again to devise a mode of measurement of, say, the mass of a new type of fundamental particle.
To understand laws of nature, ask how scientists use them in their concrete practices. As Wittgenstein (1953) counseled, “Don’t think, but look!”
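To make this division of uses a little more tangible, here is a small sketch of my own (with made-up numbers, and not drawn from Toulmin): the same relation, F = ma, employed once to describe a motion and once as a mode of measurement. Nothing in the formula itself settles which job it is doing; that is fixed by how it is put to work.

```python
# A toy illustration (mine, not Toulmin's) of one law put to different uses.
# Newton's second law, F = m*a, can serve to describe a motion (predict how far
# a body travels under a known force) or as a mode of measurement (infer an
# unknown mass from a measured force and acceleration).

def predict_distance(mass_kg: float, force_n: float, time_s: float) -> float:
    """Describe: distance covered from rest under a constant force."""
    accel = force_n / mass_kg         # the law licenses this step
    return 0.5 * accel * time_s ** 2  # elementary kinematics does the rest

def infer_mass(force_n: float, measured_accel: float) -> float:
    """Measure: use the same law to determine a mass, m = F / a."""
    return force_n / measured_accel

if __name__ == "__main__":
    # Describing: a 2 kg body pushed with a constant 10 N for 3 s (illustrative numbers).
    print(predict_distance(mass_kg=2.0, force_n=10.0, time_s=3.0))  # 22.5 m
    # Measuring: a body observed to accelerate at 4 m/s^2 under a 10 N force.
    print(infer_mass(force_n=10.0, measured_accel=4.0))             # 2.5 kg
```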
* * *
At this point Toulmin turns to theories, and again he departs from philosophical orthodoxy. According to the standard logical empiricist view, theories are best reconstructed as axiomatized deductive systems plus some sentences that give content to this “calculus.” This is not quite a view of theories in the raw; no scientific theory ever presented itself to a philosopher in the costume of first-order logic. Still, it implies a view of what theories essentially are: namely, deductively-closed sets of sentences. It was a conception to flatter the predilections of scientifically-minded philosophers, many of whom studied mathematics, and nearly all of whom were smitten with the power and promise of predicate logic (Toulmin 1977a).
Toulmin offers a very different picture. Recall that, for him, laws have several uses in scientific practice. One of these is supplying warrants for inference: “inference tickets” in Gilbert Ryle’s terminology (Ryle 1949). Now, a remarkable thing about these “tickets” is that they are not one-way tickets that allow you to travel from a single starting point to a select destination. Instead, they are more like day passes, which allow you to explore the area served by the transportation system at your whim and discretion. Toulmin suggests that we take this metaphor of exploration seriously. Laws of nature really do allow us to explore landscapes of possibility in an open-ended way. So then what exactly is a scientific theory? Toulmin argues that we should think about it in terms of another metaphor. If laws of nature are like day passes (in one of their applications, at least), then theories are best thought of as maps.
Cool— theories are maps. But maps of what, exactly? For Toulmin, the answer is phenomena. Theories show scientists how to navigate a space of phenomena using the routes prescribed by a set of inferential techniques. These routes, moreover, are practically unlimited. A scientific theory can take you to an indefinitely large number of places.
But this intriguing point does not exhaust the usefulness of the map metaphor. Changing direction somewhat, Toulmin suggests that scientists can be seen as “surveyors of phenomena.” Like surveyors, they are able “from a limited number of highly precise and well-chosen measurements” to produce a “map” from which one can read off “an almost unlimited number” of facts of nearly equal precision. This observation, he thinks, resolves a general point of mystification in “the traditional logical account of the sciences.” In this account, there is a serious puzzle about how experiments are used to establish scientific theories. “In the first place, physicists seem to be satisfied with far fewer observations than logicians would expect them to make”: a divergence that “is partly to be accounted for by the logician’s confusion between laws and generalizations— one would hesitate to assert, say, that all ravens are black if one had seen only a half dozen of the species, whereas to establish the form of a regularity in physics only a few careful observations are needed.” Yet there is another, related difficulty. How are new applications of a theory related to the observations on which the theory was first established? Here Toulmin observes that not all applications of a theory need to have been anticipated when the theory was established, just as not everything that can be read off a map needs to have been put into that map by the mapmaker. “Having made a limited number of highly accurate observations… one [will sometimes be] in a position to formulate a theory with the help of which one can draw, in appropriate circumstances, an unlimited number of observations of comparable accuracy.” (Toulmin knows that not all phenomena can be mapped in a simple way. The question of how many observations need to be made before a theory can be considered trustworthy is something that will vary from system to system “and which it is part of a physicist’s training to learn to judge.”)
There’s more. In many areas of physics phenomena are covered by two or more theories “in which techniques of different degrees of sophistication are employed.” What is the relation of one theory to the other(s)? Again, Toulmin recommends the map metaphor as a guide. He first imagines a simple road map featuring several towns and the roads connecting them. Such maps are useful and familiar, he notes, but they needn’t be the only maps of an area. For example, there may also be more elaborate physical maps drawn to a larger scale and showing features in greater detail. In these maps, “roads will perhaps be drawn to scale, not represented by lines of purely conventional widths, while towns and villages will be marked, not as mere dots and blobs of standard sizes, but as having definite shapes and [layouts].”
Now, there are two things to notice about these maps. First, many things can be represented on the physical map that cannot be pictured on the road map. This is a consequence of the way the maps are made, and of “the comparable [and deliberate] poverty of the system of signs used on the road map.” Second, and conversely, “given the physical map, one could produce a satisfactory road map” with little additional effort. But this does not mean that the road map is somehow deficient or redundant. Indeed, “for some applications one will be able to discover the thing one wants to know, e.g. distances by car, more easily from the road map than from the physical one.” Maps are evaluated relative to the purposes of map-users.
Turning to theories, Toulmin suggests that the relation between geometrical optics and the wave-theory of light is similar to that between a road map and a detailed physical map.
Thus the fact that one can explain on the wave-theory, not only all the phenomena that can be accounted for on the geometrical theory, but also why the geometrical account holds and fails to hold where it does, is like the fact that one can construct a road map from a physical map; but again, [this] is not a sign that the geometrical theory needs to be superseded for all purposes. Road maps did not go out of use when detailed physical maps were produced.
Still, “[the] conceptual equipment of the geometrical theory, like the system of signs on a road map, is too poor for one to do with it all that can be done with the wave-theory.” Even the notion of a light-ray “is artificial [in] very much the way that the conventional-width road is, and has to be abandoned in the wave-theory because the accuracy with which one wants to answer questions about optical phenomena is too great for the conventional picture to be retained.” So in science we have multiple representations of phenomena fit for different purposes. And sometimes these even make incompatible assumptions about the nature of phenomena, because they are associated with different models (waves and rays).
These points are now standard philosophical fare. It is worth pausing, then, to emphasize that Toulmin stated them clearly over seventy years ago, in 1953.
What light does this throw on that perennial philosophical problem, the problem of theory change? Arguably quite a bit. To begin, we may observe that at any given time there will typically be one theory that is regarded as the “fundamental” theory in an area: as answering all of the most important questions in the most satisfactory way. But in science, the standards for what constitutes a “complete” or successful theory are subject to change. “[There] is at any given stage a standard of what sorts of things require explaining,” Toulmin argues: “The standard accepted at any time determines the horizon of physicists’ ambitions at that time, the goal which for them would have been reached if ‘everything’... had been found a place in the theories of physics.”
[Yet in] physics, as in traveling, the horizon shifts as we go along. With the development of new theories, new problems are thrown into prominence, [and] ways are seen of fitting into physical theory things which before had hardly been recognized as matters requiring a place at all.
An especially important thing happens when one set of standards, one ideal, replaces another; and indeed this is when philosophical disputes are prone to break out in the sciences. “In the change-over from Aristotelian to Newtonian dynamics, for example, certain phenomena which were previously regarded as ‘natural’ and taken for granted, such as carts stopping when the horses ceased to pull and heavy bodies falling to the ground, came to be thought of as complex phenomena needing explanation.” In this respect, the new dynamics expanded the horizon of physical theory. Yet in other respects the horizon shrank, as phenomena that had previously been regarded as complex and in need of explanation became regarded as simple. (Examples include arrows flying and the motions of planets in their tracks.) It was this shrinkage, Toulmin thinks, which made the development of a new dynamics so bitterly hard; and this friction is typical of episodes of major conceptual turnover.*
[* Again, it is difficult to read passages like this and not think of Kuhn— especially when Toulmin notes that evaluative standards “[are] something with which scientists grow familiar in the course of their training, but which [are] hardly ever stated.” But Toulmin was writing nearly a decade before Structure, and his purpose was mostly to reveal the contingency of the standards used in theory evaluation.]
There is one more part of Toulmin’s discussion of theories that warrants comment. This is his answer to a question posed by Eddington (1939): How much does the structure of physical theory tell us about nature, and how much about ourselves? Again, the cartographic analogy provides a starting point. Toulmin observes that, in cartography, a great deal needs to be contributed by the mapmaker before any map can be drawn. “Cartographers and surveyors have to choose a base-line, orientation, scale, method of projection and system of signs before they can even begin to map an area.” Depending on what choices they make, the maps that result from their labors may take a variety of forms. But the fact that human choices shape the form a map takes does not lessen the value of the map. “For the alternative to a map of which the method of projection, scale and so on were chosen in this way, is not a truer map— a map undistorted by abstraction: the only alternative is no map at all.”
The same holds in physical theory. Recall that on Toulmin’s view, many features of theories need to be understood in terms of the methods of representation these theories employ. This illustrates the key role human decisions play in scientific theorizing. If we had chosen different modes of representation, we would have ended up with different theories. Yet it does not imply anything improper about theories that they are shaped by human decisions. “For the alternative to a theory that has been built up with the help of decisions of this kind is not a truer theory… [it] is no theory at all.” Again, Toulmin channels Wittgenstein: “If we are to say anything, we must be prepared to abide by the rules and conventions that govern the terms in which we speak: to adopt these is no submission, nor are they shackles.” It is a condition of human understanding that we abide by certain rules, or else introduce new rules along with new systems of representation. This is a prerequisite of saying anything true, because it is a prerequisite of saying anything at all. Better get used to it.
* * *
I’m going to stop here before I overstay my welcome. There’s more in Toulmin’s book, in particular, a nice discussion of how the existence of “such a temperature as Absolute Zero” is “a consequence of the way in which we give meaning to the notion of temperature, and put degrees of warmth and cold into relation with the number-series.” There’s also a chapter on notions of determinism and necessity in science, where Toulmin’s linguistic orientation is especially pronounced.
Instead of discussing this stuff, I want to do two final things. The first is to talk about the critical reception of Toulmin's book. The second is to situate the project within the broader history of philosophy of science.
Toulmin’s book was widely, and for the most part positively, reviewed. Notices appeared in Science, The Journal of Philosophy, The Philosophical Review, Mind, Philosophy of Science, and The British Journal for the Philosophy of Science, to name only the more prestigious journals. The most extensive and perceptive reviews were penned by Ernest Nagel (Mind) and Michael Scriven (The Philosophical Review). Both were enthusiastic about the book— Scriven in particular— and yet devoted most of their energy to qualifying Toulmin’s claims. Chief among these was the idea that laws of nature do not stand in deductive relations with statements about phenomena; statements about phenomena do not entail laws of nature, and— more controversially— laws of nature do not entail statements about phenomena. This claim hinged on the idea that laws of nature are statements about the form of regularities that, by themselves, say nothing about the world. This is doubtful as a general characterization of laws, the reviewers observed. But even if we go along with it, the point is perhaps narrower than Toulmin indicates, since statements about the form of regularities certainly can be used to deduce statements about phenomena in conjunction with statements about initial and boundary conditions, say. On my reading, Toulmin seems to grant this (as he should). But then it is curious that he lays so much stress on the point that laws stand in no deductive relations to observable phenomena. Here a too-strict adherence to a Wittgensteinian picture of laws seems to have landed him in trouble.
More broadly, some of Toulmin’s reviewers worried that he was peddling a kind of pragmatist-instrumentalism. After all, didn’t he say that laws of nature are simply modes of representing phenomena? And that a major part of theories is a set of techniques for explanatory inference? This was familiar stuff. The name Percy Bridgman was mentioned. But then Toulmin says relatively little about how theories prove their instrumental value, apart from some remarks about “accommodating… phenomena,” solving problems, and the like. In his discussion of scientific change, he says that new theories sometimes prevail at least in part because they find a place for “things which before had hardly been regarded as matters requiring a place at all.” But then he insists, as Kuhn later would, that no global standards of cumulative progress exist. So in the end Toulmin has little to say about how theories are evaluated relative to a body of evidence. This raised the specter of relativism. Surely not all ways of regarding phenomena are equally valid or equally powerful; how then are we to sort the better ones from the worse? And is this even a question practice-based philosophy of science can answer?
The last wasn’t a question that had been seriously posed in 1953. Yet it burst into the open in the 1960s, when Kuhn raised the specter of relativism with much greater urgency than Toulmin ever had. At this point, the “historical turn” in philosophy of science was well and truly underway, pushing scientific practice into the philosophical spotlight. Toulmin played a role in this, publishing another book on philosophy of science in 1961, and then embarking on a brief career as an intellectual historian of science with his second wife, June Goodfield (Toulmin and Goodfield 1961, 1962, 1965).*
[* Probably I will discuss Toulmin’s second philosophy of science book— Foresight and Understanding— at a later date. It was here that he took his own “historical turn” and attempted to come to grips with the historical and contextual criteria used for evaluating theories.]
But in 1953 that was all in the future. Indeed, it was only in 1954 that Toulmin took a serious interest in the history of science, during an exchange visit to Melbourne University (Toulmin 1977b). Before this it was not the historical record but Wittgenstein that was his muse. So where does Toulmin (1953) fit in the fabric of twentieth century philosophy of science?
As I noted above, philosophy of science during the middle part of the twentieth century was dominated by a particular analytical project. This strove to provide formal analyses of things like scientific explanation, theory structure, and confirmation, and in its most extreme form, to build up “a single, comprehensive axiom-system, having [Bertrand] Russell’s analysis of mathematics as its formal core, which [w]ould be capable in principle… of representing the totality of our positive scientific knowledge” (Toulmin 1977). It was a swashbuckling project, built upon the formal techniques of logical analysis and buttressed by a strict prohibition against confusing logical and empirical matters— only the former were the proper subject of philosophy of science. But it was also narrow: deliberately narrow, to be sure, but perhaps even narrower than its champions intended.
It was also largely an American project, at least after most of the leading empiricists had emigrated to the United States. Toulmin, by contrast, was British: a disciple of Wittgenstein in a country without a strong tradition in philosophy of science. At Cambridge, Toulmin’s research supervisor was Richard Braithwaite, an eclectic philosopher who was then “developing [some] ideas about the formal character of scientific explanation [in the tradition of Frank Ramsey]” (Toulmin 1977b).* Then there was Gilbert Ryle, and in London, Karl Popper, newly arrived from New Zealand. Among physicists, Herbert Dingle was philosophically active, as was W. H. Watson (a disciple of Wittgenstein whom Toulmin counted as a major inspiration). Still, there was no center of gravity as there was in the American context. This had the effect of fostering creativity— along with Toulmin, Great Britain produced Mary Hesse (UCL), Michael Scriven (Australian, but trained at Oxford), and N. R. Hanson (American, but trained at Oxford and Cambridge) in the span of a decade. But it also intensified the risk that creative work would struggle to gain traction in a fragmented intellectual landscape. This is evidently what happened to Toulmin’s early work. Personally, Toulmin thrived, gaining a professorship at Leeds in 1955. His attempt to infuse the later Wittgenstein into philosophy of science, by contrast, made little headway.**
[* Braithwaite’s enduring claim to fame is that he supplied the poker that Wittgenstein allegedly brandished at Karl Popper during a 1946 meeting at Braithwaite’s rooms in King’s College.]
[** This remark needs to be clarified. Certain of the later Wittgenstein’s ideas did eventually find their way into mainstream philosophy of science: in particular his views on perception (Kindi 2017). Yet Toulmin’s own applications of Wittgenstein, as well as his Wittgenstein-inspired methodological approach, remained marginal. (For some responses to Toulmin’s philosophy of science, mostly negative, see Alexander (1958), Lakatos (1976), and Suppe (1977).)]
A final thought. Paul Franco has recently argued that “ordinary language” philosophers played an underappreciated role in the historical turn in philosophy of science (Franco 2021). Along with Toulmin, he discusses Scriven, Hesse and the physicist Watson. According to Franco, the problems in view for ordinary language analysis “[were] part and parcel of the historical turn, as well as its reception.” The implication is that, by attending to these contributions, we can better understand why philosophy of science developed as it did in the decades following World War II.
I think this is basically right, and a welcome corrective to some of the narratives we tell about the field in the middle of the century. Still, when it comes to Toulmin’s early work, I’m inclined to emphasize a different point. Far from illuminating how philosophy of science actually developed during the twentieth century, Toulmin’s philosophy is more useful for drawing attention to an unrealized possibility: a road not taken. Philosophy of science in the post-war period was decidedly not Wittgensteinian, at least in the sense exemplified by Toulmin. In many ways it was out of tune with the basic thrust of this philosophy. And while philosophers would eventually find their way back to topics like representation and understanding, almost none of this work would be influenced by Toulmin’s philosophical approach. The road actually taken may bear the stamp of ordinary language analysis, but of Toulmin’s philosophy of science in practice, very little survived.
References
* I’ve embedded a couple videos of Toulmin after the reference list. One is short and the other is long (and very interesting, at least for Toulmin aficionados— he talks about Wittgenstein around the 42:00 mark). Scroll down to find them…
** For a delightful short primer on Wittgenstein’s philosophy (covering both the early and late philosophy, but emphasizing the Tractatus), see Gilbert Ryle’s short article in Scientific American, called “The work of an influential but little-known philosopher of science: Ludwig Wittgenstein” (1957). This article also contains an entertaining portrait of Wittgenstein as a character (“He was a spellbinding and somewhat terrifying person. He had unnervingly piercing eyes…”).
Alexander, H. G. 1958. General statements as rules of inference. In H. Feigl, M. Scriven and G. Maxwell, eds., Minnesota Studies in the Philosophy of Science, Volume II, 309–329. Minneapolis: University of Minnesota Press.
Carnap, R. 1945. On inductive logic. Philosophy of Science 12:72–97.
Eddington, A. 1939. The Philosophy of Physical Science. Cambridge (UK): Cambridge University Press.
Einstein, A. 1934. On the method of theoretical physics. Philosophy of Science 1:163–169.
Feigl, H. 1970. The “orthodox” view of theories: remarks in defense as well as critique. In M. Radner and S. Winokur, eds., Analyses of Theories and Methods of Physics and Psychology, 3–16. Minneapolis: University of Minnesota Press.
Franco, P. L. 2021. Ordinary language philosophy, explanation, and the historical turn in philosophy of science. Studies in History and Philosophy of Science Part A 90:77–85.
Friedman, M. 2002. Kuhn and Logical Empiricism. In T. Nickles, ed., Thomas Kuhn, 19–44. Cambridge (UK): Cambridge University Press.
Goodman, N. 1955. Fact, Fiction and Forecast. New York: The Bobbs-Merrill Company, Inc.
Johannessen, K. S. 1988. The concept of practice in Wittgenstein’s later philosophy. Inquiry 31:357–369.
Kindi, V. 2017. Wittgenstein and philosophy of science. In H. Glock and J. Hyman, eds., A Companion to Wittgenstein, 587–602. New York: Wiley-Blackwell.
Kuhn, T. 1962. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press.
Lakatos, I. 1976. Understanding Toulmin. Minerva 14:126–143.
Monk, R. 1990. Ludwig Wittgenstein: The Duty of Genius. New York: Penguin Books.
Nagel, E. 1954. The Philosophy of Science: An Introduction [book review]. Mind 63:403–412.
Reisch, G. 2005. How the Cold War Transformed Philosophy of Science: To the Icy Slopes of Logic. Cambridge (UK): Cambridge University Press.
Ryle, G. 1949. The Concept of Mind. London: Hutchinson.
Ryle, G. 1957. The work of an influential but little-known philosopher of science: Ludwig Wittgenstein. Scientific American 197:251–259.
Scriven, M. 1955. The Philosophy of Science: An Introduction [book review]. The Philosophical Review 64:124–128.
Suppe, F. (ed.) 1977. The Structure of Scientific Theories (Second Edition). Urbana: University of Illinois Press.
Toulmin, S. 1953. The Philosophy of Science: An Introduction. New York: Harper & Brothers.
Toulmin, S. 1960. Concept-formation in philosophy and psychology. In S. Hook, ed., Dimensions of Mind: A Symposium, 211–225. New York: NYU Press.
Toulmin, S. 1961. Foresight and Understanding: An Inquiry into the Aims of Science. Bloomington: Indiana University Press.
Toulmin, S. 1972. Human Understanding: The Collective Use and Evolution of Concepts. Princeton: Princeton University Press.
Toulmin, S. 1974. Does the distinction between normal and revolutionary science hold water? In I. Lakatos & A. Musgrave, eds., Criticism and the Growth of Scientific Knowledge, 39–48. Cambridge (UK): Cambridge University Press.
Toulmin, S. 1977a. Postscript: the structure of scientific theories. In F. Suppe, ed., The Structure of Scientific Theories, 600–616. Urbana: University of Illinois Press.
Toulmin, S. 1977b. From form to function: philosophy and history of science in the 1950s and now. Daedalus 106:143–162.
Toulmin, S. and Goodfield, J. 1961. The Fabric of the Heavens: The Development of Astronomy and Dynamics. Chicago: The University of Chicago Press.
Toulmin, S. and Goodfield, J. 1962. The Architecture of Matter. Chicago: The University of Chicago Press.
Toulmin, S. and Goodfield, J. 1965. The Discovery of Time. Chicago: The University of Chicago Press.
Watson, W. H. 1938. On Understanding Physics. Cambridge (UK): Cambridge University Press.
Wittgenstein, L. 1922. Tractatus Logico-Philosophicus. New York: Harcourt, Brace, & Company, Inc.
Wittgenstein, L. 1953. Philosophical Investigations. Oxford: Basil Blackwell.