On Hacking’s Emergence of Probability

Ian Hacking’s The Emergence of Probability (Cambridge University Press, 1975) was in many ways the launching pad for the history of statistics as a scholarly topic in (but not limited to) the history of science. Like its author, the book resists classification. Ian took his graduate training in philosophy at Cambridge, and he preferred simply “philosophy” to more cumbersome labels like “history and philosophy of science.” His first book, The Logic of Statistical Inference (1965), left him dissatisfied by the typical failure to distinguish aleatory from epistemological probability, that is, distributions of chance events from measures of uncertainty. “We seem to be in the grip of darker powers than are admitted into the positivist ontology,” he wrote (15).

He thought the explanation might lie in historical conditions of origin. This historical project, then, had a philosophical purpose, and the book diverges sharply from the usual historical writing. But historians of science were very much interested. To give one decidedly nontrivial example, Lorraine Daston was inspired by the book in the framing of her dissertation and first book on classical probability in the Enlightenment. On a much larger scale, the German philosopher Lorenz Krüger combined Hacking’s book with Thomas Kuhn’s Structure of Scientific Revolutions to organize in 1982-83 a year-long research project on “The Probabilistic Revolution, 1800-1950.” It included about twenty scholars from a wide range of disciplines in residence, and even more visitors. I got in just under the wire as the youngest member of the group.

My copy of The Emergence of Probability records that I acquired it in November 1984, but I had already read it in August 1979. That was about three months after I took my general exam at Princeton and proposed a thesis on nineteenth-century statistical sciences. In the Princeton program, dissertation plans had no role in either the written or the oral section of this examination, but my major field of graduate study, nineteenth-century physics, included thermodynamics and statistical gas theory. My advisor, Charles Gillispie, sat me down and suggested topics I should address as well as the most relevant scholarship. He did not mention his own important work on the topic, probably assuming I already knew it, and within a week or two of starting the research, I did. He emphasized Ian Hacking and Stephen Stigler, plus a few French scholars, perhaps Bernard Bru and Eric Brian. While I am sure I already knew of The Emergence of Probability, I chose my topic without yet having read it.[1] Curiously, as a Stanford senior in 1975-76 I had taken Ian’s lecture course on the history of philosophy, I think from Descartes to Kant. He omitted Leibniz, he said, because he knew too much and found Leibniz too interesting. An intense but naïve undergraduate, I had no idea at the time of Professor Hacking’s new book, and I don’t think he ever mentioned it, or even the topic of probability. By 1979, he had made a serious start on the work that would become The Taming of Chance (1990), and that was why Charles mentioned him. In 1980, on a visit to California, I made an appointment to talk with him about his latest work. He clued me in to a few relevant philosophical writers I had not known about.

Both of Ian’s books on numbers and probability emphasized emergence over historical development. Although many of his subjects, still more in Emergence of Probability than in Taming of Chance, were known to philosophers and historians of science, his approach differed in that he tended to situate them as placeholders rather than as agents of historical change. He gave special attention to bureaucratic actors in his book on the nineteenth century, and to professionals, especially doctors, in the seventeenth. For good reason, the longest entry in his analytical table of contents was for Chapter 5, titled “Signs.” It reads:

Probability is a child of the low sciences, such as alchemy or medicine, which had to deal in opinion, whereas the high sciences, such as astronomy or mechanics, aimed at demonstrable knowledge. A chief concept of the low sciences was that of the sign, here described in some detail. Observation of signs was conceived as reading testimony. Signs were more or less reliable. Thus on the one hand a sign made an opinion probable (in the old sense of Chapter 3) because it was furnished by the best testimony of all. On the other hand, signs could be assessed by the frequency with which they spoke truly. At the end of the Renaissance, the sign was transformed into the concept of evidence described in Chapter 4. This new kind of evidence conferred probability on propositions, namely made them worthy of approval. But it did so by virtue of the frequency with which it made correct predictions. This transformation from sign into evidence is the key to the emergence of a concept of probability that is dual in the sense of Chapter 2.

We should recall that Ian had an important role in introducing English-language readers to Michel Foucault. There are, however, just two mentions of Foucault in the book, both of The Order of Things. While neither appears in this crucial chapter, he credits Foucault near the end (p. 183) for the idea that a transformation of the sign had led to the modern sense of “evidence,” and that in this way (now quoting Foucault) “Hume has become possible.”

Ian was generous enough in giving credit, but always an independent thinker who showed his appreciation by putting ideas to work. For me, I think, as for Raine Daston, the most inspiring aspect of his work was the way he linked important scientific tools and concepts to the discourses and practices of ordinary life, including legal proofs, business contracts, and the pricing of annuities. This last was especially important for his argument, since it formed the basis for that deep and misbegotten link between epistemic and aleatory probability. His particular version of this argument relied heavily on his assertion of a factitious bond between distinct senses of the word “probability,” and it seems to me somewhat strained, since mathematicians did not generally use the word, preferring “doctrine of chances” and the like.

Histories of statistics, quantification, measurement, calculation, and data have gradually come to flourish since 1975. The field, if I may call it that, is scarcely a scholarly specialty, but a loose clustering of topics and approaches. Among historians of science, Hacking is better known for his work on the philosophy of experimentation in Representing and Intervening and for his thoughts on social construction, but the work on probability and statistics has attracted an audience extending over a wide range of social and humanistic scholarship. He had a solid background in the discipline of philosophy and took seriously the fields and topics he engaged with, but he never let them constrain him artificially. He was a genuine intellectual, able to say something interesting about everything he touched.

[1] My sense of the topic came first of all from the chapter on “The Statistical View of Nature” in John Theodore Merz’s A History of European Thought in the Nineteenth Century (1904-1912), pp. 548-626. Everyone regarded Merz as a resource rather than a scholarly model, yet his naïve (as it already seemed) emphasis on “the scientific spirit” encouraged him to move freely across disciplinary boundaries.

