February 28, 2022

Objective probability at different levels of knowledge

Alex Meehan, philosophy, Yale
Monique Jeanblanc, mathematics, Evry
Barry Loewer, philosophy, Rutgers
Tahir Choulli, mathematics, Alberta

In his 1843 book on probability, Cournot argued that objective probabilities can be consistent with God’s omniscience.  As he saw the matter, truly objective probabilities are the probabilities of a superior intelligence at the upper limit of what might be achieved by human-like intelligence.  As he explained,

Surely the word chance designates not a substantial cause, but an idea: the idea of the combination of many systems of causes or facts that develop, each in its own series, each independently of the others. An intelligence superior to man would differ from man only in erring less often or not at all in the use of this idea. It would not be liable to consider series independent when they actually influence each other in the causal order; inversely, it would not imagine a dependence between causes that are actually independent. It would distinguish with greater reliability, or even with rigorous exactness, the part due to chance in the evolution of successive phenomena. . . . In a word, it would push farther and apply better the science of those mathematical relations, all tied to the idea of chance, that become laws of nature in the order of phenomena.

In the 1970s and 1980s, the consistency of probabilities at different levels of knowledge was studied within measure-theoretic probability by Paul-André Meyer’s Strasbourg seminar.  One question was whether the semimartingale property is preserved when a filtration is enlarged.  A discrete-time process S is a semimartingale with respect to a filtration F if it can be written as the sum of two processes, S = A + B, where

  • A is predictable with respect to F, in the sense that an observer whose growing knowledge is represented by F already knows the value of A_{t+1} at time t, and
  • B is a martingale, meaning that this observer assigns expected value zero to B’s gain on the next step: E(B_{t+1} | F_t) = B_t.
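
In discrete time this decomposition is the Doob decomposition: the predictable increment is A_{t+1} - A_t = E(S_{t+1} - S_t | F_t), and B collects what remains.  A minimal numerical sketch (the variable names are illustrative, not from any of the references), for a biased ±1 walk:

```python
import random

random.seed(0)

# Biased coin-flip walk: each step is +1 with probability p, -1 otherwise.
# Under the walk's natural filtration F, the Doob decomposition S = A + B has
#   A_{t+1} - A_t = E(S_{t+1} - S_t | F_t) = 2p - 1   (predictable drift)
#   B_{t+1} - B_t = (S_{t+1} - S_t) - (2p - 1)        (martingale part)
p = 0.7
T = 10

S = [0.0]
A = [0.0]  # predictable part: known one step ahead
B = [0.0]  # martingale part: expected gain zero given F_t
for t in range(T):
    step = 1.0 if random.random() < p else -1.0
    S.append(S[-1] + step)
    A.append(A[-1] + (2 * p - 1))
    B.append(B[-1] + step - (2 * p - 1))

# The decomposition reconstructs S exactly at every time.
assert all(abs(S[t] - (A[t] + B[t])) < 1e-12 for t in range(T + 1))
```

The point of the sketch is only that the split is mechanical once the filtration is fixed: the drift is whatever the observer can predict, and the martingale part is the residue.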

An observer whose knowledge is represented by a larger filtration F* knows more and so can predict more, but if the remaining unpredictable part is still a martingale, then the probabilities with respect to F can be considered just as objective as those with respect to F*.  In discrete time, the semimartingale property is always preserved, but in continuous time regularity conditions are needed.
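
A toy discrete-time check of this (the function name and parameters below are illustrative): enlarge the natural filtration of a symmetric ±1 walk by revealing its terminal value S_N at the outset.  Conditional on S_t and S_N, the remaining steps are exchangeable, so the insider’s expected next step is (S_N - S_t)/(N - t); subtracting this accumulated drift leaves a martingale, and the walk remains a semimartingale in the enlarged filtration.  Exact enumeration confirms the drift formula:

```python
from itertools import product
from fractions import Fraction

# Symmetric +-1 random walk of length N, all 2^N paths equally likely.
# The enlarged filtration F* reveals the terminal value S_N at time 0.
# Under F*, conditionally on (S_t, S_N), the expected next step is
# (S_N - S_t)/(N - t), so subtracting this drift from S leaves a
# martingale: the semimartingale property survives the enlargement.
N = 8
paths = list(product((-1, 1), repeat=N))

def insider_drift(t, s_t, s_N):
    # Exact conditional expectation of the step at time t,
    # given S_t = s_t and S_N = s_N, by brute-force enumeration.
    steps = [p[t] for p in paths
             if sum(p[:t]) == s_t and sum(p) == s_N]
    assert steps, "empty conditioning event"
    return Fraction(sum(steps), len(steps))

t, s_t, s_N = 3, 1, 4
assert insider_drift(t, s_t, s_N) == Fraction(s_N - s_t, N - t)
```

Enumeration is used instead of simulation so the check is exact; with N = 8 there are only 256 paths to inspect.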

The most extreme enlargement of any filtration is the filtration that knows the world’s entire trajectory at the outset.  In Cournot’s picture, this would be God’s filtration.  Every process is predictable and hence a semimartingale with respect to this filtration.

In recent decades, the enlargement of filtrations has been used to study insider trading and default risk in mathematical finance.

The philosophers Barry Loewer and David Albert, inspired largely by statistical mechanics, have proposed a picture roughly similar to Cournot’s, in which the superior intelligence is represented by “standard Lebesgue measure over the physically possible microstates” consistent with a description of the universe right after the Big Bang, and God is replaced by David Lewis’s Humean mosaic.  Loewer has called this the “Mentaculus vision”.

Questions for the panel:

  1. The models used by statisticians are accessible to their actual resources for representation and computation.  To what extent does Cournot’s (or Loewer’s) picture provide an objective foundation for these models?
  2. A filtration F* is an enlargement of F when the sigma-algebra F*_t contains the sigma-algebra F_t for each time t.  Can we suppose that the superior intelligence’s filtration is an enlargement of a data scientist’s plan for making observations?  Of a data scientist’s analysis of data already obtained?
  3. Is there a filtration for the scientist in the Mentaculus vision?  (This would mean that the scientist knows in advance the trajectory of his future knowledge as a function of the world’s trajectory.)
  4. Cournot was an outspoken opponent of Bayesian inference.  Does Bayesian uncertainty about Mentaculus take us outside the Mentaculus vision?


References:

  1. Antoine Augustin Cournot (1843), Exposition de la théorie des chances et des probabilités, Hachette, Paris.  Some passages relevant to this discussion are translated in “Cournot in English”.  Oscar Sheynin has provided a complete translation.
  2. Ashkan Nikeghbali (2006), An essay on the general theory of stochastic processes, Probability Surveys 3:345-412.  Includes a chapter on the enlargement of filtrations and references to applications in finance.
  3. Tahir Choulli, Catherine Daveloose, and Michèle Vanmaele (2020), A Martingale Representation Theorem and Valuation of Defaultable Securities, Mathematical Finance 30(4):1527-1564.  Considers two levels of information: public information and, possibly, additional information about the default time of a firm or the death time of an insurance policyholder.
  4. Barry Loewer (2020), The Mentaculus vision, in Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature, Valia Allori (ed.), pp. 3-29.