Are the repeated sampling principle and Cournot’s principle frequentist?
Marshall Abrams, philosophy, Alabama Birmingham
Ruobin Gong, statistics, Rutgers
Alistair Wilson, philosophy, Birmingham UK
Harry Crane, statistics, Rutgers
Historically, the probability calculus began with repeated trials: throws of a fair die, etc. Independent and identically distributed (iid) random variables still populate elementary textbooks, but most statistical models permit variation and dependence in probabilities. Already in 1816, Pierre Simon Laplace explained that nature does not follow any constant probability law; its error laws vary with the nature of measurement instruments and with all the circumstances that accompany them. In 1960, Jerzy Neyman explained that scientific applications had moved into a phase of dynamic indeterminism, in which stochastic processes replace iid models.
Statisticians who call themselves “frequentists” have proposed two competing principles to support inferences about parameters in stochastic processes and other complex probability models.
- Repeated sampling principle: Assess statistical procedures by their behavior in hypothetical repetitions under the same conditions.
- Cournot’s principle: Justify inferences by statements to which the model gives high probability.
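The repeated sampling principle can be illustrated with a minimal simulation. The sketch below (the normal model, sample size, and number of repetitions are illustrative assumptions, not part of the panel abstract) assesses a standard 95% confidence interval for a mean by its coverage frequency across many hypothetical repetitions under the same conditions:

```python
import random
import statistics

# Illustrative assumptions: a normal model with known sigma,
# a fixed sample size, and 10,000 hypothetical repetitions.
random.seed(0)
TRUE_MEAN, SIGMA, N, REPS = 10.0, 2.0, 50, 10_000
Z = 1.96  # standard normal quantile for a 95% interval

covered = 0
for _ in range(REPS):
    # One hypothetical repetition of the experiment.
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    xbar = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5
    if xbar - half <= TRUE_MEAN <= xbar + half:
        covered += 1

coverage = covered / REPS
print(f"empirical coverage: {coverage:.3f}")  # close to 0.95
```

Cournot’s principle can be read onto the same simulation: the model gives probability close to one to the event “the coverage frequency over 10,000 repetitions falls within a small tolerance of 0.95,” and the principle licenses asserting that this salient event will happen.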
Cox and Hinkley coined the name “repeated sampling principle” in 1974. The name “Cournot’s principle” first became current in the 1950s. But both ideas are much older. When interpreted as pragmatic instructions, the two principles are more or less equivalent. But they can also be interpreted as philosophical justifications – even as explanations of the meaning of probability models – and then they seem very different.
Questions for the panel:
- The repeated sampling principle can be taken as saying that the meaning of a probability measure lies in the assumption that salient probabilities and expected values given by the measure will be replicated by frequencies and averages in hypothetical repetitions. Is this a frequentist interpretation of probability?
- Similarly, Cournot’s principle says that the meaning of a probability measure lies in the assumption that salient events with probability close to one will happen. Is this a frequentist interpretation of probability?
- From a philosophical perspective, do these two interpretations of probability differ?
- The game-theoretic foundation for probability generalizes Cournot’s principle to the case where nonstochastic explanatory or decision variables may be determined in the course of observing the data and be influenced by the values of earlier variables. Here the principle says that a salient betting strategy using the probabilities for the stochastic variables will not multiply its capital by a large factor. Is this principle frequentist?
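The game-theoretic form of the principle can be sketched in a simple special case: a fixed fair-coin model rather than the fully sequential setting described above, and a naive fixed-fraction betting strategy (both choices are illustrative assumptions). Because the bettor’s capital is a nonnegative martingale starting at 1, Ville’s inequality says the probability of ever multiplying it by a factor c is at most 1/c, and the simulation checks this empirically:

```python
import random

# Illustrative assumptions: a fair coin, even-odds bets, a fixed
# betting fraction, and a finite horizon. Ville's inequality bounds
# the chance of multiplying initial capital by C at 1/C.
random.seed(1)
REPS, N, C, FRACTION = 20_000, 200, 20.0, 0.2

hits = 0
for _ in range(REPS):
    capital = 1.0
    reached = False
    for _ in range(N):
        stake = FRACTION * capital      # always bet on heads
        if random.random() < 0.5:       # heads, per the fair-coin model
            capital += stake
        else:                           # tails
            capital -= stake
        if capital >= C:
            reached = True
            break
    hits += reached

freq = hits / REPS
print(f"frequency of multiplying capital by {C:g}: {freq:.4f}")
```

In the game-theoretic reading, the model’s meaning is exhausted by such statements: no salient strategy betting at the model’s odds will multiply its capital by a large factor.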
References
- Pierre Simon Laplace (1816), Letter to Bernhard von Lindenau, pp. 1100-1102 of Volume II of Correspondance de Pierre Simon Laplace (1749-1827), edited by Roger Hahn.
- Jerzy Neyman (1960), Indeterminism in Science and New Demands on Statisticians, Journal of the American Statistical Association 55(292):625-639.
- David R. Cox and David V. Hinkley (1974), Theoretical Statistics, Chapman and Hall.
- Glenn Shafer (2022), “That’s what all the old guys said”: The many faces of Cournot’s principle. Working Paper 60, www.probabilityandfinance.com.
- Glenn Shafer and Vladimir Vovk (2019), Game-Theoretic Foundations for Probability and Finance, Wiley.