Efron, B. and Hastie, T. (2016). Computer Age Statistical Inference: Algorithms, Evidence, and
Data Science. Cambridge University Press.
Efron, B., and Morris, C. (1971). Limiting the risk of Bayes and empirical Bayes estimators—Part
I: The Bayes case. Journal of the American Statistical Association 66, 807–815.
Efron, B., and Morris, C. (1972). Limiting the risk of Bayes and empirical Bayes estimators—Part
II: The empirical Bayes case. Journal of the American Statistical Association 67, 130–139.
Efron, B., and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. London: Chapman and
Hall.
Fay, R. E., and Herriot, R. A. (1979). Estimates of income for small places: An application of
James-Stein procedures to census data. Journal of the American Statistical Association 74,
269–277.
Felsenstein, J. (1985). Confidence limits on phylogenies: An approach using the bootstrap. Evolution
39, 783–791.
Freund, Y., and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and
an application to boosting. Journal of Computer and System Sciences 55, 119–139.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of
Statistics 29, 1189–1232.
Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2019). Visualization in
Bayesian workflow (with discussion). Journal of the Royal Statistical Society A 182, 389–402.
Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American
Statistical Association 70, 320–328.
Gelfand, A. E., and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal
densities. Journal of the American Statistical Association 85, 398–409.
Gelman, A. (2003). A Bayesian formulation of exploratory data analysis and goodness-of-fit testing.
International Statistical Review 71, 369–382.
Gelman, A., Vehtari, A., Simpson, D., Margossian, C. C., Carpenter, B., Yao, Y., Bürkner, P. C.,
Kennedy, L., Gabry, J., and Modrák, M. (2020). Bayesian workflow.
www.stat.columbia.edu/~gelman/research/unpublished/Bayesian_Workflow_article.pdf
Geman, S., and Hwang, C. R. (1982). Nonparametric maximum likelihood estimation by the method
of sieves. Annals of Statistics 10, 401–414.
Gigerenzer, G., and Todd, P. M. (1999). Simple Heuristics That Make Us Smart. Oxford University
Press.
Giordano, R., Broderick, T., and Jordan, M. I. (2018). Covariances, robustness, and variational
Bayes. Journal of Machine Learning Research 19, 1–49.
Good, I. J., and Gaskins, R. A. (1971). Nonparametric roughness penalties for probability densities.
Biometrika 58, 255–277.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. Cambridge, Mass.: MIT Press.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and
Bengio, Y. (2014). Generative adversarial networks. Proceedings of the International Conference
on Neural Information Processing Systems, 2672–2680.
Gordon, N. J., Salmond, D. J., and Smith, A. F. M. (1993). Novel approach to nonlinear/non-
Gaussian Bayesian state estimation. IEE Proceedings F - Radar and Signal Processing 140,
107–113.
Greenland, S. (2005). Multiple-bias modelling for analysis of observational data. Journal of the
Royal Statistical Society A 168, 267–306.