Symposium on Evidence in the Natural Sciences

Date & Time


[Figure: excerpt from a 1611 Frankfurt printing of Johannes Kepler's Strena Seu De Niue Sexangula]

FRIDAY, MAY 30, 2014
Scientific Program: 8:00 AM – 3:15 PM
Evening Program: 4:30 – 7:45 PM

Gerald D. Fischbach Auditorium
160 5th Avenue, New York, New York 10010

What is the difference between evidence, fact, and proof? Can we quantify evidence? Is something more evident than something else? What does it take to convince a scientist, a scientific community, and the general public of the correctness of a scientific result in the era of very complicated experiments, big data, and weak signals?

This symposium, co-hosted by the Simons Foundation and the John Templeton Foundation in collaboration with the World Science Festival, addressed these and related questions during a scientific program suited for established researchers, postdoctoral fellows and graduate students working in the natural sciences and allied fields, and during an evening program aimed at those scientists as well as the well-informed general public.

See the foundation news feature on the symposium for further information and photographs.

SPEAKERS

Jim Baggott, Science Writer
Charles Bennett, IBM Research
David Donoho, Stanford University
Peter Galison, Harvard University
Brian Greene, Columbia University
Thomas Hales, University of Pittsburgh
Tim Maudlin, New York University
Amber Miller, Columbia University
William Press, University of Texas at Austin

  • Download Agenda PDF

    Evidence in the Natural Sciences: Friday, May 30, 2014
    8:00 – 9:00 AM  Breakfast & Check-in
    9:00 – 9:05 AM  Welcome & Introduction (Yuri Tschinkel, Simons Foundation; Vladimir Buzek, John Templeton Foundation)
    9:05 – 9:45 AM  The Verification of the Proof of the Kepler Conjecture (Thomas Hales, University of Pittsburgh)
    9:45 – 10:25 AM  Can We Believe Our Published Research? Systemic Failures, Their Causes, a Solution (David Donoho, Stanford University)
    10:25 – 10:55 AM  Break
    10:55 – 11:35 AM  Reproducibility Now at Risk? (William Press, The University of Texas at Austin)
    11:35 AM – 12:15 PM  How Can We Know What Happened Almost 14 Billion Years Ago? (Amber Miller, Columbia University)
    12:15 – 12:55 PM  Evidence, Computation, and Ethics (Charles Bennett, IBM Research)
    12:55 – 1:55 PM  Lunch
    1:55 – 2:35 PM  New Evidence (Peter Galison, Harvard University)
    2:35 – 3:15 PM  Evidence and Theory in Physics (Tim Maudlin, New York University)
    4:30 – 5:15 PM  Tea
    5:15 – 6:45 PM  Panel Discussion (Brian Greene, Columbia University; Peter Galison, Harvard University; Jim Baggott, Science Writer)
    6:45 – 7:45 PM  Reception

Talks

Panel Discussion: Evidence in the Natural Sciences

Brian Greene, Columbia University
Peter Galison, Harvard University
Jim Baggott, Science Writer

Download PDF file of the abstracts

The Verification of the Proof of the Kepler Conjecture

Thomas Hales, University of Pittsburgh

View Slides (PDF)

In 1998, Sam Ferguson and Tom Hales announced the proof of a 400-year-old conjecture made by Kepler. The Kepler conjecture asserts that the most efficient arrangement of balls in space is the familiar pyramid arrangement used to stack oranges at markets.

Their mathematical proof relies heavily on long computer calculations. Checking this proof turned out to be a particular challenge for referees. The verification of the correctness of this single proof has now continued for more than 15 years and is still unfinished at the formal level. This long process has fortified standards of computer-assisted mathematical proofs.
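
For orientation (a standard calculation, not part of the abstract): the optimal density the conjecture identifies can be read off the face-centered-cubic packing, whose unit cell is a cube of side 2√2·r containing four balls of radius r:

\[
\delta \;=\; \frac{4 \cdot \tfrac{4}{3}\pi r^{3}}{\bigl(2\sqrt{2}\,r\bigr)^{3}}
\;=\; \frac{\pi}{3\sqrt{2}} \;\approx\; 0.74048.
\]

The Hales–Ferguson proof establishes that no packing of congruent balls in three-dimensional space does better.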

Can We Believe Our Published Research? Systemic Failures, Their Causes, a Solution

David Donoho, Stanford University

View Slides (PDF)

Statistical evidence has served as a central component of the scientific method for centuries. The proper calculation and interpretation of statistical evidence is now crucial for interpretation of the one million or more research articles published yearly that purport to discover new effects by data analysis.

Traditional practices, which aligned nicely with rigorous statistical analysis, are slipping away. One of these was the idea of defining, in advance of any data gathering, the entire pipeline of data processing and analysis, so the data itself would not affect its own analysis, and theoretical assumptions would hold. Another was the idea of carefully describing post facto the full set of analyses that led to a conclusion, including dead ends and reports left in the file drawer.

A great deal of mischief can be, and has been, unloosed by the spread of less rigorous practices, traceable ultimately to the ease with which data and analyses can be ‘tweaked.’ John Ioannidis suggests that half or more of all published research findings are false.

In this talk, Professor Donoho reviewed some of the validity problems becoming evident, at a global scale, in the combined corpus of scientific knowledge, and how such problems can be detected.
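
A minimal simulation, in Python, of the mechanism described above (an illustration of the general point, not code from the talk; the variant count and sample sizes are arbitrary choices): when a null dataset can be analyzed in many slightly different ways and only the most favorable result is reported, the effective false-positive rate far exceeds the nominal 5 percent.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    n_experiments = 2000   # independent "studies", all with no true effect
    n_subjects = 40        # sample size per group
    n_variants = 10        # hypothetical analysis choices: outlier cuts, subgroups, ...

    false_positives = 0
    for _ in range(n_experiments):
        # Null data: no real effect in any variant. (For simplicity each
        # variant gets fresh data; real forking paths reanalyze one dataset,
        # which is less extreme but pushes in the same direction.)
        data = rng.normal(size=(n_variants, 2, n_subjects))
        # "Tweaking": report only the smallest p-value across the variants.
        p_min = min(ttest_ind(a, b).pvalue for a, b in data)
        false_positives += p_min < 0.05

    print(f"nominal rate: 0.05, observed: {false_positives / n_experiments:.3f}")
    # Under independence the expected rate is 1 - 0.95**10, roughly 0.40.

Preregistering the entire pipeline, as described above, removes exactly this freedom: one analysis is fixed before the data can vote on it.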

Reproducibility Now at Risk?

William H. Press, The University of Texas at Austin

View Slides (PDF)

The reproducibility of experimental results is a central tenet of science, related to equally central notions of causality. Yet irreproducibility occurs all the time in the scientific enterprise, ranging in cause from the fundamentally statistical nature of quantum mechanics and chaotic classical systems to the long list of human fallibilities that can cause experiments to go bad or even mathematical proofs to contain obscure flaws. It has recently been alleged that biomedical experiments are becoming less reproducible, to the point of stymieing new cancer drug development. Are researchers today just sloppier, or is there a more fundamental explanation? What should we do about it?

How Can We Know What Happened Almost 14 Billion Years Ago?

Amber Miller, Columbia University

View Slides (PDF)

How do we go about uncovering the history of the universe, and what evidentiary standards are required to clear the bar from theoretical idea to established scientific framework? Of particular importance in this field is the distinction between a model simply capable of explaining observed phenomena and one with the power to generate unique and testable predictions. However, the application of properly predictive modeling in the theoretical framework is only one side of the coin. Equally important is the rigor with which the experimental investigations are conducted. Perhaps counterintuitively to those outside the field, there are powerful sociological forces at work in the cosmological community that play a constructive role in ensuring this intellectual rigor.

In this talk, Professor Miller discussed the manner and degree to which this affects the debate.


Evidence, Computation, and Ethics

Charles Bennett, IBM Research

View Slides (PDF)

Our world contains abundant evidence of a nontrivial past, and archaeological artifacts are deemed valuable according to how much of this long history they contain evidence of, evidence that cannot be found elsewhere. Nevertheless, some important aspects of the past, like the lost literature of antiquity and the fate of Jimmy Hoffa, have resisted discovery for a long time, and there is reason to believe that some of this information has been irretrievably lost, so that by now it is as objectively ambiguous as the most indeterminate aspects of the future, e.g., which of two radioactive atoms will decay first. Notions of evidence and history can be formalized by a modern version of the old idea of a monkey accidentally typing Shakespeare. A modern monkey boosts its chances by typing at a general-purpose computer instead of a typewriter. The behavior of such a randomly programmed computer turns out to be a rather subtle mathematical question, whose answer contains the seeds of a non-anthropocentric ethics, in which objects (such as a book, a DNA sequence or the whole biosphere) are deemed valuable and worthy of preservation if they contain internal evidence, unavailable elsewhere, of a nontrivial causal history requiring a long time for a computer to recapitulate.
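
The "monkey at a computer" and "a long time to recapitulate" have standard formalizations, sketched here for orientation (the abstract does not spell them out): the algorithmic probability of an output x, and Bennett's logical depth of x at significance level s,

\[
\mathbf{m}(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|},
\qquad
\mathrm{depth}_{s}(x) \;=\; \min\bigl\{\, t(p) \;:\; U(p)=x,\ |p| \le K(x)+s \,\bigr\},
\]

where U is a universal prefix computer fed random bits, |p| is the length of program p in bits, t(p) its running time, and K(x) the length of the shortest program that outputs x. An object is "deep" when every program close to the shortest one takes a long time to produce it.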

New Evidence

Peter Galison, Harvard University

Perhaps the greatest lesson that the history of physics can offer us is this: the development of science is not just about the discovery of new theories and phenomena; it is about the creation of novel forms of evidence and argument. Statistical inference, error bars, golden events — along with argument by diagrams, symmetry, simulation, and Gedankenexperiment — these and other forms of evidence are so much a part of our armamentarium that it is easy to think they are part of the eternal firmament of physics. But they, like the objects and laws they helped establish, are very much the product of hard-fought battles in the development of the discipline. And the evolution of the very form of our evidence is a sign of the dynamic, changing nature of physics itself.

Evidence and Theory in Physics

Tim Maudlin, New York University

View Slides (PDF)

As an empirical science, physics must imply some testable predictions. And since physics proposes to offer a complete description of the physical world, those empirical consequences must follow from the theory all by itself. The main interpretational problem of quantum theory (the measurement problem or Schrödinger cat problem) arises exactly because it is unclear how to connect in a principled way the language of the theory to the language of the empirical data. John Bell offered a solution to this problem, which he called the “theory of local beables.” Professor Maudlin discussed Bell’s general solution, and a few of the exact detailed forms it might take.
