The purpose of this seminar is to introduce students to the roles of randomness in scientific thinking. Topics covered include the following:

1. Is probability intuitive? In a class exercise, students generate sequences of real and fake random coin tosses and then develop tests to detect the difference.

2. What is the role of randomization in the design of scientific experiments (for instance, why are patients randomly assigned to treatments in a medical trial)? We recreate a famous incident in which a teatime conversation led a statistician to conduct an experiment testing whether someone could tell whether milk had been added to a cup of tea first or last.

3. How has statistical thinking been used and abused in the history of IQ testing?

4. In the analysis of environmental problems like global warming, the scientific models used are often deterministic (roughly speaking, such models predict a definite output for a given input). A statistical model, on the other hand, gives predictions in the form of probabilities of different possible outcomes. How can the deep physical understanding embedded in deterministic models be reconciled with statistical approaches to quantifying uncertainty and risk, and why is quantifying uncertainty important?

5. How can fake random numbers, generated on a computer by non-random rules, sometimes carry out complicated calculations that are not easily done by other means?

6. Why is statistical thinking so crucial in modern scientific enquiries in which massive databases of mostly uninteresting information are searched for interesting features (in astronomy, genetics, and market research, for example)?
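The first topic asks for tests that tell real coin-toss sequences from fake ones. One classic check looks at the longest run of identical outcomes: people writing "random" sequences by hand tend to alternate too eagerly and avoid long runs. A minimal Python sketch (the perfectly alternating "fake" sequence below is an invented caricature, not class data):

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive tosses."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
real = [random.choice("HT") for _ in range(200)]  # genuine (pseudo)random tosses
fake = list("HT" * 100)                           # a human-style over-alternating sequence

print(longest_run(real), longest_run(fake))
```

A real 200-toss sequence almost always contains a run of six or more heads or tails, while hand-written "random" sequences rarely do, which makes this a surprisingly effective detector.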
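The tea-tasting experiment in topic 2 is Fisher's famous design, and its analysis reduces to counting: with 8 cups, 4 prepared milk-first, a taster guessing at random identifies all 4 milk-first cups with probability 1 in C(8,4) = 70. A short sketch of that count:

```python
from math import comb

# Fisher's design: 8 cups, 4 milk-first and 4 tea-first; the taster
# must pick out the 4 milk-first cups.
total = comb(8, 4)  # 70 possible selections of 4 cups out of 8

# Probability of a perfect score by pure guessing:
p_exact = comb(4, 4) * comb(4, 0) / total  # 1/70 ≈ 0.014

# Probability of getting at least 3 of the 4 milk-first cups right:
p_at_least_3 = (comb(4, 3) * comb(4, 1) + comb(4, 4) * comb(4, 0)) / total

print(p_exact, p_at_least_3)
```

Only a perfect score is rare enough (about 1.4%) to count as convincing evidence; even 3 of 4 correct happens by chance roughly a quarter of the time.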
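Topic 5 alludes to Monte Carlo methods: numbers produced by a deterministic pseudorandom rule can estimate quantities that are hard to compute directly. A minimal sketch, estimating π by sampling points in the unit square and counting how many fall inside the quarter circle:

```python
import random

random.seed(42)
n = 100_000
# A point (x, y) drawn uniformly from the unit square lands inside the
# quarter circle x^2 + y^2 <= 1 with probability pi/4.
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / n
print(pi_estimate)
```

The estimate's error shrinks like 1/sqrt(n), so with 100,000 samples it is typically within a few hundredths of π.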