I tell students that I think of Statistical Mechanics as the crossroads of Physical Theories. This makes it pedagogically very exciting. At a deeper level, the apparent conflict between microscopic reversibility and macroscopic irreversibility makes for interesting and invigorating classroom discussions. I often use ideas from (and refer students to) the wonderful papers by Joel Lebowitz (http://www.scholarpedia.org/article/Time%27s_arrow_and_Boltzmann%27s_entropy). Also, why is it that for (almost) arbitrary initial conditions, systems approach the same equilibrium distribution? I encourage students to explore the ideas of time’s arrow and the existence of Statistical Equilibrium through Molecular Dynamics Simulations. Here, they encounter counter-intuitive phenomena such as the Asakura-Oosawa effect, which is difficult to understand from a microscopic point of view, but not so difficult if one assumes the principles of Statistical Equilibrium.

Another exciting aspect is the existence of phase transitions and critical phenomena. It is difficult to see how the same microscopic interactions can lead to vastly different macroscopic behavior in different phases. Can the principles of Statistical Mechanics account for the existence of matter in different phases? In my class, students also get to explore Markov-Chain Monte-Carlo methods for sampling equilibrium Statistical Distributions. This allows them to simulate phase transitions in systems such as the Lennard Jones Gas and the Ising Model.

Lectures on Statistical Physics

Following are (roughly) my undergraduate lectures on Statistical Mechanics.

These build up from the idea of the ‘Arrow of Time’ and the apparent conflict between microscopic reversibility and macroscopic irreversibility, eventually leading up to the idea of statistical equilibrium. The various equilibrium distributions are then explored and applied extensively to various interesting problems (polymer elasticity, ferromagnetism, chemical equilibrium, surface adsorption, stellar equilibrium and others). Quantum indistinguishability is discussed and applied to degenerate quantum systems.

Lectrues_Stat_Phys-1

Exploring Statistical Physics through Molecular Dynamics Simulations

Molecular Dynamics Simulations (MDS) are an excellent pedagogical tool (apart from being a research tool) to explore the foundations of Statistical Physics. Does an arrow of time arise out of time-symmetric Newton’s Laws? Can phase transitions occur if ‘particles’ do not have an attractive interaction, but instead a short-ranged repulsive interaction? How does one know if a system of particles, in equilibrium, is in a solid, liquid, or vapor phase? To explore these questions, the two-dimensional ‘Lennard Jones Gas’ is an excellent pedagogical tool.

The Lennard Jones Gas

The Lennard Jones gas is a model of the interaction between inert gas atoms, which is spherically symmetric. For pedagogical purposes, it is useful to simulate the system in 2-D: the system is easy to visualize, and requires less computational resources.
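
In reduced units (setting ε = σ = 1 as defaults; the function names below are my own illustrative sketch, not the course code), the 12-6 Lennard Jones potential and the corresponding radial force may be written as:

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of the radial force, F(r) = -dV/dr = 24*eps*(2*sr12 - sr6)/r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r
```

A quick sanity check: the potential has its minimum V = −ε at r = 2^(1/6) σ, where the force vanishes.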

Following is a strategy to implement the dynamics in 2-D and to study equilibrium distributions

Lennard_Jones_Fluid

The strategy, implemented as code

The above code simulates a fixed-energy system which, in equilibrium, represents an equivalent microcanonical distribution. To simulate a canonically distributed system (fixed temperature), we introduce the so-called ‘Berendsen Thermostat’, which exploits the fact that the temperature is a measure of the mean kinetic energy of a (classical) system. The strategy is to re-scale the velocities of the particles such that the (instantaneous) kinetic energy is forced to be equal to the mean kinetic energy associated with the target temperature. One possible computational exploration is to determine the speed distribution and to verify that the Maxwell Speed Distribution (in 2-D) is achieved.
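
A minimal sketch of this velocity re-scaling step (assuming unit masses and kB = 1; in 2-D, equipartition gives a mean kinetic energy of kB T per particle, so T_inst = Σ v² / 2N):

```python
import numpy as np

def rescale_velocities(v, T_target, kB=1.0):
    """Velocity-rescaling thermostat step for a 2-D system (unit masses).

    In 2-D, equipartition gives <KE>/N = kB*T, so the instantaneous
    temperature is T_inst = sum(v^2) / (2*N*kB). Rescaling v -> lam*v
    with lam = sqrt(T_target/T_inst) forces the kinetic energy onto
    the target temperature.
    """
    N = len(v)
    T_inst = np.sum(v**2) / (2.0 * N * kB)
    lam = np.sqrt(T_target / T_inst)
    return v * lam
```

(The actual Berendsen scheme relaxes the temperature gradually toward the target over a time constant; the hard rescaling above is the limiting case.)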

Code implementation of the Berendsen Thermostat and Maxwell Speed Distribution. As a bonus, the system of atoms is also visualized

Observing Phases

Gas, liquid and solid phases can be observed by controlling the temperature of the system. Following are animations generated for a system of 900 particles. The solid phase is quite clear in the last animation, which shows a lively solid! Note the dynamic nature of the solid boundary, where atoms are continuously exchanged with a gaseous environment, since the system is technically in a mixed phase (due to the constant volume of the system).

Observing Pure Phases: The NPT distribution

The above simulations involve a system of particles at a constant temperature and volume (the temperature controlled by a ‘thermostat’). However, this does not lead to pure phases, since the volume of the system is fixed. Phase transitions in particle systems naturally take place at fixed pressure and temperature, not at fixed volume. At a given pressure, as the temperature is changed, the density (equivalently, volume) of the system changes in response, resulting in the formation of pure phases. Therefore, we need an algorithm to control both the temperature and the pressure of the system. Naively, we could think of doing this by including an additional Newtonian interaction in which the boundaries of the system are subject to a constant external pressure, resulting in dynamics of not just the particles, but the boundaries themselves. Since our system is confined to a 2-D ‘torus’ without boundaries, this cannot be implemented. Therefore, we need to regulate the pressure in some other way. To do this, we look for an intrinsic definition of pressure, rather than the extrinsic definition as ‘force per unit area on a wall’. This is done through the so-called Virial Pressure.

The Virial Pressure

Virial_Pressure
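
In 2-D, the virial expression reads P A = N kB T + (1/2) Σ_{i<j} r_ij · F_ij. As a hedged sketch of how this might be computed for the 2-D Lennard Jones system (assuming unit masses, kB = 1 and minimum-image periodic boundaries; all names are illustrative):

```python
import numpy as np

def virial_pressure_2d(pos, vel, box, epsilon=1.0, sigma=1.0, kB=1.0):
    """Virial pressure of a 2-D Lennard-Jones system on a periodic box.

    P*A = N*kB*T_inst + (1/2) * sum_{i<j} r_ij . F_ij  (d = 2),
    with T_inst read off from the kinetic energy (unit masses).
    """
    N = len(pos)
    area = box * box
    T_inst = np.sum(vel**2) / (2.0 * N * kB)     # 2-D equipartition
    virial = 0.0
    for i in range(N - 1):
        # minimum-image separations to all later particles
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)
        r2 = np.sum(dr**2, axis=1)
        sr6 = (sigma**2 / r2) ** 3
        # r . F for the LJ pair force equals 24*eps*(2*sr12 - sr6)
        virial += np.sum(24.0 * epsilon * (2.0 * sr6**2 - sr6))
    return (N * kB * T_inst + 0.5 * virial) / area
```

In the dilute limit the virial sum vanishes and the expression reduces to the ideal-gas law P = N kB T / A.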

Following is a computational strategy to implement a constant temperature-pressure system for the 2-D Lennard Jones system, using a thermostat and a barostat

Lennard_Jones_NPT_Strategy

The above strategy, implemented as Python code

The ‘instantaneous’ temperature (mean kinetic energy per particle), the virial pressure and the density show large fluctuations, due to the size of the system. These fluctuations fall as the inverse square root of the number of particles in the system. For a truly macroscopic system, they are usually imperceptible. The above code produces pure phases, which can be categorized as solid, liquid or gas, according to the behavior of the so-called ‘Radial Distribution Function’.

The Radial Distribution Function

The solid phase is clearly evident in the above simulations due to obvious long-range order (crystal lattice structure). However, pictorially, the difference between the liquid and gas phases is not obvious. The Radial Distribution Function (or pair correlation function) measures correlations between atoms.

Following are some slides on the radial distribution function (that I discuss in the Stat Phys Computational Lab)

RDF

Following is a computational strategy, as pseudo-code

RDF_Strategy
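
A sketch of the g(r) computation in 2-D (assuming minimum-image periodic boundaries; each shell count is normalized by the ideal-gas expectation ρ N 2π r Δr, so that g(r) → 1 at large r for an uncorrelated system; names are illustrative):

```python
import numpy as np

def radial_distribution_2d(pos, box, nbins=100, r_max=None):
    """Radial distribution function g(r) for a 2-D periodic system."""
    N = len(pos)
    if r_max is None:
        r_max = box / 2.0
    rho = N / box**2
    dr = r_max / nbins
    counts = np.zeros(nbins)
    for i in range(N - 1):
        # minimum-image distances to all later particles
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)
        r = np.sqrt(np.sum(d**2, axis=1))
        hist, _ = np.histogram(r, bins=nbins, range=(0.0, r_max))
        counts += 2.0 * hist          # each pair contributes to both particles
    r_centers = (np.arange(nbins) + 0.5) * dr
    shell_area = 2.0 * np.pi * r_centers * dr
    g = counts / (N * rho * shell_area)
    return r_centers, g
```

In practice one averages g(r) over many equilibrium snapshots to suppress fluctuations.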

The following Python codes implement this strategy. First, the target temperature and pressure are set to achieve the desired state (solid, liquid, or gas). The first Python notebook achieves this and stores the data to disk. The second file retrieves this data and further evolves the system using pure Newtonian dynamics (conserved energy). After this, the RDF is computed.

The first code

And the second…

Following are the Radial Distribution Functions obtained for the gas, liquid and solid phases respectively

One can clearly see short-ranged order in the liquid phase and long-range order in the solid phase.

System of hard disks

A two-dimensional system of ‘disks’ (hard-cores) is an interesting system in which to discover entropy-driven phase transitions. Naively, one might think that since there is no attractive interaction between the disks, the system cannot exhibit different phases. However, it turns out that the system indeed exhibits different phases, which can be explored through the Radial Distribution Function. We take a system of 2-D circular disks, confined to a square region of side L. Each disk has radius R, and the disks collide elastically with each other and with the walls. Given an initial configuration, the system can be evolved through Newtonian dynamics. One can investigate statistical properties of the system once it reaches equilibrium. One simple investigation is to observe the speed distribution attain the Maxwellian form. However, the system shows some non-trivial, finite-size effects (if the size of the disks is not a negligible fraction of the size of the box). These finite-size effects can in fact drive the system to undergo phase transitions, in which the system shows liquid and solid phases. These transitions are density-driven, and purely entropic in nature. First, however, we need an efficient algorithm to implement Newtonian dynamics. Following is a geometric strategy

Event_Driven_Collisions
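
The heart of the event-driven scheme is the pair collision time: given two disks' positions and velocities, solve |Δr + Δv t| = 2R for the earliest positive root. A sketch (the function name is mine):

```python
import numpy as np

def pair_collision_time(r1, v1, r2, v2, R):
    """Time until two disks of radius R touch, or np.inf if they never do.

    |dr + dv*t| = 2R is the quadratic a*t^2 + 2*b*t + c = 0 with
    a = dv.dv, b = dr.dv, c = dr.dr - (2R)^2. A real, positive root
    exists only if the disks are approaching (b < 0) and the
    discriminant b^2 - a*c is non-negative.
    """
    dr = np.asarray(r2, dtype=float) - np.asarray(r1, dtype=float)
    dv = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
    a = np.dot(dv, dv)
    b = np.dot(dr, dv)
    c = np.dot(dr, dr) - (2.0 * R) ** 2
    if b >= 0.0:                  # moving apart (or tangentially)
        return np.inf
    disc = b * b - a * c
    if disc < 0.0:                # closest approach misses contact
        return np.inf
    return (-b - np.sqrt(disc)) / a
```

The event-driven loop then advances the whole system to the minimum of these pair (and wall) collision times and resolves that collision exactly.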

The algorithm, implemented as Python code

Chaos and Time Reversibility

This system is an excellent pedagogical tool to examine time-reversal symmetry and the presence of molecular chaos. To observe chaos, one can simply change the initial positions and velocities of the disks very slightly, and observe the resulting new trajectory diverge quickly. To check for microscopic time-reversibility, given a set of initial conditions, the system is evolved for a certain duration of time, following which the velocities are reversed. One can then observe the system evolve in a manner which appears to be ‘motion-picture reversed’. As an interesting exploration, one can start the system in a randomly generated low-entropy state and observe the entropy grow (the ‘particles’ diffuse to occupy larger regions of the box and eventually reach a homogeneous distribution). Then, reversing velocities, one can observe the system evolve back to the low-entropy state, overshoot it, experience a local entropy minimum, and further evolve once again to a high-entropy state! This demonstrates the subtle time-reversibility in the macroscopic evolution of the system. The following two animations show the ‘forward in time’ and ‘backward in time’ evolution. Notice the local entropy minimum in the second animation.
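
The reversal experiment hinges on using an exactly time-reversible integration scheme. As a toy illustration (a particle in a harmonic trap integrated with velocity Verlet, rather than the hard-disk system itself), reversing the velocity and integrating again recovers the initial state:

```python
import numpy as np

def velocity_verlet(x, v, force, dt, steps):
    """Time-reversible velocity-Verlet integration (unit mass)."""
    a = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = force(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# evolve forward, flip velocities, evolve again: the initial state returns
force = lambda x: -x                       # harmonic trap, illustrative
x0, v0 = np.array([1.0, -0.5]), np.array([0.3, 0.8])
xf, vf = velocity_verlet(x0, v0, force, dt=0.01, steps=1000)
xb, vb = velocity_verlet(xf, -vf, force, dt=0.01, steps=1000)
# xb recovers x0 and vb recovers -v0, up to floating-point round-off
```

The hard-disk dynamics shares this property, since elastic collisions are themselves time-reversal symmetric.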

The speed and position distributions

Following is code to verify the Maxwell speed distribution
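
A minimal sketch of such a verification (not the course code; assuming kB = m = 1, so that velocity components are Gaussian with variance T and the 2-D Maxwell speed distribution is f(v) = (v/T) exp(−v²/2T)):

```python
import numpy as np

def maxwell_speed_2d(v, T, kB=1.0, m=1.0):
    """2-D Maxwell speed distribution f(v) = (m*v/(kB*T)) * exp(-m*v**2/(2*kB*T))."""
    return (m * v / (kB * T)) * np.exp(-m * v**2 / (2.0 * kB * T))

# compare against speeds built from Gaussian velocity components
rng = np.random.default_rng(0)
T = 1.0
vel = rng.normal(0.0, np.sqrt(T), size=(50000, 2))
speeds = np.sqrt(np.sum(vel**2, axis=1))
hist, edges = np.histogram(speeds, bins=50, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# hist should track maxwell_speed_2d(centers, T)
```

In the MD context, the sampled speeds would of course come from the simulation rather than from a Gaussian generator.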

A fascinating phenomenon is the non-trivial position distribution, attributed to the finite size of the disks. The disks are more likely to be found near the walls. A simple explanation, from the point of view of entropy maximization, is due to Asakura and Oosawa (the depletion interaction: https://en.wikipedia.org/wiki/Depletion_force).

The code for the position distribution

Phase Transitions in a system of disks

Since each disk has a finite size, a relevant parameter is the ratio of the total area of the disks to the area of the box. We call this the ‘density parameter’. At low densities the system behaves as a gas. As the density is increased, the system passes through liquid and solid phases. The following codes are used to compute the radial distribution function for the system. The first code evolves the system till it reaches equilibrium and stores the positions and velocities to file. The second code loads this data and further evolves the system, taking ‘snapshots’ at fixed intervals, storing the positions and velocities. Finally, this data is used to compute the radial distribution function
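
For concreteness, the density parameter is simply the covered area fraction (a one-line sketch; the function name is mine):

```python
import numpy as np

def density_parameter(N, R, L):
    """Ratio of total disk area to box area: eta = N * pi * R**2 / L**2."""
    return N * np.pi * R**2 / L**2
```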

Code to thermalize the system

Code to compute the RDF

Following is the RDF for density 0.2, evidence of a gas phase

For density 0.7, we clearly see a liquid phase

Statistical Sampling: Markov Chain Monte Carlo

Given a probability distribution (such as the Boltzmann Distribution), we often wish to compute mean values of observables (such as pressure), given a set of parameters (such as fixed volume and temperature). Such mean values can, in principle, be computed as integrals of the observables over the probability distribution. However, these are usually integrals of very high dimensionality, and it is not feasible to compute them directly. An alternative is to ‘sample’ the probability distribution and compute these averages. One can, in principle, sample directly using standard methods involving the Cumulative Distribution Function and the ‘Universality of the Uniform’ (the uniform distribution is usually key to such sampling methods). However, such ‘direct’ sampling methods are also intractable, due to the very high dimensionality of the ‘grid’ of parameters that has to be set up. On such occasions, one resorts to ‘Markov Chain Monte Carlo’ (MCMC) methods. Following is a short note on implementing the famous Metropolis-Hastings MCMC sampling method.

Monte_Carlo
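
As a self-contained illustration of the Metropolis rule (sampling a 1-D harmonic oscillator rather than a many-particle system; step sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(energy, x0, beta=1.0, step=1.0, n_samples=20000):
    """Metropolis sampling of the Boltzmann weight exp(-beta*E(x)).

    Propose x' = x + uniform(-step, step); accept with probability
    min(1, exp(-beta*(E(x') - E(x)))). The stationary distribution of
    the chain is the Boltzmann distribution for E.
    """
    x, E = x0, energy(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        E_new = energy(x_new)
        if rng.random() < np.exp(-beta * (E_new - E)):
            x, E = x_new, E_new
        samples[i] = x
    return samples

# harmonic oscillator E = x^2/2 at beta = 1: <x^2> should approach kB*T/k = 1
samples = metropolis(lambda x: 0.5 * x**2, x0=0.0)
```

Note that downhill moves (E_new < E) are always accepted, since the acceptance probability then exceeds one.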

Monte-Carlo sampling of a Lennard-Jones system

Instead of applying techniques of Molecular Dynamics Simulations to compute averages of physical quantities, identify phases, etc., we can sample the Boltzmann Distribution through a Monte-Carlo simulation. Following is a simulation of a 2-D system of particles interacting via a Lennard Jones interaction, in thermal equilibrium at some temperature.
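
A sketch of the single-particle Metropolis move for such a system (assuming reduced units and minimum-image periodic boundaries; function names are mine, not the course code): a trial displacement of one particle changes only that particle's interaction energy, which keeps each step cheap.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj_energy_of_particle(i, pos, box, epsilon=1.0, sigma=1.0):
    """Interaction energy of particle i with all others (minimum image)."""
    dr = np.delete(pos, i, axis=0) - pos[i]
    dr -= box * np.round(dr / box)
    r2 = np.sum(dr**2, axis=1)
    sr6 = (sigma**2 / r2) ** 3
    return np.sum(4.0 * epsilon * (sr6**2 - sr6))

def mc_sweep(pos, box, beta, max_disp=0.1):
    """One Monte-Carlo sweep: N trial displacements, each accepted
    with the Metropolis probability min(1, exp(-beta*dE))."""
    N = len(pos)
    for _ in range(N):
        i = rng.integers(N)
        old = pos[i].copy()
        E_old = lj_energy_of_particle(i, pos, box)
        pos[i] = (old + rng.uniform(-max_disp, max_disp, 2)) % box
        dE = lj_energy_of_particle(i, pos, box) - E_old
        if dE > 0 and rng.random() >= np.exp(-beta * dE):
            pos[i] = old          # reject: restore the old position
    return pos
```

Repeated sweeps at fixed beta equilibrate the configuration, after which observables such as the RDF can be accumulated.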

Following is the ‘picture’ of a system of 900 particles in the solid phase, generated through the above algorithm. Also included is the Radial Distribution Function for the system, clearly demonstrating long-range order in the solid phase.

Solid Phase generated through MCMC