Molecular Structure and Dynamics
The first part of this module is all about computational chemistry. It just so happens that a section of my master's thesis is all about computational chemistry, so I thought I'd share it, as I believe it to be relevant for this course. Here it is:
Computational Chemistry
Computational chemistry uses the processing power of computer technology to apply the principles of quantum mechanics to complex atomic and molecular systems.
Quantum mechanics emerged in the early 20th century, a very enriching period for science during which scientists' understanding of the atomic and subatomic world would be dramatically altered. The first paper on quantum mechanics was published in 1925 by Heisenberg [1], but the theory was later refined by Schrödinger, who published his famous ground-breaking work in 1926 [2]. Schrödinger's work yielded a general equation which describes the time-dependent changes of a wavefunction (a mathematical description of the quantum state of a physical system). This mathematical expression is widely known as the Schrödinger equation [3]:
$$ \hat{H}\Psi = i\hbar\,\frac{\partial \Psi}{\partial t} \qquad (1) $$
where Ĥ is the Hamiltonian operator. The Hamiltonian is, in simple terms, the sum of the kinetic energies of all the particles plus their potential energy, hence the Schrödinger equation can be re-written to treat these two contributions separately [4,5,6]:
$$ \left( -\frac{\hbar^{2}}{2\mu}\nabla^{2} + \hat{V} \right)\Psi = E\Psi \qquad (2) $$
where µ is the reduced mass of the particle and ∇² is a differential operator. The first term of the equation above, which contains the differential operator, is the kinetic energy operator; the second term, V̂, is the potential energy operator.
For the case of the hydrogen atom, Schrödinger's equation can be solved exactly. The hydrogen atom is basically a system consisting of one proton and one electron. This system has spherical symmetry and is thus described with spherical polar coordinates (r, θ, φ) rather than Cartesian coordinates (x, y, z), as shown in Figure 1:
Figure 1 - The hydrogen atom (1 proton, 1 electron) represented on a polar coordinate system.
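As an aside, the relationship between the two coordinate systems is easy to express numerically. The short Python sketch below is my own illustration, not part of the thesis; it assumes NumPy and the convention of Figure 1, with θ the polar angle measured from the z axis and φ the azimuthal angle:

```python
import numpy as np

def to_spherical(x, y, z):
    """Convert Cartesian (x, y, z) to spherical polar (r, theta, phi)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)   # polar angle, 0 <= theta <= pi
    phi = np.arctan2(y, x)     # azimuthal angle, -pi < phi <= pi
    return r, theta, phi

print(to_spherical(1.0, 1.0, 1.0))
```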
The potential energy operator of the Schrödinger equation for the hydrogen atom depends solely on r:
$$ \hat{V}(r) = -\frac{e^{2}}{4\pi\varepsilon_{0} r} \qquad (3) $$
where e is the elementary charge and ε₀ is the permittivity of free space. The kinetic energy operator, on the other hand, depends on r, θ and φ, and this dependence is described by the Laplace operator, ∇², which corresponds to the differential operator seen above for the general Schrödinger equation. For the hydrogen atom, the Laplacian is defined as:
$$ \nabla^{2} = \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial}{\partial r}\right) + \frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}} \qquad (4) $$
Bringing the potential and kinetic energy operators together, the hydrogen atom can then be defined by the following Schrödinger equation:
$$ -\frac{\hbar^{2}}{2\mu}\nabla^{2}\Psi - \frac{e^{2}}{4\pi\varepsilon_{0} r}\Psi = E\Psi \qquad (5) $$
This equation is solved by separation of variables. More precisely, the overall wavefunction is separated into a radial term, which depends on r, and an angular term, which depends on θ and φ. Hence:
$$ \Psi(r,\theta,\phi) = R(r)\,Y(\theta,\phi) \qquad (6) $$
The angular term, Y(θ, φ), can be further separated into [7]:
$$ Y(\theta,\phi) = \Theta(\theta)\,\Phi(\phi) \qquad (7) $$
The wavefunction can therefore be fully separated into three factors, each dependent on one of the spherical coordinates used to describe the hydrogen atom system:
$$ \Psi(r,\theta,\phi) = R(r)\,\Theta(\theta)\,\Phi(\phi) \qquad (8) $$
Solving the Schrödinger equation for the hydrogen atom requires finding a solution for each of these terms. Solving the radial term, R(r), for example, provides the energy of each solution [8]:
$$ E_{n} = -\frac{\mu e^{4}}{8\varepsilon_{0}^{2} h^{2} n^{2}} \qquad (9) $$
where h is Planck's constant and n is the principal quantum number. A solution to the radial equation exists only for integer values of n. In a similar fashion, the solutions for the remaining terms of the Schrödinger equation for the hydrogen atom each yield a different quantum number. The Θ(θ) term gives rise to the orbital angular momentum quantum number, l = 0, 1, 2, …, n−1, which describes the magnitude of the angular momentum of the wavefunction, and the Φ(φ) term gives rise to the magnetic quantum number, ml = −l, …, 0, …, +l, which describes the projection of that angular momentum.
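As a quick numerical check of equation (9), the short Python sketch below (my own aside, assuming SciPy is available for the physical constants) evaluates the first few hydrogen energy levels; n = 1 comes out at about −13.6 eV, as expected:

```python
from scipy.constants import m_e, m_p, e, epsilon_0, h

# Reduced mass of the electron-proton system
mu = m_e * m_p / (m_e + m_p)

def hydrogen_energy(n):
    """Energy of the hydrogen level with principal quantum number n, eq. (9), in joules."""
    return -mu * e**4 / (8 * epsilon_0**2 * h**2 * n**2)

for n in range(1, 5):
    print(f"n = {n}:  E = {hydrogen_energy(n) / e:8.4f} eV")   # divide by e to convert J -> eV
```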
In more practical terms, a wavefunction describes an orbital, which in turn describes the spatial distribution of the electron. Orbitals are characterised by quantum numbers, which in practice give the orbital's energy (n), "shape" (l) and orientation (ml). A graphical representation of some of the solutions of the hydrogen wavefunction is shown in Figure 2 (which is awesome, but not mine!!). Different combinations of quantum numbers result in different types of orbitals: for example, an n=1, l=0 orbital is referred to as a 1s orbital; n=2, l=1 is a 2p orbital; n=3, l=2 is a 3d orbital, and so on. Moreover, orbitals with the same energy, i.e. orbitals with the same n and l values but different values of ml (for example, n=2, l=1, ml=−1, 0, +1), are referred to as degenerate.
Figure 2 – A graphical representation of the wavefunction solutions for the hydrogen atom. ©PoorLeno, from the Wikimedia Commons.
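The counting rules for the quantum numbers are simple enough to enumerate directly. The sketch below (an illustration of mine, plain Python) lists the allowed (n, l, ml) combinations for the first few shells, using the usual s, p, d, f labels, and shows how the number of ml values gives the degeneracy:

```python
# Enumerate allowed quantum-number combinations for n = 1..4.
L_LABELS = "spdfg"   # l = 0, 1, 2, 3, 4 -> s, p, d, f, g

for n in range(1, 5):
    for l in range(n):                        # l = 0, 1, ..., n-1
        m_values = list(range(-l, l + 1))     # ml = -l, ..., 0, ..., +l
        print(f"{n}{L_LABELS[l]}: l = {l}, ml = {m_values} "
              f"({len(m_values)}-fold degenerate)")
```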
In contrast with the relative simplicity of the hydrogen atom, which consists of only one proton and one electron, multi-electron systems are much more complex and the Schrödinger equation cannot be solved exactly as was done above. In particular, for a many-electron system a new term must be introduced in the Hamiltonian to describe the electron-electron interactions, so that:
$$ \hat{H} = -\frac{\hbar^{2}}{2m_{e}}\sum_{i}\nabla_{i}^{2} - \sum_{i}\frac{Ze^{2}}{4\pi\varepsilon_{0}R_{i}} + \sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}r_{ij}} \qquad (10) $$
where me is the mass of an electron, Ri is the distance of electron i from the nucleus, Z is the charge of the nucleus and rij is the distance between electrons i and j. The operator in the new term depends on the positions of both electrons; hence, the Schrödinger equation for electron i cannot be solved without knowing the solution for electron j, and vice versa, which makes an exact solution impossible. The variables can no longer be separated as before, because the electron-electron repulsion term involves the coordinates of both electrons.
It may seem, then, that this is a problem with no solution. Indeed, no exact solution exists; however, there is a rather peculiar way around it, introduced in 1927 by Douglas Hartree [9] and refined in independent studies by Slater and by Vladimir Aleksandrovich Fock. The Hartree-Fock theory is based on neglecting the explicit electron-electron interaction, which makes it possible to separate the overall electronic wavefunction and write it as the product of the individual wavefunctions of each electron. This is commonly known as the Hartree product:
$$ \Psi(1,2,\dots,N) = \psi_{1}(1)\,\psi_{2}(2)\cdots\psi_{N}(N) \qquad (11) $$
However, as Slater and Fock noticed, this convenient solution has one major shortcoming, apart from obviously being a very serious approximation: it does not satisfy the antisymmetry principle, which in practical terms means that it does not satisfy the Pauli principle. To overcome this problem, the electronic wavefunction can instead be described by a Slater determinant, in simple terms the determinant of a square matrix that automatically satisfies the antisymmetry requirement. This is the case because if two rows of this matrix are interchanged (which corresponds to interchanging two electrons), the sign of the determinant changes but its magnitude remains the same. For an N-electron system, the Slater determinant is:
$$ \Psi = \frac{1}{\sqrt{N!}} \begin{vmatrix} \chi_{1}(1) & \chi_{2}(1) & \cdots & \chi_{N}(1) \\ \chi_{1}(2) & \chi_{2}(2) & \cdots & \chi_{N}(2) \\ \vdots & \vdots & \ddots & \vdots \\ \chi_{1}(N) & \chi_{2}(N) & \cdots & \chi_{N}(N) \end{vmatrix} \qquad (12) $$
where χ1, χ2, …, χN are the occupied spin orbitals.
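The sign change under electron exchange can be seen numerically. In the toy NumPy sketch below (my own illustration; random numbers stand in for the values of the spin orbitals at each electron's coordinates), swapping two rows of the Slater matrix flips the sign of the determinant while leaving its magnitude unchanged:

```python
from math import factorial
import numpy as np

rng = np.random.default_rng(1)
N = 3
M = rng.normal(size=(N, N))   # M[i, j] = value of spin orbital j for electron i (toy numbers)

psi = np.linalg.det(M) / np.sqrt(factorial(N))

# Swapping two rows corresponds to exchanging electrons 1 and 2:
M_swapped = M[[1, 0, 2], :]
psi_swapped = np.linalg.det(M_swapped) / np.sqrt(factorial(N))

print(psi, psi_swapped)   # equal magnitude, opposite sign: antisymmetry is built in
```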
The assumption that electrons can be described by a Slater determinant is equivalent to the assumption that electron-electron interactions are not instantaneous [10]: rather, each electron feels the effect of an averaged Coulomb repulsion cloud from all the other electrons (the mean-field approximation).
It is worth taking this opportunity to recall what our goal is: to solve the Schrödinger equation for a multi-electron system. The Slater determinant, on its own, is not enough to do this. The aid of yet another approximate method, the variational method, is needed. The variational theorem states that the expectation value of the energy calculated with any trial wavefunction is always greater than or equal to the true ground-state energy. Therefore, provided that the trial wavefunction contains a set of parameters that can be varied, the expectation value can be minimised, the minimum being the closest to the true value. To do this, however, a starting point is necessary; in the particular case of the many-electron problem, the Slater determinant is this starting point. In other words, the Slater determinant is used as the trial wavefunction. Performing the minimisation on the Slater determinant yields the Hartree-Fock equation:
$$ \hat{F}_{i}\,\psi_{i} = \varepsilon_{i}\,\psi_{i} \qquad (13) $$
where εi is the energy associated with wavefunction ψi and F̂i is the Fock operator:
$$ \hat{F}_{i} = \hat{H}_{i} + \sum_{j}\left(2\hat{J}_{j} - \hat{K}_{j}\right) \qquad (14) $$
where the one-electron operator, Hi, represents the energy of an electron in the ith molecular orbital; the Coulomb operator, Jj, is the repulsion between the electrons in the jth and ith orbitals; and the exchange operator, Kj, which is a consequence of the antisymmetry of the Slater determinant, essentially inhibits the pairing of same-spin electrons. One equation of this type is written for each atomic or molecular orbital. A paradoxical issue now arises: the Fock operator requires that some information about the orbitals, or wavefunctions ψi, is known before the Hartree-Fock equations, which would be used to obtain that same information, can be solved. A self-consistent field (SCF) approach is necessary to resolve this, meaning that an initial result is merely estimated and the Hartree-Fock equations are solved iteratively until convergence occurs, i.e. until a new iteration yields (nearly) the same result as the previous one.
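The logic of the self-consistent field procedure can be caricatured in a few lines of Python. The sketch below is not a real Hartree-Fock code: it uses a 2×2 toy "Fock" matrix whose elements depend on the density built from its own lowest eigenvector (all numbers are invented), so it has to be solved by guess-and-iterate exactly as described above:

```python
import numpy as np

h_core = np.array([[-1.0, -0.2],
                   [-0.2, -0.5]])   # toy one-electron part

def fock(density):
    # Toy "mean-field" term: the operator depends on the density it produces.
    return h_core + 0.3 * density

density = np.zeros((2, 2))          # initial guess
for iteration in range(50):
    eigvals, eigvecs = np.linalg.eigh(fock(density))
    c = eigvecs[:, 0]               # occupy the lowest orbital
    new_density = np.outer(c, c)
    if np.max(np.abs(new_density - density)) < 1e-8:   # convergence test
        break
    density = new_density

print(f"converged after {iteration} iterations, lowest orbital energy = {eigvals[0]:.6f}")
```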
The approach detailed above can be readily applied to atoms, which have only one nucleus and spherical symmetry. For molecules, however, the problem becomes more complex, since the Hamiltonian now includes nuclear-nuclear interactions:
$$ \hat{H} = \hat{H}_{\text{electronic}} + \sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{4\pi\varepsilon_{0}R_{AB}} \qquad (15) $$
where RAB is the distance between nuclei A and B and ZA and ZB are their charges. This can generally be treated in the same way as detailed above. To do so, however, it is necessary to invoke the Born-Oppenheimer approximation. The Hartree-Fock method deals with electron motion only and disregards any nuclear motion; this is acceptable within the Born-Oppenheimer approximation, which states that, due to the massive difference between the masses of electrons and nuclei, electrons adjust instantaneously to any nuclear motion, or, in other words, electrons can be considered to move in the electric field of static nuclei.
In practice, however, while the Hartree-Fock equations can be solved numerically for atoms, this is not the case for molecules, since their multiple nuclei and lack of spherical symmetry make numerical approaches inefficient. Instead, a Linear Combination of Atomic Orbitals (LCAO) is performed: a number of functions which behave similarly to what is expected of atomic orbitals, called basis functions, are combined. The set of basis functions used is called a basis set, and it can be tailored to better describe the system under study: one can choose how many functions to combine, how many of those describe inner, medium and/or outer valence orbitals, and whether to introduce diffuse and polarisation functions, and so on.
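The idea behind basis functions can be illustrated with a small fit. In the sketch below (my own aside; the three Gaussian exponents are hand-picked, illustrative values, not a real basis set such as STO-3G, which optimises exponents and coefficients together), a Slater-type 1s function, exp(−r), is approximated by a linear combination of three Gaussians whose mixing coefficients are found by least squares:

```python
import numpy as np

r = np.linspace(0.01, 6.0, 300)
slater = np.exp(-r)                                # target: Slater-type 1s function

exponents = [0.15, 0.6, 3.0]                       # illustrative Gaussian exponents only
basis = np.column_stack([np.exp(-a * r**2) for a in exponents])

coeffs, *_ = np.linalg.lstsq(basis, slater, rcond=None)   # best-fit mixing coefficients
fit = basis @ coeffs

print("coefficients:", coeffs)
print("max deviation from the Slater function:", np.max(np.abs(fit - slater)))
```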
Despite its success in describing both single- and many-electron systems, the Hartree-Fock method still has limitations, and the results obtained with it may not be the most accurate. This is mostly because the method does not account for electron correlation: as discussed before, each electron is considered to feel the effect of an averaged charge cloud from the surrounding electrons, while instantaneous electron-electron interactions are disregarded. Moreover, while electrons with the same spin are kept apart by the antisymmetry of the Slater determinant, the correlated avoidance of opposite-spin electrons is not accounted for in the Hartree-Fock method.
Other methods have been developed that account for electron correlation better than the Hartree-Fock method and therefore offer more accurate alternatives for the cases where it fails. One example is the Møller-Plesset method, which in practice applies perturbation theory to electronic structure calculations. Perturbation theory works within the assumption that if two problems, one which is unsolvable and another which is exactly solvable, differ only slightly, then the solutions to both problems should also be similar [11]. Mathematically, this is done by defining the Hamiltonian as the sum of an unperturbed part, or reference, H0, and a perturbation to the system, H′. This mathematical approximation is valid for small perturbations. If the Schrödinger equation is used as the reference, then:
$$ \hat{H}_{0}\Psi_{0} = E_{0}\Psi_{0} \qquad (16) $$
$$ \hat{H} = \hat{H}_{0} + \lambda\hat{H}' \qquad (17) $$
where λ is a variable parameter that determines the strength of the perturbation. The perturbed Schrödinger equation is:
$$ \left(\hat{H}_{0} + \lambda\hat{H}'\right)\Psi = E\Psi \qquad (18) $$
Therefore, if λ = 0, the perturbed and unperturbed equations are the same (H = H0), as are the respective wavefunction and energy. As the strength of the perturbation increases, the wavefunction and energy must also change. Again, assuming that the perturbation is small, this change can be represented as a Taylor expansion, or a converging power series, in powers of the perturbation parameter λ:
$$ \Psi = \Psi_{0} + \lambda\Psi_{1} + \lambda^{2}\Psi_{2} + \cdots \qquad (19) $$
$$ E = E_{0} + \lambda E_{1} + \lambda^{2}E_{2} + \cdots \qquad (20) $$
For λ = 0, as mentioned before, Ψ = Ψ0 and E = E0, and these are the unperturbed, or zero-order, wavefunction and energy, respectively. The Ψ1, Ψ2, …, Ψn wavefunctions and the E1, E2, …, En energies are the first-, second-, …, nth-order corrections to the wavefunction and to the energy, respectively. The Schrödinger equation is now:
$$ \left(\hat{H}_{0} + \lambda\hat{H}'\right)\left(\Psi_{0} + \lambda\Psi_{1} + \lambda^{2}\Psi_{2} + \cdots\right) = \left(E_{0} + \lambda E_{1} + \lambda^{2}E_{2} + \cdots\right)\left(\Psi_{0} + \lambda\Psi_{1} + \lambda^{2}\Psi_{2} + \cdots\right) \qquad (21) $$
Hence, collecting the terms that are first order in λ gives the first-order correction:
$$ \hat{H}_{0}\Psi_{1} + \hat{H}'\Psi_{0} = E_{0}\Psi_{1} + E_{1}\Psi_{0} \qquad (22) $$
Provided that Ψ0 is normalised and that Ψ0 and Ψ1 are assumed to be orthogonal, it can be shown that the first-order correction to the energy, E1, is given by:
$$ E_{1} = \langle \Psi_{0} | \hat{H}' | \Psi_{0} \rangle = \int \Psi_{0}^{*}\,\hat{H}'\,\Psi_{0}\, d\tau \qquad (23) $$
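Equation (23) is easy to check on a small model. In the sketch below (an aside of mine with invented numbers), H0 is a diagonal 2×2 matrix, H′ is a small perturbation, and E0 + λE1 is compared with the exact lowest eigenvalue of H0 + λH′:

```python
import numpy as np

H0 = np.diag([-1.0, 1.0])                   # exactly solvable reference
Hp = np.array([[0.05, 0.10],
               [0.10, -0.05]])              # small perturbation
lam = 0.2

psi0 = np.array([1.0, 0.0])                 # ground state of H0
E0 = -1.0
E1 = psi0 @ Hp @ psi0                       # first-order correction, eq. (23)

E_exact = np.linalg.eigvalsh(H0 + lam * Hp)[0]
print(f"E0 + lambda*E1          = {E0 + lam * E1:.6f}")
print(f"exact lowest eigenvalue = {E_exact:.6f}")
```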
This general approach is applied to the calculation of the correlation energy in the Møller-Plesset method by selecting a sum of Fock operators as the reference Hamiltonian. Because this sum of Fock operators counts the average electron-electron repulsion twice, the perturbation Hamiltonian becomes the fluctuation potential: essentially the difference between the exact electron-electron interaction (Coulomb and exchange) and the doubly counted averaged repulsion.
For the zero-order wavefunction, this approach simply yields the Hartree-Fock determinant, whose energy corresponds to the sum of the molecular orbital energies. The first-order correction removes the double counting of the electron-electron repulsion in the zero-order result, and the resulting energy is exactly the Hartree-Fock energy, E(HF). If E(MPn) is the correction at order n and MPn is the total energy up to order n, then:
$$ \text{MP}n = \sum_{k=0}^{n} E(\text{MP}k), \qquad \text{MP1} = E(\text{MP0}) + E(\text{MP1}) = E(\text{HF}) \qquad (24) $$
Hence, the Møller-Plesset method only provides an improvement over the lack of electron correlation in the Hartree-Fock method for orders n ≥ 2. These higher orders use multiple Slater determinants, which provide spatial flexibility to the electrons, allowing increased separation between them. This is possible because electrons are allowed to be excited into previously unoccupied (virtual) orbitals. For example, the second-order correction to the energy involves a sum over doubly excited determinants, obtained by promoting two electrons from occupied orbitals i and j to virtual orbitals a and b; the corresponding energy denominators are differences of molecular orbital energies. Mathematically, the second-order Møller-Plesset (MP2) correction can be written (for a closed-shell system, in terms of the two-electron integrals (ia|jb) and the orbital energies ε) as:
$$ E(\text{MP2}) = \sum_{i,j}^{\text{occ}}\sum_{a,b}^{\text{virt}} \frac{(ia|jb)\left[\,2(ia|jb) - (ib|ja)\,\right]}{\varepsilon_{i} + \varepsilon_{j} - \varepsilon_{a} - \varepsilon_{b}} \qquad (25) $$
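The structure of equation (25) is shown in the toy Python sketch below (my own illustration: the orbital energies and the two-electron integrals (ia|jb) are random numbers with the right permutational symmetry, not real integrals, so the numerical value is meaningless; only the loop structure and the energy denominators mirror a real MP2 calculation):

```python
import numpy as np

rng = np.random.default_rng(0)
nocc, nvirt = 2, 3
n = nocc + nvirt

eps = np.sort(rng.uniform(-1.0, 1.0, n))    # toy orbital energies, occupied below virtual
eri = rng.uniform(size=(n, n, n, n))        # toy (pq|rs) integrals, then symmetrised:
eri = eri + eri.transpose(1, 0, 2, 3)       # (pq|rs) = (qp|rs)
eri = eri + eri.transpose(0, 1, 3, 2)       # (pq|rs) = (pq|sr)
eri = eri + eri.transpose(2, 3, 0, 1)       # (pq|rs) = (rs|pq)

e_mp2 = 0.0
for i in range(nocc):
    for j in range(nocc):
        for a in range(nocc, n):
            for b in range(nocc, n):
                iajb = eri[i, a, j, b]
                ibja = eri[i, b, j, a]
                denom = eps[i] + eps[j] - eps[a] - eps[b]
                e_mp2 += iajb * (2.0 * iajb - ibja) / denom

print("toy MP2 correction:", e_mp2)
```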
The Møller-Plesset method is widely used as a fast yet effective way to improve on the serious limitations of the Hartree-Fock method relating to the lack of electron correlation. In fact, even just a second-order correction of this kind will account for ~80-90% of the electron correlation energy, and even though calculations using this method take 10-100 times longer than a Hartree-Fock calculation, this is still the most economical way of including electron correlation. MP2 is the most commonly used post-Hartree-Fock correction. Even though it provides excellent results for geometry optimisations and vibrational frequency calculations for most systems, in some cases MP2 can still perform quite poorly, most likely because the perturbation used, the fluctuation potential, is not small, as perturbation theory requires it to be. This may also explain why the expected convergence to a limiting, lower value as the order of perturbation increases is not always observed: a higher-order correction does not automatically yield a more accurate result.
There are many other methods that can be used in computational chemistry, either fully ab initio or semi-empirical (where some previously and empirically determined parameters are inserted into the calculations). Each of these methods has to be combined with a suitable basis set in order to give a good result. It is worth noting, however, that computational studies need to be interpreted carefully since, as seen throughout this section, they are based on many and very serious approximations, which might, in some cases, result in poor agreement with the real-life scenario. Regardless, computational chemistry is a powerful tool that is widely used to predict and model a variety of systems. In this work, for example, it has been used to predict O-H stretching vibrations in hydrated sodium chloride complexes.
References:
1. W. Heisenberg, Z. Phys., 1925, 33, 879.
2. E. Schrödinger, Ann. Phys., 1926, 79, 361.
3. A. Ellis, Computational Chemistry Lecture Notes, University of Leicester, Department of Chemistry, 2014.
4. W. Greiner, Quantum Mechanics, 3rd Edition, Springer-Verlag, Berlin, 1994.
5. G. H. Grant and W. G. Richards, Computational Chemistry, Oxford University Press (OUP Primers), 1995.
6. P. W. Atkins, Molecular Quantum Mechanics Parts I and II: An Introduction to Quantum Chemistry (Volume 1), Oxford University Press, 1977.
7. M. R. Pahlavani et al., Open J. of Microphysics, 2013, 3, 1-7.
8. HyperPhysics tutorial, Georgia State University, 2005, http://hyperphysics.phy-astr.gsu.edu/.
9. C. Froese Fischer, Douglas Rayner Hartree: His Life in Science and Computing, World Scientific Publishing Co. Pte. Ltd., 2003.
10. C. D. Sherrill, An Introduction to Hartree-Fock Molecular Orbital Theory, School of Chemistry and Biochemistry, Georgia Institute of Technology, 2000.
11. F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons, Ltd., England, 1999.