Lecture notes for the course given by Prof. David Andelman, Tel-Aviv University 2009
Original author: Guy Cohen, Tel-Aviv University
Introduction and Preliminaries
We will make several assumptions throughout the course:
- The physics in question is generally in the classical regime: quantum effects are negligible, $\hbar\omega\ll k_{B}T$.
- Materials are "soft": quantitatively, this implies that all relevant energy scales are of the order of the thermal energy $k_{B}T$.
- Condensed matter physics deals with systems composed of very many ($\sim10^{23}$) particles, and statistical mechanics applies. We are always interested in a reduced description, in terms of continuum mechanics and elasticity, hydrodynamics, macroscopic electrodynamics and so on.
We begin with an example from Chaikin & Lubensky, the story of an H2O molecule. This molecule is bound together by chemical bonds which are far stronger than $k_{B}T$ at room temperature and not easily broken under normal circumstances. What happens when we put a macroscopic number ($\sim10^{23}$) of water molecules in a container? First of all, with such large numbers we can safely discuss phases of matter: namely

Gas is typical of low density, high temperature and low pressure. It readily changes shape and volume, and is homogeneous, isotropic, weakly interacting and insulating. This is the least ordered form of matter relevant to our scenario, and relatively easy to treat since order parameters are small.

The liquid phase is typical of intermediate temperatures. It flows but is not very compressible. It is homogeneous, isotropic, dense and strongly interacting. Its response to external forces depends on the rate of its deformation. Liquids are hard to treat theoretically, as their intermediate properties make simple approximations less effective.

The solid is a dense ordered phase with low entropy and strong interactions. It is anisotropic and does not flow; it strongly resists compression, and its response to forces depends on the amount of deformation they cause (elastic).

Transitions between these phases occur at specific values of thermodynamic parameters (see diagram (1)). First order transitions (the volume/density "jumps" at the transition, while pressure and temperature do not) occur on the coexistence lines; at the critical liquid/gas point, a second order phase transition occurs; at the triple point, all three phases (solid/liquid/gas) coexist.

The systems we are interested in are characterized by several kinds of interactions between their constituent molecules: for example, Coulombic interactions of the form $V\sim1/r$ when charged particles are present, fixed dipole interactions of the form $V\sim1/r^{3}$ when permanent dipoles exist, and almost always induced dipole/van der Waals interactions of the form $V\sim-1/r^{6}$. At close range we also have the "hard core" or steric repulsion, sometimes modeled by a $1/r^{12}$ potential. Simulations often use the so-called 6-12 Lennard-Jones potential
$$V\left(r\right)=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$$
(as pictured in (2)), which with appropriate parameters correctly describes both condensation and crystallization in some cases.
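As a quick numerical illustration (a sketch, not part of the notes; the values $\varepsilon=\sigma=1$ are arbitrary), one can verify that the 6-12 Lennard-Jones potential has its minimum at $r=2^{1/6}\sigma$ with depth $-\varepsilon$:

```python
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """6-12 Lennard-Jones potential: V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# scan for the minimum; analytically it sits at r = 2**(1/6)*sigma with V = -epsilon
rs = [0.8 + 0.0001 * k for k in range(12000)]   # r in [0.8, 2.0)
r_min = min(rs, key=lennard_jones)
```

The repulsive $r^{-12}$ wall plays the role of the steric hard core, while the $-r^{-6}$ tail is the van der Waals attraction.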
Sidenote
When only the repulsive potential exists (for instance, for billiard balls), crystallization still takes place but no condensation/evaporation phase transition between the liquid and gas phases exists.
Starting from a classical Hamiltonian such as $\mathcal{H}=\sum_{i}\frac{p_{i}^{2}}{2m}+\sum_{i<j}V\left(r_{ij}\right)$, we can predict all three phases of matter and the transitions between them. In biological systems, this simple picture does not suffice: the basic consideration is that different effects occur at different scales, from the nanometric scale through the mesoscopic and up to the macroscopic scale. Biological systems are mesoscopic in nature, and their properties cannot be described correctly when coarse-graining is performed without accurately accounting for mesoscopic properties.
A few examples follow:
Liquid crystals
The most basic assumption we need in order to model liquid crystals is that isotropy at the molecular level is broken: molecules are represented by rods rather than spheres. Such a description was suggested by Onsager and others, and leads to three phases as shown in (3).
Polymers
When molecules are interconnected at mesoscopic ranges, new phases and properties are encountered.
Soap/beer foam
This kind of substance is approximately 95% gas, with the remainder water – yet it behaves like a weak solid as long as its deformations are small. This is because a tight formation of ordered gas cells separated by thin liquid films forms, and in order for the material to change shape the cells must be rearranged. This need for restructuring is the cause of such systems' solid-like resistance to deformation.
Structured fluids
Polymers or macromolecules in liquid state, liquid crystals, emulsions and colloidal solutions and gels display complex visco-elastic behavior as a result of mesoscopic super-structures within them.
Soft 2D membranes
Interfaces between fluids have interesting properties: they act as a 2D liquid within the interface, yet respond elastically to any bending of the surface. Surfactant molecules will spontaneously form membranes within the same fluid, which also have these properties at appropriate temperatures. Surfactants in solution also form lamellar structures - multilayered structures in which the basic units are the membranes rather than single molecules.
Polymers
Books: Doi, de Gennes, Rubinstein, Doi & Edwards.
Introduction
Brief history
Natural polymers like rubber have been known since the dawn of history, but not understood. The first artificial polymers were only made in the 19th century. Staudinger was the first to understand that polymers are formed by molecular chains, and is considered to be the father of synthetic polymers. Most polymers are made by the petrochemical industry. Nylon was born in 1940. Various uses and unique properties (light, strong, thermally insulating; available in many different forms from strings and sheets to bulk; cheap, easy to process, shape and mass-produce...) have made them very attractive commercially. Later on, some leading scientists were Kuhn and Flory in chemistry (30's to 70's) and Stockmayer in physical chemistry (50's and 60's). The famous modern theory of polymers was first formulated by P.G. de Gennes and Sam Edwards.
What is a polymer?
A polymer is a material composed of chains with a repeating basic unit (the monomer). Connections between monomers are made by chemical (covalent) bonds, and are strong at room temperature.
$$\left[A\right]_{N}\equiv\overset{N\text{ times}}{\overbrace{A-A-A-\ldots-A}}.$$
 
$N$ is the polymerization index.
Sidenote
More generally, this kind of structure is called a homopolymer. Heteropolymers – which have several repeating constituent units – also exist. These can have a random structure (e.g. $ABBABAAB\ldots$) or a block structure (e.g. $AAAA\ldots BBBB\ldots$), in which case they are called block copolymers. These can self-assemble into complex ordered structures and are often very useful.
 
Sidenote
For an example, look up ester monomers and polyester, or polyethylene.
 
Polymerization is also the name of the process by which polymers are synthesized, which involves a chain reaction where a reactive site exists at the end of the chain. Some chemical reactions increase the chain length by one unit, while simultaneously moving the reactive site to the new end:
$$\left[A\right]_{N}+\left[A\right]_{1}\rightarrow\left[A\right]_{N+1}.$$
 There also exist condensation processes, by which chains unite:
$$\left[A\right]_{N}+\left[A\right]_{M}\rightarrow\left[A\right]_{N+M},$$
where $N$ and $M$ are the polymerization indices of the uniting chains. A briefer notation, dropping the name of the monomer, is
$$N+M\rightarrow\left(N+M\right).$$
 
Consider the example of hydrocarbon polymers, where the monomer is a $-\mathrm{CH_{2}}-$ unit (check this...). As larger numbers of such units are joined together into polyethylene molecules, the material composed of these molecules changes drastically in nature:
 
| $N$ | phase | type of material |
| --- | --- | --- |
| 1-4 | gas | flammable gas |
| 5-15 | thin liquid | liquid fuel/organic solvents |
| 16-25 | thick liquid | motor oil |
| 20-50 | soft solid | wax, paraffin |
| 1000 | hard solid | plastic |
Types of polymer structures
Polymers can exist in different topologies, which affect the macroscopic properties of the material they form (see (4)):
- Linear chains (this is the simplest case, which we will be discussing).
 
- Rings (chains connected at the ends).
 
- Stars (several chain arms connected at a central point).
 
- Tree (connected stars).
 
- Comb (one main chain with side chains branching out).
 
- Dendrimer (ordered branching structure).
 
Polymer phases of matter
Depending on the environment and larger-scale structure, polymers can exist in many states:
- Gas of isolated chains (not very relevant).
 
- In solution (water or organic solvents). In dilute solutions, polymer chains float freely like gas molecules, but their length alters their behavior.
 
- In a liquid state of chains (called a melt).
 
- In solid state (plastic) – crystals, poly-crystals, amorphous/glassy materials.
 
- Liquid crystal formed by polymer chains (Polymeric Liquid Crystal or PLC)
 
- Gels and rubber: networks of chains tied together.
 
Ideal Polymer Chains in Solution
Some basic models of polymer chains
The simplest model of an ideal polymer chain is the freely jointed chain (FJC), where each monomer performs a completely independent random rotation. Here, at equilibrium the end-to-end length of the chain is $R_{0}=\sqrt{N}\,\ell$, where $L=N\ell$ is the contour length.
A slightly more realistic model is the freely rotating chain (FRC), where monomers are locked at some chemically meaningful bond angle $\vartheta$ and rotate freely around it via the torsional angle $\varphi$. Here,
$$R_{0}^{2}=N\ell^{2}\,\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$
Note that for $\vartheta=90^{\circ}$ we find that $R_{0}^{2}=N\ell^{2}$, and this is identical to the FJC. For very small $\vartheta$, we can expand the cosine and obtain
$$R_{0}^{2}\approx\frac{4N\ell^{2}}{\vartheta^{2}}.$$
This is the rigid rod limit (to be discussed later in detail).
A second possible improvement is the hindered rotation (HR) model. Here the angles $\varphi_{i}$ have a minimum-energy value, and are taken from an uncorrelated Boltzmann distribution with some potential $E\left(\varphi\right)$. This gives
$$R_{0}^{2}=N\ell^{2}\,\frac{1+\cos\vartheta}{1-\cos\vartheta}\cdot\frac{1+\left\langle\cos\varphi\right\rangle}{1-\left\langle\cos\varphi\right\rangle}.$$
Sidenote
See Flory's book for details.
 
Another option is called the rotational isomeric state model. Here, a finite number of angles are possible for each monomer junction and the state of the full chain is given in terms of these. Correlations are also taken into account and the solution is numeric, but aside from a complicated prefactor this is still an ideal chain with $R_{0}\propto\sqrt{N}$.
Calculating the end-to-end radius
For the polymer chain of (5), obviously we will always have $\left\langle\mathbf{R}\right\rangle=0$. The variance, however, is generally not zero: using $\mathbf{R}=\sum_{i=1}^{N}\mathbf{r}_{i}$,
$$R_{0}^{2}\equiv\left\langle\mathbf{R}^{2}\right\rangle=\sum_{i,j=1}^{N}\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle.$$
FJC
In the freely jointed chain (FJC) model, there are neither correlations between different sites nor restrictions on the rotational angles. We therefore have $\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle=\ell^{2}\delta_{ij}$, and
$$R_{0}^{2}=\sum_{i,j=1}^{N}\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle=N\ell^{2}.$$
Sidenote
The mathematics are similar to that of a random walk or diffusion process, where in 1D $\left\langle x^{2}\right\rangle=2Dt$.
 
Therefore, $R_{0}=\sqrt{N}\,\ell$.
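The $\sqrt{N}$ scaling is easy to check with a small Monte Carlo sketch (not part of the notes; the bond length $\ell=1$ and the trial count are arbitrary choices):

```python
import math
import random

def fjc_end_to_end_sq(N, ell=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of <R^2> for a 3D freely jointed chain of N bonds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = y = z = 0.0
        for _ in range(N):
            cos_t = rng.uniform(-1.0, 1.0)          # uniform direction on the sphere
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += ell * sin_t * math.cos(phi)
            y += ell * sin_t * math.sin(phi)
            z += ell * cos_t
        total += x * x + y * y + z * z
    return total / trials

# ideal-chain prediction: <R^2> = N * ell**2
```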
FRC
In the freely rotating chain model, the bond angles are held constant at the angle $\vartheta$ while the torsion angles $\varphi_{i}$ are taken from a uniform distribution between $0$ and $2\pi$. This introduces some correlation between the angles: since (for one definition of the $\varphi_{i}$) $\mathbf{r}_{i+1}\cdot\mathbf{r}_{i}=\ell^{2}\cos\vartheta$, and since the $\varphi_{i}$ are independent and any averaging over a sine or cosine of one or more of them will result in a zero, only the $\varphi$-independent terms survive and by recursion this correlation has the simple form
$$\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle=\ell^{2}\left(\cos\vartheta\right)^{\left|i-j\right|}.$$
The end-to-end radius is
$$\begin{array}{lcl}R_{0}^{2}&=&\sum_{i,j=1}^{N}\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle\\&=&\sum_{i=1}^{N}\overset{\scriptstyle=\ell^{2}}{\overbrace{\left\langle\mathbf{r}_{i}^{2}\right\rangle}}+\ell^{2}\sum_{i=1}^{N}\overset{\scriptstyle k=i-j}{\overbrace{\sum_{j=1}^{i-1}\left(\cos\vartheta\right)^{i-j}}}+\ell^{2}\sum_{i=1}^{N}\overset{\scriptstyle k=j-i}{\overbrace{\sum_{j=i+1}^{N}\left(\cos\vartheta\right)^{j-i}}}\\&=&N\ell^{2}+\ell^{2}\sum_{i=1}^{N}\left[\sum_{k=1}^{i-1}\left(\cos\vartheta\right)^{k}+\sum_{k=1}^{N-i}\left(\cos\vartheta\right)^{k}\right].\end{array}$$
At large $N$ we can approximate the two sums in $k$ by the infinite series $\sum_{k=1}^{\infty}\left(\cos\vartheta\right)^{k}=\frac{\cos\vartheta}{1-\cos\vartheta}$, giving
$$R_{0}^{2}=N\ell^{2}\,\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$
To extract the Kuhn length $b$ from this expression, we rewrite it in the following way:
$$R_{0}^{2}=N\ell^{2}\,\frac{1+\cos\vartheta}{1-\cos\vartheta}\equiv Nb\ell=bL,\qquad b=\ell\,\frac{1+\cos\vartheta}{1-\cos\vartheta},$$
where $L=N\ell$ is the contour length.
 
To go back from this to the FJC limit, we would consider a chain with a random distribution of $\vartheta$ angles such that $\left\langle\cos\vartheta\right\rangle=0$.
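The large-$N$ formula can be checked directly against the double sum over correlations (a numerical sketch; $N=500$ and $\vartheta=1.2\,\mathrm{rad}$ are arbitrary test values):

```python
import math

def frc_R2_sum(N, theta, ell=1.0):
    """R0^2 of a freely rotating chain from sum_ij ell^2 * (cos theta)^|i-j|."""
    c = math.cos(theta)
    return sum(ell ** 2 * c ** abs(i - j) for i in range(N) for j in range(N))

def frc_R2_largeN(N, theta, ell=1.0):
    """Large-N limit: R0^2 = N * ell^2 * (1 + cos theta) / (1 - cos theta)."""
    c = math.cos(theta)
    return N * ell ** 2 * (1.0 + c) / (1.0 - c)
```

At $\vartheta=90^{\circ}$ the ratio of correlations collapses to 1 and the FJC result $N\ell^{2}$ is recovered.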
Gyration radius
Consider once again the polymer chain of (5). Define:
$$R_{g}^{2}\equiv\frac{1}{N}\sum_{i=1}^{N}\left\langle\left(\mathbf{R}_{i}-\mathbf{R}_{cm}\right)^{2}\right\rangle.$$
The unprimed coordinate system is refocused on the center of mass, such that $\mathbf{R}_{cm}=\frac{1}{N}\sum_{i}\mathbf{R}_{i}=0$. Now, it is easier to work with the following expression:
$$R_{g}^{2}=\frac{1}{2N^{2}}\sum_{i,j=1}^{N}\left\langle\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)^{2}\right\rangle.$$
We will calculate $\left\langle R_{g}^{2}\right\rangle$ for a long FJC. For $N\gg1$ we can replace the sums with integrals, obtaining
$$\begin{array}{lcl}\left\langle R_{g}^{2}\right\rangle&=&\frac{1}{2N^{2}}\sum_{ij}\overset{\scriptstyle\left|i-j\right|\ell^{2}}{\overbrace{\left\langle\left(\mathbf{R}_{i}-\mathbf{R}_{j}\right)^{2}\right\rangle}}\\&=&\frac{1}{2N^{2}}\int_{0}^{N}\mathrm{d}u\int_{0}^{N}\mathrm{d}v\,\ell^{2}\left|u-v\right|\\&=&\frac{2}{2N^{2}}\int_{0}^{N}\mathrm{d}u\int_{0}^{u}\mathrm{d}v\,\ell^{2}\left(u-v\right)\\&=&\frac{\ell^{2}}{N^{2}}\int_{0}^{N}\mathrm{d}u\left[u^{2}-\frac{1}{2}u^{2}\right]\\&=&\frac{1}{6}N\ell^{2}.\end{array}$$
This gives the gyration radius for an FJC:
$$R_{g}^{2}=\frac{N\ell^{2}}{6}=\frac{R_{0}^{2}}{6}.$$
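The discrete double sum behind this result can be evaluated directly (a sketch; $N=300$ is an arbitrary test size):

```python
def gyration_radius_sq(N, ell=1.0):
    """Ideal-chain R_g^2 from <R_g^2> = (1 / 2N^2) * sum_ij |i - j| * ell^2."""
    s = sum(abs(i - j) for i in range(N) for j in range(N))
    return ell ** 2 * s / (2.0 * N * N)

# continuum result: R_g^2 = N * ell^2 / 6
```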
Polymers and Gaussian distributions
An ideal chain is a Gaussian chain, in the sense that the end-to-end
radius is taken from a Gaussian distribution. We will see two proofs
of this.
Random walk proof
One way to show this (see Rubinstein, de Gennes) is to begin with a random walk. For one dimension, if we begin at $x=0$ and at each time step $n$ move left or right with moves $x_{n}=\pm\ell$, and the final displacement is $X=\sum_{n=1}^{N}x_{n}$, then
$$\left\langle X\right\rangle=0,\qquad\left\langle X^{2}\right\rangle=N\ell^{2}.$$
We define $\Omega_{N}\left(X\right)$ as the number of configurations of $N$ steps with a final displacement of $X$. $P_{N}\left(X\right)=\Omega_{N}\left(X\right)/\sum_{X}\Omega_{N}\left(X\right)$ is the associated normalized probability. For $N\gg1$,
$$P_{N}\left(X\right)\simeq\left(\frac{1}{2\pi N\ell^{2}}\right)^{1/2}e^{-\frac{X^{2}}{2N\ell^{2}}}.$$
In fact, for $N\gg1$ the central limit theorem tells us that $P_{N}\left(X\right)$ will have a Gaussian distribution for any distribution of the $x_{n}$. This can be extended to $d$ dimensions with a displacement $\mathbf{R}$:
$$P_{N}\left(\mathbf{R}\right)=A_{d}\,e^{-\frac{dR^{2}}{2N\ell^{2}}}.$$
To find the normalization constant $A_{d}$ we must integrate over all dimensions:
$$\int\mathrm{d}^{d}R\,P_{N}\left(\mathbf{R}\right)=1\;\Rightarrow\;A_{d}=\left(\frac{d}{2\pi N\ell^{2}}\right)^{d/2}.$$
Some notes:
- An ideal chain can now be redefined as one such that $P_{N}\left(\mathbf{R}\right)$ is Gaussian in any dimension $d$.
- This is also true for a long chain with local interactions only, such that the correlations $\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle$ decay quickly with $\left|i-j\right|$.
- The probability of being in a spherical shell with radius $R$ is $4\pi R^{2}P_{N}\left(R\right)\mathrm{d}R$.
- The chance of returning to the origin $\mathbf{R}=0$ is $P_{N}\left(0\right)\propto N^{-d/2}$. The scaling $R_{0}\propto\sqrt{N}$ is typical of an ideal chain.
- For any dimension $d$, $\left\langle R^{2}\right\rangle=N\ell^{2}$.
Formal proof
Another way to show this follows, which is also extensible to other distributions of the $\mathbf{r}_{i}$.
Sidenote
This proof can be found in Doi and Edwards.
 
In general, we can write
$$P_{N}\left(\mathbf{R}\right)=\int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\,\Psi\left(\mathbf{r}_{1},...,\mathbf{r}_{N}\right)\,\delta\left(\mathbf{R}-\sum_{i=1}^{N}\mathbf{r}_{i}\right).$$
In the absence of correlations, we can factorize $\Psi$:
$$P_{N}\left(\mathbf{R}\right)=\int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\,\prod_{i}\psi\left(\mathbf{r}_{i}\right)\,\delta\left(\mathbf{R}-\sum_{i=1}^{N}\mathbf{r}_{i}\right).$$
For example, for a freely jointed chain $\psi\left(\mathbf{r}\right)=\frac{1}{4\pi\ell^{2}}\delta\left(\left|\mathbf{r}\right|-\ell\right)$. The normalization constant is found from $\int\mathrm{d}\mathbf{r}\,\psi\left(\mathbf{r}\right)=1$. We can replace the delta function with its Fourier representation $\delta\left(\mathbf{x}\right)=\frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{x}}$, leaving us with
$$\begin{array}{lcl}P_{N}\left(\mathbf{R}\right)&=&\frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\prod_{i}\left[e^{-i\mathbf{k}\cdot\mathbf{r}_{i}}\psi\left(\mathbf{r}_{i}\right)\right]\\&=&\frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\left[\int\mathrm{d}\mathbf{r}\,e^{-i\mathbf{k}\cdot\mathbf{r}}\psi\left(\mathbf{r}\right)\right]^{N}.\end{array}$$
In spherical coordinates,
$$\int\mathrm{d}\mathbf{r}\,e^{-i\mathbf{k}\cdot\mathbf{r}}\psi\left(\mathbf{r}\right)=\frac{\sin k\ell}{k\ell},$$
which gives
$$P_{N}\left(\mathbf{R}\right)=\frac{1}{\left(2\pi\right)^{3}}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}\left(\frac{\sin k\ell}{k\ell}\right)^{N}.$$
We are left with the task of evaluating the integral. This can be done analytically with the Laplace method for large $N$, since the largest contribution is around $k=0$: we can approximate $\left(\frac{\sin k\ell}{k\ell}\right)^{N}$ by $e^{-\frac{Nk^{2}\ell^{2}}{6}}$, using $\frac{\sin x}{x}\approx1-\frac{x^{2}}{6}\approx e^{-x^{2}/6}$. The integral is then
$$\begin{array}{lcl}P_{N}\left(\mathbf{R}\right)&=&\left(\frac{1}{2\pi}\right)^{3}\int\mathrm{d}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{R}}e^{-\frac{k^{2}\ell^{2}N}{6}}\\&=&\left(\frac{1}{2\pi}\right)^{3}\int\mathrm{d}k_{1}\mathrm{d}k_{2}\mathrm{d}k_{3}\exp\left[\sum_{\alpha}\left(ik_{\alpha}R_{\alpha}-\frac{Nk_{\alpha}^{2}\ell^{2}}{6}\right)\right]\\&=&\left(\frac{1}{2\pi}\right)^{3}\prod_{\alpha}\int\mathrm{d}k_{\alpha}\exp\left(ik_{\alpha}R_{\alpha}-\frac{Nk_{\alpha}^{2}\ell^{2}}{6}\right)\\&=&\left(\frac{3}{2\pi N\ell^{2}}\right)^{\frac{3}{2}}\exp\left\{-\frac{3R^{2}}{2N\ell^{2}}\right\}.\end{array}$$
 
This is, of course, the same Gaussian form we have obtained from the random walk (we have done the special case of $d=3$, but once again this process can be repeated for a general dimension $d$).
03/26/2009
Rigid and Semi-Rigid Polymer Chains in Solution
Worm-like chain
In considering the $\vartheta\rightarrow0$ limit of the freely rotating chain, we have seen that $R_{0}^{2}\approx\frac{4N\ell^{2}}{\vartheta^{2}}$ diverges. This is of course unphysical, and this limit is actually important for many interesting cases of stiff chains (for instance, DNA). If we take the $\vartheta\rightarrow0$ limit along with $\ell\rightarrow0$ and start over, we can make the following change of variables:
$$\left\langle\mathbf{r}_{i}\cdot\mathbf{r}_{j}\right\rangle=\ell^{2}\left\langle\cos\vartheta_{ij}\right\rangle=\ell^{2}\left(\cos\vartheta\right)^{\left|i-j\right|}=\ell^{2}\exp\left[-\frac{\left|i-j\right|\ell}{\ell_{p}}\right],$$
 
which defines the persistence length $\ell_{p}$. For the FRC model,
$$\ell_{p}=-\frac{\ell}{\ln\cos\vartheta}.$$
 
This is a useful concept in general, however: it defines the typical length scale over which correlations between chain angles die out, and is therefore an expression of the chain's rigidity. At small $\vartheta$ we can expand the logarithm to get
$$\ell_{p}\approx\frac{2\ell}{\vartheta^{2}}.$$
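Both the exact relation and its small-angle expansion are easy to verify numerically (a sketch with arbitrary test angles):

```python
import math

def persistence_length(theta, ell=1.0):
    """FRC persistence length from (cos theta)^n = exp(-n * ell / l_p)."""
    return -ell / math.log(math.cos(theta))

# small-angle expansion: l_p ~ 2 * ell / theta**2
```

For $\vartheta=0.05\,\mathrm{rad}$ this gives $\ell_{p}\approx800\,\ell$, in agreement with $2\ell/\vartheta^{2}$; stiffer chains (smaller $\vartheta$) have longer persistence lengths.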
Taking the continuum limit carefully then requires us to consider $\ell\rightarrow0$ and $N\rightarrow\infty$ such that the contour length $L=N\ell$ is constant. Now, we can calculate the end-to-end length at the continuum limit using the new form for the correlations:
$$R_{0}^{2}=\int_{0}^{L}\mathrm{d}u\int_{0}^{L}\mathrm{d}v\,e^{-\frac{\left|u-v\right|}{\ell_{p}}}.$$
To simplify the calculation, we can define the dimensionless variables $x=u/\ell_{p}$, $y=v/\ell_{p}$ and $s=L/\ell_{p}$. With these replacements,
$$R_{0}^{2}=\ell_{p}^{2}\int_{0}^{s}\mathrm{d}x\int_{0}^{s}\mathrm{d}y\,e^{-\left|x-y\right|}.$$
 
The final result (known as the Kratky-Porod worm-like-chain or WLC) is
$$R_{0}^{2}=2\ell_{p}L-2\ell_{p}^{2}\left(1-e^{-L/\ell_{p}}\right).$$
Importantly, it does not depend on $\ell$ or $N$ but only on the physically transparent persistence length and contour length.
We will consider the two limits where one parameter is much larger than the other. First, for $L\ll\ell_{p}$ we encounter the rigid rod limit: we can expand the previous expression into
$$R_{0}^{2}\approx L^{2}\left(1-\frac{L}{3\ell_{p}}\right)\approx L^{2}.$$
The fact that $R_{0}\propto N$ rather than $R_{0}\propto\sqrt{N}$ is a result of the long-range correlations we have introduced, and is an indication that at this regime the material is in an essentially different phase. Somewhere between the ideal chain and the rigid rod, a crossover regime must exist. For $L\gg\ell_{p}$ we can neglect the exponent, obtaining
$$R_{0}^{2}\approx2\ell_{p}L.$$
This therefore returns us to the ideal chain limit, with a Kuhn length $b=2\ell_{p}$. The crossover phenomenon we discussed occurs on the chain itself here as we observe correlation between its pieces at differing length scales: at small scales ($\lesssim\ell_{p}$) it behaves like a rigid rod, while at long scales we have an uncorrelated random walk. An interesting example is a DNA chain, which can be described by a worm-like chain with $\ell_{p}\approx50\,\mathrm{nm}$ and a macroscopic contour length $L$: it will therefore typically cover a radius of $R_{0}=\sqrt{2\ell_{p}L}$, far smaller than $L$.
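Both limits of the Kratky-Porod expression can be confirmed numerically (a sketch; the parameter values are arbitrary):

```python
import math

def wlc_R2(L, lp):
    """Kratky-Porod worm-like chain: R0^2 = 2*lp*L - 2*lp^2*(1 - exp(-L/lp))."""
    return 2.0 * lp * L - 2.0 * lp ** 2 * (1.0 - math.exp(-L / lp))

# L << lp: rigid rod, R0^2 -> L^2;  L >> lp: ideal chain, R0^2 -> 2*lp*L
```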
Free Energy of the Ideal Chain and Entropic Springs
We have calculated distributions of $\mathbf{R}$ for Gaussian chains with $d$ components, $P_{N}\left(\mathbf{R}\right)\propto e^{-\frac{dR^{2}}{2N\ell^{2}}}$. Let's consider the entropy of such chains:
$$S\left(\mathbf{R},N\right)=k_{B}\ln\Omega_{N}\left(\mathbf{R}\right).$$
The logarithm of $\Omega_{N}\left(\mathbf{R}\right)$ is the same as that of $P_{N}\left(\mathbf{R}\right)$, aside from a factor which does not depend on $\mathbf{R}$. Therefore,
$$S\left(\mathbf{R},N\right)=-\frac{dk_{B}R^{2}}{2N\ell^{2}}+S\left(0,N\right).$$
The free energy is
$$F\left(\mathbf{R},N\right)=U-TS=\frac{dk_{B}TR^{2}}{2N\ell^{2}}+\mathrm{const},$$
since $U$ does not depend on $\mathbf{R}$ for an ideal chain.
What does $F\left(\mathbf{R},N\right)$ mean? It represents the energy needed to stretch the polymer, and this energy is quadratic like a harmonic spring ($F=\frac{1}{2}kR^{2}$) with
$$k=\frac{dk_{B}T}{N\ell^{2}}.$$
Note that the polymer becomes less  elastic (more rigid) as
the temperature increases, unlike most solids. This is a physical
result and can be verified experimentally: for instance, the spring
constant of rubber (which is made of networks of polymer chains) increases
linearly with temperature.
Consider an experiment where instead of holding the chain at constant length, we apply a perturbatively weak force $\mathbf{f}$ to its ends and measure its average length. We can perform a Legendre transform between distance and force: from equality of forces along the direction in which they are applied,
$$f=\frac{\partial F}{\partial R}=\frac{dk_{B}T}{N\ell^{2}}\left\langle R\right\rangle\;\Rightarrow\;\left\langle R\right\rangle=\frac{N\ell^{2}}{dk_{B}T}f.$$
To be in this linear response ($\left\langle R\right\rangle\propto f$) region, we must demand that $\left\langle R\right\rangle\ll N\ell$, and to stress this we can write
$$\frac{\left\langle R\right\rangle}{N\ell}=\frac{f\ell}{dk_{B}T}\ll1.$$
Numerically, with a nanometric $\ell$ and at room temperature, the forces should be in the picoNewton range to meet this requirement.
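This estimate can be spelled out numerically (a sketch; the 1 nm monomer size and $N=1000$ are arbitrary illustrative values):

```python
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # room temperature, K
ell = 1e-9             # monomer size ~ 1 nm
N = 1000

f_scale = kB * T / ell                       # force scale where linearity breaks down
k_spring = 3.0 * kB * T / (N * ell ** 2)     # entropic spring constant in 3D
```

This gives $f\sim4\,\mathrm{pN}$, indeed the picoNewton scale; note also that $k$ grows linearly with $T$, the entropic stiffening discussed above.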
A more rigorous treatment which works at arbitrary forces can be carried out by considering an FJC with oppositely charged ($\pm q$) ends in an electric field $\mathbf{E}$. The chain's sites are at $\left\{\mathbf{R}_{i}\right\}$ with $\mathbf{r}_{i}=\mathbf{R}_{i}-\mathbf{R}_{i-1}$. The potential is
$$U=-q\mathbf{E}\cdot\left(\mathbf{R}_{N}-\mathbf{R}_{0}\right).$$
Since $\mathbf{R}_{N}-\mathbf{R}_{0}=\sum_{i}\mathbf{r}_{i}$, we can write the potential as
$$U=-\mathbf{f}\cdot\sum_{i=1}^{N}\mathbf{r}_{i},$$
with $\mathbf{f}=q\mathbf{E}$. The partition function is
$$Z_{N}\left(\mathbf{f}\right)=\int\mathrm{d}\mathbf{r}_{1}...\int\mathrm{d}\mathbf{r}_{N}\prod_{i}\left[\psi\left(\mathbf{r}_{i}\right)e^{\frac{\mathbf{f}\cdot\mathbf{r}_{i}}{k_{B}T}}\right].$$
The integrand is separable into a product of identical single-bond functions. Now,
$$Z_{N}\left(\mathbf{f}\right)=\left[\int\mathrm{d}\mathbf{r}\,\psi\left(\mathbf{r}\right)e^{\frac{\mathbf{f}\cdot\mathbf{r}}{k_{B}T}}\right]^{N}.$$
In spherical coordinates, with the polar axis chosen along $\mathbf{f}$, we can solve the integral:
$$\begin{array}{lcl}Z_{N}\left(\mathbf{f}\right)&=&\left[\int_{0}^{\infty}\mathrm{d}r\,\frac{r^{2}}{4\pi\ell^{2}}\delta\left(r-\ell\right)\right]^{N}\times\left[\int_{0}^{2\pi}\mathrm{d}\varphi\right]^{N}\times\prod_{i}\int_{0}^{\pi}\mathrm{d}\vartheta_{i}\sin\vartheta_{i}\,e^{\frac{f\ell}{k_{B}T}\cos\vartheta_{i}}\\&\underset{\scriptscriptstyle x=\cos\vartheta}{=}&\left(\frac{1}{4\pi}\right)^{N}\left(2\pi\right)^{N}\left[\int_{-1}^{1}\mathrm{d}x\,e^{\frac{f\ell}{k_{B}T}x}\right]^{N}\\&=&\frac{1}{2^{N}}\left[\frac{2k_{B}T}{f\ell}\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]^{N}\\&=&\left[\frac{k_{B}T}{f\ell}\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]^{N}.\end{array}$$
 
The Gibbs free energy (Gibbs because the external force is fixed)
is then 
$$G_{N}\left(\mathbf{f}\right)=-k_{B}T\ln Z_{N}\left(\mathbf{f}\right)=-k_{B}TN\ln\left[\sinh\left(\frac{f\ell}{k_{B}T}\right)\right]+k_{B}TN\ln\left(\frac{f\ell}{k_{B}T}\right),$$
and the average extension
$$\left\langle R\right\rangle_{f}=-\frac{\partial G_{N}\left(f\right)}{\partial f}=k_{B}TN\coth\left(\overset{\scriptstyle\equiv\alpha}{\overbrace{\frac{f\ell}{k_{B}T}}}\right)\frac{\ell}{k_{B}T}-k_{B}TN\frac{1}{f}=N\ell\left[\coth\alpha-\frac{1}{\alpha}\right]\equiv N\ell\,\mathcal{L}\left(\alpha\right).$$
 
The Langevin function $\mathcal{L}\left(\alpha\right)=\coth\alpha-\frac{1}{\alpha}$ is also typical of spin magnetization in external magnetic fields and of dipoles in electric fields at finite temperatures.
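A numerical sketch of the Langevin function (the small-argument series cutoff is an implementation choice) confirms its two limits: linear response $\left\langle R\right\rangle\approx N\ell\alpha/3$ at weak forces, and saturation $\left\langle R\right\rangle\rightarrow N\ell$ at strong ones:

```python
import math

def langevin(a):
    """Langevin function L(a) = coth(a) - 1/a; <R> = N * ell * L(f*ell/kT)."""
    if abs(a) < 1e-4:
        return a / 3.0 - a ** 3 / 45.0    # series avoids cancellation near zero
    return 1.0 / math.tanh(a) - 1.0 / a
```

The weak-force slope reproduces the entropic-spring result $\left\langle R\right\rangle=N\ell^{2}f/\left(3k_{B}T\right)$ found above for $d=3$.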
04/02/2009
Polymers and Fractal Curves
Introduction to fractals
Book: B. Mandelbrot.
A fractal is an object with a fractal dimensionality, also called the Hausdorff dimension. This implies a new definition of dimensionality, which we will discuss.
Consider a sphere of radius $R$. It is considered three-dimensional because it has $V\propto R^{3}$ and $M\propto R^{3}$ at constant density. A plane has by the same reasoning $M\propto R^{2}$, and is therefore a $2D$ object. Fractals are mathematical objects such that by the same sort of calculation they will have $M\propto R^{d_{f}}$, for a $d_{f}$ which is not necessarily an integer number (this definition is due to Hausdorff). One example is the Koch curve (see (7)): in each of its iterations, we decrease the length of a segment by a factor of 3 and decrease its mass by a factor of 4. We will therefore have
$$d_{f}=\frac{\ln4}{\ln3}\approx1.26.$$
Note that a fractal's "real" length is infinite, and its approximations
will depend on the resolution. The structure exhibits self-similarity:
namely, on different length scales it will look the same. This can
be seen in the Koch snowflake: at any magnification, a part of the
curve looks similar to the whole curve. There's a very nice animation
of this in Wikipedia.
The total length of the curve depends on the ruler used to measure it: the actual length at iteration $n$ is $L_{n}=\left(\frac{4}{3}\right)^{n}L_{0}$. Another definition for the fractal dimension follows from measuring the length with a ruler of size $\delta$:
$$L\left(\delta\right)\propto\delta^{1-d_{f}}.$$
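Both definitions agree for the Koch curve, as a short sketch shows (the iteration count is arbitrary):

```python
import math

d_f = math.log(4.0) / math.log(3.0)   # 4 sub-segments, each scaled by 1/3

# at iteration n the ruler is delta = 3**-n and the measured length is (4/3)**n,
# which equals delta**(1 - d_f)
lengths = [(3.0 ** -n, (4.0 / 3.0) ** n) for n in range(1, 6)]
```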
 
Linking fractals to polymers
Sidenote
The Flory exponent $\nu$ is defined from $R\sim N^{\nu}$, such that $d_{f}=1/\nu$.
 
 
Consider the ideal Gaussian chain again. It has $R_{0}\propto N^{1/2}$, i.e. $N\propto R_{0}^{2}$. Since $N$ is proportional to the mass, we have an object with a fractal dimension of 2 no matter what the dimensionality of the actual space is. We can say that a polymer in $d$-space fills only $d_{f}$ dimensions of the space it occupies, where $d_{f}$ is 2 for an ideal Gaussian polymer and $d_{f}=1/\nu$ in general. Flory has shown that in some cases a non-ideal polymer can also have a well-defined $d_{f}$, in particular when a self-avoiding walk (SAW) is accounted for. The SAW as opposed to the Gaussian walk (GW) is the defining property of a physical rather than ideal polymer, and gives a fractal dimension of $d_{f}=\frac{5}{3}$ (from the Flory exponent $\nu=\frac{3}{5}$ in 3D). A collapsed polymer has $d_{f}=3$ and fills space completely. Note that two polymers with fractal dimensions $d_{1}$ and $d_{2}$ do not "feel" each other statistically if $d_{1}+d_{2}<d$.
Polymers, Path Integrals and Green's Functions
Books: Doi & Edwards, F. Wiegel, or Feynman & Hibbs.
Local Gaussian chain model and the continuum limit
This model is also known as LGC. We start from an FJC in 3D where
 and 
.
By the central limit theorem 
will always be taken from a Gaussian distribution when the number
of monomers is large (whatever the form of 
, as long as it
is symmetrical around zero such that 
):

 
In the LGC approximation we exchange the rigid rods for Gaussian
springs with 
and 
, by
setting 

We can then obtain for the full probability distribution

 
where 
. 
 describes
 harmonic springs with 
 connected
in series:

 
An exact property of the Gaussian distributions we have been using is that a sub chain of $n$ monomers (such as the sub chain starting at index $i$ and ending at $i+n$) will also have a Gaussian distribution of the end-to-end length:
$$P_{n}\left(\mathbf{R}_{i+n}-\mathbf{R}_{i}\right)=\left(\frac{3}{2\pi n\ell^{2}}\right)^{3/2}\exp\left\{-\frac{3\left(\mathbf{R}_{i+n}-\mathbf{R}_{i}\right)^{2}}{2n\ell^{2}}\right\}.$$
At the continuum limit, we will get Wiener distributions: the correct way to calculate the limit is to take $N\rightarrow\infty$ and $\ell\rightarrow0$ with $L=N\ell$ remaining constant. The length along the chain up to site $n$ is then described by $s=n\ell$, $0\leq s\leq L$. At this limit we can also substitute the derivative $\frac{\partial\mathbf{R}}{\partial s}$ for the finite differences $\frac{\mathbf{R}_{i}-\mathbf{R}_{i-1}}{\ell}$, such that
$$\frac{U_{0}}{k_{B}T}=\frac{3}{2\ell}\int_{0}^{L}\mathrm{d}s\left(\frac{\partial\mathbf{R}}{\partial s}\right)^{2}.$$
If we add an external spatial potential $U\left(\mathbf{R}\right)$ (which is single-body), its contribution to the free energy will amount to a factor of
$$\exp\left\{-\frac{1}{k_{B}T}\int_{0}^{L}\frac{\mathrm{d}s}{\ell}\,U\left(\mathbf{R}\left(s\right)\right)\right\}$$
in the Boltzmann factor.
04/23/2009
Functional path integrals and the continuum distribution function
Books: F. Wiegel, Doi & Edwards.
Consider what happens when we hold the ends of a chain defined by $\left\{\mathbf{R}_{i}\right\}$ in place, such that $\mathbf{R}_{0}=\mathbf{R}^{\prime}$ and $\mathbf{R}_{N}=\mathbf{R}$. We can calculate the probability of this configuration from the distribution $\Psi\left(\left\{\mathbf{r}_{i}\right\}\right)$ by integrating over the positions of all intermediate monomers. At the continuum limit the definition of the chain configurations translates into a function $\mathbf{R}\left(s\right)$, and the product of integrals can be taken as a path integral according to $\int\mathrm{d}\mathbf{R}_{1}...\int\mathrm{d}\mathbf{R}_{N-1}\rightarrow\int\mathcal{D}\left[\mathbf{R}\left(s\right)\right]$. The probability for each configuration with our constraint is a functional of $\mathbf{R}\left(s\right)$. The partition function is:
$$Z=\int_{\mathbf{R}\left(0\right)=\mathbf{R}^{\prime}}^{\mathbf{R}\left(L\right)=\mathbf{R}}\mathcal{D}\left[\mathbf{R}\left(s\right)\right]\exp\left\{-\frac{3}{2\ell}\int_{0}^{L}\mathrm{d}s\left(\frac{\partial\mathbf{R}}{\partial s}\right)^{2}-\frac{1}{k_{B}T}\int_{0}^{L}\frac{\mathrm{d}s}{\ell}\,U\left(\mathbf{R}\left(s\right)\right)\right\},$$
and we can normalize it to obtain a probability distribution function, given in terms of this path integral.
We now introduce the Green's function $G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)$, which as we will soon see describes the evolution from $\mathbf{R}^{\prime}$ to $\mathbf{R}$ in $N$ steps. We define it as the constrained path integral above, divided by the corresponding free path integral:
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\frac{\int_{\mathbf{R}\left(0\right)=\mathbf{R}^{\prime}}^{\mathbf{R}\left(L\right)=\mathbf{R}}\mathcal{D}\left[\mathbf{R}\left(s\right)\right]e^{-\frac{3}{2\ell}\int_{0}^{L}\mathrm{d}s\left(\frac{\partial\mathbf{R}}{\partial s}\right)^{2}-\frac{1}{k_{B}T}\int_{0}^{L}\frac{\mathrm{d}s}{\ell}U\left(\mathbf{R}\left(s\right)\right)}}{\int\mathcal{D}\left[\mathbf{R}\left(s\right)\right]e^{-\frac{3}{2\ell}\int_{0}^{L}\mathrm{d}s\left(\frac{\partial\mathbf{R}}{\partial s}\right)^{2}}}.$$
Note that while the numerator is proportional to the probability of going from $\mathbf{R}^{\prime}$ to $\mathbf{R}$, the denominator does not include the external potential. $G$ has several important properties:
- It is equal to the exact probability $P_{N}\left(\mathbf{R}-\mathbf{R}^{\prime}\right)$ for Gaussian chains in the absence of an external potential.
- If we consider that the chain might be divided into one sub chain between step $0$ and step $n$ and a second sub chain from step $n$ to step $N$, then
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\int\mathrm{d}\mathbf{R}^{\prime\prime}\,G\left(\mathbf{R},\mathbf{R}^{\prime\prime};N-n\right)G\left(\mathbf{R}^{\prime\prime},\mathbf{R}^{\prime};n\right).$$
We can use this property to compute expectation values of observables. If we have some function $A\left(\mathbf{R}_{n}\right)$ of a specific monomer $n$, for instance:
$$\left\langle A\left(\mathbf{R}_{n}\right)\right\rangle=\frac{\int\mathrm{d}\mathbf{R}\int\mathrm{d}\mathbf{R}^{\prime}\int\mathrm{d}\mathbf{R}^{\prime\prime}\,G\left(\mathbf{R},\mathbf{R}^{\prime\prime};N-n\right)A\left(\mathbf{R}^{\prime\prime}\right)G\left(\mathbf{R}^{\prime\prime},\mathbf{R}^{\prime};n\right)}{\int\mathrm{d}\mathbf{R}\int\mathrm{d}\mathbf{R}^{\prime}\int\mathrm{d}\mathbf{R}^{\prime\prime}\,G\left(\mathbf{R},\mathbf{R}^{\prime\prime};N-n\right)G\left(\mathbf{R}^{\prime\prime},\mathbf{R}^{\prime};n\right)}.$$
- The Green's function is the solution of the differential equation (see proof in Doi & Edwards and in homework):
$$\left[\frac{\partial}{\partial N}-\frac{\ell^{2}}{6}\frac{\partial^{2}}{\partial\mathbf{R}^{2}}+\frac{U\left(\mathbf{R}\right)}{k_{B}T}\right]G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\delta\left(\mathbf{R}-\mathbf{R}^{\prime}\right)\delta\left(N\right).$$
- The Green's function is defined as 0 for $N<0$ and is equal to $\delta\left(\mathbf{R}-\mathbf{R}^{\prime}\right)$ when $N=0$, in order to satisfy the boundary conditions.
Relationship to quantum mechanics
This equation for $G$ is very similar in form to the Schrödinger equation. To see this, we can rewrite it (for $N>0$) as:
$$\left[\frac{\partial}{\partial N}-\overset{\scriptstyle\equiv\mathcal{L}}{\overbrace{\left(\frac{\ell^{2}}{6}\frac{\partial^{2}}{\partial\mathbf{R}^{2}}-\frac{U\left(\mathbf{R}\right)}{k_{B}T}\right)}}\right]G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\left[\frac{\partial}{\partial N}-\mathcal{L}\right]G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=0.$$
If we make the replacements $N\rightarrow\frac{it}{\hbar}$, $\frac{\ell^{2}}{6}\rightarrow\frac{\hbar^{2}}{2m}$ and $\frac{U}{k_{B}T}\rightarrow V$, this is identical to $i\hbar\frac{\partial\psi}{\partial t}=\left[-\frac{\hbar^{2}}{2m}\nabla^{2}+V\right]\psi$.
Like the quantum Hamiltonian, the Hermitian operator $\mathcal{L}$ has eigenfunctions such that $\mathcal{L}\varphi_{p}=-\varepsilon_{p}\varphi_{p}$, which according to Sturm-Liouville theory span the solution space and can be orthonormalized ($\int\mathrm{d}\mathbf{R}\,\varphi_{p}^{*}\left(\mathbf{R}\right)\varphi_{q}\left(\mathbf{R}\right)=\delta_{pq}$). The solution of the non-homogeneous problem is therefore
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\sum_{p}\varphi_{p}\left(\mathbf{R}\right)\varphi_{p}^{*}\left(\mathbf{R}^{\prime}\right)e^{-\varepsilon_{p}N}\qquad\left(N>0\right),$$
where the $\varphi_{p}$ are solutions of the homogeneous equation $\left[\mathcal{L}+\varepsilon_{p}\right]\varphi_{p}=0$.
Example
A polymer chain in a box of dimensions $L_{x}\times L_{y}\times L_{z}$:
The potential $U\left(\mathbf{R}\right)$ is $0$ within the box and $\infty$ on the edges. The boundary conditions are $G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=0$ if $\mathbf{R}$ or $\mathbf{R}^{\prime}$ are on the boundary. The function is also separable in Cartesian coordinates:
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=G_{x}\left(x,x^{\prime};N\right)G_{y}\left(y,y^{\prime};N\right)G_{z}\left(z,z^{\prime};N\right).$$
Let's solve for $G_{x}$ (the other functions are similar):
$$\left[\frac{\partial}{\partial N}-\frac{\ell^{2}}{6}\frac{\partial^{2}}{\partial x^{2}}\right]G_{x}\left(x,x^{\prime};N\right)=0\qquad\left(N>0\right).$$
If we separate variables again with the ansatz $G_{x}=\varphi\left(x\right)e^{-\varepsilon N}$ we obtain
$$\frac{\ell^{2}}{6}\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}x^{2}}+\varepsilon\varphi=0,$$
with the boundary condition
$$\varphi\left(0\right)=\varphi\left(L_{x}\right)=0.$$
This gives an expression for the energy and eigenfunctions:
$$\varphi_{p}\left(x\right)=\sqrt{\frac{2}{L_{x}}}\sin\left(\frac{p\pi x}{L_{x}}\right),\qquad\varepsilon_{p}=\frac{\ell^{2}}{6}\left(\frac{p\pi}{L_{x}}\right)^{2},\qquad p=1,2,3,...$$
The Green's function can finally be written as
$$G_{x}\left(x,x^{\prime};N\right)=\sum_{p=1}^{\infty}\frac{2}{L_{x}}\sin\left(\frac{p\pi x}{L_{x}}\right)\sin\left(\frac{p\pi x^{\prime}}{L_{x}}\right)e^{-\varepsilon_{p}N}.$$
Since with the Cartesian symmetry of the box the partition function $Z=\int\mathrm{d}\mathbf{R}\int\mathrm{d}\mathbf{R}^{\prime}\,G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)$ is also separable, $Z=Z_{x}Z_{y}Z_{z}$, and using
$$\int_{0}^{L_{x}}\sin\left(\frac{p\pi x}{L_{x}}\right)\mathrm{d}x=\frac{L_{x}}{p\pi}\left(1-\cos p\pi\right)=\begin{cases}\frac{2L_{x}}{p\pi}&p\text{ odd}\\0&p\text{ even}\end{cases}$$
we can calculate
$$Z_{x}=\frac{8L_{x}}{\pi^{2}}\sum_{p\,\mathrm{odd}}\frac{1}{p^{2}}e^{-\varepsilon_{p}N}.$$
We can now go on to calculate the free energy $F=-k_{B}T\ln Z$, and we can for instance calculate the pressure on the box edges in the $x$ direction:
$$P_{x}=-\frac{1}{L_{y}L_{z}}\frac{\partial F}{\partial L_{x}}=\frac{k_{B}T}{L_{y}L_{z}}\frac{\partial\ln Z_{x}}{\partial L_{x}}.$$
Two limiting cases can be done analytically: first, if the box is much larger than the polymer, $L_{x}\gg\sqrt{N}\ell$, the exponents are all close to unity, $\sum_{p\,\mathrm{odd}}p^{-2}=\frac{\pi^{2}}{8}$, and $Z_{x}\approx L_{x}$, so that
$$P_{x}\approx\frac{k_{B}T}{L_{x}L_{y}L_{z}}=\frac{k_{B}T}{V}.$$
This is equivalent to a dilute gas of polymers (done here for a single chain). At the opposite limit, $L_{x}\ll\sqrt{N}\ell$, the polymer should be "squeezed". The Gaussian approximation will be no good if we squeeze too hard, but at least for some intermediate regime we can neglect all but the first term in the series:
$$Z_{x}\approx\frac{8L_{x}}{\pi^{2}}e^{-\frac{\pi^{2}\ell^{2}N}{6L_{x}^{2}}}\;\Rightarrow\;P_{x}\approx\frac{k_{B}T}{L_{y}L_{z}}\left[\frac{1}{L_{x}}+\frac{\pi^{2}\ell^{2}N}{3L_{x}^{3}}\right]\approx\frac{\pi^{2}\ell^{2}N}{3}\,\frac{k_{B}T}{L_{x}^{3}L_{y}L_{z}}.$$
There is a large extra pressure caused by the "squeezing" of the chain and the corresponding loss of its entropy.
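The eigenvalue series for $Z_{x}$ can be summed directly, confirming both limits (a numerical sketch; $N=100$, $\ell=1$ and the truncation at $p<200$ are arbitrary choices):

```python
import math

def Zx(Lx, N, ell=1.0, pmax=200):
    """Zx = (8*Lx/pi^2) * sum over odd p of p**-2 * exp(-pi^2*ell^2*N*p^2 / (6*Lx^2))."""
    a = math.pi ** 2 * ell ** 2 * N / (6.0 * Lx ** 2)
    return (8.0 * Lx / math.pi ** 2) * sum(
        math.exp(-a * p * p) / (p * p) for p in range(1, pmax, 2))
```

For a large box $Z_{x}\rightarrow L_{x}$ (free chain, ideal-gas pressure), while in a narrow box the $p=1$ term dominates completely.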
04/30/2009
The same formalism can be used to treat polymers near a wall or in a well near a wall, for instance (see the homework for details). In the well case, like in the similar quantum problem, we will have bound states for $T<T_{c}$ (where the critical temperature is defined by a critical value of the dimensionless well depth, and describes the condition for the potential well to be "deep" enough to contain a bound state).
Dominant ground state
Note that since
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)=\sum_{p}\varphi_{p}\left(\mathbf{R}\right)\varphi_{p}^{*}\left(\mathbf{R}^{\prime}\right)e^{-\varepsilon_{p}N},$$
where the ground state $\varphi_{0}$ is positive and the $\varepsilon_{p}$ are real and ordered (assuming no degeneracy, $\varepsilon_{0}<\varepsilon_{1}<\varepsilon_{2}<...$), at large $N$ we can neglect all but the leading terms (smallest energies) and
$$G\left(\mathbf{R},\mathbf{R}^{\prime};N\right)\approx\varphi_{0}\left(\mathbf{R}\right)\varphi_{0}^{*}\left(\mathbf{R}^{\prime}\right)e^{-\varepsilon_{0}N}.$$
This is possible because the exponent is decreasing rather than oscillating, as it is in the quantum mechanics case. Taking only the first term in this series is called the dominant ground state approximation.
Polymers in Good Solutions and Self-Avoiding Walks
Virial expansion
So far, in treating Gaussian chains, we have neglected any long-ranged interactions. However, polymers in solution cannot self-intersect, and this introduces interactions $V\left(\mathbf{r}_{i}-\mathbf{r}_{j}\right)$ into the picture which are local in real-space, but are long ranged in terms of the contour spacing – that is, they are not limited to $j=i\pm1$. The importance of this effect depends on dimensionality: it is easy to imagine that intersections in 2D are more effective in restricting a polymer's shape than intersections in 3D.
The interaction potential 
 can in general
have both attractive and repulsive parts, and depends on the detailed
properties of the solvent. If we consider it to be due to a long-ranged
attractive van der Waals interaction and a short-ranged repulsive
hard-core interaction, it might be modeled by a 6-12 Lennard-Jones
potential. To treat interaction perturbatively within statistical
mechanics, we can use a virial expansion (this is a statistical-mechanical
expansion in powers of the density, useful for systematic perturbative
corrections to non-interacting calculations when one wants to include
many-body interactions). The second virial coefficient is
![{\displaystyle v_{2}=\int \mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right].}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/247a0ae391d1e1231f1cb07716d22b380a5f2e0f.svg)
 
To make the calculation easy, consider a potential even simpler than
the 6-12 Lennard-Jones:

This gives
![{\displaystyle {\begin{matrix}v_{2}&=&{\overset {\scriptstyle ={\frac {4\pi }{3}}\sigma ^{3}\equiv V_{0}}{\overbrace {\int _{r<\sigma }\mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right]} }}+{\overset {\scriptstyle ={\frac {4\pi }{3}}\left[\left(2\sigma \right)^{3}-\sigma ^{3}\right]\left(1-e^{\beta \varepsilon }\right)}{\overbrace {\int _{\sigma <r<2\sigma }\mathrm {d} ^{3}r\left[1-e^{-{\frac {V\left(r\right)}{k_{B}T}}}\right]} }}&=&8V_{0}-7V_{0}e^{\beta \varepsilon }.\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/97fcbb6fc5ade4168ed8f0b24eb3057b88769965.svg)
 
This can be positive (signifying net repulsion between the particles)
at 
 or negative (signifying
attraction) for 
. While
the details of this calculation depend on our choice and parametrization
of the potential, in general we will have some special temperature
known as the 
 temperature (in our case 
)
where 

 
This allows us to define a good solvent: such a solvent must have
 at our working temperature. This assures us (within
the second virial approximation, at least) that the interactions are
repulsive and (as can be shown separately) that the chain is swollen.
A bad solvent for which 
 will have attractive interactions,
resulting in collapse. A solvent for which 
 is
called a 
 solvent, and returns us to a Gaussian chain
unless the next virial coefficient is taken.
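The theta temperature of the square-well model used above can be checked numerically: setting $v_{2}=8V_{0}-7V_{0}e^{\beta\varepsilon}=0$ gives $k_{B}T_{\theta}=\varepsilon/\ln\left(8/7\right)$. A sketch with illustrative units for $V_{0}$ and $\varepsilon$:

```python
import math

V0 = 1.0    # hard-core volume (4*pi/3)*sigma^3, illustrative units
eps = 1.0   # well depth, in units of k_B (temperatures in the same units)

def v2(T):
    """Second virial coefficient of the square-well model above:
    v2 = 8*V0 - 7*V0*exp(eps/(k_B T))."""
    return 8.0 * V0 - 7.0 * V0 * math.exp(eps / T)

# v2 = 0 defines the theta temperature: exp(eps/T_theta) = 8/7
T_theta = eps / math.log(8.0 / 7.0)

print(v2(2.0 * T_theta))    # positive: net repulsion, good solvent
print(v2(0.5 * T_theta))    # negative: net attraction, bad solvent
```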
Lattice model
A common numerical treatment for this kind of system is to draw the
polymer on a grid and run Monte Carlo simulations, where steps must be self-avoiding
and their probability is taken from a thermal distribution while maintaining
detailed balance. This gives in 3D 
 where
.
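As a complement to Monte Carlo sampling, short self-avoiding walks can be enumerated exactly. The sketch below is a naive depth-first enumeration on the 2D square lattice (not an efficient production algorithm); it reproduces the known walk counts 4, 12, 36, 100, ... and shows $\left\langle R^{2}\right\rangle$ growing faster than for an ideal walk:

```python
def saw_stats(n_max):
    """Exhaustive depth-first enumeration of self-avoiding walks on the
    2D square lattice. Returns {n: (number of n-step walks,
    mean squared end-to-end distance)}."""
    counts = [0] * (n_max + 1)
    r2sum = [0] * (n_max + 1)

    def walk(x, y, visited, n):
        if n > 0:
            counts[n] += 1
            r2sum[n] += x * x + y * y
        if n == n_max:
            return
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in visited:       # self-avoidance constraint
                visited.add(nxt)
                walk(nxt[0], nxt[1], visited, n + 1)
                visited.remove(nxt)

    walk(0, 0, {(0, 0)}, 0)
    return {n: (counts[n], r2sum[n] / counts[n]) for n in range(1, n_max + 1)}

stats = saw_stats(8)
print([stats[n][0] for n in (1, 2, 3, 4)])   # [4, 12, 36, 100]
print(stats[8][1])                           # <R^2> exceeds the ideal-walk value n
```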
Renormalization group
A connection between SAWs and critical phenomena was made by de Gennes
in the 1970s. Some of the similarities are summarized in the table
below. Using renormalization group methods, de Gennes showed by analogy
to a certain spin model that

 
This gives in 3D a result very close to the SAW: 
.
| Polymers | Magnetic Systems |
| $N\rightarrow\infty$; $1/N$ is a small parameter. | $T\rightarrow T_{c}$ (critical temperature); $T-T_{c}$ is a small parameter. |
| Chain size $R\sim N^{\nu}$. | Correlation length $\xi\sim\left(T-T_{c}\right)^{-\nu}$ – critical exponent $\nu$. |
| Gaussian chains (non-SAW). | Mean field theory. |
| For $d\geq4$, $\nu=1/2$ exactly. | MFT is accurate for $d\geq4$ (Ising model: $d_{c}=4$). |
Flory model
This is a very crude model which gives surprisingly good results.
We write the free energy as 
.
For the entropic part we take the expression for an ideal chain: 
,
. For the interaction, we use the second virial
coefficient:
![{\displaystyle {\begin{matrix}{\frac {F_{int}\left(R\right)}{k_{B}T}}&=&{\frac {1}{2}}\nu _{2}\int \left[c\left(r\right)\right]^{2}\mathrm {d} ^{3}r.\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/4a5abb6cffea77838fc07e153892fe4a590599b1.svg)
 
Here 
 is a local density such that its average value
is 
.
If we neglect local fluctuations in 
, then 
$$\int\left[c\left(r\right)\right]^{2}\mathrm{d}^{3}r=V\left\langle c^{2}\left(r\right)\right\rangle \approx V\left\langle c\left(r\right)\right\rangle ^{2}=R^{d}\left(\frac{N}{R^{d}}\right)^{2},\qquad\frac{F_{int}}{k_{B}T}\approx\frac{1}{2}v_{2}N^{2}R^{-d}.$$
The total free energy is then

 
The free parameter here is 
, but we do not know how it relates
to 
. For constant 
 the minimum is at 

which gives the Flory exponent

 
This exponent is exact for 1, 2 and 4 dimensions, and gives a very
good approximation (0.6) for 3 dimensions, but it misses completely
for more than 4 dimensions. For a numerical example consider a polymer
of 
 monomers each of which is about 
 in length.
From the expressions above,

 
This difference is large enough to be experimentally detectable by
the scattering techniques to be explained next.
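Since the numerical values are not preserved in these notes, here is a sketch with hypothetical numbers ($N=10^{4}$ monomers of size $a=0.5$ nm) comparing the ideal and Flory sizes; the swelling factor $N^{3/5-1/2}=N^{0.1}\approx2.5$ is what makes the difference detectable:

```python
# Hypothetical chain: N = 10^4 monomers of size a = 0.5 nm
a_nm = 0.5
N = 10 ** 4

R_ideal = a_nm * N ** 0.5   # Gaussian chain, nu = 1/2
R_flory = a_nm * N ** 0.6   # Flory exponent nu = 3/5 in 3D

print(R_ideal)   # 50.0 nm
print(R_flory)   # about 125.6 nm
```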
The reason the Flory method provides such good results turns out to
be a matter of lucky cancellation between two mistakes, both of which
are by orders of magnitude: the entropy is overestimated and the correlations
are underestimated. This is discussed in detail in all the books.
Field Theory of SAW
Books: Doi & Edwards, Wiegel
The seminal article of S.F. Edwards in 1965 was the first application
of field-theoretic methods to the physics of polymers. To insert interactions
into the Wiener distribution, we take the sum over two-body interactions
to the continuum limit 
.
This formalism is rather complicated and not much can be done by hand.
One possible simplification is to consider an excluded-volume (or
self-exclusion) interaction of Dirac delta function form, which prevents
two monomers from occupying the same point in space:

 
The advantage of this is that a simple form is obtained in which only
the second virial coefficient 
 is taken into account. The
expression for the distribution is then

 
With expressions of this sort, one can apply standard field-theory/many-body
methods to evaluate the Green's function and calculate observables.
This is more advanced and we will not be going into it.
05/07/2009
Scattering and Polymer Solutions
Materials can be probed by scattering experiments, and for dilute
polymer solutions this is one way to learn about the polymers within
them. Laser scattering requires relatively little equipment and can
be done in any lab, while x-ray scattering (SAXS) requires a synchrotron
and neutron scattering (SANS) requires a nuclear reactor. We will
discuss structural properties on the scale of chains rather than individual
monomers, which means relatively small wavenumbers. It will also soon
be clear that small angles are of interest.
Sidenote
Modeling the monomers as points is reasonable when considering probing on the scale of the complete
chain.
 
If we assume that the individual monomers act as point scatterers (see (8)) and consider a process which scatters the incoming wave
at 
 to 
, we can define a scattering
angle 
 and a scattering wave vector 
(which becomes smaller in magnitude as the angle 
 becomes
smaller). We then measure scattered waves at some outgoing angle for
some incoming angle, as illustrated in (9). In fact many chain
scatterers are involved, so we should take an ensemble average over the
chain configurations (which is incoherent, since the chains
are far apart compared with the typical decoherence length scale).
All this is discussed in more detail below.
Sidenote
For this kind of experiment to work with lasers or x-rays, there must
be a contrast : the polymer and solvent must have different
indices of refraction. X-Ray experiments rely on different electronic
densities. In neutron scattering experiments, contrast is achieved
artificially by labeling the polymers or solvent – that is, replacing
hydrogen with deuterium.
 
Within a chain, scattering is mostly coherent, such that the scattered
wavefunction is 
.
The intensity or power should be proportional to 
.
If we specialize to homogeneous chains where 
, then

 
This expression is suitable for a single static chain in a specific
configuration 
. For an ensemble
of chains in solution, we average over all chain configurations incoherently,
defining the structure factor  
:

 
The normalization is with respect to the unscattered wave at 
,
. Note that in an
isotropic system like the system of chain molecules in a solvent,
the structure factor must depend only on the magnitude of 
.
Inserting the expression for 
 into the above equation gives

 
We now switch to spherical coordinates with 
 parallel
to 
 with the added notation 
.
Since in these coordinates 
,
we can write 

 

 
The gyration radius and small angle scattering
For small 
 (which at least in the elastic case implies small 
),
we can expand the above expression for 
 in powers
of 
 to obtain 

 
The last equality is due to the fact 
.
If the scattering is elastic, 
and 

 
With this expression for 
 in terms of the angle 
,
the structure factor is then

 
From an experimental point of view, we can plot 
 as a function
of 
 and determine the polymer's
gyration radius 
 from the slope.
The approximation we have made is good when 
,
and this determines the range of angles that should be taken into
account: we must have 
.
For laser scattering usually 
 (about enough
to measure 
) while for neutron scattering 
(meaning we must take only very small angles into account to measure
, but also allowing for more detailed information about correlations
within the chain to be collected).
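The slope procedure can be tested on synthetic data. The sketch below (illustrative parameters) generates Gaussian-chain conformations, averages the orientationally averaged structure factor over them, and recovers $R_{g}^{2}$ from the small-$q$ slope:

```python
import numpy as np

rng = np.random.default_rng(0)

N, b = 200, 1.0     # monomers per chain, Kuhn length (illustrative)
n_conf = 200        # conformations to average over
qs = np.array([0.01, 0.02, 0.03, 0.04, 0.05])

S = np.zeros_like(qs)
Rg2 = 0.0
for _ in range(n_conf):
    steps = rng.normal(scale=b / np.sqrt(3), size=(N - 1, 3))
    r = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
    r -= r.mean(axis=0)
    Rg2 += np.mean(np.sum(r ** 2, axis=1))
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)  # pair distances
    for i, q in enumerate(qs):
        # orientation average: S(q) = (1/N) sum_nm sin(q r_nm)/(q r_nm);
        # np.sinc(x) = sin(pi x)/(pi x), hence the division by pi
        S[i] += np.sum(np.sinc(q * d / np.pi)) / N
S /= n_conf
Rg2 /= n_conf

# Guinier regime: S(q)/N ~ 1 - (q Rg)^2/3, so a linear fit of S/N against q^2
# has slope -Rg^2/3.
slope = np.polyfit(qs ** 2, S / N, 1)[0]
Rg2_fit = -3.0 * slope
print(Rg2, Rg2_fit)   # both close to N b^2 / 6 = 33.3
```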
Debye scattering function
Around 1947, Debye gave an exact result (the Debye function)
for Gaussian chains:

 

 
At the limit where 
 we can expand 
 around
, yielding the 
 limit we have encountered earlier.
For 
, 
.
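A direct implementation of the Debye function, $S\left(q\right)=N\frac{2}{x^{2}}\left(e^{-x}-1+x\right)$ with $x=\left(qR_{g}\right)^{2}$, makes both limits easy to check:

```python
import math

def debye_S(q, Rg, N=1.0):
    """Debye structure factor of a Gaussian chain:
    S(q) = N * (2/x^2) * (exp(-x) - 1 + x),  with x = (q*Rg)^2."""
    x = (q * Rg) ** 2
    if x < 1e-8:                        # avoid cancellation at tiny x
        return N * (1.0 - x / 3.0)      # Guinier limit
    return N * (2.0 / x ** 2) * (math.exp(-x) - 1.0 + x)

# Small q*Rg: S/N ~ 1 - (q*Rg)^2/3, the Guinier regime used above.
# Large q*Rg: S/N ~ 2/(q*Rg)^2, the q^(-2) tail of an ideal (nu = 1/2) chain.
print(debye_S(0.1, 1.0))    # about 0.99667
print(debye_S(10.0, 1.0))   # about 0.0198, i.e. 2/(q*Rg)^2
```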
Sidenote
Another way to observe GW behavior is to use a 
-solvent.
 
This also works very well for non-Gaussian chains in non-dilute solutions,
where a small percentage of the chains is replaced by isotopic variants.
This gives an effectively dilute solution of isotopic chains, which
can be distinguished from the rest, and these chains are effectively
Gaussian for reasons which we will mention later. An example from Rubinstein is neutron scattering from PMMA as done
by R. Kirste et al. (1975), which fits very nicely to the Debye function
for 
. In general, however, a SAW in a dilute
solution modifies the tail of the Debye function, since 
and 
 for a SAW.
The structure factor and monomer correlations
Consider the full distribution function of the distances 
.
This is related to the correlation function for monomer 
:

 
This function is evaluated by fixing a certain monomer 
 and counting
which other monomers are at a distance 
 from it, averaging
over all chain configurations. If we now average over all monomers
, we obtain

Fourier transforming it,

 
The fact that the structure function is the Fourier transform of the
scatterer density correlation function is, of course, not unique to
the case of polymers.
At large 
, it can be shown (homework) that if 
then 
. We can therefore
determine the fractal dimension of the chain from the large 
 tail
of the structure factor (see table).
| Model | $\nu$ | Fractal dimension $1/\nu$ | Large-$q$ tail of $S(q)$ |
| 3D GW | $1/2$ | $2$ | $q^{-2}$ |
| 3D SAW | $\approx3/5$ | $\approx5/3$ | $q^{-5/3}$ |
| 3D collapsed chain | $1/3$ | $3$ | $q^{-3}$ |
Polymer Solutions
Dilute and semi-dilute solutions
Up to this point, we have considered only independent chains in dilute
solutions. We have also discussed the quality of solvents and the
 temperature. Now, we consider multiple chains in a good
solvent (good because we do not want them in a collapsed state).
The concentration of monomers 
 is defined as the number of monomers
(for all chains) per unit volume. A solution is dilute if the typical
distance between chains is larger than 
 and semi-dilute if it
is smaller than 
. Between these limits, the concentration passes through a crossover value 
 where the inter-chain distance is equal to the typical chain size 
.
Sidenote
A concentrated  solution is defined by 
. If the solvent is removed completely, one obtains a melt , composed of polymer chains in a liquid state (a viscoelastic material). We will not be
discussing these cases further – see Rubinstein for details.
 
We can calculate 
 by calculating the concentration of monomers within a single chain and equating it to the average monomer concentration:

 
For instance, in a 3D SAW 
 and 
 such that
. We can also work in terms of volume fraction
. This turns out to be very small (for 
it is about 0.001 and for 
 it is about 0.4%).
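A quick sketch of the scaling $\phi^{*}\sim Na^{3}/R_{F}^{3}=N^{1-3\nu}=N^{-4/5}$ for a 3D SAW; the values of $N$ below are hypothetical, chosen only to show how small $\phi^{*}$ is:

```python
# Overlap volume fraction of a 3D SAW: phi* ~ N a^3 / R_F^3 = N^(1 - 3*nu)
# with nu = 3/5, i.e. phi* ~ N^(-4/5). The N values are hypothetical.
nu = 3.0 / 5.0

def phi_star(N):
    return N ** (1 - 3 * nu)

print(phi_star(10 ** 4))   # about 6e-4
print(phi_star(10 ** 3))   # about 0.004, i.e. 0.4%
```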
05/14/2009
Free energy of mixing
If we have a mixture of two components - 
 units of 
 and
 units of 
 on a lattice model with cell length 
such that 
 is the total number of cells – we can define
the relative volumes 
and 
. The free energy of mixing (in the simple isotropic
case) is then

From a combinatorial argument and with the help of the Stirling series,

The average entropy of mixing per cell is therefore

 
We now consider interactions 
, 
 and 
 between
nearest neighbors of the two species. The specific form of the interaction
depends on the coordination number 
, or the number of nearest
neighbors per grid point: for instance, on a square 2D grid 
.
The mixing interaction energy can be written as

 
where the 
 count the number of boundaries of the different
types within the system. In the mean field approximation , we
can evaluate them by neglecting local variations in density:

The interaction energy per-particle due to mixing is then

 
and we will subtract from it the enthalpy of the "pure" system,
where the components are unmixed:

 
The difference between these two quantities is the change in enthalpy
per unit cell due to mixing:

 
$$\chi=\frac{z}{k_{B}T}\left[U_{AB}-\frac{U_{AA}+U_{BB}}{2}\right].$$
 
The sign of the Flory parameter  
 determines whether
the minimum of the energy will be at the center or edges of the parabola
in 
.

 

 
This is the MFT approximation for the free energy of mixing.
The Flory-Huggins model for polymer solutions
This is based on work mostly done by Huggins around 1942. The basic
idea is to consider a lattice like the one shown in (11), with chains
(inhabiting 
 blocks in the example) in a solvent (which can
also be a set of chains, but in the example the number of blocks per
solvent unit is 
).
The enthalpy of mixing 
 is approximately independent
of the change from the molecule-solvent system to this polymer-solvent
system, at least within the MFT approximation. We can therefore set
 (
 is the number of monomers
and 
 the number of solvent units; 
 is the
number of chains) and use the previous expressions for 
and 
. The fact we have chains rather than individual monomers
is of crucial importance when we calculate the entropy, though: chains
have more constraints and therefore a lower entropy than isolated
monomers. We will make an approximation (correct to first order in 
 for
) based on the assumption that the chains are rigid objects
and can only be translated, rather than also rotated and deformed
around their center of mass.
Sidenote
This is treated in detail in the books by Flory and by Doi & Edwards.
 
This gives, making the Stirling approximation
as before,

If we neglect the term linear in 
, which we will later show is of no importance, these two expressions lead to the Flory-Huggins free energy of mixing:

 
Compared to our previous expression, we see that the only difference is in the division of the second term by 
.
Polymer/solvent phase transitions
A system composed of a polymer immersed in a solvent can be in a uniform
phase (corresponding to a good solvent) or separated into two distinct
phases (a bad solvent). Qualitatively, this depends on 
: the
entropic contribution to the free energy from 
will always prefer mixing, but the preference of 
depends on the sign of 
. Phase transfers can only possibly
exist if 
, because otherwise the total change in energy due
to mixing is always negative.
When discussing Helmholtz free energy, 
 is the degree of freedom
- however, in the physical case of interest it is constant and we must
perform a Legendre transformation, or in other words introduce a Lagrange
multiplier to impose the constraint that 
. We therefore
define

 
and after 
 is minimized 
 will be determined so as to maintain
our constraint (it turns out that 
 is the difference between
the chemical potentials of the polymer and solvent). When 
 has
multiple minima (
 for more
than one 
), a phase transition can exist.
If 
 has only one minimum at 
, then we must have 
.
If 
 has two minima, a first order phase transfer will exist when
the free energy 
 at these two minima is the same. This amount
to a common tangent construction  condition for 
 (see 12):

 
This requires 
.
The two formulations (in terms of 
 and 
) are of course identical.
The common tangent actually describes the free energy 
 of a mixed
phase system (having a volume 
 at concentration 
and a volume 
 at concentration 
, such that 
).
When 
 this line is always lower than the
concave profile of the uniform system with concentration 
,
and therefore the mixed-phase system must be the stable state.
Note that any additional term to 
 which is linear in 
 will
only produce a shift in 
, and not qualitatively change the phase
diagram. This is because 

 
Returning to the Flory-Huggins mixing energy, for 
 we can
see that 
 has two minima and the system can therefore be in two
phases. For 
 only one minimum exists, and therefore only one
phase. Generalizing beyond the Flory-Huggins model, at any temperature
 there exists some 
, and often a dependence
 works well experimentally (we
have found a dependence 
 assuming that the interactions
are independent of temperature). For every 
 or 
, we can
find 
 and 
 from the procedure above where two
phases exist. This produces a phase diagram similar to (13), where
the 
 curve is known as the binodal  or
demixing curve .
The phase diagram (13) includes a few more details: one is the critical point 
 or 
, beyond which two solutions can no longer coexist. Another is the spinodal curve, existing within the demixing curve at 
, which marks the transition between metastability and instability (within the spinodal curve, phase separation occurs spontaneously, while between the spinodal and binodal curves it requires some initial nucleation). The spinodal curve is usually quite close to the binodal curve, and since it can be found analytically it provides a useful estimate:
$$\chi_{s}\left(\phi\right)=\frac{1}{2}\left[\frac{1}{N\phi}+\frac{1}{1-\phi}\right].$$
The endpoint of the spinodal curve is also the endpoint of the binodal
curve; also, this endpoint is the same for the 
and 
 curves. We can find it from
![{\displaystyle {\begin{array}{lcl}0&=&\left.{\frac {\partial \chi _{s}}{\partial \phi }}\right|_{c}={\frac {1}{2}}\left[-{\frac {1}{N}}{\frac {1}{\phi _{c}^{2}}}+{\frac {1}{\left(1-\phi _{c}\right)^{2}}}\right],\\&\Downarrow \\\phi _{c}&=&{\frac {1}{1+{\sqrt {N}}}}.\end{array}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f6bc05ef7a290b20e53fdb96c5c32ce84239eeaa.svg)
Inserting this into the equation for 
 gives
$$\chi_{c}=\chi_{s}\left(\phi_{c}\right)=\frac{1}{2}\left(1+\frac{1}{\sqrt{N}}\right)^{2}\approx\frac{1}{2}+\frac{1}{\sqrt{N}}.$$
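These results are easy to verify numerically: the sketch below (with an illustrative $N$) checks that $\phi_{c}=1/\left(1+\sqrt{N}\right)$ sits at the minimum of the spinodal curve and evaluates $\chi_{c}$:

```python
import math

def chi_s(phi, N):
    """Spinodal curve: chi_s(phi) = (1/2) * (1/(N*phi) + 1/(1 - phi))."""
    return 0.5 * (1.0 / (N * phi) + 1.0 / (1.0 - phi))

N = 100
phi_c = 1.0 / (1.0 + math.sqrt(N))   # = 1/11
chi_c = chi_s(phi_c, N)              # = (1/2) * (1 + 1/sqrt(N))**2 = 0.605

# phi_c is the minimum of the spinodal curve:
print(chi_s(phi_c - 1e-3, N) > chi_c and chi_s(phi_c + 1e-3, N) > chi_c)
print(phi_c, chi_c)
```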
There is a great deal to expand on here. Chapter 4 in Rubinstein is
a good place to start.
Surfaces, Interfaces and Membranes
Introduction and Motivation
We will differentiate between several types of surfaces:
- An outer surface  (or boundary) between a liquid phase and a solid boundary or surface. This surface need not be in thermal equilibrium and exists under external constraints.
 
- An interface  between two phases in equilibrium with each other, like the A/B liquid mixture that was studied earlier.
 
- Membranes  have a molecular thickness and are in equilibrium with surrounding water.
 
We will first discuss flat interfaces, and then
extend the discussion to curved and fluctuating interfaces.
Flat Surfaces
The simplest kind of non-homogeneous system one can imagine may be
described by the variation in some order parameter or concentration
as a function of a single spatial direction, 
.
For instance, if we have a gas at 
 and a liquid
at 
, there will be some crossover regime between
them. This kind of physics can be treated with a Ginzburg-Landau formalism,
which can be derived from the continuum limit of a lattice gas/Ising
model.
If every cell 
 (with size 
) is parametrized by a discrete
spin variable $S_{i}$ such that

we may write the Hamiltonian as

 
The 
 are the interaction constants between
cells. Note that

The partition function is

 
with 
.
We can now formulate a mean-field theory (by neglecting correlations
such as: 
)
for this model in cases of spatial inhomogeneities (presence of walls
and interfaces). The full development is left as an exercise: the
result assumes a local thermal equilibrium 
and gives
![{\displaystyle {\begin{matrix}F_{0}&=&\left\langle F_{0}\right\rangle &=&{\frac {1}{2}}\sum _{ij}J_{ij}\left\langle S_{i}\right\rangle \left(1-\left\langle S_{j}\right\rangle \right)+k_{B}T\sum _{i}\left[\phi _{i}\ln \phi _{i}+\left(1-\phi _{i}\right)\ln \left(1-\phi _{i}\right)\right].\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/b300a1b3a5bf3bd268508453b4c0bd9f0b13be0c.svg)
Separating this $F_{0}$ into internal energy and entropy,

 
![{\displaystyle -TS=k_{B}T\sum _{i}\left[\phi _{i}\ln \phi _{i}+\left(1-\phi _{i}\right)\ln \left(1-\phi _{i}\right)\right].}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/fbf3751f90b141ba6caef8d996ec62ee922185e7.svg)
 
In the continuum limit 
 and
 and
neglecting long-ranged interactions, we can perform a Taylor expansion:

 
$$J_{ij}\phi_{i}\left(1-\phi_{j}\right)=\frac{1}{2}J_{ij}\left[\left(\phi_{i}-\phi_{j}\right)^{2}-\phi_{i}^{2}-\phi_{j}^{2}+2\phi_{i}\right],$$
$$\sum_{i\neq j}J_{ij}\phi_{i}\left(1-\phi_{j}\right)\rightarrow\frac{1}{4}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}zJ\left(-2\phi^{2}+2\phi\right)+\frac{1}{4}\int\frac{\mathrm{d}\mathbf{r}}{\ell^{3}}\sum_{ij}J_{ij}\left(\boldsymbol{\ell}_{j}\cdot\nabla\phi\right)^{2}.$$
$z$ is the coordination number.
![{\displaystyle U={\frac {1}{2}}\int {\frac {\mathrm {d} \mathbf {r} }{\ell ^{3}}}\left[zJ\phi \left(1-\phi \right)\right]+{\frac {1}{4}}\int {\frac {\mathrm {d} \mathbf {r} }{\ell ^{3}}}J\ell ^{2}\left(\triangledown \phi \right)^{2}.}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/9f28536db0c61c2c072ebcaaa30c4877e1f85163.svg)
Adding the continuum limit entropy,
$$F=\int\mathrm{d}\mathbf{r}\left[f_{0}\left(\phi\right)+\frac{1}{2}B\left(\nabla\phi\right)^{2}\right],$$
$$f_{0}=\frac{k_{B}T}{\ell^{3}}\left[\chi\phi\left(1-\phi\right)+\phi\ln\phi+\left(1-\phi\right)\ln\left(1-\phi\right)\right],$$
$$B\equiv\frac{J}{2\ell},\qquad\frac{1}{2}zJ\equiv k_{B}T\chi.$$
 
We can find the profile 
 at equilibrium
by minimizing the free energy functional 
with respect to 
 and taking external constraints
into account. Normally, 
 and the minimum of 
 is homogeneous
other than surfaces and interfaces. If 
,
the minimal solution 
 is a constant
and we will have a single homogeneous phase. On the other hand, if

 
and we are in the two-phase region, then a 1D
profile must exist that solves the Euler-Lagrange equation, and becomes
approximately homogeneous far from the center of the interface.
1D profile at an interface
Quite independently of the previous treatment and the microscopic
model, the free energy can be written as a functional of an order
parameter and its gradients:
![{\displaystyle F=\int \mathrm {d} \mathbf {r} \left\{f_{0}\left(\phi \left(\mathbf {r} \right)\right)+{\frac {B}{2}}\left[\triangledown \phi \left(\mathbf {r} \right)\right]^{2}\right\}.}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/a960028502b87356d48bbeca32ff8bf557b6ce8b.svg)
 
Since 
, for 
 the system
avoids strong local fluctuations and smooth states have smaller energies.
A uniform state is therefore preferred, and if the system is not allowed
to become fully uniform then regions of different phases form in equilibrium
with each other. This is shown in (16), and can also be described
by a tangent construction of the type illustrated in (12).
In the two phase example above, due to the symmetry of 
 in 
, the critical point is clearly
at 
. We will make a Taylor expansion of 
around $\phi_{c}$:

 
Due to the same symmetry in 
, an expansion of 
 in 
should contain only even powers. Performing this expansion gives the
result
$$f_{0}=\frac{k_{B}T}{\ell^{3}}\left[2\psi^{2}+\frac{4}{3}\psi^{4}-\ln 2+\chi\left(\frac{1}{4}-\psi^{2}\right)\right]=\frac{k_{B}T}{\ell^{3}}\left[\left(2-\chi\right)\psi^{2}+\frac{4}{3}\psi^{4}\right]+\mbox{const.}$$
 
In general the 
 will be replaced by some positive numerical
factor 
. To obtain the correct critical behavior
(note that 
) we assume a linear
dependence of the form 
,
and minimize
![{\displaystyle {\begin{matrix}F&=&{\frac {k_{B}T}{\ell ^{3}}}\int \left[-{\frac {\alpha }{2}}\left(T-T_{c}\right)\psi ^{2}+{\frac {\gamma }{4}}\psi ^{4}\right]\mathrm {d} \mathbf {r} +{\frac {B}{2}}\int \left(\triangledown \psi \right)^{2}\mathrm {d} \mathbf {r} .\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/35c788b18d3c30b5f487df42f4e976d7625cb5bd.svg)
 
The above expansion of the inhomogeneous free energy is called the
Ginzburg-Landau (GL) model or expansion. By applying a variational
principle on this free energy, we obtain the Euler-Lagrange (EL) equations:

 
06/09/2009
Here 

 
In particular, 
and 
.
The EL equation is therefore

 
This is the well-known Ginzburg-Landau (GL) equation.
For 
 the only homogeneous (bulk) solution (arrived at by
neglecting the Laplacian term) is

 
In the other case when 
, the system has two homogeneous
solutions

 
If we do not neglect the derivative but assume a 1D profile with 
and $\psi^{\prime}\left(\pm\infty\right)=0$, we must solve the equation

The exact solution of the GL model is 

 

 
We have introduced the correlation length 
, which is typical
of the width of the meniscus (the layer in which the phases are mixed).
As a matter of fact, 
 is also the correlation length by the
definition 
.
The dependence 
 is the
mean field result with an exponent 
. In general,
. We also have for the order
parameter dependence 
 where
we have obtained in MFT 
.
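The tanh form of the kink profile can be verified against the GL equation numerically. In the sketch below the parameters $B$, $A=\alpha\left(T_{c}-T\right)$ and $\gamma$ are arbitrary illustrative values; the profile $\psi\left(z\right)=\psi_{b}\tanh\left(z/\sqrt{2}\xi\right)$ is substituted into $B\psi^{\prime\prime}+A\psi-\gamma\psi^{3}$ by finite differences, and the residual vanishes:

```python
import math

# Hypothetical GL parameters (units absorbed into the constants):
B, A, gamma = 1.0, 2.0, 0.5        # A = alpha*(T_c - T) > 0 below the transition

psi_b = math.sqrt(A / gamma)       # bulk order parameter, psi_b^2 = A/gamma
xi = math.sqrt(B / A)              # correlation length
w = math.sqrt(2.0) * xi            # width of the tanh profile

def psi(z):
    """Kink profile interpolating between the two bulk solutions -psi_b, +psi_b."""
    return psi_b * math.tanh(z / w)

def residual(z, h=1e-4):
    """GL equation residual B psi'' + A psi - gamma psi^3, via central differences."""
    d2 = (psi(z + h) - 2.0 * psi(z) + psi(z - h)) / h ** 2
    return B * d2 + A * psi(z) - gamma * psi(z) ** 3

# The residual is zero (to finite-difference accuracy) for every z,
# confirming that the tanh kink solves the GL equation.
print(max(abs(residual(z)) for z in (-3.0, -1.0, 0.0, 0.5, 2.0)))
```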
Surface energy and surface tension
Surface energy  is the excess of energy in the system with respect
to the bulk. Surface tension  
 is defined as the surface
energy per unit area. Therefore, in our case of two phases separated
by a meniscus, $\sigma$ can be calculated using
![{\displaystyle \sigma \cdot {\mbox{Area}}=\Delta F=F\left[\psi \left(\mathbf {r} \right)\right]-\left[{\frac {1}{2}}Vf_{0}\left(\psi _{b}\right)+{\frac {1}{2}}Vf_{0}\left(-\psi _{b}\right)\right].}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/4c0fff69ce6d9b9cdc83b05691c86083867608a6.svg)
 
Here, we have subtracted the bulk energy of the separate surfaces
from the energy of the full system including the interface. Note that
in equilibrium, by definition 
.
With the 1D dependence we are treating, then, 
and
![{\displaystyle {\begin{matrix}\sigma &=&{\overset {\scriptstyle =1}{\overbrace {{\frac {1}{\mbox{Area}}}\int \mathrm {d} x\int \mathrm {d} y} }}\int _{-\infty }^{\infty }\mathrm {d} z\left[{\frac {B}{2}}[\psi ^{\prime }\left(z\right)]^{2}+f_{0}\left(\psi \left(z\right)\right)-f_{0}\left(\psi _{b}\right)\right]&=&\int _{-\infty }^{\infty }\mathrm {d} z\left[{\frac {B}{2}}[\psi '\left(z\right)]^{2}+f_{0}\left(\psi \right)-f_{0}\left(\psi _{b}\right)\right].\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/ae470e1e6b868979ee9484bbcea735a1d8909faf.svg)
 
This is not an extensive quantity like 
, which scales
with the size of the system: it is rather a geometry-independent
parameter with units of energy per unit area.
The first term above is reminiscent of kinetic energy and the second
of potential energy. An analogy to the classical mechanics of a point
particle exists, as detailed in the following table.
\begin{table}[H]
\centering{}\begin{tabular}{|c|c|}
\hline
$z$ & $t$ (time)\tabularnewline
\hline
\hline
$\psi$ & $x$ (distance)\tabularnewline
\hline
$\frac{B}{2}\left(\psi^{\prime}\right)^{2}$ & $\frac{m}{2}\dot{x}^{2}$ (kinetic energy)\tabularnewline
\hline
$-\left[f_{0}\left(\psi\right)-f_{0}\left(\psi_{b}\right)\right]$ & $V\left(x\right)$ (potential energy)\tabularnewline
\hline
$\frac{B}{2}\left(\psi^{\prime}\right)^{2}-f_{0}\left(\psi\right)+f_{0}\left(\psi_{b}\right)$ & $E$ (total energy)\tabularnewline
\hline
\end{tabular}
\end{table}
With this analogy in mind, we can derive an expression similar to
energy conservation in mechanics. From applying the variational principle
to $f_{0}$ we obtain
![{\displaystyle {\frac {\partial f_{0}}{\partial \psi }}={\frac {\partial }{\partial z}}{\frac {\partial }{\partial \psi ^{\prime }}}\left[{\frac {B}{2}}\left(\psi ^{\prime }\right)^{2}\right]=B\psi ^{\prime \prime }.}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/e75eb20340e0a0b352bad918efbe32326577ffc8.svg)
Multiplying this by $\psi^{\prime}$ gives
![{\displaystyle \psi ^{\prime }{\frac {\partial f_{0}}{\partial \psi }}=B\psi ^{\prime }\psi ^{\prime \prime }={\frac {B}{2}}{\frac {d}{dz}}\left[\left(\psi ^{\prime }\right)^{2}\right].}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/00089a07193e9b91a4590f12668cd77c29334786.svg)
Integrating over $z$ from $-\infty$ to $+\infty$,
![{\displaystyle {\begin{matrix}{\overset {\scriptstyle =\int _{-\infty }^{z}{\frac {df_{0}}{dz}}\mathrm {d} z}{\overbrace {\int _{-\infty }^{z}{\frac {df_{0}}{d\psi }}{\frac {d\psi }{dz}}\mathrm {d} z} }}&=&{\frac {B}{2}}\int _{-\infty }^{z}{\frac {d}{dz}}\left(\psi ^{\prime }\right)^{2}\mathrm {d} z=\left.{\frac {B}{2}}\left(\psi ^{\prime }\right)^{2}\right|_{-\infty }^{z}&\Downarrow f_{0}\left(\psi \right)-f_{0}\left(\psi _{b}\right)&=&{\frac {B}{2}}\left\{\left[\psi ^{\prime }\left(z\right)\right]^{2}-{\overset {=0}{\overbrace {\left(\psi ^{\prime }\left(-\infty \right)\right)^{2}} }}\right\}&\Downarrow \end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/c4869ce6e91a5105ac7778aef88dbbd582825d6f.svg)
 

 
The last term disappears due to the boundary condition at 
,
where 
 and therefore 
.
The analogy between this equation and the law of conservation of mechanical
energy can be stressed by writing it as

 
Returning to the surface tension, we can use this conservation law
to rewrite it in the simpler form 
![{\displaystyle {\begin{matrix}\sigma &=&\int _{-\infty }^{\infty }\left[{\frac {B}{2}}\left(\psi ^{\prime }\left(z\right)\right)^{2}+{\overset {\scriptstyle ={\frac {B}{2}}\left(\psi ^{\prime }\right)^{2}}{\overbrace {f_{0}\left(\psi \right)-f_{0}\left(\psi _{b}\right)} }}\right]\mathrm {d} z=B\int _{-\infty }^{\infty }\left(\psi ^{\prime }\left(z\right)\right)^{2}\mathrm {d} z.\end{matrix}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/fd22409f375713869b46ceaad61a72d3d7c61e88.svg)
An estimate may be obtained from

or

 
The exact expression for 
 may be obtained from the exact
GL form that we have derived for 
. In any case, the temperature
dependence of $\sigma$ is of the form 

 
If we insert the general exponential dependencies of 
 and 
into the equation, we will see that the exponent for surface energy
as function of 
 is 
.
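The simplified form $\sigma=B\int\left(\psi^{\prime}\right)^{2}\mathrm{d}z$ can be evaluated explicitly for the tanh profile; with illustrative parameters, a numerical integral reproduces the analytic value $\sigma=\frac{2\sqrt{2}}{3}B\psi_{b}^{2}/\xi$:

```python
import math

# Hypothetical GL parameters (same illustrative conventions as above):
B, A, gamma = 1.0, 2.0, 0.5        # A = alpha*(T_c - T)
psi_b = math.sqrt(A / gamma)       # bulk order parameter
xi = math.sqrt(B / A)              # correlation length
w = math.sqrt(2.0) * xi            # tanh-profile width

def dpsi(z):
    """Derivative of the kink profile psi(z) = psi_b * tanh(z/w)."""
    return psi_b / (w * math.cosh(z / w) ** 2)

# sigma = B * integral of (psi')^2 dz, by the trapezoid rule on [-20, 20]
zs = [-20.0 + 0.001 * i for i in range(40001)]
vals = [B * dpsi(z) ** 2 for z in zs]
sigma_num = sum(0.001 * 0.5 * (vals[i] + vals[i + 1]) for i in range(len(vals) - 1))

# Analytic result for the tanh profile: sigma = (2*sqrt(2)/3) * B * psi_b^2 / xi
sigma_exact = (2.0 * math.sqrt(2.0) / 3.0) * B * psi_b ** 2 / xi
print(sigma_num, sigma_exact)   # both 16/3 for these parameters
```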
This discussion can be extended to systems which do not have symmetry
between 
 and 
, such as a liquid/gas system with two
densities 
 and 
. Without proof, we will state that
within the GL formalism it can be shown that
![{\displaystyle f_{0}\left(n\right)-{\frac {1}{2}}\left[f_{0}\left(n_{g}\right)+f_{0}\left(n_{\ell }\right)\right]=c\left(n-n_{g}\right)^{2}\left(n-n_{\ell }\right)^{2}.}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/868da39560d352e18fd2c1476f1bfabbc32a58fe.svg)
The surface energy will be
![{\displaystyle \Delta F=\int \mathrm {d} \mathbf {r} \left[{\frac {1}{2}}B\left(\triangledown n\right)^{2}+f_{0}\left(n\right)-{\frac {1}{2}}f_{0}\left(n_{g}\right)-{\frac {1}{2}}f_{0}\left(n_{\ell }\right)\right].}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/52491c298d3d9ba32fe170ec6912787260d0092c.svg)
For a profile in the $z$ direction,
\[
\sigma=\frac{\Delta F}{\mbox{Area}}=\int_{-\infty}^{\infty}\mathrm{d}z\left[\frac{B}{2}\left(n^{\prime}\right)^{2}+c\left[n\left(z\right)-n_{g}\right]^{2}\left[n\left(z\right)-n_{\ell}\right]^{2}\right].
\]
After variation, one obtains for the two coexisting phases with $n_{\ell}>n>n_{g}$
\[
\frac{\mathrm{d}n}{\mathrm{d}z}=\sqrt{\frac{2c}{B}}\left(n-n_{g}\right)\left(n_{\ell}-n\right),
\]
with $\bar{n}=\frac{1}{2}\left(n_{g}+n_{\ell}\right)$ and $w=\frac{\sqrt{2B/c}}{n_{\ell}-n_{g}}$.
The density profile interpolates smoothly between the two phases:
\[
n\left(z\right)=\bar{n}+\frac{n_{\ell}-n_{g}}{2}\tanh\left(\frac{z}{w}\right).
\]
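The interpolating profile $n\left(z\right)=\bar{n}+\frac{1}{2}\left(n_{\ell}-n_{g}\right)\tanh\left(z/w\right)$, with width $w=\sqrt{2B/c}/\left(n_{\ell}-n_{g}\right)$, can be checked against the first integral of the variational equation, $\frac{B}{2}\left(n^{\prime}\right)^{2}=c\left(n-n_{g}\right)^{2}\left(n-n_{\ell}\right)^{2}$. A quick numerical check (parameter values below are arbitrary illustrations):

```python
import math

B, c = 1.0, 2.0
ng, nl = 1.0, 3.0                       # coexisting densities (illustrative)
w = math.sqrt(2 * B / c) / (nl - ng)    # interface width

nbar = 0.5 * (ng + nl)
n = lambda z: nbar + 0.5 * (nl - ng) * math.tanh(z / w)
dn = lambda z: 0.5 * (nl - ng) / w / math.cosh(z / w) ** 2  # dn/dz

# first integral: (B/2)(n')^2 - c (n - ng)^2 (n - nl)^2 should vanish everywhere
residual = max(
    abs(B / 2 * dn(z) ** 2 - c * (n(z) - ng) ** 2 * (n(z) - nl) ** 2)
    for z in [-1.5, -0.3, 0.0, 0.4, 2.0]
)
```

The residual vanishes to machine precision at every sampled point.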
A few generalizations:
- Surfactants or surface-active materials: this includes soap, detergent, biological membranes composed of biological amphiphiles called phospholipids and more. What they have in common is that they are formed of molecules with charged or polarized {}"heads" connected to long hydrocarbon {}"tails". These molecules are called amphiphilic, since the tails are hydrophobic and the heads hydrophilic. This causes them to accumulate on interfaces between water and air, and reduce surface tension (by a factor $\sim2$--$3$):
\[
\mathrm{d}\sigma=-\Gamma\,\mathrm{d}\mu,
\]
where $\Gamma$ is the surface concentration of the soap molecules.
- Emulsions: drops of oil in water (or water in oil), stabilized by some sort of emulsifier (which is also a surfactant). Some common examples are milk and mayonnaise.
 
Sidenote
There is a French biochemist by the name of Hervé This who specializes in molecular gastronomy and has some very interesting popular lectures which are worth looking up. He authored several books (one is called "Molecular Gastronomy") and appeared on several TV shows. In his presentations he explains how food preparation depends crucially on physical and chemical processes on the molecular level. This includes preparation of mousse, whipped cream, sauces, thickeners and emulsifiers.
 
- Detergency of soap: while soap reduces surface tension between oil and water, it does not create a phase where oil and water are mixed on a molecular level. Rather, micrometric oil droplets are formed in the aqueous solution. The process of cleaning is the process where oily dirt is solubilized in the aqueous solution and is washed away from the object we clean.
 
06/11/2009
Curved Surfaces
Review of differential geometry
\begin{description}
[{Books:}] The book by Safran has a short introduction which will
be followed here. The one by Visconti is more thorough and oriented
towards other physics problems such as relativity. There also exists
a multi-authored book on the subject edited by David Nelson, and a
mathematical book on the theory of manifolds by Spivak.
\end{description}
In order to discuss surfaces and curves which exhibit local curvature,
we will need to introduce a few mathematical concepts. A brief introduction
follows.
\paragraph{Curves}
A parametric curve $\mathbf{R}\left(s\right)$ is a set of
vectors along some contour in space, expressed as a function of the
parameter $s$, which may vary, for example, from $0$ to the length
$L$ of the curve. The differential length element $\mathrm{d}\ell$
along the curve can be expressed by
\[
\mathrm{d}\ell=\left|\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}s}\right|\mathrm{d}s.
\]
A tangent vector $\hat{\mathbf{t}}$ can be found from
\[
\hat{\mathbf{t}}=\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}\ell}.
\]
Note that from the magnitude of this expression, $\hat{\mathbf{t}}$
is always a unit vector. It is tangent to the curve because it is
proportional to $\mathrm{d}\mathbf{R}$.
With these definitions, we can define curvature as one extra
derivative:
\[
\frac{\mathrm{d}\hat{\mathbf{t}}}{\mathrm{d}\ell}=\kappa\hat{\mathbf{n}}.
\]
The unit vector $\hat{\mathbf{n}}$ is a unique vector perpendicular
to $\hat{\mathbf{t}}$ (this is easy to show by taking $\frac{\mathrm{d}}{\mathrm{d}\ell}\left(\hat{\mathbf{t}}\cdot\hat{\mathbf{t}}\right)=0$),
and we can also write
\[
\kappa\hat{\mathbf{n}}=\frac{\mathrm{d}^{2}\mathbf{R}}{\mathrm{d}\ell^{2}}.
\]
It is also useful to define the local radius of curvature $R_{c}\equiv1/\kappa$.
Some intuition can be gained from an analogy with the kinetics of
point particles moving without friction on a curve in space, if
$\ell$ is replaced by the time $t$. The tangent and curvature vectors
can then be related to the velocity and acceleration, respectively.
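The definitions above are easy to verify symbolically. For a circle of radius $R$ parametrized by arc length, $\mathbf{R}\left(\ell\right)=\left(R\cos\left(\ell/R\right),R\sin\left(\ell/R\right)\right)$, the tangent has unit magnitude and the curvature is $\kappa=1/R$ (a small illustration, not from the original notes):

```python
import sympy as sp

ell, R = sp.symbols('ell R', positive=True)

# circle of radius R parametrized by arc length
curve = sp.Matrix([R * sp.cos(ell / R), R * sp.sin(ell / R)])

t_hat = sp.diff(curve, ell)          # tangent vector dR/dl
kappa_vec = sp.diff(curve, ell, 2)   # curvature vector d^2R/dl^2

t_norm = sp.simplify(sp.sqrt(t_hat.dot(t_hat)))
kappa = sp.simplify(sp.sqrt(kappa_vec.dot(kappa_vec)))
```

As expected, the tangent is a unit vector and the radius of curvature is $R_{c}=1/\kappa=R$.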
\paragraph{Surfaces}
Similarly to curves, a parametric surface $\mathbf{R}\left(u,v\right)$
in space can be defined as a function of two parameters. There are
three scalar functions contained in this explicit definition:
\[
\mathbf{R}\left(u,v\right)=\left(x\left(u,v\right),y\left(u,v\right),z\left(u,v\right)\right).
\]
Note that it is also possible to represent surfaces implicitly as
\[
F\left(x,y,z\right)=0,
\]
where other than its zeros $F$ is arbitrary.
Any explicit definition requires some particular choice of $u$ and
$v$. For instance, one choice (called the Monge representation) is
\[
z=h\left(x,y\right).
\]
In vector notation,
\[
\mathbf{R}=\left(x,y,h\left(x,y\right)\right).
\]
This works only if there is a single $z$ value for each choice of
$x$ and $y$, and is very convenient for surfaces which are almost
flat. Another common choice useful for nearly spherical surfaces is
the spherical representation, where $u=\theta$ and $v=\varphi$.
In spherical coordinates, this is
\[
\mathbf{R}=R\left(\theta,\varphi\right)\hat{\mathbf{r}}.
\]
 
We can define two tangent vectors $\mathbf{r}_{u}=\partial\mathbf{R}/\partial u$ and $\mathbf{r}_{v}=\partial\mathbf{R}/\partial v$
at every point on the surface, such that $\mathbf{r}_{u}\cdot\hat{\mathbf{n}}=\mathbf{r}_{v}\cdot\hat{\mathbf{n}}=0$.
The unit vector normal to the surface is $\hat{\mathbf{n}}=\frac{\mathbf{r}_{u}\times\mathbf{r}_{v}}{\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|}$.
It is easy to find the unit vector from the implicit representation,
and one can usually find an implicit representation: for instance,
starting from Monge $F\left(x,y,z\right)=z-h\left(x,y\right)=0$. On the surface, $F=0$
implies
\[
\mathrm{d}F=\triangledown F\cdot\mathrm{d}\mathbf{r}=0.
\]
The vector $\mathrm{d}\mathbf{r}$ can be any vector tangent to the
surface, and therefore $\triangledown F$ must be proportional to
the normal vector:
\[
\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}.
\]
\paragraph{Metric of a curved surface}
A surface has been defined as an ensemble of points $\mathbf{R}\left(u,v\right)$
embedded in 3-dimensional space. In order to measure length along
such a surface, we must integrate along a differential length element
within it:
\[
\mathrm{d}\ell^{2}=\mathrm{d}\mathbf{R}\cdot\mathrm{d}\mathbf{R}=\left(\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v\right)^{2}=g_{uu}\mathrm{d}u^{2}+2g_{uv}\mathrm{d}u\,\mathrm{d}v+g_{vv}\mathrm{d}v^{2}.
\]
The metric is defined as
\[
g_{ij}=\mathbf{r}_{i}\cdot\mathbf{r}_{j},\qquad i,j\in\left\{ u,v\right\},\qquad g=\mathrm{Det}\left(g_{ij}\right)=g_{uu}g_{vv}-g_{uv}^{2}.
\]
It is positive definite since
\[
g=\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|^{2}\geq0.
\]
The surface element can be expressed in terms of the metric:
\[
\mathrm{d}A=\left|\mathbf{r}_{u}\times\mathbf{r}_{v}\right|\mathrm{d}u\,\mathrm{d}v=\sqrt{g}\,\mathrm{d}u\,\mathrm{d}v.
\]
We illustrate this in the Monge representation as an example. Here,
\[
\mathbf{r}_{x}=\left(1,0,h_{x}\right),\qquad\mathbf{r}_{y}=\left(0,1,h_{y}\right).
\]
The surface element is
\[
\mathrm{d}A=\sqrt{1+h_{x}^{2}+h_{y}^{2}}\,\mathrm{d}x\,\mathrm{d}y,
\]
with the metric
\[
g=1+h_{x}^{2}+h_{y}^{2}.
\]
The length element is
\[
\mathrm{d}\ell^{2}=\left(1+h_{x}^{2}\right)\mathrm{d}x^{2}+2h_{x}h_{y}\mathrm{d}x\,\mathrm{d}y+\left(1+h_{y}^{2}\right)\mathrm{d}y^{2},
\]
and therefore we have in the Monge representation
\[
A=\int\mathrm{d}x\,\mathrm{d}y\sqrt{1+h_{x}^{2}+h_{y}^{2}}.
\]

 
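In the Monge representation the metric determinant works out to $g=1+h_{x}^{2}+h_{y}^{2}$. This, together with the Lagrange identity $g=\left|\mathbf{r}_{x}\times\mathbf{r}_{y}\right|^{2}$, can be checked symbolically for an arbitrary height function (a verification sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = sp.Function('h')(x, y)  # arbitrary height function z = h(x, y)

# tangent vectors in the Monge representation R = (x, y, h(x, y))
r_x = sp.Matrix([1, 0, sp.diff(h, x)])
r_y = sp.Matrix([0, 1, sp.diff(h, y)])

# metric determinant g = g_xx * g_yy - g_xy^2
g = r_x.dot(r_x) * r_y.dot(r_y) - r_x.dot(r_y) ** 2

# |r_x x r_y|^2 should equal g (Lagrange identity), and both
# should reduce to 1 + h_x^2 + h_y^2
cross_sq = r_x.cross(r_y).dot(r_x.cross(r_y))
expected = 1 + sp.diff(h, x) ** 2 + sp.diff(h, y) ** 2
```

Both identities hold for any $h\left(x,y\right)$, not just for specific surfaces.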
In the implicit representation, one can begin the same process by
writing the surface element in terms of the volume element:
\[
A=\int\mathrm{d}^{3}r\,\delta\left(F\left(\mathbf{r}\right)\right)\left|\triangledown F\right|,
\]
using the 3D Dirac delta function $\delta\left(\mathbf{r}\right)=\delta\left(x\right)\delta\left(y\right)\delta\left(z\right)$.
A general property of the Dirac delta is that
\[
\int\delta\left(f\left(x\right)\right)\mathrm{d}x=\int\delta\left(y\right)\left|\frac{\mathrm{d}g}{\mathrm{d}y}\right|\mathrm{d}y,
\]
where $g$ is the inverse function such that $f\left(g\left(y\right)\right)=y$.
In terms of the function $F$ such that the surface is defined by
$F=0$, we can use this property to write
\[
\delta\left(F\left(\mathbf{r}\right)\right)=\frac{\delta\left(n\right)}{\left|\triangledown F\right|},
\]
where $n$ is the coordinate along the local normal to the surface.
Returning to the implicit version of the Monge representation, $F=z-h\left(x,y\right)$
and $\left|\triangledown F\right|=\sqrt{1+h_{x}^{2}+h_{y}^{2}}$, so that
\[
A=\int\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z\,\delta\left(z-h\left(x,y\right)\right)\sqrt{1+h_{x}^{2}+h_{y}^{2}}=\int\mathrm{d}x\,\mathrm{d}y\sqrt{1+h_{x}^{2}+h_{y}^{2}},
\]
in agreement with the previous result.
\paragraph{Curvature of surfaces}
So far we have discussed first order differential expressions and
the area element. This has to do with properties like surface energy.
Curvature is a second order property,
useful in discussing deformations and fluctuations.
Consider a curve $\mathbf{r}\left(s\right)$ with arc length $s$ on a surface
parametrized by $u$ and $v$. On the curve, $u=u\left(s\right)$
and $v=v\left(s\right)$. If $\hat{\mathbf{n}}$ is a vector normal
to the surface, the local curvature (of the curve) is
\[
\kappa=\hat{\mathbf{n}}\cdot\frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}s^{2}}.
\]
The first derivative is
\[
\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}=\mathbf{r}_{u}u^{\prime}+\mathbf{r}_{v}v^{\prime},
\]
and the second derivative
\[
\frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}s^{2}}=\mathbf{r}_{uu}\left(u^{\prime}\right)^{2}+2\mathbf{r}_{uv}u^{\prime}v^{\prime}+\mathbf{r}_{vv}\left(v^{\prime}\right)^{2}+\mathbf{r}_{u}u^{\prime\prime}+\mathbf{r}_{v}v^{\prime\prime}.
\]
Since $\hat{\mathbf{n}}$ is perpendicular to $\mathbf{r}_{u}$ and
$\mathbf{r}_{v}$, we are left with
\[
\kappa=\hat{\mathbf{n}}\cdot\left[\mathbf{r}_{uu}\left(u^{\prime}\right)^{2}+2\mathbf{r}_{uv}u^{\prime}v^{\prime}+\mathbf{r}_{vv}\left(v^{\prime}\right)^{2}\right].
\]
\begin{minipage}[t]{1\columnwidth}
\begin{shaded}
(some missing formulas...)\end{shaded}
\end{minipage}

We finally obtain
\[
\kappa=-\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}\cdot\frac{\mathrm{d}\hat{\mathbf{n}}}{\mathrm{d}s},
\]
and with $\mathrm{d}\mathbf{r}=\mathbf{r}_{u}\mathrm{d}u+\mathbf{r}_{v}\mathrm{d}v$
and $\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}_{u}\mathrm{d}u+\hat{\mathbf{n}}_{v}\mathrm{d}v$,
\[
\kappa=-\frac{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\hat{\mathbf{n}}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}.
\]
(missing diagram...)
\paragraph{Curvature tensor}
Since $\mathrm{d}\mathbf{r}\cdot\mathbf{\hat{n}}=0$,
\[
\mathrm{d}\left(\mathrm{d}\mathbf{r}\cdot\hat{\mathbf{n}}\right)=\mathrm{d}\mathbf{r}\cdot\mathrm{d}\hat{\mathbf{n}}+\hat{\mathbf{n}}\cdot\mathrm{d}\left(\mathrm{d}\mathbf{r}\right)=0,
\]
or
\[
\mathrm{d}\mathbf{r}\cdot\mathrm{d}\hat{\mathbf{n}}=\mathrm{d}\mathbf{r}\cdot\left(\triangledown\hat{\mathbf{n}}\right)\cdot\mathrm{d}\mathbf{r}.
\]
The quantity
\[
Q=\triangledown\hat{\mathbf{n}},\qquad Q_{ij}=\partial_{i}\hat{n}_{j},
\]
is a second rank tensor or dyadic.
Now, we can write $\kappa=-\frac{\mathrm{d}\mathbf{r}\cdot Q\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}$ with
\begin{minipage}[t]{1\columnwidth}
\begin{shaded}
(some missing formulas...)\end{shaded}
\end{minipage}

or
\[
\kappa=-\mathbf{r}^{\prime}\left(s\right)\cdot Q\cdot\mathbf{r}^{\prime}\left(s\right),
\]
where $\mathbf{r}^{\prime}\left(s\right)=\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}$.
This can be used for the case of an implicitly defined surface where
$\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}$:
\[
Q_{ij}=\left[\triangledown\left(\frac{\triangledown F}{\left|\triangledown F\right|}\right)\right]_{ij}=\partial_{i}\left(\frac{\partial_{j}F}{\left|\triangledown F\right|}\right)=\frac{\partial_{i}\partial_{j}F}{\left|\triangledown F\right|}-\left(\partial_{j}F\right)\frac{\partial_{i}\left|\triangledown F\right|}{\left|\triangledown F\right|^{2}}.
\]
Using $\partial_{i}\left|\triangledown F\right|=\partial_{i}\sqrt{\left(\partial_{x}F\right)^{2}+\left(\partial_{y}F\right)^{2}+\left(\partial_{z}F\right)^{2}}$,
\[
Q_{ij}=-\frac{1}{\left|\triangledown F\right|}\left[\frac{\partial_{i}\left|\triangledown F\right|\,\partial_{j}F}{\left|\triangledown F\right|}-\partial_{i}\partial_{j}F\right].
\]
 
06/16/2009
\paragraph{The curvature tensor and its invariants}
The dyadic matrix $Q$ has eigenvalues $c_{1},c_{2},c_{3}$,
a trace $\mathrm{Tr}\,Q=c_{1}+c_{2}+c_{3}$ and a determinant $\mathrm{Det}\,Q=c_{1}c_{2}c_{3}$,
which are invariant under similarity transformations $Q\rightarrow R^{-1}QR$.
The sum of the principal minors $M$ is also invariant:
to see this, consider the characteristic polynomial
\[
P\left(\lambda\right)=\mathrm{Det}\left(Q-\lambda I\right).
\]
Here $I$ is the unit matrix. Expanding $P\left(\lambda\right)$ in
powers of $\lambda$,
\[
P\left(\lambda\right)=-\lambda^{3}+\lambda^{2}\,\mathrm{Tr}\,Q-\lambda M+\mathrm{Det}\,Q.
\]
We can identify clearly the coefficients of the polynomial $P\left(\lambda\right)$ as
the 3 invariants. One eigenvalue is always equal to zero (as an exercise,
show this in the implicit representation). If we choose $c_{3}=0$,
we are reduced to two nontrivial invariants: $\mathrm{Tr}\,Q=c_{1}+c_{2}$
and $M=c_{1}c_{2}$ (as $\mathrm{Det}(Q)=0$).
These invariants are called the mean curvature $H$ and the
Gaussian curvature $K$:
\[
H\equiv\frac{c_{1}+c_{2}}{2}=\frac{1}{2}\mathrm{Tr}\,Q,\qquad K\equiv c_{1}c_{2}=M.
\]
 

For example, in the implicit representation we can write
\[
\mathbf{\hat{n}}=\frac{\triangledown F}{\left|\triangledown F\right|},\qquad N\equiv\left|\triangledown F\right|,\qquad\mathrm{Tr}\,Q=-\frac{1}{N}\sum_{i}\left[\frac{N_{i}F_{i}}{N}-F_{ii}\right],
\]
where $F_{i}\equiv\partial_{i}F$, $F_{ij}\equiv\partial_{i}\partial_{j}F$ and $N_{i}\equiv\partial_{i}N$.
Note that since, for instance,
\[
N_{x}=\frac{F_{x}F_{xx}+F_{y}F_{xy}+F_{z}F_{xz}}{N},
\]
with a few more steps we can show (another exercise) that
\[
H=-\frac{1}{2N^{3}}\left[2F_{x}F_{y}F_{xy}-F_{xx}\left(F_{y}^{2}+F_{z}^{2}\right)+\mathrm{2\,cyclic\,permutations}\right],
\]
\[
K=\frac{1}{N^{4}}\left[F_{xx}F_{yy}F_{z}^{2}-F_{xy}^{2}F_{z}^{2}+2F_{xz}F_{x}\left(F_{y}F_{yz}-F_{z}F_{yy}\right)+\mathrm{2\,cyclic\,permutations}\right],
\]
 
where by cyclic permutations we mean permuting the axes: $x\rightarrow y\rightarrow z\rightarrow x$.
In the case of the Monge representation where $F=z-h\left(x,y\right)$,
$H$ and $K$ have a simpler form:
\[
F_{x}=-h_{x},\qquad F_{y}=-h_{y},\qquad F_{z}=1,\qquad N=\sqrt{1+h_{x}^{2}+h_{y}^{2}}.
\]
One can then show that
\[
H=\frac{1}{2\left(\sqrt{1+h_{x}^{2}+h_{y}^{2}}\right)^{3}}\left[\left(1+h_{y}^{2}\right)h_{xx}+\left(1+h_{x}^{2}\right)h_{yy}-2h_{x}h_{y}h_{xy}\right],
\]
and
\[
K=\frac{h_{xx}h_{yy}-h_{xy}^{2}}{\left(1+h_{x}^{2}+h_{y}^{2}\right)^{2}}.
\]
 
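The Monge-representation curvature formulas can be spot-checked on a hemisphere $h\left(x,y\right)=\sqrt{R^{2}-x^{2}-y^{2}}$, for which $\left|H\right|=1/R$ and $K=1/R^{2}$ at every point (a verification sketch added here; not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
R = 2                             # numeric radius for the check
h = sp.sqrt(R**2 - x**2 - y**2)   # upper hemisphere, z = h(x, y)

hx, hy = sp.diff(h, x), sp.diff(h, y)
hxx, hyy = sp.diff(h, x, 2), sp.diff(h, y, 2)
hxy = sp.diff(h, x, y)
W2 = 1 + hx**2 + hy**2

# Monge-representation mean and Gaussian curvatures
H = ((1 + hy**2) * hxx + (1 + hx**2) * hyy
     - 2 * hx * hy * hxy) / (2 * W2 ** sp.Rational(3, 2))
K = (hxx * hyy - hxy**2) / W2**2

pt = {x: sp.Rational(3, 10), y: sp.Rational(1, 2)}  # a generic point
H_val = float(H.subs(pt))
K_val = float(K.subs(pt))
```

The sign of $H$ depends on the choice of normal orientation, so only its magnitude is compared.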
Small disturbances of planar surfaces
To treat nearly flat surfaces, one can use the Monge representation
to expand a Taylor series around a completely flat surface in derivatives
of $h\left(x,y\right)$:
\[
H\simeq\frac{1}{2}\left(h_{xx}+h_{yy}\right),
\]
or equivalently
\[
H\simeq\frac{1}{2}\triangledown^{2}h.
\]
From similar arguments, one can show that
\[
K\simeq h_{xx}h_{yy}-h_{xy}^{2}.
\]
In the general parametric representation,
\[
\kappa=-\frac{\mathrm{d}\mathbf{r}\cdot Q\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}},
\]
with $\mathrm{d}\mathbf{r}=l\,\mathbf{r}_{u}+m\,\mathbf{r}_{v}$
and $\hat{\mathbf{a}}=\mathrm{d}\mathbf{r}/\left|\mathrm{d}\mathbf{r}\right|$.
Picking a unit vector $\hat{\mathbf{a}}$
in the plane, the curvature in the direction of $\hat{\mathbf{a}}$
is given by
\[
\kappa\left(l,m\right)=-\hat{\mathbf{a}}\cdot Q\cdot\hat{\mathbf{a}}.
\]
The parameters $l$ and $m$ must obey the normalization constraint
\[
a\equiv\left|l\,\mathbf{r}_{u}+m\,\mathbf{r}_{v}\right|^{2}=1.
\]
In investigating $\kappa\left(l,m\right)$ as a function of the direction
of $\hat{\mathbf{a}}$, we can find its extrema with the constraint
$a=1$ by adding a Lagrange multiplier:
\[
\frac{\partial}{\partial l}\left(\kappa-\lambda a\right)=\frac{\partial}{\partial m}\left(\kappa-\lambda a\right)=0.
\]
The solution takes the form of a quadratic equation
\[
\lambda^{2}-2H\lambda+K=0,
\]
which has 2 roots: $c_{1}$ and $c_{2}$. This extremum
finding process defines the principal directions, which (we
will state without proof) are always perpendicular to each other.
The two invariants are then
\[
H=\frac{c_{1}+c_{2}}{2},\qquad K=c_{1}c_{2}.
\]
 
Consider a few cases in terms of the radii of curvature $R_{1}=1/c_{1}$
and $R_{2}=1/c_{2}$:
- If at some point both radii are positive, then $c_{1}$, $c_{2}$, $H$ and $K$ are all positive. The surface is convex around the point, as in (17a). 
- If both are negative, then $H<0$ while $K>0$. The surface is concave around the point, as in (17b). 
- If the two have opposing signs, $K$ is negative and one is at a saddle point of the surface, as in (17c). 
- The special surface having $H=0$ at any  point is called a minimal surface  (or Schwarz surface, after the 19th century mathematician who studied them in detail). These surfaces have a saddle at every point, as one curvature is always positive and the second negative: $c_{1}=-c_{2}$. Hence, their Gaussian curvature is always negative: $K=-c_{1}^{2}<0$.
 
We will use the principal directions to describe a local paraboloid
expansion of a nearly flat surface. In general, with $x$ and $y$ chosen
along the principal directions,
\[
\mathbf{R}\simeq\left(x,y,\frac{c_{1}}{2}x^{2}+\frac{c_{2}}{2}y^{2}\right).
\]
In the Monge representation,
\[
h\left(x,y\right)\simeq\frac{c_{1}}{2}x^{2}+\frac{c_{2}}{2}y^{2}.
\]
 
Free energy of soft surfaces
\begin{description}
[{Book:}] Landau & Lifshitz' book on Elasticity  has a chapter
on elasticity of hard (solid) shells. There is also a book by Boal
on elasticity and mechanics of fluid membranes . Safran's book
shows how the parameters we describe can be derived from a microscopic
model where the lipid (surfactant) molecules are modeled as beads
connected with various springs.
\end{description}
Consider a liquid surface or fluid membrane: as such a surface curves,
its free energy varies. Phenomenologically,
\[
F=\sigma\int\mathrm{d}A+\frac{k}{2}\int\left(2H-c_{0}\right)^{2}\mathrm{d}A+\bar{k}\int K\,\mathrm{d}A.
\]
All the integrals are taken over the surface. The fact that the above
expression models a fluid membrane is related to the fact that we
do not account for any lateral shear forces. Molecules composing the
fluid membrane are free to flow inside the membrane but they resist
elastic deformations such as bending. The first term describes the
contribution of surface tension, which is proportional to the total
surface area. The geometric values $H$ and $K$ are the mean and
Gaussian curvatures we have already encountered. The coefficients
$k$ and $\bar{k}$ (with units of energy) depend, like $\sigma$,
on the material properties in question. The spontaneous curvature
$c_{0}$ is also a material property: it defines a certain preferred
angle (perhaps due to the shape of surfactant molecules), and its
sign depends on the preferred direction of curvature. See (18) for
an illustration. Unless there is an active process that causes an
asymmetry in the lipid composition of the two leaflets, the bilayer
will have the same lipid composition on the inside and outside, and
therefore has in total $c_{0}=0$. Usually for fluid membranes, $k$
and $\bar{k}$ range from a few $k_{B}T$ to tens of $k_{B}T$.
One example is a sphere of radius $R$, where:
\[
c_{1}=c_{2}=\frac{1}{R},\qquad H=\frac{1}{R},\qquad K=\frac{1}{R^{2}}.
\]
This gives
\[
F=4\pi R^{2}\sigma+8\pi k\left(1-\frac{c_{0}R}{2}\right)^{2}+4\pi\bar{k}.
\]
The interesting fact that the surface integral over the Gaussian curvature
$K$ gives a constant value of $4\pi\bar{k}$, independent of the radius
$R$ of the sphere, has a deep meaning. It is related to the famous
Gauss-Bonnet theorem which will be stated here without further details:
according to this theorem, the integral over the Gaussian curvature
is a topological invariant of the surface whose value is equal to
$4\pi\left(1-g\right)$, where $g$ is the genus of the surface.
A sphere or any closed object with no {}"holes" has $g=0$ and
an integrated Gaussian curvature of $4\pi$, while a torus (or {}"donut")
with one hole has $g=1$ and hence a zero integrated Gaussian curvature.
Sidenote
More information about the Gauss-Bonnet theorem may be found in books
on differential geometry.
 
A second example is an infinite cylinder with radius $R$. Here, $c_{1}=1/R$
and $c_{2}=0$, so that $H=1/2R$ and $K=0$. The free energy per unit length is
\[
\frac{F}{L}=2\pi R\sigma+\pi kR\left(\frac{1}{R}-c_{0}\right)^{2}.
\]
An even simpler example is the infinite plane, where $c_{1}=c_{2}=0$ and hence $H=K=0$.
This yields
\[
\frac{F}{\mbox{Area}}=\sigma+\frac{k}{2}c_{0}^{2}.
\]
 
06/18/2009
Thermal fluctuations of a plane
\begin{description}
[{Book:}] Safran's book.
\end{description}
To second order in derivatives of $h\left(x,y\right)$ in the Monge representation
for $\bar{k}=0$ (and $c_{0}=0$),
\[
F=\int\mathrm{d}x\,\mathrm{d}y\left[\frac{\sigma}{2}\left(\triangledown h\right)^{2}+\frac{k}{2}\left(\triangledown^{2}h\right)^{2}\right].
\]
The minimum of energy is obtained for a flat surface. Going to a Fourier
transformed form, we have
\[
h\left(\mathbf{r}\right)=\sum_{\mathbf{q}}h_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{r}}.
\]
This gives for the free energy in terms of the normal surface modes
$\left\{ h_{q}\right\} $:
\[
F=\frac{A}{2}\sum_{\mathbf{q}}\left(\sigma q^{2}+kq^{4}\right)\left|h_{\mathbf{q}}\right|^{2}.
\]
With $h\left(\mathbf{r}\right)$ real, we know that $h_{-\mathbf{q}}=h_{\mathbf{q}}^{*}$,
or $\left|h_{-\mathbf{q}}\right|=\left|h_{\mathbf{q}}\right|$.
From the classical equipartition theorem we can estimate the equilibrium
energy for the average of this quantity:
\[
\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{k_{B}T}{A\left(\sigma q^{2}+kq^{4}\right)}.
\]
It is now useful to define the new length scale $\ell_{c}=\sqrt{k/\sigma}$,
and examine the limits of $q\ell_{c}\ll1$ and $q\ell_{c}\gg1$.
In the $q\ell_{c}\ll1$ limit, one obtains a surface dominated by surface
tension. Consider the real space thermal correlation function
\[
\left\langle h^{2}\right\rangle =\sum_{\mathbf{q}}\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{A}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}q\,\frac{k_{B}T}{A\sigma q^{2}}=\frac{k_{B}T}{2\pi\sigma}\int\frac{\mathrm{d}q}{q}.
\]
Since this integral diverges at both large and small $q$, to obtain
a physically meaningful result we must introduce cutoffs to the range
of $q$: $q_{\mathrm{min}}=2\pi/L$, where $L$ is
the linear dimension of the system, and $q_{\mathrm{max}}=2\pi/a$,
where $a$ is the typical molecular size. This gives an example
of a famous result from the 1930s, known as the Landau-Peierls instability
for 2-dimensional systems and the lack of an ordered phase at $T>0$:
\[
\left\langle h^{2}\right\rangle =\frac{k_{B}T}{2\pi\sigma}\ln\left(\frac{L}{a}\right).
\]
Since the logarithmic divergence is very weak, it turns out that the
thermal fluctuations are two or three Angstroms in size for a water
surface of macroscopic (a few millimeters or centimeters) dimension.
These thermal fluctuations are not easy to measure because the signal
should come only from the water molecules at the water surface. In
the 1980s they were measured for the first time for water surfaces
at room temperature using a powerful synchrotron X-ray source. The
technique employs scattering at very low angles (called grazing incidence)
from the water surface and obtains the intensity of the scattered
X-ray as function of $q$. This quantity is proportional to $\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle $.
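Plugging numbers into $\left\langle h^{2}\right\rangle =\frac{k_{B}T}{2\pi\sigma}\ln\left(L/a\right)$ illustrates how weak the logarithm is. The values below are illustrative assumptions for a water surface at room temperature ($\sigma\approx72\,\mathrm{mN/m}$, $L=1\,\mathrm{mm}$, $a=3\,$\AA):

```python
import math

kB = 1.380649e-23   # J/K
T = 300.0           # K
sigma = 0.072       # N/m, surface tension of water near room temperature
L = 1e-3            # m, lateral system size (assumed)
a = 3e-10           # m, molecular size (assumed)

# Landau-Peierls result: <h^2> = kB T / (2 pi sigma) * ln(L/a)
h2 = kB * T / (2 * math.pi * sigma) * math.log(L / a)
h_rms = math.sqrt(h2)
```

The rms roughness comes out to a few Angstrom, consistent with the grazing-incidence X-ray measurements described above.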
In the opposite limit where $q\ell_{c}\gg1$, the membrane is dominated
by its elastic energy, and $\sigma\ll kq^{2}$ can be neglected:
\[
\left\langle \left|h_{\mathbf{q}}\right|^{2}\right\rangle =\frac{k_{B}T}{Akq^{4}}.
\]
The divergence at small $q$ is much larger here than in the first
case, and
\[
\left\langle h^{2}\right\rangle =\frac{k_{B}T}{2\pi k}\int_{q_{\mathrm{min}}}^{q_{\mathrm{max}}}\frac{1}{q^{3}}\mathrm{d}q=\frac{k_{B}T}{4\pi k}\left[\left(\frac{1}{q_{\mathrm{min}}}\right)^{2}-\underbrace{\left(\frac{1}{q_{\mathrm{max}}}\right)^{2}}_{\approx0}\right],
\]
and
\[
\left\langle h^{2}\right\rangle \approx\frac{k_{B}T}{16\pi^{3}k}L^{2}.
\]
In such membranes, which are dominated by elasticity, the fluctuations
increase linearly with membrane size. For a membrane around a micron
in length, a typical amplitude is in the $10\,\mathrm{nm}$ range.
Another interesting observation is that $\sqrt{\left\langle h^{2}\right\rangle }\propto L\sqrt{k_{B}T/k}$.
For small $k$ (flexible membranes), as well as for higher temperatures,
the fluctuations become larger. This is valid as long as the condition
of the elastic-dominated case, $q\ell_{c}\gg1$, remains satisfied.
Also, recall that the source of the large membrane fluctuations comes
from the small $q$, or large wavelengths, and not from small wiggles
associated with the motion of individual molecules.
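The linear scaling with membrane size can be made concrete using the elastic-regime result $\left\langle h^{2}\right\rangle \approx k_{B}TL^{2}/16\pi^{3}k$. The bending rigidity $k=10\,k_{B}T$ and size $L=1\,\mathrm{\mu m}$ below are illustrative assumptions, not values from the notes:

```python
import math

kB, T = 1.380649e-23, 300.0     # J/K, K
k = 10 * kB * T                 # bending rigidity, assumed ~10 kB T
L = 1e-6                        # m, membrane size (assumed)

# elastic-dominated regime: <h^2> = kB T L^2 / (16 pi^3 k)
h_rms = math.sqrt(kB * T * L**2 / (16 * math.pi**3 * k))

# doubling L doubles the rms amplitude (linear scaling with size)
h_rms_2L = math.sqrt(kB * T * (2 * L)**2 / (16 * math.pi**3 * k))
```

With these assumed values the amplitude is of order ten nanometers, far larger than the Angstrom-scale roughness of a tension-dominated surface of the same size.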
Rayleigh instability
Due to surface tension, a cylinder of liquid created in air (or surrounded
by another immiscible liquid) is unstable and will break into spherical
droplets. Let's consider the following model: a cylinder of length
$L$ and smaller radius $R_{0}\ll L$, which contains inside it an
incompressible liquid with a total volume of $V=\pi R_{0}^{2}L$.
For simplicity, we will consider perturbations which preserve the
body-of-revolution symmetry around the main axis and the cylinder
length $L$, such that only the local radius $R\left(z\right)$
along the cylinder's axis may vary. Expanding $R\left(z\right)$ in normal modes
then gives
\[
R\left(z\right)=\bar{R}+\sum_{q}\delta R_{q}\cos\left(qz\right).
\]
 
The mode amplitudes are $\delta R_{q}$.
Note that
\[
q=\frac{2\pi n}{L},\qquad n=1,2,3,\ldots,
\]
with $\bar{R}$ depending on $\left\{ \delta R_{q}\right\} $. This dependence can be found from
the constant volume constraint
\[
V=\pi\int_{0}^{L}R^{2}\left(z\right)\mathrm{d}z=\pi L\left(\bar{R}^{2}+\frac{1}{2}\sum_{q}\delta R_{q}^{2}\right)=\pi R_{0}^{2}L.
\]
This is exact, but for small perturbations we can expand the root
and obtain
\[
\bar{R}=R_{0}\sqrt{1-\frac{1}{2R_{0}^{2}}\sum_{q}\delta R_{q}^{2}}\simeq R_{0}\left(1-\frac{1}{4R_{0}^{2}}\sum_{q}\delta R_{q}^{2}\right).
\]
The surface energy of the distorted cylinder will be
\[
F=\sigma\int_{0}^{L}2\pi R\sqrt{1+\left(R^{\prime}\right)^{2}}\,\mathrm{d}z.
\]
(We have used the expression for the surface area of a body-of-revolution
with axial symmetry.) Expanding all quantities up to second order
in $\delta R_{q}$ gives
\[
F\simeq2\pi\sigma\left[\bar{R}L+\frac{R_{0}L}{4}\sum_{q}q^{2}\delta R_{q}^{2}\right].
\]
Finally,
\[
\Delta F=F-2\pi R_{0}L\sigma=\frac{\pi\sigma L}{2R_{0}}\sum_{q}\delta R_{q}^{2}\left(q^{2}R_{0}^{2}-1\right).
\]
The conclusion is that modes having $qR_{0}<1$ will reduce
the original cylinder free energy $F_{0}=2\pi R_{0}L\sigma$. Hence, this is the
onset of an instability called the Rayleigh instability of a liquid
cylinder. A liquid cylinder will spontaneously start to develop undulations
of wavelength $\lambda=2\pi/q>2\pi R_{0}$. These undulations will
grow and eventually break up the cylinder into spherical droplets
of size $\sim R_{0}$. Note that if we go back to the planar surface by
taking the limit $R_{0}\rightarrow\infty$, no such instability will
occur since the planar geometry has the lowest surface area with respect
to any other fluctuating surface.
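The sign of the free-energy change can be checked without any expansion, by computing the exact area of a volume-preserving perturbed cylinder numerically (parameter values below are arbitrary illustrations):

```python
import math
from scipy.integrate import quad

def area_change_per_length(q, R0=1.0, eps=0.01):
    """Exact surface-area change per unit length, relative to the
    unperturbed cylinder, for R(z) = Rbar + eps*cos(qz) at fixed volume."""
    L = 2 * math.pi / q                       # one full wavelength
    Rbar = math.sqrt(R0**2 - eps**2 / 2)      # exact volume conservation
    R = lambda z: Rbar + eps * math.cos(q * z)
    dR = lambda z: -eps * q * math.sin(q * z)
    integrand = lambda z: 2 * math.pi * R(z) * math.sqrt(1 + dR(z) ** 2)
    area, _ = quad(integrand, 0, L)
    return (area - 2 * math.pi * R0 * L) / L

dA_long = area_change_per_length(q=0.5)   # q R0 < 1: long-wavelength mode
dA_short = area_change_per_length(q=2.0)  # q R0 > 1: short-wavelength mode
```

Long-wavelength modes lower the area (and hence the surface energy), while short-wavelength modes raise it, exactly as the second-order result predicts.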
Student Projects
Polymer Dynamics
\noindent \begin{center}
{\huge Physical Models in Biological }
\par\end{center}{\huge \par}
\noindent \begin{center}
{\huge Systems and Soft Matter}
{\huge ~}
~
~{\huge }
\par\end{center}{\huge \par}
\noindent \begin{center}
{\huge Final Course Project}
{\huge ~}
{\huge ~}
{\huge ~}
{\huge ~}
\par\end{center}{\huge \par}
\begin{center}
\includegraphics[scale=0.6]{Photo-of-Combi-Formulations-Example-4}
\par\end{center}
~
~
~
~
\noindent \begin{center}
{\huge A Guided Tour to the Essence }
{\huge ~}
{\huge of Polymer Dynamics}
\par\end{center}{\huge \par}
~
\noindent \begin{center}
{\large Submitted by : Shlomi Reuveni}
\par\end{center}{\large \par}
~
~
~
~\newpage{}
\tableofcontents{}
\newpage{}
What is this document all about?
This paper is submitted as a final project in the course {\small {}"Physical
Models in Biological Systems and Soft Matter".} In writing this document
I aimed at achieving two goals. The first was getting to know a little
better a subject that I found interesting and that was not covered during
the course. As an interesting by-product I have also profoundly improved
my knowledge of diffusion, a subject with which I was already superficially
acquainted. The second goal was to provide an accessible exposition of the
subject of polymer dynamics, aimed mainly at advanced undergraduate
students who are curious about the subject and would like an easy
start. This is also the reason this document is titled {}"A Guided
Tour to the Essence of Polymer Dynamics" and the reason it is
written in the form of questions and answers.
The saying goes: {}"There are two ways by which one can really
master a subject: research and teaching". I felt that the effort
I have put into making this document readable for advanced undergraduate
students taught me more than I would have learned by passive reading.
I have tried hard to make this document as self-contained and self-explanatory
as possible and therefore hope that it will be of some
help to you, the curious student. So, if you wonder {}"What do you
mean by polymer dynamics?" and {}"How can this subject be of any
interest to me?" please read on.
\newpage{}
\section{O.K, sum it up in a few lines so I can decide if I want to go on
reading!}
What's a polymer?
A polymer is a large molecule (macro-molecule) composed of repeating
structural units (monomers) typically connected by covalent chemical
bonds. Due to the extraordinary range of properties accessible in
polymeric materials, they have come to play an essential and ubiquitous
role in everyday life – from plastics and elastomers on the one hand
to natural biopolymers such as DNA and proteins that are essential
for life on the other.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{Single_Polymer_Chains_AFM}
\par\end{centering}
\caption{Appearance of real linear polymer chains as recorded using an atomic
force microscope on surface under liquid medium. Chain contour length
for this polymer is 
; thickness is 
. Taken
from: Y. Roiter and S. Minko, AFM Single Molecule Experiments at the
Solid-Liquid Interface: In Situ Conformation of Adsorbed Flexible
Polyelectrolyte Chains, Journal of the American Chemical Society,
vol. 127, iss. 45, pp. 15688-15689 (2005) }
\end{figure}
What's polymer dynamics?
As every other molecule a polymer is also affected by the thermal
motion of surrounding molecules. It is this thermal agitation that
causes a flexible polymer to move about in the solution while constantly
changing its shape. This motion is referred to as polymer dynamics.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{Motion}
\par\end{centering}
\caption{Photographs of DNA polymers in aqueous solution taken by fluorescence
microscopy. There is a 1-second interval between successive frames.
The motion is clearly visible. Taken from: Introduction to Polymer
Physics, M. Doi Translated by H. See, Clarendon Press, 30 November
1995.}
\end{figure}
What can I find in the rest of this document?
If you ever wondered how one can understand the motion of a polymer,
and what physical properties emanate from the dynamics of
these materials, you should read on. We will start with the building
blocks, the dynamics of a single particle in solution. We will then
gradually build on, presenting two models for polymer dynamics. Experimental
observations will also be discussed as we confront our models with
reality.
\newpage{}
\section{I knew there must be some preliminaries, can you keep it short and
to the point? }
\subsection{Why do you bore me with this? Why can't I skip directly to section
4?}
If you are familiar with concepts such as diffusion, the Einstein relation
and Brownian motion, you will find this section easier to read. If
you are also familiar with the mathematics behind these concepts,
the Smoluchowski and Langevin equations, you can skip directly to section
4. In order to understand polymer dynamics we have to start from something
more basic. A polymer can be thought of as a long chain of particles
(the monomers); the particles are connected to one another and hence
interact. It would be wise to first try and understand the dynamics
of a single particle and only then take into account these interactions.
The dynamics of a single particle lies at the heart of this section.
\subsection{Can't say I know much about any of the stuff you mentioned above,
but first things first: what is diffusion?}
Molecular diffusion, often called simply diffusion, is a net transport
of molecules from a region of higher concentration to one of lower
concentration by random molecular motion. The result of diffusion
is a gradual mixing of material. In a phase with uniform temperature,
absent external net forces acting on the particles, the diffusion
process will eventually result in complete mixing, or a state of equilibrium.
In short, it is the movement of molecules from an area of high concentration
to an area of lower concentration.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.55]{Diffusion_(1)}
~
\includegraphics[scale=0.14]{cell_diffusion_ink_India}
\par\end{centering}
\centering{}\caption{Top: Schematic representation of mixing of two substances by diffusion.
Bottom: Ink diffusing in water.}
\end{figure}
How can we mathematically treat diffusion?
As mentioned above, diffusion is basically the movement of molecules
from an area of high concentration to an area of lower concentration.
For simplicity we will consider one-dimensional diffusion. Let $c\left(x,t\right)$
be the concentration at position $x$ and time $t$. A phenomenological
description of diffusion is given by Fick's law:
\[
j\left(x,t\right)=-D\frac{\partial c\left(x,t\right)}{\partial x}.
\]
In words: if the concentration is not uniform, there will be a flux
of matter which is proportional to the gradient in concentration.
The proportionality constant is called the diffusion constant and
it is denoted by $D$; its units are $\mathrm{length}^{2}/\mathrm{time}$. The
minus sign is there to take care of the fact that the flow is from
the higher concentration region to the lower concentration region.
Where is this flux coming from?
Its microscopic origin is the random thermal motion of the particles.
The average velocity of each particle is zero, and there is an equal
probability for each particle to have a velocity directed to the right
or to the left. However, if the concentration is not uniform, the number of
particles which happen to flow from the higher concentration region
to the lower concentration region is higher than the number of particles
flowing in the other direction, simply because there are more particles
there.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{\string"23-07-2009 22-19-57\string".eps}
\par\end{centering}
\centering{}\caption{Microscopic explanation for Fick's law. Suppose that the particle
concentration 
 is not uniform. If the particles move randomly
as shown by the arrows, there is a net flux of particles flowing from
the higher concentration region to the lower concentration region.
Here the diffusion constant of the particle, which determines the
average length of the arrows, is assumed to be constant. }
\end{figure}
How do we go on?
We now write an equation for the conservation of matter. The change
in the number of particles located in the interval $\left[x,x+\triangle x\right]$
from time $t$ to time $t+\triangle t$ is given by the number of
particles coming/going from the left minus the number of particles
coming/going from the right:
\[
N(t+\triangle t)-N(t)\simeq\left[c(x+\tfrac{\triangle x}{2},t+\triangle t)-c(x+\tfrac{\triangle x}{2},t)\right]\triangle x\simeq\left[j(x,t+\tfrac{\triangle t}{2})-j(x+\triangle x,t+\tfrac{\triangle t}{2})\right]\triangle t,
\]
or:
\[
\frac{c(x+\tfrac{\triangle x}{2},t+\triangle t)-c(x+\tfrac{\triangle x}{2},t)}{\triangle t}=\frac{j(x,t+\tfrac{\triangle t}{2})-j(x+\triangle x,t+\tfrac{\triangle t}{2})}{\triangle x}.
\]
Taking the limits $\triangle x\rightarrow0$, $\triangle t\rightarrow0$ and assuming
continuity and differentiability of the concentration and the flux,
we obtain the continuity equation:
\[
\frac{\partial c\left(x,t\right)}{\partial t}=-\frac{\partial j\left(x,t\right)}{\partial x}.
\]
Substituting the expression for the flux gives the well known diffusion
equation:
\[
\frac{\partial c\left(x,t\right)}{\partial t}=D\frac{\partial^{2}c\left(x,t\right)}{\partial x^{2}}.
\]
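The diffusion equation is easy to integrate numerically with an explicit finite-difference scheme; the variance of an initially localized concentration profile then grows as $\left\langle x^{2}\right\rangle =2Dt$ (all parameter values below are arbitrary, chosen only for the demonstration; the scheme is stable for $D\triangle t/\triangle x^{2}\leq1/2$):

```python
import numpy as np

D, dx, dt = 1.0, 0.1, 0.002      # diffusion constant, grid spacing, time step
# here D*dt/dx**2 = 0.2, within the stability limit of the explicit scheme

x = np.arange(-20.0, 20.0, dx)
c = np.zeros_like(x)
c[len(x) // 2] = 1.0 / dx        # narrow initial spike of unit total mass at x = 0

steps = 2000                     # evolve to t = steps * dt = 4.0
for _ in range(steps):
    # discrete Laplacian implements dc/dt = D * d^2c/dx^2
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

t = steps * dt
variance = np.sum(c * x**2) / np.sum(c)   # <x^2> of the spreading profile
```

The measured variance matches the diffusive law $2Dt$, which is the macroscopic face of the random-walk picture discussed next.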
\subsubsection{What happens if the particles are under the influence of some kind
of a potential $U\left(x\right)$?}
If this happens, Fick's law must be modified, since the potential $U$
exerts a force
\[
F=-\frac{\partial U}{\partial x}
\]
on the particle and gives a nonzero average velocity $v$. If the
force is weak there is a linear relation between force and velocity
given by:
\[
v=\frac{F}{\zeta}.
\]
The constant $\zeta$ is called the friction constant and its inverse
$1/\zeta$ is called the mobility.
\subsubsection{How come the velocity doesn't grow indefinitely? There is a constant
force!}
Correct, but it is not the only force acting on the particle. There
are also friction and random forces exerted by other particles, and
hence, like a feather falling under its own weight, the particle reaches
a finite average velocity.
O.K, and what do we do now?
We will obtain the Smoluchowski equation that takes the potential
into account, but first we will obtain an important relation between
the diffusion constant, the temperature and the friction constant.
The average velocity of the particle gives an additional flux $cv$,
so that the total flux is:
\[
j\left(x,t\right)=-D\frac{\partial c\left(x,t\right)}{\partial x}-\frac{c}{\zeta}\frac{\partial U}{\partial x}.
\]
An important relation is obtained from this equation. As you may recall
from statistical mechanics, in equilibrium the concentration is given
by the Boltzmann distribution:
\[
c_{eq}\left(x\right)=c_{0}e^{-U\left(x\right)/k_{B}T},
\]
for which the flux must vanish and hence:
\[
-D\frac{\partial c_{eq}}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=0.
\]
Substituting for $c_{eq}(x,t)$ we get:
\[
\frac{Dc_{eq}}{k_{B}T}\frac{\partial U}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=c_{eq}\frac{\partial U}{\partial x}\left[\frac{D}{k_{B}T}-\frac{1}{\zeta}\right]=0.
\]
Since this is true for every $x$ it follows that:
\[
D=\frac{k_{B}T}{\zeta}.
\]
This relation is called the Einstein relation. The Einstein relation
states that the diffusion constant, which characterizes the thermal
motion, is related to the friction constant, which specifies the response
to an external force. The Einstein relation is a special case of a general
theorem called the fluctuation-dissipation theorem, which states that
spontaneous thermal fluctuations are related to the characteristics
of the system's response to an external field.
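The argument above is easy to reproduce symbolically: for an arbitrary potential, the total flux vanishes identically for a Boltzmann profile exactly when $D=k_{B}T/\zeta$ (a verification sketch, not part of the original text):

```python
import sympy as sp

x, kBT, zeta, c0 = sp.symbols('x k_B_T zeta c0', positive=True)
U = sp.Function('U')(x)           # arbitrary potential

D = kBT / zeta                    # Einstein relation
c_eq = c0 * sp.exp(-U / kBT)      # Boltzmann distribution

# total flux: diffusive part plus drift part -(c/zeta) dU/dx
j = -D * sp.diff(c_eq, x) - c_eq / zeta * sp.diff(U, x)

j_simplified = sp.simplify(j)     # should vanish identically
```

Since $U\left(x\right)$ is left completely general, this is the same "for every $x$" statement made in the text.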
\subsubsection{And the Smoluchowski equation is obtained by plugging the {}"new"
flux into the continuity equation, right?}
Exactly right! Using the Einstein relation we rewrite the flux as:
\[
j\left(x,t\right)=-\frac{1}{\zeta}\left[k_{B}T\frac{\partial c\left(x,t\right)}{\partial x}+c\frac{\partial U}{\partial x}\right].
\]
 
Substituting this into the continuity equation we get the Smoluchowski
equation:
\[
\frac{\partial c\left(x,t\right)}{\partial t}=-\frac{\partial j\left(x,t\right)}{\partial x}=\frac{\partial}{\partial x}\frac{1}{\zeta}\left[k_{B}T\frac{\partial c\left(x,t\right)}{\partial x}+c\frac{\partial U}{\partial x}\right],
\]
 
which serves as a phenomenological description of diffusion under
the influence of an external potential. Although we have derived the
above equation for the concentration $c\left(x,t\right)$, the same equation will
also hold for the probability distribution function $\Psi\left(x,t\right)$ that
a particular particle is found at position $x$ at time $t$. This
is true since the distinction between $c\left(x,t\right)$ and $\Psi\left(x,t\right)$ is,
for non-interacting particles, only the fact that $\Psi\left(x,t\right)$ is
normalized. The evolution equation for the probability $\Psi\left(x,t\right)$
is hence written as:
![{\displaystyle {\frac {\partial \Psi (x,t)}{\partial t}}={\frac {\partial }{\partial x}}{\frac {1}{\zeta }}\left[k_{B}T{\frac {\partial \Psi (x,t)}{\partial x}}+\Psi (x,t){\frac {\partial U}{\partial x}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/b8c0edf54f8a4bedac71f485a121973eab3230ce.svg)
 
which will also be termed the Smoluchowski equation.
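To see the approach to equilibrium concretely, here is a minimal numerical sketch (Python, with the arbitrary choices $k_{B}T=\zeta=k=1$, a harmonic $U=\frac{1}{2}kx^{2}$, and an illustrative grid and initial condition): evolving the Smoluchowski equation with a conservative finite-difference scheme relaxes any initial $\Psi$ to the Boltzmann distribution $e^{-U/k_{B}T}$.

```python
import numpy as np

kBT, zeta, k = 1.0, 1.0, 1.0      # assumed units; all values arbitrary
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
dt = 0.2 * dx**2 * zeta / kBT     # small step for explicit-scheme stability

U = 0.5 * k * x**2
psi = np.exp(-(x - 1.0) ** 2)     # arbitrary off-center initial condition
psi /= np.trapz(psi, x)

for _ in range(20000):
    # flux at the midpoints x_{i+1/2}:  j = -(kBT dpsi/dx + psi dU/dx)/zeta
    dpsi = np.diff(psi) / dx
    dU = np.diff(U) / dx
    pmid = 0.5 * (psi[1:] + psi[:-1])
    j = -(kBT * dpsi + pmid * dU) / zeta
    j = np.concatenate(([0.0], j, [0.0]))   # reflecting (zero-flux) walls
    psi = psi - dt * np.diff(j) / dx        # continuity: dpsi/dt = -dj/dx

boltzmann = np.exp(-U / kBT)
boltzmann /= np.trapz(boltzmann, x)
max_err = np.max(np.abs(psi - boltzmann))
```

The zero-flux boundaries make the scheme conserve total probability, so the stationary state it reaches is the normalized Boltzmann distribution.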
What's Brownian motion?
Brownian motion (named after the Scottish botanist Robert Brown) is
the seemingly random movement of particles suspended in a fluid (i.e.
a liquid or gas) or the mathematical model used to describe such random
movements. Brownian motion is traditionally regarded as discovered
by the botanist Robert Brown in 1827. It is believed that Brown was
studying pollen particles floating in water under the microscope.
He then observed small particles within the vacuoles of the pollen
grains executing a jittery motion. By repeating the experiment with
particles of dust, he was able to rule out that the motion was due
to pollen particles being 'alive', although the origin of the motion
was yet to be explained.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{PerrinPlot2}
\par\end{centering}
\caption{Three tracings of the motion of colloidal particles of radius 0.53\textmu{}m,
as seen under the microscope, are displayed. Successive positions
every 30 seconds are joined by straight line segments (the mesh size
is 3.2\textmu{}m). Reproduced from the book of Jean Baptiste Perrin,
Les Atomes, Perrin, 1914, p. 115.}
\end{figure}
And the mathematical treatment?
\subsubsection{Before we start I have to say that it seems awfully similar to diffusion,
what's new?}
You are right! These are opposite sides of the same coin. However,
the approach we take here is microscopic rather than macroscopic.
Instead of starting from a macroscopic quantity, the concentration,
we will start from the equation of motion for a single particle in
solution, Newton's second law:
\[m\frac{d^{2}x}{dt^{2}}=-\zeta\frac{dx}{dt}-\frac{\partial U}{\partial x}+F_{r}(t)\]
Here the first term on the right hand side is the friction force which
is assumed to take a standard form of being opposite in direction
and proportional to the velocity. The second term is the force exerted
as a consequence of the external potential and the third term is a
random force that represents the sum of the forces due to collisions
with surrounding particles. Let us now rewrite this equation in the
form:
\[\frac{m}{\zeta}\frac{d^{2}x}{dt^{2}}=-\frac{dx}{dt}-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)\]
where we have defined $g(t)\equiv F_{r}(t)/\zeta$. Our next
step is an approximation: treating very small and lightweight particles
we will drop the inertial term $\frac{m}{\zeta}\frac{d^{2}x}{dt^{2}}$,
assuming it is negligible, and obtain:
\[\frac{dx}{dt}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t)\]
We will refer to this equation as the Langevin equation. This equation
describes the motion of a single Brownian particle; by solving it one
can (in principle) obtain a trajectory of such a particle.
I don't understand why you throw away the inertial term, please explain!
This can be further explained by the following example. Consider a
particle immersed in some solvent moving under the influence of a
constant external force $F$. Let us
denote the velocity:
\[v=\frac{dx}{dt}\]
The equation of motion for $v$ is given by:
\[m\frac{dv}{dt}=-\zeta v+F+F_{r}(t)\]
For simplicity let us factor out the random force by taking an ensemble
average (to avoid the subtleties of taking the time average) of both
sides of the equation, obtaining an equation for the average velocity:
\[m\frac{d<v>}{dt}=-\zeta<v>+F\]
Multiplying both sides by $\frac{1}{m}e^{\frac{\zeta}{m}t}$ and integrating
from zero to $t$ we are able to solve for $<v>$:
![{\displaystyle <v>=e^{\frac {-t\zeta }{m}}{\overset {t}{\underset {0}{\int }}}{\frac {F}{m}}e^{\frac {t'\zeta }{m}}dt'={\frac {F}{\zeta }}\left[1-e^{\frac {-t\zeta }{m}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/227726995839dd864dfd81f3f7108f6dd79b6b49.svg)
 
Where we have assumed that the particle was at rest at time zero.
We see that the velocity approaches an asymptotic value of $F/\zeta$
exponentially fast and that the characteristic relaxation time is
$\tau=m/\zeta$. Dropping the inertial term in the first place
we would have simply gotten:
\[<v>=\frac{F}{\zeta}\]
i.e. an immediate response to the force. It is now clear that if the
relaxation time $\tau=m/\zeta$ is small, dropping the inertial
term is a good approximation! In the case of small particles (atoms, molecules,
colloidal particles, etc.) immersed in a liquid, the relaxation time $\tau$
is indeed very small, supporting the validity of our approximation.
\subsubsection{If these are opposite sides of the same coin how does the Langevin
equation relate to the Smoluchowski equation?}
As mentioned earlier, since we don't know the exact time dependence
of $g(t)$ we will treat it as a random force. The freedom in choosing
the distribution of $g(t)$ is very large; here, however, we will limit
ourselves to a model which will be equivalent to the Smoluchowski
equation.
\subsubsection{The Langevin equation gives us trajectories, the Smoluchowski equation
gives us a probability distribution for the position, how can they
be equivalent?}
Excellent question! Examining many trajectories one can generate the
probability distribution for the position. For example, starting the
particle from a given origin and following its trajectory up to some
time $t$ one can record the position $x(t)$. Repeating the process
many, many times will yield many different values of $x(t)$. Creating a
histogram, one can generate an empirical probability distribution for
the position at time $t$. One can show () that if the probability
distribution of $g(t)$ is assumed to be Gaussian and is characterized
by:
\[<g(t)>=0,\qquad<g(t)g(t')>=\frac{2k_{B}T}{\zeta}\delta(t-t')\]
then the distribution of $x(t)$ determined by the Langevin equation
satisfies the Smoluchowski equation. In other words, if $g(t)$ is
a Gaussian random variable with zero mean and variance $\frac{2k_{B}T}{\zeta}\delta(t-t')$,
and if $g(t)$ and $g(t')$ are independent for $t\neq t'$, then the
above statement holds.
\subsubsection{I still don't understand, can you demonstrate on a simple special
case?}
Yes! Consider the Brownian motion of a free particle (no external
potential) for which the Langevin equation reads:
\[\frac{dx}{dt}=g(t)\]
If the particle is at $x_{0}$ at time $t=0$, its position at time
$t$ is given by:
\[x(t)=x_{0}+\int_{0}^{t}g(t')dt'\]
From the above we deduce that $x(t)$ is a linear combination of
independent Gaussian random variables. We now recall that a sum of
independent Gaussian random variables is a Gaussian random variable
itself, and hence the probability distribution of $x(t)$ may be written
as:
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {2\pi B}}}exp\left[-{\frac {\left(x-A\right)^{2}}{2B}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f60a74970e8564a0f81c947290a873959e3e49c4.svg)
where:
\[A=<x(t)>,\qquad B=<\left(x(t)-A\right)^{2}>\]
The mean is calculated from:
\[A=x_{0}+\int_{0}^{t}<g(t')>dt'=x_{0}\]
For the variance we have:
\[B=<\left(\int_{0}^{t}g(t')dt'\right)\left(\int_{0}^{t}g(t'')dt''\right)>=\int_{0}^{t}\int_{0}^{t}<g(t')g(t'')>dt'dt''\]
hence:
\[B=\int_{0}^{t}\int_{0}^{t}\frac{2k_{B}T}{\zeta}\delta(t'-t'')dt'dt''=\frac{2k_{B}T}{\zeta}t=2Dt\]
and thus:
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {4\pi Dt}}}exp\left[-{\frac {\left(x-x_{0}\right)^{2}}{4Dt}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/56231ed5fd2eb35279f1c472d8ab769809f978f2.svg)
 
which is exactly (check by direct differentiation) the solution of
the Smoluchowski equation with $U=0$:
\[\frac{\partial\Psi(x,t)}{\partial t}=D\frac{\partial^{2}\Psi(x,t)}{\partial x^{2}}\]
In other words, both equations result in the same probability distribution
for $x(t)$. An important conclusion is that the mean square displacement
of a Brownian particle from the origin is given by $<\left(x(t)-x_{0}\right)^{2}>=2Dt$ and is hence
linear in time.
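This conclusion is easy to check numerically. Below is a minimal sketch (Python, arbitrary units with $k_{B}T=\zeta=1$; the step size and trajectory count are illustrative assumptions) that integrates $dx/dt=g(t)$ with the Euler-Maruyama method for many independent trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
kBT, zeta = 1.0, 1.0              # arbitrary units
D = kBT / zeta                    # Einstein relation
dt, nsteps, ntraj = 1e-3, 1000, 20000
x = np.zeros(ntraj)               # all particles start at x0 = 0

# Euler-Maruyama: over a step dt the integral of g(t) is a Gaussian
# increment with zero mean and variance 2*D*dt
for _ in range(nsteps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(ntraj)

t = nsteps * dt
msd = np.mean(x**2)               # mean square displacement from the origin
```

The empirical mean square displacement should approach $2Dt$, and the empirical mean should stay at the starting point.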
O.K., I think we covered everything! Anything else?
We are almost done but in order to complete our analysis we need to
analyze one more problem, the Brownian motion of a harmonic oscillator.
\subsubsection{Why do we have to do this? How come we always have to talk about
the harmonic oscillator?}
The harmonic oscillator is a simple system that serves as a prototype
for problems we will solve later on. Treating it here will ease things
for us later.
I understand, please go on.
Consider a Brownian particle moving under the following potential:
\[U(x)=\frac{1}{2}kx^{2}\]
The equation of motion for this particle is given by:
\[\frac{dx}{dt}=-\frac{k}{\zeta}x+g(t)\]
In order to get a formal solution for $x(t)$ we multiply both sides
by $e^{\frac{k}{\zeta}t'}$ and do some algebra:
![{\displaystyle {\frac {dx(t')}{dt'}}e^{{\frac {k}{\zeta }}t'}+{\frac {k}{\zeta }}x(t')e^{{\frac {k}{\zeta }}t'}={\frac {d}{dt'}}\left[x(t')e^{{\frac {k}{\zeta }}t'}\right]=g(t')e^{{\frac {k}{\zeta }}t'}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/4b05b84bc0ea4c13f913a59faf3f076f10a6d8c7.svg)
We now integrate from $-\infty$ to $t$ and get: 
![{\displaystyle \left[x(t')e^{{\frac {k}{\zeta }}t'}\right]_{t'=-\infty }^{t'=t}={\overset {t}{\underset {-\infty }{\int }}}g(t')e^{{\frac {k}{\zeta }}t'}dt'}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/8cc32bfaea0c473919ddd2806a8673f8733a9d78.svg)
Assuming the following boundary condition:
\[\underset{t'\rightarrow-\infty}{lim}x(t')e^{\frac{k}{\zeta}t'}=0\]
We conclude that:
\[x(t)=\int_{-\infty}^{t}g(t')e^{-\frac{k}{\zeta}(t-t')}dt'\]
It is also possible to solve under the initial condition $x(0)=x_{0}$,
in that case:
![{\displaystyle \left[x(t')e^{{\frac {k}{\zeta }}t'}\right]_{t'=0}^{t'=t}={\overset {t}{\underset {0}{\int }}}g(t')e^{{\frac {k}{\zeta }}t'}dt'}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/be17f438ece06d1d48eda021deda96385400beea.svg)
and we have
\[x(t)=x_{0}e^{-\frac{k}{\zeta}t}+\int_{0}^{t}g(t')e^{-\frac{k}{\zeta}(t-t')}dt'\]
\subsubsection{O.K., but $g(t)$ is a random variable and hence $x(t)$ is also one;
that doesn't tell me much... Can we calculate some moments? Start
with the case of the particle that has been with us since $t=-\infty$.}
First we note that for the mean position we have:
\[<x(t)>=\int_{-\infty}^{t}<g(t')>e^{-\frac{k}{\zeta}(t-t')}dt'=0\]
and the mean position is hence zero. We now aim at finding an expression
for the mean square displacement from the origin $<\left(x(t)-x(0)\right)^{2}>$;
the variance of $x(t)$ will be calculated as a by-product. We start
with the time correlation function of $x(t)$:
\[<x(t)x(0)>=\int_{-\infty}^{t}dt_{1}\int_{-\infty}^{0}dt_{2}\,e^{-\frac{k}{\zeta}(t-t_{1})}e^{\frac{k}{\zeta}t_{2}}<g(t_{1})g(t_{2})>\]
Recalling that:
\[<g(t_{1})g(t_{2})>=\frac{2k_{B}T}{\zeta}\delta(t_{1}-t_{2})\]
we get:
![{\displaystyle <x(t)x(0)>={\overset {0}{\underset {-\infty }{\int }}}dt_{1}e^{{\frac {k}{\zeta }}\left(2t_{1}-t\right)}{\frac {2k_{B}T}{\zeta }}=\left[e^{{\frac {k}{\zeta }}\left(2t_{1}-t\right)}{\frac {k_{B}T}{k}}\right]_{t_{1}=-\infty }^{t_{1}=0}={\frac {k_{B}T}{k}}e^{-{\frac {k}{\zeta }}t}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/44cd0f880c651c8feca94098b16f2c888b6d452f.svg)
 
Here we assumed that $t>0$ and used the fact that the delta function
restricts $t_{1}=t_{2}$, so the $t_{1}$ integration only extends up to zero
since $t_{2}<0$. Similarly, if $t<0$ we get:
![{\displaystyle <x(t)x(0)>={\overset {t}{\underset {-\infty }{\int }}}dt_{1}e^{{\frac {k}{\zeta }}\left(2t_{1}-t\right)}{\frac {2k_{B}T}{\zeta }}=\left[e^{{\frac {k}{\zeta }}\left(2t_{1}-t\right)}{\frac {k_{B}T}{k}}\right]_{t_{1}=-\infty }^{t_{1}=t}={\frac {k_{B}T}{k}}e^{{\frac {k}{\zeta }}t}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/ecf30760b3e66d8c1f296a8b07c5615d8ca5342d.svg)
We may hence conclude that:
\[<x(t)x(0)>=\frac{k_{B}T}{k}e^{-\frac{k}{\zeta}\left|t\right|}\]
Letting $t=0$ we get
\[<x^{2}>=\frac{k_{B}T}{k}\]
which coincides with the known result obtained from statistical mechanics
with the use of the Boltzmann distribution (equipartition, $\frac{1}{2}k<x^{2}>=\frac{1}{2}k_{B}T$).
We will now show that this is also the variance:
\[B=<(x(t)-A(t))^{2}>=<x(t)x(t)>=\int_{-\infty}^{t}\int_{-\infty}^{t}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''\]
and hence:
\[B=\int_{-\infty}^{t}\frac{2k_{B}T}{\zeta}e^{\frac{2k}{\zeta}\left(t'-t\right)}dt'=\frac{k_{B}T}{k}\]
The mean square displacement $<\left(x(t)-x(0)\right)^{2}>$ can now be easily
calculated:{\scriptsize 
![{\displaystyle <(x(t)-x(0))^{2}>=<x(t)^{2}>+<x(0)^{2}>-2<x(t)x(0)>=2\left[<x(t)^{2}>-<x(t)x(0)>\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/8a508c2b0fdfd9d118eca50e60d9ad92a34e51a5.svg)
}and hence:
![{\displaystyle <(x(t)-x(0))^{2}>={\frac {2k_{B}T}{k}}\left[1-e^{-{\frac {k}{\zeta }}\left|t\right|}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/107ec506cea2a516e368bad9bb88c215a2878965.svg)
 
Here, unlike the case of free diffusion, for long times the mean square
displacement is bounded by $\frac{2k_{B}T}{k}$. The bound is approached
exponentially fast with a characteristic relaxation time $\frac{\zeta}{k}$.
Considering the opposite limit $\frac{k}{\zeta}\left|t\right|\ll1$ (very
short times) we have (to first order):
![{\displaystyle <(x(t)-x(0))^{2}>={\frac {2k_{B}T}{k}}\left[1-1+{\frac {k}{\zeta }}\left|t\right|\right]=2D\left|t\right|}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/08e2358be20bc28b921e34d0faab9e03bf92a21f.svg)
 
Indeed, in this limit the particle has yet to "feel" the harmonic
potential and we expect regular diffusion.
\subsubsection{I think that since $x(t)$ is a linear sum of Gaussian random variables
and hence Gaussian itself, we can also write an expression for
its probability distribution. Am I right?}
Yes you are! We already found the mean and variance and hence the
probability distribution for $x(t)$ is:
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {2\pi B}}}exp\left[-{\frac {\left(x-A\right)^{2}}{2B}}\right]={\frac {1}{\sqrt {\frac {2\pi k_{B}T}{k}}}}exp\left[-{\frac {kx^{2}}{2k_{B}T}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/54b568885005c8bc779421505b0359a2b9287618.svg)
 
which is exactly the Boltzmann distribution. We could have guessed
that this will be so since we have given the particle an infinite
amount of time to equilibrate with the potential well.
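These two results, the plateau of the mean square displacement at $2k_{B}T/k$ and the equilibrium variance $k_{B}T/k$, can both be checked with a short simulation (Python, arbitrary units $k_{B}T=\zeta=k=1$; step size and trajectory count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
kBT, zeta, k = 1.0, 1.0, 1.0      # arbitrary units
dt, nsteps, ntraj = 1e-3, 5000, 10000
tau = zeta / k                    # relaxation time

# start from the Boltzmann (equilibrium) distribution, variance kBT/k
x0 = np.sqrt(kBT / k) * rng.standard_normal(ntraj)
x = x0.copy()

# Euler-Maruyama for dx/dt = -(k/zeta) x + g(t)
for _ in range(nsteps):
    x += -(k / zeta) * x * dt \
         + np.sqrt(2.0 * kBT / zeta * dt) * rng.standard_normal(ntraj)

t = nsteps * dt                   # t = 5 tau, well past the relaxation time
msd = np.mean((x - x0) ** 2)      # expect 2 kBT/k (1 - e^{-t/tau})
var = np.mean(x ** 2)             # expect kBT/k
```

Starting the ensemble from the Boltzmann distribution mimics the particle "that has been with us since $t=-\infty$", so the stationary formulas apply at all $t$.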
\subsubsection{Let's proceed to the case of the particle that started at $x_{0}$!}
First we note that for the mean position we have:
\[A(t)=<x(t)>=x_{0}e^{-\frac{k}{\zeta}t}\]
the mean position depends on time and exponentially decays towards
zero. For the variance we have:
\[B=<(x(t)-A(t))^{2}>=\int_{0}^{t}\int_{0}^{t}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''\]
and hence: 
![{\displaystyle B={\overset {t}{\underset {0}{\int }}}{\frac {2k_{B}T}{\zeta }}e^{{\frac {k}{\zeta }}\left(2t'-2t\right)}dt'={\frac {k_{B}T}{k}}\left[1-e^{-{\frac {2k}{\zeta }}t}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f8c6941478fb00decb75adcd7116db21e187023a.svg)
 
Here again the variance exponentially decays towards the equilibrium
variance. The probability distribution is Gaussian again and we have:
{\footnotesize 
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {2\pi B}}}exp\left[-{\frac {\left(x-A\right)^{2}}{2B}}\right]={\frac {1}{\sqrt {{\frac {2\pi k_{B}T}{k}}\left[1-e^{-{\frac {2k}{\zeta }}t}\right]}}}exp\left[-{\frac {k(x-x_{0}e^{-{\frac {k}{\zeta }}t})^{2}}{2k_{B}T\left[1-e^{-{\frac {2k}{\zeta }}t}\right]}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/2c2588f1dc42187668c35a45928f1a86e8d5838e.svg)
 
}which for short times $t\ll\frac{\zeta}{k}$ is the same as
free diffusion:
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {4\pi Dt}}}exp\left[-{\frac {(x-x_{0})^{2}}{4Dt}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/c9c86c8a70253a90ac8d369f69793fdeaf780a00.svg)
and for long times gives the Boltzmann distribution: 
![{\displaystyle \Psi (x,t)={\frac {1}{\sqrt {\frac {2\pi k_{B}T}{k}}}}exp\left[-{\frac {kx^{2}}{2k_{B}T}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/ad27f252fc024d93a9acaa3ad15d68fa3c68b9ee.svg)
 
\newpage{}
The Bead-Spring (Rouse) Model for Polymer Dynamics
Give me the simplest model for polymer dynamics!
A polymer is a chain of monomers linked to one another by covalent
bonds. It is natural to represent a polymer by a set of beads connected
to one another by springs. The dynamics of the polymer is modeled
by the Brownian motion of these beads. Such a model was first proposed
by Rouse in the 1950s and has since been the basis for describing
the dynamics of polymers in dilute solutions.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{\string"26-07-2009 16-17-49\string".eps}
\par\end{centering}
\caption{A pictorial description of the Rouse model.}
\end{figure}
But now the beads are connected! How do we take that into account?
Let $\vec{R}_{1},\ldots,\vec{R}_{N}$ be the positions of the beads.
If we assume the beads experience a drag force proportional to their
velocity as they move through the solvent, then for each bead we can
write the following Langevin equation:
\[\frac{d\vec{R}_{n}}{dt}=-\frac{1}{\zeta_{n}}\frac{\partial U}{\partial\vec{R}_{n}}+\vec{g}_{n}(t)\]
Here $\zeta_{n}$ is the friction coefficient of the $n$-th bead,
and from now on we will assume that the beads are all alike and hence
$\zeta_{n}=\zeta$ for every $n$. The random force $\vec{g}_{n}(t)$
is Gaussian with the following characteristics:
\[<g_{n\alpha}(t)>=0,\qquad<g_{n\alpha}(t)g_{m\beta}(t')>=\frac{2k_{B}T}{\zeta}\delta_{nm}\delta_{\alpha\beta}\delta(t-t')\]
 
i.e. the random forces acting on different beads and/or in perpendicular
directions, and/or at different times are independent.
And the potential $U$? Harmonic as always?
 
Indeed, having harmonic springs connecting the beads, we will take
it as:
\[U=\frac{k}{2}\overset{N-1}{\underset{n=1}{\sum}}\left(\vec{R}_{n+1}-\vec{R}_{n}\right)^{2}\]
In this model the Langevin equation becomes a linear equation for
$\vec{R}_{n}(t)$; for the internal beads we have:
\[\frac{d\vec{R}_{n}}{dt}=-\frac{k}{\zeta}\left(2\vec{R}_{n}-\vec{R}_{n+1}-\vec{R}_{n-1}\right)+\vec{g}_{n}(t)\]
and for the beads at each end we have:
\[\frac{d\vec{R}_{1}}{dt}=-\frac{k}{\zeta}\left(\vec{R}_{1}-\vec{R}_{2}\right)+\vec{g}_{1}(t),\qquad\frac{d\vec{R}_{N}}{dt}=-\frac{k}{\zeta}\left(\vec{R}_{N}-\vec{R}_{N-1}\right)+\vec{g}_{N}(t)\]
In order to unify the treatment we define two additional hypothetical
beads $\vec{R}_{0}$ and $\vec{R}_{N+1}$ as:
\[\vec{R}_{0}\equiv\vec{R}_{1},\qquad\vec{R}_{N+1}\equiv\vec{R}_{N}\]
Under this definition the Langevin equation for beads $n=1,2,\ldots,N$
is given by:
\[\frac{d\vec{R}_{n}}{dt}=-\frac{k}{\zeta}\left(2\vec{R}_{n}-\vec{R}_{n+1}-\vec{R}_{n-1}\right)+\vec{g}_{n}(t)\]
How do we proceed?
In order to proceed it is convenient to assume that the beads are
continuously distributed along the polymer chain. We first recall
that in the continuum limit:
\[2\vec{R}_{n}-\vec{R}_{n+1}-\vec{R}_{n-1}\rightarrow-\frac{\partial^{2}\vec{R}}{\partial n^{2}}\]
Letting $n$ be a continuous variable, and writing $\vec{R}_{n}(t)$
as $\vec{R}(n,t)$, the Langevin equation takes the form:
\[\frac{\partial\vec{R}(n,t)}{\partial t}=\frac{k}{\zeta}\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}+\vec{g}(n,t)\]
The definitions we made regarding the additional hypothetical beads
$\vec{R}_{0}$ and $\vec{R}_{N+1}$ now turn into the following boundary
conditions:
\[\left.\frac{\partial\vec{R}(n,t)}{\partial n}\right|_{n=0}=\left.\frac{\partial\vec{R}(n,t)}{\partial n}\right|_{n=N}=0\]
\subsubsection{I don't know how to solve this one, can we bring it to a form of
something we have solved before? }
Yes we can. As a first step we define normal coordinates by the following
transformation:
\[\vec{X}_{p}(t)\equiv\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn,\qquad p=0,1,2,\ldots\]
whose inverse is given by:
\[\vec{R}(n,t)=\vec{X}_{0}(t)+2\overset{\infty}{\underset{p=1}{\sum}}cos\left(\frac{p\pi n}{N}\right)\vec{X}_{p}(t)\]
\subsubsection{Defining new coordinates (call them as you will) is one thing but
the inverse must be defined such that it takes you back to the original
coordinates! Is this truly the correct inverse? }
We verify this by direct substitution: 
![{\displaystyle {\vec {X}}_{p}(t)={\frac {1}{N}}{\overset {N}{\underset {0}{\int }}}cos\left({\frac {p\pi n}{N}}\right)\left[{\vec {X}}_{0}(t)+2{\overset {\infty }{\underset {p'=1}{\sum }}}cos\left({\frac {p'\pi n}{N}}\right){\vec {X}}_{p'}(t)\right]dn}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/18017f4495e7b70757ff5adf4cbb32618b56a49d.svg)
The first term gives:
\[\frac{\vec{X}_{0}(t)}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)dn=\begin{cases}\vec{X}_{0}(t) & p=0\\0 & p=1,2,3,\ldots\end{cases}\]
Using the trigonometric identity:
![{\displaystyle cos(A)cos(B)={\frac {1}{2}}\left[cos(A-B)+cos(A+B)\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/0eb3e78635c7161975c0c58e9fd06509a89e84f5.svg)
the second term is written as: 
![{\displaystyle {\begin{cases}{\frac {1}{N}}{\overset {\infty }{\underset {p'=1}{\sum }}}{\vec {X}}_{p'}(t){\overset {N}{\underset {0}{\int }}}\left[cos\left({\frac {(p+p')\pi n}{N}}\right)+cos\left({\frac {(p-p')\pi n}{N}}\right)\right]dn&\,\,\,p=1,2,3,..{\overset {\infty }{\underset {p'=1}{\sum }}}{\frac {2{\vec {X}}_{p'}(t)}{p'\pi }}sin\left({\frac {p'\pi n}{N}}\right)|_{n=0}^{n=N}=0&\,\,\,p=0\end{cases}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/d72ace82972cf5e6901fc9c2804c19dbcf455638.svg)
which gives:
\[\frac{1}{N}\overset{\infty}{\underset{p'=1}{\sum}}\vec{X}_{p'}(t)\,N\delta_{pp'}=\vec{X}_{p}(t)\qquad p=1,2,3,\ldots\]
We conclude that:
\[\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn=\vec{X}_{p}(t)\]
which proves that the inverse transformation is defined correctly.
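As a numerical sanity check of this transform pair, one can build $\vec{R}(n)$ from an arbitrary set of mode amplitudes and verify that the forward transform recovers them (a sketch; the chain length, grid, and number of modes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1.0                                   # arbitrary chain "length"
n = np.linspace(0.0, N, 2001)

# build R(n) from a handful of known mode amplitudes X_0 ... X_6
pmax = 6
X_true = rng.standard_normal(pmax + 1)
R = X_true[0] + 2.0 * sum(
    X_true[p] * np.cos(p * np.pi * n / N) for p in range(1, pmax + 1)
)

# forward transform: X_p = (1/N) * integral_0^N cos(p pi n / N) R(n) dn
X_rec = np.array(
    [np.trapz(np.cos(p * np.pi * n / N) * R, n) / N for p in range(pmax + 1)]
)
```

Up to quadrature error, `X_rec` matches `X_true`, which is exactly the orthogonality argument made above.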
How does this new set of coordinates help us?
We will now show that the equations of motion for the normal coordinates
$\vec{X}_{p}(t)$ are the equations of motion for an infinite set
of uncoupled Brownian harmonic oscillators. Since we have already
treated the problem of a Brownian harmonic oscillator, this will ease
our lives considerably. We start by applying $\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left(\cdots\right)dn$
to both sides of the Langevin equation for $\vec{R}(n,t)$: {\footnotesize 
\[\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}dn=\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\left[\frac{k}{\zeta}\frac{\partial^{2}\vec{R}(n,t)}{\partial n^{2}}+\vec{g}(n,t)\right]dn\]
}The left hand side term is identified as: 
![{\displaystyle {\frac {1}{N}}{\overset {N}{\underset {0}{\int }}}cos\left({\frac {p\pi n}{N}}\right){\frac {\partial {\vec {R}}(n,t)}{\partial t}}dn={\frac {\partial }{\partial t}}\left[{\frac {1}{N}}{\overset {N}{\underset {0}{\int }}}cos\left({\frac {p\pi n}{N}}\right){\vec {R}}(n,t)dn\right]={\frac {d{\vec {X}}_{p}(t)}{dt}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/67f630a75c45ca8c4aa3f9ae011a261bbc9fed6b.svg)
The first term on the right hand side gives:
\[\frac{k}{\zeta N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\frac{\partial^{2}\vec{R}}{\partial n^{2}}dn=\frac{k}{\zeta N}\left[cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}}{\partial n}\right]_{n=0}^{n=N}+\frac{kp\pi}{\zeta N^{2}}\overset{N}{\underset{0}{\int}}sin\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}}{\partial n}dn\]
by integration by parts. Invoking the boundary condition for $\frac{\partial\vec{R}}{\partial n}$,
the first term drops; another round of integration by parts gives:
\[\frac{kp\pi}{\zeta N^{2}}\left[sin\left(\frac{p\pi n}{N}\right)\vec{R}\right]_{n=0}^{n=N}-\frac{kp^{2}\pi^{2}}{\zeta N^{3}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)dn\]
Here the sine kills the first term and the second term is identified
as:
\[-\frac{kp^{2}\pi^{2}}{\zeta N^{2}}\vec{X}_{p}(t)=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)\]
where we have defined:
\[\zeta_{0}=N\zeta,\qquad\zeta_{p}=2N\zeta\,\,(p\geq1),\qquad k_{p}=\frac{2\pi^{2}kp^{2}}{N}\]
 
We are left with the second term on the right hand side of the original
equation, which we deal with by defining the random forces:
\[\vec{g}_{p}(t)\equiv\frac{1}{N}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)dn\]
Which are characterized by zero mean:
\[<\vec{g}_{p}(t)>=0\]
And by:
\[<g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{k_{B}T}{N\zeta}\delta_{\alpha\beta}\delta_{pp'}\delta(t-t')\qquad p,p'\geq1\]
since: {\footnotesize 
\[<g_{\alpha p}(t)g_{\beta p'}(t')>=\frac{1}{N^{2}}\overset{N}{\underset{0}{\int}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{p'\pi m}{N}\right)<g_{\alpha}(n,t)g_{\beta}(m,t')>dn\,dm=\frac{2k_{B}T\delta_{\alpha\beta}\delta(t-t')}{\zeta N^{2}}\overset{N}{\underset{0}{\int}}cos\left(\frac{p\pi n}{N}\right)cos\left(\frac{p'\pi n}{N}\right)dn\]
}and use of the trigonometric identity: 
![{\displaystyle cos(A)cos(B)={\frac {1}{2}}\left[cos(A-B)+cos(A+B)\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/0eb3e78635c7161975c0c58e9fd06509a89e84f5.svg)
gives: {\footnotesize 
![{\displaystyle <g_{\alpha p}(t)g_{\beta p'}(t')>={\frac {k_{B}T\delta _{\alpha \beta }\delta (t-t')}{\zeta N^{2}}}{\overset {N}{\underset {0}{\int }}}\left[cos\left({\frac {(p+p')\pi n}{N}}\right)+cos\left({\frac {(p-p')\pi n}{N}}\right)\right]dn}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/35c53c220b38f048244806849bec0ab581538f74.svg)
 
}which yields the result after performing the integration. This means
that the random forces with different values of $p$, and/or acting
in perpendicular directions, and/or acting at different times are independent.
The equations of motion for the normal coordinates $\vec{X}_{p}(t)$
are given by:
\[\frac{d\vec{X}_{p}(t)}{dt}=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)+\vec{g}_{p}(t)\]
and since the random forces are independent of each other, the motions
of the $\vec{X}_{p}$'s are also independent of each other. These
are the equations of motion for an infinite set of uncoupled Brownian
harmonic oscillators, each with a force constant $k_{p}$ and friction
constant $\zeta_{p}$ of its own. We have gone from one partial differential
equation (which we don't know how to solve directly) for $\vec{R}(n,t)$
to an infinite set of uncoupled ordinary differential equations (of
a type we are already familiar with) for the normal coordinates $\vec{X}_{p}(t)$.
Great, we can now do some analysis!
What can we say about the motion of the center of mass?
Using the results of section 3 we will now calculate two time correlation
functions that will help us in the near future. We first note that
since $k_{0}=0$, $\vec{X}_{0}$ is actually performing free diffusion
and hence:{\tiny 
\[<\left(X_{0\alpha}(t)-X_{0\alpha}(0)\right)^{2}>=\frac{2k_{B}T}{\zeta_{0}}t=\frac{2k_{B}T}{N\zeta}t\]
}On the other hand, the time correlation function for $\vec{X}_{p}(t)$
($p>0$) is the one for a Brownian harmonic oscillator and hence:
\[<X_{p\alpha}(t)X_{p'\beta}(0)>=\delta_{\alpha\beta}\delta_{pp'}\frac{k_{B}T}{k_{p}}e^{-t/\tau_{p}}\]
where the relaxation time $\tau_{p}$ is given by:
\[\tau_{p}=\frac{\zeta_{p}}{k_{p}}=\frac{\zeta N^{2}}{\pi^{2}kp^{2}}\]
A conclusion from the previous result is that: 
![{\displaystyle <\left(X_{p\alpha }(t)-X_{p'\beta }(0)\right)^{2}>=\delta _{\alpha \beta }\delta _{pp'}{\frac {2k_{B}T}{k_{p}}}\left[1-e^{-t/\tau _{p}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/03a3c9e1296f9131ae26fe50194c9b583f804c33.svg)
 
We are now ready to calculate some real features of the Brownian motion
of a polymer. We start with the motion of the center of mass, whose
position:
\[\vec{R}_{G}(t)\equiv\frac{1}{N}\overset{N}{\underset{0}{\int}}\vec{R}(n,t)dn\]
is the same as the normal coordinate $\vec{X}_{0}(t)$. The mean square
displacement of the center of mass is hence given by:
\[<\left(\vec{R}_{G}(t)-\vec{R}_{G}(0)\right)^{2}>=6D_{G}t\]
where the diffusion constant $D_{G}$ is given by:
\[D_{G}=\frac{k_{B}T}{N\zeta}\]
and we note that it is inversely proportional to the number of monomers.
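This prediction is easy to test with a direct simulation of the discrete bead-spring chain (a sketch in arbitrary units with $k_{B}T=\zeta=k=1$; chain size, step size and ensemble size are illustrative assumptions). One Cartesian component suffices, since the components are independent; the spring forces cancel in the sum over beads, so the center of mass should diffuse freely with $D_{G}=k_{B}T/N\zeta$:

```python
import numpy as np

rng = np.random.default_rng(3)
kBT, zeta, k = 1.0, 1.0, 1.0      # arbitrary units
N = 10                            # beads per chain
dt, nsteps, ntraj = 1e-3, 2000, 2000

R = np.zeros((ntraj, N))          # one component of ntraj independent chains
com0 = R.mean(axis=1)
for _ in range(nsteps):
    # spring forces with free ends (hypothetical beads R_0 = R_1, R_{N+1} = R_N)
    F = np.zeros_like(R)
    bond = k * np.diff(R, axis=1)           # k (R_{n+1} - R_n)
    F[:, :-1] += bond
    F[:, 1:] -= bond
    R += F / zeta * dt \
         + np.sqrt(2.0 * kBT / zeta * dt) * rng.standard_normal(R.shape)

t = nsteps * dt
D_G = kBT / (N * zeta)
msd_com = np.mean((R.mean(axis=1) - com0) ** 2)   # expect 2 D_G t per component
```

Per component the center-of-mass mean square displacement is $2D_{G}t$ (the full 3D result $6D_{G}t$ is three such components).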
What can we say about rotational motion?
To characterize rotational motion of the polymer molecule as a whole,
let us consider the time correlation function $<\vec{P}(t)\cdot\vec{P}(0)>$
of the end to end vector $\vec{P}(t)$. Using normal coordinates, $\vec{P}(t)$
can be written as:
![{\displaystyle {\vec {P}}(t)\equiv {\vec {R}}(N,t)-{\vec {R}}(0,t)=2{\overset {\infty }{\underset {p=1}{\sum }}}\left[cos\left(p\pi \right)-1\right]{\vec {X}}_{p}(t)}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/d0a8b5a7f24993dd7c738e5fe37354bc6bc6e21a.svg)
which results in:
\[\vec{P}(t)=-4\underset{p:odd}{\sum}\vec{X}_{p}(t)\]
We therefore conclude that:
\[<\vec{P}(t)\cdot\vec{P}(0)>=16\underset{p:odd}{\sum}\frac{3k_{B}T}{k_{p}}e^{-t/\tau_{p}}=\frac{24Nk_{B}T}{\pi^{2}k}\underset{p:odd}{\sum}\frac{1}{p^{2}}e^{-t/\tau_{p}}\]
 
This time correlation function is a summation over many terms with
different relaxation times. We will now see that for large enough
times this infinite sum is well approximated by the first term. We
rewrite the correlation function as: 
![{\displaystyle <{\vec {P}}(t)\cdot {\vec {P}}(0)>={\frac {24Nk_{B}T}{\pi ^{2}k}}e^{-t/\tau _{1}}\left[1+{\overset {}{\underset {p:odd\,\,integers>1}{\sum }}}{\frac {1}{p^{2}}}e^{-t\left[{\frac {1}{\tau _{p}}}-{\frac {1}{\tau _{1}}}\right]}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/fe4dce499e5d9ce7c6351f84ffd05cfd7a53f33f.svg)
but since: 
![{\displaystyle \left[{\frac {1}{\tau _{p}}}-{\frac {1}{\tau _{1}}}\right]\geq \left[{\frac {1}{\tau _{3}}}-{\frac {1}{\tau _{1}}}\right]>0}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/cc70dbb50c3d45d426abe1cc7be9b2aceeaf4c42.svg)
we have:
![{\displaystyle e^{-t\left[{\frac {1}{\tau _{p}}}-{\frac {1}{\tau _{1}}}\right]}\leq e^{-t\left[{\frac {1}{\tau _{3}}}-{\frac {1}{\tau _{1}}}\right]}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/2d05e6aa8faf9500bf7a0b2d8f768209ae615317.svg)
We also know that:
\[\underset{p:odd}{\sum}\frac{1}{p^{2}}=\frac{\pi^{2}}{8}\]
and hence the second term in the parentheses is bounded by an exponentially
decaying function and moreover it is never larger than $1/4$:
![{\displaystyle {\overset {}{\underset {p:odd\,\,integers>1}{\sum }}}{\frac {1}{p^{2}}}e^{-t\left[{\frac {1}{\tau _{p}}}-{\frac {1}{\tau _{1}}}\right]}\leq \left[{\frac {\pi ^{2}}{8}}-1\right]e^{-t\left[{\frac {1}{\tau _{3}}}-{\frac {1}{\tau _{1}}}\right]}<{\frac {1}{4}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f5279babffeb9764dd1cc7a0a4a5196b5770dd11.svg)
 
We conclude that the second term may be neglected for large times
and the correlation function is approximated by:
\[<\vec{P}(t)\cdot\vec{P}(0)>\simeq\frac{24Nk_{B}T}{\pi^{2}k}e^{-t/\tau_{1}}\]
which decays exponentially with a single relaxation time $\tau_{1}$.
The relaxation time $\tau_{1}$ is called the rotational relaxation
time; it is also denoted $\tau_{r}$ and is given by:
\[\tau_{r}=\tau_{1}=\frac{\zeta N^{2}}{\pi^{2}k}\]
 
What can we say about the motion of one specific bead?
We now turn to study the internal motion of a polymer chain, focusing
on the mean square displacement of the $n$-th monomer:
![{\displaystyle \phi (n,t)\equiv <\left[{\vec {R}}(n,t)-{\vec {R}}(n,0)\right]^{2}>}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/aff17e16341ddd21f009227001e28cd3a22d6ee2.svg)
 
Direct substitution for $\vec{R}(n,t)$ and $\vec{R}(n,0)$ gives:
![{\displaystyle \phi (n,t)=<\left[{\vec {X}}_{0}(t)-{\vec {X}}_{0}(0)+2{\overset {\infty }{\underset {p=1}{\sum }}}cos\left({\frac {p\pi n}{N}}\right)\left({\vec {X}}_{p}(t)-{\vec {X}}_{p}(0)\right)\right]^{2}>}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/ceeb0a9302cd40f00b83c037ec3ac004225a0377.svg)
 
utilizing the correlation functions we have obtained above all the
cross terms vanish and we are left with: 
![{\displaystyle \phi (n,t)=6D_{G}t\,+{\overset {\infty }{\underset {p=1}{\sum }}}{\frac {24k_{B}T}{k_{p}}}cos\left({\frac {p\pi n}{N}}\right)^{2}\left[1-e^{-tp^{2}/\tau _{r}}\right]}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/2064bcd0e3cf9707dfd3e017689b0c3597a183fd.svg)
 
Let us examine this expression in two limits. For $t\gg\tau_{r}$:
\[\phi(n,t)\simeq6D_{G}t+\overset{\infty}{\underset{p=1}{\sum}}\frac{24k_{B}T}{k_{p}}cos^{2}\left(\frac{p\pi n}{N}\right)\]
The second term is a constant that doesn't depend on time (it is easily
seen that the infinite sum converges) and hence $\phi(n,t)$ is linear
in $t$ in this limit. For large enough times the displacement of
the $n$-th monomer is determined by the diffusion constant of the
center of mass, as the monomer drifts away with the polymer as a whole.
On the other hand, for $t\ll\tau_{r}$, the motion of the segments
reflects the internal motion due to the many modes of vibration. In
this limit we may approximate by replacing summation with integration
and $cos^{2}\left(\frac{p\pi n}{N}\right)$ by its average value $\frac{1}{2}$:
![{\displaystyle \phi (n,t)={\frac {6k_{B}T}{N\zeta }}t\,+{\overset {\infty }{\underset {p=0}{\int }}}{\frac {6Nk_{B}T}{p^{2}\pi ^{2}k}}\left[1-e^{-tp^{2}/\tau _{r}}\right]dp={\frac {6k_{B}T}{N\zeta }}t+I}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/a4042a3262f17a87b4d00489c8071f87850b887e.svg)
Doing the integral by parts we get: {\tiny 
![{\displaystyle I={\overset {\infty }{\underset {p=0}{\int }}}{\frac {6Nk_{B}T}{\pi ^{2}k}}\left[e^{-tp^{2}/\tau _{r}}-1\right]d{\frac {1}{p}}={\frac {6Nk_{B}T}{p\pi ^{2}k}}\left[e^{-tp^{2}/\tau _{r}}-1\right]|_{0}^{\infty }+{\overset {\infty }{\underset {p=0}{\int }}}{\frac {12tNk_{B}T}{\tau _{r}\pi ^{2}k}}e^{-tp^{2}/\tau _{r}}dp}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f8ec4fdd30320b40cf0751a5105c56d76c020deb.svg)
 
}The first term vanishes (basic calculus) and the second term is transformed
into a Gaussian integral which gives:
\[I=\frac{6Nk_{B}T}{\pi^{2}k}\sqrt{\frac{\pi t}{\tau_{r}}}\]
We can now write $\phi(n,t)$ as: 
![{\displaystyle \phi (n,t)={\frac {6Nk_{B}T}{\pi ^{2}k}}\left[{\frac {t}{\tau _{r}}}+{\sqrt {\frac {\pi t}{\tau _{r}}}}\right]{\underset {t\ll \tau _{r}}{\simeq }}{\frac {6Nk_{B}T}{\pi ^{3/2}k}}{\sqrt {\frac {t}{\tau _{r}}}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/f46a907750aa9332900409a00cf6a57116e346b6.svg)
 
and observe that in this limit the mean square displacement of the
$n$-th monomer increases like $t^{1/2}$, i.e. in a sub-diffusive
manner.
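The short-time $t^{1/2}$ law can also be checked by evaluating the mode sum for $\phi(n,t)$ directly (a numerical sketch in arbitrary units, using the mode constants $k_{p}=2\pi^{2}kp^{2}/N$ consistent with $\tau_{p}=\zeta_{p}/k_{p}=\tau_{r}/p^{2}$, and with $cos^{2}$ replaced by its average $1/2$ as above):

```python
import numpy as np

kBT, zeta, k, N = 1.0, 1.0, 1.0, 100      # arbitrary units
tau_r = zeta * N**2 / (np.pi**2 * k)      # rotational relaxation time

def k_p(p):
    """Mode force constants (assumed form consistent with tau_p = tau_r / p^2)."""
    return 2.0 * np.pi**2 * k * p**2 / N

def phi(t, pmax=200000):
    """Mode sum for the monomer MSD, with cos^2 averaged to 1/2."""
    p = np.arange(1, pmax + 1)
    modes = np.sum(0.5 * (24.0 * kBT / k_p(p)) * (1.0 - np.exp(-t * p**2 / tau_r)))
    return 6.0 * kBT / (N * zeta) * t + modes

t = 1e-3 * tau_r                          # deep in the short-time regime
law = 6.0 * N * kBT / (np.pi**1.5 * k) * np.sqrt(t / tau_r)
ratio = phi(t) / law
```

At $t\ll\tau_{r}$ the exact sum agrees with the $\sqrt{t/\tau_{r}}$ expression to within about a percent, confirming the sub-diffusive regime.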
How does the Rouse model stand in comparison to experimental results?
Unfortunately, not as well as one might have hoped. The Rouse model
may seem to be a very natural way to describe the Brownian motion
of a polymer chain, but unfortunately the conclusions drawn from it
do not agree with the experimental results. As we saw above, for the
Rouse model:
\[D_{G}\sim M^{-1},\qquad\tau_{r}\sim M^{2}\]
where $M$ is the molecular weight of the polymer. Experimentally,
however, the following dependencies were measured:
\[D_{G}\sim M^{-\nu},\qquad\tau_{r}\sim M^{3\nu}\]
Here, the exponent $\nu$ is that which is used to express the dependence
of the radius of gyration $R_{g}$ on molecular weight ():
\[R_{g}\sim M^{\nu}\]
The value of $\nu$ is determined by the nature of the interaction
between the solvent and the polymer; in a good solvent $\nu\simeq3/5$,
and in the $\Theta$ state $\nu=1/2$ ().
The reason for the discrepancy between experiments and the Rouse model
is that in the latter we have assumed the average velocity of a particular
bead is determined only by the external force acting on it, and is
independent of the motion of the other beads. However, in reality
the motion of one bead is influenced by the motion of the surrounding
beads through the medium of the solvent. For example, if one bead
moves the solvent surrounding it will also move, and as a result other
beads will be dragged along. This type of interaction transmitted
by the motion of the solvent is called hydrodynamic interaction. We
will discuss a model taking this interaction into account in the next
section.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{\string"29-07-2009 14-52-26\string".eps}
\par\end{centering}
\caption{The hydrodynamic interaction. If bead $n$ moves under the action
of the force $\vec{F}_{n}$, a flow is created in the surrounding
fluid, which causes the other beads to move.}
\end{figure}
\newpage{}
The Zimm Model for Polymer Dynamics
\subsection{So we need a model that will take into account hydrodynamic interactions,
but how do we do that?}
In the Rouse model we have assumed the average velocity of a particular
bead is determined only by the external force acting on it, and is
independent of the motion of the other beads. This assumption led
to the following Langevin equation:
\[\frac{d\vec{R}_{n}}{dt}=\frac{1}{\zeta}\left(-\frac{\partial U}{\partial\vec{R}_{n}}\right)+\vec{g}_{n}(t)\]
In order to take into account hydrodynamic interaction we can generalize
this assumption. Denoting the forces acting on the beads by $\vec{F}_{n}$,
we assume that there is a linear relationship between these forces
and the average velocities $\vec{V}_{n}=\frac{d\vec{R}_{n}}{dt}$, so the following
holds:
\[\vec{V}_{n}=\underset{m}{\sum}H_{nm}\cdot\vec{F}_{m}\]
Here $H_{nm}$ is a $3\times3$ matrix, the $(n,m)$ component of
the mobility matrix.
It is now our task to calculate $H_{nm}$ and write the appropriate
Langevin equation. This can be done using hydrodynamics and some approximations (),
and the result of the calculation gives:
![{\displaystyle H_{nm}={\begin{cases}I/\zeta &\,\,\,n=m{\frac {[{\hat {r}}_{nm}{\hat {r}}_{nm}+I]}{8\pi \eta \left\Vert {\vec {r}}_{nm}\right\Vert }}&\,\,\,n\neq m\end{cases}}}](./_assets_/eb734a37dd21ce173a46342d1cc64c92/afd2cdc2ab3269f38105dc8aa7556253340581ac.svg)
 
where $\eta$ is the viscosity of the liquid, $I$ is the $3\times3$
identity matrix, $\vec{r}_{nm}\equiv\vec{R}_{n}-\vec{R}_{m}$, and
$\hat{r}_{nm}$ is a unit vector in the direction of $\vec{r}_{nm}$.
The appropriate Langevin equation is given by (taking the same potential
$U$ as in the Rouse model):
\[\frac{d\vec{R}_{n}}{dt}=\underset{m}{\sum}H_{nm}\cdot\left(-\frac{\partial U}{\partial\vec{R}_{m}}\right)+\vec{g}_{n}(t)\]
and the random force $\vec{g}_{n}(t)$ is Gaussian with the following
characteristics:
\[<\vec{g}_{n}(t)>=0,\qquad<g_{n\alpha}(t)g_{m\beta}(t')>=2k_{B}T\left(H_{nm}\right)_{\alpha\beta}\delta(t-t')\]
 
\subsubsection{The Langevin equation we got seems complicated, it is not even linear
in $\vec{R}_{n}$! I guess there is an approximation coming my
way, am I right?}
Since $H_{nm}$ depends on $\vec{r}_{nm}$, the Langevin equation
we got is not linear in $\vec{R}_{n}$ and hence tremendously hard
to solve. Zimm's idea was to replace $H_{nm}$ (the factor that is
causing the non-linearity) by its equilibrium average value $<H_{nm}>_{eq}$;
this is called the preaveraging approximation. In general the equilibrium
value of $H_{nm}$ depends on the interactions between the solvent
and the polymer, and hence will have a different value in good/medium/bad
solvents. Here we will concentrate on a special state of a polymer
in solution; this state was also mentioned earlier and is called the
$\Theta$ state (). For a polymer in $\Theta$
conditions, the vector $\vec{r}_{nm}$ is characterized by a Gaussian
distribution with zero mean and a variance of $|n-m|b^{2}$. Here
$b$ is the distance between two adjacent monomers, and it follows
that the probability density function for $\vec{r}_{nm}$ is:
\[P(\vec{r}_{nm})=\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}exp\left(-\frac{3r_{nm}^{2}}{2|n-m|b^{2}}\right)\]
Since $H_{nm}$ is a function only of $\vec{r}_{nm}$ we can calculate
$\left\langle H_{nm}\right\rangle _{eq}$ (for $n\neq m$) as follows:
\[
\left\langle H_{nm}\right\rangle _{eq}=\int d^{3}\vec{r}\,P(\vec{r})H_{nm}=\int d^{3}\vec{r}\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}\exp\left(-\frac{3r^{2}}{2|n-m|b^{2}}\right)\frac{\hat{r}\hat{r}+I}{8\pi\eta r}
\]
Noting that in spherical coordinates:
\[
\hat{r}\hat{r}=\left[\begin{array}{ccc}
\sin^{2}\theta\cos^{2}\phi & \sin^{2}\theta\cos\phi\sin\phi & \cos\theta\sin\theta\cos\phi\\
\sin^{2}\theta\cos\phi\sin\phi & \sin^{2}\theta\sin^{2}\phi & \cos\theta\sin\theta\sin\phi\\
\cos\theta\sin\theta\cos\phi & \cos\theta\sin\theta\sin\phi & \cos^{2}\theta
\end{array}\right]
\]
We have:
\[
\int_{0}^{\pi}\sin\theta\,d\theta\int_{0}^{2\pi}d\phi\,\hat{r}\hat{r}=\left[\begin{array}{ccc}
\frac{4\pi}{3} & 0 & 0\\
0 & \frac{4\pi}{3} & 0\\
0 & 0 & \frac{4\pi}{3}
\end{array}\right]=\frac{4\pi}{3}I
\]
and hence:
\[
\left\langle H_{nm}\right\rangle _{eq}=\frac{1}{8\pi\eta}\left(\frac{4\pi}{3}+4\pi\right)I\int_{0}^{\infty}dr\,r\left(\frac{3}{2\pi|n-m|b^{2}}\right)^{3/2}\exp\left(-\frac{3r^{2}}{2|n-m|b^{2}}\right)
\]
The integral is calculated in a straightforward way: defining $\alpha=\frac{3}{2|n-m|b^{2}}$
we have
\[
\int_{0}^{\infty}dr\,r\left(\frac{\alpha}{\pi}\right)^{3/2}e^{-\alpha r^{2}}=\left(\frac{\alpha}{\pi}\right)^{3/2}\frac{1}{2\alpha}=\frac{\sqrt{\alpha}}{2\pi^{3/2}}
\]
and hence:
\[
\left\langle H_{nm}\right\rangle _{eq}=h(n-m)I
\]
where we have defined:
\[
h(n-m)=\frac{1}{\eta b\sqrt{6\pi^{3}|n-m|}}
\]
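This preaveraging result can be spot-checked by Monte Carlo: sample $\vec{r}_{nm}$ from the Gaussian distribution above (total variance $|n-m|b^{2}$, i.e. variance $|n-m|b^{2}/3$ per Cartesian component), average the Oseen block, and compare with $h(n-m)I$. A numpy sketch with illustrative parameter values:

```python
import numpy as np

# Monte Carlo check of the preaveraging result
#   <H_nm>_eq = h(n-m) I ,  h(n) = 1 / (eta * b * sqrt(6 pi^3 |n|)).
eta, b, nm = 1.0, 1.0, 10                  # illustrative values, |n - m| = 10
rng = np.random.default_rng(1)
r = rng.normal(scale=np.sqrt(nm * b**2 / 3.0), size=(400_000, 3))

rlen = np.linalg.norm(r, axis=1)
rhat = r / rlen[:, None]
dyads = np.einsum('ni,nj->nij', rhat, rhat) + np.eye(3)
H_avg = np.mean(dyads / (8.0 * np.pi * eta * rlen)[:, None, None], axis=0)

h_theory = 1.0 / (eta * b * np.sqrt(6.0 * np.pi**3 * nm))
assert np.allclose(H_avg, h_theory * np.eye(3), atol=0.03 * h_theory)
```

The off-diagonal entries of the sample average vanish (within statistical noise), reflecting the angular integral $\int d\Omega\,\hat{r}\hat{r}=\frac{4\pi}{3}I$ computed above.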
Substituting this result into our Langevin equation and re-writing
it in the continuum limit we get:
\[
\frac{\partial\vec{R}(n,t)}{\partial t}=k\int_{0}^{N}dm\,h(n-m)\frac{\partial^{2}\vec{R}(m,t)}{\partial m^{2}}+\vec{g}(n,t),\qquad k=\frac{3k_{B}T}{b^{2}}
\]
where the random force $\vec{g}(n,t)$ is Gaussian with the following
characteristics:
\[
\left\langle \vec{g}(n,t)\right\rangle =0,\qquad\left\langle g_{\alpha}(n,t)g_{\beta}(m,t')\right\rangle =2k_{B}Th(n-m)\delta_{\alpha\beta}\delta(t-t')
\]
Note that $\left\langle H_{nm}\right\rangle _{eq}$ depends only on
$n-m$ and we have indeed linearized our equation as promised.
\subsubsection{Seems familiar, shall we try normal coordinates again?}
Yes, we will once again use the normal coordinates defined for the
Rouse model. We start by applying $\frac{1}{N}\int_{0}^{N}dn\,\cos\left(\frac{p\pi n}{N}\right)$
to both sides of the Langevin equation for $\vec{R}(n,t)$:
\[
\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}\,dn=\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\left[k\int_{0}^{N}dm\,h(n-m)\frac{\partial^{2}\vec{R}(m,t)}{\partial m^{2}}\right]dn+\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)\,dn
\]
The left hand side term is identified as:
\[
\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\frac{\partial\vec{R}(n,t)}{\partial t}\,dn=\frac{\partial}{\partial t}\left[\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\vec{R}(n,t)\,dn\right]=\frac{d\vec{X}_{p}(t)}{dt}
\]
The first term on the right hand side gives:
\[
\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\left[k\int_{0}^{N}dm\,h(n-m)\frac{\partial^{2}}{\partial m^{2}}\left(\vec{X}_{0}(t)+2\overset{\infty}{\underset{q=1}{\sum}}\cos\left(\frac{q\pi m}{N}\right)\vec{X}_{q}(t)\right)\right]dn
\]
which yields:
\[
\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\left[-2k\int_{0}^{N}dm\,h(n-m)\overset{\infty}{\underset{q=1}{\sum}}\left(\frac{q\pi}{N}\right)^{2}\cos\left(\frac{q\pi m}{N}\right)\vec{X}_{q}(t)\right]dn
\]
and with some additional algebra we get:
\[
-\overset{\infty}{\underset{q=1}{\sum}}\frac{2kq^{2}\pi^{2}}{N}\vec{X}_{q}(t)\left[\frac{1}{N^{2}}\int_{0}^{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi m}{N}\right)h(n-m)\,dm\,dn\right]
\]
Defining:
\[
h_{pq}=\frac{1}{N^{2}}\int_{0}^{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi m}{N}\right)h(n-m)\,dm\,dn
\]
this term can be written as:
\[
-\overset{\infty}{\underset{q=1}{\sum}}\frac{2kq^{2}\pi^{2}}{N}h_{pq}\vec{X}_{q}(t)
\]
\subsubsection{But this doesn't decouple the equations! Another approximation?}
Indeed, we will approximate by neglecting all the off-diagonal terms.
The reasoning goes as follows: we first note that by setting $l=m-n$
and noting that $h(n-m)=h(m-n)$ we can write $h_{pq}$ as:
\[
h_{pq}=\frac{1}{N^{2}}\int_{0}^{N}dn\,\cos\left(\frac{p\pi n}{N}\right)\int_{-n}^{N-n}dl\,\cos\left(\frac{q\pi(n+l)}{N}\right)h(l)
\]
We now use a trigonometric identity:
\[
\cos\left(\frac{q\pi(n+l)}{N}\right)=\cos\left(\frac{q\pi n}{N}\right)\cos\left(\frac{q\pi l}{N}\right)-\sin\left(\frac{q\pi n}{N}\right)\sin\left(\frac{q\pi l}{N}\right)
\]
to get:
\[
h_{pq}=\frac{1}{N^{2}}\int_{0}^{N}dn\left[\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi n}{N}\right)\int_{-n}^{N-n}\cos\left(\frac{q\pi l}{N}\right)h(l)\,dl-\cos\left(\frac{p\pi n}{N}\right)\sin\left(\frac{q\pi n}{N}\right)\int_{-n}^{N-n}\sin\left(\frac{q\pi l}{N}\right)h(l)\,dl\right]
\]
For large $N$, the two inner integrals rapidly approach the following
integrals:
\[
\int_{-\infty}^{\infty}\cos\left(\frac{q\pi l}{N}\right)h(l)\,dl=\frac{2}{\eta b\sqrt{6\pi^{3}}}\int_{0}^{\infty}\frac{\cos\left(\frac{q\pi l}{N}\right)}{\sqrt{l}}\,dl=\frac{1}{\eta b}\sqrt{\frac{N}{3\pi^{3}q}},\qquad\int_{-\infty}^{\infty}\sin\left(\frac{q\pi l}{N}\right)h(l)\,dl=0
\]
(the second integral vanishes because $h(l)$ is an even function of $l$).
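The Fourier integral used here, $\int_{0}^{\infty}\cos(al)/\sqrt{l}\,dl=\sqrt{\pi/(2a)}$, can be spot-checked numerically. A numpy sketch: the substitution $l=s^{2}$ removes the integrable singularity at $l=0$, leaving $2\int_{0}^{\infty}\cos(as^{2})\,ds$, which is truncated at a large upper limit (the remainder oscillates within a $\sim1/(2aL)$ envelope):

```python
import numpy as np

# Check:  int_0^inf cos(a l)/sqrt(l) dl = sqrt(pi/(2a)),
# evaluated as 2 * int_0^L cos(a s^2) ds after the substitution l = s^2.
def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

a = 1.0
s = np.linspace(0.0, 100.0, 4_000_001)
numeric = 2.0 * trap(np.cos(a * s**2), s)
exact = np.sqrt(np.pi / (2.0 * a))
assert abs(numeric - exact) < 0.03 * exact
```

With $a=q\pi/N$ this reproduces the $\sqrt{N/q}$ behaviour of the inner integral quoted above.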
With this substitution $h_{pq}$ becomes:
\[
h_{pq}\approx\frac{1}{N^{2}}\,\frac{1}{\eta b}\sqrt{\frac{N}{3\pi^{3}q}}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi n}{N}\right)dn
\]
and after using the trigonometric identity:
\[
\cos(A)\cos(B)=\frac{1}{2}\left[\cos(A-B)+\cos(A+B)\right]
\]
we get (for $p,q\geq1$):
\[
h_{pq}\approx\frac{\delta_{pq}}{\eta b\sqrt{12\pi^{3}qN}}
\]
If $q$ is small our approximation is still fair, but for the case
$q=0$ it is invalid and this case deserves special attention. The
careful reader may have noticed that the sum:
\[
-\overset{\infty}{\underset{q=1}{\sum}}\frac{2kq^{2}\pi^{2}}{N}h_{pq}\vec{X}_{q}(t)
\]
starts from $q=1$ and it may seem that a discussion regarding $q=0$
is pointless. We will nevertheless require this case ($h_{p0}$) later
on and so we calculate directly:
\[
h_{p0}=\frac{1}{N^{2}}\int_{0}^{N}dn\,\cos\left(\frac{p\pi n}{N}\right)\int_{0}^{N}dm\,h(n-m)
\]
The inner integral gives:
\[
\int_{0}^{N}h(n-m)\,dm=\frac{1}{\eta b\sqrt{6\pi^{3}}}\int_{0}^{N}\frac{dm}{\sqrt{|n-m|}}=\frac{2\left(\sqrt{n}+\sqrt{N-n}\right)}{\eta b\sqrt{6\pi^{3}}}
\]
which results in:
\[
h_{p0}=\frac{2}{N^{2}\eta b\sqrt{6\pi^{3}}}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\left(\sqrt{n}+\sqrt{N-n}\right)dn
\]
Substituting this into the expression for $h_{p0}$ gives:
\[
h_{p0}=\frac{2}{\eta b\sqrt{6\pi^{3}N}}\int_{0}^{1}\cos(p\pi u)\left(\sqrt{u}+\sqrt{1-u}\right)du
\]
where we have changed variables to $u=n/N$. It is now easy to see
that for odd $p$: $h_{p0}=0$ (substituting $u\rightarrow1-u$ maps the
$\sqrt{1-u}$ term onto the $\sqrt{u}$ term with a factor $\cos(p\pi)=(-1)^{p}$),
while for even $p$ we get:
\[
h_{p0}=\frac{4}{\eta b\sqrt{6\pi^{3}N}}\int_{0}^{1}\cos(p\pi u)\sqrt{u}\,du
\]
For $p=0$ this gives:
\[
h_{00}=\frac{4}{\eta b\sqrt{6\pi^{3}N}}\int_{0}^{1}\sqrt{u}\,du=\frac{8}{3\eta b\sqrt{6\pi^{3}N}}
\]
while for even $p>0$, the integral may be re-expressed in terms of
the Fresnel integral $S(x)=\int_{0}^{x}\sin\left(\frac{\pi t^{2}}{2}\right)dt$
to give:
\[
\int_{0}^{1}\cos(p\pi u)\sqrt{u}\,du=-\frac{1}{p\pi}\int_{0}^{1}\sin(p\pi t^{2})\,dt=-\frac{S\left(\sqrt{2p}\right)}{p\pi\sqrt{2p}}
\]
and we see that:
\[
h_{p0}=-\frac{2\sqrt{2}\,S\left(\sqrt{2p}\right)}{\pi p^{3/2}}\cdot\frac{1}{\eta b\sqrt{6\pi^{3}N}},\qquad S\left(\sqrt{2p}\right)\rightarrow\frac{1}{2}\textrm{ as }p\rightarrow\infty
\]
concluding that for $p>0$:
\[
h_{p0}\approx-\frac{\sqrt{2}}{\pi p^{3/2}}\cdot\frac{1}{\eta b\sqrt{6\pi^{3}N}}
\]
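The parity and decay properties of $h_{p0}$ can be checked directly by quadrature. In the numpy sketch below, \texttt{Ip(p)} is $h_{p0}$ up to the common prefactor $2/(\eta b\sqrt{6\pi^{3}N})$:

```python
import numpy as np

# Check the parity and decay of h_p0 via  I_p = int_0^1 cos(p pi u)(sqrt(u)+sqrt(1-u)) du.
def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

u = np.linspace(0.0, 1.0, 200_001)

def Ip(p):
    return trap(np.cos(p * np.pi * u) * (np.sqrt(u) + np.sqrt(1.0 - u)), u)

def fresnel_S(x):                      # S(x) = int_0^x sin(pi t^2 / 2) dt
    t = np.linspace(0.0, x, 200_001)
    return trap(np.sin(np.pi * t**2 / 2.0), t)

assert abs(Ip(1)) < 1e-6 and abs(Ip(3)) < 1e-6       # odd p: integral vanishes
for p in (2, 4, 8):                                   # even p: Fresnel form
    expected = -2.0 * fresnel_S(np.sqrt(2.0 * p)) / (p * np.pi * np.sqrt(2.0 * p))
    assert abs(Ip(p) - expected) < 1e-5
assert abs(Ip(8)) < abs(Ip(2))                        # |h_p0| decays with p
```

The odd-$p$ cancellation is exact, while the even-$p$ values follow the $p^{-3/2}$ envelope derived above.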
We see that for large $N$, $h_{p0}$ is small and also decays with
$p$. We will hence neglect $h_{pq}$ for $p\neq q$ and keep only the
diagonal term $h_{pp}$.
\subsubsection{O.K., what about the random forces?}
We are left with the second term on the right hand side of the original
equation, which we deal with by defining the random forces:
\[
\vec{g}_{p}(t)=\frac{1}{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\vec{g}(n,t)\,dn
\]
which are characterized by zero mean:
\[
\left\langle \vec{g}_{p}(t)\right\rangle =0
\]
and by:
\[
\left\langle g_{p\alpha}(t)g_{q\beta}(t')\right\rangle =2k_{B}Th_{pp}\delta_{pq}\delta_{\alpha\beta}\delta(t-t')
\]
since:
\[
\left\langle g_{\alpha}(n,t)g_{\beta}(m,t')\right\rangle =2k_{B}Th(n-m)\delta_{\alpha\beta}\delta(t-t')
\]
gives:
\[
\left\langle g_{p\alpha}(t)g_{q\beta}(t')\right\rangle =\frac{1}{N^{2}}\int_{0}^{N}\int_{0}^{N}\cos\left(\frac{p\pi n}{N}\right)\cos\left(\frac{q\pi m}{N}\right)2k_{B}Th(n-m)\delta_{\alpha\beta}\delta(t-t')\,dm\,dn
\]
which yields the result by definition of $h_{pq}$ (remember $h_{pq}\approx h_{pp}\delta_{pq}$).
This means that the random forces with different values of $p$
and/or acting in perpendicular directions and/or acting at different
times are independent.
\subsubsection{That was a bit long, could you please sum up the main result?}
The main result is that we have found the equations of motion for
the normal coordinates $\vec{X}_{p}(t)$ and that they are given by:
\[
\frac{d\vec{X}_{p}(t)}{dt}=-\frac{k_{p}}{\zeta_{p}}\vec{X}_{p}(t)+\vec{g}_{p}(t)
\]
with
\[
k_{p}=\frac{2\pi^{2}kp^{2}}{N}=\frac{6\pi^{2}k_{B}Tp^{2}}{Nb^{2}},\qquad\zeta_{p}=\frac{1}{h_{pp}}=\eta b\sqrt{12\pi^{3}pN}
\]
and since the random forces are independent of each other, the motions
of the $\vec{X}_{p}$'s are also independent of each other. These
are the equations of motion for an infinite set of uncoupled Brownian
harmonic oscillators, each with a force constant $k_{p}$ and friction
coefficient $\zeta_{p}$ of its own. We have once again gone from
one partial differential equation (which we don't know how to solve
directly) for $\vec{R}(n,t)$ to an infinite set of uncoupled ordinary
differential equations (of a type we are already familiar with)
for the normal coordinates $\vec{X}_{p}(t)$.
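Each such oscillator is an Ornstein--Uhlenbeck process, so its stationary variance per Cartesian component must satisfy equipartition, $\left\langle X_{p\alpha}^{2}\right\rangle =k_{B}T/k_{p}$. The following Euler--Maruyama sketch integrates a single mode with the constants defined above (all parameter values are illustrative placeholders) and checks this:

```python
import numpy as np

# Euler-Maruyama integration of one Zimm normal mode (an Ornstein-Uhlenbeck
# process), checking equipartition <X_p,alpha^2> = k_B T / k_p.
kT, eta, b, N, p = 1.0, 1.0, 1.0, 100, 1             # illustrative values
k_p = 6.0 * np.pi**2 * kT * p**2 / (N * b**2)        # mode spring constant
zeta_p = eta * b * np.sqrt(12.0 * np.pi**3 * p * N)  # mode friction
tau_p = zeta_p / k_p                                 # mode relaxation time

rng = np.random.default_rng(2)
dt, steps = 0.02 * tau_p, 1000                       # ~20 relaxation times
X = np.zeros((20_000, 3))                            # ensemble of independent modes
for _ in range(steps):
    noise = rng.normal(size=X.shape)
    X += -(k_p / zeta_p) * X * dt + np.sqrt(2.0 * kT * dt / zeta_p) * noise

var = X.var()                                        # per-component variance
assert abs(var - kT / k_p) < 0.05 * (kT / k_p)
```

The noise amplitude $\sqrt{2k_{B}T\,dt/\zeta_{p}}$ follows from the mode-force correlation $2k_{B}Th_{pp}\delta(t-t')=\left(2k_{B}T/\zeta_{p}\right)\delta(t-t')$ derived above.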
\subsubsection{Great! This is very similar to what we got for the Rouse model, are
we going to repeat the same type of analysis? }
Since the equation for the normal modes is the same as that for the
Rouse model, we can immediately write the expressions for the diffusion
constant of the center of mass and the rotational relaxation time
using the results of the previous section:
\[
D_{G}=k_{B}Th_{00}=\frac{8}{3}\frac{k_{B}T}{\sqrt{6\pi^{3}N}\,\eta b}\approx0.196\frac{k_{B}T}{\eta b\sqrt{N}},\qquad\tau_{r}=\frac{\zeta_{1}}{k_{1}}=\frac{\eta b^{3}N^{3/2}}{\sqrt{3\pi}\,k_{B}T}
\]
\subsubsection{How does the Zimm model stand in comparison to experimental results?}
As can be seen, $D_{G}$ and $\tau_{r}$ depend on the molecular
weight $M$ as follows (recall that $M\propto N$):
\[
D_{G}\propto M^{-1/2},\qquad\tau_{r}\propto M^{3/2}
\]
The dependence of these quantities on the molecular weight agrees
with experiments performed on solutions in the $\Theta$ state. Furthermore,
the relaxation times of the normal modes are:
\[
\tau_{p}=\frac{\zeta_{p}}{k_{p}}=\frac{\tau_{r}}{p^{3/2}}
\]
and hence for short times ($t\ll\tau_{r}$) the average mean square
displacement of the $n$-th monomer is given by:
\[
\phi(n,t)\equiv\left\langle \left(\vec{R}(n,t)-\vec{R}(n,0)\right)^{2}\right\rangle \approx\frac{2Nb^{2}}{\pi^{2}}\int_{0}^{\infty}\frac{1-\exp(-tp^{3/2}/\tau_{r})}{p^{2}}\,dp
\]
Integration by parts gives:
\[
\phi(n,t)\approx\frac{2Nb^{2}}{\pi^{2}}\left.\left[\frac{-1+\exp(-tp^{3/2}/\tau_{r})}{p}\right]\right|_{0}^{\infty}+\frac{3Ntb^{2}}{\tau_{r}\pi^{2}}\int_{0}^{\infty}\frac{\exp(-tp^{3/2}/\tau_{r})}{\sqrt{p}}\,dp
\]
The first term drops (elementary calculus); the second term is treated
by a change of variable $x=tp^{3/2}/\tau_{r}$:
\[
\phi(n,t)\approx\frac{3Ntb^{2}}{\tau_{r}\pi^{2}}\int_{0}^{\infty}\frac{2\tau_{r}\exp(-x)}{3t\left[\frac{\tau_{r}x}{t}\right]^{2/3}}\,dx=\frac{2Nb^{2}\Gamma(1/3)}{\pi^{2}}\left[\frac{t}{\tau_{r}}\right]^{2/3}
\]
where we have identified the gamma function $\Gamma(1/3)=\int_{0}^{\infty}x^{-2/3}e^{-x}\,dx$.
The relation $\phi(n,t)\propto t^{2/3}$ has been confirmed by analysis
of the Brownian motion of DNA molecules.
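The $t^{2/3}$ law above can be confirmed by direct quadrature of the mode sum, without the integration by parts. A numpy sketch (the substitution $p=s^{2}$ tames the integrable singularity at $p=0$, and \texttt{expm1} keeps the small-$s$ limit accurate):

```python
import numpy as np
from math import gamma

# Check:  int_0^inf (1 - exp(-t p^{3/2}/tau_r)) / p^2 dp = Gamma(1/3) (t/tau_r)^{2/3},
# which is phi(n,t) up to the prefactor 2 N b^2 / pi^2.
def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

s = np.linspace(1e-9, 200.0, 400_001)                # substitution p = s^2
for t_over_tau in (0.001, 0.01, 0.1):
    integrand = -2.0 * np.expm1(-t_over_tau * s**3) / s**3
    numeric = trap(integrand, s)
    exact = gamma(1.0 / 3.0) * t_over_tau**(2.0 / 3.0)
    assert abs(numeric - exact) < 0.01 * exact
```

The check passes over two decades in $t/\tau_{r}$, consistent with the straight line of slope $2/3$ seen in the log-log plot below.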
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{\string"31-07-2009 01-21-59\string".eps}
\par\end{centering}
\caption{The average mean square displacement of the terminal segment of a
DNA molecule (solid line), observed by fluorescence microscopy. The
dashed line is calculated from the theory of Zimm. The graph is plotted
on a log-log scale; on this type of plot the slope of a line corresponds
to the exponent $\alpha$ in the relation $\phi\propto t^{\alpha}$.
The fact that the lines are parallel supports the prediction $\phi\propto t^{2/3}$.
Taken from: J. Polym. Sci., 30, 779, Fig. 5.}
\end{figure}
\newpage{}
\subsection{I Have More Questions, Where Can I Get Answers?}
\begin{thebibliography}{3}
\bibitem{key-4}M. Doi, \textit{Introduction to Polymer Physics}, Chapters 4--5,
translated by H. See, Clarendon Press, 1995.
\bibitem{key-5}M. Doi and S. F. Edwards, \textit{The Theory of Polymer Dynamics},
Chapters 3--5, Clarendon Press, 1988.
\bibitem{key-6}M. Rubinstein and R. H. Colby, \textit{Polymer Physics}, Chapter 8,
Oxford University Press, 2003.
\end{thebibliography}