Archive for the ‘Math’ Category

Applicatives, Monads and Concurrency

February 12, 2015

With the Functor-Applicative-Monad proposal just around the corner, I’ve been wondering what exactly the relation between Applicative and Monad is. Every Monad can be made a Functor with fmap = liftM where

liftM :: Monad m => (a -> b) -> m a -> m b
liftM f ma = do a <- ma
                return (f a)

In fact, parametricity guarantees that fmap = liftM; it’s a free theorem. There is another law relating Functor and Monad: ma >>= f = join (fmap f ma). This law is particularly interesting because if you define a Monad the way mathematicians do, using join instead of (>>=), then you need Functor as a superclass just to define (>>=). So it seems very natural for Functor to be a superclass of Monad.
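
To make this concrete, here is a minimal sketch of the join-style presentation; the class MonadJoin and the names unit and join' are hypothetical, chosen to avoid clashing with the Prelude.

-- A sketch of the mathematicians' presentation: join is primitive and
-- (>>=) is derived, which is why Functor must be a superclass.
class Functor m => MonadJoin m where
    unit  :: a -> m a
    join' :: m (m a) -> m a

bindFromJoin :: MonadJoin m => m a -> (a -> m b) -> m b
bindFromJoin ma f = join' (fmap f ma)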

However, it’s not so obvious to me why Applicative should be a superclass of Monad. It’s true that every Monad can be made an Applicative with (<*>) = ap where

ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do f <- mf
              a <- ma
              return (f a)

One should naturally expect that (<*>) = ap is a law when Applicative is a superclass of Monad. But (<*>), unlike fmap, is not uniquely determined: a type constructor may admit more than one Applicative instance. For example, lists have two useful candidates.

ap :: [a -> b] -> [a] -> [b]
ap fs as = [f a | f <- fs, a <- as]

zap :: [a -> b] -> [a] -> [b]
zap [] _ = []
zap _ [] = []
zap (f:fs) (a:as) = (f a) : zap fs as

ap applies every function in the first list to every value in the second list, while zap goes down the two lists applying functions to values at the same index. So which do you choose? Well, ap is the one which is compatible with the Monad instance on lists, so it wins. zap is used as the Applicative instance for a newtype wrapper around lists, the ZipList. Interestingly, there is no compatible Monad instance on ZipLists.
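
To see the two behaviors side by side, here is a quick demonstration; ZipList comes from Control.Applicative, and the comment shows the expected result.

import Control.Applicative (ZipList (..))

demo :: ([Int], [Int])
demo = ( [(+1), (*2)] <*> [10, 20]
       , getZipList (ZipList [(+1), (*2)] <*> ZipList [10, 20]) )
-- demo == ([11,21,20,40], [11,40])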

The ambiguity for lists resolves nicely enough. But in creating data types for concurrency, one runs into cases where you might want Applicative and Monad instances where (<*>) /= ap. A really good example can be found in the Haxl paper. Haxl is a library developed by Facebook to manage data requests in a way that maximizes concurrency for the sake of efficiency. In section 4 of the paper, we find out that for the type constructor Fetch,

Blocked (Done (+1)) <*> Blocked (Done 1) = Blocked (Done 2)
Blocked (Done (+1)) `ap` Blocked (Done 1) = Blocked (Blocked (Done 2))

I won’t go over the details here, but the takeaway is that the Applicative can take advantage of looking at both arguments of (<*>), which are both Fetch computations, while the Monad cannot, because it must use (>>=), whose second argument is a function with pure input. So here we have an example of a law-breaking Applicative/Monad pair. As with ZipList, there is no Monad instance which is compatible with the Applicative instance. So why not a newtype? The motivation is that we want to be able to use concurrency implicitly, without the overhead of having to wrap and unwrap a newtype. Whether that justifies breaking the law is a judgment call. Notice, however, that the law breaks in a well-defined way: (<*>) is equal to ap up to idempotency of Blocked, that is, the relation Blocked . Blocked = Blocked.
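
To see where the extra Blocked comes from, here is a toy model of Fetch reduced to just its blocking structure. This is my own sketch, not Haxl’s actual definition (the real type also carries the pending requests and continuations), but it reproduces the equations above.

-- A toy Fetch: a computation is either finished or blocked for a round.
data Fetch a = Done a | Blocked (Fetch a)

instance Functor Fetch where
    fmap f (Done a)    = Done (f a)
    fmap f (Blocked c) = Blocked (fmap f c)

instance Applicative Fetch where
    pure = Done
    Done f    <*> Done a    = Done (f a)
    Done f    <*> Blocked c = Blocked (Done f <*> c)
    Blocked c <*> Done a    = Blocked (c <*> Done a)
    Blocked c <*> Blocked d = Blocked (c <*> d)  -- both sides wait out the same round

instance Monad Fetch where
    return = pure
    Done a    >>= f = f a
    Blocked c >>= f = Blocked (c >>= f)  -- rounds accumulate one after another

With these instances, (<*>) merges the blocked rounds of its two arguments, while ap, forced to sequence through (>>=), nests them.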

I want to work through another example that was inspired by concurrency in PureScript.

First let’s take care of the imports.

import Control.Applicative
import Control.Concurrent
import Control.Concurrent.Async
import Control.Concurrent.MVar
import Control.Monad.IO.Class
import Data.Traversable
import Prelude hiding (mapM)
import System.Random

Annoyingly, we have to hide mapM from either the Prelude or Data.Traversable, but this should be fixed by the Foldable/Traversable in Prelude proposal. The type constructor we’ll work with is for callbacks.

newtype Callback a = Callback {runCallback :: (a -> IO ()) -> IO ()}

You should recognize that Callback is isomorphic to ContT () IO, so we can just crib the definitions of Functor and Monad instances from your favorite source. We’ll also give a MonadIO instance.
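
For the record, here is the isomorphism spelled out; ContT lives in Control.Monad.Trans.Cont in the transformers package.

import Control.Monad.Trans.Cont (ContT (..))

-- Callback a and ContT () IO a both wrap (a -> IO ()) -> IO (),
-- so converting back and forth is just repackaging.
toContT :: Callback a -> ContT () IO a
toContT = ContT . runCallback

fromContT :: ContT () IO a -> Callback a
fromContT = Callback . runContT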

instance Functor Callback where
    fmap f c = Callback (\k -> runCallback c (k . f))

instance Monad Callback where
    return a = Callback (\k -> k a)
    c >>= f = Callback (\k -> runCallback c (\a -> runCallback (f a) k))

instance MonadIO Callback where
    liftIO io = Callback (\k -> io >>= k)

For the Applicative instance we want to take advantage of the ability to do our IO side effects concurrently. Let’s see how this works.

instance Applicative Callback where
    pure a = Callback (\k -> k a)
    cf <*> ca = Callback $ \k -> do
        vf <- newEmptyMVar
        va <- newEmptyMVar
        let finish = do f <- takeMVar vf
                        a <- takeMVar va
                        k (f a)
        race_ (runCallback cf (putMVar vf) >> finish)
              (runCallback ca (putMVar va) >> finish)

Ok, so what’s going on in the definition of (<*>)? Well, first we create new empty mutable variables in which we will store a function f and a value a to apply it to, both of which require performing some IO effects to obtain. finish is where we perform the callback on the result; finish looks a whole lot like ap. The magic happens in the function race_, which is from Simon Marlow’s delightful async package. It runs two IO actions concurrently, canceling the loser. Because finish blocks while either vf or va is empty, both callbacks are run concurrently with a putMVar as the callback, filling up the variables. Then finish unblocks, and the first branch to complete finish wins the race.

Let’s see the difference in performance between the Applicative and Monad instances on an example.

square :: Int -> Callback Int
square n = do liftIO $ do let second = 10^6
                          random <- randomRIO (0 , second)
                          threadDelay random
                          let time = fromIntegral random / fromIntegral second
                          putStrLn $ "Squaring "
                              ++ show n ++ " after "
                              ++ show time ++ " seconds."
              return (n^2)

main :: IO ()
main = do putStrLn "Traversing [1..10] using Monad"
          runCallback (mapM square [1..10]) print
          putStrLn "Traversing [1..10] using Applicative"
          runCallback (traverse square [1..10]) print

And here’s an example of what the output looks like.

Traversing [1..10] using Monad
Squaring 1 after 0.723773 seconds.
Squaring 2 after 0.603614 seconds.
Squaring 3 after 0.203097 seconds.
Squaring 4 after 0.535666 seconds.
Squaring 5 after 0.414605 seconds.
Squaring 6 after 0.807218 seconds.
Squaring 7 after 0.580662 seconds.
Squaring 8 after 0.715894 seconds.
Squaring 9 after 0.641902 seconds.
Squaring 10 after 0.180422 seconds.
[1,4,9,16,25,36,49,64,81,100]
Traversing [1..10] using Applicative
Squaring 4 after 4.8804e-2 seconds.
Squaring 5 after 0.140811 seconds.
Squaring 3 after 0.142185 seconds.
Squaring 7 after 0.373733 seconds.
Squaring 9 after 0.386626 seconds.
Squaring 8 after 0.393645 seconds.
Squaring 1 after 0.417399 seconds.
Squaring 6 after 0.780407 seconds.
Squaring 2 after 0.803894 seconds.
Squaring 10 after 0.96637 seconds.
[1,4,9,16,25,36,49,64,81,100]

The Monad is forced to perform the effects consecutively, taking up to 10 seconds (ignoring the time it takes to get a random number and to print). The Applicative performs the effects concurrently, taking only up to 1 second. So here’s another example where (<*>) /= ap, and we may be somewhat justified by the desire for implicit concurrency. Notice that it’s just the effects that are different, not the result. Said another way, if we bind x <- mf <*> ma and y <- mf `ap` ma, then we have that x = y, so we’re breaking the law but not flagrantly.

Ribbon categories

October 23, 2009

In the last post I discussed the category of framed oriented tangles, which according to Shum’s theorem is a free ribbon category. As a corollary to Shum’s theorem, we may derive tangle invariants from any ribbon category. Let’s see how this works for the Kauffman bracket.

Consider planar diagrams, that is, curves in the plane. These are like tangle diagrams only without self-intersections, i.e. no crossings. Just like tangles, they form a monoidal category, since we can place them side by side or atop each other. Also just like tangles, they have duality cups and caps.

[Figure: Cup and Cap]

Inspired by the definition of the Kauffman bracket, we extend the category of planar diagrams by taking formal linear combinations of diagrams with coefficients that are polynomials in A,A^{-1}, and we mod out by the circle relation:

[Figure: Circle Relation]
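
In symbols, the circle relation deletes a closed loop at the cost of an overall scalar. Using the value C=-A^{2}-A^{-2} worked out in the Jones’ polynomial post below, it says that for any planar diagram D,

D\sqcup\bigcirc=(-A^{2}-A^{-2})\,D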

This gives a braiding and twist as in the calculations for the Kauffman bracket.

[Figure: Braiding]

[Figure: Twist]

The resulting ribbon category is called the Temperley-Lieb category, named for the mathematicians Temperley and Lieb, who studied its implications in the context of statistical mechanics.

Now we have two examples of ribbon categories, the category of tangles and the Temperley-Lieb category. How else can we generate examples of ribbon categories? Recall that the category of finite dimensional vector spaces and linear maps forms a monoidal category with duals. We consider the subcategory of representations of an algebra A.

An algebra is a vector space in which we have a multiplication and unit with the familiar properties of associativity and unitality. For example, given a vector space V the space End(V) of endomorphisms of V, that is linear maps V\to V, forms an algebra where our multiplication is composition of linear maps and our unit is the identity map 1_V. V is a representation of A iff there is a linear map \rho_V:A\to End(V) which preserves multiplication and unit.

A Hopf algebra, in addition to having a multiplication and unit, also has maps \Delta:A\to A\otimes A and \eta:A\to k, called comultiplication and counit, which are coassociative and counital, where k is the field of scalars. This guarantees that the category Rep_{fd}(A) of finite dimensional representations of A is monoidal, since we can define representations \rho_{V\otimes W}=(\rho_V\otimes \rho_W)\Delta and \rho_k=\eta. We also require a map S:A\to A called the antipode, which reverses the order of multiplication and is the convolution inverse of the identity. This guarantees that Rep_{fd}(A) has left duals with \rho_{V^*}=\rho_V S.

If there are elements R\in A\otimes A,h\in A such that P_{V,W}(\rho_V\otimes\rho_W)(R) is a braiding, where P_{V,W}:V\otimes W\to W\otimes V is the swap map P_{V,W}(v\otimes w)=w\otimes v and where \rho_V(h) is a twist, then we call A a ribbon Hopf algebra. Clearly then Rep_{fd}(A) is a ribbon category.

Surprisingly, ribbon Hopf algebras turn up in the study of Lie algebras. One may “quantize” a Lie algebra, deforming it by a formal parameter meant to mimic Planck’s constant \hbar, and the result is a ribbon Hopf algebra. This discovery led to a whole slew of new invariants and a new understanding of old invariants. For instance, the Jones’ polynomial and the Kauffman bracket are related to the quantization of the most basic Lie algebra sl(2,\mathbb{C})=su(2)\otimes\mathbb{C}=so(3)\otimes\mathbb{C}. Invariants of tangles derived from quantized Lie algebras are called Reshetikhin-Turaev invariants or simply quantum invariants. When applied to links, they give polynomials in a variable q=e^\hbar.

The category of tangles

October 1, 2009

I want to get back to discussing tangles. So far we’ve been thinking about tangles entirely topologically. But as it turns out, tangles are also fundamentally algebraic objects. The algebraic gadget we need to understand tangles is that of a free ribbon category. Indeed, Shum’s theorem states that framed, oriented tangles form the morphisms of a free ribbon category on a single generator.

To begin to understand this deep statement we must start with the definition of a category. A category is a set of objects A,B,C,\ldots along with a class (for technical reasons a class, not a set) of morphisms f,g,h,\ldots. Each morphism has a source object and a target object so that we can think of a morphism as an arrow B\leftarrow A. There is a composition operation of morphisms gf which is defined only if the source of g is the target of f. There is also an identity morphism 1_A for every object A whose source and target are both A. Finally we require that composition be associative (hg)f=h(gf) and unital 1_B f=f=f 1_A.

Tangles form morphisms in a category. Just let the objects be points in a plane; then clearly tangles form morphisms with their bottom endpoints as source and their top endpoints as target (or vice versa, it’s just a convention). We can compose tangles by placing them one atop the other, so long as their sources and targets match up. Identity tangles are simply a bunch of vertical lines connecting matching top and bottom endpoints. Clearly, associativity and unitality hold so tangles do indeed form a category.

We can form a category of tangles with a completely different composition however. Instead of placing tangles atop each other, we can place them side by side. Now the empty tangle is the identity. Also, in this category there is only 1 object since we can always place tangles next to each other; there’s nothing to match up! Something with 2 different categorical structures like this is called, logically enough, a 2-category. But, as we said, the second category structure has a unique object. These kinds of 2-categories are so common they get their own name, monoidal categories. Thus, tangles form the morphisms of a monoidal category.

Actually, that’s not the end of the story! We could put the tangles side by side in different ways: since the endpoints live in planes, we have 2 dimensions to work with. The two independent ways of placing tangles next to each other, in addition to the standard composition of placing them atop each other, turn tangles into a 3-category. Since both ways of putting tangles next to each other can be done without worrying about matching, this is a special kind of 3-category called a doubly monoidal category. Doubly monoidal categories always have a way of transforming the monoidal product (side-by-side placement) into its opposite (side-by-side placement but in the reverse order). This comes from the fact that the 2 monoidal structures are essentially the same. Try to think about why this is true for tangles.

Let’s think about how to transform two points sitting side by side into the same two points sitting in the opposite order. As we transform in two dimensions rotating one around the other, we trace out the familiar crossing. Of course we can rotate them in the other direction and get the other crossing.

[Figure: Crossings]

In general, this sort of thing is called a braiding, and doubly monoidal categories always have them. For this reason, they’re also called braided monoidal categories.

Orientation means that the endpoints of our tangle are more than just points. They have directions associated with them, either up or down. We call this a dual structure, since the dual of up is down. This is familiar from linear algebra, where to each vector space V we can associate a dual vector space V^* of linear maps from V to the field of scalars. The important structures relating vector spaces and their duals are the evaluation and coevaluation maps. Evaluation takes a dual vector f and a vector v and evaluates to the scalar f(v). Coevaluation makes use of the isomorphism V\otimes V^*=End(V) where End(V) is the space of endomorphisms of V. The coevaluation takes a scalar to that scalar multiple of the identity. Now, we have the same sort of structure morphisms in the category of tangles, the caps and cups. This makes the category of tangles a monoidal category with duals, just like the category of linear transformations of vector spaces.
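
In formulas, for a finite dimensional vector space V with basis e_i and dual basis e^i, the two maps are

ev:V^{*}\otimes V\to k,\quad f\otimes v\mapsto f(v)
coev:k\to V\otimes V^{*},\quad 1\mapsto\sum_i e_i\otimes e^i

and the image of 1 under coev is exactly the identity under the isomorphism V\otimes V^*=End(V).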

[Figure: Cup and Cap]

Since cups and caps may be oriented in 2 different ways, we have 2 dual structures, a left and a right dual. The same can be said of the category of vector spaces, but there one simply identifies left and right duals. In the category of tangles it’s not so easy. Instead one must build a natural isomorphism between left and right duals, and for this you need a twist. A twist is what it sounds like: take your endpoints and twist them around 360 degrees. This is where framing comes into play. If you do this to a single endpoint, you get a ribbon with a full twist in it. This has a blackboard diagram that looks like either side of the framed Reidemeister 1 move.

[Figure: Twist on 1 strand (framed Reidemeister 1)]

What if you had 2 endpoints? Think about this for a bit: you get 2 crossings between 2 ribbons, each of which has a full twist in it. Luckily this is the compatibility condition between the braiding and the twist that is required of a so-called ribbon category.

[Figure: Twist on 2 strands]

To recap, a ribbon category is a braided monoidal category with duals and a twist. All of these may be defined algebraically but have intuitive topological definitions in the category of tangles. The fact that algebra may be thought about topologically can be rigorously summed up in the statement of Shum’s theorem given at the beginning of the post: framed, oriented tangles form the morphisms of a free ribbon category on a single generator.

Electrodynamics on a Principal Bundle II

August 16, 2009

Suppose we had a principal U(1)-bundle \pi:P\to M with a connection \omega with curvature \Omega.

The Lie algebra \mathfrak{u}(1) is just the set of imaginary numbers i\mathbb{R} with trivial Lie bracket [\cdot,\cdot]=0. The local potential is a real-valued 1-form A_{U} defined by \omega_{U}=iA_{U}. The local field strength F_{U} is defined by \Omega_{U}=iF_{U}.

A change of gauge is given by g_{UV}=e^{i\lambda} with \lambda:U\cap V\to\mathbb{R}. We see that local connections are related by \omega_{V}=e^{-i\lambda}\omega_{U}e^{i\lambda}+e^{-i\lambda}de^{i\lambda}=\omega_{U}+id\lambda, so that local potentials are related by A_{V}=A_{U}+d\lambda. Local curvatures are related by \Omega_{V}=e^{-i\lambda}\Omega_{U}e^{i\lambda}=\Omega_{U}, so that local field strengths are related by F_{U}=F_{V}. This means that the field strength is globally defined on M.

By the Bianchi identity we have d\Omega=0 so dF=0, so the homogeneous Maxwell equation comes along for free. We can get the inhomogeneous Maxwell equation by requiring that d*F=*J.

Now, consider the action of U(1) on \mathbb{C} given by multiplication e^{i\lambda}z. Associated to our principal U(1)-bundle we get a vector bundle with fiber \mathbb{C} with an induced connection \nabla locally given by \nabla=d+\omega_U=d+iA_U. We will write sections of the associated bundle as \psi. We can define the d’Alembert operator \square=*\nabla*\nabla+\nabla*\nabla*. If we require the Klein-Gordon equation, \square\psi=m^2\psi, then we have a theory of a charged spin-0 particle coupled to electromagnetism.
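
In local coordinates this induced connection is the familiar minimal coupling of electromagnetism (up to sign conventions, which are fixed here by the definitions above):

\nabla_{\mu}\psi=\partial_{\mu}\psi+iA_{\mu}\psi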

In order to couple electromagnetism to more interesting particles like Dirac’s electron, we need to incorporate spin somehow.

Consider the matrix group O(1,3), i.e. matrices B such that B^{T}\eta B=\eta where \eta=diag(1,-1,-1,-1), or equivalently \eta(Bv,Bw)=\eta(v,w) for any events v,w in Minkowski spacetime. This group has 4 connected components coming from det(B)=\pm1 and B_{00}>0 or B_{00}<0. The component containing the identity is called the proper, orthochronous Lorentz group L=L_{+}^{\uparrow}. Physically it contains all rotations and boosts (Lorentz transformations), and so dim(L)=6.

We can cover L by the simply connected group SL(2,\mathbb{C}), i.e. 2\times2 complex matrices A with det(A)=1. First we identify Minkowski spacetime \mathbb{R}^{4} with the space of 2\times2 Hermitian matrices, i.e. matrices H such that \overline{H}^{T}=H, in such a way that if H is the Hermitian matrix identified with the event x then det(H)=|x|^{2}. Then we can define a covering map \Lambda:SL(2,\mathbb{C})\to L by identifying \Lambda(A)x with AH\overline{A}^{T}. We have that \Lambda(A)\in L since |\Lambda(A)x|^{2}=det(AH\overline{A}^{T})=det(A)det(H)\overline{det(A)}=det(H)=|x|^{2}, using det(A)=1. It can be shown that \Lambda is a 2-1 homomorphism of Lie groups.
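
One standard choice of identification (the particular matrix is my choice; any identification with det(H)=|x|^{2} works) sends the event x=(t,x,y,z) to

H=\left(\begin{array}{cc} t+z& x-iy\\ x+iy& t-z\end{array}\right),\qquad det(H)=t^{2}-x^{2}-y^{2}-z^{2}=|x|^{2}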

Now, there are two important irreducible representations for SL(2,\mathbb{C}) on \mathbb{C}^{2}, the “spin \frac{1}{2}” representations given by multiplication A\left(\begin{array}{c} z_{1}\\ z_{2}\end{array}\right) and multiplication by the adjoint \overline{A}^{T}\left(\begin{array}{c} z_{1}\\z_{2}\end{array}\right). The Dirac representation is the direct sum of these representations \left(\begin{array}{cc}A& 0\\ 0&\overline{A}^{T}\end{array}\right) \left(\begin{array}{c}z_{1}\\z_{2}\\z_{3}\\z_{4}\end{array}\right).

Let \pi:FM\to M be the orthonormal frame bundle for spacetime. Its fibers F_{m}M are ordered orthonormal bases of T_{m}M, or equivalently isometries p:\mathbb{R}^{4}\to T_{m}M. There is a right action of O(1,3) given by right composition pB which makes the frame bundle an O(1,3)-bundle. We say that M is space and time orientable iff FM has 4 components and a choice of component FM_{0} is a space and time orientation. Then the restriction \pi:FM_{0}\to M is an L-bundle.

The solder form is an \mathbb{R}^{4}-valued 1-form \phi on FM_{0} given by \phi_{p}(X)=p^{-1}(\pi_{*}(X)). The torsion of a connection \theta on FM_{0} is \Theta=d\phi+\theta\wedge\phi. It turns out that there is a unique connection whose torsion is \Theta=0. This is the Levi-Civita connection \theta.

A spin structure on M is a manifold SM and a smooth map \lambda:SM\to FM_{0} such that \pi\circ\lambda:SM\to M is an SL(2,\mathbb{C})-bundle with \lambda(pA)=\lambda(p)\Lambda(A). We can define a connection \tilde{\theta} on SM by \tilde{\theta}=\Lambda_{*}^{-1}\lambda^{*}\theta where \Lambda_{*} is the isomorphism of Lie algebras induced by \Lambda:SL(2,\mathbb{C})\to L.

Now consider sections \psi of the vector bundle associated to SM by the Dirac representation. Dirac’s idea was to introduce an operator \not\hspace{-4pt}D such that \not\hspace{-4pt}D^{2}=\square, i.e. the Dirac operator is the “square root” of the d’Alembert operator. A full understanding of the Dirac operator requires Clifford algebras, i.e. the algebra generated by Minkowski space modulo the relation v^{2}=\eta(v,v). It turns out that the smallest representation \gamma of this Clifford algebra is 4-dimensional, which is why we need a 4-dimensional representation of SL(2,\mathbb{C}) as well. Then we can define the Dirac operator as \not\hspace{-4pt}D=\eta(\gamma,\nabla), where \nabla is the connection associated to \tilde{\theta}, by contracting the \gamma matrices with the covariant derivatives using \eta.

In more detail, for the d’Alembertian on Minkowski spacetime, \square=\frac{\partial^{2}}{\partial t^{2}}-\frac{\partial^{2}}{\partial x^{2}}-\frac{\partial^{2}}{\partial y^{2}}-\frac{\partial^{2}}{\partial z^{2}}, define

\not\hspace{-4pt}D=\left(\begin{array}{cccc} 1& 0& 0& 0\\ 0 & 1& 0& 0\\ 0 & 0& -1& 0\\ 0 & 0& 0& -1\end{array}\right)\frac{\partial}{\partial t}+\left(\begin{array}{cccc} 0& 0& 0& 1\\ 0 & 0& 1& 0\\ 0 & -1& 0& 0\\ -1& 0& 0& 0\end{array}\right)\frac{\partial}{\partial x}

+\left(\begin{array}{cccc} 0& 0& 0& -i\\ 0 & 0& i& 0\\ 0 & i& 0& 0\\ -i& 0& 0& 0\end{array}\right)\frac{\partial}{\partial y}+\left(\begin{array}{cccc}0& 0& 1& 0\\ 0 & 0& 0& -1\\ -1& 0& 0& 0\\ 0 & 1& 0& 0\end{array}\right)\frac{\partial}{\partial z}

We can work out that \not\hspace{-4pt}D^{2}=\square.
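
The check reduces to the Clifford relation \gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}=2\eta^{\mu\nu}, which the four matrices above satisfy. Writing \not\hspace{-4pt}D=\gamma^{\mu}\partial_{\mu} and using the fact that partial derivatives commute,

\not\hspace{-4pt}D^{2}=\gamma^{\mu}\gamma^{\nu}\partial_{\mu}\partial_{\nu}=\frac{1}{2}(\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu})\partial_{\mu}\partial_{\nu}=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}=\square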

Then we demand that the Dirac equation holds, \not\hspace{-4pt}D\psi=m\psi. This gives us a theory of a spin-\frac{1}{2} particle, an electron or positron, but we have not yet coupled it to electromagnetism.

Right now, our notion of an electron is that it is a field which takes its values in a representation of the spin group Spin(1,3)=SL(2,\mathbb{C}). In order to couple to the electromagnetic field, we will rather think of the electron taking its values in a representation of the charged spin group Spin_C(1,3)=U(1)\times SL(2,\mathbb{C})/(\mathbb{Z}/2).

We can splice a G_{1}-bundle \pi_{1}:P_{1}\to M with a G_{2}-bundle \pi_{2}:P_{2}\to M. Define P=\{(p_{1},p_{2})\in P_{1}\times P_{2}:\pi_{1}(p_{1})=\pi_{2}(p_{2})\} and \pi:P\to M by \pi(p_{1},p_{2})=\pi_{1}(p_{1})=\pi_{2}(p_{2}). This is a G_{1}\times G_{2}-bundle with (p_{1},p_{2})(g_{1},g_{2})=(p_{1}g_{1},p_{2}g_{2}). Given connections \omega_{1},\omega_{2} on P_{1},P_{2}, we can define a connection \omega on P by \omega=\pi^{1*}\omega_{1}\oplus\pi^{2*}\omega_{2} with \pi^{i}:P\to P_{i} given by \pi^{i}(p_{1},p_{2})=p_{i}.

Splice together our U(1)-bundle P with SM and also splice \omega with \tilde{\theta}. Consider the representation of U(1)\times SL(2,\mathbb{C}) on \mathbb{C}^{4} given by combining the Dirac representation with multiplication by e^{i\lambda}. This structure is \mathbb{Z}/2-invariant so defines a Spin_C(1,3) -bundle. We get an associated vector bundle with an associated connection and Dirac operator \not\hspace{-4pt}D. A charged electron coupled to electromagnetism is then a section \psi for which the Dirac equation \not\hspace{-4pt}D\psi=m\psi holds.

Electrodynamics on a Principal Bundle I

August 16, 2009

I want to switch gears and talk about some mathematical physics. Actually, I’m going to cross-post some exposition I wrote for a gauge theory seminar that we held at Stony Brook.

Maxwell’s equations in relativistically covariant form are

\partial_{\mu}F^{\mu\nu}=J^{\nu}
\partial_{[\lambda}F_{\mu\nu]}=0

Since F_{\mu\nu}=-F_{\nu\mu} we can define a 2-form F=F_{\mu\nu}dx^{\mu}dx^{\nu}. We can also define a 1-form J=J_\mu dx^\mu. Then we can re-express Maxwell’s equations using exterior differentiation and the Hodge star.

d*F=*J
dF=0

The continuity equation d*J=0 then follows from the inhomogeneous Maxwell equation, since d*J=dd*F=0. We expect from the homogeneous Maxwell equation that F=dA. In fact this is only true locally. This means that for every event m in our spacetime M there is an open set U with m\in U\subset M and a 1-form A_U on U with F|_U=dA_U. This follows from Poincare’s lemma.

We cannot say that A exists globally. For instance, if F=\sin\phi\, d\phi\, d\theta, the area form of the unit sphere in spherical coordinates, then dF=\cos\phi\, d\phi\, d\phi\, d\theta=0 since d\phi\, d\phi=0 by antisymmetry of the wedge product of 1-forms. Also, taking \Sigma to be the unit sphere, we know that \int_\Sigma F=4\pi. However, by Stokes’ theorem, if F=dA then \int_\Sigma F=\int_\Sigma dA=\int_{\partial \Sigma} A=0\neq 4\pi, since \Sigma has empty boundary. So, we cannot have F=dA globally.

Physically we interpret this as a magnetic monopole with magnetic charge 4\pi and worldline, the time axis, r=0. Mathematically, what is happening is that the complement of the time axis has nontrivial topology. Specifically its second de Rham cohomology is nontrivial. Intuitively, there is a kind of 2-dimensional, spherical “hole” in the complement of the time axis.

In addition to being nonglobal, the potential A is defined only up to addition of a closed 1-form since d(A+d\lambda)=dA=F. We would like to find a global mathematical object corresponding to the potential which doesn’t depend on our “choice of gauge”. This is our motivation for understanding connections on principal bundles.

We will assume G is a group of matrices. A principal G-bundle is a smooth surjection of manifolds \pi:P\to M with a free transitive right action R of G on P such that \pi^{-1}\pi(p)=pG and for any m\in M there is an open set U with m\in U\subset M and a diffeomorphism T_U=\pi\times t_U:\pi^{-1}(U)\to U\times G called a “local trivialization” such that t_U(pg)=t_U(p)g. Local trivializations correspond to the physical notion of “choice of gauge”.

Intuitively, P is a manifold composed of copies of the group G parametrized by the base space M. A good example is the boundary of the Mobius strip which can be thought of as a \mathbb{Z}/2-bundle over S^1.

A useful notion is that of a local section \sigma_U:U\to P with U an open set with U\subset M such that \pi(\sigma_U(m))=m. It can be shown that there is a canonical 1-1 correspondence between local sections \sigma_U and local trivializations T_U.

Define transition functions g_{UV}:U\cap V\to G by g_{UV}(m)=t_U(p)t_V(p)^{-1} where \pi(p)=m. This is well defined since t_U(pg)t_V(pg)^{-1}=t_U(p)gg^{-1}t_V(p)^{-1}=t_U(p)t_V(p)^{-1}. Transition functions correspond to the physical notion of “change of gauge”. We can relate any two local sections by \sigma_V=\sigma_U g_{UV}.
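
Note that directly from the definition, transition functions satisfy the cocycle condition

g_{UV}(m)g_{VW}(m)=g_{UW}(m)\quad\text{for } m\in U\cap V\cap W

since the middle factors t_{V}(p)^{-1}t_{V}(p) cancel.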

Let \mathfrak{g} be the Lie algebra for G. A connection \omega is a \mathfrak{g}-valued 1-form on P such that if X\in\mathfrak{g} and \tilde{X} is the tangent field on P given by \tilde{X}_{p}=\frac{d}{dt}pe^{tX}|_{t=0}, then \omega(\tilde{X})=X. Also we require that R(g)^{*}(\omega)=g^{-1}\omega g.

We define local connections on M by \omega_U=\sigma_U^*\omega. Local connections are related by \omega_{V}=g_{UV}^{-1}\omega_{U}g_{UV}+g_{UV}^{-1}dg_{UV}.

We define curvature \Omega=d\omega+\frac{1}{2}[\omega,\omega] meaning \Omega(X,Y)=d\omega(X,Y)+\frac{1}{2}[\omega(X),\omega(Y)]. We can define local curvature by \Omega_U=\sigma_U^*\Omega. Local curvatures are then related by \Omega_V=g_{UV}^{-1}\Omega_U g_{UV}. The Bianchi identity says d\Omega=[\omega,\Omega].

We are now in a position to define electrodynamics on a principal bundle.

Jones’ Polynomial

August 6, 2009

In the last post we investigated the linking number and writhe. These were numerical invariants of oriented links and framed knots. Now I will introduce new invariants whose values are polynomials.

For a given crossing, we can perform an operation called resolving or smoothing the crossing. We can do this in two ways.

[Figure: 0-smoothing]

[Figure: 1-smoothing]

Let us suppose that there is a polynomial invariant of links <L> in variables A,B,C such that, concentrating on a neighborhood of a crossing in a diagram for L, the following relation, called the skein relation, holds.

[Figure: Skein Relation]

Performing smoothings on all crossings reduces a link diagram to some number of circles in the plane. Let’s require that adding a disjoint circle \bigcirc to a link diagram L gives <L\bigcirc>=C<L>. Finally, we require a normalization: for the empty link, <>=1. From this we can deduce that the bracket of n circles is <\bigcirc\cdots\bigcirc>=C^n.
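
Written out, with L_0 and L_1 denoting the diagrams obtained by 0- and 1-smoothing the chosen crossing (I am assuming the figure’s convention that A multiplies the 0-smoothing), the defining relations are

<L>=A<L_0>+B<L_1>,\qquad <L\bigcirc>=C<L>,\qquad <>=1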

We need to check invariance under Reidemeister moves. Let’s start with Reidemeister 2.

[Figure: Reidemeister 2 Calculation]

Thus, in order for the bracket to be invariant we must have A^2+ABC+B^2=0 and AB=1. Solving for B,C in terms of A, we get B=A^{-1},C=-A^2-A^{-2}.
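
Explicitly, AB=1 forces B=A^{-1}, and substituting into the first equation gives

A^{2}+ABC+B^{2}=A^{2}+C+A^{-2}=0,\qquad\text{so}\qquad C=-A^{2}-A^{-2}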

The nice thing now is that Reidemeister 3 comes along for free by using invariance under Reidemeister 2.

[Figure: Reidemeister 3 Calculation]

Performing Reidemeister 1 on the other hand does not leave the bracket invariant. However, we can see that opposite Reidemeister 1 moves cancel so that the bracket is invariant under the framed Reidemeister 1 move.

[Figure: Reidemeister 1 Calculation]

Consequently, the bracket is an invariant of framed links whose values are polynomials in A and A^{-1}. To calculate it, take a blackboard diagram for the framed link and apply the skein relation, the circle relation and the normalization relation until you reach the answer.

The bracket was introduced by Kauffman as an elementary way to define the Jones’ polynomial, an invariant of oriented links which was originally derived using some difficult algebra. We can define the Jones’ polynomial by V(L)=(-A)^{-3TotWr(L)}<L>|_{A=q^{1/4}}. Here TotWr(L), the total writhe, is the sum of the signs of all crossings in the diagram, and it is this factor which makes V(L) invariant under Reidemeister 1 moves.

The Kauffman bracket and Jones’ polynomial are very closely related, in a similar way to how the writhe and linking numbers are closely related. Following the discovery of the Jones’ polynomial, there was a great deal of interest in knot theory. The Jones’ polynomial showed new connections between topology on the one hand and representation theory and quantum physics on the other.

Invariants

July 14, 2009

How can we tell if two tangles (or links, or knots) are different? That is, how can we tell that we cannot move the strings around in the allowed ways and get from one tangle to the other? We find invariants which can tell the difference. The best way to explain what an invariant is, is to give an example. The component number of a tangle is the number of strings in the tangle. Clearly if two tangles have different numbers of strings then they are not the same. For example, the trefoil knot has component number 1 and the Hopf link has component number 2.

[Figure: Trefoil Knot]

[Figure: Hopf Link]

An invariant is some mathematical object, like a number or a polynomial, that we can associate to tangles (or links, or knots) and that depends only on the tangle-type. For instance, the component number of a tangle doesn’t change when the strings move about or are stretched. Therefore, it is an invariant.

The component number is a rather blunt invariant. What if we want to tell the difference between tangles with the same component number? Let’s define an invariant for links with component number 2. We will call it the linking number. The linking number is actually an invariant for “oriented” links with component number 2. Oriented means that each string in the tangle comes with a preferred direction. We indicate this in a diagram by drawing an arrow on each string.

[Figure: Oriented Hopf Link]

Whenever two different strings cross we can use the right hand rule to assign a positive or negative value to the crossing. Put your thumb in the direction (according to the orientation) of the over-strand and your fingers in the direction of the under-strand. If your palm is facing up (away from the screen) then it is a positive crossing and if your palm is facing down (towards the screen) then it is a negative crossing.

[Figure: Signs of Oriented Crossings]

Now think of our link as having components (strings) called A and B. The linking number Lk(A,B) is the sum of the signs of the crossings in which A crosses over B. In order to see that the linking number is an invariant we need to analyze its behavior under Reidemeister moves.

[Figure: Reidemeister 1]

Consider the first Reidemeister move. The left part of the equation has a crossing, but it comes from only 1 component, so it contributes 0 to the linking number. The same applies to the right part of the equation. The middle part of the equation has no crossings and so it contributes 0 to the linking number. Thus the linking number is invariant under Reidemeister 1.

[Figure: Reidemeister 2]

Consider the second Reidemeister move. There are two cases: either the strands come from the same component or from different components. In the first case, the left side of the equation contributes 0 to the linking number. In the second case, no matter which orientation there is on the strands, the two crossings have opposite signs and so contribute 0 to the linking number. In either case, the right side of the equation has no crossings and so contributes 0 to the linking number. Thus the linking number is invariant under Reidemeister 2.

[Figure: Reidemeister 3]

Consider Reidemeister 3. Notice that each pair of strands cross in the same way but in different places on each side of the equation. Thus, no matter which components the strands belong to, nor which orientation we give them, each side contributes the same to the linking number. Thus the linking number is invariant under Reidemeister 3.

Thus, the linking number Lk(A,B) is an invariant of 2 component oriented links. Even better, it’s symmetric: Lk(A,B)=Lk(B,A). So we can calculate it by summing the signs of the crossings where B crosses over A.

We can easily calculate the linking number for the oriented Hopf link pictured above, Lk(A,B)=-1.

What happens if we try to calculate the self-linking number of a knot, Lk(K,K)? Unfortunately, it is no longer invariant under Reidemeister 1, since the argument we used to prove invariance required that we were calculating the linking number Lk(A,B) between different components A and B. You can see that the arguments for Reidemeister 2 and 3 did not require that the components be different, so the self-linking number, which we shall call the writhe Wr(K)=Lk(K,K), is invariant under Reidemeister 2 and 3. Furthermore, it does not depend on the orientation, since switching the orientation will not change the sign of a crossing (the orientation switches on both strands, so the sign is preserved).

In order to remedy the problem of non-invariance of the writhe under Reidemeister 1, we introduce a new property of tangles, “framing”. If orientation can be thought of as arrows going parallel to the tangle, then framing can be thought of as arrows going perpendicular to the tangle. If we extend the tangle along these arrows we obtain a “ribbon”, that is a tangle whose components are 2-dimensional surfaces. Now the self-linking number makes sense, as the linking number of the two edges of the ribbon.

We can project framed tangles in such a way that the ribbon is flattened in the projection. Then we need only draw the tangle using strings as before, understanding that each string extends perpendicularly in the plane of projection. The resulting framing is called the “blackboard framing”. Such diagrams represent equivalent tangles if and only if they are connected by a sequence of Reidemeister 2 & 3 moves and the framed Reidemeister 1 move.

[Figure: Framed Reidemeister 1]

Notice that no matter what orientation is chosen, both sides have negative crossings. Since the sign of the crossing cannot change, the writhe is invariant under framed Reidemeister 1. Thus, the writhe, Wr(K), is an invariant of framed knots.

We have introduced two interesting new invariants, the linking number Lk(A,B) and the writhe Wr(K) but in order to do so we had to add more structure to tangles, orientation and framing. That these structures are natural as well as closely related is hinted at by our study of invariants. The linking number is sensitive to orientation but not framing and the writhe is sensitive to framing but not orientation. We will have more to say about these features of tangles in the future.

Reidemeister’s theorem

July 1, 2009

Despite living in 3-space, our minds can only really grasp 2 dimensions, since our eyes project the 3-dimensional world onto our 2-dimensional retinae. Nevertheless, we have a limited perception of 3 dimensions that comes from “layering” the different views of our retinae.

We perform a similar operation on tangles, projecting the inherently 3-dimensional objects onto a surface. Look at the example from the last post. It was necessary to draw it 2-dimensionally since computer displays are 2-dimensional. Nevertheless, we obtain a perception of the 3rd dimension by drawing “crossings” where one strand crosses over another. Try to locate the crossings in the example.

[Figure: Example of a tangle]

These projections are the most common way of representing tangles. They are called “tangle diagrams”. When we project a tangle diagram we take care to allow the only singularities (places where the projection doesn’t look nice) to be “transverse double points” which we represent as crossings. We don’t allow any of the following singularities: cusps, tangencies, or triple points.

[Figure: Cusp]

[Figure: Tangency]

[Figure: Triple point]

We can guarantee that there are no such singularities: if any occur, we can remove them by slightly tilting our projection.

Now, how can we know if two tangle diagrams represent the same tangle? The answer is Reidemeister’s theorem: two tangle diagrams represent the same tangle if and only if they are connected by a sequence of Reidemeister moves. The pictures below demonstrate the 3 Reidemeister moves.

[Figure: Reidemeister 1]

[Figure: Reidemeister 2]

[Figure: Reidemeister 3]

Looking at these pictures, it should be intuitively clear that performing Reidemeister moves does not change the tangle which a tangle diagram represents. The first Reidemeister move consists of adding or removing a “kink”. The second Reidemeister move consists of sliding strands past each other. The third Reidemeister move consists of moving a strand past a crossing. Look at the third move again and try to understand it physically: grab the middle strand and pull it through the crossing until it’s on the other side.

The difficult part of the theorem is proving the “only if” part, that is, proving that the 3 Reidemeister moves suffice to transform one tangle diagram into any other tangle diagram which represents the same tangle. Notice, however, that in the course of performing each of the Reidemeister moves we run afoul of our disallowed singularities. Perform a Reidemeister 1 on a physical tangle, and as you get from one side of the equation to the other there will be a point in time where your projection has a cusp. Similarly, performing Reidemeister 2 will yield a tangency and Reidemeister 3 will yield a triple point.

One more important point is that Reidemeister moves are local. This means that if we have a large tangle we can perform Reidemeister moves on small pieces of the tangle. Let’s do an example to clarify.

[Figure: Example of Reidemeister's theorem]

We perform a single Reidemeister move locally in each equality. Try to identify where they occur.

Reidemeister’s theorem gives us the perfect tool for showing that two tangle diagrams represent the same tangle: just perform Reidemeister moves to get from one diagram to the other. How can we show that two tangle diagrams represent different tangles? We may try to connect them via Reidemeister moves and fail, but that doesn’t show anything; perhaps if we were smarter we could find the right sequence of Reidemeister moves. There are infinitely many such sequences, so there’s no hope of testing them all. The answer to this puzzle is to look for invariants. But that’s the subject for another post!

My, what a tangled web we weave

July 1, 2009

Hi, my name is Eitan. I’m a student of mathematics. Welcome to my personal blog. I intend to post mathematical exposition, but since this is a personal blog I will also post political thoughts or anything else that comes to mind. Let’s start with some math.

What’s a tangle? Tangles are very important objects in topology. Physically, they are just a number of strings in our usual 3-dimensional space whose endpoints (if they have endpoints) are attached to some boundary surface. We allow the strings to move around freely in 3-space so long as the endpoints remain fixed and we cannot pass strings through each other (or over endpoints). We also allow the strings to stretch and compress as though they’re made of rubber. Here’s an example of a tangle:

[Figure: Example of a tangle]

Try to count how many strings there are.

Tangles are really generalizations of knots and links. A knot is a tangle made of only 1 closed string, meaning the string has no endpoints, it’s just a circle. A link is a tangle made of any number of closed strings. Notice that all knots are links and that all links are tangles. Another way of thinking about tangles is that they are “local” pictures of knots, that is, if we zoom in on a knot and look at a small neighborhood, that neighborhood will contain a tangle.

Let’s look at examples of knots and links.

[Figure: Trefoil Knot]

[Figure: Hopf Link]

By tracing along with your finger, verify that the trefoil knot has only 1 string while the Hopf link has 2 strings.

Recall that I said the endpoints of the strings in a tangle must be attached to some boundary surface. In the example the boundary surface comes in two pieces, a bottom and a top. This is one convention for tangles, the “monoidal category” convention. Another convention, the “planar algebra” convention, is that the surface has only one piece. Really, there’s no important difference between thinking in either convention. It’s only a question of convenience for a given application.

I think that’s enough for now. Stay tuned to hear about Reidemeister’s theorem.