Archive for the ‘Topology’ Category

Ribbon categories

October 23, 2009

In the last post I discussed the category of framed oriented tangles, which according to Shum’s theorem is a free ribbon category. As a corollary to Shum’s theorem, we may derive tangle invariants from any ribbon category. Let’s see how this works for the Kauffman bracket.

Consider planar diagrams, that is, curves in the plane. These are like tangle diagrams only without self-intersections, i.e. without crossings. Just like tangles, they form a monoidal category, since we can place them side by side or atop each other. Also just like tangles, they have duality cups and caps.

Cup and Cap

Inspired by the definition of the Kauffman bracket, we extend the category of planar diagrams by taking formal linear combinations with coefficients that are polynomials in A and A^{-1}, and mod out by the circle relation:

Circle Relation

This gives a braiding and twist as in the calculations for the Kauffman bracket.

Braiding

Twist

The resulting ribbon category is called the Temperley-Lieb category, named for the mathematical physicists Temperley and Lieb, who first encountered its underlying algebra in the context of statistical mechanics.
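To see the composition in this category concretely, here is a minimal sketch in Python (the encoding and names such as `compose` and `walk` are my own): a Temperley-Lieb diagram is stored as a perfect matching of its boundary points, and stacking two diagrams traces each strand through the glued middle boundary, counting any closed loops that form. By the circle relation, each loop would then contribute a factor C = -A^2 - A^{-2}.

```python
def compose(d1, d2):
    """Stack diagram d2 on top of d1.

    A diagram is a set of frozenset pairs over points ('b', i) / ('t', i).
    Returns (matching of the composite, number of closed loops formed);
    by the circle relation each loop contributes a factor C = -A^2 - A^{-2}.
    """
    def relabel(d, frm, to):
        return [tuple((to, i) if s == frm else (s, i) for (s, i) in pair)
                for pair in d]

    # d1's top boundary is glued to d2's bottom boundary: call it 'm' (middle)
    edges = relabel(d1, 't', 'm') + relabel(d2, 'b', 'm')
    incident = {}
    for k, (x, y) in enumerate(edges):
        incident.setdefault(x, []).append(k)
        incident.setdefault(y, []).append(k)
    used = [False] * len(edges)

    def walk(start, k):
        # follow the strand through middle points until it exits (or closes up)
        cur = start
        while True:
            used[k] = True
            x, y = edges[k]
            cur = y if cur == x else x
            if cur[0] != 'm':
                return cur
            free = [j for j in incident[cur] if not used[j]]
            if not free:          # came back to the start: a closed loop
                return cur
            k = free[0]

    matching = set()
    for p in incident:            # open strands start at outer points
        if p[0] == 'm':
            continue
        k = incident[p][0]
        if not used[k]:
            matching.add(frozenset({p, walk(p, k)}))
    loops = 0
    for k in range(len(edges)):   # whatever remains forms closed loops
        if not used[k]:
            loops += 1
            walk(edges[k][0], k)
    return matching, loops


# the cup-cap generator e of the Temperley-Lieb category on 2 strands
e = {frozenset({('b', 0), ('b', 1)}), frozenset({('t', 0), ('t', 1)})}
```

Composing e with itself returns e together with one closed loop, which is exactly the diagrammatic relation e^2 = C e.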

Now we have two examples of ribbon categories: the category of tangles and the Temperley-Lieb category. How else can we generate examples of ribbon categories? Recall that the category of finite dimensional vector spaces and linear maps forms a monoidal category with duals. We consider the subcategory of representations of an algebra A.

An algebra is a vector space equipped with a multiplication and unit satisfying the familiar properties of associativity and unitality. For example, given a vector space V, the space End(V) of endomorphisms of V, that is, linear maps V\to V, forms an algebra where the multiplication is composition of linear maps and the unit is the identity map 1_V. A representation of A is a vector space V together with a linear map \rho_V:A\to End(V) which preserves multiplication and unit.
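As a toy example (a sketch; the encoding and names are my own), take the 2-dimensional algebra A = k[x]/(x^2), with basis \{1, x\}, acting on V = k^2 by sending x to a nilpotent 2x2 matrix. That \rho preserves multiplication and unit is then a finite check:

```python
def mat_mul(M, N):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho(a, b):
    """Represent the element a*1 + b*x of k[x]/(x^2) on V = k^2:
    1 goes to the identity matrix, x to the nilpotent matrix [[0,1],[0,0]]."""
    return [[a, b], [0, a]]

def mul_A(u, v):
    """Multiplication in A: (a + b x)(c + d x) = ac + (ad + bc) x, since x^2 = 0."""
    (a, b), (c, d) = u, v
    return (a * c, a * d + b * c)

# rho preserves the unit and the multiplication (spot-checked):
assert rho(1, 0) == [[1, 0], [0, 1]]
u, v = (2, 3), (5, 7)
assert rho(*mul_A(u, v)) == mat_mul(rho(*u), rho(*v))
```

Note that \rho(x)^2 is the zero matrix, matching the relation x^2 = 0 in A.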

A Hopf algebra, in addition to having a multiplication and unit, also has maps \Delta:A\to A\otimes A and \epsilon:A\to k, called the comultiplication and counit, which are coassociative and counital; here k is the field of scalars. This guarantees that the category Rep_{fd}(A) of finite dimensional representations of A is monoidal, since we can define representations \rho_{V\otimes W}=(\rho_V\otimes \rho_W)\Delta and \rho_k=\epsilon. We also require a map S:A\to A called the antipode, which reverses the order of multiplication and is the convolution inverse of the identity. This guarantees that Rep_{fd}(A) has left duals with \rho_{V^*}=\rho_V S.
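Spelled out (a standard formulation, writing m for the multiplication, u:k\to A for the unit, and \epsilon for the counit), these axioms read:

```latex
% coassociativity and counitality of the comultiplication \Delta:
(\Delta \otimes \mathrm{id})\,\Delta = (\mathrm{id} \otimes \Delta)\,\Delta,
\qquad
(\epsilon \otimes \mathrm{id})\,\Delta = \mathrm{id} = (\mathrm{id} \otimes \epsilon)\,\Delta

% the antipode S is the convolution inverse of the identity map:
m\,(S \otimes \mathrm{id})\,\Delta \;=\; u\,\epsilon \;=\; m\,(\mathrm{id} \otimes S)\,\Delta
```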

If there are elements R\in A\otimes A and h\in A such that P_{V,W}(\rho_V\otimes\rho_W)(R) is a braiding and \rho_V(h) is a twist, where P_{V,W}:V\otimes W\to W\otimes V is the swap map P_{V,W}(v\otimes w)=w\otimes v, then we call A a ribbon Hopf algebra. Clearly Rep_{fd}(A) is then a ribbon category.

Surprisingly, ribbon Hopf algebras turn up in the study of Lie algebras. One may “quantize” a Lie algebra, deforming it by a formal parameter meant to mimic Planck’s constant \hbar, and the result is a ribbon Hopf algebra. This discovery led to a whole slew of new invariants and a new understanding of old invariants. For instance, the Jones’ polynomial and the Kauffman bracket are related to the quantization of the most basic Lie algebra sl(2,\mathbb{C})=su(2)\otimes\mathbb{C}=so(3)\otimes\mathbb{C}. Invariants of tangles derived from quantized Lie algebras are called Reshetikhin-Turaev invariants or simply quantum invariants. When applied to links they give polynomials in a variable q=e^\hbar.

The category of tangles

October 1, 2009

I want to get back to discussing tangles. So far we’ve been thinking about tangles entirely topologically. But as it turns out, tangles are also fundamentally algebraic objects. The algebraic gadget we need to understand tangles is that of a free ribbon category. Indeed, Shum’s theorem states that framed, oriented tangles form the morphisms of a free ribbon category on a single generator.

To begin to understand this deep statement we must start with the definition of a category. A category consists of a collection of objects A,B,C,\ldots along with a class (for technical reasons a class, not a set) of morphisms f,g,h,\ldots. Each morphism has a source object and a target object, so that we can think of a morphism as an arrow B\leftarrow A. There is a composition operation on morphisms, written gf, which is defined only if the source of g is the target of f. There is also an identity morphism 1_A for every object A whose source and target are both A. Finally, we require that composition be associative, (hg)f=h(gf), and unital, 1_B f=f=f 1_A.
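To make the axioms concrete, here is a minimal sketch in Python (the dict encoding is my own) modeling morphisms between finite sets as dictionaries, with composition and identities:

```python
def compose(g, f):
    """Composition gf of morphisms-as-dicts, defined only when the
    source of g (its set of keys) is the target of f."""
    return {a: g[f[a]] for a in f}

def identity(obj):
    """Identity morphism 1_obj on a finite set obj."""
    return {a: a for a in obj}

# three composable morphisms: f goes A -> B, g goes B -> C, h goes C -> D
f = {'a': 1, 'b': 2}
g = {1: 'x', 2: 'y', 3: 'x'}
h = {'x': 0, 'y': 1}

# associativity and unitality, checked on these examples
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
assert compose(f, identity({'a', 'b'})) == f
assert compose(identity({1, 2, 3}), f) == f
```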

Tangles form morphisms in a category. Just let the objects be finite collections of points in a plane; then tangles form morphisms with their bottom endpoints as source and their top endpoints as target (or vice versa; it’s just a convention). We can compose tangles by placing them one atop the other, so long as their sources and targets match up. Identity tangles are simply a bunch of vertical lines connecting matching top and bottom endpoints. Clearly, associativity and unitality hold, so tangles do indeed form a category.

We can form a category of tangles with a completely different composition however. Instead of placing tangles atop each other, we can place them side by side. Now the empty tangle is the identity. Also, in this category there is only 1 object since we can always place tangles next to each other; there’s nothing to match up! Something with 2 different categorical structures like this is called, logically enough, a 2-category. But, as we said, the second category structure has a unique object. These kinds of 2-categories are so common they get their own name, monoidal categories. Thus, tangles form the morphisms of a monoidal category.

Actually, that’s not the end of the story! We could put the tangles side by side in different ways: since the endpoints live in planes, we have 2 dimensions to work with. The two independent ways of placing tangles next to each other, in addition to the standard composition of placing them atop each other, turn tangles into a 3-category. Since both ways of putting tangles next to each other can be done without worrying about matching, this is a special kind of 3-category called a doubly monoidal category. Doubly monoidal categories always have a way of transforming the monoidal product (side-by-side placement) into its opposite (side-by-side placement but in the reverse order). This comes from the fact that the 2 monoidal structures are essentially the same. Try to think about why this is true for tangles.

Let’s think about how to transform two points sitting side by side into the same two points sitting in the opposite order. As we transform in two dimensions rotating one around the other, we trace out the familiar crossing. Of course we can rotate them in the other direction and get the other crossing.

Crossings

In general, this sort of thing is called a braiding, and doubly monoidal categories always have them. For this reason, they’re also called braided monoidal categories.

Orientation means that the endpoints of our tangle are more than just points. They have directions associated with them, either up or down. We call this a dual structure, since the dual of up is down. This is familiar from linear algebra, where to each vector space V we can associate a dual vector space V^* of linear maps from V to the field of scalars. The important structures relating vector spaces and their duals are the evaluation and coevaluation maps. Evaluation takes a dual vector f and a vector v and evaluates to the scalar f(v). Coevaluation makes use of the isomorphism V\otimes V^*=End(V), where End(V) is the space of endomorphisms of V. The coevaluation takes a scalar to that scalar multiple of the identity. Now, we have the same sort of structure morphisms in the category of tangles: the caps and cups. This makes the category of tangles a monoidal category with duals, just like the category of linear transformations of vector spaces.

Cup and Cap
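The topological fact that a cup followed by a cap straightens into the identity strand corresponds algebraically to the “zig-zag” identity (\mathrm{id}_V\otimes ev)(coev\otimes \mathrm{id}_V)=\mathrm{id}_V. Here is a minimal sketch verifying it for V=k^2 in plain Python (the encoding is my own):

```python
n = 2  # dimension of V

def coev():
    """coev(1) = sum_i e_i (x) e^i, stored as an n x n array c[i][j]."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def ev(f, v):
    """Evaluation V* (x) V -> k: pair a dual vector f with a vector v."""
    return sum(f[j] * v[j] for j in range(n))

def zigzag(v):
    """(id (x) ev)((coev (x) id)(v)): the straightened S-bend."""
    c = coev()
    # (coev (x) id)(v) = sum_{i,j} c[i][j] e_i (x) e^j (x) v;
    # applying ev to the last two tensor slots contracts e^j against v
    return [ev(c[i], v) for i in range(n)]
```

The assertion zigzag(v) == v for every v is exactly the picture of a cup next to a cap straightening out into a single vertical strand.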

Since cups and caps may be oriented in 2 different ways, we have 2 dual structures, a left and a right dual. The same can be said of the category of vector spaces, but there one simply identifies left and right duals. In the category of tangles it’s not so easy. Instead one must build a natural isomorphism between left and right duals, and for this you need a twist. A twist is what it sounds like: take your endpoints and twist them around 360 degrees. This is where framing comes into play. If you do this to a single endpoint, you get a ribbon with a full twist in it. Its blackboard diagram looks like either side of the framed Reidemeister 1 move.

Framed Reidemeister 1

Twist on 1 strand

What if you had 2 endpoints? Think about this for a bit: you get 2 crossings between 2 ribbons, each of which has a full twist in it. Luckily, this is exactly the compatibility condition between the braiding and the twist that is required of a so-called ribbon category.

Twist on 2 strands

To recap, a ribbon category is a braided monoidal category with duals and a twist. All of these may be defined algebraically but have intuitive topological definitions in the category of tangles. The fact that algebra may be thought about topologically can be rigorously summed up in the statement of Shum’s theorem given at the beginning of the post: framed, oriented tangles form the morphisms of a free ribbon category on a single generator.

Jones’ Polynomial

August 6, 2009

In the last post we investigated the linking number and writhe. These were numerical invariants of oriented links and framed knots. Now I will introduce new invariants whose values are polynomials.

For a given crossing, we can perform an operation called resolving or smoothing the crossing. We can do this in two ways.

0-smoothing

1-smoothing

Let us suppose that there is a polynomial invariant of links <L> in variables A,B,C such that, concentrating on a neighborhood of a crossing in a diagram for L, the following relation, called the skein relation, holds.

Skein Relation

Performing smoothings on all crossings reduces a link diagram to some number of circles in the plane. Let’s require that adding a circle \bigcirc to a link diagram L gives <L\bigcirc>=C<L>. Finally, we require a normalization: the empty link has <>=1. From this we can deduce that the bracket of n circles is <\bigcirc\cdots\bigcirc>=C^n.

We need to check invariance under Reidemeister moves. Let’s start with Reidemeister 2.

Reidemeister 2 Calculation

Thus, in order for the bracket to be invariant we must have A^2+ABC+B^2=0 and AB=1. Solving for B,C in terms of A, we get B=A^{-1},C=-A^2-A^{-2}.
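A quick numerical sanity check of this solution (a sketch in Python): with B = A^{-1} and C = -A^2 - A^{-2}, both Reidemeister 2 constraints hold for any nonzero value of A.

```python
def r2_constraints(A):
    """Evaluate the two Reidemeister-2 constraints at a numerical value
    of A, using the solution B = A^{-1}, C = -A^2 - A^{-2}."""
    B = 1 / A
    C = -A**2 - A**-2
    return (A**2 + A * B * C + B**2, A * B)

for A in (2.0, 0.5, -3.0, 1.7):
    first, second = r2_constraints(A)
    assert abs(first) < 1e-9          # A^2 + ABC + B^2 = 0
    assert abs(second - 1) < 1e-9     # AB = 1
```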

The nice thing now is that Reidemeister 3 comes along for free by using invariance under Reidemeister 2.

Reidemeister 3 Calculation

Performing Reidemeister 1 on the other hand does not leave the bracket invariant. However, we can see that opposite Reidemeister 1 moves cancel so that the bracket is invariant under the framed Reidemeister 1 move.

Reidemeister 1 Calculation

Consequently, the bracket is an invariant of framed links whose values are polynomials in A and A^{-1}. To calculate it, take a blackboard diagram for the framed link and apply the skein relation, the circle relation and the normalization relation until you reach the answer.
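As a worked example, the bracket can be organized as a state sum: resolve every crossing, weight a state by A for each 0-smoothing, A^{-1} for each 1-smoothing, and C = -A^2 - A^{-2} for each resulting circle. Here is a sketch in Python; the circle counts below are read off the standard 2-crossing diagram of the Hopf link and are an assumption you should check against your own picture.

```python
def bracket_states(circle_counts, A):
    """Kauffman bracket as a state sum, at a numerical value of A.

    circle_counts maps each state (a tuple with one 0/1 smoothing
    choice per crossing) to the number of circles left after smoothing.
    Weights: A per 0-smoothing, A^{-1} per 1-smoothing, and a factor
    C = -A^2 - A^{-2} per circle (the empty link is normalized to 1).
    """
    C = -A**2 - A**-2
    total = 0.0
    for state, circles in circle_counts.items():
        total += A**state.count(0) * A**(-state.count(1)) * C**circles
    return total

# circle counts for the standard 2-crossing Hopf link diagram (assumed data)
hopf = {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2}
```

At any sample value of A this agrees with C\cdot(-A^4-A^{-4}), i.e. with the familiar value <Hopf> = -A^4-A^{-4} once the bracket is renormalized so that a single unknot has bracket 1.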

The bracket was introduced by Kauffman as an elementary way to define Jones’ polynomial, an invariant of oriented links which was originally derived using some difficult algebra. We can define the Jones’ polynomial by V(L)=(-A^{-3})^{TotWr(L)}<L>|_{A=q^{1/4}}. Here TotWr(L), the total writhe, is the sum of the signs of all crossings in the diagram, and it is this factor which makes V(L) invariant under Reidemeister 1 moves.
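One can check numerically why the writhe prefactor restores Reidemeister 1 invariance. The sketch below assumes the usual convention that a positive kink multiplies the bracket by -A^3 and a negative kink by -A^{-3}:

```python
def kink_factor(A, sign):
    """Factor by which a kink of the given sign (+1 or -1) multiplies
    the bracket: -A^3 for a positive kink, -A^{-3} for a negative one."""
    return -A**(3 * sign)

def writhe_correction(A, writhe):
    """The prefactor (-A^{-3})^writhe appearing in Jones' polynomial."""
    return (-A**-3) ** writhe

# adding one kink changes TotWr by `sign` and multiplies the bracket
# by kink_factor; the two effects cancel exactly:
A = 1.7
for sign in (+1, -1):
    assert abs(writhe_correction(A, sign) * kink_factor(A, sign) - 1) < 1e-9
```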

The Kauffman bracket and Jones’ polynomial are very closely related, in a similar way to how the writhe and linking numbers are closely related. Following the discovery of the Jones’ polynomial, there was a great deal of interest in knot theory. The Jones’ polynomial showed new connections between topology on the one hand and representation theory and quantum physics on the other.

Invariants

July 14, 2009

How can we tell if two tangles (or links, or knots) are different? That is, how can we tell that no allowed motion of the strings gets us from one tangle to the other? We find invariants which can tell the difference. The best way to explain what an invariant is, is to give an example. The component number of a tangle is the number of strings in the tangle. Clearly, if two tangles have different numbers of strings then they are not the same. For example, the trefoil knot has component number 1 and the Hopf link has component number 2.

Trefoil Knot

Hopf Link

An invariant is some mathematical object, like a number or a polynomial that we can associate to tangles (or links, or knots) that depends only on the tangle-type. For instance the component number of a tangle doesn’t change when the strings move about or are stretched. Therefore, it is an invariant.

The component number is rather a blunt invariant. What if we want to tell the difference between tangles with the same component number? Let’s define an invariant for links with component number 2. We will call it the linking number. The linking number is actually an invariant for “oriented” links with component number 2. Oriented means that each string in the tangle comes with a preferred direction. We indicate this in a diagram by drawing an arrow on each string.

Oriented Hopf Link

Whenever two different strings cross we can use the right hand rule to assign a positive or negative value to the crossing. Put your thumb in the direction (according to the orientation) of the over-strand and your fingers in the direction of the under-strand. If your palm is facing up (away from the screen) then it is a positive crossing and if your palm is facing down (towards the screen) then it is a negative crossing.

Signs of Oriented Crossings

Now think of our link as having components (strings) called A and B. The linking number Lk(A,B) is the sum of the signs of the crossings in which A crosses over B. In order to see that the linking number is an invariant we need to analyze its behavior under Reidemeister moves.

Reidemeister 1

Consider the first Reidemeister move. The left part of the equation has a crossing, but it comes from only 1 component, so it contributes 0 to the linking number. The same applies to the right part of the equation. The middle part of the equation has no crossings and so it contributes 0 to the linking number. Thus the linking number is invariant under Reidemeister 1.

Reidemeister 2

Consider the second Reidemeister move. There are two cases: either the strands come from the same component or from different components. In the first case, the left side of the equation contributes 0 to the linking number. In the second case, no matter which orientation the strands carry, the two crossings have opposite signs and so contribute 0 to the linking number. In either case, the right side of the equation has no crossings and so contributes 0 to the linking number. Thus the linking number is invariant under Reidemeister 2.

Reidemeister 3

Consider Reidemeister 3. Notice that each pair of strands cross in the same way but in different places on each side of the equation. Thus, no matter which components the strands belong to, nor which orientation we give them, each side contributes the same to the linking number. Thus the linking number is invariant under Reidemeister 3.

Thus, the linking number Lk(A,B) is an invariant of 2 component oriented links. Even better, it’s symmetric Lk(A,B)=Lk(B,A). So we can calculate it by summing the signs of the crossings where B crosses over A.

We can easily calculate the linking number for the oriented Hopf link pictured above, Lk(A,B)=-1.
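That calculation can be sketched in a few lines of Python. The crossing list below is my own encoding of the oriented Hopf link pictured above: each entry records the over-strand, the under-strand, and the right-hand-rule sign, data that should be checked against the picture.

```python
def linking_number(crossings, over, under):
    """Sum the signs of the crossings in which `over` crosses over `under`.

    crossings: (over_component, under_component, sign) triples, one per
    crossing between the two components.
    """
    return sum(s for o, u, s in crossings if o == over and u == under)

# the two crossings of the pictured oriented Hopf link (assumed data)
hopf_crossings = [('A', 'B', -1), ('B', 'A', -1)]

assert linking_number(hopf_crossings, 'A', 'B') == -1
# symmetry Lk(A,B) = Lk(B,A):
assert (linking_number(hopf_crossings, 'A', 'B')
        == linking_number(hopf_crossings, 'B', 'A'))
```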

What happens if we try to calculate the self-linking number of a knot, Lk(K,K)? Unfortunately it is no longer invariant under Reidemeister 1, since the argument we used to prove invariance required that we were calculating the linking number Lk(A,B) between different components A and B. You can see that the arguments for Reidemeister 2 and 3 did not require that the components were different, so the self-linking number, which we shall call the writhe Wr(K)=Lk(K,K), is invariant under Reidemeister 2 and 3. Furthermore, it does not depend on the orientation, since switching the orientation will not change the sign of a crossing (the orientation switches on both strands, so the sign is preserved).

In order to remedy the non-invariance of the writhe under Reidemeister 1, we introduce a new property of tangles, “framing”. If orientation can be thought of as arrows going parallel to the tangle, then framing can be thought of as arrows going perpendicular to the tangle. If we extend the tangle along these arrows we obtain a “ribbon”, that is, a tangle whose components are 2-dimensional surfaces. Now the self-linking number makes sense, as the linking number of the two edges of the ribbon.

We can project framed tangles in such a way that the ribbon is flattened in the projection. Then we need only draw the ribbon using a string as before and we can extend that string perpendicularly in the plane of projection. The resulting framing is called the “blackboard framing”. Such diagrams represent equivalent tangles if and only if they are connected by a sequence of Reidemeister 2 & 3 moves and the framed Reidemeister 1 move.

Framed Reidemeister 1

Notice that no matter what orientation is chosen, both sides have negative crossings. Since the sign of the crossing cannot change, the writhe is invariant under framed Reidemeister 1. Thus, the writhe, Wr(K), is an invariant of framed knots.

We have introduced two interesting new invariants, the linking number Lk(A,B) and the writhe Wr(K) but in order to do so we had to add more structure to tangles, orientation and framing. That these structures are natural as well as closely related is hinted at by our study of invariants. The linking number is sensitive to orientation but not framing and the writhe is sensitive to framing but not orientation. We will have more to say about these features of tangles in the future.

Reidemeister’s theorem

July 1, 2009

Despite living in 3-space, our minds can only really grasp 2 dimensions, since our eyes project the 3-dimensional world onto our 2-dimensional retinae. Nevertheless, we have a limited perception of 3 dimensions that comes from “layering” the different views of our retinae.

We perform a similar operation on tangles, projecting the inherently 3-dimensional objects onto a surface. Look at the example from the last post. It was necessary to draw it 2-dimensionally since computer displays are 2-dimensional. Nevertheless, we obtain a perception of the 3rd dimension by drawing “crossings” where one strand crosses over another. Try to locate the crossings in the example.

Example of a tangle

These projections are the most common way of representing tangles. They are called “tangle diagrams”. When we project a tangle, we take care that the only singularities (places where the projection doesn’t look nice) are “transverse double points”, which we represent as crossings. We don’t allow any of the following singularities: cusps, tangencies, or triple points.

Cusp

Tangency

Triple point

We can guarantee that there are no such singularities by slightly tilting our projection whenever one occurs.

Now, how can we know if two tangle diagrams represent the same tangle? The answer is Reidemeister’s theorem: two tangle diagrams represent the same tangle if and only if they are connected by a sequence of Reidemeister moves. The pictures below demonstrate the 3 Reidemeister moves.

Reidemeister 1

Reidemeister 2

Reidemeister 3

Looking at these pictures, it should be intuitively clear that performing Reidemeister moves does not change the tangle which a tangle diagram represents. The first Reidemeister move consists of adding or removing a “kink”. The second Reidemeister move consists of sliding strands past each other. The third Reidemeister move consists of moving a strand past a crossing. Look at the third move again and try to understand it physically: grab the middle strand and pull it through the crossing until it’s on the other side.

The difficult part of the theorem is proving the “only if” part, that is, proving that the 3 Reidemeister moves suffice to transform one tangle diagram into any other tangle diagram which represents the same tangle. Notice, however, that in the course of performing each of the Reidemeister moves we run afoul of our disallowed singularities. Perform a Reidemeister 1 on a physical tangle and as you get from one side of the equation to the other there will be a point in time where your projection has a cusp. Similarly, performing Reidemeister 2 will yield a tangency and Reidemeister 3 will yield a triple point.

One more important point is that Reidemeister moves are local. This means that if we have a large tangle we can perform Reidemeister moves on small pieces of the tangle. Let’s do an example to clarify.

Example of Reidemeister's theorem

We perform a single Reidemeister move locally in each equality. Try to identify where they occur.

Reidemeister’s theorem gives us the perfect tool for showing that two tangle diagrams represent the same tangle: just perform Reidemeister moves to get from one diagram to the other. How can we show that two tangle diagrams represent different tangles? We may try to connect them via Reidemeister moves and fail, but that doesn’t show anything. Perhaps if we were smarter we could find the right sequence of Reidemeister moves. There are infinitely many such sequences, so there is no hope of testing them all. The answer to this puzzle is to look for invariants. But that’s the subject for another post!

My, what a tangled web we weave

July 1, 2009

Hi, my name is Eitan. I’m a student of mathematics. Welcome to my personal blog. I intend to post mathematical exposition, but since this is a personal blog I will also post political thoughts or anything else that comes to mind. Let’s start with some math.

What’s a tangle? Tangles are very important objects in topology. Physically, they are just a number of strings in our usual 3-dimensional space whose endpoints (if they have endpoints) are attached to some boundary surface. We allow the strings to move around freely in 3-space so long as the endpoints remain fixed and we cannot pass strings through each other (or over endpoints). We also allow the strings to stretch and compress as though they’re made of rubber. Here’s an example of a tangle:

Example of a tangle

Try to count how many strings there are.

Tangles are really generalizations of knots and links. A knot is a tangle made of only 1 closed string, meaning the string has no endpoints, it’s just a circle. A link is a tangle made of any number of closed strings. Notice that all knots are links and that all links are tangles. Another way of thinking about tangles is that they are “local” pictures of knots, that is, if we zoom in on a knot and look at a small neighborhood, that neighborhood will contain a tangle.

Let’s look at examples of knots and links.

Trefoil Knot

Hopf Link

By tracing along with your finger, verify that the trefoil knot has only 1 string while the Hopf link has 2 strings.

Recall that I said the endpoints of the strings in a tangle must be attached to some boundary surface. In the example the boundary surface comes in two pieces, a bottom and a top. This is one convention for tangles, the “monoidal category” convention. Another convention, the “planar algebra” convention, is that the surface has only one piece. Really, there’s no important difference between thinking in either convention. It’s only a question of convenience for a given application.

I think that’s enough for now. Stay tuned to hear about Reidemeister’s theorem.