For the project I’m working on, I needed to know the basics of Riemannian connections. Connections confused the hell out of me until I took a few days to really absorb them. I’m writing down my interpretation here so that I can burn it into the neurons, and hopefully help someone else trying to understand the same topic.
Covariant Derivatives of Scalar Functions
A connection is also called a covariant derivative. One of the principles of differential geometry is that everything should behave the same regardless of which coordinate system you work in, so we’d like a way to take the derivative of a quantity along an arbitrary direction. When we consider a scalar function $f$, the covariant derivative is just the directional derivative. If $v = \sum_i v^i e_i$ in the coordinate frame $\{e_i\}$, then:

$$\nabla_v f = \sum_i v^i \frac{\partial f}{\partial x^i}$$
I found it extremely useful to think of the covariant derivative as a linear operator:

$$\nabla_v = \sum_i v^i \frac{\partial}{\partial x^i}$$
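To make the operator concrete, here’s a small numerical sketch of the scalar case. The helper name and the test function are my own illustrative choices, and the derivative is approximated with central differences rather than computed symbolically:

```python
import numpy as np

def f(p):
    """An arbitrary scalar field f(x, y) = x^2 + 3xy, just for illustration."""
    x, y = p
    return x**2 + 3 * x * y

def directional_derivative(f, p, v, h=1e-6):
    """Central-difference approximation of (nabla_v f)(p) = sum_i v^i df/dx^i."""
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

p = np.array([1.0, 2.0])
# Analytically, grad f = (2x + 3y, 3x) = (8, 3) at (1, 2).
print(directional_derivative(f, p, np.array([1.0, 0.0])))  # df/dx, approx 8
print(directional_derivative(f, p, np.array([0.0, 1.0])))  # df/dy, approx 3
```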
Covariant Derivatives of Vector Fields
If we want to apply $\nabla_v$ to a vector field $u$, then we can apply the operator:

$$\nabla_v u = \sum_i v^i \frac{\partial u}{\partial x^i}$$
Immediately we can see an interpretation for $\nabla_v u$: see how $u$ changes with respect to each coordinate direction, and then sum the resulting vectors together, weighted by each component of $v$. It’s easy to see how this gives us a coordinate-free derivative of a vector field. What we have right now is called an affine connection.
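In Cartesian coordinates this is just the Jacobian of the field applied to the direction vector. Here’s a finite-difference sketch of that computation; the test field and function names are illustrative, not from any library:

```python
import numpy as np

def u(p):
    """An arbitrary vector field u(x, y) = (xy, x + y^2), for illustration."""
    x, y = p
    return np.array([x * y, x + y**2])

def covariant_derivative(u, p, v, h=1e-6):
    """nabla_v u in Cartesian coordinates: differentiate each component of u
    along each coordinate direction (the Jacobian), then apply it to v."""
    n = len(p)
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (u(p + e) - u(p - e)) / (2 * h)  # du/dx^i, column by column
    return J @ v

p = np.array([1.0, 2.0])
v = np.array([0.5, 1.0])
# Analytically J = [[y, x], [1, 2y]] = [[2, 1], [1, 4]], so J @ v = (2.0, 4.5).
print(covariant_derivative(u, p, v))
```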
Affine connections have two properties: linearity in $v$ and the product rule on $u$. This is immediate from the operator representation:

$$\nabla_{fv + gw}\, u = f\, \nabla_v u + g\, \nabla_w u$$

$$\nabla_v (fu) = (\nabla_v f)\, u + f\, \nabla_v u$$
This means that we can expand the representation in $v$:

$$\nabla_v = \sum_i v^i \nabla_{e_i}$$
It should be pretty obvious that $\nabla_{e_i}$ is the same as $\frac{\partial}{\partial x^i}$, in that they both represent how a quantity changes in the unit direction of $e_i$. If you’ve been paying attention, you’ve probably been wondering how we compute these constructs. It’s fairly straightforward to assume that in Cartesian coordinates, we just differentiate each component of $u$. What about in other bases? Well, assuming that $u = \sum_j u^j e_j$, we can just apply the product rule on the terms:

$$\nabla_v u = \sum_i \sum_j v^i \left( \frac{\partial u^j}{\partial x^i}\, e_j + u^j\, \nabla_{e_i} e_j \right)$$
In Cartesian coordinates, the second term vanishes, because the coordinate directions don’t change with respect to any direction. So our assumption about Cartesian coordinates is correct. In other bases, we can think of the second term as a corrective factor for the curvature of the coordinate frames. In most texts, the vector $\nabla_{e_i} e_j = \sum_k \Gamma^k_{ij}\, e_k$ is defined, where the $\Gamma^k_{ij}$ are called Christoffel symbols. I won’t get into them here, except to say that they have some important symmetries.
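To see the corrective term in action, here’s a sketch that computes Christoffel symbols numerically from a metric, using the standard formula $\Gamma^k_{ij} = \frac{1}{2} g^{kl} (\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij})$ with finite-difference derivatives. Polar coordinates are a convenient test case, since their symbols are known in closed form; all names here are my own:

```python
import numpy as np

def metric_polar(q):
    """Metric tensor of polar coordinates (r, theta): g = diag(1, r^2)."""
    r, theta = q
    return np.array([[1.0, 0.0], [0.0, r**2]])

def christoffel(metric, q, h=1e-6):
    """Gamma^k_ij = 1/2 g^{kl} (d_i g_jl + d_j g_il - d_l g_ij),
    with the metric derivatives approximated by central differences."""
    n = len(q)
    g_inv = np.linalg.inv(metric(q))
    dg = np.zeros((n, n, n))                # dg[l, i, j] = d_l g_ij
    for l in range(n):
        e = np.zeros(n)
        e[l] = h
        dg[l] = (metric(q + e) - metric(q - e)) / (2 * h)
    Gamma = np.zeros((n, n, n))             # Gamma[k, i, j] = Gamma^k_ij
    for k in range(n):
        for i in range(n):
            for j in range(n):
                Gamma[k, i, j] = 0.5 * sum(
                    g_inv[k, l] * (dg[i, j, l] + dg[j, i, l] - dg[l, i, j])
                    for l in range(n))
    return Gamma

G = christoffel(metric_polar, np.array([2.0, 0.3]))
# Known polar-coordinate values: Gamma^r_{theta,theta} = -r = -2,
# Gamma^theta_{r,theta} = Gamma^theta_{theta,r} = 1/r = 0.5.
print(G[0, 1, 1], G[1, 0, 1], G[1, 1, 0])
```

Note the symmetry $\Gamma^k_{ij} = \Gamma^k_{ji}$ in the output; that is one of the symmetries mentioned above, and it corresponds to the torsion-free condition below.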
If you’re familiar with this material, you may have noticed that I’ve hand-waved a lot. There’s a lot of machinery that needs to be set up to prove existence and uniqueness of all these constructs. It’s also machinery that works fairly well in Euclidean space, but we can’t make the same assumptions on general smooth manifolds. We’d like a connection that works on general manifolds, but we need to make some extra assumptions. A Riemannian connection is an affine connection with two extra properties:

$$\nabla_u v - \nabla_v u = [u, v]$$

$$\nabla_v \langle u, w \rangle = \langle \nabla_v u, w \rangle + \langle u, \nabla_v w \rangle$$
Where $\langle \cdot, \cdot \rangle$ is an inner product on the tangent space, and $[\cdot, \cdot]$ is the Lie bracket. The first condition states that the connection must be torsion-free; that is, the coordinate frames may not twist when moving in any particular direction. The second just imposes the product rule on the inner product. Euclidean space already has these properties, so the covariant derivative as I described it above is a Riemannian connection.
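The Euclidean claim is easy to spot-check numerically. The sketch below verifies the second condition, $\nabla_v \langle u, w \rangle = \langle \nabla_v u, w \rangle + \langle u, \nabla_v w \rangle$, for two arbitrary test fields using central differences; the fields and names are illustrative:

```python
import numpy as np

def u(p):
    """Arbitrary test vector field, for illustration only."""
    x, y = p
    return np.array([x * y, np.sin(x)])

def w(p):
    """Another arbitrary test vector field."""
    x, y = p
    return np.array([y**2, x + y])

def D(f, p, v, h=1e-6):
    """Directional derivative of a scalar- or vector-valued f along v."""
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

p = np.array([0.7, 1.3])
v = np.array([1.0, -0.5])

lhs = D(lambda q: u(q) @ w(q), p, v)           # nabla_v <u, w>
rhs = D(u, p, v) @ w(p) + u(p) @ D(w, p, v)    # <nabla_v u, w> + <u, nabla_v w>
# The two sides agree up to finite-difference error.
print(lhs, rhs)
```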
These extra rules guarantee that the connection is unique on any smooth manifold that has an inner product defined on its tangent spaces, and that we can use the formula above to write it out explicitly. There’s a lot more to it, of course, but we have enough to work with. I’ll be writing more posts that cover this topic, but I encourage you to read up on it yourself and derive your own intuition of what’s going on.