I’m going to describe the basic ideas of the Schur functors $\mathbb{S}_\lambda(V)$, where $\lambda$ is a partition and $V$ is a vector space. These will turn out to be the complete set of irreducible polynomial representations of $GL(V)$ (as $\lambda$ ranges over all partitions). The main facts to strive for are:

- Every irreducible polynomial representation of $GL(V)$ is isomorphic to a unique Schur functor $\mathbb{S}_\lambda(V)$. Conversely, every Schur functor is irreducible.
- The character of $\mathbb{S}_\lambda(V)$ is the Schur polynomial $s_\lambda(x_1, \ldots, x_n)$.
- The dimension of $\mathbb{S}_\lambda(V)$ is the number of SSYTs of shape $\lambda$ and entries from $\{1, \ldots, n\}$ (where $n = \dim V$). This fact will be explicit: there will be a “tableau basis” for the representation.
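To make the dimension statement concrete, here is a small brute-force sketch (in Python; the helper name `ssyts` is mine, not standard notation) that counts SSYTs of a given shape with entries in $\{1, \ldots, n\}$. For the shape $(2,1)$ with $n = 3$ it finds $8$ tableaux, so $\dim \mathbb{S}_{(2,1)}(V) = 8$ when $\dim V = 3$.

```python
from itertools import product

def ssyts(shape, n):
    """Brute-force generator of all semistandard Young tableaux of the given
    shape (a weakly decreasing tuple of row lengths) with entries in 1..n:
    rows weakly increase left to right, columns strictly increase downward."""
    for fill in product(range(1, n + 1), repeat=sum(shape)):
        rows, i = [], 0
        for r in shape:
            rows.append(fill[i:i + r]); i += r
        if any(row[j] > row[j + 1] for row in rows for j in range(len(row) - 1)):
            continue  # a row decreases somewhere
        if any(rows[r][c] >= rows[r + 1][c]
               for r in range(len(rows) - 1) for c in range(len(rows[r + 1]))):
            continue  # a column fails to strictly increase
        yield rows

# dimension of the Schur functor for shape (2,1) when dim V = 3:
print(len(list(ssyts((2, 1), 3))))  # 8
```

For a single column this recovers $\binom{n}{k} = \dim \Lambda^k V$, and for a single row the dimension of $\operatorname{Sym}^k V$.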

As a corollary, we get an improved understanding of the Littlewood-Richardson numbers $c^\nu_{\lambda\mu}$ and the isomorphism between the representation ring of $GL(V)$ and the ring of symmetric polynomials.

In particular:

- The map takes $[\mathbb{S}_\lambda(V)] \mapsto s_\lambda$, the Schur polynomial,
- Since the map respects (tensor) products, the Littlewood-Richardson number $c^\nu_{\lambda\mu}$ is the multiplicity of $\mathbb{S}_\nu(V)$ in the tensor product $\mathbb{S}_\lambda(V) \otimes \mathbb{S}_\mu(V)$. Equivalently,

$$\mathbb{S}_\lambda(V) \otimes \mathbb{S}_\mu(V) \cong \bigoplus_\nu \mathbb{S}_\nu(V)^{\oplus c^\nu_{\lambda\mu}}.$$
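As a sanity check on this product rule, one can expand products of Schur polynomials by brute force. A minimal sketch (helper names mine): each $s_\lambda$ is built as the SSYT weight generating function, polynomials are stored as dictionaries keyed by exponent vectors, and the identity $s_{(1)} \cdot s_{(1)} = s_{(2)} + s_{(1,1)}$ (so $c^{(2)}_{(1)(1)} = c^{(1,1)}_{(1)(1)} = 1$) is verified in three variables.

```python
from itertools import product
from collections import Counter

def schur(shape, n):
    """s_shape(x_1..x_n) as a Counter {exponent vector: coefficient},
    computed as the weight generating function over SSYTs (brute force)."""
    poly = Counter()
    for fill in product(range(1, n + 1), repeat=sum(shape)):
        rows, i = [], 0
        for r in shape:
            rows.append(fill[i:i + r]); i += r
        if any(row[j] > row[j + 1] for row in rows for j in range(len(row) - 1)):
            continue
        if any(rows[r][c] >= rows[r + 1][c]
               for r in range(len(rows) - 1) for c in range(len(rows[r + 1]))):
            continue
        wt = [0] * n
        for row in rows:
            for entry in row:
                wt[entry - 1] += 1
        poly[tuple(wt)] += 1
    return poly

def pmul(p, q):
    """Multiply two polynomials in exponent-vector form."""
    out = Counter()
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return out

n = 3
print(pmul(schur((1,), n), schur((1,), n)) == schur((2,), n) + schur((1, 1), n))  # True
```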

Finally:

- If we allow twists by negative tensor powers of the determinant representation $\det = \Lambda^n V$, we also get all the rational algebraic representations of $GL(V)$. These are representations $\rho : GL(V) \to GL(W)$ which are morphisms of the algebraic group $GL(V)$. (This distinguishes them from the polynomial representations, which extend to morphisms of affine $n^2$-space, the space of all $n \times n$ matrices.)
- These are indexed by decreasing sequences $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ of (not necessarily positive) integers.

Let’s get started.

**The Beginning: Sylvester’s Lemma**

I want to try to motivate this construction. It generalizes the “exchange relation” involving flag tensors that I gave last post. The key idea is that there are relations between wedges $v_1 \wedge \cdots \wedge v_a$ and $w_1 \wedge \cdots \wedge w_b$ when the subspaces spanned by the $v_i$’s and $w_j$’s are not sufficiently general.

The first step is the following:

**Sylvester’s Lemma**. Let $v_1, \ldots, v_n, w_1, \ldots, w_n \in V$, where $\dim V = n$. Let $1 \leq k \leq n$. Then

$$(v_1 \wedge \cdots \wedge v_n) \otimes (w_1 \wedge \cdots \wedge w_n) = \sum (v_1 \wedge \cdots \wedge v_n)' \otimes (w_1 \wedge \cdots \wedge w_n)',$$

where the sum is over all ways of exchanging the *first* $k$ vectors on the right, $w_1, \ldots, w_k$, for *any* $k$ of the $v_i$’s, preserving the ordering. (There are $\binom{n}{k}$ summands.)

Note that this equation lives in $\Lambda^n V \otimes \Lambda^n V$. (Special case: if $k = n$, then the identity reduces to $v \otimes w = w \otimes v$, writing $v = v_1 \wedge \cdots \wedge v_n$ and $w = w_1 \wedge \cdots \wedge w_n$, which is true in $\Lambda^n V \otimes \Lambda^n V$ since $\Lambda^n V$ is one-dimensional, and false in $\Lambda^a V \otimes \Lambda^a V$ when $a < n$.)
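In coordinates, $n$-fold wedges in an $n$-dimensional space are determinants, so the lemma can be spot-checked numerically. A quick sketch (helper names mine), verifying the exchange identity for fixed integer vectors with $n = 3$ and each $k$:

```python
from itertools import combinations, permutations
import random

def det(cols):
    """Determinant of the matrix whose columns are the given vectors (Leibniz)."""
    n = len(cols)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = 1 if inv % 2 == 0 else -1
        for i in range(n):
            term *= cols[i][perm[i]]
        total += term
    return total

def exchange_sum(v, w, k):
    """Right-hand side of Sylvester's lemma: exchange w[0..k-1] into the
    v-wedge in all order-preserving ways, summing products of determinants."""
    n = len(v)
    total = 0
    for S in combinations(range(n), k):
        new_v = list(v)
        for slot, wi in zip(S, w[:k]):
            new_v[slot] = wi               # the exchanged w's, in order
        new_w = [v[i] for i in S] + w[k:]  # the displaced v's, in order
        total += det(new_v) * det(new_w)
    return total

random.seed(1)
n = 3
for k in (1, 2, 3):
    for _ in range(25):
        v = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
        w = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
        assert det(v) * det(w) == exchange_sum(v, w, k)
print("exchange identity holds")
```

For $k = 1$ this is just the column expansion of $\det(w_1, w_2, w_3)$ after writing $w_1$ in terms of the $v_i$’s; for $k = n$ it degenerates to $\det(v)\det(w) = \det(w)\det(v)$.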

Fulton’s book has a very elegant coordinate-free proof, which I’ll briefly reproduce:

*Proof*. Let $F$ be the difference of the two sides of the equation. Then, considered as a function of the vectors $v_1, \ldots, v_n, w_1, \ldots, w_k$, $F$ is obviously multilinear. It’s also easy to verify that $F$ is alternating on $v_1, \ldots, v_n$. (The way to check is to set $v_i = v_j$ and follow a minus sign.)

Now, we claim that, in addition to the above, $F$ is also alternating on the arguments $w_1, \ldots, w_k$ taken together with the $v_i$’s. To see this, define $G$ to be the above expression with $w_1 = v_i$, considered as a function of $n + k - 1$ vectors. By the same reasoning, $G$ is alternating on the $v$’s. And, $G$ vanishes in the remaining coincidences, since if some $v_j = v_i$, then there are three vectors that are all the same ($v_j = v_i = w_1$), so in every term some wedge contains a repeated vector and everything vanishes. Thus $G$ is alternating on $n + k - 1$ vectors in an $n$-dimensional space, hence is identically zero. The same then holds for $F$: it is alternating on $n + k$ vectors in an $n$-dimensional space, hence identically zero.

Very nice! There are a few ways to improve this proof. First of all, it didn’t matter that there were $n$ vectors on the right, since we never thought about the ones past $w_k$. So we have

**Sylvester’s Lemma, v.2**. The analogous relation holds in $\Lambda^n V \otimes \Lambda^b V$, for any $b \geq k$.

Note that the case $k = 1$ is slightly different, since the inductive step, showing that the expression vanishes if $w_1 = v_i$, follows by inspection, not by alternation ($G$ is alternating on $n$ vectors, which isn’t enough to force it to be zero, but it is in fact zero when you write it down).

To connect this to “flag tensors”, let’s extend this to $\Lambda^a V \otimes \Lambda^b V$, where $a \geq b$ and $\dim V$ is now arbitrary, and we consider tensors $(v_1 \wedge \cdots \wedge v_a) \otimes (w_1 \wedge \cdots \wedge w_b)$ with

$$\operatorname{span}(w_1, \ldots, w_b) \subseteq \operatorname{span}(v_1, \ldots, v_a).$$

This is the “flag condition”. We have the following:

**Sylvester’s Lemma, v.3**. Let $M \subseteq \Lambda^a V \otimes \Lambda^b V$ be spanned by “flag tensors”. Then the exchange relations hold on $M$, for any $k \leq b$. (Note: here, $a$ plays the role of $n$.)

*Proof*. Given a flag tensor, let $W = \operatorname{span}(v_1, \ldots, v_a)$ be the larger subspace. Then the exchange relation actually lives in the space

$$\Lambda^a W \otimes \Lambda^b W,$$

and now it follows from Sylvester’s Lemma, v.2 above, applied with $W$ in place of $V$. (If $\dim W < a$, the flag tensor is zero anyway.)
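Here is a tiny concrete instance of v.3 (the representation and helper names are mine): in $\Lambda^2 V \otimes V$ with $\dim V = 3$, take $a = 2$ and $b = k = 1$. The one-exchange relation $v_1 \wedge v_2 \otimes w_1 = w_1 \wedge v_2 \otimes v_1 + v_1 \wedge w_1 \otimes v_2$ holds when $w_1 \in \operatorname{span}(v_1, v_2)$, and fails for a generic $w_1$:

```python
from collections import Counter

def w2t(u, v, w):
    """(u ^ v) (x) w in Wedge^2(V) (x) V for dim V = 3,
    stored as a dict keyed by ((i, j), k) with i < j."""
    t = Counter()
    for i in range(3):
        for j in range(i + 1, 3):
            coeff = u[i] * v[j] - u[j] * v[i]
            for k in range(3):
                t[((i, j), k)] += coeff * w[k]
    return {key: c for key, c in t.items() if c}

def add(*tensors):
    out = Counter()
    for t in tensors:
        out.update(t)  # Counter.update adds coefficients
    return {key: c for key, c in out.items() if c}

v1, v2 = [1, 2, 0], [0, 1, 1]
w_flag = [2 * a - b for a, b in zip(v1, v2)]  # w = 2*v1 - v2: flag condition holds
print(w2t(v1, v2, w_flag) == add(w2t(w_flag, v2, v1), w2t(v1, w_flag, v2)))  # True

w_bad = [1, 0, 0]                             # not in span(v1, v2)
print(w2t(v1, v2, w_bad) == add(w2t(w_bad, v2, v1), w2t(v1, w_bad, v2)))    # False
```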

There’s one final, separate improvement we can make, involving “incident-subspace tensors” rather than flag tensors — that is, wedges $(v_1 \wedge \cdots \wedge v_a) \otimes (w_1 \wedge \cdots \wedge w_b)$ such that $\operatorname{span}(v_1, \ldots, v_a) \cap \operatorname{span}(w_1, \ldots, w_b)$ is big enough. I’ll discuss these briefly, but flag tensors are all I actually need.

In the basic argument we showed that the expressions $F$ and $G$ were alternating on the $v_i$’s. Well, in fact they’re also alternating on $w_1, \ldots, w_k$, the vectors being exchanged: those $w$’s always end up in the same wedges, so if (for example) $w_1 = w_2$, then every single term in the relation is zero. So, $F$ is actually alternating on $a + k$ vectors, hence vanishes if $a + k > \dim V$. So:

**Sylvester’s Lemma, v.4**. Let $a, b, k$ be such that $a + k > \dim V$. Then the exchange relation holds on *all* of $\Lambda^a V \otimes \Lambda^b V$, for exchanges of $k$ vectors.

**Sylvester’s Lemma, v.5** Let $k \leq b \leq a$ and let $M \subseteq \Lambda^a V \otimes \Lambda^b V$ be spanned by “incident-subspace tensors” $(v_1 \wedge \cdots \wedge v_a) \otimes (w_1 \wedge \cdots \wedge w_b)$, such that

$$\dim\bigl(\operatorname{span}(v_1, \ldots, v_a) + \operatorname{span}(w_1, \ldots, w_b)\bigr) < a + k.$$

Then the exchange relation holds on $M$ for exchanges of $k$ vectors.

*Proof*. Let $W$ be the subspace generated by the vectors in the relation. Then the relation lives in

$$\Lambda^a W \otimes \Lambda^b W,$$

and the inequality just says that $a + k > \dim W$, so v.4 applies with $W$ in place of $V$.

That’s enough Sylvester’s Lemma for now. Let’s put these together into actual Schur functors.

**Schur Functors**

Let $\lambda$ be a partition with distinct column lengths, say $\mu_1 > \mu_2 > \cdots > \mu_l$, where $\mu_i$ is the length of the $i$-th column of $\lambda$. Then define the Schur functor

$$\mathbb{S}_\lambda(V) \subseteq \Lambda^{\mu_1} V \otimes \Lambda^{\mu_2} V \otimes \cdots \otimes \Lambda^{\mu_l} V$$

to be the subspace spanned by flag tensors, that is, tensors $t_1 \otimes \cdots \otimes t_l$ with $t_i \in \Lambda^{\mu_i} V_i$ for some chain of subspaces $V_1 \supseteq V_2 \supseteq \cdots \supseteq V_l$ with $\dim V_i = \mu_i$.

The following facts are clear:

- This is a sub-$GL(V)$-representation of the product of exterior powers.
- It is functorial in $V$. (Note that if a linear map collapses the dimension of some piece of some flag tensor, then it will kill that element inside the exterior power.)
- $\mathbb{S}_\lambda(V)$ satisfies all the flag tensor exchange relations given by Sylvester’s Lemma, v.3 above.

To get the case where $\lambda$ has repeated column lengths, replace each repeated factor $\Lambda^{\mu} V \otimes \cdots \otimes \Lambda^{\mu} V$ ($r$ times) by $\operatorname{Sym}^r(\Lambda^{\mu} V)$, the symmetric power. (This is what’s going on in the original form of Sylvester’s Lemma.)

**Visualizing with Tableaux**

We can think of a flag tensor as an element of $\Lambda^{\mu_1} V \otimes \cdots \otimes \Lambda^{\mu_l} V$. That is, we literally write the tensor as a Young diagram of shape $\lambda$, with vectors in each of the boxes. Each column corresponds to a piece of the flag.

The algebraic rules for manipulating these are:

1. The diagram is multilinear in all of the boxes.
2. The columns are alternating – that is, we can rearrange entries of a column, introducing a minus sign each time.
3. The exchange relations hold in the following form. Fix two columns, say of sizes $a \geq b$, and fix $k \leq b$. Then the flag tensor/tableau $T$ satisfies $T = \sum T'$, where the sum runs over all ways of exchanging the *first* $k$ entries of the right-hand (size $b$) column with *any* $k$ entries in the left column.

This gives a very convenient (and compact) way of picturing elements of the Schur functor.

**The Tableau Basis for $\mathbb{S}_\lambda(V)$**

Fix a basis $e_1, \ldots, e_n$ for $V$. We’ve already seen how to write down bases for $\Lambda^k V$ and $\operatorname{Sym}^k V$ in terms of a basis for $V$, which made it easy to read off the traces of the exterior and symmetric powers as $GL(V)$-representations. Now we’ll do the same thing for all the Schur functors.

**Lemma**. The Schur functor $\mathbb{S}_\lambda(V)$ is spanned by SSYTs in the basis $e_1, \ldots, e_n$. (An entry $i$ in an SSYT means the vector $e_i$.)

*Proof*. First, use relation (2) to put all columns in increasing order. Strictly increasing order, in fact, since the tensor is zero if there are any repeats in a column. This gives us column-strictness.

To get the weakly-increasing-along-rows condition, use a careful choice of exchange relation (3). It’s something like: among all strict decreases in rows, find the one that is farthest to the right, then lowest down. Then use an exchange relation with *just enough* boxes to move the offending box to the left. Continue in this manner (details are in Fulton’s book).

Next:

**Lemma**. The SSYTs are linearly independent.

*Proof*. Fulton proves this by explicitly realizing $\mathbb{S}_\lambda(V)$ inside the “ring of matrix coefficients” $k[x_{ij}]$, and using a monomial ordering.

Specifically, a column with entries $i_1 < \cdots < i_\mu$ corresponds to the top-justified determinant using columns $i_1, \ldots, i_\mu$ of the matrix $(x_{ij})$, that is, the minor on the top $\mu$ rows and those columns. An SSYT corresponds to the product of the top-justified determinants coming from its columns. Clearly, this is an explicit realization of a “flag tensor” in coordinates (the dimension-$\mu$ step of the flag is given by the top $\mu$ rows of the matrix.)

Now, the standard lexicographic ordering of the monomials (English reading order on the matrix entries) distinguishes the SSYTs: their leading monomials are pairwise distinct, so the corresponding products are linearly independent.
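This argument is easy to machine-check in a small case. A sketch (pure Python; the data layout and helper names are mine): polynomials in the entries of a generic $3 \times 3$ matrix are dicts keyed by exponent vectors listed in English reading order, each SSYT of shape $(2,1)$ maps to its product of top-justified minors, and the lexicographic leading monomials come out pairwise distinct:

```python
from itertools import product, permutations
from collections import Counter

N = 3  # generic N x N matrix of variables x[r][c], in English reading order

def pmul(p, q):
    out = Counter()
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return {m: c for m, c in out.items() if c}

def top_minor(cols):
    """Top-justified minor: det of the submatrix on rows 0..len(cols)-1
    and the given columns, as a polynomial in the x[r][c]."""
    mu = len(cols)
    poly = Counter()
    for perm in permutations(range(mu)):
        inv = sum(1 for i in range(mu) for j in range(i + 1, mu) if perm[i] > perm[j])
        mono = [0] * (N * N)
        for r in range(mu):
            mono[r * N + cols[perm[r]]] += 1
        poly[tuple(mono)] += 1 if inv % 2 == 0 else -1
    return dict(poly)

def tableau_poly(columns):
    """Product of column minors of a tableau, given column by column
    as strictly increasing tuples of 1-based entries."""
    poly = {tuple([0] * (N * N)): 1}
    for col in columns:
        poly = pmul(poly, top_minor([c - 1 for c in col]))
    return poly

# all SSYTs of shape (2,1) with entries in 1..3, recorded column by column
tableaux = [[(a, c), (b,)]
            for a, b, c in product(range(1, 4), repeat=3) if a <= b and a < c]
leading = [max(tableau_poly(t)) for t in tableaux]  # lex-largest exponent vector
print(len(tableaux), len(set(leading)))  # 8 8
```

The leading monomial of each product is the product of the diagonal terms of its minors, which records the multiset of entries in each row of the tableau; since rows weakly increase, this pins down the tableau.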

As a final note, let’s observe that this description specializes to the description of the bases for the exterior powers (when $\lambda$ is a single column) and symmetric powers (when $\lambda$ is a single row). The tableau basis is really in the same spirit.

**Trace, Weight and the Schur polynomial**

The nice thing about the tableau basis is that every SSYT is a “weight-vector”. Specifically, if $g = \operatorname{diag}(x_1, \ldots, x_n)$ is a diagonal matrix, so $g \cdot e_i = x_i e_i$, then the action of $g$ on a tableau $T$ is immediate:

$$g \cdot T = x^{\operatorname{wt}(T)} \, T,$$

where $\operatorname{wt}(T)$ is the weight of $T$, recording how many times each entry appears. So, the trace of the representation, computed in the tableau basis, is

$$\sum_{T \text{ SSYT}} x^{\operatorname{wt}(T)} = s_\lambda(x_1, \ldots, x_n),$$

the Schur polynomial on $n$ variables (where $n = \dim V$).
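This trace computation can be cross-checked against the classical bialternant formula $s_\lambda(x_1, \ldots, x_n) = \det(x_i^{\lambda_j + n - j}) / \det(x_i^{n - j})$, which I am taking as known here. A brute-force sketch (helper names mine), comparing both sides at integer sample points:

```python
from itertools import product, permutations
from fractions import Fraction

def ssyt_trace(shape, xs):
    """Sum of x^{wt(T)} over SSYTs T of the given shape, entries 1..len(xs)."""
    n, total = len(xs), 0
    for fill in product(range(1, n + 1), repeat=sum(shape)):
        rows, i = [], 0
        for r in shape:
            rows.append(fill[i:i + r]); i += r
        if any(row[j] > row[j + 1] for row in rows for j in range(len(row) - 1)):
            continue
        if any(rows[r][c] >= rows[r + 1][c]
               for r in range(len(rows) - 1) for c in range(len(rows[r + 1]))):
            continue
        term = 1
        for row in rows:
            for e in row:
                term *= xs[e - 1]
        total += term
    return total

def det(m):
    n, total = len(m), 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = 1 if inv % 2 == 0 else -1
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def bialternant(shape, xs):
    """s_shape(xs) as a ratio of alternants; the xs must be distinct."""
    n = len(xs)
    lam = list(shape) + [0] * (n - len(shape))
    num = [[Fraction(x) ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[Fraction(x) ** (n - 1 - j) for j in range(n)] for x in xs]
    return det(num) / det(den)

xs = (2, 3, 5)
print(ssyt_trace((2, 1), xs) == bialternant((2, 1), xs))  # True
print(ssyt_trace((2, 1), (1, 1, 1)))  # 8, recovering the dimension
```

Setting every $x_i = 1$ in the trace recovers the dimension count from the tableau basis.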

**Irreducibility**

Let’s end with a discussion of why the Schur functors are irreducible representations. This uses the theory of Lie algebras and root spaces.

The basic idea is: given a representation $W$, we analyze its character by studying the action of the torus $T \subseteq GL(V)$ of diagonal matrices. The representation must split as a direct sum of one-dimensional sub-$T$-representations, which group together into “weight spaces”, since the torus is abelian.

On the Lie algebra side, we consider the induced Lie algebra representation of $\mathfrak{gl}(V)$, with the action of the subalgebra $\mathfrak{h}$ of diagonal matrices. The story is even simpler: there is a lattice corresponding to the possible torus weights, together with the “roots” of $\mathfrak{gl}(V)$, which organize them.

There’s now a notion of highest-weight vector (coming from ordering the basis $e_1, \ldots, e_n$: $e_1$ is highest, followed by $e_2$, and so on). These are vectors $w$ for which $B \cdot w \subseteq \operatorname{span}(w)$, where $B$ is the Borel subgroup of upper-triangular matrices. In the case of the Schur functors, it’s easy enough to see that the unique highest-weight vector is the SSYT with all 1s in the first row, all 2s in the next, and so on.

The following is the important fact:

**Fact**. A representation is irreducible if and only if it has a unique highest-weight vector (up to scaling). Furthermore, two irreducible representations with the same highest weight are isomorphic.

Moreover, in the case of the Lie algebra $\mathfrak{gl}_n$, the possible highest weights corresponding to polynomial representations are exactly the partitions $\lambda$ (the decreasing condition $\lambda_1 \geq \cdots \geq \lambda_n$ comes from the theory of root systems of Lie algebras and “positive roots”, and the fact that $\mathfrak{gl}_n$ is comparatively simple).

So every irreducible polynomial representation corresponds to a (unique) partition, and we’ve exhibited one for each partition. So we’ve found them all.