When I first planned this mini-course, I expected this topic to be hard to motivate to geometers — why should they care about one algorithm over another for computing the Littlewood-Richardson numbers? But there’s plenty of subtlety in considering the strengths and weaknesses of the various Littlewood-Richardson rules out there:

- some are fast and convenient for actual computations (on a computer);
- some are “geometric”, that is, they actually describe something happening in the Grassmannian $\mathrm{Gr}(k,n)$;
- some are “symmetric”, that is, they exhibit directly the symmetries coming from the fact that the Littlewood-Richardson numbers are the structure constants of a commutative, associative ring — and/or they are invariant under transposing partitions;
- some have turned out to generalize more readily to other contexts (such as K-theory).

The third bullet point is a little surprising: in fact, the earliest rules for computing the Littlewood-Richardson coefficient treated the three partitions completely differently, and it is **far** from self-evident that the rules give rise to the symmetries we know exist. For example, recall that, in the cohomology of $\mathrm{Gr}(k,n)$,

$$\sigma_\lambda \cdot \sigma_\mu = \sum_\nu c^\nu_{\lambda\mu}\, \sigma_\nu.$$

So by commutativity we know that $c^\nu_{\lambda\mu} = c^\nu_{\mu\lambda}$, and by rearranging the triple intersection product $\int \sigma_\lambda\,\sigma_\mu\,\sigma_{\nu^\vee}$ (where $\nu^\vee$ is the complementary partition in the $k \times (n-k)$ box), we also know $c^\nu_{\lambda\mu} = c^{\lambda^\vee}_{\mu\,\nu^\vee}$. As we will see, few of the rules exhibit all of these symmetries. (There is an additional symmetry coming from transposing the partitions, too.)

So this post will be about the contrasts between the different rules. There is a zoo of algorithms out there, so we will just focus on four of the best-known: the description via Yamanouchi words in Young tableaux; Ravi Vakil’s checkergames rule; the Knutson-Tao puzzle rule; and Fomin’s description via growth diagrams.

All proofs will be omitted.

**1. Yamanouchi Words**

This is one of the earliest Littlewood-Richardson rules (maybe the first?) for $c^\nu_{\lambda\mu}$. A **Yamanouchi word** is a word using the letters $1, 2, 3, \ldots$, such that, as the word is read out letter by letter, the number of ones is always $\geq$ the number of twos, which is always $\geq$ the number of threes, and so on. (Sometimes these words are called “ballot sequences”: we can think of them as election results being read out, where the ranking of the candidates never changes.)
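As a quick illustration of the definition, here is a small prefix-count check (my own sketch, not from the post):

```python
from collections import defaultdict

def is_yamanouchi(word):
    """True iff, in every prefix of `word`, the number of 1s is >= the
    number of 2s, which is >= the number of 3s, and so on."""
    counts = defaultdict(int)
    for letter in word:
        counts[letter] += 1
        # a letter may never overtake the letter ranked just above it
        if letter > 1 and counts[letter] > counts[letter - 1]:
            return False
    return True
```

For example, $1,2,1,3,2,1$ is Yamanouchi, while $1,2,2,1$ is not (after three letters the twos outnumber the ones).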

So, consider the skew shape $\nu/\lambda$, and fill it in (semistandardly) so that its **reverse reading word** (read right to left, starting at the top row) is a Yamanouchi word of weight $\mu$; that is, it has $\mu_i$ instances of the number $i$. Then there are $c^\nu_{\lambda\mu}$ ways to do this.

This is easy to have a computer do — it’s just a tree-traversal algorithm to fill in the tableau successively. Yet the rule treats all three partitions utterly asymmetrically! Moreover, it really sheds no light on the geometry of Grassmannians. This is a rule about tableaux.
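For instance, such a tree traversal might look like the following sketch (my own code, not a reference implementation; it assumes the usual semistandard form of the rule, with entries stored $0$-indexed, so $v$ stands for the letter $v+1$):

```python
def lr_yamanouchi(lam, mu, nu):
    """Count semistandard fillings of the skew shape nu/lam with content mu
    whose reverse reading word (right to left, top row first) is Yamanouchi.
    Partitions are tuples of weakly decreasing positive integers."""
    lam = list(lam) + [0] * (len(nu) - len(lam))
    if sum(nu) != sum(lam) + sum(mu) or any(l > n for l, n in zip(lam, nu)):
        return 0
    # cells of nu/lam, listed in reverse reading order
    cells = [(r, c) for r in range(len(nu)) for c in range(nu[r] - 1, lam[r] - 1, -1)]
    grid = {}
    counts = [0] * len(mu)  # how many copies of each letter placed so far

    def fill(k):
        if k == len(cells):
            return 1
        r, c = cells[k]
        total = 0
        for v in range(len(mu)):
            if counts[v] >= mu[v]:
                continue  # all copies of this letter already used
            if v > 0 and counts[v] >= counts[v - 1]:
                continue  # ballot condition would fail
            if (r, c + 1) in grid and v > grid[(r, c + 1)]:
                continue  # rows must weakly increase
            if (r - 1, c) in grid and v <= grid[(r - 1, c)]:
                continue  # columns must strictly increase
            grid[(r, c)] = v
            counts[v] += 1
            total += fill(k + 1)
            counts[v] -= 1
            del grid[(r, c)]
        return total

    return fill(0)
```

As a sanity check, `lr_yamanouchi((2, 1), (2, 1), (3, 2, 1))` returns 2, matching the coefficient of $s_{321}$ in $s_{21} \cdot s_{21}$.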

Moving on.

**2. Vakil’s Checker Games.**

This is a truly complicated rule — enough that I won’t even describe it in this post. Suffice it to say that there are white checkers and black checkers on the board, and you move them according to a somewhat involved set of rules, and the initial and final game states describe (respectively) the partitions being multiplied and the resulting output.

This is emphatically *not* a rule that is easy to compute with – the checkergames may branch off into two possibilities at each step, and the total number of steps is fixed in advance (it depends only on $n$, not on the partitions). So even working out the algorithm for a simple product is fairly cumbersome.

The significance of the rule is that it is truly geometric — indeed, the title of the paper is *A Geometric Littlewood-Richardson Rule*, and the proof relies on geometry, not combinatorics. What it describes is a sequence of degenerations of the intersection of Schubert varieties $X_\lambda(F_\bullet) \cap X_\mu(F'_\bullet)$, during which the backwards flag $F'_\bullet$ is gradually moved closer and closer to the forward flag $F_\bullet$. By the end of the game, the intersection has broken into a union of actual Schubert varieties with multiplicities, exhibiting the fact that the intersection is cohomologous to a linear combination of Schubert classes.

Again, this rule treats the three partitions differently — no symmetries are readily apparent.

**3. Knutson-Tao Puzzles**

The ‘puzzle rule’ consists of filling in an equilateral triangular ‘puzzle’ using three allowed puzzle pieces: triangles with all edges labeled ‘1’, triangles with all edges labeled ‘0’, and lozenges with alternating 1s and 0s on their edges (the mirror image of the lozenge is not a valid puzzle piece, but its rotations are). In filling in the puzzle, the edge labels of adjacent pieces must match.

Then, to compute $c^\nu_{\lambda\mu}$, we label the outside edges of the triangle by the binary sequences corresponding to the jumps in the dimension sequence for the corresponding Schubert variety. (I defined dimension sequences here.) The number of valid ways of filling in the puzzle is the Littlewood-Richardson coefficient.
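For concreteness, here is one standard way to produce such a binary string (a sketch under my own convention, which I have not matched against the dimension-sequence convention from the earlier post — references also differ by swapping 0s and 1s or reversing the reading direction): walk the boundary of $\lambda$ inside the $k \times (n-k)$ box from bottom-left to top-right, writing 0 for each step right and 1 for each step up.

```python
def schubert_string(lam, k, n):
    """01-string of length n for the partition lam inside the k x (n-k) box.
    Row j of lam (0-indexed) contributes a 1 at position lam[j] + k - j - 1;
    all other positions are 0."""
    lam = list(lam) + [0] * (k - len(lam))
    ones = {lam[j] + k - j - 1 for j in range(k)}
    return "".join("1" if i in ones else "0" for i in range(n))
```

For example, in $\mathrm{Gr}(2,4)$ the empty partition gives `"1100"`, the full $2 \times 2$ box gives `"0011"`, and $(2,1)$ gives `"0101"`; every string has exactly $k$ ones.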

This rule is nice for several reasons:

- Puzzles take less effort to compute with than checkerboards (the picture doesn’t have to be re-drawn at each step).
- The rule is still geometric — in fact it describes the same sequence of degenerations. (That said, the proof uses torus-equivariant cohomology, not algebraic geometry as such.)
- The puzzles are rotationally symmetric — they exhibit the equality $c^\nu_{\lambda\mu} = c^{\lambda^\vee}_{\mu\,\nu^\vee}$.
- The puzzles exhibit the symmetry from transposing partitions, $c^\nu_{\lambda\mu} = c^{\nu'}_{\lambda'\mu'}$, by the operation “take the mirror image, then swap all 1s and 0s”.

Also, the puzzle rule is fun! On a side note, the puzzles are not mirror-symmetric (since the mirror image of the lozenge piece is not allowed), so they don’t exhibit the “commutativity” of Littlewood-Richardson coefficients. Oh well.

**4. Fomin’s Growth Diagrams**

This last rule is more combinatorial, in the spirit of the tableau combinatorics of rule 1 above. (In fact, it reduces easily to the rule using Yamanouchi words and rectification of tableaux.) It is not geometric.

So why did I pick it? First, unlike the three rules above, it exhibits the *commutativity* of Littlewood-Richardson numbers. The cyclic symmetry isn’t too difficult to prove either — it follows basically by formal properties of the algorithm, despite not being evident at first glance. (There is actually a similar-in-spirit rule, using 3D “cartons”, that manages to exhibit the full symmetry of the coefficients.)

Second, growth diagrams generalize to a rule for K-theoretic Schubert calculus (due to Thomas and Yong) — coming next post.

Here’s how the rule works: a growth diagram is a two-dimensional grid of partitions, satisfying the following two rules:

- (Growth rule) A step up or to the right corresponds to adding a box.
- (Recurrence rule) Let $\alpha \subseteq \delta$ be the lower-left and upper-right corners of a square in the grid, so $\delta/\alpha$ is two boxes. If the two boxes are non-adjacent, then there are two different partitions that could fit between $\alpha$ and $\delta$. (Otherwise, there is only one possibility.) Then **each must appear once** in the two other corners of the square.

This last rule is a “local” rule for filling in entries in the diagram. In particular, suppose all the entries along the left and top sides of the grid are known. Then, working from the top-left square, rule 2 uniquely specifies a recurrence for filling in the rest of the grid. (In fact, if the entries are known along *any* path from the bottom-left to top-right corner, then the rest of the diagram is determined in this way.)

Now, fix standard tableaux $T_\lambda, T_\mu$ of shapes $\lambda, \mu$ respectively. By considering successive truncations, we can think of a standard tableau as an increasing sequence of partitions, growing by one box at a time.

So, we start filling in a growth diagram by putting $T_\lambda$ along the left side and $T_\mu$ along the bottom. Then $c^\nu_{\lambda\mu}$ is the number of ways to fill in the growth diagram so that the top-right entry is $\nu$.
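Here is a brute-force sketch of that counting procedure (my own code, under my own encoding assumptions: partitions as tuples, and each standard tableau given as its saturated chain of partitions; a partial filling that reaches a square with no legal completion simply contributes zero):

```python
def lr_growth(left_chain, bottom_chain, nu):
    """Count growth diagrams with the given left-edge and bottom-edge chains
    (saturated chains of partitions starting at the empty partition, i.e.
    standard tableaux) whose top-right corner equals nu."""
    nu = tuple(nu)
    m = len(bottom_chain) - 1  # number of squares per row

    def union(p, q):
        n = max(len(p), len(q))
        p, q = p + (0,) * (n - len(p)), q + (0,) * (n - len(q))
        return tuple(x for x in map(max, p, q) if x)

    def added_box(small, big):
        """(row, col) of the single box of big/small, 0-indexed."""
        small = small + (0,) * (len(big) - len(small))
        for r in range(len(big)):
            if big[r] != small[r]:
                return r, big[r] - 1

    def add_box(p, r, c):
        """p plus a box at (r, c), or None if the result is not a partition."""
        q = list(p) + [0] * (r + 1 - len(p))
        if q[r] != c or (r > 0 and q[r - 1] == c):
            return None
        q[r] += 1
        return tuple(x for x in q if x)

    def rows_above(row, left_entry):
        """All grid rows that can sit directly above `row`."""
        partial = [[left_entry]]
        for i in range(1, m + 1):
            alpha, gamma = row[i - 1], row[i]
            new = []
            for pr in partial:
                beta = pr[-1]  # upper-left corner of this square
                if beta != gamma:
                    # two distinct intermediates: upper-right is forced
                    new.append(pr + [union(beta, gamma)])
                else:
                    # ambiguous square: delta/alpha must be a domino
                    r, c = added_box(alpha, beta)
                    for delta in (add_box(beta, r, c + 1), add_box(beta, r + 1, c)):
                        if delta is not None:
                            new.append(pr + [delta])
            partial = new
        return partial

    rows = [[tuple(p) for p in bottom_chain]]
    for left_entry in (tuple(p) for p in left_chain[1:]):
        rows = [nr for row in rows for nr in rows_above(row, left_entry)]
    return sum(1 for row in rows if row[-1] == nu)
```

With both edge chains equal to $\emptyset \subset (1) \subset (2) \subset (2,1)$, exactly two fillings end at $(3,2,1)$, matching the coefficient $2$ of $s_{321}$ in $s_{21}^2$; swapping the two chains gives the same counts, illustrating commutativity.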

The transpose of a growth diagram is still a growth diagram. Since transposing corresponds to swapping the roles of $\lambda$ and $\mu$, this shows directly that $c^\nu_{\lambda\mu} = c^\nu_{\mu\lambda}$, the commutativity symmetry.

*Note*: Growth diagrams have many uses in tableau combinatorics — classic theorems such as the RSK correspondence can be done via growth diagrams, and often the result is easier to prove! This seems to be because of the simplicity and local nature of the definition. For example, the top row of the growth diagram is a skew tableau of shape $\nu/\lambda$, and (it’s easy to show) the rows below it are its successive jeu-de-taquin rectification steps. In particular, the JDT rectification of this skew tableau is $T_\mu$. From this it’s only another step to reduce to the rule via Yamanouchi words.

That’s it! Four rules, out of many, each with certain advantages and disadvantages. Next post, the last on Schubert calculus, will be on how the story generalizes (or not) beyond the cohomology of $\mathrm{Gr}(k,n)$.