
# Everything you wanted to know about symmetric polynomials, part IV


## Alternating and Symmetric Polynomials

Consider the following recipe for building symmetric polynomials, using alternating polynomials. Consider the Vandermonde determinant

$\displaystyle{ \Delta(x_1, \ldots, x_n) = \left|\begin{matrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ & \vdots & & \vdots & \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{matrix}\right| = \prod_{i < j} (x_j - x_i). }$

To see the last equality, note that the determinant is zero if $x_i = x_j$ for any $i \ne j$, so it is divisible by $(x_j-x_i)$, and all these factors are distinct. By degree-counting (the polynomial is evidently homogeneous of degree $n \choose 2$), this is the whole thing, up to a scalar. Finally, to pin down the scalar (it is $1$, as written), we do a small computation, e.g. comparing coefficients of the diagonal monomial $x_2 x_3^2 \cdots x_n^{n-1}$, or plugging in convenient values like $x_i = i$.
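As a sanity check, here is a small Python sketch (the helper names and sample points are my own choosing, not from the post) that verifies the identity at integer points:

```python
# Spot-check of det(x_i^{j-1}) = prod_{i<j}(x_j - x_i) at integer points.
from itertools import permutations
from math import prod

def det(M):
    """Determinant by signed permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * prod(M[i][p[i]] for i in range(n))
    return total

def vandermonde(xs):
    n = len(xs)
    return det([[x ** j for j in range(n)] for x in xs])

xs = [2, 3, 5, 7]
lhs = vandermonde(xs)
rhs = prod(xs[j] - xs[i] for i in range(len(xs)) for j in range(i + 1, len(xs)))
print(lhs, rhs)  # both are 240
```

Of course, agreement at sample points is not a proof; the degree-counting argument above is what actually clinches it.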

If, instead of the row $(1 \ x_i\ x_i^2\ \cdots\ x_i^{n-1})$, we used some other sequence of polynomials, like $(f_1(x_i)\ f_2(x_i)\ \cdots \ f_n(x_i))$, the result would still be alternating in the $x_i$'s, so it would still be divisible by the product above. However, the degree-counting argument might no longer apply. (For example, if we use $(x_i\ x_i^2\ \cdots\ x_i^n)$, then the result is the original Vandermonde determinant times $x_1 \cdots x_n$.)

Still, we can divide out the Vandermonde determinant, and (surprise!) the result will be a symmetric polynomial, since the sign-change is “divided out” as well.

For example, we might as well consider

$\det(x_i^{c_j})$,

for any sequence of exponents $c_1, \ldots, c_n$. Of course, the determinant will only be nonzero if the exponents are all distinct, and by rearranging columns (at the cost of a sign) we can assume $c_1 < c_2 < \cdots < c_n$. It's worth noting that the "Vandermonde case" of $c_j = j-1$ is the simplest one: in fact, it is the unique(!) alternating polynomial of minimal degree, up to scaling. We may as well describe the others in terms of it: let's define

$\Delta(\lambda) = \det(x_i^{j-1+\lambda_{n+1-j}})$,

for any partition $\lambda = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0$. (So the “zero partition” corresponds to the Vandermonde determinant itself.) Now we get a class of symmetric polynomials

$\displaystyle{ s_\lambda = \frac{\Delta(\lambda)}{\Delta(x_1, \ldots, x_n)} }$,

each of which is homogeneous of degree $|\lambda|$. We have the following theorem:

Theorem. The ratio of alternants $s_\lambda$ equals the Schur polynomial of the same name, defined via semistandard Young tableaux.

In fact, this was the original definition of the Schur polynomials, before Young tableaux were used. The proof is by clearing the $\Delta(x_1, \ldots, x_n)$ denominator and comparing monomial coefficients. As usual, it boils down to one particular computation, namely:

Lemma. Let $\lambda$ be a partition, $\alpha \in \mathbb{Z}^n$, and $\rho = (n-1, \ldots, 1, 0)$ (decreasing, to match the decreasing partition convention). Then

$\displaystyle{ \sum_{\sigma \in S_n} (-1)^\sigma K_{\lambda, \sigma \cdot (\alpha+\rho) - \rho} = \begin{cases} 1 & \lambda = \alpha \\ 0 & \text{ otherwise.} \end{cases}}$
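The lemma can be verified by brute force for small cases. The sketch below (my own ad hoc helper names; $K$ is computed by naive tableau enumeration, and a weight with a negative entry contributes zero) takes $\rho = (2,1,0)$, the decreasing convention, and checks $n = 3$, $\lambda = (2,1)$ against two choices of $\alpha$:

```python
# Check: sum over sigma in S_n of sign(sigma) * K_{lambda, sigma(alpha+rho) - rho}
# equals 1 when lambda = alpha and 0 otherwise, with rho = (2, 1, 0) for n = 3.
from itertools import permutations, product

def kostka(shape, weight):
    """Number of SSYT of `shape` with content `weight`; 0 if any entry is negative."""
    if any(w < 0 for w in weight):
        return 0
    n = len(weight)
    cells = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    count = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok and all(filling.count(i + 1) == weight[i] for i in range(n)):
            count += 1
    return count

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def lemma_sum(lam, alpha):
    n = len(alpha)
    rho = tuple(range(n - 1, -1, -1))  # (n-1, ..., 1, 0)
    v = tuple(a + r for a, r in zip(alpha, rho))
    return sum(sign(p) * kostka(lam, tuple(v[p[i]] - rho[i] for i in range(n)))
               for p in permutations(range(n)))

print(lemma_sum((2, 1), (2, 1, 0)), lemma_sum((2, 1), (1, 1, 1)))  # 1 0
```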

This is proved using a sign-canceling involution on $SSYT(\lambda) - \{T_*\}$, where $T_*$ is the “highest-weight tableau”, whose $i$-th row is all $i$s. It’s in the same spirit as the Bender-Knuth involution: it changes the weight of each tableau from $w(T)$ to $s_i(w(T) + \rho) - \rho$, where $s_i$ is a transposition ($i$ is the first row where $T$ differs from $T_*$.) This makes all the terms cancel, except possibly the $T_*$ term itself (if it appears at all — it has weight $\lambda$). Finally, we check that $T_*$ can only show up in the $\sigma = id$ term, and then only if $\lambda = \alpha$.
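The theorem itself is also easy to spot-check numerically. The sketch below (pure Python; the helper names `ratio_of_alternants` and `schur_via_ssyt` are my own, and the tableau enumeration is naive brute force) compares both sides for $n = 3$, $\lambda = (2,1)$ at a sample point:

```python
# Compare Delta(lambda)/Delta with the tableau definition of s_lambda,
# for n = 3 and lambda = (2, 1), at a sample integer point.
from fractions import Fraction
from itertools import permutations, product
from math import prod

def det(M):
    """Determinant by signed permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * prod(M[i][p[i]] for i in range(n))
    return total

def ratio_of_alternants(lam, xs):
    """Delta(lambda) / Delta(x_1, ..., x_n), evaluated at xs."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    num = det([[x ** (j + lam[n - 1 - j]) for j in range(n)] for x in xs])
    den = det([[x ** j for j in range(n)] for x in xs])
    return Fraction(num, den)

def schur_via_ssyt(lam, xs):
    """Sum of x^{w(T)} over semistandard tableaux of shape lam, entries 1..n."""
    n = len(xs)
    cells = [(r, c) for r, row in enumerate(lam) for c in range(row)]
    total = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            total += prod(xs[v - 1] for v in filling)
    return total

lam, xs = (2, 1), [2, 3, 5]
print(ratio_of_alternants(lam, xs), schur_via_ssyt(lam, xs))  # both are 280
```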

## Jacobi-Trudi and Pieri

It's time to discuss a couple of big formulas involving the Schur polynomials. The first, the Jacobi-Trudi formula, is both theoretically useful (it expresses $s_\lambda$ as a polynomial in the $h_k$'s) and the subject of a beautiful proof from graph theory. The second, called the Pieri rule, is a useful formula for multiplying complicated Schur polynomials by simple ones; in particular, it's way easier to use than the Littlewood-Richardson rule.

Theorem (Jacobi-Trudi). The Schur polynomial satisfies

$s_\lambda = \det (h_{\lambda_i + j - i})_{1 \le i, j \le n}$,

with the conventions $h_0 = 1$ and $h_k = 0$ for $k < 0$. That is, it is the determinant of the matrix whose $i$-th row is

$(h_{\lambda_i - i + 1} \ \cdots \ h_{\lambda_i - 1} \ h_{\lambda_i} \ h_{\lambda_i + 1} \ \cdots \ h_{\lambda_i + n - i})$,

with $h_{\lambda_i}$ down the main diagonal.

The proof of Jacobi-Trudi uses the Lindström-Gessel-Viennot lemma, which expresses the determinant as a weighted sum over collections of certain nonintersecting paths through a graph. Whenever two paths intersect, we flip them around each other and the corresponding terms cancel out. It's a lovely graph-theoretic argument. I won't attempt to reproduce the proof here (since it really needs a picture), but it's worth knowing, particularly as the same approach gives rise to other determinantal formulas as well: generalizations of Jacobi-Trudi, for example, and the Cauchy-Binet formula (which directly establishes the multiplicativity / functoriality of the determinant).

This identity gives another, circuitous proof of the fact that the complete homogeneous symmetric polynomials $h_k$ generate $\Lambda$ as a ring: we know independently that the Schur polynomials are an additive basis, and now the Jacobi-Trudi identity expresses the Schurs as polynomials in the $h_k$'s. There will be some other useful consequences later on. On a side note, since $h_k = s_{(k)}$, we can also think of this fact as showing that the single-row Schur polynomials generate $\Lambda$ as a ring.
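Jacobi-Trudi can also be spot-checked numerically. The sketch below (ad hoc helper names, brute-force tableau enumeration; not from the post) compares the determinant against the tableau definition for $\lambda = (2,1)$ in three variables:

```python
# Compare det(h_{lambda_i + j - i}) with the tableau definition of s_lambda,
# for lambda = (2, 1) in three variables, at a sample integer point.
from itertools import combinations_with_replacement, permutations, product
from math import prod

def det(M):
    """Determinant by signed permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * prod(M[i][p[i]] for i in range(n))
    return total

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k at xs (h_0 = 1, h_{k<0} = 0)."""
    if k < 0:
        return 0
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def jacobi_trudi(lam, xs):
    """det(h_{lam_i + j - i}), a determinant of size len(lam)."""
    n = len(lam)
    return det([[h(lam[i] + j - i, xs) for j in range(n)] for i in range(n)])

def schur_via_ssyt(lam, xs):
    """Sum of x^{w(T)} over semistandard tableaux of shape lam, entries 1..n."""
    n = len(xs)
    cells = [(r, c) for r, row in enumerate(lam) for c in range(row)]
    total = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            total += prod(xs[v - 1] for v in filling)
    return total

lam, xs = (2, 1), [2, 3, 5]
print(jacobi_trudi(lam, xs), schur_via_ssyt(lam, xs))  # both are 280
```

For $\lambda = (2,1)$ the determinant is the $2 \times 2$ matrix $\begin{pmatrix} h_2 & h_3 \\ h_0 & h_1 \end{pmatrix}$, i.e. $s_{21} = h_2 h_1 - h_3$.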

One final fact: although the Littlewood-Richardson rule (briefly discussed last post) gives a general algorithm for multiplying Schur polynomials, there are other rules for special cases. One of these is the Pieri rule, which is much easier to understand, and computes $s_{1,\ldots,1} \cdot s_\lambda (= e_k \cdot s_\lambda)$. I’ll discuss it in detail, as it gives some of the ‘flavor’ for working with alternating polynomials.

Pieri Rule. We have

$\displaystyle{s_{1^k} \cdot s_\lambda = \sum_{\nu} s_\nu,}$

where the sum runs over all $\nu \supset \lambda$ such that $\nu/\lambda$ is a vertical strip of $k$ boxes. That is, no two of the boxes we added to $\lambda$ to get $\nu$ should be in the same row. (To remember this, keep in mind that the partition $(1^k)$ is, itself, a vertical strip.)

Proof. We’ll use the ratio of alternants formula somewhat indirectly: we consider $s_{1^k} \cdot s_\lambda \cdot \Delta(x_1, \ldots, x_n)$ and decompose the result into a sum of terms of the form $\Delta(\nu).$

First, let’s determine the coefficients of monomials in $s_{1^k} \cdot s_\lambda \cdot \Delta(x_1, \ldots, x_n) = s_{1^k} \cdot \Delta(\lambda)$. This is an alternating polynomial, so the coefficient of $x_1^{a_1} \cdots x_n^{a_n}$ vanishes if two $a_i$‘s are equal. So, up to sign changes, we can assume $a_1 < \cdots < a_n$.

Now, we know that $\Delta(\lambda)$ is the determinant with entries $x_i^{j-1+\lambda_{n+1-j}}$. And $s_{1^k} = e_k$ is the sum of all squarefree monomials of total degree $k$. If we expand the determinant and the product, and consider the monomials we get, we see that we’re trying to get exponents

$(a_1, \ldots, a_n) = \sigma \cdot (\rho + \lambda^R) + \epsilon$,

where $\rho = (0,1,\ldots,n-1)$, $\lambda^R$ means the reversal of $\lambda$ (since we're counting upwards), $\sigma$ is a permutation, and $\epsilon$ is a 0-1 vector with exactly $k$ ones (recording which variables appear in the squarefree monomial from $e_k$).

The key observation is: $\rho + \lambda^R$ is in strictly-increasing order, and adding $k$ ones here and there can, at most, weaken certain strict inequalities to equalities. It can never reverse them. In particular, if the end result is $(a_1, \ldots, a_n)$, which is again in strictly-increasing order, then:

• the permutation $\sigma$ must be the identity,
• the $k$ ones can only be placed at entries $j$ where the strict inequality with entry $j+1$ survives the bump (either there was a gap to spare, or entry $j+1$ receives a one as well).

In other words, either $a_j = \rho_j + \lambda_{n+1-j}$ or $\rho_j + \lambda_{n+1-j} + 1$, and the latter case happens precisely when "there's room for it" without breaking the inequality.

Thinking pictorially, this means $a$ corresponds to a new partition, that is,

$a = \rho + \nu^R,$

with $\nu$ obtained by adding $k$ new boxes to $\lambda$ in a “vertical strip” (since each row increments by at most 1), and the placement rule is just guaranteeing that the new boxes result in a valid Young diagram. Moreover, $\nu$ only arises once this way, since it comes from a unique choice of the $k$ ones.

To complete the proof, we double-check that this is equal to $\sum_\nu \Delta(\nu)$, the sum over the relevant partitions $\nu$. This is more or less immediate (it’s alternating, restrict to monomials with strictly increasing exponents, compare coefficients).
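The rule is easy to test numerically as well. The sketch below (ad hoc names, brute force throughout) checks $e_2 \cdot s_{(2,1)}$ in three variables; candidate shapes with more than three rows are dropped, since their Schur polynomials vanish in three variables:

```python
# Check the Pieri rule e_k * s_lambda = sum_nu s_nu numerically for
# lambda = (2, 1), k = 2, in three variables.
from itertools import combinations, product
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k, evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, k))

def schur_via_ssyt(lam, xs):
    """Sum of x^{w(T)} over semistandard tableaux of shape lam, entries 1..n."""
    n = len(xs)
    cells = [(r, c) for r, row in enumerate(lam) for c in range(row)]
    total = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            total += prod(xs[v - 1] for v in filling)
    return total

def pieri_terms(lam, k, n):
    """Partitions nu (at most n rows) with nu/lam a vertical strip of k boxes."""
    padded = list(lam) + [0] * (n - len(lam))
    out = []
    for rows in combinations(range(n), k):  # at most one new box per row
        nu = padded[:]
        for r in rows:
            nu[r] += 1
        if all(nu[i] >= nu[i + 1] for i in range(n - 1)):
            out.append(tuple(v for v in nu if v > 0))
    return out

lam, k, xs = (2, 1), 2, [2, 3, 5]
lhs = e(k, xs) * schur_via_ssyt(lam, xs)
rhs = sum(schur_via_ssyt(nu, xs) for nu in pieri_terms(lam, k, len(xs)))
print(sorted(pieri_terms(lam, k, len(xs))))  # [(2, 2, 1), (3, 1, 1), (3, 2)]
print(lhs, rhs)  # both are 8680
```

So $e_2 \cdot s_{(2,1)} = s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)}$ in three variables, as the rule predicts.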

Corollary. The change-of-basis from $e$ to the Schur polynomials is given by

$\displaystyle{e_\lambda = \sum_\mu K_{\mu^T, \lambda} s_\mu. }$

Proof. By definition, $e_\lambda = e_{\lambda_1} \cdots e_{\lambda_k}$. Since $e_i = s_{1^i}$, we can apply the Pieri rule repeatedly. The coefficient of $s_\mu$ is, then, the number of ways of getting $\mu$ by putting together a sequence of $k$ vertical strips of the appropriate sizes, starting from the empty diagram.

This is the same data needed to construct a “row-strict” tableau of shape $\mu$ and weight $\lambda$ (imagine filling in the $i$-th vertical strip with all $i$‘s). In other words, (by transposing, so as to be column-strict) the coefficient is the number of SSYTs of shape $\mu^T$ and weight $\lambda$.
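Here is a quick numeric check of the corollary for $\lambda = (2,1)$ in three variables (a sketch; the list of partitions of $3$ is hardcoded for brevity, and the helper names are my own):

```python
# Check e_2 * e_1 = sum_mu K_{mu^T, lambda} s_mu for lambda = (2, 1)
# in three variables, at a sample integer point.
from itertools import combinations, product
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k, evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, k))

def schur_via_ssyt(lam, xs):
    """Sum of x^{w(T)} over semistandard tableaux of shape lam, entries 1..n."""
    n = len(xs)
    cells = [(r, c) for r, row in enumerate(lam) for c in range(row)]
    total = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            total += prod(xs[v - 1] for v in filling)
    return total

def kostka(shape, weight):
    """Number of SSYT of the given shape with content `weight`."""
    n = len(weight)
    cells = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    count = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok and all(filling.count(i + 1) == weight[i] for i in range(n)):
            count += 1
    return count

def transpose(lam):
    """Conjugate partition: column lengths of the Young diagram."""
    return tuple(sum(1 for r in lam if r > c) for c in range(lam[0]))

xs = [2, 3, 5]
weight = (2, 1, 0)                           # lambda = (2, 1), padded to three parts
partitions_of_3 = [(3,), (2, 1), (1, 1, 1)]  # hardcoded list of partitions of 3
lhs = e(2, xs) * e(1, xs)
rhs = sum(kostka(transpose(mu), weight) * schur_via_ssyt(mu, xs)
          for mu in partitions_of_3)
print(lhs, rhs)  # both are 310
```

Unwinding the Kostka numbers here gives $e_{(2,1)} = s_{(2,1)} + s_{(1,1,1)}$.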

## Some final remarks

By now, we’ve computed many changes of basis: $s, e$ to $m$, and $h,p$ to $e$ (somewhat indirectly, since I only wrote out a polynomial change of basis, not an additive one). Now we also know $e$ to $s$.

Ignoring the somewhat-less-important power symmetric polynomials, we have almost every change of basis: we’re just missing $h$ to $m$ and $h$ to $s$. (It’s also easy to get $p$ to $m$.) Stay tuned!