
Everything you wanted to know about symmetric polynomials, part I

A symmetric polynomial f(x_1, \ldots, x_n) is a polynomial which doesn't change under permutations of the variables. So, f(x,y,z) = f(x,z,y) = f(y,z,x), and so on. Because of the simplicity of the definition (and the ubiquity of polynomials in math), symmetric polynomials show up in a variety of fields: algebraic geometry and combinatorics, Galois theory, birational geometry and intersection theory, linear algebra, and representation theory, to name just a few.

Perhaps the most well-known are the elementary symmetric polynomials e_k:

\displaystyle{ e_k = \sum (\text{ all squarefree monomials of total degree k }) }


So, if our variables are x,y,z, then:

\begin{array}{rl}  e_1 =& x + y + z,\\  e_2 =& xy + yz + xz,\\  e_3 =& xyz.\\  \end{array}

(With only three variables, e_k = 0 for k \geq 4.)
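Here's a minimal computational sketch of this definition – a brute-force sum over k-element subsets of the variables. (This is just a sketch assuming SymPy is available; the helper name e is mine, purely for illustration.)

    from math import prod
    from itertools import combinations
    from sympy import symbols

    x, y, z = symbols('x y z')

    def e(k, vs):
        # e_k: sum over all k-element subsets (i.e. squarefree monomials)
        return sum(prod(c) for c in combinations(vs, k))

    print(e(1, [x, y, z]))  # x + y + z
    print(e(2, [x, y, z]))  # x*y + x*z + y*z
    print(e(3, [x, y, z]))  # x*y*z
    print(e(4, [x, y, z]))  # 0 -- no squarefree degree-4 monomials in 3 variables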

The elementary symmetric polynomials are well-known because they relate the coefficients of (single-variable) polynomials to their roots. Namely, if f(z) is a monic polynomial of degree n with roots a_1, \ldots, a_n, then

\displaystyle{ f(z) = (z-a_1)(z-a_2)\cdots(z-a_n) = z^n - e_1 z^{n-1} + e_2 z^{n-2} - \cdots + (-1)^n e_n.}

It's easy to see this – just expand the product and collect powers of z; the alternating signs come from the -a_i factors. Here, of course, e_k is shorthand for e_k(a_1, \ldots, a_n). There are so many cases where it's useful to relate the roots and coefficients of polynomials that it's hard to enumerate them! Think eigenvalues, algebraic numbers, interpolation problems, not to mention things like Chern classes and Schubert calculus (the latter of which I'll get into on this blog soon enough).
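Here's a quick sanity check of the signed formula for n = 3, again a sketch assuming SymPy (all names are just for illustration):

    from math import prod
    from itertools import combinations
    from sympy import symbols, expand

    z, a1, a2, a3 = symbols('z a1 a2 a3')
    roots = [a1, a2, a3]

    def e(k):
        # elementary symmetric polynomial e_k(a1, a2, a3)
        return sum(prod(c) for c in combinations(roots, k))

    lhs = expand(prod(z - r for r in roots))
    rhs = expand(z**3 - e(1)*z**2 + e(2)*z - e(3))
    print(expand(lhs - rhs) == 0)  # True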

In this post, I want to sketch out the basic theory of symmetric polynomials, from an algebraic and combinatorial viewpoint.

The Ring of Symmetric Polynomials

As we saw above, some of the e_k's are zero if we don't have enough variables, even though they are nonzero when we have more. There's a neat approach that lets us avoid, or at least postpone, this sort of technicality: we'll deal with "all" symmetric polynomials, in infinitely-many (or at least arbitrarily-many) variables, all at once.

To do this, we’ll define \Lambda, the ring of symmetric polynomials in infinitely-many variables. Note that \Lambda does not consist of polynomials in the usual sense – its elements are things like

e_2 = xy + xz + yz + xw + yw + zw + xt + yt + zt + wt + xr + \cdots

or

p_3 = x_1^3 + x_2^3 + x_3^3 + x_4^3 + \cdots

(The latter is the third power symmetric polynomial.)

In particular, \Lambda is not contained in the ring k[x_1, x_2, \ldots], since its elements are infinite sums of monomials. On the other hand, it’s not totally crazy — all the elements will still have finite degree, so there are no infinite monomials and no power series.

Still, we have to find another way to construct \Lambda than as a subring of an ordinary ring of polynomials. So, consider the rings of symmetric polynomials in finitely-many variables,

\displaystyle{ \cdots \to k[x_1,\ldots,x_n]^{S_n} \to \cdots \to k[x_1,x_2,x_3]^{S_3} \to k[x_1,x_2]^{S_2} \to k[x_1], }

with transition maps given by setting the last variable to zero (that is, k[x_1,\ldots,x_n]^{S_n} \to k[x_1,\ldots,x_{n-1}]^{S_{n-1}} sends x_n \mapsto 0). The inverse limit, \varprojlim k[x_1, \ldots, x_n]^{S_n}, is too big, since it actually does include the power series and infinite monomials we wanted to avoid. For example, it includes crazy stuff like

f = p_1 + p_2 + p_3 + \cdots = (x_1 + x_2 + \cdots) + (x_1^2 + x_2^2 + \cdots) + (x_1^3 + x_2^3 + \cdots) + \cdots,

whose monomials have unbounded degree.

So, we’ll take \Lambda \subseteq \varprojlim k[x_1,\cdots,x_n]^{S_n} to be the subring consisting of (inverse limits of) polynomials of bounded degree. (That is, for each f \in \Lambda, there should be a single upper bound for the degree of every monomial in f.)

Of course, in practice we can specialize to k[x_1, \ldots, x_n]^{S_n} for sufficiently large n whenever it’s convenient. (Generally, we’ll do this when it’s convenient to think explicitly in terms of the variables x_i, rather than things like e_k and p_k.)
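To see the transition maps in action, here's a small SymPy sketch (helper names mine) checking that setting the last variable to zero carries e_2 in four variables to e_2 in three:

    from math import prod
    from itertools import combinations
    from sympy import symbols, expand

    def e2(n):
        # e_2 in the variables x1, ..., xn
        xs = symbols(f'x1:{n+1}')
        return sum(prod(c) for c in combinations(xs, 2))

    x4 = symbols('x4')
    # truncating from 4 variables down to 3 recovers e_2 in 3 variables
    print(expand(e2(4).subs(x4, 0) - e2(3)) == 0)  # True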

Additive and Ring Bases for \Lambda

Despite being infinitely-generated, \Lambda has a lot of structure that keeps it pretty manageable. For one thing, it's graded by degree, and in each degree it's a finite-dimensional k-vector space, since there are only finitely-many "types" of monomial of a given total degree. (We'll formalize this in a moment.) For example, there's only one linear symmetric polynomial (up to scaling): \sum x_i.

What I want to do now is give several very natural (and one slightly less natural) bases for \Lambda, both additively and as a ring. Note: for simplicity, I’m working with coefficients from a field. Working over \mathbb{Z} is fine for all the bases except one, the power symmetric basis. Likewise, working in positive characteristic causes problems with that basis. Oh well – let’s just stick to (characteristic zero) fields, or gloss over the choice of coefficients.

Throughout, let \lambda = (\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k \geq 0) be a partition.

Here are the bases, called m,e,h,p,s.

  • The monomial symmetric polynomial m_\lambda is the sum of all distinct monomials of the form x_{i_1}^{\lambda_1} x_{i_2}^{\lambda_2} \cdots x_{i_k}^{\lambda_k}, where the indices i_1, \ldots, i_k are distinct.
  • The elementary symmetric polynomial e_\lambda = e_{\lambda_1} \cdots e_{\lambda_k}, where, as defined earlier, e_k = m_{1,1,\ldots,1}, the sum of all squarefree monomials of degree k.
  • The homogeneous symmetric polynomial h_\lambda = h_{\lambda_1} \cdots h_{\lambda_k}, where h_k is the sum of all monomials of total degree k.
  • The power symmetric polynomial p_\lambda = p_{\lambda_1} \cdots p_{\lambda_k}, where, as defined earlier, p_k = m_{(k)}, the sum of all the k-th powers x_i^k.
  • The Schur polynomial \displaystyle{ s_\lambda = \sum_{T \in SSYT(\lambda)} x^T}. Here T is a semistandard Young tableau of shape \lambda, and x^T is the monomial with exponents given by the weight of T. For example, if T has three 1s and two 5s, then x^T = x_1^3x_5^2.

Examples: if \lambda = (3,1), then (omitting the Schur polynomial):

\begin{array}{ll}  m_\lambda &= x^3y + y^3x + z^3x + x^3z + y^3z + \cdots \\  e_\lambda &= (xyz + xyw + xzw + yzw + \cdots)(x + y + z + \cdots) \\  h_\lambda &= (x^3 + y^3 + x^2z + xz^2 + xyz + \cdots)(x + y + z + \cdots) \\  p_\lambda &= (x^3 + y^3 + z^3 + w^3 + \cdots)(x + y + z + \cdots)  \end{array}
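For concreteness, here's a small SymPy sketch of the e, h, p families (helper names mine), with two classical identities, p_1^2 = p_2 + 2e_2 and h_2 = e_1^2 - e_2, thrown in as spot checks:

    from math import prod
    from itertools import combinations, combinations_with_replacement
    from sympy import symbols, expand

    n = 4
    xs = symbols(f'x1:{n+1}')

    def e(k):  # sum of squarefree monomials of degree k
        return sum(prod(c) for c in combinations(xs, k))

    def h(k):  # sum of ALL monomials of degree k (repeats allowed)
        return sum(prod(c) for c in combinations_with_replacement(xs, k))

    def p(k):  # sum of k-th powers
        return sum(x**k for x in xs)

    # the lambda-indexed versions are just products, e.g. e_{(3,1)}:
    e_31 = e(3) * e(1)

    print(expand(p(1)**2 - (p(2) + 2*e(2))) == 0)  # True
    print(expand(h(2) - (e(1)**2 - e(2))) == 0)    # True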

A few remarks are in order:

  • The monomial symmetric polynomials are obviously an additive basis for \Lambda. That is, it’s obvious both that they’re linearly independent, and that they span. (If f \in \Lambda includes some monomial, then by symmetry it must include the whole corresponding m_\lambda.) In particular, observe that the n-th graded component of \Lambda has vector space dimension p(n), the number of partitions of n.
  • None of the other claimed bases are obviously independent or spanning. Note, however, that all of them are indexed by the same set – the partitions – and that, for each one, |\lambda| is the degree. So, since each graded piece of \Lambda is finite-dimensional, with dimension equal to the number of candidate basis elements in that degree, it's actually enough to show just linear independence, or just that they span. We'll mostly show both, but only because it takes no extra work to do so.
  • It’s easy to multiply the e,h,p bases: by definition, this just results in stringing together the corresponding partitions. It’s not as easy to multiply in the m,s bases – there are some structure constants to figure out,

    \displaystyle{ m_\lambda \cdot m_\mu = \sum_\nu a_{\lambda\mu}^\nu m_\nu \qquad \text{ and } \qquad s_\lambda \cdot s_\mu = \sum_\nu c_{\lambda\mu}^\nu s_\nu}.

    The structure constants c_{\lambda\mu}^\nu for multiplying the Schur polynomials are called the Littlewood-Richardson numbers, and are pretty deep. We’ll return to them later. The (nameless) constants a_{\lambda\mu}^\nu for multiplying the monomial symmetric polynomials are much easier.

  • For the Schur polynomials, it's not obvious that they are even symmetric polynomials to begin with! Luckily, I indirectly 'proved' this in my last post, as a consequence of the RSK correspondence (though I didn't actually prove RSK). Namely, RSK implies that the Kostka number K_{\lambda\mu}, the number of semistandard tableaux of shape \lambda and weight \mu, doesn't depend on the ordering of \mu. Consequently, we have the expansion

    \displaystyle{ s_\lambda = \sum_\mu K_{\lambda\mu} m_{\mu},}

    where \mu ranges over partitions. Later, I hope to give a more direct proof of the symmetry of the Schur polynomials, via the very cool Bender-Knuth involution.
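As a sanity check on the tableau definition, here's a brute-force Python sketch (all names mine, purely illustrative) that enumerates the semistandard tableaux of a small shape and assembles s_\lambda as a dictionary mapping exponent vectors to coefficients; by the expansion above, those coefficients are exactly the Kostka numbers K_{\lambda\mu}.

    from itertools import combinations_with_replacement
    from collections import Counter

    def ssyt(shape, n):
        # yield semistandard Young tableaux of the given shape, entries in 1..n:
        # rows weakly increase (combinations_with_replacement gives weakly
        # increasing tuples), and columns must strictly increase
        def rows(i, above):
            if i == len(shape):
                yield []
                return
            for row in combinations_with_replacement(range(1, n + 1), shape[i]):
                if above is None or all(row[j] > above[j] for j in range(len(row))):
                    for rest in rows(i + 1, row):
                        yield [row] + rest
        yield from rows(0, None)

    def schur(shape, n):
        # s_shape(x_1, ..., x_n) as a Counter: exponent vector -> coefficient
        poly = Counter()
        for T in ssyt(shape, n):
            weight = [0] * n
            for row in T:
                for entry in row:
                    weight[entry - 1] += 1
            poly[tuple(weight)] += 1
        return poly

    print(schur((2, 1), 3))

For \lambda = (2,1) in three variables, this finds 8 tableaux, giving s_{(2,1)} = m_{(2,1)} + 2m_{(1,1,1)}.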

I'll stop here for now. Next post, I'll prove that the families above really are bases.
