Combinatorial theory of Macdonald polynomials I: Proof of Haglund's formula

Edited by Richard V. Kadison, University of Pennsylvania, Philadelphia, PA, and approved December 7, 2004 (received for review November 15, 2004)
Abstract
Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H̃_μ. We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H̃_μ. As corollaries, we obtain the cocharge formula of Lascoux and Schützenberger for Hall–Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J_μ, a formula for H̃_μ in terms of Lascoux–Leclerc–Thibon polynomials, and combinatorial expressions for the Kostka–Macdonald coefficients K̃_{λ,μ} when μ is a two-column shape.
In 1988, Kadell (1) conjectured the existence of a family of symmetric polynomials, depending on variables x_1, ..., x_n, a partition μ of n, and two other parameters, which were involved in a generalization of Selberg's integral. Later that same year, Macdonald (2) resolved this conjecture, introducing the symmetric polynomials P_μ[x_1, ..., x_n; q, t] that now bear his name. The P_μ (along with their variants Q_μ, J_μ, and H̃_μ, defined below) simultaneously generalize many well-known bases for symmetric functions, such as the Schur basis, the monomial basis, the elementary symmetric functions, the Hall–Littlewood basis, Jack's symmetric functions, zonal symmetric polynomials, and zonal spherical functions. The Macdonald polynomials and their specializations have found applications in many diverse fields, including hypergeometric function theory, representation theory, algebraic geometry, Fourier analysis on homogeneous spaces, statistics, algebraic combinatorics, quantum mechanics, and group theory.
Unfortunately, the original definition of the Macdonald polynomials P_μ is quite indirect and nonexplicit. Consequently, for the next 16 years, these polynomials had to be studied by using rather technical algebraic and analytic methods. Besides the pioneering work of Macdonald himself, Cherednik (3) made great advances by exploiting links between Macdonald polynomials and the representation theory of affine Hecke algebras (4), whereas Garsia and coworkers (5–7) obtained many results by using intricate manipulations of formal power series and plethystic identities. Haiman (8), using properties of the Hilbert scheme of n points in the plane, gave a representation-theoretic interpretation for H̃_μ that proved Schur positivity, thereby resolving one of Macdonald's original conjectures. However, none of these developments yielded an explicit combinatorial formula for the Macdonald polynomials. Many combinatorialists hoped to express these polynomials as a sum of weighted objects, similar to the definition of Schur functions using semistandard tableaux. Despite all efforts, such a formula eluded researchers for years, and it was generally felt that combinatorial methods were simply not powerful enough to handle the algebraic subtleties of Macdonald polynomials. However, in light of the recent conjecture by Haglund giving a combinatorial formula for the modified Macdonald polynomials (9), such pessimism is now seen to be unwarranted. This article outlines a proof of the conjecture using only elementary combinatorial techniques. Complete details and proofs for all results mentioned herein will appear in a forthcoming paper.
Background in Algebraic Combinatorics
We begin by reviewing some standard material concerning partitions, tableaux, and symmetric functions; more details may be found in ref. 10. Throughout this whole discussion, let n and N ≥ n be fixed positive integers, and let z _{1},..., z_{N} be independent indeterminates. (By taking suitable inverse limits, we can allow N = ∞ as well.)
Partitions. A “partition” of n is a weakly decreasing sequence μ = (μ_1 ≥ μ_2 ≥ ··· ≥ μ_s) of positive integers such that μ_1 + ··· + μ_s = n. In this situation, we write |μ| = n, μ ⊢ n, l(μ) = s, and μ_i = 0 for i > s. The “diagram” of μ, which we also denote by μ, is the set of “cells” {(x, y): 1 ≤ y ≤ l(μ), 1 ≤ x ≤ μ_y}. We lexicographically order the cells of the diagram by setting (x, y) <_lex (u, v) if and only if (iff) y > v, or y = v and x < u. For any c = (x_0, y_0) in the diagram of μ, we define the “arm” a(c) = |{(x, y_0) ∈ μ: x > x_0}| and the “leg” l(c) = |{(x_0, y) ∈ μ: y > y_0}|.
Pictorially, we represent the cell c = (x, y) as a unit square whose upper-right corner is located at the point (x, y). Then the diagram of μ consists of l(μ) rows of squares in the first quadrant of the xy plane, left-justified, with μ_i squares in the ith row from the bottom. For any square c, a(c) is the number of squares strictly to the right of c in μ, and l(c) is the number of squares strictly above c in μ.
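Concretely, the diagram, arm, and leg admit a direct sketch in code (the helper names are ours, not the paper's):

```python
def cells(mu):
    """All cells (x, y) of the diagram of mu: row y (counted from the
    bottom) contains the cells (1, y), ..., (mu_y, y)."""
    return [(x, y) for y, row in enumerate(mu, start=1)
                   for x in range(1, row + 1)]

def arm(mu, c):
    """a(c): the number of cells of mu strictly to the right of c."""
    x, y = c
    return mu[y - 1] - x

def leg(mu, c):
    """l(c): the number of cells of mu strictly above c."""
    x, y = c
    return sum(1 for row in mu[y:] if row >= x)

mu = (3, 2)                                # rows of sizes 3 and 2
print(len(cells(mu)))                      # 5
print(arm(mu, (1, 1)), leg(mu, (1, 1)))    # 2 1
```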
The “transpose” of μ, denoted μ′, is the partition with diagram {(y, x): (x, y) ∈ μ}. Geometrically, the diagram of μ′ is the reflection of the diagram of μ about the line y = x. For λ, μ ⊢ n, we write λ ≤ μ iff λ_{1} + ··· + λ _{i} ≤ μ_{1} + ··· + μ _{i} for all i. This gives the “dominance” partial ordering on the set of partitions of n.
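The transpose and the dominance order can likewise be sketched (names are ours); the final check illustrates the standard fact, used later, that λ ≤ μ iff μ′ ≤ λ′:

```python
def conjugate(mu):
    """Transpose mu': the column lengths of the diagram of mu."""
    if not mu:
        return ()
    return tuple(sum(1 for row in mu if row >= i) for i in range(1, mu[0] + 1))

def dominates(mu, lam):
    """True iff lam <= mu in dominance order (partitions of the same n)."""
    total_l = total_m = 0
    for i in range(max(len(lam), len(mu))):
        total_l += lam[i] if i < len(lam) else 0
        total_m += mu[i] if i < len(mu) else 0
        if total_l > total_m:
            return False
    return True

print(conjugate((3, 2)))               # (2, 2, 1)
print(dominates((3, 1), (2, 1, 1)))    # True: (2,1,1) <= (3,1)
print(dominates(conjugate((2, 1, 1)), conjugate((3, 1))))   # True as well
```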
Fillings and Tableaux. For all m > 0, we define “alphabets” A_m^+ = {1, 2, ..., m}, A_m^− = {−1, −2, ..., −m}, and A_m = A_m^+ ∪ A_m^−. For μ ⊢ n, a “filling” of the diagram μ with entries in an alphabet A is a function T: μ → A. A “standard filling” is a bijection T: μ → A_n^+. For any filling T, we set T(x, y) = 0 whenever (x, y) is not in the diagram of μ. We also define z_T = ∏_{c∈μ} z_{|T(c)|}, N(T) = |T^{−1}({i ∈ A: i < 0})|, and P(T) = |T^{−1}({i ∈ A: i > 0})|.
Given any total ordering ≺ on some alphabet A, a semistandard tableau of shape μ relative to ≺ is a filling Q: μ → A such that: x < x′ implies Q(x, y) ⪯ Q(x′, y) for all y such that (x, y), (x′, y) ∈ μ; and y < y′ implies Q(x, y) ≺ Q(x, y′) for all x such that (x, y), (x, y′) ∈ μ. We let SSYT_≺(μ) denote the set of such tableaux; let SSYT_≺(μ, ν) = {Q ∈ SSYT_≺(μ): z_Q = z^ν}. Also, define the “standard tableaux” SYT(μ) to be the set of standard fillings S: μ → A_n^+ in SSYT_<(μ), where < is the usual total ordering on A_n^+.
Pictorially, a filling T of μ is obtained by putting the letter T(x, y) ∈ A in each square (x, y) of the diagram of μ. T is a standard filling iff it uses the letters 1,..., n exactly once each. The monomial z_{T} records which letters appear in T, disregarding signs; N(T) is the number of negative letters used; and P(T) is the number of positive letters used. In a semistandard tableau Q, letters weakly increase in each row and strictly increase in each column (relative to the given ordering ≺). A standard tableau is a semistandard tableau (relative to <) in which the letters 1,..., n are used once each.
Words. A “word” in an alphabet A is a sequence of letters w = w_1 w_2 ··· w_m, where each w_i ∈ A. We can identify this word with a filling w: (m) → A in the obvious way, and we define z_w as before. A word w on A_N^+ has “partition content” iff z_w = z^μ for some μ ⊢ n; if μ = (1^n), the word w is a “permutation.” Given μ ⊢ n, let c_1, ..., c_n be the cells of μ in lex order. For a filling T: μ → A, define the “reading word” of T to be the word w(T) = T(c_1) ··· T(c_n), i.e., the word obtained by reading the elements in the rows of T from left to right, visiting the rows from top to bottom.
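In code, the lex order on cells and the reading word look like this (a filling is modeled as a dict from cells to letters; names are ours):

```python
def reading_word(T):
    """Reading word of a filling T (dict: cell -> letter): cells are taken
    in lex order, i.e., rows top to bottom and, within a row, left to right."""
    return [T[c] for c in sorted(T, key=lambda c: (-c[1], c[0]))]

# Filling of mu = (2, 1): bottom row holds 4, 2; the top row holds 5.
T = {(1, 1): 4, (2, 1): 2, (1, 2): 5}
print(reading_word(T))   # [5, 4, 2]
```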
Suppose (A, <) and (B, ≺) are two totally ordered alphabets. Also suppose that v is a word of length m in A, and that w is a word of length m in B such that w_1 ⪯ ··· ⪯ w_m and w_i = w_{i+1} implies v_i ≤ v_{i+1}. The Robinson–Schensted–Knuth (RSK) algorithm gives a bijection from the set of such pairs (v, w) onto the set of pairs (P, Q), where P ∈ SSYT_<(μ) and Q ∈ SSYT_≺(μ) for some μ ⊢ m. P is called the “insertion tableau” for (v, w), and Q is called the “recording tableau.” We have z_P = z_v and z_Q = z_w. See ref. 11 for more details.
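As a minimal illustration, here is RSK row insertion in the special case w = 1 2 ··· m, so that Q simply records the order in which cells are created; this is a sketch under our conventions (rows listed bottom to top), not the full biword algorithm:

```python
from bisect import bisect_right

def rsk(v):
    """Insert the word v letter by letter; return (P, Q) of the same shape,
    with P semistandard and Q standard: rows weakly increase and columns
    (read upward) strictly increase."""
    P, Q = [], []
    for step, letter in enumerate(v, start=1):
        r = 0
        while True:
            if r == len(P):                    # start a new top row
                P.append([letter])
                Q.append([step])
                break
            pos = bisect_right(P[r], letter)   # first entry strictly > letter
            if pos == len(P[r]):               # letter fits at the row's end
                P[r].append(letter)
                Q[r].append(step)
                break
            P[r][pos], letter = letter, P[r][pos]   # bump and move up
            r += 1
    return P, Q

print(rsk([2, 1, 2]))   # ([[1, 2], [2]], [[1, 3], [2]])
```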
Standardization. Let (A, ≺) be a totally ordered alphabet, and let T: μ → A be a filling of μ ⊢ n. We define a standard filling S = std(T, ≺) called the “standardization of T relative to ≺.” Formally, S: μ → A ^{+} _{n} is the unique standard filling such that: T(c) ≺ T(d) implies S(c) < S(d); T(c) = T(d) > 0 and c <_{lex} d implies S(c) < S(d); and T(c) = T(d) < 0 and c <_{lex} d implies S(c) > S(d). Informally, we obtain S from T by relabeling the cells of T with the numbers 1,..., n, such that: earlier symbols in the ordering ≺ get relabeled first; cells containing equal positive symbols get relabeled in lex order; and cells containing equal negative symbols get relabeled in reverse lex order. See Fig. 2 for examples of standardization relative to the orderings <_{1} and <_{2} defined below. In Fig. 2, we denote –a by ā for clarity.
If w is a permutation of n letters, we define D(w) to be the set of a ∈ {1, 2,..., n – 1} such that a + 1 occurs earlier than a in w. If S is a standard filling with reading word w(S), we define D(S) = D(w(S)). If T: μ → A is a filling and ≺ is a total ordering of A, we define D _{≺}(T) = D(std(T, ≺)).
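Standardization of a word and the descent set D(w) can be sketched directly (the filling case is identical, with cells taken in lex order); the argument `order` lists the alphabet in ≺-increasing order, and all names are ours:

```python
def standardize(w, order):
    """Relabel w with 1..n: smaller letters (w.r.t. `order`) are labeled
    first; ties among equal positive letters are broken left to right,
    ties among equal negative letters right to left."""
    rank = {a: r for r, a in enumerate(order)}
    idx = sorted(range(len(w)),
                 key=lambda i: (rank[w[i]], i if w[i] > 0 else -i))
    s = [0] * len(w)
    for label, i in enumerate(idx, start=1):
        s[i] = label
    return s

def D(w):
    """Descent set of a permutation w: all a with a+1 occurring before a."""
    pos = {a: i for i, a in enumerate(w)}
    return {a for a in range(1, len(w)) if pos[a + 1] < pos[a]}

print(standardize([2, 1, 2, -1], [1, -1, 2, -2]))   # [3, 1, 4, 2]
print(D([3, 1, 4, 2]))                              # {2}
```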
Symmetric Functions. Let F be the field Q(q, t). We consider homogeneous polynomials of degree n in the variables z_1, ..., z_N with coefficients in F. Such a polynomial is “symmetric” iff it is unchanged under any permutation of the variables z_1, ..., z_N; let Λ^n denote the set of symmetric polynomials. Important examples of symmetric functions are the “power sums” p_k = z_1^k + ··· + z_N^k, the “elementary symmetric functions” e_k = ∑_{1≤i_1<···<i_k≤N} z_{i_1} ··· z_{i_k}, and the “complete homogeneous symmetric functions” h_k = ∑_{1≤i_1≤···≤i_k≤N} z_{i_1} ··· z_{i_k}. Λ^n is an F-vector space with bases that are indexed by partitions of n. There are five classical bases for Λ^n (10).

Monomial basis. For μ ⊢ n, let m_μ be the sum of all distinct monomials in z_1, ..., z_N that can be obtained by permuting the subscripts of the monomial z_1^{μ_1} z_2^{μ_2} ··· z_s^{μ_s}. It is clear that {m_μ: μ ⊢ n} is an independent set spanning Λ^n.

Power-sum basis. For μ ⊢ n, let p_μ = p_{μ_1} p_{μ_2} ··· p_{μ_s}. Then {p_μ: μ ⊢ n} is a basis for Λ^n.

Elementary basis. For μ ⊢ n, let e_μ = e_{μ_1} e_{μ_2} ··· e_{μ_s}. Then {e_μ: μ ⊢ n} is a basis for Λ^n.

Homogeneous basis. For μ ⊢ n, let h_μ = h_{μ_1} h_{μ_2} ··· h_{μ_s}. Then {h_μ: μ ⊢ n} is a basis for Λ^n.

Schur basis. Let ≺ be any total ordering of the alphabet A_N^+. Define the “Schur function” s_μ = ∑_{T∈SSYT_≺(μ)} z_T. It can be shown that: s_μ does lie in Λ^n; the element s_μ is independent of the total ordering ≺ on A_N^+; and {s_μ: μ ⊢ n} is a basis for Λ^n.
The “Hall scalar product” ⟨·, ·⟩ is defined on Λ^n by requiring that the Schur basis be orthonormal. For any u ∈ F, we define a linear map A_u on Λ^n by setting A_u(p_μ) = ∏_{i=1}^{l(μ)} (u^{μ_i} − 1) p_μ and extending by linearity. The maps A_u form a commuting family of linear operators on Λ^n, which are invertible for u ≠ 1. We also use the classical map ω, which sends p_μ to (−1)^{n−l(μ)} p_μ and satisfies ω(s_μ) = s_{μ′}.
Quasisymmetric Functions. For each subset I ⊆ {1, 2, ..., n − 1}, define Gessel's “fundamental quasisymmetric function” Q_{n,I}(z_1, ..., z_N) = ∑ z_{a_1} z_{a_2} ··· z_{a_n}, the sum extending over all sequences 1 ≤ a_1 ≤ a_2 ≤ ··· ≤ a_n ≤ N with a_i < a_{i+1} for every i ∈ I. It can be shown that these 2^{n−1} polynomials are linearly independent. Let Q_F^n denote the vector space spanned by the elements Q_{n,I}. Then Λ^n is a subspace of Q_F^n.
Suppose μ ⊢ n, S ∈ SYT(μ), and < is the standard total ordering of A_n^+. From the definition of standardization, one sees easily that ∑_T z_T = Q_{n,D(S)}, the sum extending over all T ∈ SSYT_<(μ) with std(T, <) = S. Adding over all standard tableaux S, we obtain the expansion

s_μ = ∑_{S∈SYT(μ)} Q_{n,D(S)}. [1]
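The expansion just derived can be checked by brute force in a small case. The sketch below (all names ours) verifies it for the shape (2, 1) in N = 3 variables:

```python
from itertools import permutations, product

N, lam = 3, (2, 1)
cells = [(x, y) for y, row in enumerate(lam, 1) for x in range(1, row + 1)]
n = len(cells)

def is_ssyt(T):
    """Rows weakly increase; columns strictly increase upward."""
    ok_rows = all(T[x, y] <= T[x + 1, y] for (x, y) in T if (x + 1, y) in T)
    ok_cols = all(T[x, y] < T[x, y + 1] for (x, y) in T if (x, y + 1) in T)
    return ok_rows and ok_cols

def monomial(values):
    e = [0] * N
    for v in values:
        e[v - 1] += 1
    return tuple(e)

def add(poly, key):
    poly[key] = poly.get(key, 0) + 1

# Left side: the Schur function as the generating function of SSYT.
schur = {}
for vals in product(range(1, N + 1), repeat=n):
    if is_ssyt(dict(zip(cells, vals))):
        add(schur, monomial(vals))

# Right side: sum of Gessel's Q_{n, D(S)} over standard tableaux S.
def descent_set(word):
    pos = {a: i for i, a in enumerate(word)}
    return {a for a in range(1, n) if pos[a + 1] < pos[a]}

rhs = {}
for vals in permutations(range(1, n + 1)):
    S = dict(zip(cells, vals))
    if not is_ssyt(S):
        continue
    reading = [S[c] for c in sorted(S, key=lambda c: (-c[1], c[0]))]
    I = descent_set(reading)
    for a in product(range(1, N + 1), repeat=n):
        if all(a[i] <= a[i + 1] for i in range(n - 1)) and \
           all(a[i - 1] < a[i] for i in I):
            add(rhs, monomial(a))

assert schur == rhs
print(schur[(1, 1, 1)])   # 2: the coefficient of z1*z2*z3 in s_(2,1)
```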
Macdonald Polynomials
We now introduce several versions of the Macdonald polynomials (10, 12, 13), beginning with the “modified Macdonald polynomials” H̃_μ = H̃_μ[z_1, ..., z_N; q, t].
Theorem 1. There exists a unique basis {H̃_μ: μ ⊢ n} for Λ^n satisfying the following axioms:

(M1) ω(A_q(H̃_μ)) = ∑_{λ≥μ} a_{μ,λ} s_λ for some a_{μ,λ} ∈ F.

(M2) ω(A_t(H̃_μ)) = ∑_{λ≥μ′} b_{μ,λ} s_λ for some b_{μ,λ} ∈ F.

(M3) ⟨H̃_μ, s_{(n)}⟩ = 1.
Our main result is a constructive combinatorial proof of the existence assertion in Theorem 1. The uniqueness assertion is much easier to prove. Suppose the basis {G_μ: μ ⊢ n} also satisfies the three axioms. Consider column vectors G = (G_μ)_{μ⊢n}, H = (H̃_μ)_{μ⊢n}, S_q = ((ωA_q)^{−1}(s_μ))_{μ⊢n}, and S_t = ((ωA_t)^{−1}(s_{μ′}))_{μ⊢n}. Axiom M1 says that H = X′S_q and G = X″S_q for some invertible upper-triangular matrices X′, X″; hence, G = XH for some upper-triangular matrix X. Similarly, axiom M2 says that G = YH for some lower-triangular matrix Y. Because {G_μ} and {H̃_μ} are bases, we must have X = Y, so that X is a diagonal matrix. Axiom M3 now shows that X is the identity matrix, and G = H. For example, take G_μ = H̃_{μ′}[z_1, ..., z_N; t, q]. The family {G_μ: μ ⊢ n} clearly satisfies M1–M3, so uniqueness gives the “symmetry” property H̃_μ[z_1, ..., z_N; q, t] = H̃_{μ′}[z_1, ..., z_N; t, q].

Here is the outline of our proof of the existence of H̃_μ.

We rewrite the axioms M1–M3 in the following simpler form:

(A1) A_{q} (H̃ _{μ}) = ∑_{λ≤μ′} c _{μ,λ} m _{λ} for some c _{μ,λ} ∈ F.

(A2) A_{t} (H̃ _{μ}) = ∑_{λ≤μ} d _{μ,λ} m _{λ} for some d _{μ,λ} ∈ F.

(A3) The coefficient of z_1^n in H̃_μ is 1.
The equivalence of the two sets of axioms easily follows from these facts: μ ≤ ν iff ν′ ≤ μ′; ω(s _{μ}) = s _{μ′}; and the monomial and Schur bases are related by the triangular Kostka matrix.


We recall Haglund's definition of the “combinatorial Macdonald polynomials” C_μ, which are sums of weighted objects depending on μ. It will be clear that axiom A3 holds for C_μ.

We prove that C_μ ∈ Λ^n, i.e., that these polynomials are symmetric.

We use quasisymmetric functions to find combinatorial interpretations for A_{q} (C _{μ}) and A_{t} (C _{μ}) as sums of signed, weighted objects.

We construct sign-reversing involutions to verify axioms A1 and A2 for C_μ. Because {H̃_μ: μ ⊢ n} are the unique elements of Λ^n satisfying A1–A3, it follows that C_μ = H̃_μ.
Once we have constructed the modified Macdonald polynomials H̃_μ = H̃_μ[z_1, ..., z_N; q, t], we can define the “integral Macdonald polynomials” J_μ, the “dual Macdonald polynomials” Q_μ, and the “original Macdonald polynomials” P_μ in terms of H̃_μ (Eqs. 2–5). We remark that P_μ has the following specializations: when q = t, P_μ reduces to s_μ; when t → 1, P_μ reduces to m_μ; when q → 1, P_μ reduces to e_{μ′}; when q → 0, P_μ reduces to a Hall–Littlewood polynomial; and when q = t^α and then t → 1, P_μ reduces to Jack's symmetric polynomial. In turn, Jack's polynomials reduce to zonal spherical functions when α = 1/2 and to zonal symmetric polynomials when α = 2.
We will use the identity C_μ = H̃_μ to give combinatorial formulas for the bases J_μ, Q_μ, and P_μ. One of our formulas for J_μ specializes to a formula of Sahi and Knop for Jack polynomials. As additional applications of our combinatorial formulas, we sketch derivations of one of Macdonald's specializations for H̃_μ, Lascoux and Schützenberger's cocharge formula for Hall–Littlewood polynomials, and interpretations for certain Kostka–Macdonald coefficients.
Haglund's Combinatorial Polynomials
Fix μ ⊢ n. An ordered pair of cells ((x, y), (u, v)) in μ is an “attacking pair” iff y = v and x < u, or y = v + 1 and u < x. For a cell (x, y) ∈ μ, we let d(x, y) = (x, y − 1) be the cell directly below it, which does not lie in μ if y = 1. A “special triple” of μ consists of three cells (c_1, c_2, c_3) with c_1 = (x, y) ∈ μ, c_2 = (x, y − 1) = d(x, y), and c_3 = (z, y) ∈ μ for some z > x and some y ≥ 1. (When y = 1, the cell c_2 lies outside μ, and every filling takes the value 0 there.)
Let S be a standard filling. A “descent” of S is a cell (x, y) ∈ μ such that y > 1 and S(x, y) > S(x, y − 1). The set of all such cells is the “descent set” of S, denoted Des(S). The “major index” of S is maj(S) = ∑_{c∈Des(S)} (l(c) + 1). An “inversion pair” of S is an attacking pair ((x, y), (u, v)) such that S(x, y) > S(u, v); let Inv(S) be the set of such pairs. An “inversion triple” of S is a special triple (c_1, c_2, c_3) with S(c_1) < S(c_2) < S(c_3) or S(c_2) < S(c_3) < S(c_1) or S(c_3) < S(c_1) < S(c_2); let Trip(S) be the set of such triples. The “inversion statistic” for S is inv(S) = |Inv(S)| − ∑_{c∈Des(S)} a(c). An easy case analysis shows that inv(S) = |Trip(S)|. For any totally ordered alphabet (A, ≺) and any filling T: μ → A, we set inv_≺(T) = inv(std(T, ≺)) and maj_≺(T) = maj(std(T, ≺)).
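For a standard filling, all of these statistics are computable directly from the definitions; the following sketch (names are ours; S is a dict from cells of μ to 1..n) evaluates maj and inv for one filling of (2, 1):

```python
def arm(mu, c):
    return mu[c[1] - 1] - c[0]

def leg(mu, c):
    return sum(1 for row in mu[c[1]:] if row >= c[0])

def descents(mu, S):
    """Cells (x, y) with y > 1 whose entry exceeds the entry just below."""
    return {c for c in S if c[1] > 1 and S[c] > S[c[0], c[1] - 1]}

def maj(mu, S):
    return sum(leg(mu, c) + 1 for c in descents(mu, S))

def attacking_pairs(mu):
    cs = [(x, y) for y, row in enumerate(mu, 1) for x in range(1, row + 1)]
    return [(c, d) for c in cs for d in cs
            if (c[1] == d[1] and c[0] < d[0])
            or (c[1] == d[1] + 1 and d[0] < c[0])]

def inv(mu, S):
    inversions = sum(1 for c, d in attacking_pairs(mu) if S[c] > S[d])
    return inversions - sum(arm(mu, c) for c in descents(mu, S))

mu = (2, 1)
S = {(1, 1): 2, (2, 1): 1, (1, 2): 3}   # one of the 6 standard fillings
print(maj(mu, S), inv(mu, S))            # 1 1
```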
Haglund defined the combinatorial Macdonald polynomials (9) by setting

C_μ[z_1, ..., z_N; q, t] = ∑_T q^{inv_<(T)} t^{maj_<(T)} z_T,

the sum extending over all fillings T: μ → A_N^+, where < is the usual ordering on A_N^+. For each μ, there is exactly one positive filling T on μ with z_T = z_1^n, and inv_<(T) = 0 = maj_<(T) for this T. Therefore, axiom A3 holds for C_μ. Moreover, by using the definition of standardization, it is easy to see that C_μ = ∑_S q^{inv(S)} t^{maj(S)} Q_{n,D(S)}, where S ranges over all standard fillings of μ. Fig. 1 shows the objects used to calculate C_{(2,1)}.
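Haglund's formula is directly machine-checkable in small cases. The sketch below (names are ours) computes C_{(2,1)} in N = 2 variables by summing q^{inv} t^{maj} z_T over all 8 positive fillings, then checks axiom A3 and the symmetry in z_1, z_2:

```python
from itertools import product

mu = (2, 1)
cells = [(x, y) for y, row in enumerate(mu, 1) for x in range(1, row + 1)]
reading = sorted(cells, key=lambda c: (-c[1], c[0]))
N = 2

def standardize(T):
    """Equal positive letters are relabeled in reading (lex) order."""
    order = sorted(reading, key=lambda c: T[c])   # stable sort keeps lex ties
    return {c: i for i, c in enumerate(order, start=1)}

def stats(S):
    des = {c for c in S if c[1] > 1 and S[c] > S[c[0], c[1] - 1]}
    maj = sum(sum(1 for r in mu[c[1]:] if r >= c[0]) + 1 for c in des)
    attack = [(c, d) for c in cells for d in cells
              if (c[1] == d[1] and c[0] < d[0])
              or (c[1] == d[1] + 1 and d[0] < c[0])]
    inv = sum(1 for c, d in attack if S[c] > S[d]) \
        - sum(mu[c[1] - 1] - c[0] for c in des)
    return inv, maj

# C as a dict: (exponent of z1, exponent of z2) -> {(i, j): coeff of q^i t^j}
C = {}
for vals in product(range(1, N + 1), repeat=len(cells)):
    T = dict(zip(cells, vals))
    i, j = stats(standardize(T))
    e = (sum(1 for v in vals if v == 1), sum(1 for v in vals if v == 2))
    C.setdefault(e, {})
    C[e][i, j] = C[e].get((i, j), 0) + 1

print(C[3, 0])   # {(0, 0): 1}: axiom A3 holds
print(C[2, 1])   # the coefficient of z1^2 z2 is 1 + q + t
assert all(C[e] == C[e[::-1]] for e in C)   # symmetric in z1 and z2
```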
Symmetry of C_{μ}
Lascoux et al. (14) introduced a class of symmetric polynomials commonly known as Lascoux–Leclerc–Thibon (LLT) polynomials. One way to show that C_μ ∈ Λ^n is to write C_μ as a weighted sum of LLT polynomials. For μ ⊢ n and D ⊆ μ, set C_{μ,D} = ∑_T q^{|Inv(std(T,<))|} z_T, the sum extending over all fillings T: μ → A_N^+ with Des(std(T, <)) = D. We clearly have C_μ = ∑_{D⊆μ} C_{μ,D} ∏_{c∈D} t^{l(c)+1} q^{−a(c)}, so it suffices to prove that each C_{μ,D} is a symmetric polynomial. Using corollary 5.2.4 of ref. 15, it is easily shown that C_{μ,D} is an LLT polynomial indexed by a tuple of ribbons whose sizes are the parts of μ′ and whose shapes are determined by D. Because LLT polynomials are known to be symmetric, it follows that each C_{μ,D} ∈ Λ^n, and so C_μ ∈ Λ^n.
We now sketch an alternative, elementary proof that C_μ ∈ Λ^n. If m ≥ 0 and S is a subset of {(i, j): 1 ≤ i < j ≤ m}, we say that S has the “connectivity property” iff for all i, j, k with i < j < k, (i, k) ∈ S implies (i, j) ∈ S and (j, k) ∈ S. Let W_m = {1, 2}^m for m ≥ 0. For each w ∈ W_m, define inv_S(w) = |{(i, j) ∈ S: w_i > w_j}|. For any set Z ⊆ W_m, let G_{S,Z}(z_1, z_2; q) = ∑_{w∈Z} q^{inv_S(w)} z_{w_1} z_{w_2} ··· z_{w_m}.
Lemma 1. For all m ≥ 0 and every S with the connectivity property, G_{S,W_m}(z_1, z_2; q) = G_{S,W_m}(z_2, z_1; q).
Proof: We use induction on m, the cases m ≤ 1 being easy. Assume m > 1 and that Lemma 1 holds for all m′ < m. Consider the set of integers j such that (j, m) ∈ S. If this set is empty, then inv_S(w) does not depend on w_m, and hence G_{S,W_m}(z_1, z_2; q) = (z_1 + z_2) G_{S,W_{m−1}}(z_1, z_2; q), where we regard S as a set of pairs for W_{m−1}. By induction, the right side is symmetric in z_1 and z_2.
Otherwise, choose j minimal with (j, m) ∈ S. Now we prove the result for m by induction on |S|, the case S = ∅ being easy. The idea is to compare G_{S,W_m} and G_{S′,W_m}, where S′ = S − {(j, m)}. Note that |S′| < |S| and that S′ satisfies the connectivity property. Let B = {w ∈ W_m: w_j = 2 and w_m = 1}, and let A = W_m − B. Let S″ be obtained from S by deleting all pairs involving j or m and subtracting one from all indices exceeding j; S″ satisfies the connectivity property. One must now verify the following equations:

G_{S,W_m} = G_{S,A} + G_{S,B}, [7]
G_{S′,W_m} = G_{S′,A} + G_{S′,B}, [8]
G_{S,A} = G_{S′,A}, [9]
G_{S,B} = q^{m−j} z_1 z_2 G_{S″,W_{m−2}}, [10]
G_{S′,B} = q^{m−j−1} z_1 z_2 G_{S″,W_{m−2}}, [11]
G_{S,W_m} = G_{S′,W_m} + (q^{m−j} − q^{m−j−1}) z_1 z_2 G_{S″,W_{m−2}}. [12]

Here, Eqs. 7 and 8 are clear, and Eq. 9 follows because inv_S(w) = inv_{S′}(w) for all w ∈ A. To prove Eq. 10, consider the bijection φ: B → W_{m−2} that deletes w_j = 2 and w_m = 1 from each w ∈ B. It suffices to show that inv_S(w) = inv_{S″}(φ(w)) + m − j for w ∈ B. For every k with j < k < m, the pairs (j, k) and (k, m) are both in S by connectivity. Exactly one of these contributes to inv_S(w), depending on whether w_k = 1 or w_k = 2. Moreover, the pair (j, m) ∈ S contributes an inversion. No other pair involving j or m contributes to inv_S(w), by minimality of j. It follows easily that inv_S(w) exceeds inv_{S″}(φ(w)) by exactly m − j. Eq. 11 is proved in the same way, noting that (j, m) ∉ S′. Finally, Eq. 12 follows from the preceding five relations. By induction on m and |S|, the right side of Eq. 12 is symmetric in z_1 and z_2, completing the proof.
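Lemma 1 lends itself to exhaustive verification for small m. The sketch below (names are ours) checks every S with the connectivity property for m = 4; swapping z_1 and z_2 exchanges the roles of the letters 1 and 2 in each word:

```python
from itertools import combinations, product

m = 4
all_pairs = list(combinations(range(1, m + 1), 2))

def connected(S):
    """(i, k) in S forces (i, j) and (j, k) in S for every i < j < k."""
    return all((i, j) in S and (j, k) in S
               for (i, k) in S for j in range(i + 1, k))

def G(S):
    """Table (number of 1s in w, inv_S(w)) -> count, encoding G_{S, W_m}."""
    table = {}
    for w in product((1, 2), repeat=m):
        inv = sum(1 for i, j in S if w[i - 1] > w[j - 1])
        key = (w.count(1), inv)
        table[key] = table.get(key, 0) + 1
    return table

checked = 0
for r in range(len(all_pairs) + 1):
    for S in combinations(all_pairs, r):
        S = set(S)
        if connected(S):
            t = G(S)
            # z1 <-> z2 sends z1^k z2^(m-k) to z1^(m-k) z2^k
            assert all(t.get((k, i), 0) == t.get((m - k, i), 0)
                       for k in range(m + 1) for i in range(len(S) + 1))
            checked += 1
print(checked)   # number of connected sets S verified
```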
Now we prove symmetry of C_{μ,D}. For D ⊆ μ, let C_{μ,≥D} = ∑_{E: D⊆E⊆μ} C_{μ,E}. First, it suffices to show that C_{μ,≥D} is symmetric for all D. Second, it suffices to show that C_{μ,≥D} is unchanged when z_i and z_{i+1} are interchanged. Third, suppose we are given a fixed decomposition of the diagram of μ into disjoint sets A, B, and a fixed partial filling T_0: B → {1, ..., N} − {i, i + 1}. Consider the set Z of fillings T of μ with Des(T) ⊇ D, T|B = T_0, and T^{−1}({i, i + 1}) = A. It suffices to show that ∑_{T∈Z} q^{inv(T)} z_T is symmetric in z_i and z_{i+1}, because C_{μ,≥D} is a sum of such expressions. Let A_1 = {c ∈ A ∩ D: d(c) ∈ A}, A_2 = d(A_1), and A_3 = A − (A_1 ∪ A_2). One checks that, if Z is nonempty, then every T ∈ Z satisfies T(c) = i + 1 for c ∈ A_1 and T(c) = i for c ∈ A_2. Hence, we can identify T ∈ Z with the word w_T ∈ {i, i + 1}^{A_3} that lists the entries of the cells in A_3 in lex order. Change notation so that w_T ∈ {1, 2}^{A_3}. For T ∈ Z, one checks that there exists a set S satisfying the connectivity property and a constant j_0 (which depend on T_0, D, A, and B but not on T|A_3) such that inv(T) = j_0 + inv_S(w_T). It readily follows that

∑_{T∈Z} q^{inv(T)} z_T = q^{j_0} (z_i z_{i+1})^{|A_1|} (∏_{c∈B} z_{T_0(c)}) G_{S,W_{|A_3|}}(z_i, z_{i+1}; q).
By Lemma 1, this expression is unchanged when z_{i} and z _{i + 1} are interchanged.
Combinatorial Interpretation for A_{u}(C_{μ})
Let {x_i: i ∈ A_N} be a new set of indeterminates, and let R = F[{x_i: i ∈ A_N}] be the corresponding polynomial ring. Define an F-linear map B: Λ^n → R by setting B(p_μ) = ∏_{i=1}^{l(μ)} (∑_{j>0} x_j^{μ_i} + (−1)^{μ_i−1} ∑_{j<0} x_j^{μ_i}) and extending by linearity. Let ≺ be any total ordering of A_N. Define an F-linear map B_≺: Q_F^n → R by setting B_≺(Q_{n,I}) = ∑ x_{a_1} x_{a_2} ··· x_{a_n} and extending by linearity, the sum extending over all sequences a_1 ⪯ a_2 ⪯ ··· ⪯ a_n of letters in A_N such that, for each i, a_i = a_{i+1} > 0 implies i ∉ I and a_i = a_{i+1} < 0 implies i ∈ I.
It can be shown, by using the Pieri rules and the expansion (Eq. 1) of Schur functions in terms of quasisymmetric functions, that B(s_μ) = B_≺(s_μ) for all μ. Because these two linear maps agree on a basis of Λ^n, we see that the restriction of B_≺ to Λ^n coincides with B, for any ordering ≺ on A_N.
Now, given u ∈ F, consider the evaluation homomorphism e: R → F[z_1, ..., z_N] sending x_i to uz_i for i > 0 and sending x_i to −z_{|i|} for i < 0. Clearly, e ∘ B is the restriction to Λ^n of every map e ∘ B_≺. On one hand, we see that e ∘ B = A_u by evaluating each side on p_μ. On the other hand, e ∘ B_≺ can be computed directly on the quasisymmetric expansion of any element of Q_F^n. Using the fact that C_μ ∈ Λ^n, the same standardization argument used to prove Eq. 1 establishes the following theorem: for any total ordering ≺ of A_N,

A_u(C_μ) = ∑_T (−1)^{N(T)} u^{P(T)} q^{inv_≺(T)} t^{maj_≺(T)} z_T, [13]

the sum extending over all fillings T: μ → A_N.
Involutions
For λ, μ ⊢ n, let A_{μ,λ} be the set of fillings T: μ → A_N such that z_T = z^λ. We apply Eq. 13 with u = q and ≺ = <_1, where 1 <_1 −1 <_1 2 <_1 −2 <_1 3 <_1 −3 <_1 ···.
We conclude that the coefficient of m_λ in A_q(C_μ) is

c_{μ,λ} = ∑_{T∈A_{μ,λ}} (−1)^{N(T)} q^{P(T)+inv_{<_1}(T)} t^{maj_{<_1}(T)}. [14]

To verify axiom A1, we define a sign-reversing, weight-preserving involution I: A_{μ,λ} → A_{μ,λ} such that I has no fixed points unless λ ≤ μ′.
Let T ∈ A_{μ,λ}. To compute IT, consider the set of all attacking pairs of cells (c, d) in μ such that |T(c)| = |T(d)|. If this set is empty, then T is a fixed point of I; we call such fillings T “nonattacking.” Otherwise, choose the attacking pair in this set for which: (i) |T(c)| is minimized; (ii) d is maximal (relative to <_lex) among all attacking pairs satisfying (i); and (iii) c is maximal (relative to <_lex) among all attacking pairs satisfying (i) and (ii). Define IT by flipping the sign of the entry in cell c; i.e., IT(c) = −T(c) and IT(e) = T(e) for all e ≠ c. See Fig. 2 for an example.
It is clear that I maps A_{μ,λ} into itself and that I is an involution. Without loss of generality, assume that T(c) > 0. Write S = std(T, <_1) and IS = std(IT, <_1). One must now check the following statements by using the definitions of I and <_1: (i) N(IT) = N(T) + 1 and P(IT) = P(T) − 1; (ii) Des(IS) = Des(S), and so maj(IS) = maj(S); (iii) Inv(IS) = Inv(S) ∪ {(c, d)}, where (c, d) ∉ Inv(S); and (iv) inv(IS) = inv(S) + 1. It follows that the summands indexed by T and IT cancel out in Eq. 14.
If c_{μ,λ} ≠ 0, there must exist an uncancelled summand in Eq. 14, which corresponds to some fixed point T of I. For this T, there are no attacking pairs (c, d) with |T(c)| = |T(d)|. In particular, no two entries in the same row of T have the same absolute value. For each i, we have λ_1 + ··· + λ_i = |T^{−1}({±1, ..., ±i})| because z_T = z^λ. On the other hand, adding up the number of entries in each row of T with absolute value at most i, we see that

λ_1 + ··· + λ_i ≤ ∑_{j≥1} min(μ_j, i) = μ′_1 + ··· + μ′_i.

Therefore, λ ≤ μ′.
Next, we apply Eq. 13 with u = t and ≺ = <_2, where 1 <_2 2 <_2 ··· <_2 N <_2 −1 <_2 −2 <_2 ··· <_2 −N. We conclude that the coefficient of m_λ in A_t(C_μ) is

d_{μ,λ} = ∑_{T∈A_{μ,λ}} (−1)^{N(T)} t^{P(T)+maj_{<_2}(T)} q^{inv_{<_2}(T)}. [15]

To verify axiom A2, we define a sign-reversing, weight-preserving involution J: A_{μ,λ} → A_{μ,λ} such that J has no fixed points unless λ ≤ μ.
Let T ∈ A_{μ,λ}. To compute JT, consider the set of cells (x, y) ∈ μ such that |T(x, y)| < y. If this set is empty, then T is a fixed point of J. Otherwise, choose the cell (x, y) in this set for which: (i) |T(x, y)| is minimized; (ii) y is maximized among all cells satisfying (i); and (iii) x is minimized among all cells satisfying (i) and (ii). Define JT by flipping the sign of the entry in cell (x, y); i.e., JT(x, y) = −T(x, y) and JT(c) = T(c) for all c ≠ (x, y). See Fig. 2 for an example.
It is clear that J maps A_{μ,λ} into itself and that J is an involution. Without loss of generality, assume that T(x, y) > 0. Write S = std(T, <_2) and JS = std(JT, <_2). One must now check the following statements by using the definitions of J and <_2: (i) N(JT) = N(T) + 1 and P(JT) = P(T) − 1; (ii) Trip(JS) = Trip(S), and so inv(JS) = inv(S); (iii) (x, y) ∉ Des(S) and (x, y) ∈ Des(JS); (iv) either (x, y + 1) ∉ μ, or else (x, y + 1) ∈ Des(S) and (x, y + 1) ∉ Des(JS); and (v) maj(JS) = maj(S) + 1. It follows that the summands indexed by T and JT cancel out in Eq. 15.
If d_{μ,λ} ≠ 0, there must exist an uncancelled summand in Eq. 15, which corresponds to some fixed point T of J. For this T, we must have y ≤ |T(x, y)| for all cells (x, y) ∈ μ. In other words, for j > 0, all occurrences of j or −j in T must appear in the lowest j rows of T. Hence, for any i, all occurrences of ±1, ..., ±i in T appear in the lowest i rows of T. Consequently, for all i,

λ_1 + ··· + λ_i = |T^{−1}({±1, ..., ±i})| ≤ μ_1 + ··· + μ_i,

where the equality follows because z_T = z^λ. Therefore, λ ≤ μ.
Other Macdonald Bases
Now we give combinatorial formulas for J_μ, Q_μ, and P_μ. From Eqs. 2 and 13, we deduce a combinatorial formula for J_μ (Eq. 16); similarly, Eqs. 3 and 13 lead to a second such formula. By comparing a filling T to the corresponding positive filling U given by U(c) = |T(c)|, one can show that these signed sums can be rewritten as sums over nonattacking positive fillings U, weighted by two products ∏_1 and ∏_2 over the cells of μ. The negative terms in ∏_1 and ∏_2 contribute the necessary extra powers of q and t that arise from making certain entries in U negative. For any cell c, setting T(c) = −U(c) always leads to an extra factor of −t. If c is a cell for which U(c) ≠ U(d(c)), one checks that this is the only extra factor needed, because U is nonattacking. On the other hand, if U(c) = U(d(c)), making U(c) negative causes a new descent at cell c, leading to a further correction factor of q^{l(c)+1} t^{a(c)}.
From any of these formulas for J_μ, we get formulas for Q_μ and P_μ by interpreting the denominators in Eqs. 4 and 5 as geometric series. Moreover, setting q = t^α in J_μ, dividing by (1 − t)^{|μ|}, and letting t approach 1, we obtain the integral Jack polynomials. Applying these operations to Eq. 16 and simplifying, we obtain the Sahi–Knop formula (16).
Applications
We now give three more applications of our combinatorial formulas for Macdonald polynomials. These applications all involve the (modified) “Kostka–Macdonald coefficients” K̃_{λ,μ}(q, t), which are the unique scalars in F satisfying H̃_μ = ∑_{λ⊢n} K̃_{λ,μ}(q, t) s_λ.
Macdonald's Specialization. Let u be a new indeterminate, and let ε_u: Λ^n → F[u] be the linear map defined on the power-sum basis by ε_u(p_μ) = ∏_{i=1}^{l(μ)} (1 − u^{μ_i}). We show that the coefficient of (−u)^d in ε_u(H̃_μ) is

∑_{S⊆μ: |S|=d} ∏_{c∈S} q^{a′(c)} t^{l′(c)}, [17]

where a′(c) and l′(c) denote the number of cells of μ strictly to the left of c in its row and strictly below c in its column, respectively. This result is equivalent to Macdonald's formula VI.8.8 (10) and to the formula giving the Kostka–Macdonald coefficient K̃_{λ,μ} when λ is a hook.
Now we sketch the proof of Eq. 17. The same arguments that led to Eq. 13 show that

ε_u(C_μ) = ∑_T (−u)^{N(T)} q^{inv_<(T)} t^{maj_<(T)},

the sum extending over all fillings T: μ → {1, −1}, where we use the ordering 1 < −1. To extract the coefficient of (−u)^d, we restrict to fillings T with N(T) = d. Such a filling T is uniquely determined by the d-element subset S′ = T^{−1}({−1}) of μ. The definitions easily imply formulas for inv_<(T) and maj_<(T) as sums over the cells of S′. Define a bijection φ on the diagram of μ by reversing the order of the cells in row 1 and reversing the order of the cells above the first row in each column. Setting S = φ(S′), one sees that q^{inv_<(T)} t^{maj_<(T)} = ∏_{c∈S} q^{a′(c)} t^{l′(c)}, from which Eq. 17 readily follows.
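The specialization is machine-checkable in a small case. The sketch below assumes our reading of Eq. 17, namely that the coefficient of (−u)^d is the sum over d-element subsets S of μ of ∏_{c∈S} q^{a′(c)} t^{l′(c)} with a′ the coarm and l′ the coleg; it verifies this for μ = (2, 1) against the signed sum over fillings with entries in {1, −1} (all names are ours):

```python
from itertools import combinations, product

mu = (2, 1)
cells = [(x, y) for y, row in enumerate(mu, 1) for x in range(1, row + 1)]
reading = sorted(cells, key=lambda c: (-c[1], c[0]))

def standardize(T):
    """Order 1 < -1; ties among 1s in reading order, among -1s in reverse."""
    pos = {c: i for i, c in enumerate(reading)}
    order = sorted(cells,
                   key=lambda c: (0, pos[c]) if T[c] == 1 else (1, -pos[c]))
    return {c: i for i, c in enumerate(order, start=1)}

def stats(S):
    des = {c for c in S if c[1] > 1 and S[c] > S[c[0], c[1] - 1]}
    maj = sum(sum(1 for r in mu[c[1]:] if r >= c[0]) + 1 for c in des)
    attack = [(c, d) for c in cells for d in cells
              if (c[1] == d[1] and c[0] < d[0])
              or (c[1] == d[1] + 1 and d[0] < c[0])]
    inv = sum(1 for c, d in attack if S[c] > S[d]) \
        - sum(mu[c[1] - 1] - c[0] for c in des)
    return inv, maj

# Left side: collect q^inv t^maj by d = N(T), as dict d -> {(i, j): coeff}.
lhs = {}
for vals in product((1, -1), repeat=len(cells)):
    T = dict(zip(cells, vals))
    d = sum(1 for v in vals if v < 0)
    i, j = stats(standardize(T))
    lhs.setdefault(d, {})
    lhs[d][i, j] = lhs[d].get((i, j), 0) + 1

# Right side: e_d of the quantities q^(coarm) t^(coleg) over the cells of mu.
weights = [(c[0] - 1, sum(1 for cc in cells if cc[0] == c[0] and cc[1] < c[1]))
           for c in cells]
rhs = {d: {} for d in range(len(cells) + 1)}
for d in rhs:
    for S in combinations(weights, d):
        key = (sum(w[0] for w in S), sum(w[1] for w in S))
        rhs[d][key] = rhs[d].get(key, 0) + 1

assert lhs == rhs
print(lhs[1])   # the d = 1 coefficient is 1 + q + t
```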
Cocharge Formula for Hall–Littlewood Polynomials. We define “cocharge” first on permutations, then on words with partition content, and finally on semistandard tableaux. If w = w_1 w_2 ··· w_n is a permutation, define the cocharge of w by cchg(w) = ∑_{i∈D(w)} (n − i). Now suppose w has partition content; we use the following algorithm to compute cocharge. Let l = max_c w_c, and let c_1 be the largest index such that w_{c_1} = 1. Proceeding inductively for 1 < i ≤ l, once c_{i−1} has been determined, let c_i be the largest index less than c_{i−1} with w_{c_i} = i; if there is no such index, let c_i be the largest index with w_{c_i} = i. Let S be the set of positions c_1, ..., c_l. Let y be the subword of w consisting of the letters at the positions in S, read from left to right, and let z be the complementary subword of w. Then y is a permutation of 1, ..., l, and z has partition content. Define cocharge recursively by cchg(w) = cchg(y) + cchg(z). Finally, if T ∈ SSYT(λ, μ), define cchg(T) = cchg(w(T)). Given a pair of words (v, w) mapping to (P, Q) under the RSK algorithm, if v has partition content, then cchg(v) = cchg(P). This fact is easily verified using the Knuth relations.
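The two-stage definition above translates directly into a sketch (names are ours):

```python
def cchg_perm(w):
    """Cocharge of a permutation of 1..n: the sum of n - a over a in D(w)."""
    n = len(w)
    pos = {a: i for i, a in enumerate(w)}
    return sum(n - a for a in range(1, n) if pos[a + 1] < pos[a])

def cchg(w):
    """Cocharge of a word with partition content: repeatedly extract the
    standard subword described above and recurse on the complement."""
    if not w:
        return 0
    top = max(w)
    idx = [max(i for i, v in enumerate(w) if v == 1)]
    for letter in range(2, top + 1):
        earlier = [i for i in range(idx[-1]) if w[i] == letter]
        idx.append(max(earlier) if earlier
                   else max(i for i, v in enumerate(w) if v == letter))
    chosen = set(idx)
    y = [w[i] for i in sorted(chosen)]                 # a permutation of 1..top
    z = [w[i] for i in range(len(w)) if i not in chosen]
    return cchg_perm(y) + cchg(z)

print(cchg([1, 1, 2]), cchg([1, 2, 1]), cchg([2, 1, 1]))   # 0 1 1
```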
The Lascoux and Schützenberger formula for Hall–Littlewood polynomials (17) is

H̃_μ[z_1, ..., z_N; 0, t] = ∑_{λ⊢n} (∑_{P∈SSYT_<(λ,μ)} t^{cchg(P)}) s_λ.

We sketch a combinatorial proof of this formula. Let < be the standard total ordering of A_N^+, and let ≺ be the reverse of this ordering. Using the definition of C_μ and the formula s_λ = ∑_{Q∈SSYT_≺(λ)} z_Q, it suffices to show that

∑_{T∈Y} t^{maj_<(T)} z_T = ∑_{(P,Q)∈X} t^{cchg(P)} z_Q,

where X is the set of pairs (P, Q) with P ∈ SSYT_<(λ, μ) and Q ∈ SSYT_≺(λ) for some λ ⊢ n, and Y is the set of fillings T: μ → A_N^+ with inv_<(T) = 0. It now suffices to construct a bijection φ: Y → X such that, whenever (P, Q) = φ(T), z_T = z_Q and maj_<(T) = cchg(P). Given T ∈ Y, we compute φ(T) in three steps.
First, map T to the list of triples ((a_1, x_1, y_1), ..., (a_n, x_n, y_n)), where (x_i, y_i) ∈ μ, a_i = T(x_i, y_i), a_1 ≥ ··· ≥ a_n, and a_i = a_{i+1} implies y_i ≤ y_{i+1}. Note that z_T = z_{a_1} ··· z_{a_n}, and the word y_1 ··· y_n has μ_i occurrences of i for all i. Clearly, T can be recovered from the list of triples.
Second, we delete the x coordinates x_i to obtain a list of pairs ((a_1, y_1), ..., (a_n, y_n)). This step is reversible because of the condition inv_<(T) = 0. In more detail, knowing the multiset of entries in each row of T, there exists a unique arrangement of the entries within each row for which inv_<(T) = 0. Entries in the lowest row must appear in weakly increasing order. Suppose the lowest i − 1 rows have been filled, and let S be the multiset of a_j values such that y_j = i. We fill the cells c in row i from left to right. At each step, cell c in row i is filled with the smallest unused element of S larger than T(d(c)), if any; otherwise, cell c is filled with the smallest unused element of S. Comparing this process to the definition of cocharge, it is not hard to check that maj_<(T) is precisely cchg(y_1 y_2 ··· y_n).
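The row-refilling procedure in this step can be sketched as follows (names are ours; `below` holds the entries directly beneath the row's cells, with 0 where no cell exists):

```python
def refill_row(entries, below):
    """Fill one row left to right: each cell receives the smallest unused
    entry strictly larger than the entry below it, if one remains;
    otherwise it receives the smallest unused entry overall."""
    remaining = sorted(entries)
    out = []
    for b in below:
        larger = [a for a in remaining if a > b]
        pick = larger[0] if larger else remaining[0]
        remaining.remove(pick)
        out.append(pick)
    return out

print(refill_row([1, 2, 3], [1, 2, 3]))   # [2, 3, 1]
print(refill_row([2, 2, 4], [0, 0, 0]))   # lowest row: weakly increasing
```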
Third, to find (P, Q) = φ(T), we apply the RSK insertion algorithm to the pair of words y = y_1 ··· y_n and a = a_1 ··· a_n, inserting y into P and recording a in Q. Clearly, z_Q = z_T, z_P = z^μ, and cchg(P) = cchg(y_1 ··· y_n) = maj_<(T), as required. It is easy to see that φ is a bijection.
Kostka–Macdonald Coefficients for Two-Column Shapes. Our final application is a formula for K̃_{λ,μ} when μ has at most two columns, i.e., when μ_1 ≤ 2. A “Yamanouchi word” is a word w = w_1 ··· w_n such that, for all k, the suffix w_k ··· w_n has partition content. For μ_1 ≤ 2, we obtain an explicit combinatorial formula for K̃_{λ,μ}(q, t) as a weighted sum over Yamanouchi words. We omit the proof, which uses crystals, the RSK algorithm, and jeu de taquin.
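The Yamanouchi condition is one line of bookkeeping (a sketch; the name is ours): scan w from the right and insist that, at every moment, the letter i has been seen at least as often as i + 1.

```python
def is_yamanouchi(w):
    """True iff every suffix of w has partition content."""
    seen = {}
    for letter in reversed(w):
        seen[letter] = seen.get(letter, 0) + 1
        if letter > 1 and seen[letter] > seen.get(letter - 1, 0):
            return False
    return True

print([is_yamanouchi(w) for w in ([1, 2, 1], [2, 1, 1], [1, 1, 2])])
# [True, True, False]
```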
Acknowledgments
This work was supported by National Security Agency Grant MSPF02G193 (to J.H.), National Science Foundation Grant DMS0301072 (to M.H.), and National Science Foundation Postdoctoral Research Fellowship DMS0303178 (to N.L.).
Footnotes

† To whom correspondence should be addressed. E-mail: jhaglund@math.upenn.edu.

This paper was submitted directly (Track II) to the PNAS office.

Abbreviations: iff, if and only if; RSK, Robinson–Schensted–Knuth; LLT, Lascoux–Leclerc–Thibon.
 Copyright © 2005, The National Academy of Sciences
References

Macdonald, I. G. (1988) Sém. Lothar. Combin. 372, 131–171.
Cherednik, I. (1995) Ann. Math. 141, 191–216.
Macdonald, I. G. (1995) Sém. Bourbaki, 189–207.
Bergeron, F., Garsia, A. M., Haiman, M. & Tesler, G. (1999) Methods Appl. Anal. 6, 363–420.
Garsia, A. M., Haiman, M. & Tesler, G. (1999) Sém. Lothar. Combin. 42, www.emis.de/journals/SLC/wpapers/s42garsia.html.
Haglund, J. (2004) Proc. Natl. Acad. Sci. USA 101, 16127–16131.
Macdonald, I. G. (1995) Symmetric Functions and Hall Polynomials (Oxford Univ. Press, New York), 2nd Ed.
Stanley, R. P. (1999) Enumerative Combinatorics (Cambridge Univ. Press, Cambridge, U.K.), Vol. 2.
Haiman, M. (1999) in New Perspectives in Algebraic Combinatorics (Cambridge Univ. Press, Cambridge, U.K.), Mathematical Sciences Research Institute Publications 37, pp. 207–254.
Haiman, M. (2003) in Current Developments in Mathematics (International Press, Somerville, MA), pp. 39–112.
Haglund, J., Haiman, M., Loehr, N., Remmel, J. & Ulyanov, A. (2005) Duke Math. J., in press.
Lascoux, A. & Schützenberger, M.-P. (1978) C. R. Acad. Sci. Sér. I 286A, 323–324.