Notes on Schubert Polynomials
Chapter 5

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

Last update: 2 July 2013

Orthogonality

Recall that

$P_n = \mathbb{Z}[x_1,\dots,x_n]$, $\Lambda_n = \mathbb{Z}[x_1,\dots,x_n]^{S_n}$

where $x_1,\dots,x_n$ are independent indeterminates.

(5.1) $P_n$ is a free $\Lambda_n$-module of rank $n!$ with basis

$B_n = \{\, x^\alpha : 0 \le \alpha_i \le i-1,\ 1 \le i \le n \,\}.$

Proof.

By induction on $n$. The result is trivially true when $n=1$, so assume that $n>1$ and that $P_{n-1}$ is a free $\Lambda_{n-1}$-module with basis $B_{n-1}$. Since $P_n = P_{n-1}[x_n]$, it follows that $P_n$ is a free $\Lambda_{n-1}[x_n]$-module with basis $B_{n-1}$. Now

$\Lambda_{n-1}[x_n] = \Lambda_n[x_n],$

because the identities

$e_r(x_1,\dots,x_{n-1}) = \sum_{s=0}^{r} (-x_n)^s\, e_{r-s}(x_1,\dots,x_n)$

show that $\Lambda_{n-1} \subseteq \Lambda_n[x_n]$, and on the other hand it is clear that $\Lambda_n \subseteq \Lambda_{n-1}[x_n]$. Hence $P_n$ is a free $\Lambda_n[x_n]$-module with basis $B_{n-1}$.

To complete the proof it remains to show that $\Lambda_n[x_n]$ is a free $\Lambda_n$-module with basis $1, x_n, \dots, x_n^{n-1}$. Since $\prod_{i=1}^{n}(x_n - x_i) = 0$, we have

$x_n^n = e_1 x_n^{n-1} - e_2 x_n^{n-2} + \dots + (-1)^{n-1} e_n,$

from which it follows that the $x_n^{n-i}$ $(1 \le i \le n)$ generate $\Lambda_n[x_n]$ as a $\Lambda_n$-module. On the other hand, if we have a relation of linear dependence

$\sum_{i=1}^{n} f_i\, x_n^{n-i} = 0$

with coefficients $f_i \in \Lambda_n$, then we have also

$\sum_{i=1}^{n} f_i\, x_j^{n-i} = 0$

for $j = 1, 2, \dots, n$, and since

$\det\big(x_j^{n-i}\big) = \prod_{i<j}(x_i - x_j) \ne 0,$

it follows that $f_1 = \dots = f_n = 0$.
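
For example, when $n = 3$ the basis of (5.1) is $B_3 = \{1,\ x_2,\ x_3,\ x_2x_3,\ x_3^2,\ x_2x_3^2\}$, consisting of $3! = 6$ monomials.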

As before, let $\delta = (n-1, n-2, \dots, 1, 0)$. By reversing the order of $x_1,\dots,x_n$ in (5.1) it follows that

(5.1') The monomials $x^\alpha$, $\alpha \subseteq \delta$ (i.e. $0 \le \alpha_i \le n-i$ for $1 \le i \le n$) form a $\Lambda_n$-basis of $P_n$.

We define a scalar product on $P_n$, with values in $\Lambda_n$, by the rule

(5.2) $\langle f, g \rangle = \partial_{w_0}(fg)$  $(f, g \in P_n)$

where $w_0$ is the longest element of $S_n$. Since $\partial_{w_0}$ is $\Lambda_n$-linear, so is the scalar product.
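
For example, when $n = 2$ we have $w_0 = s_1$, so that $\langle f, g \rangle = \partial_1(fg)$; thus $\langle 1, 1 \rangle = 0$, $\langle x_1, 1 \rangle = \partial_1(x_1) = 1$, and $\langle x_1, x_1 \rangle = \partial_1(x_1^2) = x_1 + x_2 = e_1$.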

(5.3) Let $w \in S_n$ and $f, g \in P_n$. Then

(i) $\langle \partial_w f, g \rangle = \langle f, \partial_{w^{-1}} g \rangle$;
(ii) $\langle w f, g \rangle = \varepsilon(w)\,\langle f, w^{-1} g \rangle$,

where $\varepsilon(w) = (-1)^{\ell(w)}$ is the sign of $w$.

Proof.

(i) It is enough to show that $\langle \partial_i f, g \rangle = \langle f, \partial_i g \rangle$ for $1 \le i \le n-1$. We have

$\langle \partial_i f, g \rangle = \partial_{w_0}\big((\partial_i f)\, g\big) = \partial_{w_0 s_i}\partial_i\big((\partial_i f)\, g\big) = \partial_{w_0 s_i}\big((\partial_i f)(\partial_i g)\big)$

because $\partial_i f$ is symmetrical in $x_i$ and $x_{i+1}$. The last expression is symmetrical in $f$ and $g$, hence $\langle \partial_i f, g \rangle = \langle \partial_i g, f \rangle = \langle f, \partial_i g \rangle$ as required.

(ii) Again it is enough to show that $\langle s_i f, g \rangle = -\langle f, s_i g \rangle$. We have

$\langle s_i f, g \rangle = \partial_{w_0}\big((s_i f)\, g\big) = \partial_{w_0 s_i}\,\partial_i s_i\big(f\,(s_i g)\big)$

and since $\partial_i s_i = -\partial_i$ this is equal to

$-\partial_{w_0 s_i}\,\partial_i\big(f\,(s_i g)\big) = -\partial_{w_0}\big(f\,(s_i g)\big) = -\langle f, s_i g \rangle.$
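
For example, when $n = 2$, (ii) with $w = s_1$, $f = x_1$, $g = 1$ reads $\langle s_1 x_1, 1 \rangle = \partial_1(x_2) = -1 = -\langle x_1, 1 \rangle$, as it should.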

(5.4) Let $u, v \in S_n$ be such that $\ell(u) + \ell(v) = \binom{n}{2}$. Then

$\langle \mathfrak{S}_u, \mathfrak{S}_v \rangle = \begin{cases} 1 & \text{if } v = w_0 u, \\ 0 & \text{otherwise.} \end{cases}$

Proof.

We have

$\langle \mathfrak{S}_u, \mathfrak{S}_v \rangle = \langle \partial_{u^{-1} w_0}\, x^\delta,\ \mathfrak{S}_v \rangle = \langle x^\delta,\ \partial_{w_0 u}\, \mathfrak{S}_v \rangle$

by (5.3). Also $\ell(w_0 u) = \ell(w_0) - \ell(u) = \ell(v)$, hence

$\partial_{w_0 u}\, \mathfrak{S}_v = \begin{cases} 1 & \text{if } v = w_0 u, \\ 0 & \text{otherwise.} \end{cases}$

It follows that

$\langle \mathfrak{S}_u, \mathfrak{S}_v \rangle = \begin{cases} 0 & \text{if } v \ne w_0 u, \\ \langle x^\delta, 1 \rangle = \partial_{w_0}(x^\delta) = 1 & \text{if } v = w_0 u. \end{cases}$
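
For instance, when $n = 2$ we have $\binom{2}{2} = 1$, and the only pairs $(u,v)$ with $\ell(u) + \ell(v) = 1$ are $(1, s_1)$ and $(s_1, 1)$; in both cases $v = w_0 u$, and indeed $\langle \mathfrak{S}_1, \mathfrak{S}_{s_1} \rangle = \partial_1(x_1) = 1$ and $\langle \mathfrak{S}_{s_1}, \mathfrak{S}_1 \rangle = 1$.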

(5.5) Let $u, v \in S_n$. Then

$\langle w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \rangle = \varepsilon(v)\, \delta_{uv}.$

Proof.

We have

$\langle w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \rangle = \langle w_0 \mathfrak{S}_u,\ \partial_{w_0 v^{-1} w_0}\, x^\delta \rangle = \langle \partial_{w_0 v w_0}(w_0 \mathfrak{S}_u),\ x^\delta \rangle = \varepsilon(v)\, \langle w_0\, \partial_v\, \mathfrak{S}_u,\ x^\delta \rangle$

by (5.3) and (2.12). By (4.2) the scalar product is therefore zero unless $\ell(u) - \ell(v) = \ell(u v^{-1})$, and then it is equal to $\varepsilon(v)\,\langle w_0 \mathfrak{S}_{u v^{-1}},\ x^\delta \rangle$. Now $\mathfrak{S}_{u v^{-1}}$ is a linear combination of monomials $x^\alpha$ such that $\alpha \subseteq \delta$ and $|\alpha| = \ell(u) - \ell(v)$. Hence $w_0(\mathfrak{S}_{u v^{-1}})\, x^\delta$ is a sum of monomials $x^\beta$ where

$\beta = w_0 \alpha + \delta \subseteq w_0 \delta + \delta = (n-1, \dots, n-1).$

Now $\partial_{w_0} x^\beta = 0$ unless all the components $\beta_i$ of $\beta$ are distinct; since $0 \le \beta_i \le n-1$ for each $i$, it follows that $\partial_{w_0} x^\beta = 0$ unless $\beta = w\delta$ for some $w \in S_n$, and in that case

$w_0 \alpha = \beta - \delta = w\delta - \delta$

must have all its components $\ge 0$. So the only possibility that gives a non-zero scalar product is $w = 1$, $\alpha = 0$, $u = v$, and in that case

$\langle w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \rangle = \varepsilon(v)\, \langle 1, x^\delta \rangle = \varepsilon(v)\, \partial_{w_0}(x^\delta) = \varepsilon(v).$
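
For example, when $n = 2$ (so $\mathfrak{S}_1 = 1$, $\mathfrak{S}_{w_0} = x_1$) all four cases of (5.5) can be checked directly: $\langle w_0 \mathfrak{S}_1, \mathfrak{S}_{w_0} \rangle = \langle 1, x_1 \rangle = 1$, $\langle w_0 \mathfrak{S}_{s_1}, \mathfrak{S}_1 \rangle = \langle x_2, 1 \rangle = -1 = \varepsilon(s_1)$, while $\langle w_0 \mathfrak{S}_1, \mathfrak{S}_1 \rangle = \partial_1(1) = 0$ and $\langle w_0 \mathfrak{S}_{s_1}, \mathfrak{S}_{w_0} \rangle = \partial_1(x_1 x_2) = 0$.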

(5.6) The Schubert polynomials $\mathfrak{S}_w$, $w \in S_n$, form a $\Lambda_n$-basis of $P_n$.

Proof.

Let $u, v \in S_n$ and let

(1) $w_0 \mathfrak{S}_u = \sum_{\alpha \subseteq \delta} a_{u\alpha}\, x^\alpha$,  (2) $\varepsilon(v)\, \mathfrak{S}_{v w_0} = \sum_{\beta \subseteq \delta} b_{v\beta}\, x^\beta$,

with coefficients $a_{u\alpha}, b_{v\beta} \in \Lambda_n$. Let $c_{\alpha\beta} = \langle x^\alpha, x^\beta \rangle$. Then from (5.5) we have

$\sum_{\alpha,\beta} a_{u\alpha}\, c_{\alpha\beta}\, b_{v\beta} = \delta_{uv},$

or in matrix terms

(3) $A\, C\, B^t = 1$

where $A = (a_{u\alpha})$, $B = (b_{v\beta})$ and $C = (c_{\alpha\beta})$ are square matrices of size $n!$, with coefficients in $\Lambda_n$. From (3) it follows that each of $A, B, C$ has determinant $\pm 1$; hence the equations (2) can be solved for the $x^\beta$, $\beta \subseteq \delta$, as $\Lambda_n$-linear combinations of the Schubert polynomials $\mathfrak{S}_w$, $w \in S_n$. Since by (5.1') the $x^\beta$ form a $\Lambda_n$-basis of $P_n$, so also do the $\mathfrak{S}_w$.
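
For example, when $n = 2$ the Schubert polynomials are $\mathfrak{S}_1 = 1$ and $\mathfrak{S}_{s_1} = x_1$, and indeed $P_2 = \Lambda_2 \oplus \Lambda_2 x_1$: thus $x_2 = e_1 \cdot \mathfrak{S}_1 - \mathfrak{S}_{s_1}$ and $x_2^2 = (e_1^2 - e_2)\,\mathfrak{S}_1 - e_1\, \mathfrak{S}_{s_1}$.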

We have

(5.7) $\langle f, g \rangle = \sum_{w \in S_n} \varepsilon(w)\, \partial_w(w_0 f)\, \partial_{w w_0}(g)$

for all $f, g \in P_n$.

Proof.

Let $\Phi(f,g)$ denote the right-hand side of (5.7). We claim first that

(1) $\Phi(f,g) \in \Lambda_n$.

For this it is enough to show that $\partial_i \Phi = 0$ for $1 \le i \le n-1$. Let

$A_i = \{\, w \in S_n : \ell(s_i w) > \ell(w) \,\};$

then $S_n$ is the disjoint union of $A_i$ and $s_i A_i$, and $s_i A_i = A_i w_0$. Hence

$\Phi(f,g) = \sum_{w \in A_i} \varepsilon(w)\, \big\{ \partial_w(w_0 f)\, \partial_i(\partial_{s_i w w_0}\, g) - \partial_i \partial_w(w_0 f)\, \partial_{s_i w w_0}(g) \big\}.$

Since for all $\varphi, \psi \in P_n$ we have

$\partial_i\big(\varphi\, \partial_i \psi - (\partial_i \varphi)\, \psi\big) = (\partial_i \varphi)(\partial_i \psi) - (\partial_i \varphi)(\partial_i \psi) = 0,$

it follows that $\partial_i \Phi(f,g) = 0$ for all $i$, as required.

Next, since each operator $\partial_w$ is $\Lambda_n$-linear, it follows that $\Phi(f,g)$ is $\Lambda_n$-linear in each argument. By (5.6) it is therefore enough to verify (5.7) when $f = w_0 \mathfrak{S}_u$ and $g = \mathfrak{S}_{v w_0}$, where $u, v \in S_n$. We have then

$\Phi\big( w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \big) = \sum_{w \in S_n} \varepsilon(w)\, \partial_{w^{-1}}(\mathfrak{S}_u)\, \partial_{w^{-1} w_0}(\mathfrak{S}_{v w_0})$

which by (4.2) is equal to

(2) $\sum_w \varepsilon(w)\, \mathfrak{S}_{uw}\, \mathfrak{S}_{vw}$

summed over $w \in S_n$ such that

$\ell(uw) = \ell(u) - \ell(w^{-1}) = \ell(u) - \ell(w)$

and

$\ell(vw) = \ell(v w_0) - \ell(w^{-1} w_0) = \ell(w) - \ell(v).$

Hence the polynomial (2) is (i) symmetric in $x_1,\dots,x_n$ (by (1) above), (ii) independent of $x_n$, (iii) homogeneous of degree $\ell(u) - \ell(v)$. Hence it vanishes unless $\ell(u) = \ell(v)$ and $u = w^{-1} = v$, in which case it is equal to $\varepsilon(w) = \varepsilon(v)$. Hence

$\Phi\big( w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \big) = \varepsilon(v)\, \delta_{uv} = \langle w_0 \mathfrak{S}_u,\ \mathfrak{S}_{v w_0} \rangle$

by (5.5). This completes the proof of (5.7).
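
For example, when $n = 2$ formula (5.7) reads

$\langle f, g \rangle = (s_1 f)\, \partial_1(g) - \partial_1(s_1 f)\, g,$

and with $f = g = x_1$ this gives $x_2 \cdot 1 - (-1)\cdot x_1 = x_1 + x_2 = \partial_1(x_1^2)$, in agreement with (5.2).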

Now let $x = (x_1,\dots,x_n)$ and $y = (y_1,\dots,y_n)$ be two sequences of independent variables, and let

(5.8) $\Delta = \Delta(x,y) = \prod_{i+j \le n} (x_i - y_j)$

(the "semiresultant"). We have

(5.9) $\Delta(wx, x) = \begin{cases} 0 & \text{if } w \ne w_0, \\ \varepsilon(w_0)\, a_\delta(x) & \text{if } w = w_0. \end{cases}$

For

$\Delta(wx, x) = \prod_{i+j \le n} \big(x_{w(i)} - x_j\big)$

is non-zero if and only if $w(i) \ne j$ whenever $i+j \le n$, that is to say if and only if $w = w_0$; and

$\Delta(w_0 x, x) = \prod_{i+j \le n} (x_{n+1-i} - x_j) = \prod_{j < k} (x_k - x_j) = \varepsilon(w_0)\, a_\delta(x).$
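
For example, $\Delta = x_1 - y_1$ when $n = 2$, and $\Delta = (x_1 - y_1)(x_1 - y_2)(x_2 - y_1)$ when $n = 3$; and for $n = 2$, (5.9) reads $\Delta(w_0 x, x) = x_2 - x_1 = \varepsilon(w_0)\, a_\delta(x)$.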

The polynomial $\Delta(x,y)$ is a linear combination of the monomials $x^\alpha$, $\alpha \subseteq \delta$, with coefficients in $\mathbb{Z}[y_1,\dots,y_n] = P_n(y)$, hence by (4.11) can be written uniquely in the form

$\Delta(x,y) = \sum_{w \in S_n} \mathfrak{S}_w(x)\, T_w(y)$

with $T_w(y) \in P_n(y)$. By (5.5) we have

$T_w(y) = \big\langle \Delta(x,y),\ w_0 \mathfrak{S}_{w w_0}(-x) \big\rangle_x$

where the suffix $x$ means that the scalar product is taken in the $x$ variables. Hence

(1) $T_w(y) = \partial_{w_0}\big( \Delta(x,y)\, w_0(\mathfrak{S}_{w w_0}(-x)) \big) = a_\delta(x)^{-1} \sum_{v \in S_n} \varepsilon(v)\, \Delta(vx, y)\, v w_0\big(\mathfrak{S}_{w w_0}(-x)\big)$

by (2.10), where $v \in S_n$ acts by permuting the $x_i$.

Now this expression (1) must be independent of $x_1,\dots,x_n$. Hence we may set $x_i = y_i$ $(1 \le i \le n)$. But then (5.9) shows that the only non-zero term in the sum (1) is that corresponding to $v = w_0$, and we obtain

$T_w(y) = \mathfrak{S}_{w w_0}(-y).$

Hence we have proved

(5.10) ("Cauchy formula")

$\Delta(x,y) = \sum_{w \in S_n} \mathfrak{S}_w(x)\, \mathfrak{S}_{w w_0}(-y).$
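
For example, when $n = 2$ the right-hand side of (5.10) is $\mathfrak{S}_1(x)\,\mathfrak{S}_{w_0}(-y) + \mathfrak{S}_{w_0}(x)\,\mathfrak{S}_1(-y) = -y_1 + x_1 = \Delta(x,y)$.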

Remark. Let $n = r+s$ where $r, s \ge 1$, and regard $S_r \times S_s$ as a subgroup of $S_n$, with $S_r$ permuting $1, 2, \dots, r$ and $S_s$ permuting $r+1, \dots, r+s$. Let $w_0^{(r)}, w_0^{(s)}$ be the longest elements of $S_r, S_s$ respectively, and let $u = w_0^{(r)} \times w_0^{(s)}$. If $w \in S_n$, we have $\partial_u \mathfrak{S}_w = \mathfrak{S}_{wu}$ if $\ell(wu) = \ell(w) - \ell(u)$, that is to say if $wu$ is Grassmannian (with its only descent at $r$), and $\partial_u \mathfrak{S}_w = 0$ otherwise. Hence by applying $\partial_u$ to the $x$-variables in (5.10) we obtain

$\partial_u \Delta(x,y) = \sum_{v \in G_{r,s}} \mathfrak{S}_v(x)\, \mathfrak{S}_{v u w_0}(-y)$

where $G_{r,s} \subseteq S_n$ is the set of Grassmannian permutations $v$ with descent at $r$ (i.e. $v(i) < v(i+1)$ if $i \ne r$). On the other hand, it is easily verified that

$\partial_u \Delta(x,y) = \prod_{i=1}^{r} \prod_{j=1}^{s} (x_i - y_j)$

and that $v' = v u w_0$ is the permutation

$\big( v(r+1), \dots, v(r+s),\ v(1), \dots, v(r) \big),$

hence is also Grassmannian, with descent at $s$.

The shape of $v$ is

$\lambda = \lambda(v) = \big( v(r)-r, \dots, v(2)-2, v(1)-1 \big)$

and the shape of $v'$ is say

$\mu' = \lambda(v') = \big( v(r+s)-s, \dots, v(r+2)-2, v(r+1)-1 \big).$

The relation between these two partitions is

$\mu_i = s - \lambda_{r+1-i} \qquad (1 \le i \le r),$

that is to say $\lambda$ is the complement, say $\hat\mu$, of $\mu$ in the rectangle $(s^r)$ with $r$ rows and $s$ columns. Hence, replacing each $y_j$ by $-y_j$, we obtain from (5.10) by operating with $\partial_u$ on both sides and using (4.8)

(5.11) $\prod_{i=1}^{r} \prod_{j=1}^{s} (x_i + y_j) = \sum s_{\hat\mu}(x)\, s_{\mu'}(y)$

summed over all $\mu \subseteq (s^r)$, where $\hat\mu$ is the complement of $\mu$ in $(s^r)$. This is one version of the usual Cauchy identity [Mac1979, Chapter I, (4.3)'].
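
For example, take $r = 1$, $s = 2$. The partitions $\mu \subseteq (s^r) = (2)$ are $\mu = 0, (1), (2)$, with complements $\hat\mu = (2), (1), 0$ and conjugates $\mu' = 0, (1), (1,1)$, and (5.11) reads

$(x_1+y_1)(x_1+y_2) = s_{(2)}(x_1) + s_{(1)}(x_1)\, s_{(1)}(y_1,y_2) + s_{(1,1)}(y_1,y_2) = x_1^2 + x_1(y_1+y_2) + y_1 y_2.$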

Let $(\mathfrak{S}_w^*)_{w \in S_n}$ be the $\Lambda_n$-basis of $P_n$ dual to the basis $(\mathfrak{S}_w)$ relative to the scalar product (5.2). By (5.3) and (5.5) we have

$\langle \mathfrak{S}_u,\ w_0 \mathfrak{S}_{v w_0} \rangle = \varepsilon(v w_0)\, \delta_{uv}$

or equivalently

$\langle \mathfrak{S}_u(x),\ w_0 \mathfrak{S}_{v w_0}(-x) \rangle = \delta_{uv}$

which shows that

(5.12) $\mathfrak{S}_w^*(x) = w_0\, \mathfrak{S}_{w w_0}(-x)$

for all $w \in S_n$. From (5.10) it follows that

$\Delta(x,y) = \sum_{w \in S_n} \mathfrak{S}_w(x)\, w_0 \mathfrak{S}_w^*(y)$

or equivalently

(5.13) $\prod_{1 \le i < j \le n} (x_i - y_j) = \sum_{w \in S_n} \mathfrak{S}_w(x)\, \mathfrak{S}_w^*(y).$
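
For example, when $n = 2$ we have $\mathfrak{S}_1^*(x) = w_0 \mathfrak{S}_{w_0}(-x) = -x_2$ and $\mathfrak{S}_{w_0}^*(x) = w_0 \mathfrak{S}_1(-x) = 1$, and (5.13) reads $x_1 - y_2 = 1 \cdot (-y_2) + x_1 \cdot 1$.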

Let $(x^{*\beta})_{\beta \subseteq \delta}$ be the basis dual to $(x^\alpha)_{\alpha \subseteq \delta}$. If

$\mathfrak{S}_u = \sum_\alpha a_{u\alpha}\, x^\alpha, \qquad \mathfrak{S}_v^* = \sum_\beta b_{v\beta}\, x^{*\beta},$

then by taking scalar products we have

$\sum_\alpha a_{u\alpha}\, b_{v\alpha} = \delta_{uv}$

and therefore also

$\sum_w a_{w\alpha}\, b_{w\beta} = \delta_{\alpha\beta},$

so that

$\sum_{w \in S_n} \mathfrak{S}_w(x)\, \mathfrak{S}_w^*(y) = \sum_{\alpha,\beta} \Big( \sum_w a_{w\alpha}\, b_{w\beta} \Big)\, x^\alpha\, y^{*\beta} = \sum_\alpha x^\alpha\, y^{*\alpha}.$

From (5.13) it follows that $y^{*\alpha}$ is the coefficient of $x^\alpha$ in $\prod_{i<j}(x_i - y_j)$, and hence we find

(5.14) $x^{*\alpha} = (-1)^{|\beta|} \prod_{i=1}^{n-1} e_{\beta_i}(x_{i+1}, \dots, x_n)$

where $\beta = \delta - \alpha$.
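
For example, when $n = 2$: taking $\alpha = (0,0)$ gives $\beta = (1,0)$ and $x^{*(0,0)} = -e_1(x_2) = -x_2$, while $\alpha = (1,0)$ gives $\beta = (0,0)$ and $x^{*(1,0)} = 1$; one checks directly that $\langle x^\alpha, x^{*\beta} \rangle = \delta_{\alpha\beta}$ for these four pairs.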

Let

$C(x,y) = \varepsilon(w_0)\, \Delta(w_0 x, y) = \prod_{i<j} (y_i - x_j).$

If $f(x) \in H_n$ (4.11), let $f(y)$ denote the polynomial in $y_1,\dots,y_n$ obtained by replacing each $x_i$ by $y_i$. Then we have

(5.15) $\langle f(x),\ C(x,y) \rangle_x = f(y),$

where as before the suffix $x$ means that the scalar product is taken in the $x$ variables. In other words, $C(x,y)$ is a "reproducing kernel" for the scalar product.

Proof.

From (5.10) we have

$C(x,y) = \sum_{w \in S_n} \varepsilon(w_0)\, \mathfrak{S}_w(w_0 x)\, \mathfrak{S}_{w w_0}(-y).$

Hence by (5.5)

$\big\langle C(x,y),\ \mathfrak{S}_{w w_0}(x) \big\rangle_x = \varepsilon(w w_0)\, \mathfrak{S}_{w w_0}(-y) = \mathfrak{S}_{w w_0}(y).$

Hence (5.15) is true for all Schubert polynomials $\mathfrak{S}_u$, $u \in S_n$. Since the scalar product is linear and by (4.11) the $\mathfrak{S}_u$, $u \in S_n$, form a basis of $H_n$, it follows that (5.15) is true for all $f \in H_n$.
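
For example, when $n = 2$ we have $C(x,y) = y_1 - x_2$, and for $f = x_1$ the left-hand side of (5.15) is $\partial_1(x_1 y_1 - x_1 x_2) = y_1 = f(y)$, since $\partial_1(x_1 x_2) = 0$.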

Let $\theta_{yx}$ be the homomorphism that replaces each $y_i$ by $x_i$. Then (5.15) can be restated in the form

(5.15') $\theta_{yx}\, \langle f(x),\ C(x,y) \rangle_x = f(x)$

for all $f \in H_n$.

Now let $z = (z_1,\dots,z_n)$ be a third set of variables and consider

(1) $\big\langle C(x,y),\ \partial_u\, v^{-1}\, C(x,z) \big\rangle_x$

for $u, v \in S_n$, where $\partial_u$ and $v^{-1}$ act on the $x$ variables. By (5.3) this is equal to

(2) $\varepsilon(v)\, \big\langle C(x,z),\ v\, \partial_{u^{-1}}\, C(x,y) \big\rangle_x$

and by (5.15') we have

(3) $\theta_{yx}\, \big\langle C(x,y),\ \partial_u\, v^{-1}\, C(x,z) \big\rangle_x = \partial_u\, v^{-1}\, C(x,z),$
(4) $\theta_{zx}\, \big\langle C(x,z),\ v\, \partial_{u^{-1}}\, C(x,y) \big\rangle_x = v\, \partial_{u^{-1}}\, C(x,y).$

Since $\theta_{yx}$ and $\theta_{zx}$ commute, it follows from (1)-(4) that

$\theta_{yx}\, v\, \partial_{u^{-1}}\, C(x,y) = \varepsilon(v)\, \theta_{zx}\, \partial_u\, v^{-1}\, C(x,z) = \varepsilon(v)\, \theta_{yx}\, \partial_u\, v^{-1}\, C(x,y).$

Hence we have

(5.16) $\theta\big( v\, \partial_{u^{-1}}\, w_0\, \Delta \big) = \varepsilon(v)\, \theta\big( \partial_u\, v^{-1}\, w_0\, \Delta \big)$

for all $u, v \in S_n$, where $\Delta = \Delta(x,y)$ and $\theta = \theta_{yx}$.

Let $E_n$ denote the algebra of operators $\varphi$ of the form

$\varphi = \sum_{w \in S_n} \varphi_w\, w,$

with coefficients $\varphi_w \in Q_n = \mathbb{Q}(x_1,\dots,x_n)$. For such a $\varphi$ we have

(5.17) $\varphi_w = \varepsilon(w_0)\, a_\delta^{-1}\, \theta\big( \varphi(w^{-1} w_0 \Delta) \big)$

for all $w \in S_n$, where $\varphi$ and $w^{-1} w_0$ act on the $x$ variables in $\Delta$.

For $\theta\big(\varphi(w^{-1} w_0 \Delta)\big) = \sum_{u \in S_n} \varphi_u\, \theta(u w^{-1} w_0 \Delta)$, and by (5.8) $\theta(u w^{-1} w_0 \Delta) = \Delta(u w^{-1} w_0 x,\, x)$, which by (5.9) is zero if $u \ne w$, and is equal to $\varepsilon(w_0)\, a_\delta$ if $u = w$.
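
For example, when $n = 2$ take $\varphi = \partial_1 = (x_1 - x_2)^{-1}(1 - s_1)$, so that $\varphi_1 = (x_1 - x_2)^{-1}$ and $\varphi_{s_1} = -(x_1 - x_2)^{-1}$. For $w = 1$ we have $w^{-1} w_0 \Delta = s_1\Delta = x_2 - y_1$ and $\theta\big(\varphi(x_2 - y_1)\big) = -1$, so (5.17) gives $\varphi_1 = (-1)(x_1 - x_2)^{-1}(-1) = (x_1 - x_2)^{-1}$, as it should; the case $w = s_1$ is similar.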

Let $u \in S_n$, and let $(a_1,\dots,a_p)$ be a reduced word for $u$, so that $u = s_{a_1} \cdots s_{a_p}$. Since $\partial_a = (x_a - x_{a+1})^{-1}(1 - s_a)$ for each $a \ge 1$, it follows that we may write

(5.18) $\partial_u = \varepsilon(w_0)\, a_\delta^{-1} \sum_{v \le u} \alpha_{uv}\, v,$

where $v \le u$ means that $v$ is of the form $s_{b_1} \cdots s_{b_q}$, where $(b_1,\dots,b_q)$ is a subword of $(a_1,\dots,a_p)$.

The coefficients $\alpha_{uv}$ in (5.18) are polynomials, for it follows from (5.16) and (5.17) that

(5.19) $\alpha_{uv} = \theta\big( \partial_u (v^{-1} w_0 \Delta) \big) = \varepsilon(v)\, \theta\big( v\, \partial_{u^{-1}} (w_0 \Delta) \big).$
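
For example, when $n = 2$ and $u = s_1$, (5.18) reads

$\partial_1 = \varepsilon(w_0)\, a_\delta^{-1} \big( \alpha_{s_1,1} \cdot 1 + \alpha_{s_1,s_1}\, s_1 \big) = -(x_1 - x_2)^{-1}(-1 + s_1),$

so that $\alpha_{s_1,1} = -1$ and $\alpha_{s_1,s_1} = 1$; and indeed (5.19) gives $\alpha_{s_1,s_1} = \theta(\partial_1 \Delta) = 1$ and $\alpha_{s_1,1} = \theta\big(\partial_1(s_1\Delta)\big) = \theta\big(\partial_1(x_2 - y_1)\big) = -1$.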

(5.20) For all $f \in P_n$ we have

$\theta\big( \partial_u(\Delta f) \big) = \begin{cases} w_0 f & \text{if } u = w_0, \\ 0 & \text{otherwise.} \end{cases}$

Proof.

From (5.18) we have

$\theta\big( \partial_u(\Delta f) \big) = \varepsilon(w_0)\, a_\delta^{-1} \sum_{v \le u} \alpha_{uv}\, v(f)\, \theta(v\Delta).$

By (5.9) this is zero if $u \ne w_0$, and if $u = w_0$ then by (2.10)

$\theta\big( \partial_{w_0}(\Delta f) \big) = a_\delta^{-1} \sum_{w \in S_n} \varepsilon(w)\, w(f)\, \theta(w\Delta) = a_\delta^{-1}\, \varepsilon(w_0)\, w_0(f)\, \varepsilon(w_0)\, a_\delta = w_0(f)$

by (5.9) again.

The matrix of coefficients $(\alpha_{uv})$ in (5.18) is triangular with respect to the ordering $\le$, and one sees easily that the diagonal entries $\alpha_{uu}$ are non-zero (they are products in which each factor is of the form $x_i - x_j$). Hence we may invert the equations (5.18), say

(5.21) $u = \sum_{v \le u} \beta_{uv}\, \partial_v,$

and thus we can express any $\varphi \in E_n$ as a linear combination of the operators $\partial_w$. Explicitly, we have

(5.22) $\varphi = \sum_{w \in S_n} \theta\big( \varphi\, \partial_{w^{-1} w_0}\, \Delta \big)\, \partial_w.$

Proof.

By linearity we may assume that $\varphi = f\, \partial_u$ with $f \in Q_n$. Then

$\theta\big( \varphi\, \partial_{w^{-1} w_0}\, \Delta \big) = f\, \theta\big( \partial_u \partial_{w^{-1} w_0}\, \Delta \big).$

Now by (4.2) $\partial_u \partial_{w^{-1} w_0}$ is either zero or equal to $\partial_{u w^{-1} w_0}$, and by (5.20) $\theta(\partial_{u w^{-1} w_0} \Delta)$ is zero if $w \ne u$, and is equal to $1$ if $w = u$. Hence the right-hand side of (5.22) is equal to $f \partial_u = \varphi$, as required.

In particular, it follows from (5.22) and (5.21) that

(5.23) $\beta_{uv} = \theta\big( u\, \partial_{v^{-1} w_0}\, \Delta \big),$

hence is a polynomial.
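
For example, when $n = 2$, inverting $\partial_1 = (x_1 - x_2)^{-1}(1 - s_1)$ gives $s_1 = 1 - (x_1 - x_2)\, \partial_1$, so that $\beta_{s_1,1} = 1$ and $\beta_{s_1,s_1} = -(x_1 - x_2)$; and (5.23) gives the same values, e.g. $\beta_{s_1,s_1} = \theta(s_1 \Delta) = \theta(x_2 - y_1) = x_2 - x_1$.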

The coefficients $\alpha_{uv}, \beta_{uv}$ in (5.18) and (5.21) satisfy the following relations:

(5.24)
(i) $\beta_{uv} = \varepsilon(uv)\, \alpha_{v w_0,\, u w_0}$,
(ii) $\alpha_{u^{-1} v^{-1}} = v^{-1}(\alpha_{uv})$,
(iii) $\alpha_{u' v'} = \varepsilon(u w_0)\, w_0(\alpha_{uv})$,

for all $u, v \in S_n$, where $u' = w_0 u w_0$, $v' = w_0 v w_0$.

Proof.

(i) By (5.23) and (2.12) we have

$\beta_{uv} = \varepsilon(v^{-1} w_0)\, \theta\big( u w_0\, \partial_{w_0 v^{-1}}\, w_0 \Delta \big) = \varepsilon(v^{-1} w_0)\, \varepsilon(u w_0)\, \theta\big( \partial_{v w_0}\, w_0 u^{-1}\, w_0 \Delta \big)$  by (5.16)  $= \varepsilon(uv)\, \alpha_{v w_0,\, u w_0}$  by (5.19).

(ii) From (5.18) we have

$\theta\big( v\, \partial_{u^{-1}}\, w_0 \Delta \big) = \varepsilon(w_0)\, v(a_\delta^{-1}) \sum_{w \le u} v(\alpha_{u^{-1}, w^{-1}})\, \theta(v w^{-1} w_0 \Delta) = \varepsilon(v)\, v(\alpha_{u^{-1}, v^{-1}})$

by (5.9), and likewise

$\theta\big( \partial_u\, v^{-1}\, w_0 \Delta \big) = \varepsilon(w_0)\, a_\delta^{-1} \sum_{w \le u} \alpha_{uw}\, \theta(w v^{-1} w_0 \Delta) = \alpha_{uv}$

again by (5.9). Hence (ii) follows from (5.16).

(iii) Since $\partial_{u'} = \varepsilon(u)\, w_0 \partial_u w_0$ (2.12) we have

$\sum_v \alpha_{u' v'}\, v' = \varepsilon(u w_0)\, w_0 \Big( \sum_v \alpha_{uv}\, v \Big) w_0 = \varepsilon(u w_0) \sum_v w_0(\alpha_{uv})\, v'$

and hence $\alpha_{u' v'} = \varepsilon(u w_0)\, w_0(\alpha_{uv})$.
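
For example, when $n = 2$ we have $\alpha_{1,1} = \varepsilon(w_0)\, a_\delta = -(x_1 - x_2)$ (from (5.18) with $u = 1$), and (i) then gives $\beta_{s_1,s_1} = \varepsilon(s_1 s_1)\, \alpha_{1,1} = -(x_1 - x_2)$ and $\beta_{s_1,1} = \varepsilon(s_1)\, \alpha_{s_1,1} = 1$, in agreement with $s_1 = 1 - (x_1 - x_2)\, \partial_1$.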

(5.25) Let $E_n'$ be the subalgebra of operators $\varphi \in E_n$ such that $\varphi(P_n) \subseteq P_n$. Then $E_n'$ is a free $P_n$-module with basis $(\partial_w)_{w \in S_n}$.

Proof.

If $\varphi = \sum_{w \in S_n} \varphi_w\, \partial_w \in E_n'$, then by (5.22)

$\varphi_w = \theta\big( \varphi\, \partial_{w^{-1} w_0}\, \Delta \big) \in P_n.$

On the other hand, the $\partial_w$ are a $Q_n$-basis of $E_n$, and hence are linearly independent over $P_n$.

Notes and References

This is a typed excerpt of the book Notes on Schubert Polynomials by I. G. Macdonald.
