Kac-Moody Lie Algebras
Chapter IV: Affine Lie algebras

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

Last update: 7 October 2012

Abstract.
This is a typed version of I.G. Macdonald's lecture notes on Kac-Moody Lie algebras from 1983.

Loop algebras

Let A be an indecomposable Cartan matrix of finite type, so that 𝔤(A) is finite-dimensional and simple.

Let L = k[t, t^{-1}] denote the algebra of Laurent polynomials in one variable t over k.

The loop algebra of 𝔤 is defined to be

L(𝔤) = L ⊗_k 𝔤 = ⊕_{m∈ℤ} t^m 𝔤,

i.e. it is constructed from 𝔤 by extension of scalars from k to L. I shall drop the tensor product notation and write t^m x in place of t^m ⊗ x (m∈ℤ, x∈𝔤). Then the Lie bracket in L(𝔤) is defined by

[t^m x, t^n y]_0 = t^{m+n} [x,y] (1)

(m,n∈ℤ; x,y∈𝔤).

Recall that A is symmetrizable and hence that 𝔤 carries an invariant scalar product ⟨x,y⟩. We extend this to L(𝔤) as follows:

⟨t^m x, t^n y⟩ = ⟨x,y⟩ if m+n = 0, and 0 otherwise. (2)

One verifies immediately that this scalar product on L(𝔤) is still invariant.

Finally we define a derivation d of L(𝔤) by

d(t^m x) = m t^m x (m∈ℤ, x∈𝔤) (3)

i.e., d = t·d/dt. That d is a derivation is immediate from (1).
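The definitions above can be checked on a small example. The following sketch is an illustration, not part of the notes: it assumes 𝔤 = sl_2 realized as 2×2 matrices over ℚ, with the trace form tr(xy) playing the role of ⟨x,y⟩, and encodes a homogeneous element t^m x as the pair (m, x). It verifies that d is a derivation and that the scalar product (2) is invariant.

```python
# Illustration only: g = sl2 as 2x2 matrices; t^m x stored as the pair (m, x).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def brk(A, B):                       # commutator [A, B] in g
    P, Q = mul(A, B), mul(B, A)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def loop_brk(xi, eta):               # formula (1): [t^m x, t^n y] = t^(m+n)[x, y]
    (m, x), (n, y) = xi, eta
    return (m + n, brk(x, y))

def d(xi):                           # formula (3): d(t^m x) = m t^m x
    m, x = xi
    return (m, [[m * entry for entry in row] for row in x])

def pair(xi, eta):                   # formula (2), with <x, y> = tr(xy)
    (m, x), (n, y) = xi, eta
    return tr(mul(x, y)) if m + n == 0 else 0

e, h, f = [[0, 1], [0, 0]], [[1, 0], [0, -1]], [[0, 0], [1, 0]]

# d is a derivation: d[xi, eta] = [d xi, eta] + [xi, d eta]
xi, eta = (3, e), (-1, f)
m1, a = loop_brk(d(xi), eta)
m2, b = loop_brk(xi, d(eta))
assert d(loop_brk(xi, eta)) == (m1, [[a[i][j] + b[i][j] for j in range(2)]
                                     for i in range(2)])

# the scalar product (2) is invariant: <[xi, eta], zeta> = <xi, [eta, zeta]>
zeta = (-2, h)
assert pair(loop_brk(xi, eta), zeta) == pair(xi, loop_brk(eta, zeta))
```

The same checks go through for any homogeneous elements, since both sides of each identity are multilinear.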

The next stage is to construct a 1-dimensional central extension of L(𝔤).

Central extensions of Lie algebras

In general let

0 → 𝔞 → 𝔤_1 --p--> 𝔤 → 0

be an exact sequence of Lie algebras with 𝔞 = Ker(p) contained in the centre of 𝔤_1, i.e. [𝔞,𝔤_1] = 0. Choose a section s : 𝔤 → 𝔤_1, i.e. a k-linear map such that p∘s = 1_𝔤. Then for x,y∈𝔤

ψ(x,y) = [sx, sy] - s[x,y] ∈ 𝔞 (1)

(because it is killed by p); the function ψ : 𝔤×𝔤 → 𝔞 is bilinear, skew-symmetric and satisfies δψ = 0, where

δψ(x,y,z) = ψ([x,y],z) + ψ([y,z],x) + ψ([z,x],y). (2)

For we have

ψ([x,y],z) = [s[x,y], sz] - s[[x,y],z] = [[sx,sy], sz] - s[[x,y],z]

by (1) and the centrality of 𝔞; now apply the Jacobi identity.

In other words, ψ is a 2-cocycle on 𝔤 with values in 𝔞 (with trivial 𝔤-action).

Conversely, given a 2-cocycle ψ : 𝔤×𝔤 → 𝔞, define 𝔤_1 = 𝔤×𝔞 (direct product of vector spaces) with Lie bracket given by

[(x,a), (y,b)] = ([x,y], ψ(x,y)) (3)

(x,y∈𝔤; a,b∈𝔞). Then the Jacobi identity holds in 𝔤_1 by virtue of (2): for we have

[[(x,a),(y,b)], (z,c)] = [([x,y], ψ(x,y)), (z,c)] = ([[x,y],z], ψ([x,y],z))

so that cyclic summation gives 0 by virtue of (2) and the Jacobi identity in 𝔤. (The motive for the definition (3) is that in the original context we have [sx+a, sy+b] = [sx,sy] = s[x,y] + ψ(x,y).)

Thus 𝔤_1 is a Lie algebra, 𝔞 is a central ideal in 𝔤_1, and p induces an isomorphism of Lie algebras 𝔤_1/𝔞 → 𝔤 (but 𝔤 is not in general a subalgebra of 𝔤_1).
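Construction (3) can be illustrated in the simplest nontrivial case. The sketch below is an illustration, not from the notes: it takes 𝔤 abelian of dimension 2, 𝔞 = k, and for ψ the determinant form (a skew bilinear form is automatically a 2-cocycle on an abelian algebra); the resulting 𝔤_1 is the 3-dimensional Heisenberg algebra, and the Jacobi identity can be verified directly on a basis.

```python
# Illustration only: g = abelian k^2, a = k, psi(x, y) = x1*y2 - x2*y1.
# Construction (3) then yields the 3-dimensional Heisenberg algebra.
from itertools import product

def g_brk(x, y):                 # g is abelian: [x, y] = 0
    return (0, 0)

def psi(x, y):                   # skew bilinear form; delta psi = 0 here
    return x[0] * y[1] - x[1] * y[0]

def ext_brk(u, v):               # the bracket (3) on g1 = g x a
    (x, _), (y, _) = u, v
    return (g_brk(x, y), psi(x, y))

def add(u, v):
    (x, a), (y, b) = u, v
    return ((x[0] + y[0], x[1] + y[1]), a + b)

X, Y, C = ((1, 0), 0), ((0, 1), 0), ((0, 0), 1)

assert ext_brk(X, Y) == ((0, 0), 1)                         # [X, Y] = c
assert all(ext_brk(C, u) == ((0, 0), 0) for u in (X, Y, C)) # c is central
for u, v, w in product((X, Y, C), repeat=3):                # Jacobi in g1
    jac = add(add(ext_brk(ext_brk(u, v), w),
                  ext_brk(ext_brk(v, w), u)),
              ext_brk(ext_brk(w, u), v))
    assert jac == ((0, 0), 0)
```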

In the present context we define ψ : L(𝔤)×L(𝔤) → k by

ψ(ξ,η) = ⟨dξ, η⟩ (ξ,η∈L(𝔤)).

Explicitly, if ξ = t^m x, η = t^n y (m,n∈ℤ; x,y∈𝔤) then

ψ(ξ,η) = m⟨t^m x, t^n y⟩ = m⟨x,y⟩ if m+n = 0, and 0 otherwise,

from which it follows that ψ(η,ξ) = -ψ(ξ,η).

Next we verify that δψ(ξ,η,ζ) = 0. By linearity we may assume that ξ = t^p x, η = t^q y, ζ = t^r z (p,q,r∈ℤ; x,y,z∈𝔤). If p+q+r ≠ 0 then certainly δψ = 0, and if p+q+r = 0 we have

δψ(ξ,η,ζ) = (p+q)⟨[x,y],z⟩ + (q+r)⟨[y,z],x⟩ + (r+p)⟨[z,x],y⟩ = 2(p+q+r)⟨[x,y],z⟩ = 0,

since by the invariance of the scalar product ⟨[y,z],x⟩ = ⟨[z,x],y⟩ = ⟨[x,y],z⟩, and p+q+r = 0.
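This verification can also be carried out numerically. The sketch below is an illustration only: 𝔤 = sl_2 with the trace form, and homogeneous elements t^p x are checked for all basis elements x and a small window of exponents p.

```python
# Illustration only: g = sl2, <x, y> = tr(xy); t^p x stored as (p, x).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def brk(A, B):
    P, Q = mul(A, B), mul(B, A)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def loop_brk(xi, eta):
    (p, x), (q, y) = xi, eta
    return (p + q, brk(x, y))

def psi(xi, eta):                # psi(xi, eta) = <d xi, eta>
    (p, x), (q, y) = xi, eta
    return p * tr(mul(x, y)) if p + q == 0 else 0

def delta_psi(xi, eta, zeta):    # the cocycle condition (2)
    return (psi(loop_brk(xi, eta), zeta)
            + psi(loop_brk(eta, zeta), xi)
            + psi(loop_brk(zeta, xi), eta))

basis = [[[0, 1], [0, 0]], [[1, 0], [0, -1]], [[0, 0], [1, 0]]]  # e, h, f
elems = [(p, x) for p in range(-2, 3) for x in basis]

for xi in elems:
    for eta in elems:
        assert psi(eta, xi) == -psi(xi, eta)          # skew-symmetry
        for zeta in elems:
            assert delta_psi(xi, eta, zeta) == 0      # delta psi = 0
```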

So we construct a 1-dimensional central extension L~(𝔤) of L(𝔤) as follows:

L~(𝔤) = L(𝔤) ⊕ kc

with Lie bracket defined by

[ξ+λc, η+μc] = [ξ,η]_0 + ψ(ξ,η)c = [ξ,η]_0 + ⟨dξ,η⟩c

where [ , ]_0 denotes the bracket (1) of L(𝔤).

Notice that 𝔤 is a subalgebra of L~(𝔤), since dξ = 0 for ξ∈𝔤.

We extend the derivation d to L~(𝔤) by requiring that dc = 0. We ought to verify that d, extended in this way, is still a derivation. On the one hand we have

d[ξ+λc, η+μc] = d[ξ,η]_0

and on the other hand

[d(ξ+λc), η+μc] + [ξ+λc, d(η+μc)] = [dξ, η+μc] + [ξ+λc, dη] = [dξ,η]_0 + ψ(dξ,η)c + [ξ,dη]_0 + ψ(ξ,dη)c = d[ξ,η]_0 + (ψ(ξ,dη) - ψ(η,dξ))c = d[ξ,η]_0 + (⟨dξ,dη⟩ - ⟨dη,dξ⟩)c = d[ξ,η]_0.

Finally we construct the semidirect product

L^(𝔤) = L~(𝔤) ⊕ kd

with bracket

[ξ + λ_1 d, η + μ_1 d] = [ξ,η] + λ_1 dη - μ_1 dξ

(ξ,η∈L~(𝔤); λ_1,μ_1∈k). So altogether

L^(𝔤) = L(𝔤) ⊕ kc ⊕ kd

and

[ξ + λc + λ_1 d, η + μc + μ_1 d] = [ξ,η]_0 + λ_1 dη - μ_1 dξ + ⟨dξ,η⟩c.

Our aim is to show that L^(𝔤) ≅ 𝔤(A^(1)), where A^(1) is an indecomposable Cartan matrix of affine type. To construct A^(1) we need the following lemma:

(4.1) The root system R of 𝔤(A) has a unique highest root φ, i.e. a root such that φ ≥ α for all α∈R (that is, φ - α∈Q_+). We have φ = ∑_{i=1}^ℓ a_i α_i with each coefficient a_i ≥ 1; moreover φ(h_i) ≥ 0 for all i, and φ(h_i) > 0 for some i.

Proof.

Since A is of finite type, R is finite. Let φ = ∑ a_i α_i be a maximal element of R (with respect to the partial ordering ≤). Since w_i φ = φ - φ(h_i)α_i is again a root, maximality of φ forces φ(h_i) ≥ 0 for all i. If φ(h_i) = 0 for all i, then ⟨φ,α_i⟩ = 0 for all i and therefore ⟨φ,φ⟩ = 0, whence φ = 0. Hence φ(h_i) > 0 for at least one value of i.

Clearly φ∈R_+ (otherwise -φ > φ), hence the coefficients a_i are all ≥ 0. Let J = supp(φ) = {i : a_i ≠ 0}. If J ≠ [1,ℓ] then by connectedness there exist j∈J and k∉J such that a_{kj} < 0, whence

φ(h_k) = ∑_{i∈J} a_i α_i(h_k) = ∑_{i∈J} a_i a_{ki} < 0

(because all the terms are ≤ 0, and at least one is < 0). This contradicts φ(h_k) ≥ 0; hence all the a_i are ≥ 1.

Finally, let φ' ≠ φ be another maximal root. Then from above (with φ replaced by φ') we have ⟨φ',α_j⟩ ≥ 0 for all j, and ⟨φ',α_j⟩ > 0 for some j, whence ⟨φ,φ'⟩ = ∑ a_j⟨φ',α_j⟩ > 0 and therefore also φ(h_{φ'}) > 0. By (2.31) (root strings) φ - φ'∈R. Hence either φ > φ' or φ' > φ, neither of which is possible; therefore φ is unique.
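Lemma (4.1) can be made concrete: the positive roots of 𝔤(A) are generated from A alone by the root-string criterion, and the highest root can then be read off. The sketch below is an illustration only, using the convention a_{ij} = α_j(h_i), and treats A_2 and B_2.

```python
# Illustration only: generate the positive roots of g(A) from the Cartan
# matrix A (convention: A[i][j] = alpha_j(h_i)) via root strings, then
# take the unique maximal root.

def positive_roots(A):
    n = len(A)
    roots = {tuple(1 if j == i else 0 for j in range(n)) for i in range(n)}
    changed = True
    while changed:
        changed = False
        for al in list(roots):
            for i in range(n):
                # q = number of steps down the alpha_i-string from al
                q, down = 0, list(al)
                while True:
                    down[i] -= 1
                    if tuple(down) in roots:
                        q += 1
                    else:
                        break
                # al + alpha_i is a root iff q - al(h_i) > 0
                if q - sum(A[i][j] * al[j] for j in range(n)) > 0:
                    up = list(al)
                    up[i] += 1
                    up = tuple(up)
                    if up not in roots:
                        roots.add(up)
                        changed = True
    return roots

def highest_root(A):
    return max(positive_roots(A), key=sum)

A2 = [[2, -1], [-1, 2]]
B2 = [[2, -1], [-2, 2]]           # alpha_1 long, alpha_2 short

assert highest_root(A2) == (1, 1)
assert highest_root(B2) == (1, 2)
# phi(h_i) >= 0 for all i, and > 0 for some i, as (4.1) asserts
phi = highest_root(B2)
vals = [sum(B2[i][j] * phi[j] for j in range(2)) for i in range(2)]
assert all(v >= 0 for v in vals) and any(v > 0 for v in vals)
```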

Now let e_i, f_i (1 ≤ i ≤ ℓ) as usual be the generators of 𝔤 = 𝔤(A), and let 𝔥 be the Cartan subalgebra. Normalize the scalar product on 𝔤 so that ⟨φ,φ⟩ = 2, and choose e_φ∈𝔤_φ, f_φ∈𝔤_{-φ} such that

[e_φ, f_φ] = h_φ (1)

or equivalently such that ⟨e_φ,f_φ⟩ = 1. (Recall that [e_φ,f_φ] = ⟨e_φ,f_φ⟩h'_φ, and that h'_φ = h_φ because ⟨φ,φ⟩ = 2, by our choice of scalar product.)

Define

e_0 = t f_φ, f_0 = t^{-1} e_φ, h_0 = -h_φ + c (2)

and let 𝔥^ = 𝔥 ⊕ kc ⊕ kd. We extend each root α∈R to a linear form (also denoted by α) on 𝔥^ by setting α(c) = α(d) = 0; also define δ∈𝔥^* by

δ(𝔥 ⊕ kc) = 0, δ(d) = 1. (3)

Finally set

α_0 = δ - φ (4)

so that

∑_{i=0}^ℓ a_i α_i = δ (5)

where a_0 = 1, and a_1,…,a_ℓ are the coefficients of φ as in (4.1).

Let

a_{ij} = α_j(h_i) (0 ≤ i,j ≤ ℓ) (6)

and let A^(1) = (a_{ij})_{0≤i,j≤ℓ}. The matrix A^(1) has A as a principal submatrix.

(4.2) A(1) is an indecomposable Cartan matrix of affine type.

Proof.

We calculate:

α_0(h_0) = (δ-φ)(c-h_φ) = φ(h_φ) = 2;
α_0(h_i) = (δ-φ)(h_i) = -φ(h_i) ≤ 0, by (4.1);
α_j(h_0) = α_j(c-h_φ) = -α_j(h_φ) = -⟨φ,α_j⟩,

which is a positive scalar multiple of -φ(h_j), hence also ≤ 0. Hence A^(1) is a Cartan matrix, and it is indecomposable because A is indecomposable and a_{0i} = -φ(h_i) < 0 for some i ≠ 0, again by (4.1). Also from (3), (5) and (6) we have

∑_{j=0}^ℓ a_{ij} a_j = ∑_{j=0}^ℓ a_j α_j(h_i) = δ(h_i) = 0

for 0 ≤ i ≤ ℓ, so that by (2.17) A^(1) is of affine type.

If A is of type X (X = A_ℓ,…,G_2: see table F), then A^(1) is of type X^(1) (see table A; the integers a_i are the labels attached to the vertices there).
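For a symmetric A the bordering by α_0 can be written down mechanically, since then ⟨φ,α_j⟩ = ∑_k a_k a_{kj}. The sketch below is an illustration only: it builds A^(1) for A = A_2 from the marks a_i of (4.1) and checks the affine criterion ∑_j a_{ij} a_j = 0 of (2.17).

```python
# Illustration only: border a symmetric Cartan matrix A of finite type with
# alpha_0 = delta - phi, where phi has coefficients `marks` (from (4.1)).

def affinize(A, marks):
    n = len(A)
    # a_{00} = 2; a_{0j} = -<phi, alpha_j>; a_{i0} = -phi(h_i)
    row0 = [2] + [-sum(marks[k] * A[k][j] for k in range(n)) for j in range(n)]
    rows = [[-sum(A[i][j] * marks[j] for j in range(n))] + list(A[i])
            for i in range(n)]
    return [row0] + rows

A2 = [[2, -1], [-1, 2]]
A2_aff = affinize(A2, [1, 1])        # phi = alpha_1 + alpha_2 for A2
assert A2_aff == [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]

# affine criterion (2.17): sum_j a_{ij} a_j = 0 with (a_0,...,a_l) = (1, marks)
a = [1, 1, 1]
assert all(sum(A2_aff[i][j] * a[j] for j in range(3)) == 0 for i in range(3))
```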

(4.3) Theorem

L^(𝔤) ≅ 𝔤(A^(1)) (with the e_i, f_i and 𝔥^ as generators);
L~(𝔤) ≅ 𝔤'(A^(1)), the derived algebra;
L(𝔤) ≅ 𝔤'(A^(1))/(centre).

Proof.

The proof is a sequence of verifications:

  1. (𝔥^, (h_i)_{0≤i≤ℓ}, (α_i)_{0≤i≤ℓ}) is a minimal realization of the Cartan matrix A^(1).

    Well, dim 𝔥^ = ℓ+2 = 2n-ℓ, where n = ℓ+1 is the number of rows of A^(1) and ℓ is its rank. Clearly the h_i are linearly independent in 𝔥^, and the α_i are linearly independent in 𝔥^*.

  2. The e_i, f_i and 𝔥^ satisfy the defining relations (1.2).

    Since 𝔤 is a subalgebra of L^(𝔤), we have [e_i,f_j] = δ_{ij}h_i for 1 ≤ i,j ≤ ℓ; moreover

    [e_0, f_j] = [t f_φ, f_j] = t[f_φ,f_j] + ⟨d(t f_φ), f_j⟩c = 0 (1 ≤ j ≤ ℓ)

    (the first term vanishes because φ+α_j is not a root, and the cocycle term because the degrees 1 and 0 do not sum to 0), and

    [e_0, f_0] = [t f_φ, t^{-1} e_φ] = [f_φ,e_φ] + ⟨f_φ,e_φ⟩c = -h_φ + c = h_0;

    next, if h^∈𝔥^, say h^ = h + λc + μd (h∈𝔥; λ,μ∈k), then for i = 1,2,…,ℓ we calculate

    [h^, e_i] = [h+λc+μd, e_i] = [h,e_i] + μ d(e_i) = [h,e_i] = α_i(h)e_i = α_i(h^)e_i;
    [h^, e_0] = [h+λc+μd, t f_φ] = t[h,f_φ] + μ d(t f_φ) = -φ(h) t f_φ + μ t f_φ = (μ-φ(h)) e_0 = α_0(h^) e_0

    (because α_0(h^) = (δ-φ)(h+λc+μd) = μ - φ(h)).

    Finally, it is clear that 𝔥^ is abelian.

  3. Let α∈R∪{0}, m∈ℤ. Then t^m 𝔤_α is a weight space for the adjoint action of 𝔥^ on L^(𝔤), with weight α+mδ: for with h^ as above,

    [h^, t^m x] = [h+λc+μd, t^m x] = t^m[h,x] + μ d(t^m x) = (α(h)+mμ) t^m x = (α+mδ)(h^) t^m x.

    Thus the α+mδ (α∈R∪{0}, m∈ℤ) are the roots of L^(𝔤) relative to 𝔥^. Now let 𝔞 be an ideal of L^(𝔤) such that 𝔞∩𝔥^ = 0. By (1.5), 𝔞 is the direct sum of its weight spaces 𝔞∩t^m𝔤_α (where 𝔤_0 is to be interpreted as 𝔥^). Hence if 𝔞 ≠ 0 there exist α∈R∪{0}, x ≠ 0 in 𝔤_α and m∈ℤ such that t^m x∈𝔞. Choose y∈𝔤_{-α} such that ⟨x,y⟩ = 1; then

    z = [t^m x, t^{-m} y] = [x,y] + ⟨d(t^m x), t^{-m} y⟩c = [x,y] + mc ≠ 0

    (if α ≠ 0 then [x,y] = h'_α ≠ 0, and if α = 0 then m ≠ 0) lies in 𝔞∩𝔥^: contradiction.

    Hence L^(𝔤) has no ideals 𝔞 ≠ 0 such that 𝔞∩𝔥^ = 0.

  4. To complete the proof, it remains to be shown that L^(𝔤) is generated by the e_i, f_i (0 ≤ i ≤ ℓ) and 𝔥^. Let L_1 be the subalgebra generated by these elements. Then certainly 𝔤 ⊆ L_1; also t f_φ = e_0∈L_1, and since 𝔤 is simple it is generated (as a 𝔤-module) by f_φ, i.e. [f_φ,𝔤] = 𝔤, and therefore [e_0,𝔤] = t𝔤. Thus t𝔤 ⊆ L_1. Now assume that t^k𝔤 ⊆ L_1 for some k ≥ 1. Since 𝔤 = [𝔤,𝔤] we have t^{k+1}𝔤 = [t𝔤, t^k𝔤] ⊆ L_1, and hence t^k𝔤 ⊆ L_1 for all k ≥ 0. In the same way we prove that t^{-k}𝔤 ⊆ L_1 for all k ≥ 0, and hence L_1 = L^(𝔤).

(4.4) Corollary. If S is the root system of 𝔤(A^(1)) then

S^re = {α+mδ : α∈R, m∈ℤ}, S^im = {mδ : m∈ℤ, m ≠ 0};

each imaginary root mδ has multiplicity ℓ = rank(A^(1)).

The positive real roots are

α+mδ (α∈R, m ≥ 1) and α (α∈R_+, m = 0).

The bilinear form ⟨ξ,η⟩ on L(𝔤) we extend to L^(𝔤) as follows:

⟨c, L(𝔤)⟩ = ⟨d, L(𝔤)⟩ = 0; ⟨c,c⟩ = ⟨d,d⟩ = 0; ⟨c,d⟩ = 1.

It is still invariant: the only nontrivial case to be checked is that

⟨[d,ξ], η⟩ = ⟨d, [ξ,η]⟩ (ξ,η∈L(𝔤))

which is true because

⟨[d,ξ], η⟩ = ⟨dξ, η⟩

and

⟨d, [ξ,η]⟩ = ⟨d, [ξ,η]_0 + ⟨dξ,η⟩c⟩ = ⟨dξ,η⟩.

Remark. L(𝔤) is certainly not simple: it has lots of ideals. For example, let a∈k* and let u_a : L(𝔤) → 𝔤 be the homomorphism defined by u_a(t^m x) = a^m x (m∈ℤ; x∈𝔤). Then u_a is a Lie algebra homomorphism, and its kernel is a nontrivial ideal, indeed a maximal ideal.
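The evaluation maps of the Remark are easy to exhibit. The sketch below is an illustration only (𝔤 = sl_2, a = 3, with exact arithmetic via fractions so that negative powers of t cause no trouble):

```python
# Illustration only: u_a(t^m x) = a^m x is a Lie algebra homomorphism.
from fractions import Fraction

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def brk(A, B):
    P, Q = mul(A, B), mul(B, A)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

a = Fraction(3)

def u(xi):                        # evaluation at t = a
    m, x = xi
    return [[a**m * entry for entry in row] for row in x]

e, f = [[0, 1], [0, 0]], [[0, 0], [1, 0]]
xi, eta = (2, e), (-1, f)

# u_a[t^m x, t^n y] = a^(m+n)[x, y] = [u_a(t^m x), u_a(t^n y)]
lhs = u((2 + (-1), brk(e, f)))
rhs = brk(u(xi), u(eta))
assert lhs == rhs
```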

Construction of the remaining affine Lie algebras

Let A = (a_{ij})_{1≤i,j≤n} be an indecomposable symmetric Cartan matrix of finite type, i.e. of type A, D or E. As usual, let 𝔥, R, Q, W denote the Cartan subalgebra, the root system, the root lattice and the Weyl group of the algebra 𝔤(A) = 𝔤. The invariant bilinear form ⟨x,y⟩ on 𝔤, constructed as in (3.12), is such that ⟨h_i,h_j⟩ = a_{ij} and also (on 𝔥*) ⟨α_i,α_j⟩ = a_{ij}.

In particular, |α_i|² = a_{ii} = 2; since the scalar product on 𝔥* is W-invariant and all the roots are real, we have |α|² = 2 for all roots α∈R. Conversely, by (2.34), if α∈Q is such that |α|² = 2, then α∈R.

Let α,β∈R. Then (Cauchy-Schwarz)

|⟨α,β⟩| ≤ |α|·|β| = 2

and since ⟨α,β⟩ is an integer, it can therefore take only the values 0, ±1, ±2.

(4.5) We have ⟨α,β⟩ = 2, 1, 0, -1, -2 respectively if and only if

α = β; α-β∈R; α±β∉R∪{0}; α+β∈R; α = -β.

Proof.

For example, since |α-β|² = |α|² + |β|² - 2⟨α,β⟩ = 4 - 2⟨α,β⟩, it follows that

⟨α,β⟩ = 1 ⟺ |α-β|² = 2 ⟺ α-β∈R, and ⟨α,β⟩ = 2 ⟺ |α-β|² = 0 ⟺ α = β.

Similarly with β replaced by -β.

Now let Δ be the Dynkin diagram of A, and let s be an automorphism of Δ. In terms of the matrix A, this means that s is a permutation of {1,2,…,n} such that

a_{si,sj} = a_{ij}

for all i,j. Let k be the order of s. If k ≠ 1 (i.e. if s ≠ 1) there are just 5 possibilities:

A_{2ℓ} (ℓ ≥ 1), k = 2;  A_{2ℓ-1} (ℓ ≥ 2), k = 2;  D_{ℓ+1} (ℓ ≥ 3), k = 2;  E_6, k = 2;  D_4, k = 3.

(In the diagrams, not reproduced here, vertices of Δ in the same vertical line are in the same s-orbit.) Thus k = 2 or 3 in every case.

The graph automorphism s determines an automorphism (also denoted by s) of period k of the Lie algebra 𝔤=𝔤(A) by the rule

s(e_i) = e_{si}, s(f_i) = f_{si}, s(h_i) = h_{si} (1 ≤ i ≤ n).

This is clear from the construction of 𝔤(A) in Chapter I, since the relations (1.2) are stable under s.

By transposition, s also acts on 𝔥*: (sλ)(h) = λ(s^{-1}h) (h∈𝔥, λ∈𝔥*). We have sα_j = α_{sj}, because

(sα_j)(h_i) = α_j(s^{-1}h_i) = α_j(h_{s^{-1}i}) = a_{s^{-1}i,j} = a_{i,sj} = α_{sj}(h_i).

The scalar product on 𝔤 (hence on 𝔥 and 𝔥*) is s-invariant.

Since 𝔥 is stable under s, it follows that s permutes the root spaces 𝔤_α and hence also the roots α∈R: s(𝔤_α) = 𝔤_{sα}; and this action agrees with that already described on 𝔥*, because if x∈𝔤_α and h∈𝔥 we have

[h, sx] = s[s^{-1}h, x] = s(α(s^{-1}h)x) = (sα)(h)·sx.

Moreover, since s permutes the simple roots α_i, it follows that s permutes R_+. Hence α+sα is never zero, and α-sα is never a root (if it were, then (α-sα) + (sα-s²α) + ⋯ + (s^{k-1}α-s^k α) = 0 would exhibit 0 as a sum of roots all of the same sign). This observation, together with (4.5), proves the first part of

(4.6) Let α∈R and assume α ≠ sα. Then

  1. ⟨α,sα⟩ = 0 or -1;
  2. if ⟨α,sα⟩ = -1 (so that β = α+sα∈R) then k = 2 and s acts as -1 on 𝔤_β.

Proof of (ii).

If k = 3 then ⟨α,s²α⟩ = ⟨α,s^{-1}α⟩ = ⟨sα,α⟩ = -1, and ⟨sα,s²α⟩ = ⟨α,sα⟩ = -1, hence |α+sα+s²α|² = 6 - 2·3 = 0 and therefore α+sα+s²α = 0, which is plainly impossible. Hence k = 2.

Let e_α generate 𝔤_α; then se_α generates 𝔤_{sα}, and x = [e_α, se_α] is a nonzero element of 𝔤_β. Hence x generates 𝔤_β, and sx = [se_α, e_α] = -x.

Let ω be a primitive k-th root of unity (assumed to lie in the ground field K if k = 3). For each integer r define

𝔤^{(r)} = {x∈𝔤 : sx = ω^r x}

so that 𝔤^{(r)} is the ω^r-eigenspace of s in 𝔤, and depends only on r mod k. We have

𝔤 = ⊕_{r=0}^{k-1} 𝔤^{(r)} (1)

the decomposition of x∈𝔤 being

x = ∑_{r=0}^{k-1} x^{(r)}

where

x^{(r)} = (1/k) ∑_{i=0}^{k-1} ω^{-ir} s^i x.

Also

[𝔤^{(p)}, 𝔤^{(q)}] ⊆ 𝔤^{(p+q)} (2)

for all p,q, so that (1) is a ℤ/kℤ-grading of 𝔤.
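The projections x ↦ x^{(r)} are just the finite Fourier decomposition with respect to s. The sketch below is an illustration only, with s a cyclic shift of order 3 on ℂ³ standing in for the automorphism: it checks that x = ∑ x^{(r)} and that s acts on x^{(r)} by the scalar ω^r.

```python
# Illustration only: eigenspace projections for an order-k automorphism,
# here the cyclic shift s on C^3 (k = 3).
import cmath

k = 3
w = cmath.exp(2j * cmath.pi / k)     # primitive k-th root of unity

def s(x):
    return [x[2], x[0], x[1]]

def s_pow(x, i):
    for _ in range(i):
        x = s(x)
    return x

def comp(x, r):                      # x^(r) = (1/k) sum_i w^(-ir) s^i(x)
    return [sum(w**(-i * r) * s_pow(x, i)[j] for i in range(k)) / k
            for j in range(len(x))]

x = [1.0, 2.0, -4.0]

# x is the sum of its components x^(0) + x^(1) + x^(2)
total = [sum(comp(x, r)[j] for r in range(k)) for j in range(3)]
assert all(abs(total[j] - x[j]) < 1e-9 for j in range(3))

# s acts on the component x^(r) by the scalar w^r
for r in range(k):
    c = comp(x, r)
    assert all(abs(s(c)[j] - w**r * c[j]) < 1e-9 for j in range(3))
```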

In particular, 𝔤^{(0)} is the Lie algebra of s-invariants of 𝔤, and each 𝔤^{(r)} is a 𝔤^{(0)}-module under the adjoint action.

Next we have

(4.7) The restriction of the bilinear form ⟨x,y⟩ to 𝔤^{(p)} × 𝔤^{(q)} is

  1. zero if p+q ≢ 0 (mod k);
  2. nondegenerate if p+q ≡ 0 (mod k).

Proof.

Let x∈𝔤^{(p)}, y∈𝔤^{(q)}. Then

⟨x,y⟩ = ⟨sx,sy⟩ = ω^{p+q}⟨x,y⟩

which proves (i); then (ii) follows because ⟨x,y⟩ is nondegenerate on 𝔤.

Now let

L(𝔤,s) = ⊕_{r∈ℤ} t^r 𝔤^{(r)} ⊆ L(𝔤)
L~(𝔤,s) = L(𝔤,s) ⊕ Kc ⊆ L~(𝔤)
L^(𝔤,s) = L~(𝔤,s) ⊕ Kd ⊆ L^(𝔤)

It follows from (2) that L(𝔤,s) is a subalgebra of L(𝔤), and then that L~(𝔤,s) (resp. L^(𝔤,s)) is a subalgebra of L~(𝔤) (resp. L^(𝔤)).

Our aim is now to show that L^(𝔤,s) ≅ 𝔤(A^(k)), where A^(k) is an indecomposable Cartan matrix of affine type, to be defined presently.

Let Δ_i (1 ≤ i ≤ ℓ) be the orbits of s in Δ, and number the vertices of Δ so that i∈Δ_i. With one exception (case A_{2ℓ}) each Δ_i is discrete (no joining edges). In the exceptional case, one Δ_i is connected (of type A_2). Define

u_i = 1 if Δ_i is discrete, u_i = 2 if Δ_i is connected,

and put

u = max_{1≤i≤ℓ} u_i

(so that u = 1 except in case A_{2ℓ}, where u = 2).

Let

E_i = u_i^{1/2} ∑_{j∈Δ_i} e_j,  F_i = u_i^{1/2} ∑_{j∈Δ_i} f_j,  H_i = u_i ∑_{j∈Δ_i} h_j

for 1 ≤ i ≤ ℓ. These elements are all fixed by s, hence they generate a subalgebra 𝔤' of 𝔤^{(0)}. Let 𝔥' be the subspace of 𝔥 spanned by the H_i, and note that 𝔥' = 𝔥^{(0)} = {h∈𝔥 : sh = h}.

Next define

a'_{ij} = u_i ∑_{p∈Δ_i} a_{pj} (1 ≤ i,j ≤ ℓ) (4)

and let A' = (a'_{ij})_{1≤i,j≤ℓ}.
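Definition (4) is easy to compute with. The sketch below is an illustration only (orbits and the u_i supplied by hand): it folds A_3 by the flip of its diagram and A_2 by its flip, recovering types C_2 and A_1 as in the table of (4.8) below.

```python
# Illustration only: a'_{ij} = u_i * sum_{p in orbit_i} a[p][rep_j], the
# folded Cartan matrix of definition (4); orbits given explicitly.

def fold(A, orbits, u):
    reps = [orb[0] for orb in orbits]     # chosen representative of each orbit
    l = len(orbits)
    return [[u[i] * sum(A[p][reps[j]] for p in orbits[i]) for j in range(l)]
            for i in range(l)]

# A3 (vertices 0, 1, 2), s swaps 0 and 2: discrete orbits {0, 2} and {1}
A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
assert fold(A3, [[0, 2], [1]], [1, 1]) == [[2, -2], [-1, 2]]   # type C2

# A2, s swaps the two vertices: a single connected orbit, u_1 = 2
A2 = [[2, -1], [-1, 2]]
assert fold(A2, [[0, 1]], [2]) == [[2]]                        # type A1
```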

We have then

⟨H_i,H_j⟩ = u_i u_j ∑_{p∈Δ_i} ∑_{q∈Δ_j} a_{pq} = u_j|Δ_j| a'_{ij}.

(4.8)

  1. A' is an indecomposable Cartan matrix of finite type, given by the following table:

     k:  2    2              2              2              2    3
     A:  A_2  A_{2ℓ} (ℓ≥2)   A_{2ℓ-1} (ℓ≥2) D_{ℓ+1} (ℓ≥3)  E_6  D_4
     A': A_1  B_ℓ            C_ℓ            B_ℓ            F_4  G_2

  2. 𝔤' ≅ 𝔤(A').

Proof.
  1. It is straightforward to verify from the definition (4) that A' is a Cartan matrix, and that a'_{ij} ≥ -3. Let Δ, Δ' be the Dynkin diagrams of A and A'. Then Δ' is derived from Δ by the following rules (which are a restatement of (4)):

    [Five diagram-folding rules, not reproduced here: each s-orbit of vertices of Δ is replaced by a single vertex of Δ', with bond multiplicities determined by (4).]

    (In the left-hand column of the original diagrams, vertices in the same vertical line are in the same s-orbit.) Hence the type of A' is as stated in the table above.

  2. It is straightforward to verify that the generators E_i, F_i, H_i of 𝔤' satisfy the relations (1.2) for the matrix A'. To complete the proof, it will be enough to verify that they satisfy Serre's relations

    (ad E_i)^{1-a'_{ij}} E_j = (ad F_i)^{1-a'_{ij}} F_j = 0 (i ≠ j). (*)

    For it will then follow from (2.???) that 𝔤' is a homomorphic image of 𝔤(A'); but 𝔤(A') is simple, hence 𝔤' ≅ 𝔤(A').

    To prove (*), there are two cases to consider, according as the orbit Δ_i ⊆ Δ is discrete or connected.

    1. Suppose Δ_i is discrete. If it consists of the single element i, then we have a'_{ij} = a_{ij}, E_i = e_i, and (*) follows from the corresponding relation for 𝔤.

      If Δ_i consists of k elements, then for p ≠ q in Δ_i we have ⟨α_p,α_q⟩ = 0 and therefore by (4.5) α_p+α_q is not a root, so that [e_p,e_q] = 0 and hence ad e_p, ad e_q commute. Hence

      (ad E_i)^{1-a'_{ij}} E_j = u_j^{1/2} (∑_{p∈Δ_i} ad e_p)^{1-a'_{ij}} (∑_{q∈Δ_j} e_q)

      (here u_i = 1) is a sum of terms ∏_{p∈Δ_i} (ad e_p)^{n_p} e_q, where q∈Δ_j and

      ∑_{p∈Δ_i} n_p = 1-a'_{ij} = 1 - ∑_{p∈Δ_i} a_{pq},

      so that n_p ≥ 1-a_{pq} for at least one p∈Δ_i, and therefore (ad e_p)^{n_p} e_q = 0 by Serre's relations in 𝔤. It follows that (ad E_i)^{1-a'_{ij}} E_j = 0, and likewise with the E's replaced by the F's.

    2. Suppose Δ_i is connected. Then k = 2 and Δ_i = {i, si}. Since Δ contains no cycles, at least one of a_{ij}, a_{si,j} is zero. If both are zero, then [E_i,e_j] = 0 and therefore [E_i,E_j] = 0. If say a_{ij} = 0, a_{si,j} = -1, then a'_{ij} = -2, and we have to show that (ad e_i + ad e_{si})³ e_j = 0.

      Let x = ad e_i, y = ad e_{si}. Then x e_j = y² e_j = 0; moreover z = [x,y] = ad [e_i,e_{si}] commutes with x and y (because 2α_i+α_{si} and α_i+2α_{si} are not roots), hence x²y e_j = xz e_j = zx e_j = 0 and yxy e_j = yz e_j = zy e_j = -yxy e_j, whence yxy e_j = 0, so that we have altogether

      x e_j = y² e_j = x²y e_j = yxy e_j = 0

      and therefore

      (x+y)³ e_j = (x+y)² y e_j = (x+y) xy e_j = 0.

𝔥' is a Cartan subalgebra of 𝔤'. Let p : 𝔥* → 𝔥'* be the restriction map, and let α'_i = p(α_i) (1 ≤ i ≤ ℓ). Then

α'_j(H_i) = α_j(u_i ∑_{p∈Δ_i} h_p) = a'_{ij}

so that the α'_i are the simple roots of 𝔤' (relative to 𝔥'). If α∈R, say α = ∑_{i=1}^n m_i α_i, then p(α) = ∑_{i=1}^ℓ (∑_{j∈Δ_i} m_j) α'_i ∈ Q'; in particular p(α) ≠ 0 (the coefficients here all have the same sign, and are not all zero).

Let R' be the root system of 𝔤' and let Q' = ∑_{i=1}^ℓ ℤα'_i be the root lattice.

We have ⟨H_i,H_j⟩ = u_j|Δ_j| a'_{ij}, from which it follows that

⟨α'_i, α'_j⟩ = (u_i|Δ_i|)^{-1} a'_{ij} = |Δ_i|^{-1} ∑_{k∈Δ_i} ⟨α_k,α_j⟩ = ⟨πα_i, α_j⟩

where π = (1/k) ∑_{i=0}^{k-1} s^i : 𝔥* → 𝔥*. Hence, by linearity, we have

⟨p(λ), p(μ)⟩ = ⟨πλ, μ⟩

for all λ,μ∈𝔥*.

In particular

|α'_i|² = 2(u_i|Δ_i|)^{-1}

and therefore |α'|² = 2u^{-1} or 2(ku)^{-1} for all α'∈R'.

(4.9) Let α,β∈R be such that p(α) = p(β). Then α and β are in the same s-orbit in R.

Proof.

Suppose not, i.e. β ≠ s^i α for 0 ≤ i ≤ k-1. Since p(β-s^i α) = p(β-α) = 0, it follows that β-s^i α∉R, hence by (4.5) ⟨β,s^i α⟩ ≤ 0. But then

|p(α)|² = ⟨p(α), p(β)⟩ = ⟨β, πα⟩ ≤ 0

so that p(α) = 0, which is impossible.

Since 𝔤' is a subalgebra of 𝔤^{(0)}, each 𝔤^{(r)} (and 𝔤 itself) is a 𝔤'-module. Let S^{(r)} (resp. S) be the set of nonzero weights of 𝔤^{(r)} (resp. 𝔤) as 𝔤'-module (or 𝔥'-module).

Clearly

R' ⊆ S^{(0)};  S = ⋃_{r=0}^{k-1} S^{(r)};

also

p(R) = S ⊆ Q'

because R is the set of nonzero weights of 𝔤 as 𝔥-module. By (4.9) the fibres of p : R → S are the orbits of s in R, hence have 1 or k elements.

  1. If α = sα then s(𝔤_α) = 𝔤_α, hence se_α = ω^r e_α for some r = 0,…,k-1, and correspondingly e_α∈𝔤^{(r)}; hence p(α)∈S^{(r)} with multiplicity 1, for this value of r only.
  2. If α ≠ sα then e_α, se_α, …, s^{k-1}e_α are linearly independent, hence e_α^{(r)} ≠ 0 for each r = 0,1,…,k-1. It follows that p(α)∈S^{(r)} with multiplicity 1, for each r = 0,1,…,k-1.

Thus all nonzero weights of each 𝔤^{(r)} occur with multiplicity 1.

For α∈R there are three (mutually exclusive) possibilities:

  1. α = sα;
  2. α ≠ sα, ⟨α,sα⟩ = 0;
  3. α ≠ sα, ⟨α,sα⟩ = -1;

and correspondingly |p(α)|² = 2, 2/k, 1/2.

Proof.

We have

|p(α)|² = ⟨α, πα⟩ = ⟨α,α⟩ = 2 in case (i); (1/k)⟨α,α⟩ = 2/k in case (ii); (1/2)(⟨α,α⟩ + ⟨α,sα⟩) = 1/2 in case (iii)

(by (4.6), since k = 2 in case (iii)).

Suppose first that u = 1. Then case (iii) does not occur: for by (2.34) we have

min{|λ|² : λ∈Q', λ ≠ 0} = 2/k,

p(α)∈Q', and case (iii) would give |p(α)|² = 1/2 < 2/k (k = 2). Again by (2.34), if |p(α)|² = 2/k (case (ii)) then p(α)∈R'; whilst if |p(α)|² = 2, i.e. if α = sα, then writing α = ∑_{i=1}^n m_i α_i we have m_i = m_{si} and therefore p(α) = ∑_{i=1}^ℓ |Δ_i| m_i α'_i, whence again by (2.34) we have p(α)∈R'. So if u = 1 we have

S = R'

and therefore (since R' ⊆ S^{(0)} ⊆ S)

S^{(0)} = R',  S^{(r)} = R'_short (1 ≤ r ≤ k-1)

where R'_short is the set of short roots α'∈R' (i.e. with |α'|² = 2/k).

There remains the case A_{2ℓ}, where u = 2. In this case |α'|² = 1 or 1/2 for α'∈R', and we find by direct calculation that

S^{(0)} = R',  S^{(1)} = R' ∪ 2R'_short.

Thus in all cases 𝔤^{(0)} = 𝔤'.

Now let ψ be the highest short root of R' (i.e., ψ^∨ is the highest root of the dual root system R'^∨), and put

φ = uψ.

Then φ is the highest weight of 𝔤^{(r)} (1 ≤ r ≤ k-1), and is the only weight λ of 𝔤^{(r)} such that λ+α'_i∉S^{(r)} for i = 1,…,ℓ.

It follows that 𝔤^{(r)} (1 ≤ r ≤ k-1) is simple as a 𝔤'-module. For in any case, by complete reducibility, 𝔤^{(r)} is a direct sum of simple 𝔤'-modules. Let M be one of these, with highest weight λ∈S^{(r)}. One sees easily that λ ≠ 0, hence M_λ = 𝔤_λ^{(r)} (because this weight space is 1-dimensional). But then [E_i, 𝔤_λ^{(r)}] = [E_i, M_λ] = 0 for 1 ≤ i ≤ ℓ, and therefore (Lemma) λ+α'_i∉S^{(r)}, and consequently λ = φ.

(Alternatively: instead of using complete reducibility, use the dimension formula to compute dim L(φ) in each case.)

We now proceed as in the case considered previously.

Normalize the scalar product on 𝔤 so that we have

⟨φ,φ⟩ = 2

(no renormalization is needed in case A_{2ℓ}, because then |ψ|² = 1/2 and u = 2, hence |φ|² = 2). Choose E_φ∈𝔤^{(-1)}_φ, F_φ∈𝔤^{(1)}_{-φ} such that

⟨E_φ, F_φ⟩ = 1

(this is possible by (4.7)). Then we have

[E_φ, F_φ] = H_φ

(where H_φ∈𝔥' corresponds to φ under the scalar product), because if H∈𝔥' we have

⟨H, [E_φ,F_φ]⟩ = ⟨[H,E_φ], F_φ⟩ = φ(H)⟨E_φ,F_φ⟩ = φ(H).

Then define

E_0 = tF_φ ∈ t𝔤^{(1)},  F_0 = t^{-1}E_φ ∈ t^{-1}𝔤^{(-1)},  H_0 = -H_φ + c

and set 𝔥'^ = 𝔥' ⊕ kc ⊕ kd. Extend each α∈S to a linear form (also denoted by α) on 𝔥'^ by setting α(c) = α(d) = 0; also define δ∈(𝔥'^)* by

δ(𝔥' ⊕ kc) = 0,  δ(d) = 1

(this δ is the restriction to 𝔥'^ of the δ defined earlier on 𝔥^).

Finally set

α_0 = δ - φ;

we have

φ = ∑_{i=1}^ℓ a_i α'_i

with coefficients a_i ≥ 1, so that if we define a_0 = 1 we have

∑_{i=0}^ℓ a_i α'_i = δ (where α'_0 = α_0).

Let

a'_{ij} = α'_j(H_i) (0 ≤ i,j ≤ ℓ)

and let A^(k) = (a'_{ij})_{0≤i,j≤ℓ}. The matrix A^(k) has A' as a principal submatrix. One then verifies that

  1. A^(k) is an indecomposable Cartan matrix of affine type;
  2. (𝔥'^, (H_i)_{0≤i≤ℓ}, (α'_i)_{0≤i≤ℓ}) is a minimal realization of A^(k);
  3. the E_i, F_i and 𝔥'^ satisfy the defining relations (1.2);
  4. the weights of 𝔥'^ on L^(𝔤,s) are the α+rδ with α∈S^{(r)}, r∈ℤ, together with the rδ (r∈ℤ, r ≠ 0), the weight rδ having multiplicity dim 𝔥^{(r)} = multiplicity of ω^r as an eigenvalue of s on 𝔥. Thus

     mult(rδ) = ℓ if r ≡ 0 (mod k), and (n-ℓ)/(k-1) otherwise

     (because if the latter multiplicity is m, then ℓ + (k-1)m = n);
  5. L^(𝔤,s) ≅ 𝔤(A^(k)); L~(𝔤,s) ≅ 𝔤'(A^(k)), the derived algebra; L(𝔤,s) ≅ 𝔤'(A^(k))/(centre).

References

I.G. Macdonald
Isaac Newton Institute for the Mathematical Sciences
20 Clarkson Road
Cambridge CB3 OEH U.K.

Version: October 30, 2001
