Complex Reflection Groups

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

Last update: 10 January 2014

Section I

Let $V$ be a complex vector space and let $H$ be a Hermitian form on $V$ (i.e., $H(x,y)=\overline{H(y,x)}$). Suppose $a\in V$ with $H(a,a)\ne 0$ and $\lambda\in\mathbb{C}\setminus\{0\}$. Define $R_{a,\lambda}\colon V\to V$ by
$$R_{a,\lambda}(v)=v+(\lambda-1)\frac{H(v,a)}{H(a,a)}\,a\qquad(v\in V).$$

Proposition 1.

(a) $R_{a,\lambda}\in GL(V)$.
(b) $R_{a,\lambda}\cdot R_{a,\mu}=R_{a,\lambda\mu}$.
(c) $H\bigl(R_{a,\lambda}(v),R_{a,\lambda}(w)\bigr)=H(v,w)$ for all $v,w\in V$ if and only if $\lambda\overline{\lambda}=1$.
(d) If one of $\lambda$ or $\mu$ is not $1$, then $R_{a,\lambda}=R_{b,\mu}$ if and only if $\langle a\rangle=\langle b\rangle$ and $\lambda=\mu$.
(e) If $T\in GL(V)$ and $H(Tv,Tw)=H(v,w)$ for all $v,w\in V$, then $TR_{a,\lambda}T^{-1}=R_{Ta,\lambda}$.

Proof.

Note that $R_{a,\lambda}(a)=\lambda a$ and if $v\perp a$ (i.e., if $H(v,a)=0$), then $R_{a,\lambda}(v)=v$. Now the linearity of $R_{a,\lambda}$ is obvious. Since $R_{a,1}=I$, (b) would imply that $R_{a,\lambda}$ is invertible and thus (a) follows from (b). To prove (b) we let $v\in V$ and compute
$$R_{a,\lambda}R_{a,\mu}(v)=R_{a,\lambda}\Bigl(v+(\mu-1)\frac{H(v,a)}{H(a,a)}a\Bigr)=v+(\lambda-1)\frac{H(v,a)}{H(a,a)}a+(\mu-1)\frac{H(v,a)}{H(a,a)}\lambda a=v+(\lambda\mu-1)\frac{H(v,a)}{H(a,a)}a=R_{a,\lambda\mu}(v).$$
For (c) we let $v,w\in V$ and compute
$$H\bigl(R_{a,\lambda}(v),R_{a,\lambda}(w)\bigr)=H\Bigl(v+(\lambda-1)\frac{H(v,a)}{H(a,a)}a,\;w+(\lambda-1)\frac{H(w,a)}{H(a,a)}a\Bigr)=H(v,w)+(\lambda\overline{\lambda}-1)\frac{H(a,w)H(v,a)}{H(a,a)},$$
which implies (c). In (d) we suppose without loss of generality that $\lambda\ne 1$. To prove the implication from left to right we evaluate both sides of $R_{a,\lambda}=R_{b,\mu}$ at $a$ to obtain
$$\lambda a=a+(\mu-1)\frac{H(a,b)}{H(b,b)}b\qquad\text{or}\qquad(\lambda-1)a=(\mu-1)\frac{H(a,b)}{H(b,b)}b.\tag{1}$$
Thus $\lambda-1\ne 0$ implies there is a $\gamma$ with $b=\gamma a$. Inserting this in (1) we obtain $\lambda-1=\mu-1$ and hence $\lambda=\mu$. The implication from right to left is trivial. For (e) we let $v\in V$ and compute
$$TR_{a,\lambda}T^{-1}(v)=T\Bigl(T^{-1}v+(\lambda-1)\frac{H(T^{-1}v,a)}{H(a,a)}a\Bigr)=v+(\lambda-1)\frac{H(T^{-1}v,a)}{H(a,a)}Ta=v+(\lambda-1)\frac{H(v,Ta)}{H(Ta,Ta)}Ta=R_{Ta,\lambda}(v).$$

Definition. If $\lambda$ is a primitive $p$th root of unity we call $R_{a,\lambda}$ a unitary reflection of order $p$. Writing $V=\langle a\rangle\oplus a^{\perp}$ we see that there is a basis in which the matrix for $R_{a,\lambda}$ is
$$\begin{pmatrix}\lambda&&&\\&1&&\\&&\ddots&\\&&&1\end{pmatrix}.$$
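For readers who want to experiment with these reflections, the following minimal numpy sketch (the helper names and random data are illustrative, not from the thesis) realizes $R_{a,\lambda}$ as a matrix and checks Proposition 1(b) and (c) numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# a random Hermitian form: A equals its conjugate transpose, H(v, w) = v @ A @ conj(w)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T

def H(v, w):
    """Hermitian form H(v, w) = sum_{i,j} A[i, j] v[i] conj(w[j])."""
    return v @ A @ w.conj()

def reflection(a, lam):
    """Matrix of R_{a,lam}(v) = v + (lam - 1) H(v, a) / H(a, a) * a in the standard basis."""
    a = np.asarray(a, dtype=complex)
    return np.eye(n, dtype=complex) + (lam - 1) / H(a, a) * np.outer(a, A @ a.conj())

a = rng.normal(size=n) + 1j * rng.normal(size=n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)
lam, mu = np.exp(2j * np.pi / 5), np.exp(2j * np.pi / 7)   # both of modulus 1

R_lam, R_mu = reflection(a, lam), reflection(a, mu)

assert np.allclose(R_lam @ a, lam * a)                     # R_{a,lam}(a) = lam * a
assert np.allclose(R_lam @ R_mu, reflection(a, lam * mu))  # Proposition 1(b)
assert np.isclose(H(R_lam @ v, R_lam @ w), H(v, w))        # Proposition 1(c), since |lam| = 1
print("Proposition 1(b) and (c) hold on this random example")
```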

Lemma 1: Let $\Gamma\in\mathcal{C}$. Suppose for some pair of distinct integers $1\le i,j\le\ell$ there is an edge joining the $i$th and $j$th vertices. Then
$$\cos^2(\pi/q_{ij})-\sin^2\Bigl(\frac{\pi}{2p_i}-\frac{\pi}{2p_j}\Bigr)\ge 0.$$

Proof.

Letting $a=\pi/q_{ij}$ and $b=\bigl|\frac{\pi}{2p_i}-\frac{\pi}{2p_j}\bigr|$ we see that $a,b\ge 0$ and $a+b\le\pi/2$. Hence $\cos(a)\ge\sin(b)$, yielding the lemma.

Let $\Gamma\in\mathcal{C}$ and let $V$ be an $\ell$-dimensional complex vector space with basis $\{v_1,\dots,v_\ell\}$. We can associate with $\Gamma$ a Hermitian form $H=H(\Gamma)$ on $V$ as follows. If $v=\sum\lambda_iv_i$ and $w=\sum\mu_iv_i$ are vectors in $V$, we define $H(v,w)=\sum_{i,j}\alpha_{ij}\lambda_i\overline{\mu_j}$, where $A=(\alpha_{ij})$ is the matrix with $\alpha_{ii}=\sin(\pi/p_i)$,
$$\alpha_{ij}=-\Bigl\{\cos^2(\pi/q_{ij})-\sin^2\Bigl(\frac{\pi}{2p_i}-\frac{\pi}{2p_j}\Bigr)\Bigr\}^{1/2}$$
if there is an edge joining the $i$th and $j$th vertices, and $\alpha_{ij}=0$ otherwise [Cox1967, pp. 129-130]. Notice that the quantity under the radical is non-negative by Lemma 1, so that $A$ is a real symmetric matrix.

We fix a notation as follows: For $\Gamma\in\mathcal{C}$, we let $W=W(\Gamma)$ denote the group given by the presentation corresponding to $\Gamma$ and let $r_1,\dots,r_\ell$ be the generators corresponding to the vertices of $\Gamma$. Let $\varepsilon_j=e^{2\pi i/p_j}$, $1\le j\le\ell$, and define $S_j=R_{v_j,\varepsilon_j}$, $1\le j\le\ell$, using the Hermitian form $H=H(\Gamma)$ and the vector space $V$ with basis $\{v_1,\dots,v_\ell\}$ as described above. Finally let $G=G(\Gamma)=\langle S_1,\dots,S_\ell\rangle\le GL(V)$.
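To make the construction concrete, here is a small numpy sketch (the function names and data layout are illustrative assumptions, not thesis notation) that builds the matrix $A=(\alpha_{ij})$ from the labels $p_i$ and $q_{ij}$, forms the generators $S_j=R_{v_j,\varepsilon_j}$, and checks that each $S_j$ has order $p_j$.

```python
import numpy as np

def gram_matrix(p, q):
    """The matrix A = (alpha_ij) of H(Gamma); p[i] labels vertex i and q[(i, j)]
    labels the edge joining vertices i and j (a missing key means no edge)."""
    l = len(p)
    A = np.zeros((l, l))
    for i in range(l):
        A[i, i] = np.sin(np.pi / p[i])
    for (i, j), qij in q.items():
        A[i, j] = A[j, i] = -np.sqrt(np.cos(np.pi / qij) ** 2
                                     - np.sin(np.pi / (2 * p[i]) - np.pi / (2 * p[j])) ** 2)
    return A

def generators(p, q):
    """The reflections S_j = R_{v_j, eps_j}, eps_j = exp(2 pi i / p_j), as matrices."""
    A, l = gram_matrix(p, q), len(p)
    S = []
    for j in range(l):
        Sj = np.eye(l, dtype=complex)
        Sj[j, :] += (np.exp(2j * np.pi / p[j]) - 1) * A[j, :] / A[j, j]  # only row j changes
        S.append(Sj)
    return S

# example: two vertices labelled p1 = 3 and p2 = 6 joined by an edge labelled q = 4
S1, S2 = generators([3, 6], {(0, 1): 4})
assert np.allclose(np.linalg.matrix_power(S1, 3), np.eye(2))   # S1 has order p1 = 3
assert np.allclose(np.linalg.matrix_power(S2, 6), np.eye(2))   # S2 has order p2 = 6
```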

Theorem 1: Let $\Gamma\in\mathcal{C}$, $W=W(\Gamma)$, and $G=G(\Gamma)$. The correspondence $r_i\mapsto S_i$ can be extended to a homomorphism $\theta$ of $W$ onto $G$.

Proof.

It suffices to show that for each $1\le i\le\ell$ we have $S_i^{p_i}=I$, and that for each pair of distinct integers $1\le i,j\le\ell$ we have $S_iS_jS_i\cdots=S_jS_iS_j\cdots$ with $q_{ij}$ factors on each side. The first of these conditions is obviously satisfied from the definition of $S_i$; in fact $S_i$ has order $p_i$. The second is vacuously satisfied if the graph $\Gamma$ has only one vertex. So, to begin, we assume the graph $\Gamma$ has two vertices. If the vertices are not joined by an edge then using $\{v_1,v_2\}$ as a basis the matrices for $S_1$ and $S_2$ are both diagonal and hence commute. So we can assume $\Gamma$ is the graph with two vertices labelled $p_1$ and $p_2$ joined by an edge labelled $q$ (where $q=q_{12}$). The verification of the theorem for this graph occurs in [Cox1962]. We will give a somewhat different argument here. We first compute the eigenvalues of $S_1S_2$.

The characteristic equation of $S_1S_2$ is $\lambda^2-\mathrm{tr}(S_1S_2)\lambda+\det(S_1S_2)=0$. Now in the basis $\{v_1,v_2\}$,
$$S_1=\begin{pmatrix}\varepsilon_1&\dfrac{(1-\varepsilon_1)c}{\sin(\pi/p_1)}\\[2pt]0&1\end{pmatrix},\qquad S_2=\begin{pmatrix}1&0\\[2pt]\dfrac{(1-\varepsilon_2)c}{\sin(\pi/p_2)}&\varepsilon_2\end{pmatrix},$$
and thus
$$S_1S_2=\begin{pmatrix}\varepsilon_1+\dfrac{c^2(1-\varepsilon_1)(1-\varepsilon_2)}{\sin(\pi/p_1)\sin(\pi/p_2)}&\dfrac{\varepsilon_2(1-\varepsilon_1)c}{\sin(\pi/p_1)}\\[2pt]\dfrac{(1-\varepsilon_2)c}{\sin(\pi/p_2)}&\varepsilon_2\end{pmatrix},$$
where $c=\bigl\{\cos^2(\pi/q)-\sin^2\bigl(\frac{\pi}{2p_1}-\frac{\pi}{2p_2}\bigr)\bigr\}^{1/2}$.

Letting $\theta_1=e^{\pi i/p_1}$, $\theta_2=e^{\pi i/p_2}$ and using $\sin(x)=\frac{i}{2}\bigl(e^{-ix}-e^{ix}\bigr)$ we see that $\mathrm{tr}(S_1S_2)=\theta_1\theta_2\bigl(\frac{\theta_1}{\theta_2}+\frac{\theta_2}{\theta_1}-4c^2\bigr)$. Then putting $\Phi=\frac{\pi}{p_1}-\frac{\pi}{p_2}$ we use $e^{i\Phi}+e^{-i\Phi}=2\cos\Phi=2-4\sin^2(\Phi/2)$ to obtain
$$\mathrm{tr}(S_1S_2)=\theta_1\theta_2\bigl(2-4\sin^2(\Phi/2)-4c^2\bigr)=\theta_1\theta_2\bigl(2-4\cos^2(\pi/q)\bigr)=-2\theta_1\theta_2\cos(2\pi/q).$$
Hence the characteristic equation of $S_1S_2$ is
$$0=\lambda^2+2\theta_1\theta_2\cos(2\pi/q)\lambda+\theta_1^2\theta_2^2=\lambda^2+\theta_1\theta_2\bigl(e^{2\pi i/q}+e^{-2\pi i/q}\bigr)\lambda+\theta_1^2\theta_2^2=\bigl(\lambda+\theta_1\theta_2e^{2\pi i/q}\bigr)\bigl(\lambda+\theta_1\theta_2e^{-2\pi i/q}\bigr),$$
yielding the roots
$$\lambda_1=e^{\pi i\left(\frac{1}{p_1}+\frac{1}{p_2}+(1-2/q)\right)}\qquad\text{and}\qquad\lambda_2=e^{\pi i\left(\frac{1}{p_1}+\frac{1}{p_2}-(1-2/q)\right)}.$$
Note that $q\ne 2$ forces $\lambda_1\ne\lambda_2$ and thus there is an invertible matrix $P$ such that $P^{-1}S_1S_2P=\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}$. If $q$ is even the relation we must check is $(S_1S_2)^{q/2}=(S_2S_1)^{q/2}$. But a glance at the expressions for $\lambda_1$ and $\lambda_2$ reveals that if $q$ is even, then $\lambda_1^{q/2}=\lambda_2^{q/2}$. Let $a$ denote this common value. Then
$$P^{-1}(S_1S_2)^{q/2}P=\bigl(P^{-1}S_1S_2P\bigr)^{q/2}=\begin{pmatrix}\lambda_1^{q/2}&0\\0&\lambda_2^{q/2}\end{pmatrix}=aI_2.$$
Hence $(S_1S_2)^{q/2}=aI_2$ and thus $(S_2S_1)^{q/2}=S_1^{-1}(S_1S_2)^{q/2}S_1=aI_2$ also.
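As a numerical illustration of this two-vertex computation (a sketch under the same conventions; the labels $p_1=2$, $p_2=4$, $q=6$ are chosen only for concreteness), one can check the trace and determinant formulas for $S_1S_2$ and the relation $(S_1S_2)^{q/2}=(S_2S_1)^{q/2}$ for even $q$.

```python
import numpy as np

p1, p2, q = 2, 4, 6                 # an even-q example
c = np.sqrt(np.cos(np.pi / q) ** 2 - np.sin(np.pi / (2 * p1) - np.pi / (2 * p2)) ** 2)
e1, e2 = np.exp(2j * np.pi / p1), np.exp(2j * np.pi / p2)
S1 = np.array([[e1, (1 - e1) * c / np.sin(np.pi / p1)], [0, 1]])
S2 = np.array([[1, 0], [(1 - e2) * c / np.sin(np.pi / p2), e2]])

# trace and determinant of S1 S2, as computed above
theta1, theta2 = np.exp(1j * np.pi / p1), np.exp(1j * np.pi / p2)
assert np.isclose(np.trace(S1 @ S2), -2 * theta1 * theta2 * np.cos(2 * np.pi / q))
assert np.isclose(np.linalg.det(S1 @ S2), theta1 ** 2 * theta2 ** 2)

# for even q the relation to check is (S1 S2)^(q/2) = (S2 S1)^(q/2), a scalar matrix
m = q // 2
assert np.allclose(np.linalg.matrix_power(S1 @ S2, m), np.linalg.matrix_power(S2 @ S1, m))
```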

Now we assume $q$ is odd. So here we have $p_1=p_2=p$, say. Thus $\varepsilon_1=\varepsilon_2=e^{2\pi i/p}$; let $\varepsilon=e^{2\pi i/p}$. Define $T_1=P^{-1}S_1P$, $T_2=P^{-1}S_2P$, and let $D=P^{-1}S_1S_2P$. Finally, put $T_1=\begin{pmatrix}x&u\\v&y\end{pmatrix}$.

Then
$$T_2=P^{-1}S_2P=P^{-1}S_1^{-1}P\cdot P^{-1}S_1S_2P=T_1^{-1}D=\begin{pmatrix}y/\varepsilon&-u/\varepsilon\\-v/\varepsilon&x/\varepsilon\end{pmatrix}\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}=\begin{pmatrix}\dfrac{y\lambda_1}{\varepsilon}&-\dfrac{u\lambda_2}{\varepsilon}\\-\dfrac{v\lambda_1}{\varepsilon}&\dfrac{x\lambda_2}{\varepsilon}\end{pmatrix}=\begin{pmatrix}y\gamma&-u\overline{\gamma}\\-v\gamma&x\overline{\gamma}\end{pmatrix},$$
where $\gamma=\lambda_1/\varepsilon=e^{\pi i(1-2/q)}$ (so that $\lambda_2/\varepsilon=\overline{\gamma}$).

Now comparing the traces of $S_1$ and $T_1$, and those of $S_2$ and $T_2$, we obtain the equations $1+\varepsilon=x+y$ and $1+\varepsilon=\gamma y+\overline{\gamma}x$. Thus $x+y=\gamma y+\overline{\gamma}x$. Using $\gamma\overline{\gamma}=1$ we obtain $\gamma y-x=\gamma(\gamma y-x)$. Since $q\ne 2$ we have $\gamma\ne 1$ and thus $\gamma y=x$. Since $q$ is odd the relation we must check is $(S_1S_2)^{\frac{q-1}{2}}S_1=S_2(S_1S_2)^{\frac{q-1}{2}}$. Clearly it suffices to show $P^{-1}(S_1S_2)^{\frac{q-1}{2}}S_1P=P^{-1}S_2(S_1S_2)^{\frac{q-1}{2}}P$. Now
$$P^{-1}(S_1S_2)^{\frac{q-1}{2}}S_1P=\bigl(P^{-1}S_1S_2P\bigr)^{\frac{q-1}{2}}P^{-1}S_1P=D^{\frac{q-1}{2}}T_1=\begin{pmatrix}\lambda_1^{\frac{q-1}{2}}x&\lambda_1^{\frac{q-1}{2}}u\\\lambda_2^{\frac{q-1}{2}}v&\lambda_2^{\frac{q-1}{2}}y\end{pmatrix}$$
and
$$P^{-1}S_2(S_1S_2)^{\frac{q-1}{2}}P=P^{-1}S_2P\bigl(P^{-1}S_1S_2P\bigr)^{\frac{q-1}{2}}=T_2D^{\frac{q-1}{2}}=\begin{pmatrix}\lambda_1^{\frac{q-1}{2}}\gamma y&-\lambda_2^{\frac{q-1}{2}}\overline{\gamma}u\\-\lambda_1^{\frac{q-1}{2}}\gamma v&\lambda_2^{\frac{q-1}{2}}\overline{\gamma}x\end{pmatrix}.$$
Thus we will have the desired equality if and only if $\gamma y=x$ and $\lambda_1^{\frac{q-1}{2}}=-\lambda_2^{\frac{q-1}{2}}\overline{\gamma}$. The first of these conditions was obtained above. For the second we compute
$$\bigl(\lambda_1\lambda_2^{-1}\bigr)^{\frac{q-1}{2}}=\bigl(e^{-4\pi i/q}\bigr)^{\frac{q-1}{2}}=e^{2\pi i/q}=-\overline{\gamma}.$$
We now have the theorem for graphs with one or two vertices.
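The odd-$q$ case can be checked the same way; the sketch below (illustrative only, with $p=3$, $q=5$ chosen as an example) verifies the relation $S_1S_2S_1\cdots=S_2S_1S_2\cdots$ with $q$ factors.

```python
import numpy as np

p, q = 3, 5                         # p1 = p2 = p with q odd
c = np.cos(np.pi / q)               # the sin^2 term vanishes because p1 = p2
e = np.exp(2j * np.pi / p)
S1 = np.array([[e, (1 - e) * c / np.sin(np.pi / p)], [0, 1]])
S2 = np.array([[1, 0], [(1 - e) * c / np.sin(np.pi / p), e]])

# braid relation with q factors on each side: S1 S2 S1 ... = S2 S1 S2 ...
m = (q - 1) // 2
lhs = np.linalg.matrix_power(S1 @ S2, m) @ S1
rhs = S2 @ np.linalg.matrix_power(S1 @ S2, m)
assert np.allclose(lhs, rhs)
```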

Now consider a graph $\Gamma$ with $\ell\ge 3$ vertices. Let $1\le i,j\le\ell$ be distinct. Put $P=\langle v_i,v_j\rangle$ and $Q=P^{\perp}$ (where "$\perp$" is taken with respect to $H=H(\Gamma)$).

Now if $H$ is non-degenerate on $P$ we will have $V=P\oplus Q$. But by the argument given for the case where the graph had two vertices we see that $S_i$ and $S_j$ satisfy the required relation when restricted to $P$. Now $S_i$ and $S_j$ obviously satisfy the condition on $Q$ as they are both the identity transformation on $Q$. Hence $S_iS_jS_i\cdots=S_jS_iS_j\cdots$ on the whole space $V$.

So we are led to assume that $H$ is degenerate on $P$. Now $H$ is not identically zero on $P$ as $H(v_i,v_i)=\sin(\pi/p_i)\ne 0$. In fact we must further have $H(v_i,v_j)\ne 0$, for otherwise $H$ would be non-degenerate on $P$. Thus we see that $\dim(P\cap Q)=1$, and since $\dim(P)+\dim(Q)\ge\ell$ we have $\dim(Q)\ge\ell-2$. If $\dim(Q)=\ell-1$ we have $V=P+Q$ and there is an $(\ell-2)$-dimensional subspace $Q'$ of $Q$ such that $V=P\oplus Q'$. Since $S_i$ and $S_j$ are the identity on $Q'$ we can argue as in the non-degenerate case to obtain the desired result. So we assume $Q$ has dimension $\ell-2$. Hence $V\ne P+Q$; therefore there is some basis vector $v_k$ not in $P+Q$ and we have $V=P+\langle v_k\rangle+Q$. Hence there is an $(\ell-3)$-dimensional subspace $Q'$ of $Q$ such that $V=P\oplus\langle v_k\rangle\oplus Q'$. Now $S_i$ and $S_j$ are the identity transformation on $Q'$ so it suffices to check $S_iS_jS_i\cdots=S_jS_iS_j\cdots$ on the subspace $P\oplus\langle v_k\rangle=\langle v_i,v_j,v_k\rangle$.

Recall we are assuming that $H$ is degenerate on $\langle v_i,v_j\rangle$. Thus
$$\det\begin{pmatrix}\alpha_{ii}&\alpha_{ij}\\\alpha_{ij}&\alpha_{jj}\end{pmatrix}=0.$$
When expanded the equation is
$$\sin(\pi/p_i)\sin(\pi/p_j)-\cos^2(\pi/q_{ij})+\sin^2\Bigl(\frac{\pi}{2p_i}-\frac{\pi}{2p_j}\Bigr)=0.$$
Using $\sin^2(a+b)-\sin^2(a-b)=\sin(2a)\sin(2b)$ together with some half-angle formulas one obtains the equivalent condition
$$\cos\Bigl(\frac{\pi}{p_i}+\frac{\pi}{p_j}\Bigr)=\cos\Bigl(\pi-\frac{2\pi}{q_{ij}}\Bigr).$$
Since the arguments on each side are in the interval $[0,\pi]$, over which $\cos$ is one-to-one, we have that $H$ is degenerate on $\langle v_i,v_j\rangle$ if and only if
$$\frac{1}{p_i}+\frac{1}{p_j}+\frac{2}{q_{ij}}=1$$
[Cox1974, p. 110]. The solutions to this equation subject to the restrictions $p_i,p_j\ge 2$, $q_{ij}\ge 3$, and $p_i=p_j$ if $q_{ij}$ is odd are given in the following table (with $p_i\le p_j$):

Table 1

  $q_{ij}$ | 3 | 4 | 4 | 6 | 6 | 8 | 12
  $p_i$    | 6 | 3 | 4 | 2 | 3 | 2 | 2
  $p_j$    | 6 | 6 | 4 | 6 | 3 | 4 | 3

Now recall that $v_k$ is not in $P+Q$. So in particular $v_k$ is not orthogonal to both $v_i$ and $v_j$. Hence the portion of the graph $\Gamma$ involving the $i$th, $j$th, and $k$th vertices consists of the vertices labelled $p_i$, $p_j$, $p_k$, with an edge labelled $q_{ij}$ joining the $i$th and $j$th vertices and possible edges labelled $q_{ik}$ and $q_{jk}$ joining the $k$th vertex to the $i$th and $j$th vertices; these last two edges may or may not occur, but at least one of them actually occurs. Further the numbers $q_{ij}$, $p_i$, $p_j$ occur in one of the columns of Table 1.
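The columns of Table 1 can be recovered by a short exhaustive search (an illustrative Python sketch; the search bounds suffice because the equation forces $p_i,p_j\le 6$ and $q_{ij}\le 12$).

```python
from fractions import Fraction

# Solutions of 1/p_i + 1/p_j + 2/q_ij = 1 with p_i <= p_j, subject to
# p_i, p_j >= 2, q_ij >= 3, and p_i = p_j when q_ij is odd.
solutions = []
for q in range(3, 13):
    for pi in range(2, 7):
        for pj in range(pi, 7):
            if q % 2 == 1 and pi != pj:
                continue
            if Fraction(1, pi) + Fraction(1, pj) + Fraction(2, q) == 1:
                solutions.append((q, pi, pj))
print(solutions)
# [(3, 6, 6), (4, 3, 6), (4, 4, 4), (6, 2, 6), (6, 3, 3), (8, 2, 4), (12, 2, 3)]
```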

For example, one possibility is the graph with vertices labelled $3$, $6$, and $p$, in which the vertices labelled $3$ and $6$ are joined by an edge labelled $4$ and each is joined to the vertex labelled $p$ by an edge labelled $q$ (that is, $p_i=3$, $p_j=6$, $q_{ij}=4$, $p_k=p$, $q_{ik}=q_{jk}=q$).

Let $U=\langle v_i,v_j,v_k\rangle$. Using the given basis $\{v_i,v_j,v_k\}$ for $U$ the matrix for $S_i$ restricted to $U$ is
$$\begin{pmatrix}\omega&\dfrac{1-\omega}{\alpha}&x\\0&1&0\\0&0&1\end{pmatrix}$$
while the matrix for $S_j$ restricted to $U$ is
$$\begin{pmatrix}1&0&0\\-\omega\alpha&-\omega^2&y\\0&0&1\end{pmatrix}.$$
Here $\omega=e^{2\pi i/3}$, $\alpha=3^{1/4}$, $x=(\omega-1)\dfrac{H(v_k,v_i)}{H(v_i,v_i)}$, and $y=(-\omega^2-1)\dfrac{H(v_k,v_j)}{H(v_j,v_j)}$. So
$$S_iS_j|_U=\begin{pmatrix}\omega^2&\dfrac{1-\omega^2}{\alpha}&x+\dfrac{1-\omega}{\alpha}y\\-\omega\alpha&-\omega^2&y\\0&0&1\end{pmatrix},\qquad\text{and thus}\qquad\bigl(S_iS_j|_U\bigr)^2=\begin{pmatrix}1&0&-\omega x+\dfrac{1-\omega}{\alpha}y\\0&1&-\omega\alpha x+(1-\omega)y\\0&0&1\end{pmatrix}.$$
Similarly,
$$S_jS_i|_U=\begin{pmatrix}\omega&\dfrac{1-\omega}{\alpha}&x\\-\omega^2\alpha&-\omega&-\omega\alpha x+y\\0&0&1\end{pmatrix},\qquad\text{and thus}\qquad\bigl(S_jS_i|_U\bigr)^2=\begin{pmatrix}1&0&-\omega x+\dfrac{1-\omega}{\alpha}y\\0&1&-\omega\alpha x+(1-\omega)y\\0&0&1\end{pmatrix}.$$
So we have verified $(S_iS_j)^2=(S_jS_i)^2$.
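The computation just displayed can be confirmed numerically with $x$ and $y$ chosen at random (a minimal numpy sketch that simply transcribes the two matrices above).

```python
import numpy as np

# the case p_i = 3, p_j = 6, q_ij = 4: the matrices S_i|_U and S_j|_U displayed above,
# with x and y left as free (here random complex) parameters
rng = np.random.default_rng(1)
w = np.exp(2j * np.pi / 3)          # omega
al = 3 ** 0.25                      # alpha = 3^(1/4)
x, y = rng.normal(size=2) + 1j * rng.normal(size=2)

Si = np.array([[w, (1 - w) / al, x],
               [0, 1,            0],
               [0, 0,            1]])
Sj = np.array([[1,        0,       0],
               [-w * al, -w ** 2,  y],
               [0,        0,       1]])

lhs = np.linalg.matrix_power(Si @ Sj, 2)
rhs = np.linalg.matrix_power(Sj @ Si, 2)
assert np.allclose(lhs, rhs)                      # (S_i S_j)^2 = (S_j S_i)^2
# the common value has the third column displayed above
assert np.allclose(lhs[:, 2], [-w * x + (1 - w) / al * y, -w * al * x + (1 - w) * y, 1])
```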

A similar computation can be done in the remaining six cases to verify that the desired condition, $S_iS_jS_i\cdots=S_jS_iS_j\cdots$ with $q_{ij}$ factors on each side, is always satisfied.

For the graph with $p_i=p_j=6$, $q_{ij}=3$, $p_k=p$, and $q_{ik}=q_{jk}=q$ one finds that
$$S_iS_jS_i=S_jS_iS_j=\begin{pmatrix}0&-1&\omega y+x\\-1&0&\omega x+y\\0&0&1\end{pmatrix}.$$

With $p_i=p_j=4$, $q_{ij}=4$, $p_k=p$, and $q_{ik}=q_{jk}=q$ one computes that
$$(S_iS_j)^2=(S_jS_i)^2=\begin{pmatrix}1&0&(1-i)(x+y)\\0&1&(1-i)(x+y)\\0&0&1\end{pmatrix}.$$

In the case $p_i=2$, $p_j=6$, $q_{ij}=6$, $p_k=p$, $q_{ik}=q_{jk}=q$ one has that
$$(S_iS_j)^3=(S_jS_i)^3=\begin{pmatrix}1&0&(1-\omega)\sqrt{2}\,y+(\omega^2-\omega)x\\0&1&(2-2\omega)y+\sqrt{2}(\omega^2-\omega)x\\0&0&1\end{pmatrix}.$$

For $p_i=p_j=3$, $q_{ij}=6$, $p_k=p$, $q_{ik}=q_{jk}=q$ one finds that
$$(S_iS_j)^3=(S_jS_i)^3=\begin{pmatrix}1&0&-3\omega(x+y)\\0&1&-3\omega(x+y)\\0&0&1\end{pmatrix}.$$

If we have $p_i=2$, $p_j=4$, $q_{ij}=8$, $p_k=p$, $q_{ik}=q_{jk}=q$ we calculate that
$$(S_iS_j)^4=(S_jS_i)^4=\begin{pmatrix}1&0&-4ix+2\beta^3(1-i)y\\0&1&-4\beta ix+4(1-i)y\\0&0&1\end{pmatrix},$$
where $\beta=2^{1/4}$.

Finally, for $p_i=2$, $p_j=3$, $q_{ij}=12$, $p_k=p$, $q_{ik}=q_{jk}=q$ one finds that
$$(S_iS_j)^6=(S_jS_i)^6=\begin{pmatrix}1&0&6(\omega^2-\omega)x-\dfrac{12\alpha}{\sqrt{2}}\,\omega y\\0&1&\dfrac{6\sqrt{2}(\omega^2-\omega)}{\alpha}x-12\omega y\\0&0&1\end{pmatrix},$$
where $\alpha=3^{1/4}$.
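All seven of these verifications can also be confirmed numerically at once: the sketch below (illustrative only; it follows the same conventions as the case displayed in full, with $H(v_k,v_i)$ and $H(v_k,v_j)$ chosen at random) builds $S_i|_U$ and $S_j|_U$ for each column of Table 1 and checks the braid relation with $q_{ij}$ factors.

```python
import numpy as np

def braid(X, Y, q):
    """Alternating product X Y X Y ... with q factors."""
    out = np.eye(len(X), dtype=complex)
    for k in range(q):
        out = out @ (X if k % 2 == 0 else Y)
    return out

rng = np.random.default_rng(2)
table1 = [(3, 6, 6), (4, 3, 6), (4, 4, 4), (6, 2, 6), (6, 3, 3), (8, 2, 4), (12, 2, 3)]

for q, pi, pj in table1:
    aii, ajj = np.sin(np.pi / pi), np.sin(np.pi / pj)
    aij = -np.sqrt(np.cos(np.pi / q) ** 2 - np.sin(np.pi / (2 * pi) - np.pi / (2 * pj)) ** 2)
    ei, ej = np.exp(2j * np.pi / pi), np.exp(2j * np.pi / pj)
    hki, hkj = rng.normal(size=2)          # H(v_k, v_i) and H(v_k, v_j), chosen at random
    Si = np.array([[ei, (ei - 1) * aij / aii, (ei - 1) * hki / aii],
                   [0, 1, 0],
                   [0, 0, 1]])
    Sj = np.array([[1, 0, 0],
                   [(ej - 1) * aij / ajj, ej, (ej - 1) * hkj / ajj],
                   [0, 0, 1]])
    assert np.allclose(braid(Si, Sj, q), braid(Sj, Si, q))
print("braid relation verified in all seven degenerate cases")
```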

These verifications complete the proof of the theorem.

Corollary 1. Let $\Gamma\in\mathcal{C}$. Then the order of $r_i$ in $W(\Gamma)$ is $p_i$.

Notes and references

This is a typed version of David W. Koster's thesis Complex Reflection Groups.

This thesis was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Mathematics) at the University of Wisconsin - Madison, 1975.
