Examples in representation theory

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

and

Department of Mathematics
University of Wisconsin, Madison
Madison, WI 53706 USA
ram@math.wisc.edu

Last updated: 8 March 2010

Basic theory

  1. Let $A$ be an algebra of $d \times d$ matrices and let $\bar{A}$ denote its centraliser, the algebra of all $d \times d$ matrices that commute with every element of $A$. Since all matrices in $A$ commute with all elements of $\bar{A}$, $A \subseteq \bar{\bar{A}}$. Also, $\overline{I_n \otimes A} = M_n(\bar{A})$ and $\overline{M_n(A)} = I_n \otimes \bar{A}$. Hence $\overline{\overline{I_n \otimes A}} = \overline{M_n(\bar{A})} = I_n \otimes \bar{\bar{A}}$.
  2. Schur's Lemma. Let $W_1$ and $W_2$ be irreducible representations of $A$, of dimensions $d_1$ and $d_2$. If $B$ is a $d_1 \times d_2$ matrix such that $W_1(a)B = BW_2(a)$ for all $a \in A$, then either
     (a) $W_1 \not\cong W_2$ and $B = 0$, or
     (b) $W_1 \cong W_2$, and if $W_1 = W_2$ then $B = cI_{d_1}$ for some constant $c$.

     Proof.
     $B$ determines a linear transformation $B\colon W_2 \to W_1$ (acting on column vectors). Since $W_1(a)B = BW_2(a)$ for all $a \in A$ we have that $B(aw_2) = a(Bw_2)$, for all $a \in A$ and $w_2 \in W_2$. Thus $B$ is an $A$-module homomorphism. $\ker B$ and $\operatorname{im} B$ are submodules of $W_2$ and $W_1$ respectively, and are therefore equal either to $0$ or to $W_2$ and $W_1$ respectively. If $\ker B = W_2$ or $\operatorname{im} B = 0$ then $B = 0$. In the remaining case $B$ is a bijection, and thus an isomorphism between $W_2$ and $W_1$; in this case $d_1 = d_2$, so the matrix $B$ is square and invertible. Now suppose that $W_1 = W_2$ and let $c$ be an eigenvalue of $B$. Then the matrix $cI_{d_1} - B$ satisfies $W_1(a)(cI_{d_1} - B) = (cI_{d_1} - B)W_1(a)$ for all $a \in A$. The preceding argument shows that $cI_{d_1} - B$ is either invertible or $0$. But since $c$ is an eigenvalue of $B$, $\det(cI_{d_1} - B) = 0$. Thus $cI_{d_1} - B = 0$, i.e. $B = cI_{d_1}$.
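Schur's lemma can be checked numerically. The following sketch (an illustration added here, not part of the original notes; it assumes numpy is available) computes the commutant of the 2-dimensional irreducible representation of $S_3$ and finds that it is 1-dimensional, i.e. consists of the scalar matrices only:

```python
import numpy as np

# The 2-dim irreducible representation of S3 is generated by a rotation
# through 120 degrees (a 3-cycle) and a reflection (a transposition).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])           # a 3-cycle
S = np.array([[1.0, 0.0], [0.0, -1.0]])   # a transposition

def commutator_rows(M):
    # Flattening B row-major, vec(MB - BM) = (M x I - I x M^T) vec(B).
    I = np.eye(2)
    return np.kron(M, I) - np.kron(I, M.T)

# B commutes with the whole representation iff it commutes with both
# generators; stack the two linear conditions and compute the nullity.
A = np.vstack([commutator_rows(R), commutator_rows(S)])
nullity = 4 - np.linalg.matrix_rank(A)
print(nullity)  # 1: only the scalar matrices commute, as Schur's lemma predicts
```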

  6. Suppose that $V$ is a completely decomposable representation of an algebra $A$ and that $V \cong \bigoplus_\lambda (W^\lambda)^{\oplus m_\lambda}$, where the $W^\lambda$ are nonisomorphic irreducible representations of $A$. Schur's lemma shows that the $A$-homomorphisms from $W^\lambda$ to $V$ form a vector space $\operatorname{Hom}_A(W^\lambda, V) \cong \mathbb{C}^{m_\lambda}$. The multiplicity of the irreducible representation $W^\lambda$ in $V$ is $m_\lambda = \dim \operatorname{Hom}_A(W^\lambda, V)$.
  7. Suppose that $V$ is a completely decomposable representation of an algebra $A$, that $V \cong \bigoplus_\lambda (W^\lambda)^{\oplus m_\lambda}$ where the $W^\lambda$ are nonisomorphic irreducible representations of $A$, and let $\dim W^\lambda = d_\lambda$. Then $V(A) \cong \bigoplus_\lambda I_{m_\lambda} \otimes W^\lambda(A)$. If we view elements of $\bigoplus_\lambda I_{m_\lambda} \otimes W^\lambda(A)$ as block diagonal matrices with $m_\lambda$ blocks of size $d_\lambda \times d_\lambda$ for each $\lambda$, then by using Ex 1 and Schur's lemma we get that $\overline{V(A)} \cong \overline{\bigoplus_\lambda I_{m_\lambda} \otimes W^\lambda(A)} = \bigoplus_\lambda M_{m_\lambda}(\overline{W^\lambda(A)}) = \bigoplus_\lambda M_{m_\lambda}(\mathbb{C}) \otimes I_{d_\lambda}$.
  8. Let $V$ be an $A$-module and let $p$ be an idempotent of $A$. Then $pV$ is a subspace of $V$ and the action of $p$ on $V$ is a projection from $V$ to $pV$. If $p_1, p_2 \in A$ are orthogonal idempotents of $A$ then $p_1V$ and $p_2V$ are mutually orthogonal subspaces of $V$, since if $p_1v = p_2v'$ for some $v, v' \in V$ then $p_1v = p_1p_1v = p_1p_2v' = 0$. So $(p_1 + p_2)V = p_1V \oplus p_2V$.
  9. Let $p$ be an idempotent in $A$ and suppose that for every $a \in A$, $pap = kp$ for some constant $k$ (depending on $a$). If $p$ is not minimal then $p = p_1 + p_2$, where $p_1, p_2 \in A$ are nonzero idempotents such that $p_1p_2 = p_2p_1 = 0$. Then $p_1 = pp_1p = kp$ for some constant $k$. This implies that $p_1 = p_1p_1 = (kp)(kp) = k^2p = kp_1$, giving that either $k = 1$, in which case $p_2 = p - p_1 = 0$, or $k = 0$, in which case $p_1 = 0$, a contradiction. So $p$ is minimal.
  10. Let $A$ be a finite dimensional algebra and suppose that $z \in A$ is an idempotent of $A$. If $z$ is not minimal then $z = p_1 + p_2$ where $p_1$ and $p_2$ are orthogonal idempotents of $A$. If any idempotent in this sum is not minimal we can decompose it further into a sum of orthogonal idempotents. We continue this process until we have decomposed $z$ as a sum of minimal orthogonal idempotents. At any particular stage in this process $z$ is expressed as a sum of orthogonal idempotents, $z = \sum_i p_i$. So $zA = \bigoplus_i p_iA$. None of the spaces $p_iA$ is $0$, since $p_i = p_i \cdot 1 \in p_iA$, and the spaces $p_iA$ are mutually orthogonal. Thus, since $zA$ is finite dimensional, it takes only a finite number of steps to decompose $z$ into minimal idempotents. A partition of unity is a decomposition of $1$ into minimal orthogonal idempotents.
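A minimal concrete partition of unity, shown here as an added illustration (not part of the original notes), assuming the regular representation of the group algebra $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}]$, where the nonidentity element acts by the swap matrix:

```python
import numpy as np

# C[Z/2] in its regular representation: 1 -> I, e -> swap matrix (e^2 = 1).
one = np.eye(2)
e = np.array([[0.0, 1.0], [1.0, 0.0]])

# The classical idempotents p± = (1 ± e)/2.
p_plus = (one + e) / 2
p_minus = (one - e) / 2

# p± are idempotent, mutually orthogonal, and sum to 1; each has rank 1,
# so each is minimal: 1 = p+ + p- is a partition of unity.
assert np.allclose(p_plus @ p_plus, p_plus)
assert np.allclose(p_minus @ p_minus, p_minus)
assert np.allclose(p_plus @ p_minus, 0 * one)
assert np.allclose(p_plus + p_minus, one)
print("1 = p+ + p- is a partition of unity")
```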

Finite dimensional algebras

  1. Let $\mathcal{A} = \{a_i\}$ and $\mathcal{B} = \{b_i\}$ be two bases of $A$ and let $\mathcal{A}^* = \{a_i^*\}$ and $\mathcal{B}^* = \{b_i^*\}$ be the associated dual bases with respect to a nondegenerate trace $t$ on $A$. Then $b_i = \sum_j s_{ij}a_j$ and $b_i^* = \sum_j t_{ij}a_j^*$ for some constants $s_{ij}$ and $t_{ij}$. Then
$$\delta_{ij} = t(b_ib_j^*) = t\Big(\sum_k s_{ik}a_k \sum_l t_{jl}a_l^*\Big) = \sum_{k,l} s_{ik}t_{jl}\,t(a_ka_l^*) = \sum_{k,l} s_{ik}t_{jl}\delta_{kl} = \sum_k s_{ik}t_{jk}.$$
In matrix notation this says that the matrices $S = (s_{ij})$ and $T = (t_{ij})$ satisfy $ST^t = I$. Then, in the setting of Proposition 2.6,
$$\sum_i V_1(b_i)\,C\,V_2(b_i^*) = \sum_i \Big(\sum_j s_{ij}V_1(a_j)\Big)C\Big(\sum_k t_{ik}V_2(a_k^*)\Big) = \sum_{j,k}\Big(\sum_i s_{ij}t_{ik}\Big)V_1(a_j)\,C\,V_2(a_k^*) = \sum_{j,k} \delta_{jk}V_1(a_j)\,C\,V_2(a_k^*) = \sum_j V_1(a_j)\,C\,V_2(a_j^*).$$
This shows that the matrix $C$ of Proposition 2.6 is independent of the choice of basis.
  2. Let $A$ be the algebra of elements of the form $c_1 + c_2e$, $c_1, c_2 \in \mathbb{C}$, where $e^2 = 0$. $A$ is commutative, and $t$ defined by $t(c_1 + c_2e) = c_1 + c_2$ is a nondegenerate trace on $A$. The regular representation $\vec{A}$ of $A$ is not completely decomposable: the subspace $eA = \mathbb{C}e$ is invariant, but no complementary subspace is invariant (a complement would be of the form $\mathbb{C}(1 + ce)$, and $e(1 + ce) = e$ does not lie in it). The trace of the regular representation is given explicitly by $\operatorname{tr}(1) = 2$ and $\operatorname{tr}(e) = 0$; $\operatorname{tr}$ is degenerate, since $\operatorname{tr}(ea) = 0$ for all $a \in A$. There is no matrix representation of $A$ whose trace is given by $t$: in any representation $\varphi$, the matrix $\varphi(e)$ is nilpotent and so has trace $0$, whereas $t(e) = 1$.
  3. Suppose $G$ is a finite group and that $A = \mathbb{C}G$ is its group algebra. The group elements $g \in G$ form a basis of $A$. So, using 2.7, the trace of the regular representation can be expressed in the form
$$\operatorname{tr}(a) = \sum_{g \in G} (ag)\big|_g = \sum_{g \in G} a\big|_1 = |G|\,a\big|_1,$$
where $1$ denotes the identity in $G$ and $a|_g$ denotes the coefficient of $g$ in $a$. Since $\operatorname{tr}(g^{-1}g) = |G| \neq 0$ for each $g \in G$, $\operatorname{tr}$ is nondegenerate. If we set $t(a) = a|_1$ then $t$ is a trace on $A$, and $\{g^{-1}\}_{g \in G}$ is the dual basis to the basis $\{g\}_{g \in G}$ with respect to the trace $t$.
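As an added numerical sanity check (not in the original notes; numpy assumed available), the trace formula $\operatorname{tr}(a) = |G|\,a|_1$ and the dual basis $\{g^{-k}\}$ can be verified for the cyclic group $\mathbb{Z}/3\mathbb{Z}$ in its regular representation:

```python
import numpy as np

# Regular representation of Z/3: the generator g acts by the cyclic shift.
n = 3
shift = np.roll(np.eye(n), 1, axis=0)                        # g
reg = [np.linalg.matrix_power(shift, k) for k in range(n)]   # g^0, g^1, g^2

coeffs = [2.0, -1.0, 5.0]                 # a = 2*1 - g + 5*g^2
a = sum(c * m for c, m in zip(coeffs, reg))
# tr(a) = |G| * (coefficient of the identity in a)
assert np.isclose(np.trace(a), n * coeffs[0])

# The dual basis to {g^k} with respect to t(a) = a|_1 is {g^{-k}}:
t = lambda m: np.trace(m) / n             # t(a) = a|_1 on the regular rep
for i in range(n):
    for j in range(n):
        assert np.isclose(t(reg[i] @ reg[j]),
                          1.0 if (i + j) % n == 0 else 0.0)
print("tr(a) = |G| a|_1 and (g^k)* = g^{-k} hold for Z/3")
```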
  4. Let $t$ be the trace of a faithful realisation $\varphi$ of an algebra $A$ (i.e. for each $a \in A$, $t(a)$ is given by the standard trace of $\varphi(a)$, where $\varphi\colon A \to M_d(\mathbb{C})$ is an injective homomorphism). Let $\bar{A} = \{a \in A \mid t(ab) = 0 \text{ for all } b \in A\}$. $\bar{A}$ is an ideal of $A$. Let $a \in \bar{A}$. Then $t(a^{k-1}a) = t(a^k) = 0$ for all $k > 0$. If $\lambda_1, \dots, \lambda_d$ are the eigenvalues of $\varphi(a)$ then $t(a^k) = \lambda_1^k + \lambda_2^k + \cdots + \lambda_d^k = p_k(\lambda) = 0$ for all $k > 0$, where $p_k$ denotes the $k$-th power sum symmetric function [Mac]. Since the power sums generate the ring of symmetric functions, the elementary symmetric functions satisfy $e_k(\lambda) = 0$ for $k > 0$ ([Mac] p. 17, 2.14). Since the characteristic polynomial of $\varphi(a)$ can be written in the form
$$\operatorname{char}_{\varphi(a)}(t) = t^d - e_1(\lambda)t^{d-1} + e_2(\lambda)t^{d-2} - \cdots \pm e_d(\lambda),$$
we get that $\operatorname{char}_{\varphi(a)}(t) = t^d$. But then the Cayley-Hamilton theorem implies that $\varphi(a)^d = 0$. Since $\varphi$ is injective we have $a^d = 0$, so $a$ is nilpotent. Conversely, let $J$ be an ideal of nilpotent elements and suppose that $a \in J$. For every element $b \in A$, $ba \in J$ and $ba$ is nilpotent, so $\varphi(ba)$ is nilpotent. By noting that the diagonal of a nilpotent matrix in Jordan form contains only zeros, we see that $t(ba) = 0$. Thus $a \in \bar{A}$. So $\bar{A}$ can be characterised as the largest ideal of nilpotent elements of $A$. Furthermore, since the regular representation of $A$ is always faithful, $\bar{A}$ is equal to the set $\{a \in A \mid \operatorname{tr}(ab) = 0 \text{ for all } b \in A\}$, where $\operatorname{tr}$ is the trace of the regular representation of $A$.
  5. Let $\mathcal{A}$ be the basis and $t$ the trace of a faithful realisation of an algebra $A$ as in Ex 4, and let $G_{\mathcal{A}}$ be the Gram matrix with respect to the basis $\mathcal{A}$ and the trace $t$, as given by 2.2 and 2.3. If $\mathcal{B}$ is another basis of $A$ then $G_{\mathcal{B}} = P^tG_{\mathcal{A}}P$, where $P$ is the change of basis matrix from $\mathcal{A}$ to $\mathcal{B}$. So the rank of the Gram matrix is independent of the choice of the basis $\mathcal{A}$.

    Choose a basis $\{a_1, a_2, \dots, a_k\}$ of $\bar{A}$ ($\bar{A}$ as defined in Ex 4) and extend it to a basis $\{a_1, \dots, a_k, b_1, \dots, b_s\}$ of $A$. The Gram matrix with respect to this basis is of the form
$$\begin{pmatrix} 0 & 0 \\ 0 & G_B \end{pmatrix},$$
where $G_B$ denotes the Gram matrix on $b_1, b_2, \dots, b_s$. So the rank of the Gram matrix is at most $s$.

    Suppose that the rows of $G_B$ are linearly dependent. Then for some constants $c_1, c_2, \dots, c_s$, not all zero, $c_1t(b_1b_i) + c_2t(b_2b_i) + \cdots + c_st(b_sb_i) = 0$ for all $1 \le i \le s$. So $t\big(\big(\sum_j c_jb_j\big)b_i\big) = 0$ for all $i$. This implies that $\sum_j c_jb_j \in \bar{A}$, which contradicts the construction of the $b_j$. So the rows of $G_B$ are linearly independent.

    Thus the rank of the Gram matrix is $s$, or equivalently, the corank of the Gram matrix of $A$ is equal to the dimension of the radical $\bar{A}$. In particular, the trace $\operatorname{tr}$ of the regular representation of $A$ is nondegenerate iff $\bar{A} = 0$.
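The corank statement can be made concrete with the algebra of Ex 2 above. The following sketch (added for illustration; numpy assumed) computes the Gram matrix of the regular-representation trace for $A = \mathbb{C}[e]/(e^2)$ with basis $\{1, e\}$ and checks that its corank equals $\dim \bar{A} = \dim \mathbb{C}e = 1$:

```python
import numpy as np

# Regular representation of A = C[e]/(e^2) in the basis {1, e}:
one = np.eye(2)
e = np.array([[0.0, 0.0], [1.0, 0.0]])   # multiplication by e: 1 -> e, e -> 0

basis = [one, e]
G = np.array([[np.trace(x @ y) for y in basis] for x in basis])
# G = [[2, 0], [0, 0]]: rank 1, corank 1 = dim of the radical C*e,
# so the regular-representation trace is degenerate, as discussed above.
assert np.linalg.matrix_rank(G) == 1
print(G)
```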

  6. Let $W$ be an irreducible representation of an arbitrary algebra $A$ and let $d = \dim W$. Denote $W(A)$ by $A_W$. Note that $W$ is also an irreducible representation of the algebra $A_W$ (with $W(a) = a$ for all $a \in A_W$).

    We show that $\operatorname{tr}$ is nondegenerate on $A_W$, i.e. that if $a \in A_W$, $a \neq 0$, then there exists $b \in A_W$ such that $\operatorname{tr}(ba) \neq 0$. Since $a$ is a nonzero matrix there exists some $w \in W$ such that $aw \neq 0$. Now $A_Waw \subseteq W$ is an $A$-invariant subspace of $W$, and it is not $0$ since $aw \neq 0$. Thus $A_Waw = W$. So there exists some $b \in A_W$ such that $baw = w$. Then $(ba)^kw = w$ for all $k$, so $ba$ is not nilpotent, and hence $a$ is not in the radical of $A_W$ (Ex 4); so there exists $b \in A_W$ with $\operatorname{tr}(ba) \neq 0$. So $\operatorname{tr}$ is nondegenerate on $A_W$. This means that $A_W \cong \bigoplus_\lambda M_{d_\lambda}(\mathbb{C})$ for some $d_\lambda$. But since by Schur's lemma $\bar{A}_W = \mathbb{C}I_d$, where $d = \dim W$, we see that $W(A) = A_W = M_d(\mathbb{C})$.

  7. Let $A$ be a finite dimensional algebra and let $\vec{A}$ denote the regular representation of $A$. The set $\vec{A}$ is the same as the set $A$, but we distinguish elements of $\vec{A}$ by writing $\vec{a} \in \vec{A}$.

    A linear transformation $B$ of $\vec{A}$ is in the centraliser of the regular representation if for every element $a \in A$ and $\vec{x} \in \vec{A}$, $B(a\vec{x}) = aB(\vec{x})$. Let $B(\vec{1}) = \vec{b}$. Then $B(\vec{a}) = B(a\vec{1}) = aB(\vec{1}) = a\vec{b} = \overrightarrow{ab}$. So $B$ acts on $\vec{a} \in \vec{A}$ by right multiplication by $b$. Conversely, it is easy to see that the action of right multiplication commutes with the action of left multiplication, since $a(\vec{x}b) = (a\vec{x})b$ for all $a, b \in A$ and $\vec{x} \in \vec{A}$. So the centraliser algebra of the regular representation is the algebra of matrices determined by the action of right multiplication by elements of $A$.
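As a small added check (not in the original notes; numpy assumed), one can build the left- and right-multiplication operators on the regular representation of $S_3$ and verify that they commute:

```python
import itertools
import numpy as np

# Elements of S3 as permutation tuples; (g h)(i) = g(h(i)).
G = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
comp = lambda g, h: tuple(g[h[i]] for i in range(3))

def L(g):   # matrix of left multiplication by g on the regular rep
    M = np.zeros((6, 6))
    for h in G:
        M[idx[comp(g, h)], idx[h]] = 1
    return M

def R(g):   # matrix of right multiplication by g
    M = np.zeros((6, 6))
    for h in G:
        M[idx[comp(h, g)], idx[h]] = 1
    return M

# Every right multiplication commutes with every left multiplication.
assert all(np.allclose(L(g) @ R(h), R(h) @ L(g)) for g in G for h in G)
print("left and right regular representations of S3 commute")
```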

Matrix units and characters

  1. If A is commutative and semisimple then all irreducible representations of A are one dimensional. This is not necessarily true for algebras over fields which are not algebraically closed (since Schur's lemma takes a different form).
  2. If $R$ is a ring with identity and $M_n(R)$ denotes the ring of $n \times n$ matrices with entries in $R$, then the ideals of $M_n(R)$ are exactly those of the form $M_n(I)$, where $I$ is an ideal of $R$.
  3. If $V$ is a finite dimensional vector space over $\mathbb{C}$ and $V^*$ is the space of linear functionals on $V$, then $\dim V^* = \dim V$. If $B$ is a basis of $V$ then the functions $\delta_b$, $b \in B$, determined by
$$\delta_b(b_i) = \begin{cases} 1, & \text{if } b = b_i, \\ 0, & \text{otherwise}, \end{cases} \qquad \text{for } b_i \in B,$$
form a basis of $V^*$. If $A$ is a semisimple algebra isomorphic to $\bigoplus_{\lambda \in \hat{A}} M_{d_\lambda}(\mathbb{C})$, $\hat{A}$ an index set for the irreducible representations $W^\lambda$ of $A$, then $\dim A = \sum_{\lambda \in \hat{A}} d_\lambda^2$, and the functions $W_{ij}^\lambda$ (where $W_{ij}^\lambda(a)$ is the $ij$-th entry of the matrix $W^\lambda(a)$, $a \in A$) form a basis of $A^*$. The $W_{ij}^\lambda$ are simply the functions $\delta_{e_{ij}^\lambda}$ for an appropriate set of matrix units $e_{ij}^\lambda$ of $A$. This shows that the coordinate functions of the irreducible representations are linearly independent. Since $\chi^\lambda = \sum_i W_{ii}^\lambda$, the irreducible characters are also linearly independent.
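The count $\dim A = \sum_\lambda d_\lambda^2$ can be illustrated numerically (an added sketch, not in the original notes; numpy assumed, and a standard but arbitrarily chosen realisation of the 2-dim irrep of $S_3$ on the sum-zero subspace of $\mathbb{C}^3$): the $1 + 1 + 4 = 6$ coordinate functions of the irreducibles of $S_3$ are linearly independent functions on the group.

```python
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))

def sign(g):                      # sign character via inversion count
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if g[i] > g[j]:
                s = -s
    return s

def two_dim(g):
    # Matrix of g on the invariant plane x0+x1+x2 = 0, in the basis
    # {e0 - e1, e1 - e2} (one standard choice of realisation).
    P = np.zeros((3, 3))
    for i in range(3):
        P[g[i], i] = 1            # permutation matrix: e_i -> e_{g(i)}
    B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
    return np.linalg.lstsq(B, P @ B, rcond=None)[0]   # solve B W = P B

rows = []
for g in G:
    W = two_dim(g)
    rows.append([1.0, sign(g), W[0, 0], W[0, 1], W[1, 0], W[1, 1]])
M = np.array(rows).T   # row = one coordinate function evaluated on G
assert np.linalg.matrix_rank(M) == 6
print("the 6 coordinate functions are linearly independent on S3")
```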
  4. Let $A$ be a semisimple algebra. Virtual characters are elements of the vector space $R(A)$ consisting of the $\mathbb{C}$-linear span of the irreducible characters of $A$. We know that there is a one-to-one correspondence between the minimal central idempotents $z_\lambda$ of $A$ and the irreducible characters $\chi^\lambda$ of $A$. Since the minimal central idempotents of $A$ form a basis of the center $Z(A)$ of $A$, we can define a vector space isomorphism $\varphi\colon Z(A) \to R(A)$ by setting $\varphi(z_\lambda) = \chi^\lambda$ for each $\lambda \in \hat{A}$ and extending linearly to all of $Z(A)$.

    Given a nondegenerate trace $t$ on $A$ with trace vector $(t_\lambda)$ it is more natural to define $\varphi$ by setting $\varphi(z_\lambda/t_\lambda) = \chi^\lambda$. Then, for $z \in Z(A)$, $\varphi(z)(a) = t(za)$, since
$$t\Big(\frac{z_\mu}{t_\mu}a\Big) = \frac{1}{t_\mu}\sum_\lambda t_\lambda\chi^\lambda(z_\mu a) = \frac{1}{t_\mu}\,t_\mu\chi^\mu(a) = \chi^\mu(a).$$

  5. If $A$ is a semisimple algebra isomorphic to $\bigoplus_{\lambda \in \hat{A}} M_{d_\lambda}(\mathbb{C})$, $\hat{A}$ an index set for the irreducible representations $W^\lambda$ of $A$, then the regular representation decomposes as $\vec{A} \cong \bigoplus_{\lambda \in \hat{A}} (W^\lambda)^{\oplus d_\lambda}$. If matrix units $e_{ij}^\lambda$ are given by (3.7) then $\operatorname{tr}(e_{ii}^\lambda) = d_\lambda$. So the trace of the regular representation of $A$, $\operatorname{tr}$, is given by the trace vector $(t_\lambda)$ with $t_\lambda = d_\lambda$ for each $\lambda \in \hat{A}$.
  6. Let $A$ be a semisimple algebra and let $B^* = \{g^*\}$ be the dual basis to a basis $B = \{g\}$ of $A$ with respect to the trace of the regular representation of $A$. We can define an inner product on the space $R(A)$ of virtual characters (Ex 4) of $A$ by $\langle \chi, \chi' \rangle = \sum_{g \in B} \chi(g)\chi'(g^*)$. The irreducible characters of $A$ are orthonormal with respect to this inner product. Note that if $\chi, \chi'$ are characters of representations $V$ and $V'$ respectively, then, by Ex 4 and Theorem 3.9, $\langle \chi, \chi' \rangle = \dim \operatorname{Hom}_A(V, V')$. If $\chi^\lambda$ is the character of the irreducible representation $W^\lambda$ of $A$ then $\langle \chi^\lambda, \chi' \rangle$ gives the multiplicity of $W^\lambda$ in the representation $V'$, as in Section 1, Ex 6.
  7. Let $A$ be a semisimple algebra and let $t$ be a nondegenerate trace on $A$ with trace vector $(t_\lambda)$. Let $B$ be a basis of $A$ and for each $g \in B$ let $g^*$ denote the element of the dual basis to $B$ with respect to the trace $t$ such that $t(gg^*) = 1$. For each $a \in A$ define $[a] = \sum_{g \in B} gag^*$. By Section 2, Ex 1, the element $[a]$ is independent of the choice of the basis $B$. By using a set of matrix units $e_{ij}^\lambda$ of $A$ we get
$$[a] = \sum_{i,j,\lambda} \frac{1}{t_\lambda}e_{ij}^\lambda a e_{ji}^\lambda = \sum_{i,j,\lambda} \frac{1}{t_\lambda}a_{jj}^\lambda e_{ii}^\lambda = \sum_\lambda \frac{1}{t_\lambda}\Big(\sum_j a_{jj}^\lambda\Big)\Big(\sum_i e_{ii}^\lambda\Big) = \sum_\lambda \frac{1}{t_\lambda}\chi^\lambda(a)z_\lambda.$$
So $\chi^\lambda([a]) = \frac{d_\lambda}{t_\lambda}\chi^\lambda(a)$. By 3.9,
$$\sum_{g \in B} \frac{t_\mu^2}{d_\mu}\chi^\mu(g^*)[g] = \sum_\lambda \frac{t_\mu^2}{d_\mu t_\lambda}\Big(\sum_{g \in B} \chi^\lambda(g)\chi^\mu(g^*)\Big)z_\lambda = \sum_\lambda \delta_{\lambda\mu}z_\lambda = z_\mu.$$
Thus the $[g]$, $g \in B$, span the center of $A$.
  8. Let $G$ be a finite group and let $A = \mathbb{C}G$. Let $t$ be the trace on $A$ given by $t(a) = a|_1$, where $1$ is the identity in $G$. By Ex 5 and Section 2, Ex 3, the trace vector of $t$ is given by $t_\lambda = d_\lambda/|G|$, where $d_\lambda$ is the dimension of the irreducible representation of $G$ corresponding to $\lambda$.

    If $h \in G$, then the element $[h] = \sum_{g \in G} ghg^* = \sum_{g \in G} ghg^{-1}$ is a multiple of the sum of the elements of $G$ that are conjugate to $h$. Let $\Lambda$ be an index set for the conjugacy classes of $G$ and for each $\lambda \in \Lambda$, let $C_\lambda$ denote the sum of the elements in the conjugacy class indexed by $\lambda$. The $C_\lambda$ are linearly independent elements of $\mathbb{C}G$. Furthermore, by Ex 7 they span the center of $\mathbb{C}G$. Thus $\Lambda$ must also be an index set for the irreducible representations of $G$. So we see that the irreducible representations of the group algebra of a finite group are indexed by the conjugacy classes.
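The class-sum computation can be carried out explicitly for $S_3$ (an added illustration, not in the original notes; numpy assumed): there are 3 conjugacy classes, and each class sum commutes with every group element in the regular representation.

```python
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
comp = lambda g, h: tuple(g[h[i]] for i in range(3))
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))   # g^{-1}

def reg(g):   # left regular representation of S3
    M = np.zeros((6, 6))
    for h in G:
        M[idx[comp(g, h)], idx[h]] = 1
    return M

# Conjugacy classes {h g h^{-1} : h in G} and their class sums C_lambda.
classes = set(frozenset(comp(comp(h, g), inv(h)) for h in G) for g in G)
class_sums = [sum(reg(g) for g in c) for c in classes]

assert len(class_sums) == 3   # 3 classes = 3 irreducible representations
assert all(np.allclose(C @ reg(g), reg(g) @ C)
           for C in class_sums for g in G)   # class sums are central
print("the 3 class sums of S3 are central")
```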

  9. Let $G$ be a finite group and let $C_\lambda$ denote the conjugacy classes of $G$. Note that since $\operatorname{tr}(V(hgh^{-1})) = \operatorname{tr}(V(h)V(g)V(h)^{-1}) = \operatorname{tr}(V(g))$ for any representation $V$ of $G$ and all $g, h \in G$, characters of $G$ are constant on conjugacy classes. Using Theorem 3.8,
$$|G|\delta_{\lambda\mu} = \sum_{g \in G} \chi^\lambda(g)\chi^\mu(g^{-1}) = \sum_\rho \sum_{g \in C_\rho} \chi^\lambda(g)\chi^\mu(g^{-1}) = \sum_\rho |C_\rho|\chi^\lambda(\rho)\chi^\mu(\rho'),$$
where $\rho'$ is such that $C_{\rho'}$ is the conjugacy class which contains the inverses of the elements in $C_\rho$, and $\chi^\lambda(\rho)$ denotes the value of $\chi^\lambda$ on the class $C_\rho$. Define matrices $\Xi = (\Xi_{\lambda\rho})$ and $\Xi' = (\Xi'_{\lambda\rho})$ by $\Xi_{\lambda\rho} = \chi^\lambda(\rho)$ and $\Xi'_{\lambda\rho} = |C_\rho|\chi^\lambda(\rho')$. By Ex 8 these matrices are square. In matrix notation the above says $\Xi(\Xi')^t = |G|I$; but then we also have $(\Xi')^t\Xi = |G|I$, or equivalently,
$$\sum_\lambda \chi^\lambda(\rho')\chi^\lambda(\tau) = \frac{|G|}{|C_\rho|}\delta_{\rho\tau}.$$
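Both orthogonality relations can be checked directly on the character table of $S_3$ (an added numerical illustration, not in the original notes; numpy assumed; note every class of $S_3$ is self-inverse, so $\rho' = \rho$ here):

```python
import numpy as np

# Character table of S3.  Rows: trivial, sign, 2-dim irrep.
# Columns: classes of e, transpositions, 3-cycles; sizes 1, 3, 2.
Xi = np.array([[1.0,  1.0,  1.0],
               [1.0, -1.0,  1.0],
               [2.0,  0.0, -1.0]])
sizes = np.array([1.0, 3.0, 2.0])
order = 6.0

# Row orthogonality: sum_rho |C_rho| chi^l(rho) chi^m(rho) = |G| delta_lm
assert np.allclose(Xi @ np.diag(sizes) @ Xi.T, order * np.eye(3))
# Column orthogonality: sum_l chi^l(rho) chi^l(tau) = (|G|/|C_rho|) delta
assert np.allclose(Xi.T @ Xi, np.diag(order / sizes))
print("row and column orthogonality hold for S3")
```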
  10. This example gives a generalisation of the preceding example. Let $A$ be a semisimple algebra and suppose that $B$ is a basis of $A$ and that there is a partition of $B$ into classes such that if $b$ and $b' \in B$ are in the same class then $\chi^\lambda(b) = \chi^\lambda(b')$ for every $\lambda \in \hat{A}$. The fact that the characters are linearly independent implies that the number of classes must be the same as the number of irreducible characters $\chi^\lambda$. Thus we can index the classes of $B$ by the elements of $\hat{A}$. Assume that we have fixed such a correspondence and denote the classes of $B$ by $C_\lambda$, $\lambda \in \hat{A}$.

    Let $t$ be a nondegenerate trace on $A$ and let $G$ be the Gram matrix with respect to the basis $B$ and the trace $t$. If $g \in B$, let $g^*$ denote the element of the dual basis to $B$, with respect to the trace $t$, such that $t(gg^*) = 1$. Let $G^{-1} = C = (c_{gg'})$ and recall that $g^* = \sum_{g' \in B} c_{gg'}g'$. Then
$$\frac{d_\lambda}{t_\lambda}\delta_{\lambda\mu} = \sum_{g \in B} \chi^\lambda(g)\chi^\mu(g^*) = \sum_{g \in B} \chi^\lambda(g)\chi^\mu\Big(\sum_{g' \in B} c_{gg'}g'\Big) = \sum_{g,g' \in B} \chi^\lambda(g)c_{gg'}\chi^\mu(g').$$
Collecting $g, g' \in B$ by class gives
$$\frac{d_\lambda}{t_\lambda}\delta_{\lambda\mu} = \sum_{\rho,\tau} \chi^\lambda(\rho)\Big(\sum_{g \in C_\rho,\, g' \in C_\tau} c_{gg'}\Big)\chi^\mu(\tau),$$
where $\chi^\lambda(\rho)$ denotes the value of the character $\chi^\lambda$ at elements of the class $C_\rho$. Now define a matrix $\bar{C} = (\bar{c}_{\rho\tau})$ with entries $\bar{c}_{\rho\tau} = \sum_{g \in C_\rho,\, g' \in C_\tau} c_{gg'}$, and let $\Xi = (\Xi_{\lambda\rho})$ and $\Xi' = (\Xi'_{\lambda\rho})$ be matrices given by $\Xi_{\lambda\rho} = \chi^\lambda(\rho)$ and $\Xi'_{\lambda\rho} = \frac{t_\lambda}{d_\lambda}\chi^\lambda(\rho)$. Note that all of these matrices are square. Then the above gives $\Xi\bar{C}(\Xi')^t = I$. So $\bar{C}(\Xi')^t\Xi = I$, or equivalently,
$$\delta_{\rho\tau} = \sum_{\sigma,\lambda} \bar{c}_{\rho\sigma}\frac{t_\lambda}{d_\lambda}\chi^\lambda(\sigma)\chi^\lambda(\tau) = \sum_{g \in C_\rho}\sum_\lambda \frac{t_\lambda}{d_\lambda}\chi^\lambda(g^*)\chi^\lambda(\tau).$$

Double centraliser nonsense

  1. Let $G$ be a group and let $V$ and $W$ be two representations of $G$. Define an action of $G$ on the vector space $V \otimes W$ by $g(v \otimes w) = gv \otimes gw$, for all $g \in G$, $v \in V$ and $w \in W$ (see also Section 5, Ex 4). In matrix form, the representation $V \otimes W$ is given by setting $(V \otimes_d W)(g) = V(g) \otimes W(g)$, for each $g \in G$. Note, however, that if we extend this action to an action of $A = \mathbb{C}G$ on $V \otimes W$, then for a general $a \in A$, $a(v \otimes w)$ is not equal to $av \otimes aw$, and $(V \otimes_d W)(a)$ is not equal to $V(a) \otimes W(a)$.
  2. Theorem 4.6 gives that there is a one-to-one correspondence between the minimal central idempotents $z_\lambda^C$ of $C$ and the characters $\chi_A^\lambda$ of the irreducible representations of $A$ appearing in the decomposition of $V$. Let $\chi_C^\lambda$ be the irreducible characters of $C$ and for each $\lambda$ set $d_\lambda^C = \chi_C^\lambda(1)$, so that the $d_\lambda^C$ are the dimensions of the irreducible representations of $C$. The Frobenius map is the map
$$F\colon Z(C) \to R(A), \qquad \frac{1}{d_\lambda^C}z_\lambda^C \mapsto \chi_A^\lambda.$$
Let $t\colon C \otimes A \to \mathbb{C}$ be the trace of the action of $C \otimes A$ on the representation $V$. By taking traces on each side of the isomorphism in Theorem 4.11 we have
$$t(q \otimes a) = \sum_\lambda \chi_C^\lambda(q)\chi_A^\lambda(a).$$
Let $t_C$ be a nondegenerate trace on $C$ with trace vector $(t_\lambda^C)$, let $B$ be a basis of $C$, and for each $g \in B$ let $g^*$ be the element of the dual basis to $B$ with respect to the trace $t_C$ such that $t_C(gg^*) = 1$. Then, for any $z \in Z(C)$, the center of $C$,
$$F(z) = \sum_{g \in B} t_C(zg^*)\,t(g \otimes \cdot),$$
since, using 3.8 and 3.9,
$$F\Big(\frac{z_\mu^C}{d_\mu^C}\Big) = \sum_{g \in B} \frac{1}{d_\mu^C}t_C(z_\mu^Cg^*)\,t(g \otimes \cdot) = \sum_{g \in B} \frac{t_\mu^C}{d_\mu^C}\chi_C^\mu(g^*)\,t(g \otimes \cdot) = \sum_{g \in B} \frac{t_\mu^C}{d_\mu^C}\chi_C^\mu(g^*)\sum_\lambda \chi_C^\lambda(g)\chi_A^\lambda(\cdot) = \frac{t_\mu^C}{d_\mu^C}\sum_\lambda \delta_{\mu\lambda}\frac{d_\lambda^C}{t_\lambda^C}\chi_A^\lambda(\cdot) = \chi_A^\mu(\cdot).$$

    If we apply the inverse $F^{-1}$ of the Frobenius map to (4.13) we get
$$F^{-1}\big(t(q \otimes \cdot)\big) = \sum_\lambda \chi_C^\lambda(q)\frac{z_\lambda^C}{d_\lambda^C}.$$
Formula 3.13 shows that $F^{-1}(t(q \otimes \cdot)) = \sum_\lambda \frac{t_\lambda^C}{d_\lambda^C}z_\lambda^C\,[q]$. In the case that $t_C$ is the trace of the regular representation, $\sum_\lambda \frac{t_\lambda^C}{d_\lambda^C}z_\lambda^C = 1$ and $F^{-1}(t(q \otimes \cdot)) = [q]$.

Centralisers

  1. Let $A$, $B$ and $C$ be vector spaces. A map $f\colon A \times B \to C$ is bilinear if
$$f(a_1 + a_2, b) = f(a_1, b) + f(a_2, b), \qquad f(a, b_1 + b_2) = f(a, b_1) + f(a, b_2), \qquad f(\alpha a, b) = f(a, \alpha b) = \alpha f(a, b),$$
for all $a, a_1, a_2 \in A$, $b, b_1, b_2 \in B$, $\alpha \in \mathbb{C}$.
  2. The tensor product is given by a vector space $A \otimes B$ and a bilinear map $i\colon A \times B \to A \otimes B$ such that for every bilinear map $f\colon A \times B \to C$ there exists a unique linear map $\bar{f}\colon A \otimes B \to C$ such that the following diagram commutes:

$$A \times B \xrightarrow{\;i\;} A \otimes B \xrightarrow{\;\bar{f}\;} C, \qquad f = \bar{f} \circ i.$$

    One constructs the tensor product $A \otimes B$ as the vector space spanned by the elements $a \otimes b$, $a \in A$, $b \in B$, with relations
$$(a_1 + a_2) \otimes b = a_1 \otimes b + a_2 \otimes b, \qquad a \otimes (b_1 + b_2) = a \otimes b_1 + a \otimes b_2, \qquad (\alpha a) \otimes b = a \otimes (\alpha b) = \alpha(a \otimes b),$$
for all $a, a_1, a_2 \in A$, $b, b_1, b_2 \in B$ and $\alpha \in \mathbb{C}$. The map $i\colon A \times B \to A \otimes B$ is given by $i(a, b) = a \otimes b$. Using the above universal mapping property one gets easily that the tensor product is unique, in the sense that any two tensor products of $A$ and $B$ are isomorphic.

    If $R$ is an algebra, $A$ is a right $R$-module (a vector space that affords an antirepresentation of $R$) and $B$ is a left $R$-module, then one forms the vector space $A \otimes_R B$ as above, except that we require a bilinear map $f\colon A \times B \to C$ to satisfy the additional condition $f(ar, b) = f(a, rb)$ for all $r \in R$. The tensor product $A \otimes_R B$ is then once again constructed from the vector space spanned by the elements $a \otimes b$, $a \in A$, $b \in B$, with the relations above and the additional relation $ar \otimes b = a \otimes rb$, for all $r \in R$.
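The matrix counterpart of these relations is the Kronecker product. The following added sketch (not in the original notes; numpy assumed) checks bilinearity and the mixed-product rule $(A_1 \otimes B)(A_2 \otimes C) = A_1A_2 \otimes BC$ on random matrices:

```python
import numpy as np

A1, A2 = np.random.rand(2, 2), np.random.rand(2, 2)
B, C = np.random.rand(3, 3), np.random.rand(3, 3)

# bilinearity: (A1 + A2) x B = A1 x B + A2 x B and (cA) x B = c (A x B)
assert np.allclose(np.kron(A1 + A2, B), np.kron(A1, B) + np.kron(A2, B))
assert np.allclose(np.kron(2.5 * A1, B), 2.5 * np.kron(A1, B))
# mixed product: (A1 x B)(A2 x C) = (A1 A2) x (B C)
assert np.allclose(np.kron(A1, B) @ np.kron(A2, C), np.kron(A1 @ A2, B @ C))
print("bilinearity and the mixed-product rule hold")
```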

  3. Let $A \subseteq B$ be semisimple algebras, so that $A$ is a subalgebra of $B$. Let $\hat{A}$ and $\hat{B}$ be index sets for the irreducible representations of $A$ and $B$ respectively, and suppose that $f_{ij}^\mu$, $\mu \in \hat{A}$, is a complete set of matrix units of $A$.

    [Bt] There exists a complete set of matrix units $e_{rs}^\lambda$, $\lambda \in \hat{B}$, of $B$ that is a refinement of the $f_{ij}^\mu$, in the sense that for each $\mu \in \hat{A}$ and each $i$, $f_{ii}^\mu = \sum e_{rr}^\lambda$ for some set of the $e_{rr}^\lambda$.

    Proof.
    Suppose that $B \cong \bigoplus_{\lambda \in \hat{B}} M_{d_\lambda}(\mathbb{C})$. Let $z_\lambda \in B$ be the minimal central idempotent of $B$ such that $I_\lambda = Bz_\lambda$ is the minimal ideal corresponding to the $\lambda$ block of matrices in $\bigoplus_\lambda M_{d_\lambda}(\mathbb{C})$.

    For each $\mu \in \hat{A}$ and each $i$, decompose $f_{ii}^\mu$ into minimal orthogonal idempotents of $B$ (Section 1, Ex 10), $f_{ii}^\mu = \sum_j p_j$. Label each $p_j$ appearing in this sum by the element $\lambda \in \hat{B}$ which indexes the minimal ideal $I_\lambda = Bp_jB$ of $B$. Then
$$1 = \sum_{\mu,i} f_{ii}^\mu = \sum_{\lambda \in \hat{B}} \sum_{j=1}^{d_\lambda} p_j^\lambda.$$
Now
$$B = 1 \cdot B \cdot 1 = \sum_{\lambda,\mu \in \hat{B}}\;\sum_{1 \le i \le d_\lambda,\, 1 \le j \le d_\mu} p_i^\lambda Bp_j^\mu.$$
If $\lambda \neq \mu$ then, since the $z_\lambda$ are central and $p_i^\lambda = z_\lambda p_i^\lambda$,
$$p_i^\lambda Bp_j^\mu = p_i^\lambda Bz_\mu p_j^\mu = z_\lambda z_\mu p_i^\lambda Bp_j^\mu = 0$$
for all $i, j$. Since $p_i^\lambda = p_i^\lambda \cdot 1 \cdot p_i^\lambda \in p_i^\lambda I_\lambda p_i^\lambda$ and $p_i^\lambda Bp_j^\lambda \cdot p_j^\lambda Bp_i^\lambda = p_i^\lambda I_\lambda p_i^\lambda \neq 0$, we know that $p_i^\lambda Bp_j^\lambda$ is not zero for any $1 \le i, j \le d_\lambda$. Furthermore, since the dimension of $B$ is $\sum_\lambda d_\lambda^2$, each of the spaces $p_i^\lambda Bp_j^\lambda$ is one dimensional.

    For each $p_i^\lambda$ define $e_{ii}^\lambda = p_i^\lambda$. For each $\lambda$ and each $1 \le i < j \le d_\lambda$, let $e_{ij}^\lambda$ be some nonzero element of $p_i^\lambda Bp_j^\lambda$, and then choose $e_{ji}^\lambda \in p_j^\lambda Bp_i^\lambda$ such that $e_{ij}^\lambda e_{ji}^\lambda = e_{ii}^\lambda$. This defines a complete set of matrix units of $B$.

  4. Let $G$ be a finite group and let $H$ be a subgroup of $G$. Let $R = \{g_i\}$ be a set of representatives for the left cosets $gH$ of $H$ in $G$. The action of $G$ on the cosets of $H$ in $G$ by left multiplication defines a representation $\pi_H$ of $G$, a permutation representation of $G$. Let $g \in G$. The entries $\pi_H(g)_{i'i}$ of the matrix $\pi_H(g)$ are given by $\pi_H(g)_{i'i} = \delta_{i'k}$, where $k$ is such that $gg_i \in g_kH$.

    Let $V$ be a representation of $H$ and let $B = \{v_j\}$ be a basis of $V$. Then the elements $g \otimes v_j$, $g \in G$, $v_j \in B$, span $\mathbb{C}G \otimes_H V$. The fourth relation in 5.1 gives that the set $\{g_i \otimes v_j \mid g_i \in R,\, v_j \in B\}$ forms a basis of $\mathbb{C}G \otimes_H V$.

    Let $g \in G$ and suppose that $gg_i = g_kh$, where $h \in H$ and $g_k \in R$. Then
$$g(g_i \otimes v_j) = gg_i \otimes v_j = g_kh \otimes v_j = g_k \otimes hv_j = \sum_{j'} g_k \otimes v_{j'}V(h)_{j'j} = \sum_{i',j'} g_{i'} \otimes v_{j'}V(h)_{j'j}\delta_{i'k} = \sum_{i',j'} g_{i'} \otimes v_{j'}V(h)_{j'j}\pi_H(g)_{i'i}.$$
Then
$$\chi_{V\uparrow H}^G(g) = \sum_{g_i \in R,\, v_j \in B} g(g_i \otimes v_j)\Big|_{g_i \otimes v_j} = \sum_{g_i\,:\,g_i^{-1}gg_i \in H}\;\sum_j V(g_i^{-1}gg_i)_{jj}.$$

    Since characters are constant on conjugacy classes we have
$$\chi_{V\uparrow H}^G(g) = \frac{1}{|H|}\sum_{h \in H}\;\sum_{g_i\,:\,h^{-1}g_i^{-1}gg_ih \in H} \chi_V(h^{-1}g_i^{-1}gg_ih) = \frac{1}{|H|}\sum_{x \in G\,:\,x^{-1}gx \in H} \chi_V(x^{-1}gx) = \frac{|G|}{|H|\,|C_g|}\sum_{a \in H \cap C_g} \chi_V(a),$$
where $C_g$ denotes the conjugacy class of $g$; the second equality uses the fact that every $x \in G$ can be written uniquely as $x = g_ih$ with $g_i \in R$, $h \in H$, and the third the fact that for each $a \in C_g$ there are exactly $|G|/|C_g|$ elements $x \in G$ with $x^{-1}gx = a$. This is an alternate proof of Theorem 5.8 for the special case of inducing from a subgroup $H$ of a group $G$ to the group $G$.
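The induction formula can be exercised on a small case (an added illustration, not in the original notes): inducing the trivial character of the order-2 subgroup $H = \{e, (0\,1)\}$ of $S_3$ gives the permutation character on the 3 cosets, with values $3, 1, 0$ on the classes of the identity, transpositions and 3-cycles.

```python
import itertools

# S3 as permutation tuples; (g h)(i) = g(h(i)).
G = list(itertools.permutations(range(3)))
comp = lambda g, h: tuple(g[h[i]] for i in range(3))
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))
H = {(0, 1, 2), (1, 0, 2)}        # the subgroup {e, (0 1)}

def induced_trivial(g):
    # chi↑(g) = (1/|H|) * #{x in G : x^{-1} g x in H} for the trivial chi
    total = sum(1 for x in G if comp(comp(inv(x), g), x) in H)
    return total // len(H)

values = {g: induced_trivial(g) for g in G}
assert values[(0, 1, 2)] == 3     # identity
assert values[(1, 0, 2)] == 1     # a transposition
assert values[(1, 2, 0)] == 0     # a 3-cycle
print("induced character values:", sorted(values.values()))
```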

  5. Define $\mathbb{C}G \otimes_d \mathbb{C}G$ to be the subalgebra of the algebra $\mathbb{C}G \otimes \mathbb{C}G$ consisting of the span of the elements $g \otimes g$, $g \in G$. Then $\mathbb{C}G \cong \mathbb{C}G \otimes_d \mathbb{C}G$ as algebras.

    Let $V_1$ and $V_2$ be representations of $G$. Then the restriction of the $\mathbb{C}G \otimes \mathbb{C}G$ representation $V = V_1 \otimes V_2$ to the algebra $\mathbb{C}G \otimes_d \mathbb{C}G$ is the Kronecker product (Section 4, Ex 1) $V_1 \otimes_d V_2 = (V_1 \otimes V_2)\big\downarrow_{\mathbb{C}G \otimes_d \mathbb{C}G}^{\mathbb{C}G \otimes \mathbb{C}G}$ of $V_1$ and $V_2$. Since $\mathbb{C}G \cong \mathbb{C}G \otimes_d \mathbb{C}G$ we can view $V_1 \otimes_d V_2$ as a representation of $G$.

    Let $V^\lambda$ and $V^\mu$ be irreducible representations of $G$ such that $V^\lambda \otimes V^\mu$ appears as an irreducible component of the $\mathbb{C}G \otimes \mathbb{C}G$ representation $V_1 \otimes V_2$. The decomposition of the Kronecker product
$$V^\lambda \otimes_d V^\mu = (V^\lambda \otimes V^\mu)\big\downarrow_{\mathbb{C}G \otimes_d \mathbb{C}G}^{\mathbb{C}G \otimes \mathbb{C}G} \cong \bigoplus_\nu g_{\lambda\mu}^\nu V^\nu$$
into irreducible representations $V^\nu$ of $G$ is given by the branching rule for $\mathbb{C}G \otimes \mathbb{C}G \supseteq \mathbb{C}G \otimes_d \mathbb{C}G$. Let $C_1$ and $C_2$ be the centralisers of the representations $V_1$ and $V_2$ respectively, and let $C$ be the centraliser of $V = V_1 \otimes V_2$ as a representation of $\mathbb{C}G \otimes_d \mathbb{C}G$. Applying Theorem 5.9 to $V$, with $A = \mathbb{C}G \otimes \mathbb{C}G$ and $B = \mathbb{C}G \otimes_d \mathbb{C}G \cong \mathbb{C}G$, shows that the $g_{\lambda\mu}^\nu$ are also given by the branching rule for $C_1 \otimes C_2 \subseteq C$.
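The coefficients $g_{\lambda\mu}^\nu$ can be computed from characters. As an added worked example (not in the original notes; numpy assumed), the Kronecker square of the 2-dimensional irrep of $S_3$ decomposes as trivial $\oplus$ sign $\oplus$ 2-dim:

```python
import numpy as np

# Character table of S3: rows trivial, sign, 2-dim; class sizes 1, 3, 2.
Xi = np.array([[1.0,  1.0,  1.0],
               [1.0, -1.0,  1.0],
               [2.0,  0.0, -1.0]])
sizes = np.array([1.0, 3.0, 2.0])
order = 6.0

# g^{lam,mu}_nu = <chi^lam chi^mu, chi^nu>
#              = (1/|G|) sum_rho |C_rho| chi^lam(rho) chi^mu(rho) chi^nu(rho)
product = Xi[2] * Xi[2]   # character of the Kronecker square of the 2-dim irrep
mults = [(sizes * product * Xi[nu]).sum() / order for nu in range(3)]
assert mults == [1.0, 1.0, 1.0]   # 2-dim x 2-dim = trivial + sign + 2-dim
print("multiplicities g^{2,2}_nu =", mults)
```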

References

[BG] A. Braverman and D. Gaitsgory, Crystals via the affine Grassmanian, Duke Math. J. 107 no. 3, (2001), 561-575; arXiv:math/9909077v2, MR1828302 (2002e:20083)
