# Root Systems


In this note we give an elementary account of the (crystallographic) root systems, with a Haskell implementation. The complete code file is here. It uses HaskellForMaths as a dependency; in fact, I started the implementation by modifying Math.Projects.RootSystem, a module of HaskellForMaths, and adding in positive roots and longest elements.

We show algorithmically and combinatorially how to generate the positive roots using the root poset (which I learned by reading Chapter 12 of [{kirson12}]) and how to compute the longest element of the Weyl group of a root system (which I learned by reading [{humphreys92}]).

Let $$E = \real^q$$ for some positive integer $$q$$, equipped with the standard inner product, and let $$\alpha, \beta \in E$$ be two nonzero vectors. Let $$a_{\alpha, \beta}$$ be twice the coefficient of the projection of $$\beta$$ onto $$\alpha$$:

$a_{\alpha, \beta} = 2 {\braket{\alpha, \beta} \over \braket{\alpha, \alpha}}.$

The reflection of $$\beta$$ about the hyperplane orthogonal to $$\alpha$$ is

$s_\alpha \beta = \beta - a_{\alpha, \beta} \alpha.$

Let $$\Delta \subset E$$ be a finite set of vectors with $$0 \notin \Delta$$; one can then generate a set $$R \supset \Delta$$ closed under reflections:

For any $$\alpha, \beta \in R$$, $$s_\alpha \beta \in R$$.
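These definitions translate directly into a toy Haskell sketch (the names here are ad hoc, not the HaskellForMaths API): the two formulas above, plus a fixed-point iteration generating the closure of $$\Delta$$ under reflections.

```haskell
import qualified Data.Set as S

-- vectors in E as lists of Rationals
type Vec = [Rational]

-- the standard inner product on E
dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

-- a_{alpha,beta} = 2 <alpha, beta> / <alpha, alpha>
cartanInt :: Vec -> Vec -> Rational
cartanInt a b = 2 * dot a b / dot a a

-- s_alpha beta = beta - a_{alpha,beta} alpha
reflect :: Vec -> Vec -> Vec
reflect a b = zipWith (-) b (map (cartanInt a b *) a)

-- the smallest R containing delta with s_alpha beta in R for all alpha, beta in R
closure :: [Vec] -> S.Set Vec
closure delta = go (S.fromList delta)
  where
    go r | r' `S.isSubsetOf` r = r
         | otherwise           = go (S.union r r')
      where r' = S.fromList [reflect a b | a <- S.toList r, b <- S.toList r]
```

For example, with $$\Delta = \{e_1 - e_2, e_2 - e_3\} \subset \real^3$$ the closure consists of the six vectors $$e_i - e_j$$, $$i \neq j$$, i.e. the root system of type $$A_2$$.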

Furthermore, if $$\Delta$$ and $$R$$ satisfy the following conditions we call $$R$$ a root system with simple roots $$\Delta$$:

• The elements of $$\Delta$$ are linearly independent.
• For any $$\alpha \in R$$, the only multiples of $$\alpha$$ in $$R$$ are $$\alpha$$ and $$-\alpha$$.
• For all $$\alpha, \beta \in R$$, $$a_{\alpha, \beta} \in \intg$$.
• Any $$\alpha \in R$$ can be written as $$\alpha = \sum_{\beta \in \Delta} c_\beta \beta$$ where either all the $$c_\beta$$'s are nonnegative integers (in which case $$\alpha$$ is called a positive root), or all of them are nonpositive integers (in which case $$\alpha$$ is called a negative root).
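As a quick sanity check of the integrality condition, here is a sketch (with ad-hoc names, not the library code) verifying $$a_{\alpha, \beta} \in \intg$$ for all pairs of roots of type $$B_2$$, whose roots are $$\pm e_1 \pm e_2, \pm e_1, \pm e_2$$:

```haskell
import Data.Ratio (denominator)

type Vec = [Rational]

dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

cartanInt :: Vec -> Vec -> Rational
cartanInt a b = 2 * dot a b / dot a a

-- the eight roots of B_2
rootsB2 :: [Vec]
rootsB2 = [[1,1],[1,-1],[-1,1],[-1,-1],[1,0],[-1,0],[0,1],[0,-1]]

-- every a_{alpha,beta} should reduce to an integer
allIntegral :: Bool
allIntegral = and [denominator (cartanInt a b) == 1 | a <- rootsB2, b <- rootsB2]
```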

We write $$\Delta = \{\alpha_1, ..., \alpha_n\}$$; the number $$n$$ is called the rank of $$R$$.

Here is a table of the root systems of the classical Lie algebras, with a standard choice of simple roots, the dimension $$q$$ of the host space $$E$$, and the total number of roots:

| Type | $$\Delta$$ | $$q$$ | $$\#R$$ |
|------|------------|-------|---------|
| $$A_n$$ | $$\{e_i - e_{i + 1}, i = 1 : n\}$$ | $$n + 1$$ | $$n(n + 1)$$ |
| $$B_n$$ | $$\{e_i - e_{i + 1}, i = 1 : n - 1\} \cup \{e_n\}$$ | $$n$$ | $$2 n^2$$ |
| $$C_n$$ | $$\{e_i - e_{i + 1}, i = 1 : n - 1\} \cup \{2 e_n\}$$ | $$n$$ | $$2 n^2$$ |
| $$D_n$$ | $$\{e_i - e_{i + 1}, i = 1 : n - 1\} \cup \{e_{n - 1} + e_n\}$$ | $$n$$ | $$2 n (n - 1)$$ |
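The table translates directly into code; the following sketch (helper names are mine, not the library's `simpleSystem`) builds the listed simple systems as vectors:

```haskell
type Vec = [Rational]

-- e_i in R^q (1-based)
e :: Int -> Int -> Vec
e q i = [if j == i then 1 else 0 | j <- [1 .. q]]

minus, plus :: Vec -> Vec -> Vec
minus = zipWith (-)
plus = zipWith (+)

simpleA, simpleB, simpleC, simpleD :: Int -> [Vec]
simpleA n = [e (n + 1) i `minus` e (n + 1) (i + 1) | i <- [1 .. n]]
simpleB n = [e n i `minus` e n (i + 1) | i <- [1 .. n - 1]] ++ [e n n]
simpleC n = [e n i `minus` e n (i + 1) | i <- [1 .. n - 1]] ++ [map (2 *) (e n n)]
simpleD n = [e n i `minus` e n (i + 1) | i <- [1 .. n - 1]] ++ [e n (n - 1) `plus` e n n]
```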
Some discussion of $$A_n$$ can be found in the last section of representation_sl_n.

## The positive roots

Denote the set of positive roots by $$\Pi$$ and the set of negative roots by $$- \Pi = \{- \alpha: \alpha \in \Pi\}$$. There are equally many positive and negative roots: $$|\Pi| = |- \Pi| =: N$$. So we can write $$R = \{\alpha_1, ..., \alpha_{2 N}\}$$ where the first $$n$$ roots are the simple roots.

The quantities $$a_{\alpha_i, \alpha_j} =: a_{ij}$$ are called the Dynkin indices. When $$i, j \le n$$ the Dynkin indices form the Cartan matrix.

The positive roots form a nice poset. Given a root $$\beta$$ with simple root decomposition $$\beta = \sum_{i = 1 : n} c_i \alpha_i$$, we call $$c_{1 : n}$$ the root index of $$\beta$$. The root poset is defined by: $$\beta \le \beta'$$ if $$c_i \le c_i'$$ for all $$i$$, where $$c$$ and $$c'$$ are the root indices of $$\beta$$ and $$\beta'$$ respectively.

This way the roots are graded by the sum of the root indices $$\sum c_i$$; e.g. if $$\beta = \alpha_1 + 2 \alpha_3$$ is a root, then it is a level-3 root.
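In code, the partial order and the grading on root indices are straightforward (a sketch on root indices as `[Int]`, assumed nonnegative for positive roots):

```haskell
type RootIndex = [Int]

-- beta <= beta' iff the root indices compare componentwise
leq :: RootIndex -> RootIndex -> Bool
leq c c' = and (zipWith (<=) c c')

-- the level of a root is the sum of its root index
rootLevel :: RootIndex -> Int
rootLevel = sum
```

E.g. $$\alpha_1 + 2 \alpha_3$$ has root index $$(1, 0, 2)$$ and level $$3$$, and $$\alpha_1 \le \alpha_1 + 2 \alpha_3$$ in the root poset.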

It turns out that the positive roots can be generated by "adding one simple root at a time", that is: starting with the simple roots $$\alpha_{1 : n}$$, which are also the lowest roots, we generate the second level of roots by adding one simple root to obtain roots of the form $$\alpha_i + \alpha_j$$, then the third level by adding a simple root to roots in the second level, and so on and so forth.

How does one generate all the positive roots? By using simple root strings. Every positive root belongs to some simple root string, a sequence of roots in which consecutive roots differ by a fixed simple root. For example, in the root system of type $$G_2$$, let $$\Delta = \{\alpha_1 = (1, -1, 0), \alpha_2 = (-2, 1, 1)\}$$. Then $$\alpha_2, \alpha_1 + \alpha_2, 2 \alpha_1 + \alpha_2$$ and $$3 \alpha_1 + \alpha_2$$ form a simple root string where each root has one more $$\alpha_1$$ than the previous root. Moreover, the directed graph induced by the root strings has the nice property that it is connected and the simple roots are the ancestors of all the other roots. Therefore all the positive roots of level $$k$$ can be obtained from the roots of level $$k - 1$$, provided one knows the successors of each level-$$(k - 1)$$ root in the simple root strings.

Given a root $$\alpha$$ with root index $$c$$, to determine its successors in simple root strings, one can do the following:

• Write down $$m_{1 : n}$$ where $$m_i = \begin{cases} 2 \delta_{ij} & \text{if }\alpha = \alpha_j \in \Delta \\ \max\{k \ge 0: \alpha - k \alpha_i \in \Pi\} & \text{otherwise.} \end{cases}$$
• Compute $$p_{1 : n}$$ where $$p_i = m_i - a_{\alpha_i, \alpha}$$.
• Then for any $$i$$ such that $$p_i > 0$$, $$\alpha + \alpha_i$$ is a successor of $$\alpha$$ in the $$\alpha_i$$-string.

See Chapter 12 of [{kirson12}] for a working example of type $$G_2$$.
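To make the procedure concrete, here is a self-contained sketch (independent of the library code excerpted below; all names are ad hoc) that generates the positive-root indices level by level from a Cartan matrix, following exactly the three steps above:

```haskell
import qualified Data.Set as S

type Idx = [Int]

-- generate all positive root indices from the Cartan matrix (a_{ij})
posRootIndices :: [[Int]] -> [Idx]
posRootIndices cm = go (S.fromList simples) simples
  where
    n = length cm
    e i = [if j == i then 1 else 0 | j <- [0 .. n - 1]]
    simples = map e [0 .. n - 1]
    -- a_{alpha_i, alpha} for alpha with root index c
    dynkin c i = sum (zipWith (*) (cm !! i) c)
    -- m_i: how far the alpha_i-string extends below c among known roots
    mval found c i
      | c `elem` simples = if c == e i then 2 else 0
      | otherwise =
          length (takeWhile (`S.member` found)
                            [zipWith (-) c (map (* k) (e i)) | k <- [1 ..]])
    -- grow one level at a time from the current frontier
    go found frontier
      | null next = S.toList found
      | otherwise = go (S.union found (S.fromList next)) next
      where
        succs c = [zipWith (+) c (e i) | i <- [0 .. n - 1]
                                       , mval found c i - dynkin c i > 0]
        next = S.toList (S.fromList (concatMap succs frontier))
```

For the $$G_2$$ Cartan matrix $$\begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}$$ this yields the six positive-root indices $$(1,0), (0,1), (1,1), (2,1), (3,1), (3,2)$$.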

Here is the excerpt of the relevant Haskell code (the complete version can be found here):

-- |all positive roots
positiveRoots :: SimpleSystem -> [[Q]]
positiveRoots ss = (positiveRoots' $ cartanMatrix' ss) <<*>> ss

-- |return root indices of all positive roots
positiveRoots' :: [[Q]] -> [[Q]]
positiveRoots' cm = positiveRoots'' [] (iMx $ length cm)
  where positiveRoots'' :: [[Q]] -> [[Q]] -> [[Q]]
        positiveRoots'' pr npr
          | null npr = pr
          | otherwise = positiveRoots'' (pr ++ npr)
              (S.toList . S.fromList $ mconcat $ zipWith newRootIndices npr (pIndex pr cm <$> npr))

-- |calculate the successors of ri in the simple string
-- |given root index ri and its p-vector pi
newRootIndices :: [Q] -> [Q] -> [[Q]]
newRootIndices ri pi = (ri <+>) <$> (go pi [] 1)
  where go :: [Q] -> [Int] -> Int -> [[Q]]
        go [] ys n = basisElt n <$> ys
        go (x:xs) ys k = go xs (if x == 0 then ys else ys ++ [k]) (k + 1)

-- |given the already-found positive roots allRootIndices,
-- |the transposed Cartan matrix cm and the root index, calculate its p-vector
pIndex :: [[Q]] -> [[Q]] -> [Q] -> [Q]
pIndex allRootIndices cm rootIndex =
  (mIndex rootIndex allRootIndices) <-> (dynkinIndex' rootIndex cm)

-- |given the root index and all already-found positive roots, calculate its m-vector
mIndex :: [Q] -> [[Q]] -> [Q]
mIndex rootIndex allRootIndices =
  if isMultBasis rootIndex
    then 2 *> rootIndex
    else maxV $ filter isMultBasis [rootIndex <-> r | r <- allRootIndices]


One may perform a QuickCheck to verify, for example, that the positive roots together with their negations form all the roots:

-- |test that the positive roots and negative roots together form all the roots
prop_positiveRoots1 :: Int -> Bool
prop_positiveRoots1 n = prop_positiveRoots1' t m
  where (t, m) = int2TypeInt n

prop_positiveRoots1' :: Type -> Int -> Bool
prop_positiveRoots1' t n =
  let pr = positiveRoots (simpleSystem t n) in
    (S.fromList $ pr ++ ((-1) *>> pr)) == (S.fromList $ allRoots (simpleSystem t n))

-- |Auxiliary function to transform an Int to a root system type for QuickCheck
int2TypeInt :: Int -> (Type, Int)
int2TypeInt n
  | n > 108 = int2TypeInt (n `mod` 108 + 1)
  | n > 105 = (E, n - 100)
  | n > 96 = (G, 2)
  | n > 88 = (F, 4)
  | n > 85 = (E, n - 80)
  | n > 62 = (D, n - 60)
  | n > 40 = (C, n - 40)
  | n > 20 = (B, n - 20)
  | n > 0 = (A, n)
  | otherwise = int2TypeInt (n `mod` 108 + 1)

*RootSystem Test.QuickCheck> quickCheck prop_positiveRoots1
+++ OK, passed 100 tests.


## The longest element

The Weyl group $$W$$ is the group generated by $$s_\alpha$$ for $$\alpha \in R$$; in fact it is already generated by the simple reflections $$s_{\alpha_i}$$ for $$\alpha_i \in \Delta$$. Therefore any $$w \in W$$ can be written as a product of simple reflections $$w = s_{\alpha_{i_l}} ... s_{\alpha_{i_2}} s_{\alpha_{i_1}}$$. The length $$\ell(w)$$ of $$w$$ is the smallest such $$l$$, and a rewriting of $$w$$ as a product of $$\ell(w)$$ simple reflections is called a reduced decomposition. The longest element of $$W$$ is the unique element $$\sigma_0$$ such that $$\ell(\sigma_0) > \ell(w)$$ for all $$w \in W$$ with $$w \neq \sigma_0$$.

As it turns out, $$\sigma_0 \Pi = - \Pi$$, $$\ell(\sigma_0) = N$$ and one can think of the reduced decomposition $$\sigma_0 = s_{\alpha_{i_N}} s_{\alpha_{i_{N - 1}}} ... s_{\alpha_{i_2}} s_{\alpha_{i_1}}$$ as sending $$\Pi$$ to $$- \Pi$$ one element at a time:

\begin{align} |s_{\alpha_{i_1}} \Pi \cap \Pi| &= N - 1\\ |s_{\alpha_{i_2}} s_{\alpha_{i_1}} \Pi \cap \Pi| &= N - 2\\ &...\\ |s_{\alpha_{i_N}} ... s_{\alpha_{i_2}} s_{\alpha_{i_1}} \Pi \cap \Pi| &= 0 \end{align}

Denote $$\Pi^{(j)} := s_{\alpha_{i_j}} s_{\alpha_{i_{j - 1}}} ... s_{\alpha_{i_2}} s_{\alpha_{i_1}} \Pi$$. The following Claim helps one find an admissible $$\alpha_{i_{j + 1}}$$ given $$\Pi^{(j)}$$.

Claim. Any element of $$\Delta \cap \Pi^{(j)}$$ is an admissible $$\alpha_{i_{j + 1}}$$.

Therefore, one can find $$\alpha_{i_1}$$ by picking an arbitrary root in $$\Delta$$, since $$\Delta \cap \Pi = \Delta$$. Then one can pick any $$\alpha_{i_2} \in \Delta \cap s_{\alpha_{i_1}} \Pi$$, and so on and so forth, until at step $$N$$, $$\Pi^{(N)} = - \Pi$$, hence $$\Delta \cap \Pi^{(N)} = \emptyset$$ and we are done.
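The greedy procedure just described can be sketched independently of the library code below (ad-hoc names; roots as `Rational` vectors):

```haskell
type Vec = [Rational]

dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

-- the reflection s_alpha
s :: Vec -> Vec -> Vec
s a b = zipWith (-) b (map ((2 * dot a b / dot a a) *) a)

-- greedily send Pi to -Pi, recording the simple reflections in order of application
longest :: [Vec] -> [Vec] -> [Vec]
longest delta = go []
  where
    go acc piJ =
      case [a | a <- delta, a `elem` piJ] of
        []      -> reverse acc
        (a : _) -> go (a : acc) (map (s a) piJ)
```

In type $$A_2$$, with $$\Delta = \{e_1 - e_2, e_2 - e_3\}$$ and $$\Pi = \Delta \cup \{e_1 - e_3\}$$, this produces a word of length $$3 = N$$.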

Proof. First note that a root system $$R$$ admits multiple choices of simple roots (each such choice is called a simple system) and of positive roots (each called a positive system), as opposed to our fixed simple roots $$\Delta$$ and positive roots $$\Pi$$. The following lemma is useful.

Lemma.

1. For any $$w \in W$$, $$w \Pi$$ is a positive system.
2. For any $$\alpha \in \Delta$$, $$s_\alpha \Pi = (\Pi - \{\alpha\}) \cup \{-\alpha\}$$.

(End of Lemma)
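Item 2 of the Lemma is easy to check numerically; here is a sketch in type $$A_2$$ (ad-hoc helpers, not the library code):

```haskell
import qualified Data.Set as S

type Vec = [Rational]

dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

-- the reflection s_alpha
s :: Vec -> Vec -> Vec
s a b = zipWith (-) b (map ((2 * dot a b / dot a a) *) a)

alpha1, alpha2 :: Vec
alpha1 = [1, -1, 0]
alpha2 = [0, 1, -1]

-- the positive system of A_2
posA2 :: S.Set Vec
posA2 = S.fromList [alpha1, alpha2, zipWith (+) alpha1 alpha2]

-- s_{alpha_1} Pi == (Pi - {alpha_1}) + {-alpha_1}
item2Holds :: Bool
item2Holds =
  S.map (s alpha1) posA2
    == S.insert (map negate alpha1) (S.delete alpha1 posA2)
```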

We now return to the proof of the Claim, by induction:

Basis. By Item 2 of the Lemma, for any $$\alpha \in \Delta = \Delta \cap \Pi$$, $$|s_\alpha \Pi \cap \Pi| = N - 1$$.

Induction. Given $$\Pi^{(j)}$$ we know that $$|\Pi^{(j)} \cap \Pi| = N - j$$. Pick any $$\alpha \in \Delta \cap \Pi^{(j)}$$; then by Item 1 of the Lemma, $$- \alpha \notin \Pi^{(j)}$$, and by Item 2 we have

$|s_\alpha \Pi \cap \Pi^{(j)}| = N - j - 1,$

which, applying the bijection $$s_\alpha$$ to both sets, is equivalent to

$|\Pi \cap s_\alpha \Pi^{(j)}| = N - j - 1.$

$$\square$$

Here is the Haskell implementation:

-- |The indices of the simple roots in the reduced decomposition of the longest element
longestElementIndex :: SimpleSystem -> [Int]
longestElementIndex ss = (+1) <$> fromJust <$> flip elemIndex ss <$> longestElement ss

-- |The reduced decomposition of the longest element
longestElement :: SimpleSystem -> [[Q]]
longestElement ss = longestElement' [] posRoots
  where posRoots = positiveRoots ss
        longestElement' :: [[Q]] -> [[Q]] -> [[Q]]
        longestElement' xs rs =
          let ys = (S.fromList ss) `S.intersection` (S.fromList rs) in
            if S.null ys
              then xs
              else let alpha = head (S.toList ys) in
                     longestElement' (alpha:xs) (s alpha <$> rs)


Some examples of running the code:

*RootSystem Test.QuickCheck> longestElementIndex  (simpleSystem A 5)
[5,4,3,2,1,5,4,3,2,5,4,3,5,4,5]
*RootSystem Test.QuickCheck> longestElementIndex  (simpleSystem D 5)
[1,2,3,5,4,3,2,1,2,3,5,4,3,2,3,5,4,3,5,4]
*RootSystem Test.QuickCheck> longestElementIndex  (simpleSystem F 4)
[4,3,2,1,3,2,3,4,3,2,1,3,2,3,4,1,2,3,2,1,2,3,2,3]
*RootSystem Test.QuickCheck> longestElementIndex  (simpleSystem G 2)
[1,2,1,2,1,2]


We also check that the length of the longest element is half the total number of roots:

prop_longestElement :: Int -> Bool
prop_longestElement n = let (t, m) = int2TypeInt n in
  numRoots t m == 2 * (length $ longestElement $ simpleSystem t m)

*RootSystem Test.QuickCheck> quickCheck prop_longestElement
+++ OK, passed 100 tests.


## References

• [humphreys92] J. E. Humphreys, Reflection Groups and Coxeter Groups, Cambridge University Press, October 1992.
• [kirson12] M. Kirson, Introductory Algebra for Physicists, lecture notes, 2012. [ link ]