Letter Distributions in the English Language and Their Relations

The following packages will be required for this post:

library(gplots) # heatmap
library(stringr) # string cleaning
library(viridis) # colourblind-friendly palette

Motivation

There are many unspoken but unanimously understood rules in the English language. For example, starting a word with the letter ‘Q’ feels completely reasonable; “queue”, “quark”, “quizzical”, and many other words do just this. But can you think of any English word ending with the letter ‘Q’? Unless you are a Scrabble nerd, I imagine that this is a near-impossible task. In my research I could only find six examples, and most of these were alternative spellings of already obscure words. For example, the shortest of these words is ‘suq’, an alternative spelling of ‘souk’, a North African or Middle Eastern marketplace. Great for annoying your opponents in Scrabble, but not so much for everyday speech. There are many more rules like this. A large number of them have exceptions, though in the majority of cases they are obeyed. Yet they rarely cross our mind. They just look natural. They just look English.

There are many other conventions guiding the construction of English words that involve how letters relate to one another. Examples include a ‘Q’ almost always being followed by a ‘U’, and the fact that only certain letters can be doubled: a word with a double ‘X’ certainly would not look like it belongs in the English language. I do not wish to analyse such rules in this post but instead focus on the patterns that arise in the distributions of individual letters. I will return to this more general idea in a future blog post concerning the use of Markov chains in language generation, which will be linked here when published.

The idea for this blog post was spurred on by the data collection for my recent blog post concerning the optimum layout of the T9 typing system. In this, I sourced data on the letter frequencies of various languages, including English. Alongside the frequency tables describing the general letter distribution in English, there were also tables looking at the same data for just the first, second, or third letter. Whereas the most common letter in the English language without restriction on position is ‘E’, making up 13% of all use, when we look at the first position only, the dominant letter becomes ‘T’ with a whopping 16% of all use (‘E’ contributes only 2.8%). These figures are fascinating but not very reflective of the overall distribution of letters within words. They don’t account for the varying lengths of English words and, more importantly, for the fact that very long words are far less common than words of typical length. To have a serious look at how letters are distributed within words, what we would really want is a sort of continuous probability distribution on $(0,1)$, with the value at each point of that interval relating to how likely it is for a particular letter to be located at that proportion of the way through a word.

I spent hours searching for this sort of data but to no avail. It simply didn’t appear to exist yet. This might have been an appropriate point to put this project to rest. On the other hand, with the right skills and data sources, there’s no reason why this information can’t be constructed from scratch. The process of such data collection and the analysis that follows is what I wish to discuss in this post.

Data Collection

In order to complete the desired analysis we need words. Lots of words. My favourite location for random text data is the website ‘textfiles.com‘. This contains a massive selection of free-to-access books in ‘.txt’ format. They range over a wide spectrum of genres and styles, featuring both fiction and non-fiction books from throughout history. We will specifically be looking at the collection of fiction books they have, located here. This page contains 328 hyperlinks, each of which leads to a text file transcription of a classic piece of fiction. In order to make this post self-contained and reproducible, instead of downloading these files manually, we will download a selection of them directly from the site using R. Due to the simple structure of the site and its constituent webpages, this is extremely easy.

# minimum number of words we wish to source
nwords <- 10^6

# url of webpage listing available fiction books
data_source <- "http://textfiles.com/etext/FICTION/"

# download HTML code of this webpage
html <- scan(file = data_source,
             what = character(0),
             sep = "\n",
             quiet = TRUE)

# extract links to text files from each line of HTML code
books <- str_match(html, "<A HREF=\"(.*?)\"")

# remove any lines that didn't contain a link
books <- books[!is.na(books[, 1]), ]

# extract the relative url from the matched links
books <- books[, 2]

# make urls absolute
links <- paste0(data_source, books)

# randomly order the links to the books
links <- sample(links)

# define empty vector to store words
words <- character(0)

# loop through links, downloading contents until we have enough words
for (l in links) {
  words <- append(words,
                  scan(file = l,
                       what = character(0),
                       sep = " ",
                       quote = "",
                       quiet = TRUE))
  if (length(words) >= nwords) break
}

We now have a list of at least one million random words taken from a variety of fictional works. Let’s take a look at a small sample of them.

sample(words, 40)
  1. 'Elvira'
  2. 'at'
  3. 'in'
  4. 'stay."'
  5. 'least,'
  6. 'like'
  7. 'The'
  8. 'meant'
  9. ''
  10. 'tender,'
  11. 'all'
  12. 'I\'ve'
  13. 'month.'
  14. 'them,'
  15. ''
  16. 'is'
  17. 'years'
  18. ''
  19. 'half-unburn\'d.'
  20. 'but'
  21. 'process'
  22. 'for'
  23. 'and'
  24. 'Contek,'
  25. ''
  26. 'three'
  27. 'state'
  28. ''
  29. 'board'
  30. 'has'
  31. 'He'
  32. 'there?"'
  33. 'done,'
  34. 'attempt'
  35. 'in'
  36. 'jolt'
  37. 'her'
  38. 'is'
  39. 'Greek,'
  40. 'child'

The data is awfully messy, but it is there. We now use the stringr package and some base R functions to perform a bit of text-tidying.

# remove any characters that are not standard letters
words <- str_replace_all(words, "[^[a-zA-Z]]", "")

# remove any blank strings
words <- words[words != ""]

# convert to upper-case for consistency
words <- toupper(words)

Taking a look at a sample of the words now gives a more pleasing result.

sample(words, 40)
  1. 'AND'
  2. 'NOT'
  3. 'FIRST'
  4. 'OF'
  5. 'DOOR'
  6. 'VERTUOUS'
  7. 'TOLD'
  8. 'BECOME'
  9. 'POSITION'
  10. 'HALICARNASSUS'
  11. 'OF'
  12. 'IT'
  13. 'ROLLED'
  14. 'MONEY'
  15. 'I'
  16. 'AMMON'
  17. 'BORDER'
  18. 'SEIZED'
  19. 'HOW'
  20. 'TO'
  21. 'THAS'
  22. 'SIDES'
  23. 'I'
  24. 'MY'
  25. 'AS'
  26. 'OF'
  27. 'LAST'
  28. 'MEANS'
  29. 'WILL'
  30. 'INVOLUNTARY'
  31. 'TILL'
  32. 'AS'
  33. 'HAD'
  34. 'AGAIN'
  35. 'MANETTE'
  36. 'LAST'
  37. 'REJOICE'
  38. 'DO'
  39. 'MADE'
  40. 'WILL'

Notice that we had a choice here: we could either entirely remove any word containing a hyphen or apostrophe, or simply strip the offending characters and keep the rest of the word. I went for the latter, as I don’t believe it will have a significant effect on the results, and discarding whole words would be an unnecessary waste of data.
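
For comparison, the rejected alternative, applied to the raw word list before any characters were stripped, might have looked something like the sketch below. Here words_strict is just a hypothetical name for the reduced word list, and str_detect comes from the stringr package loaded above.

# alternative not taken: discard any word containing a non-letter
# character entirely, rather than stripping those characters out
words_strict <- words[!str_detect(words, "[^a-zA-Z]")]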

Patterns and Relations

Generating probability distributions

Now that we have our word-bank, we can begin analysing it. We first need a way of converting this list into a collection of probability distributions, one for each letter, representing where in a random word that letter is likely to be situated.

My first idea was to use a weighted kernel density estimation. For each letter in a word, you take its position as a proportion (e.g. the third letter in a four letter word has position $\frac{2\times3 - 1}{4 \times 2} = \frac{5}{8}$) and then place a kernel at this point weighted by the inverse of the length of the word. If this sounds complicated, don’t worry; it was no good. Although it did produce some rather pleasing results, it did not scale well. It took around a minute to process just a thousand words so the idea of it working for one million or more is almost comical.
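
For illustration, a minimal sketch of that rejected approach might look something like the following, using base R’s density() for the weighted kernel density estimate. The letter_kde helper and words_sample argument are hypothetical names introduced here for the sketch; they play no part in the analysis that follows.

# rough sketch of the rejected idea: a weighted KDE of a letter's position,
# run on a small sample of words (the full word list is far too slow)
letter_kde <- function(letter, words_sample) {
  positions <- numeric(0)
  weights <- numeric(0)
  for (w in words_sample) {
    chars <- strsplit(w, split = "")[[1]]
    len <- length(chars)
    idx <- which(chars == letter)
    if (length(idx) > 0) {
      # midpoint of each occurrence, e.g. (2 * 3 - 1) / (2 * 4) = 5/8
      positions <- c(positions, (2 * idx - 1) / (2 * len))
      # weight each observation by the inverse of the word length
      weights <- c(weights, rep(1 / len, length(idx)))
    }
  }
  # weighted kernel density estimate restricted to (0, 1)
  density(positions, weights = weights / sum(weights), from = 0, to = 1)
}

Calling plot(letter_kde("E", sample(words, 1000))) then draws a smooth positional density for ‘E’, but scaling this up to the full million-word list is exactly what proved impractical.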

Instead, I decided the best way to proceed was to compute a probability distribution directly. For a specific letter of a particular word, the process goes as follows:

  • Equally split the interval $(0,1)$ into a number of classes equal to the length of the word
  • Consider the class whose number (when ordered in an increasing fashion) is the same as the position of the letter in the word
  • Increase the value of all points in this interval by an amount proportional to the length of the interval

When repeated for all letters and words, this will give rise to a reasonably continuous pseudo-PDF. We code this up as follows.

# create matrix to store the values of the points in the
# interval discretised with a resolution of 500
count_mtrx <- matrix(0, nrow = 500, ncol = 26)
rownames(count_mtrx) <- 1:500
colnames(count_mtrx) <- LETTERS

# loop through all words
for (w in words) {
  # split word into characters
  chars <- strsplit(w, split = "")[[1]]
  len <- length(chars)
  # loop over all letter positions in the word
  for (pos in 1:len) {
    # limits of the sub-interval
    lower_limit <- floor(500 * (pos - 1) / len) + 1
    upper_limit <- ceiling(500 * pos / len)
    # discrete points to increase the value of
    updating_cells <- lower_limit:upper_limit
    # increase value of these cells
    new_val <- count_mtrx[updating_cells, chars[pos]] + 1 / len
    count_mtrx[updating_cells, chars[pos]] <- new_val
  }
}

# scale values so the area under the curve is approximately one
freq_mtrx <- apply(count_mtrx, 2, function(x) 500 * x / sum(x))

We now have a 500x26 matrix, with each column representing a letter and its 500 entries forming the discrete approximation of the probability density function we designed. Let’s take a look at these distributions.
@caption=’The probability distribution of English letters within a random word’

# split plotting window into a 4x7 grid and reduce margins
par(mfrow = c(4,7), mar = c(rep(c(2,1), 2)))
for (l in LETTERS) {
  # position bottom row at centre
  if (l == "V") plot.new()
  plot(freq_mtrx[, l],
       main = l,
       xlab = "", ylab = "",
       xaxt = 'n', yaxt = 'n',
       type = "l")
}

There is clearly a lot of noise in the distributions (which is unsurprising considering the primitiveness of the method used to generate them), but their overall shapes are still visible. To highlight these trends, we will fade the current piecewise function and overlay a LOESS approximation to smooth out the noise.
@caption=’The probability distribution of English letters within a random word using LOESS smoothing’

# split plotting window into a 4x7 grid and reduce margins
par(mfrow = c(4,7), mar = c(rep(c(2,1), 2)))
for (l in LETTERS) {
  # position bottom row at centre
  if (l == "V") plot.new()
  plot(freq_mtrx[, l],
       main = l,
       xlab = "", ylab = "",
       xaxt = 'n', yaxt = 'n',
       type = "l", col = "#00000080")
  # add LOESS curve
  lines(lowess(freq_mtrx[, l], f = .2, iter = 10),
        col = "red", lwd = 2)
}

Comparison and clustering

Looking at these distributions, we can see similarities between certain letters. For example, ‘B’ and ‘J’ both feature heavily at the start of a word and become steadily less likely to appear as you progress through it, until at about the halfway point the odds drop sharply. Other similar distributions are those of ‘A’ and ‘I’, which both start with moderate density, become more common towards the middle of the word, and then drop off towards the end.

It would be nice to have a way to numerically quantify the similarity of two such distributions. My first instinct was to use a two-sample Kolmogorov-Smirnov test, a non-parametric hypothesis test for the equality of two continuous distributions. The hypothesis test itself has little relevance to us, since we already know the distributions are different, but the test statistic $D$, measuring the dissimilarity of the two distributions on a scale from 0 to 1, would be a useful measurement to obtain. This test, however, behaved very strangely in many ways. For example, it said that ‘Z’ was more similar to ‘R’ than to ‘U’, which is clearly wrong. I’m not sure of the exact cause of this behaviour, but I assume it is to do with the test being more sensitive to points in common than to differences. As has been the spirit of this post throughout, if there is no working existing solution, why not make your own? I therefore decided to use the trusted $L^2$-norm, a generalisation of the Euclidean distance to vectors or functions, such as the discrete approximations of our densities. Using this metric we can build a dissimilarity matrix as follows.
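
Concretely, if $f_X(i)$ denotes the value of the smoothed density of letter $X$ at the $i$-th of the 500 grid points, then the distance computed in the code below is

$$d(X, Y) = \sqrt{\sum_{i=1}^{500} \left( f_X(i) - f_Y(i) \right)^2}.$$

(A constant factor accounting for the grid spacing is omitted, since it scales every distance equally and so has no effect on the comparisons or the clustering that follow.)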

# function to calculate the L2 distance between two columns of the
# frequency matrix, using their LOESS approximations
L2_distance <- Vectorize(function(x, y) {
  x_smooth <- lowess(freq_mtrx[, x], f = .2, iter = 10)$y
  y_smooth <- lowess(freq_mtrx[, y], f = .2, iter = 10)$y
  sqrt(sum((x_smooth - y_smooth)^2))
})

# generate dissimilarity matrix
dissim_matrix <- outer(LETTERS, LETTERS, L2_distance)
rownames(dissim_matrix) <- LETTERS
colnames(dissim_matrix) <- LETTERS

Taking a look at the first few values of the matrix we get the following.

dissim_matrix[1:10, 1:5]
          A         B        C        D         E
A  0.000000 16.776463 12.78744 27.28297 22.359695
B 16.776463  0.000000 18.04045 37.39060 36.416457
C 12.787437 18.040450  0.00000 29.03896 25.082159
D 27.282972 37.390604 29.03896  0.00000 11.546242
E 22.359695 36.416457 25.08216 11.54624  0.000000
F 18.640712 31.657256 18.37488 15.34660  9.030299
G 11.753001 20.592338 14.98968 18.38069 17.744923
H  6.095563 17.449902 17.29474 31.61455 26.434965
I  2.705759 17.945779 14.66834 28.09740 22.766470
J 23.697318  9.318525 20.36490 40.69734 41.005730

As we can see, every letter has a dissimilarity of zero with itself, since it is being compared with an identical distribution. As stated before, ‘A’ and ‘I’ have similar distributions, and this is reflected by their small dissimilarity value of 2.7. We can visualise the entire matrix using a heat map.
@caption=’Heatmap of the dissimilarity matrix for smoothed letter distributions using the $L^2$-norm’

# don't show a value for letters paired with themselves
dissim_matrix_na <- dissim_matrix
diag(dissim_matrix_na) <- NA

# create a heatmap using the gplots package
gplots::heatmap.2(dissim_matrix_na,
                  dendrogram = 'none',
                  Rowv = FALSE,
                  Colv = FALSE,
                  trace = 'none',
                  na.color = "white",
                  # reverse viridis scale
                  col = viridis::viridis_pal(direction = -1))

This closely matches our expectation. ‘E’ and ‘J’ have very different distributions and so have a dark cell with a dissimilarity value in the 30s whereas ‘Q’ and ‘W’ are very similar and so have a light cell with a value under 15.

Now that we have a form of distance metric between any two letter distributions, we can perform cluster analysis. Since we don’t have exact coordinates for the distributions in some general space (though we could formulate this using a method I will soon discuss in another blog post), we can’t use k-means clustering. We are instead forced to use an agglomerative method such as hierarchical clustering. We will use the complete linkage method of clustering. This is chosen by elimination more than anything else. We would like a monotone distance measure so that our resulting dendrogram has no inversions, so the centroid (UPGMC) and median (WPGMC) methods are out of the question. Furthermore, the single and average (UPGMA) linkage methods do not force enough similarity within clusters. Lastly, Ward’s method of minimum variance aims to find spherical clusters, which is an unreasonable assumption for our data. We therefore proceed with complete linkage to produce the following dendrogram.

clust <- hclust(as.dist(dissim_matrix), method = "complete")
plot(clust, xlab = "", sub = "")

This, again, matches the behaviour we would expect. The distributions that we previously noted were similar, such as those of ‘B’, ‘J’, ‘Q’, and ‘W’, are clustered very close together, whereas highly dissimilar distributions such as those of ‘E’ and ‘J’ only connect at the maximum height of 41.
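
As a quick sanity check, the merge heights can be read directly off the hclust object; the largest of these should be the roughly 41 seen at the top of the dendrogram, though the exact value will vary with the random sample of books downloaded.

# the final (and largest) merge height in the dendrogram
max(clust$height)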

From this dendrogram it appears that an appropriate cut would be somewhere between a height of 16 and 21. The exact choice would depend on the number of clusters we were after. I decided to cut at a height of 16, giving the following groupings.
@caption=’The probability distribution of English letters within a random word using LOESS smoothing and coloured by their cluster group’

groups <- cutree(clust, h = 16)
palette <- rainbow(length(unique(groups)))

par(mfrow = c(4,7), mar = c(rep(c(2,1), 2)))
for (l in LETTERS) {
  # position bottom row at centre
  if (l == "V") plot.new()
  plot(freq_mtrx[, l],
       main = l,
       xlab = "", ylab = "",
       xaxt = 'n', yaxt = 'n',
       type = "l", col = "#00000080")
  lines(lowess(freq_mtrx[, l], f = .2, iter = 10),
        col = palette[groups[l]], lwd = 2)
}

Attempting to categorise these leads to the following possible interpretations of the groupings:

  • The red letters are those which feature most prominently in the middle of words, very little at the end and occasionally at the beginning
  • The yellow letters are those that are common at the start of a word but become less likely the further towards the end you get
  • The green letters are most common at the start of words but barely feature in the middle. They are also reasonably prominent at the end of words (‘G’ being an outlier due to the common use of ‘ing’)
  • The blue letters feature often at the ends of words but less so at the beginning
  • The purple letters appear most often in the middle of the words
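
For reference, the exact membership of each coloured group can be listed directly from the cutree output; since groups is a named vector indexed by letter, one simple way is the following.

# list the letters belonging to each cluster group
split(names(groups), groups)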

These rules have clear exceptions. For example, ‘C’ could just as easily be categorised as a new colour, since it isn’t all that similar to the other green letters (this is evident from its late join to that branch of the dendrogram). In general though, this clustering does a promising job of grouping the letters by their function within words.
