Reproduce standard letter frequencies in the English language?

POSTED BY: Vitaliy Kaurov
9 Replies

There clearly are differences resulting from the precise corpus I choose. This website offers a great data resource for this kind of counting exercise and appears to give results very similar to those in the original post by Vitaliy. I downloaded the eng_news_2015_3M-words data file and get this:

wordlist = Import["/Users/thiel/Desktop/eng_news_2015_3M/eng_news_2015_3M-words.txt", "TSV"];

(* weight the letters of each word by that word's corpus count, then total per letter *)
allchars = {#[[1, 1]], Total[#[[All, 2]]]} & /@
   GatherBy[
    Flatten[
     Thread@{ToLowerCase[Characters[ToString[#[[1]]]]], #[[2]]} & /@
      Select[wordlist, Length[#] == 4 &][[All, {3, 4}]], 1], First];

and then

standardallchars = 
 Reverse@SortBy[Select[allchars, MemberQ[CharacterRange["a", "z"], #[[1]]] &], Last];
BarChart[standardallchars[[All, 2]], BarOrigin -> Bottom, 
 BaseStyle -> 15, ChartLabels -> standardallchars[[All, 1]], 
 AspectRatio -> 1, PlotTheme -> "Detailed"]

[Image: bar chart of letter frequencies computed from the eng_news_2015_3M word list]

They offer data for different English-speaking countries. It would be interesting to see the differences between them.
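For instance, one could wrap the pipeline above into a little function and run it on two of their word lists; a rough sketch (the second file name below is only a placeholder for whatever regional list one downloads):

letterCounts[file_String] := Module[{pairs},
  pairs = Select[Import[file, "TSV"], Length[#] == 4 &][[All, {3, 4}]];
  {#[[1, 1]], Total[#[[All, 2]]]} & /@
   GatherBy[
    Flatten[Thread@{ToLowerCase[Characters[ToString[#[[1]]]]], #[[2]]} & /@ pairs, 1],
    First]]

relFreq[counts_] := Module[{az},
  az = Select[counts, MemberQ[CharacterRange["a", "z"], #[[1]]] &];
  AssociationThread[az[[All, 1]], N[az[[All, 2]]/Total[az[[All, 2]]]]]]

f1 = relFreq@letterCounts["eng_news_2015_3M-words.txt"];
f2 = relFreq@letterCounts["eng-uk_web_2015_3M-words.txt"]; (* placeholder file name *)

(* per-letter difference between the two corpora, in the letter order of the first *)
BarChart[Values[f1] - Lookup[f2, Keys[f1], 0.], ChartLabels -> Keys[f1], BaseStyle -> 15]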

Cheers,

Marco

POSTED BY: Marco Thiel

This does not really contribute much, because I have not used a standard corpus, but ResourceSearch provides lots of texts. And it is really easy to use:

texts = Get /@ ResourceSearch["text"];   (* pull in all text resources found *)
letters = Flatten[Characters[texts]];
smallletters = ToLowerCase[letters];
letterfreqs = SortBy[Tally[smallletters], Last];
standardchars = Reverse@Select[letterfreqs, MemberQ[CharacterRange["a", "z"], #[[1]]] &];
BarChart[standardchars[[All, 2]], BarOrigin -> Bottom, BaseStyle -> 15,
 ChartLabels -> standardchars[[All, 1]], AspectRatio -> 1, PlotTheme -> "Detailed"]

[Image: bar chart of letter frequencies tallied over the ResourceSearch texts]

There are more than 44 million characters. I used minimal thinking and cleaning...
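Just as a quick sanity check on the variables defined above:

Length[smallletters]   (* total number of characters gathered *)
Total[Select[letterfreqs, MemberQ[CharacterRange["a", "z"], #[[1]]] &][[All, 2]]]   (* counting a-z only *)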

Cheers,

M.

PS: Should we try using the British National Corpus?

POSTED BY: Marco Thiel

This is wonderful, @Marco Thiel, thank you. You and @Anton Antonov completely convinced me that the standard frequencies result from a tally of a text corpus and not from dictionary entries. I am certain the British National Corpus will give the same result.
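For reference, the unweighted dictionary-only tally I mean is just this (a minimal sketch over DictionaryLookup[]):

dictChars = ToLowerCase[Flatten[Characters /@ DictionaryLookup[]]];
dictFreqs = Reverse@SortBy[
   Select[Tally[dictChars], MemberQ[CharacterRange["a", "z"], #[[1]]] &], Last];
BarChart[dictFreqs[[All, 2]], ChartLabels -> dictFreqs[[All, 1]], BaseStyle -> 15]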

POSTED BY: Vitaliy Kaurov

Hi @Vitaliy Kaurov,

my main issue is that there should actually be some information relating to this in the Wolfram Language. My first idea was to use WordFrequencyData as weights for the dictionary words - being aware that this would still skew the letter frequencies because of grammatical effects such as more "s" from plural forms and more "-ing" endings, for example. The code would be very simple:

words = DictionaryLookup[];
AbsoluteTiming[
 wordfreq = Select[Transpose[{words, Normal[WordFrequencyData[words]][[All, 2]]}],
    NumberQ[#[[2]]] &];]
letterfreq = {#[[1, 1]], Total[#[[All, 2]]]} & /@
  GatherBy[
   Flatten[Thread@{ToLowerCase[Characters[#[[1]]]], #[[2]]} & /@
     Select[wordfreq, NumberQ[#[[-1]]] &], 1], First]

The problem is that this doesn't work for me. It appears that it doesn't like so many word frequencies being queried at once. If I only run this for, say, the first 100 words, it appears to work just fine:

wordfreq = {#, WordFrequencyData[#]} & /@ words[[1 ;; 100]];
letterfreq = {#[[1, 1]], Total[#[[All, 2]]]} & /@
  GatherBy[
   Flatten[Thread@{ToLowerCase[Characters[#[[1]]]], #[[2]]} & /@
     Select[wordfreq, NumberQ[#[[-1]]] &], 1], First];

It also works with the slightly modified:

AbsoluteTiming[
wordfreq = Select[Transpose[{words[[1 ;; 100]], Normal[WordFrequencyData[words[[1 ;; 100]]]][[All, 2]]}], NumberQ[#[[2]]] &];]

1000 also seems to work; 10000 does not. I am not quite sure whether it simply times out. Perhaps you could go ahead and run this with some magic internal account and check what you get?
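Just for the record, this is the kind of chunked workaround I have in mind (a rough sketch; the chunk size of 100 is only a guess at what stays below whatever limit is being hit):

(* query WordFrequencyData in chunks of 100 words and collect {word, frequency} pairs *)
wordfreq = Join @@ (List @@@ Normal[WordFrequencyData[#]] & /@ Partition[words, UpTo[100]]);
wordfreq = Select[wordfreq, NumberQ[#[[2]]] &];   (* drop words with missing frequencies *)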

Cheers,

Marco

POSTED BY: Marco Thiel

@Marco, true, some data calls can be slow. But in general, for the cases that work, a fresh first call should not be a Map (= many calls, slower) but should pass the whole list as a single argument (= single call, faster). On a repeated evaluation with cached data, the timings can behave differently. Compare below. I doubt, though, that this information helps with running the whole dictionary, or that I will be able to do it. To tell you more I'd have to dig around a bit.

Fresh first call

AbsoluteTiming[WordFrequencyData[words[[101 ;; 200]]];]
{20.195363`, Null}
AbsoluteTiming[WordFrequencyData[#] & /@ words[[201 ;; 300]];]
{62.140271`, Null}

Repeated cached call

AbsoluteTiming[WordFrequencyData[words[[101 ;; 200]]];]
{6.006477`, Null}
AbsoluteTiming[WordFrequencyData[#] & /@ words[[201 ;; 300]];]
{1.912346`, Null}
POSTED BY: Vitaliy Kaurov

Dear Vitaliy,

I know that the single list call did not work and the mapped one did work, at least up to some small-ish number of words. In other words:

wordfreq = {#, WordFrequencyData[#]} & /@ words[[1 ;; 100]];

works and

wordfreq = Select[Transpose[{words, Normal[WordFrequencyData[words]][[All, 2]]}], NumberQ[#[[2]]] &];

appears to time out because of

WordFrequencyData[words]

I can cut everything into tiny pieces and do it 100 at a time, and it works OK, but it is very slow. If I call it on the entire list of words it always times out. I can do something like

WordFrequencyData[words[[1 ;; 100]]]

but it kind of defeats the purpose. I am just running an alternative approach to see whether I can get around the problem. The main idea is to use frequency data from a corpus. I guess I am still making a mistake interpreting the data (there is one bit I don't understand yet), but this is the outline of the procedure. I use frequency data from the British National Corpus from this website. The data set is this one. It also contains information about the frequency of different grammatical variations of the words. If I import the file

wordfreqsBNC = Import["/Users/thiel/Desktop/1_1_all_fullalpha.txt", "TSV"];

and then clean it up a wee bit:

wordfreqsBNC = {If[#[[2]] != "@", #[[2]], #[[4]]], #[[6]], #[[7]]} & /@ wordfreqsBNC;

it looks like this:

wordfreqsBNC[[-27502 ;; -27490]] // TableForm

[Image: TableForm excerpt of a few rows of wordfreqsBNC]

I believe that the second column is a sort of occurrence count. If that is right, this should do the trick:

wordfreqsBNCinDictionary =
  Select[wordfreqsBNC,
   (DictionaryWordQ[ToString[#[[1]]]] &&
      ! StringContainsQ[ToString[#[[1]]], CharacterRange["0", "9"]]) &];

allcharacters = {#[[1, 1]], Total[#[[All, 2]]]} & /@
   GatherBy[
    Flatten[Thread@{ToLowerCase[Characters[#[[1]]]], #[[2]]} & /@
      wordfreqsBNCinDictionary[[All, {1, 2}]], 1], First];

standardchars =
 Reverse@SortBy[
   Select[allcharacters, MemberQ[CharacterRange["a", "z"], #[[1]]] &], Last]

This gives:

[Image: the resulting list standardchars of letter/count pairs]

which is obviously very far away from what we expect according to your first post and the analysis of a larger number of texts...

BarChart[standardchars[[All, 2]], BarOrigin -> Bottom, 
 BaseStyle -> 15, ChartLabels -> standardchars[[All, 1]], 
 AspectRatio -> 1, PlotTheme -> "Detailed"]

[Image: bar chart of letter frequencies from the BNC frequency list]

There is a little problem with how I use the data. I think that the list gives one number of occurrences for the "main entry" and then other numbers for the different grammatical forms of the words. I might be doing some double counting here. I'll try to sort this out.
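If the lemma rows turn out to already include the counts of their variant forms (that is only my guess at the moment), one way to avoid the double counting would be to drop the "@" rows right after importing and rerun the pipeline on the result:

(* sketch, assuming rows with "@" in column 2 are variant forms whose counts
   are already contained in the preceding lemma row *)
rawBNC = Import["/Users/thiel/Desktop/1_1_all_fullalpha.txt", "TSV"];
lemmaOnly = {#[[2]], #[[6]], #[[7]]} & /@ Select[rawBNC, #[[2]] =!= "@" &];
(* then rerun the GatherBy pipeline above on lemmaOnly *)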

Cheers,

Marco

POSTED BY: Marco Thiel
POSTED BY: Anton Antonov

Thank you very much, Anton! The thought that they use a regular text corpus crossed my mind, but the Wikipedia article says:

Analysis of entries in the Concise Oxford dictionary is published by the compilers.

This is a bit confusing. I have not been able to find an exact definition of the procedure and data used for this standard letter-frequency sequence. If anyone knows, please comment.

POSTED BY: Vitaliy Kaurov
POSTED BY: Anton Antonov