
Simple Key Phrase Extraction For URL Classification

I have text from which I need to pull out the main key phrases. I don't want to split the whole thing on whitespace and count single words; I am looking for 2-5 word phrases, not single keywords.

I tried Classify, FindClusters, RegularExpression, and many others. I think this is dead simple, but no amount of searching in the help docs and online has led me to an easy solution.

Basically I want to:

  1. Send queries to an API with a specific URL
  2. Import all links from that URL
  3. Sort by Commonest and pick the top 10
  4. Import the text from each URL separately
  5. Generate the most common 2-5 word combinations on the page
  6. Run Classify on the URLs to match the keyword samples

Expected Result:

{"phrase one" -> "URL 1", "phrase 1" -> "URL 1", "phrase 1 and more" -> "URL 1", "phrases two" -> "URL 2", "phrases two and more" -> "URL 2", "phrases 2" -> "URL 2"}

POSTED BY: David Johnston
16 Replies

Ah, yes, you can also use StringMatchQ. I didn't mention it because I thought you'd want to filter URLs by a substring (e.g., their domains).

Thanks

POSTED BY: Hernan Moraldo

OMG, I feel like such an idiot. The answer is StringMatchQ instead of StringFreeQ. lol

bigList = {"a", "item1", "item2", "item3", "item4", "item6", "item7", 
   "item8", "item9", "i5"};
excludedElements = {"item1", "item2", "item5", "item6", "i"};
Select[bigList, ! StringMatchQ[#, excludedElements] &]
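
With those definitions, the exact-match version should return something like:

{"a", "item3", "item4", "item7", "item8", "item9", "i5"}

(note that "i5" is now kept, since it does not exactly match the bare "i").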
POSTED BY: David Johnston

You can use more complex patterns to tell it where the string you want to exclude has to be. For example, if I just say "i", this code will exclude all elements that have an "i" anywhere within them:

In[7]:= bigList = {"a", "item1", "item2", "item3", "item4", "item6", 
   "item7", "item8", "item9", "i5"};
excludedElements = {"item1", "item2", "item5", "item6", "i"};
Select[bigList, StringFreeQ[#, excludedElements] &]

Out[9]= {"a"}

I can instead specify that elements need to be excluded only if the "i" is surrounded by word boundaries (like symbols, spaces, etc., but not including digits):

In[22]:= bigList = {"a", "item1", "item2", "item3", "item4", "item6", 
   "item7", "item8", "item9", "i.5"};
excludedElements = {"item1", "item2", "item5", "item6", 
   WordBoundary ~~ "i" ~~ WordBoundary};
Select[bigList, StringFreeQ[#, excludedElements] &]

Out[24]= {"a", "item3", "item4", "item7", "item8", "item9"}

(notice the new pattern helps get rid of "i.5" but not of "item3").

Such string patterns are very useful for this kind of thing; check http://reference.wolfram.com/language/tutorial/WorkingWithStringPatterns.html for some examples, and http://reference.wolfram.com/language/tutorial/RegularExpressions.html in case you want to use regular expressions as well (although string patterns are usually enough).
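
For instance, the WordBoundary filter above can also be written with a regular expression; this is the same filter as before, just using \b for the word boundaries:

Select[bigList, 
 StringFreeQ[#, {"item1", "item2", "item5", "item6", 
    RegularExpression["\\bi\\b"]}] &]

It should again drop "i.5" while keeping "item3" and the rest, since digits count as word characters in regular expressions too.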

POSTED BY: Hernan Moraldo

lol Is "markovClassifier.m" just a name you give to be able to retrieve it again later or is it a specific function?

It is a file name that is used to retrieve the data later.

POSTED BY: Hernan Moraldo

I ran into a problem trying to use this for another purpose. This actually excludes a list item if any part of the excluded string appears anywhere in it; it is not matching whole list items. For partial domain matching this works great. However, for just excluding a list of stopwords it does not work the way I need it to.

Example:

Excluding the letter "a" would exclude all words and phrases containing the letter "a".

How would you rewrite this if it were required to match exactly?

In[345]:= 
bigList = {"a", "item1", "item2", "item3", "item4", "item6", "item7", 
   "item8", "item9"};
excludedElements = {"item1", "item2", "item5", "item6"};
Select[bigList, StringFreeQ[#, excludedElements] &]

Out[347]= {"a", "item3", "item4", "item7", "item8", "item9"}
POSTED BY: David Johnston

Okay, here is what I got so far. Not working, of course. lol Is "markovClassifier.m" just a name you give so you can retrieve it again later, or is it a specific function? Basically, I am trying to test whether there is a classification already. I should probably name the file after the urlTarget, so that if I change that URL parameter it does a fresh classification, but if I use the same URL again it uses the saved version.

CloudDeploy[
 APIFunction[{"textSample" -> "String", "urlTarget" -> "URL"}, 
  checkURL;, "JSON" &];
 checkURL = {If[urlTarget != "", checkSample, "Empty URL Parameter"]};
 checkSample = {If[textSample != "", func, 
    "Empty Text Sample Parameter"]};
 func = {
   If[CloudEvaluate[Get["markovClassifier.m"]] != "", 
    c[textSample, "Probabilities"],
    excludedDomains = {"cart", "my-account", "pricing"};
    includedDomains = {urlTarget};
    linkList = 
     Commonest[Select[Select[Import[urlTarget, {"HTML", "Hyperlinks"}],
        StringFreeQ[#, excludedDomains] &], ! 
         StringFreeQ[#, includedDomains] &], 20];
    texts = Import /@ linkList;
    c = Classify[texts -> linkList];
    With[{c = Compress[c]}, 
     CloudEvaluate[Put[Uncompress[c], "markovClassifier.m"]]];
    Quiet[
     CloudDeploy[
      FormFunction[{"text" -> "String"}, 
       Get["markovClassifier.m"][#text] &], "myform"]];
    c[textSample, "Probabilities"]
    ]};
 ]
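
For comparison, here is just a sketch of how the pieces might fit inside a single APIFunction body. The per-URL file name (hashing urlTarget) is my own invention, urlTarget is typed as "String" rather than "URL" to keep the comparisons simple, and I am assuming FileExistsQ works on a CloudObject:

CloudDeploy[
 APIFunction[{"textSample" -> "String", "urlTarget" -> "String"},
  Module[{url = #urlTarget, sample = #textSample, file, c, linkList, texts},
    Which[
     url === "", "Empty URL Parameter",
     sample === "", "Empty Text Sample Parameter",
     True,
     (* one cached classifier per urlTarget *)
     file = "classifier-" <> IntegerString[Hash[url, "SHA256"], 36] <> ".m";
     If[FileExistsQ[CloudObject[file]],
      c = Get[CloudObject[file]],
      (* otherwise build it: keep on-domain links, import them, classify, then save *)
      linkList = Commonest[
        Select[Import[url, {"HTML", "Hyperlinks"}],
         Function[link,
          StringFreeQ[link, {"cart", "my-account", "pricing"}] &&
           ! StringFreeQ[link, url]]], 20];
      texts = Import /@ linkList;
      c = Classify[texts -> linkList];
      Put[c, CloudObject[file]]];
     c[sample, "Probabilities"]]] &,
  "JSON"]]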
POSTED BY: David Johnston

Regarding how to use it with CloudDeploy, you can do something like:

urls = {"http://en.wikipedia.org/wiki/Albert_Einstein", 
   "http://en.wikipedia.org/wiki/Niels_Bohr", 
   "http://en.wikipedia.org/wiki/Michael_Faraday", 
   "http://en.wikipedia.org/wiki/Stephen_Hawking", 
   "http://en.wikipedia.org/wiki/Paul_Dirac"};

texts = Import /@ urls;

c = Classify[texts -> urls]

With[{c = Compress[c]},
 CloudEvaluate[Put[Uncompress[c], "markovClassifier.m"]]
 ]

Quiet[CloudDeploy[
  FormFunction[{"text" -> "String"}, 
   Get["markovClassifier.m"][#text] &], "myform"]]

However, I am looking into why the last line throws a message if you don't use Quiet on it.
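
To pull the saved classifier back into a local session later, something along these lines should work, since the file was written with Put in the cloud:

c = CloudEvaluate[Get["markovClassifier.m"]];
c["Heisenberg recollected a conversation among young participants"]
(* should return the URL of the page the text most resembles *)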

POSTED BY: Hernan Moraldo

That was an awesome tip. Thanks! Here is what I built from your snippet.

urlTarget = "http://www.businesstexter.com"
excludedDomains = {"cart", "my-account", "pricing"};
includedDomains = {urlTarget};

Commonest[
 Select[Select[Import[urlTarget, {"HTML", "Hyperlinks"}], 
   StringFreeQ[#, excludedDomains] &], ! 
    StringFreeQ[#, includedDomains] &], 20]
POSTED BY: David Johnston

You are awesome! Playing with it now. :)

I didn't see the Put in this code, though. Is there any point in the sequence where it's best used, or should it be Put separately?

POSTED BY: David Johnston

I was unable to get the first example to work. I love the simplicity of the second example, but I need more fine-grained control. I got it to work just fine, but the accuracy of the results was lacking because the pages don't have enough text. I actually want to filter the phrase lists and also extrapolate on them with dictionary and synonym functions.

For some reason, I can't figure out how to filter the list of links before importing them, either. I just want links that contain the original target domain. Actually, I would like to follow the links 3 levels deep, create one large list of links, filter them, order them by how many times each appears, import the top X of them, and run the n-gram/Classify step on them.
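
Roughly what I have in mind is something like this untested sketch, where crawlLinks is just a placeholder helper name:

crawlLinks[url_String, domain_String] := Module[{links},
  links = Quiet[Import[url, {"HTML", "Hyperlinks"}]];
  If[! ListQ[links], links = {}];  (* failed imports contribute nothing *)
  Select[links, ! StringFreeQ[#, domain] &]]

domain = "businesstexter.com";
level1 = crawlLinks["http://www.businesstexter.com", domain];
level2 = Flatten[crawlLinks[#, domain] & /@ DeleteDuplicates[level1]];
level3 = Flatten[crawlLinks[#, domain] & /@ DeleteDuplicates[level2]];

topLinks = Commonest[Join[level1, level2, level3], 10]
(* these top links would then be imported and fed to Classify as before *)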

POSTED BY: David Johnston

This was amazingly helpful. Thank you very very much!

POSTED BY: David Johnston

You could use something like this to find the n-grams:

getNGrams[text_] := With[
  {words = StringSplit[text]},
  (* all contiguous 2- to 5-word windows, rejoined into phrases *)
  StringJoin[Riffle[#, " "]] & /@ 
   Flatten[Partition[words, #, 1] & /@ Range[2, 5], 1]
  ]
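
Combined with Commonest, that gives the most frequent 2-5 word phrases on a page, along these lines (a sketch; the exact phrases will depend on the page text):

text = Import["http://en.wikipedia.org/wiki/Paul_Dirac", "Plaintext"];
Commonest[getNGrams[ToLowerCase[text]], 10]  (* ten most common multi-word phrases *)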

And then you could use Classify using those n-grams as features.

However, this is really not necessary. Classify will classify texts using Markov models, which is pretty much the same as using the n-grams you wanted:

urls = {"http://en.wikipedia.org/wiki/Albert_Einstein", 
   "http://en.wikipedia.org/wiki/Niels_Bohr", 
   "http://en.wikipedia.org/wiki/Michael_Faraday", 
   "http://en.wikipedia.org/wiki/Stephen_Hawking", 
   "http://en.wikipedia.org/wiki/Paul_Dirac"};

texts = Import /@ urls;

c = Classify[texts -> urls]

Now I test it using a short text taken from the Dirac page:

In[21]:= c["Heisenberg recollected a conversation among young participants"]

Out[21]= "http://en.wikipedia.org/wiki/Paul_Dirac"
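
You can also ask the classifier for probabilities over all of the candidate URLs instead of just the best match:

c["Heisenberg recollected a conversation among young participants", "Probabilities"]
(* an Association of URL -> probability *)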
POSTED BY: Hernan Moraldo