This Thursday, June 7 2011, will be the first meeting of the Bay Area NLP group, at Chomp HQ in San Francisco, where I will be giving a talk on NLTK titled "NLTK: the Good, the Bad, and the Awesome". I'll be sharing some of the things I've learned using NLTK, operating text-processing.com, and doing random consulting on natural language processing. I'll also explain why NLTK-Trainer exists and how awesome it is for training NLP models. So if you're in the area and have some time Thursday evening, come by and say hi.
Update on 07/10/2011: slides are online from my talk: NLTK: the Good, the Bad, and the Awesome.
I recently did an interview with Zoltan Varju (@zoltanvarju) about Python, NLTK, and my demos & APIs at text-processing.com, which you can read here. There's even a bit about Erlang & functional programming, as well as some insight into what I've been working on at Weotta. And last week, the text-processing.com API got a write up (and a nice traffic boost) from Garrett Wilkin (@garrettwilkin) on programmableweb.com.
NLTK Trainer includes 2 scripts for analyzing both a tagged corpus and the coverage of a part-of-speech tagger.
Analyze a Tagged Corpus
You can get part-of-speech tag statistics on a tagged corpus using
analyze_tagged_corpus.py. Here's the tag counts for the treebank corpus:
$ python analyze_tagged_corpus.py treebank
loading nltk.corpus.treebank
100676 total words
12408 unique words
46 tags

Tag      Count
=======  =========
#        16
$        724
''       694
,        4886
-LRB-    120
-NONE-   6592
-RRB-    126
.        3874
:        563
CC       2265
CD       3546
DT       8165
EX       88
FW       4
IN       9857
JJ       5834
JJR      381
JJS      182
LS       13
MD       927
NN       13166
NNP      9410
NNPS     244
NNS      6047
PDT      27
POS      824
PRP      1716
PRP$     766
RB       2822
RBR      136
RBS      35
RP       216
SYM      1
TO       2179
UH       3
VB       2554
VBD      3043
VBG      1460
VBN      2134
VBP      1321
VBZ      2125
WDT      445
WP       241
WP$      14
WRB      178
``       712
=======  =========
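Under the hood, these statistics are just frequency counts over (word, tag) pairs. Here's a rough stdlib-only sketch of the idea (the tag_counts helper and toy corpus below are illustrative, not code from the script):

```python
from collections import Counter

# A tiny hand-built "tagged corpus": a list of tagged sentences,
# each a list of (word, tag) pairs -- the same shape that
# nltk.corpus.treebank.tagged_sents() returns.
tagged_sents = [
    [("The", "DT"), ("cat", "NN"), ("sat", "VBD"), (".", ".")],
    [("A", "DT"), ("dog", "NN"), ("barked", "VBD"), (".", ".")],
]

def tag_counts(sents):
    """Count total words, unique words, and per-tag frequencies."""
    words = [w for sent in sents for (w, t) in sent]
    tags = Counter(t for sent in sents for (w, t) in sent)
    return len(words), len(set(words)), tags

total, unique, tags = tag_counts(tagged_sents)
print(total, unique)  # 8 total words, 7 unique words
print(sorted(tags.items()))
```

Running the real script on treebank does the same thing at scale, just with nicer table output.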
analyze_tagged_corpus.py sorts by tags, but you can sort by the highest count using
--sort count --reverse. You can also see counts for simplified tags using
$ python analyze_tagged_corpus.py treebank --simplify_tags
loading nltk.corpus.treebank
100676 total words
12408 unique words
31 tags

Tag      Count
=======  =========
         7416
#        16
$        724
''       694
(        120
)        126
,        4886
.        3874
:        563
ADJ      6397
ADV      2993
CNJ      2265
DET      8192
EX       88
FW       4
L        13
MOD      927
N        19213
NP       9654
NUM      3546
P        9857
PRO      2698
S        1
TO       2179
UH       3
V        6000
VD       3043
VG       1460
VN       2134
WH       878
``       712
=======  =========
Analyze Tagger Coverage
You can analyze the coverage of a part-of-speech tagger against any corpus using
analyze_tagger_coverage.py. Here's the results for the treebank corpus using NLTK's default part-of-speech tagger:
$ python analyze_tagger_coverage.py treebank
loading tagger taggers/maxent_treebank_pos_tagger/english.pickle
analyzing tag coverage of treebank with ClassifierBasedPOSTagger

Tag      Found
=======  =========
#        16
$        724
''       694
,        4887
-LRB-    120
-NONE-   6591
-RRB-    126
.        3874
:        563
CC       2271
CD       3547
DT       8170
EX       88
FW       4
IN       9880
JJ       5803
JJR      386
JJS      185
LS       12
MD       927
NN       13166
NNP      9427
NNPS     246
NNS      6055
PDT      21
POS      824
PRP      1716
PRP$     766
RB       2800
RBR      130
RBS      33
RP       213
SYM      1
TO       2180
UH       3
VB       2562
VBD      3035
VBG      1458
VBN      2145
VBP      1318
VBZ      2124
WDT      440
WP       241
WP$      14
WRB      178
``       712
=======  =========
If you want to analyze the coverage of your own pickled tagger, use
--tagger PATH/TO/TAGGER.pickle. You can also get detailed metrics on Found vs Actual counts, as well as Precision and Recall for each tag by using the
--metrics argument with a corpus that provides a
tagged_sents method, like treebank:
$ python analyze_tagger_coverage.py treebank --metrics
loading tagger taggers/maxent_treebank_pos_tagger/english.pickle
analyzing tag coverage of treebank with ClassifierBasedPOSTagger

Accuracy: 0.995689
Unknown words: 440

Tag      Found      Actual     Precision      Recall
=======  =========  =========  =============  ==========
#        16         16         1.0            1.0
$        724        724        1.0            1.0
''       694        694        1.0            1.0
,        4887       4886       1.0            1.0
-LRB-    120        120        1.0            1.0
-NONE-   6591       6592       1.0            1.0
-RRB-    126        126        1.0            1.0
.        3874       3874       1.0            1.0
:        563        563        1.0            1.0
CC       2271       2265       1.0            1.0
CD       3547       3546       0.99895833333  0.99895833333
DT       8170       8165       1.0            1.0
EX       88         88         1.0            1.0
FW       4          4          1.0            1.0
IN       9880       9857       0.99130434782  0.95798319327
JJ       5803       5834       0.99134948096  0.97892938496
JJR      386        381        1.0            0.91489361702
JJS      185        182        0.96666666666  1.0
LS       12         13         1.0            0.85714285714
MD       927        927        1.0            1.0
NN       13166      13166      0.99166034874  0.98791540785
NNP      9427       9410       0.99477911646  0.99398073836
NNPS     246        244        0.99029126213  0.95327102803
NNS      6055       6047       0.99515235457  0.99722414989
PDT      21         27         1.0            0.66666666666
POS      824        824        1.0            1.0
PRP      1716       1716       1.0            1.0
PRP$     766        766       1.0            1.0
RB       2800       2822       0.99305555555  0.975
RBR      130        136        1.0            0.875
RBS      33         35         1.0            0.5
RP       213        216        1.0            1.0
SYM      1          1          1.0            1.0
TO       2180       2179       1.0            1.0
UH       3          3          1.0            1.0
VB       2562       2554       0.99142857142  1.0
VBD      3035       3043       0.990234375    0.98065764023
VBG      1458       1460       0.99650349650  0.99824868651
VBN      2145       2134       0.98852223816  0.99566473988
VBP      1318       1321       0.99305555555  0.98281786941
VBZ      2124       2125       0.99373040752  0.990625
WDT      440        445        1.0            0.83333333333
WP       241        241        1.0            1.0
WP$      14         14         1.0            1.0
WRB      178        178        1.0            1.0
``       712        712        1.0            1.0
=======  =========  =========  =============  ==========
These additional metrics can be quite useful for identifying which tags a tagger has trouble with. Precision answers the question "for each word that was given this tag, was it correct?", while Recall answers the question "for all words that should have gotten this tag, did they get it?". If you look at
PDT, you can see that Precision is 100%, but Recall is 66%, meaning that every word that was given the
PDT tag was correct, but 6 out of the 27 words that should have gotten
PDT were mistakenly given a different tag. Or if you look at
JJS, you can see that Precision is 96.6% because it gave
JJS to 3 words that should have gotten a different tag, while Recall is 100% because all words that should have gotten
JJS got it.
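To make these definitions concrete, here's a minimal sketch of computing per-tag Precision and Recall from parallel lists of actual and predicted tags (illustrative code, not what analyze_tagger_coverage.py actually runs):

```python
from collections import Counter

def per_tag_metrics(actual_tags, predicted_tags):
    """Compute (precision, recall) per tag from two parallel tag lists."""
    tp = Counter()  # tag predicted and correct
    fp = Counter()  # tag predicted but wrong (false positive)
    fn = Counter()  # tag missed (false negative)
    for a, p in zip(actual_tags, predicted_tags):
        if a == p:
            tp[a] += 1
        else:
            fp[p] += 1
            fn[a] += 1
    metrics = {}
    for tag in set(actual_tags) | set(predicted_tags):
        prec = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
        rec = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
        metrics[tag] = (prec, rec)
    return metrics

# A PDT-style situation: every predicted X is right (precision 1.0),
# but one actual X was tagged Y instead (recall < 1.0).
actual    = ["X", "X", "X", "Y", "Y"]
predicted = ["X", "X", "Y", "Y", "Y"]
print(per_tag_metrics(actual, predicted))
```

In the toy example, X gets precision 1.0 but recall 2/3, exactly the false-negative pattern described above.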
NLTK-Trainer makes it easy to train part-of-speech taggers with various algorithms using train_tagger.py.
Training Sequential Backoff Taggers
The fastest algorithms are the sequential backoff taggers. You can specify the backoff sequence using the
--sequential argument, which accepts any combination of the following letters:

a: AffixTagger
u: UnigramTagger
b: BigramTagger
t: TrigramTagger
For example, to train the same kinds of taggers that were used in Part of Speech Tagging with NLTK Part 1 - Ngram Taggers, you could do the following:
python train_tagger.py treebank --sequential ubt
You can rearrange
ubt any way you want to change the order of the taggers (though
ubt is generally the most accurate order).
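The backoff idea itself is simple: each tagger in the chain tags the words it knows and defers everything else to the next tagger. Here's a toy illustration of that flow (DictTagger is a made-up class for this sketch, not NLTK's API):

```python
class DictTagger:
    """Tag words found in a lookup table; defer unknown words to a backoff tagger."""
    def __init__(self, table, backoff=None):
        self.table = table
        self.backoff = backoff

    def tag_word(self, word):
        if word in self.table:
            return self.table[word]
        if self.backoff is not None:
            return self.backoff.tag_word(word)
        return None  # no tagger in the chain knew this word

# Chain: the most specific tagger runs first and falls back to more general ones.
default = DictTagger({}, backoff=None)
general = DictTagger({"the": "DT", "cat": "NN"}, backoff=default)
specific = DictTagger({"cat": "NNP"}, backoff=general)

print([specific.tag_word(w) for w in ("the", "cat", "dog")])  # ['DT', 'NNP', None]
```

NLTK's sequential taggers work the same way, except each one is trained on a corpus instead of a hand-built table.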
Training Affix Taggers
The --sequential argument also recognizes the letter
a, which will insert an AffixTagger into the backoff chain. If you do not specify the
--affix argument, then it will include one AffixTagger with a 3-character suffix. However, you can change this by specifying one or more
--affix N options, where
N should be a positive number for prefixes, and a negative number for suffixes. For example, to train an
aubt tagger with 2 AffixTaggers, one that uses a 3 character suffix, and another that uses a 2 character prefix, specify the
--affix argument twice:
python train_tagger.py treebank --sequential aubt --affix -3 --affix 2
The order of the
--affix arguments is the order in which each AffixTagger will be trained and inserted into the backoff chain.
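For illustration, here's roughly what an AffixTagger keys on for a given --affix N value (the affix helper below is hypothetical, written for this sketch rather than taken from nltk-trainer):

```python
def affix(word, n):
    """Return the affix an AffixTagger configured with --affix n would key on:
    positive n -> n-character prefix, negative n -> |n|-character suffix.
    Words shorter than the affix yield None (no affix to learn from)."""
    if n > 0:
        return word[:n] if len(word) >= n else None
    return word[n:] if len(word) >= -n else None

print(affix("running", -3))  # 'ing' (3-character suffix)
print(affix("running", 2))   # 'ru' (2-character prefix)
```

An AffixTagger trained with --affix -3 learns tag frequencies for suffixes like "ing", which is why suffixes are usually the more informative choice for English.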
Training Brill Taggers
You can train a BrillTagger on top of a sequential backoff tagger by using the --brill argument:

python train_tagger.py treebank --sequential aubt --brill
The default training options are a maximum of 200 rules with a minimum score of 2, but you can change these with the --max_rules and --min_score arguments. You can also change the rule template bounds, which default to 1, using the --template_bounds argument.
Training Classifier Based Taggers
Many of the arguments used by train_classifier.py can also be used to train a ClassifierBasedPOSTagger. If you don't want this tagger to backoff to a sequential backoff tagger, be sure to specify
--sequential ''. Here's an example for training a NaiveBayesClassifier based tagger, similar to what was shown in Part of Speech Tagging Part 4 - Classifier Taggers:
python train_tagger.py treebank --sequential '' --classifier NaiveBayes
If you do want to backoff to a sequential tagger, be sure to specify a cutoff probability, like so:
python train_tagger.py treebank --sequential ubt --classifier NaiveBayes --cutoff_prob 0.4
Any of the NLTK classification algorithms can be used for the --classifier argument, such as NaiveBayes, DecisionTree, Maxent, or MEGAM, and every algorithm other than NaiveBayes has specific training options that can be customized.
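To illustrate what the cutoff probability does: if the classifier's best guess falls below the cutoff, the backoff tagger's answer is used instead. A toy sketch of that decision (not the ClassifierBasedPOSTagger internals):

```python
def classify_with_cutoff(prob_dist, cutoff, backoff_tag):
    """Pick the most likely tag from a {tag: probability} dict,
    but defer to the backoff tagger's tag when confidence is too low."""
    best_tag = max(prob_dist, key=prob_dist.get)
    if prob_dist[best_tag] < cutoff:
        return backoff_tag
    return best_tag

print(classify_with_cutoff({"NN": 0.9, "VB": 0.1}, 0.4, "DT"))  # 'NN' (confident)
print(classify_with_cutoff({"NN": 0.3, "VB": 0.3}, 0.4, "DT"))  # 'DT' (falls back)
```

This is why --cutoff_prob only matters when you also specify a sequential backoff chain: without one, there is nothing to fall back to.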
Phonetic Feature Options
You can also include phonetic algorithm features using the following arguments:
|--metaphone|Use metaphone feature|
|--double-metaphone|Use double metaphone feature|
|--soundex|Use soundex feature|
|--nysiis|Use NYSIIS feature|
|--caverphone|Use caverphone feature|
These options create phonetic codes that will be included as features along with the default features used by the ClassifierBasedPOSTagger. The
--double-metaphone algorithm comes from metaphone.py, while all the other phonetic algorithms have been copied from the advas project (which appears to be abandoned).
I created these options after discussions with Michael D Healy about Twitter Linguistics, in which he explained the prevalence of regional spelling variations. These phonetic features may be able to reduce that variation where a tagger is concerned, as slightly different spellings might generate the same phonetic code.
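Conceptually, each phonetic option just adds another key to the feature dictionary handed to the classifier, so spelling variants can end up sharing a feature value. A hypothetical sketch (phonetic_features and fake_code are illustrative stand-ins, not nltk_trainer code):

```python
def phonetic_features(word, metaphone_fn=None, soundex_fn=None):
    """Build a feature dict for a word; phonetic codes are added
    only for the algorithms that were enabled."""
    features = {"word": word, "suffix3": word[-3:]}
    if metaphone_fn is not None:
        features["metaphone"] = metaphone_fn(word)
    if soundex_fn is not None:
        features["soundex"] = soundex_fn(word)
    return features

# With a stand-in "phonetic" function, two regional spelling variants
# still produce the same phonetic feature value.
fake_code = lambda w: w.replace("ou", "o").upper()
print(phonetic_features("color", metaphone_fn=fake_code))
print(phonetic_features("colour", metaphone_fn=fake_code))
```

Both variants map to the same "metaphone" value here, which is exactly the kind of variation-smoothing described above.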
A tagger trained with any of these phonetic features will be an instance of
nltk_trainer.tagging.taggers.PhoneticClassifierBasedPOSTagger, which means
nltk_trainer must be included in your
PYTHONPATH in order to load & use the tagger. The simplest way to do this is to install nltk-trainer using
python setup.py install.
Microsoft/Bing recently introduced its Speller Challenge, and I immediately thought about using my spelling replacer code from Chapter 2, Replacing and Correcting Words, in Python Text Processing with NLTK Cookbook. The API is now online, and can be accessed by doing a
GET request to
http://text-processing.com/api/spellcorrect/?runID=replacers&q=WORD. With an Expected F1 of ~0.5, I'm currently at number 12 on the Leaderboard, though I don't expect that position to last long (I was at 10 when I first wrote this). I'm actually quite surprised the score is as high as it is, considering the simplicity / lack of sophistication: it means there's merit in replacing repeating characters, and/or that Enchant generally gives decent spelling suggestions when constrained by edit distance. Here's an outline of the code, which should make sense if you're familiar with the
replacers module from Replacing and Correcting Words in Python Text Processing with NLTK Cookbook:
repeat_replacer = RepeatReplacer()
spelling_replacer = SpellingReplacer()

def replacer_suggest(word):
    suggest = repeat_replacer.replace(word)
    if suggest == word:
        suggest = spelling_replacer.replace(word)
    return [(suggest, 1.0)]
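The SpellingReplacer constrains Enchant's suggestions by edit distance; for reference, here's a standard dynamic-programming Levenshtein distance (a generic sketch, not the book's code, which relies on Enchant's own utilities):

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance:
    minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

print(edit_distance("cookbok", "cookbook"))  # 1
```

Capping suggestions at a small edit distance (say, 2) is what keeps a spelling replacer from swapping in a wildly different word.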
It has come to my attention that there are two errors in Chapter 2, Replacing and Correcting Words of Python Text Processing with NLTK Cookbook. My thanks to the reader who went out of their way to verify my mistakes and send in corrections.
In Lemmatizing words with WordNet, on page 29, under How it works..., I said that "cooking" is not a noun and does not have a lemma. In fact, cooking is a noun, and as such is its own lemma. Of course, "cooking" is also a verb, and the verb form has the lemma "cook".
In Removing repeating characters, on page 35, under How it works..., I explained the
repeat_regexp match groups incorrectly. The actual match grouping of the word "looooove" is
(looo)(o)o(ve) because the pattern matching is greedy. The end result is still correct.
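You can see the greedy matching for yourself by running a version of the recipe's pattern (this sketch omits the book's WordNet check, so legitimate double letters in real words would also get collapsed):

```python
import re

# The repeating-character pattern: a greedy (\w*), a single character,
# a backreference \2 that must match one duplicate of it, and the rest.
repeat_regexp = re.compile(r"(\w*)(\w)\2(\w*)")
repl = r"\1\2\3"

def remove_repeats(word):
    """Recursively strip one duplicated character per pass."""
    repl_word = repeat_regexp.sub(repl, word)
    if repl_word != word:
        return remove_repeats(repl_word)
    return repl_word

print(remove_repeats("looooove"))  # 'love'
```

Each pass drops exactly one duplicated character ("looooove" becomes "loooove", then "looove", and so on), which is why the recursion is needed even though each individual match looks odd.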
Following up on the previous post showing the tag coverage of the NLTK 2.0b9 default tagger on the treebank corpus, below are the same metrics applied to the conll2000 corpus, using the
analyze_tagger_coverage.py script from nltk-trainer.
NLTK Default Tagger Performance on CoNLL2000
The default tagger is 93.9% accurate on the conll2000 corpus, which is to be expected since both treebank and conll2000 are based on the Wall Street Journal. You can see all the metrics shown below for yourself by running
python analyze_tagger_coverage.py conll2000 --metrics. In many cases, the Precision and Recall metrics are significantly lower than 1, even when the Found and Actual counts are similar. This happens when words are given the wrong tag (creating false positives and false negatives) while the overall tag frequency remains about the same. The
CC tag is a great example of this: the Found count is only 3 higher than the Actual count, yet Precision is 68.75% and Recall is 73.33%. This tells us that the number of words that were mis-tagged as
CC, and the number of
CC words that were not given the
CC tag, are approximately equal, creating similar counts despite the false positives and false negatives.
Unknown Words in CoNLL2000
The conll2000 corpus has 0 words tagged with
-NONE-, yet the default tagger is unable to identify 50 unique words. Here's a sample: boiler-room, so-so, Coca-Cola, top-10, AC&R, F-16, I-880, R2-D2, mid-1992. For the most part, the unknown words are symbolic names, acronyms, or two separate words combined with a "-". You might think this can be solved with better tokenization, but for words like F-16 and I-880, tokenizing on the "-" would be incorrect.
Missing Symbols and Rare Tags
The default tagger apparently does not recognize parentheses or the
SYM tag, and has trouble with many of the more rare tags, such as
UH. These failures highlight the need for training a part-of-speech tagger (or any NLP object) on a corpus that is as similar as possible to the corpus you are analyzing. At the very least, your training corpus and testing corpus should share the same set of part-of-speech tags, and in similar proportion. Otherwise, mistakes will be made, such as not recognizing common symbols, or finding
-RRB- tags where they do not exist.
For some research I'm doing with Michael D. Healy, I need to measure part-of-speech tagger coverage and performance. To that end, I've added a new script to nltk-trainer:
analyze_tagger_coverage.py. This script will tag every sentence of a corpus and count how many times it produces each tag. If you also use the
--metrics option, and the corpus reader provides a
tagged_sents() method, then you can get detailed performance metrics by comparing the tagger's results against the actual tags.
NLTK Default Tagger Performance on Treebank
Below is a table showing the performance details of the NLTK 2.0b9 default tagger on the treebank corpus, which you can see for yourself by running
python analyze_tagger_coverage.py treebank --metrics. The default tagger is 99.57% accurate on treebank, and below you can see exactly on which tags it fails. The Found column shows the number of occurrences of each tag produced by the default tagger, while the Actual column shows the actual number of occurrences in the treebank corpus. Precision and Recall, which I've explained in the context of classification, show the performance for each tag. If the Precision is less than 1, that means the tagger gave the tag to a word that it shouldn't have (a false positive). If the Recall is less than 1, it means the tagger did not give the tag to a word that it should have (a false negative).
Unknown Words in Treebank
Surprisingly, the treebank corpus contains 6592 words tagged with
-NONE-. But it's not that bad, since there are only 440 unique such words, and they are not regular words at all:
*-106, and many more similar-looking tokens.
If you've been considering buying Python Text Processing with NLTK 2.0 Cookbook, but haven't yet, below are a couple of reviews that may help convince you how awesome it is.
Jaganadh says in his review of Python Text Processing with NLTK Cookbook at Jaggu's World:
The eight chapter a revolutionary one which deals with Distributed data processing and handling large scale data with NLTK. (...) This chapter will be really helpful for industry people who is looking for to adopt NLTK in to NLP projects.
I give 9 out of 10 for the book. Natural Language Processing students, teachers, professional hurry and bag a copy of this book.
Sum-Wai says in his review of Python Text Processing with NLTK Cookbook at Tips Tank:
I like it where in each recipe, the author provides extra knowledge on the particular problem, like how a problem can be enhance and solve in another way, or what we need to do if the problem on hand changed, and some extra technical tips, which is very nice and useful.
If you’re thinking about the O’Reilly’s NLTK book – Natural Language Processing with Python, IMHO this book and the O’Reilly NLTK book complements each other. The O’Reilly NLTK book focuses more on getting you to know NLP and the features and usage of NLTK , while Python Text Processing with NLTK teaches us how we would implement NLP/NLTK with tools like MongoDB into solving real world problems.
And Neil Kodner, @neilkod, says:
I'm loving python text processing with nltk cookbook by @japerk, its an excellent companion to the O'Reilly NLTK book
Christmas is coming up, and who doesn't think about python text processing during the holidays?
If you want a reviewer copy to write your own review, contact Packt at email@example.com. And if you do write a review and want to let me know about it, leave a comment here, or contact me on twitter.
It all started with an email to the baypiggies mailing list. An acquisition editor for Packt was looking for authors to expand their line of python cookbooks. For some reason I can't remember, I thought they wanted to put together a multi-author cookbook, where each author contributes a few recipes. That sounded doable, because I'd already written a number of articles that could serve as the basis for a few recipes. So I replied with links to the following articles:
The reply back was:
The next step is to come up with around 8-14 topics/chapters and around 80-100 recipes for the book as a whole.
My first reaction was "WTF?? No way!" But luckily, I didn't send that email. Instead, I took a couple days to think it over, and realized that maybe I could come up with that many recipes, if I broke my knowledge down into small pieces. I also decided to choose recipes that I didn't already know how to write, and use them as motivation for learning & research. So I replied back with a list of 92 recipes, and got to work. Not surprisingly, the original list of 92 changed significantly while writing the book, and I believe the final recipe count is 81.
I was keenly aware that there'd be some necessary overlap with the original NLTK book, Natural Language Processing with Python. But I did my best to minimize that overlap, and to present a different take on similar content. And there's a number of recipes that (as far as I know) you can't find anywhere else, the largest group of which can be found in Chapter 6, Transforming Chunks and Trees. I'm very pleased with the result, and I hope everyone who buys the book is too. I'd like to think that Python Text Processing with NLTK 2.0 Cookbook is the practical companion to the more teaching oriented Natural Language Processing with Python.
If you'd like a taste of the book, checkout the online sample chapter (pdf) Chapter 3, Custom Corpora, which details how many of the included corpus readers work, how to use them, and how to create your own corpus readers. The last recipe shows you how to create a corpus reader on top of MongoDB, and it should be fairly easy to modify for use with any other database.
Packt has also published two excerpts from Chapter 8, Distributed Processing and Handling Large Datasets, which are partially based on those original 2 articles: