NLTK 2.0.1, a.k.a. NLTK 2, was recently released, and what follows are my favorite changes, new features, and highlights from the ChangeLog.
The SVMClassifier adds support vector machine classification through SVMLight with PySVMLight. This is a much needed addition to the set of supported classification algorithms. But even more interesting...
The SklearnClassifier provides a general interface to text classification with scikit-learn. While scikit-learn is still pre-1.0, it is rapidly becoming one of the most popular machine learning toolkits, and provides more advanced feature extraction methods for classification.
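Here's a minimal sketch of what that might look like, assuming scikit-learn is installed; the training pairs below are made-up featuresets in the same (featureset, label) format NLTK's other classifiers use:

from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.svm import LinearSVC

# toy (featureset, label) pairs, just to show the interface
train_feats = [({'contains(great)': True}, 'pos'),
               ({'contains(awful)': True}, 'neg')]

classif = SklearnClassifier(LinearSVC())
classif.train(train_feats)
print(classif.classify({'contains(great)': True}))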
NLTK has moved development and hosting to GitHub, replacing Google Code and SVN. The primary motivation is to make new development easier, and already a Python 3 branch is under active development. I think this is great, since GitHub makes forking & pull requests quite easy, and it's become the de-facto "social coding" site.
Coinciding with the GitHub move, the documentation was updated to use Sphinx, the same documentation generator used by Python and many other projects. While I personally like Sphinx and reStructuredText (which I used to write this post), I'm not thrilled with the results. The new documentation structure and NLTK homepage seem much less approachable. While it works great if you know exactly what you're looking for, I worry that new/interested users will have a harder time getting started.
Since the 0.9.9 release, a number of new corpora and corpus readers have been added:
And here are a few final highlights:
- The HunposTagger, which wraps hunpos.
- The StanfordTagger plus 2 subclasses for NER and POS tagging with the Stanford POS Tagger.
- The SnowballStemmer, which supports 13 different languages. You can try it out at my online stemming demo.
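For example, here's the new SnowballStemmer in action (a quick sketch; the stemmer lives in the nltk.stem.snowball module):

from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')   # any of the 13 supported languages
print(stemmer.stem('running'))         # -> run
print(stemmer.stem('generously'))      # -> generous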
I think NLTK's ideal role is to be a standard interface between corpora and NLP algorithms. There are many different corpus formats, and every algorithm has its own data structure requirements, so providing common abstract interfaces to connect these together is very powerful. It allows you to test the same algorithm on disparate corpora, or try multiple algorithms on a single corpus. This is what NLTK already does best, and I hope that becomes even more true in the future.
NLTK Trainer includes 2 scripts for analyzing both a tagged corpus and the coverage of a part-of-speech tagger.
Analyze a Tagged Corpus
You can get part-of-speech tag statistics on a tagged corpus using analyze_tagged_corpus.py. Here are the tag counts for the treebank corpus:
$ python analyze_tagged_corpus.py treebank
loading nltk.corpus.treebank
100676 total words
12408 unique words
46 tags

  Tag      Count
=======  =========
#             16
$            724
''           694
,           4886
-LRB-        120
-NONE-      6592
-RRB-        126
.           3874
:            563
CC          2265
CD          3546
DT          8165
EX            88
FW             4
IN          9857
JJ          5834
JJR          381
JJS          182
LS            13
MD           927
NN         13166
NNP         9410
NNPS         244
NNS         6047
PDT           27
POS          824
PRP         1716
PRP$         766
RB          2822
RBR          136
RBS           35
RP           216
SYM            1
TO          2179
UH             3
VB          2554
VBD         3043
VBG         1460
VBN         2134
VBP         1321
VBZ         2125
WDT          445
WP           241
WP$           14
WRB          178
``           712
=======  =========
analyze_tagged_corpus.py sorts by tag, but you can sort by the highest count using --sort count --reverse. You can also see counts for simplified tags using the --simplify_tags option:
$ python analyze_tagged_corpus.py treebank --simplify_tags
loading nltk.corpus.treebank
100676 total words
12408 unique words
31 tags

  Tag      Count
=======  =========
            7416
#             16
$            724
''           694
(            120
)            126
,           4886
.           3874
:            563
ADJ         6397
ADV         2993
CNJ         2265
DET         8192
EX            88
FW             4
L             13
MOD          927
N          19213
NP          9654
NUM         3546
P           9857
PRO         2698
S              1
TO          2179
UH             3
V           6000
VD          3043
VG          1460
VN          2134
WH           878
``           712
=======  =========
Analyze Tagger Coverage
You can analyze the coverage of a part-of-speech tagger against any corpus using analyze_tagger_coverage.py. Here are the results for the treebank corpus using NLTK's default part-of-speech tagger:
$ python analyze_tagger_coverage.py treebank
loading tagger taggers/maxent_treebank_pos_tagger/english.pickle
analyzing tag coverage of treebank with ClassifierBasedPOSTagger

  Tag      Found
=======  =========
#             16
$            724
''           694
,           4887
-LRB-        120
-NONE-      6591
-RRB-        126
.           3874
:            563
CC          2271
CD          3547
DT          8170
EX            88
FW             4
IN          9880
JJ          5803
JJR          386
JJS          185
LS            12
MD           927
NN         13166
NNP         9427
NNPS         246
NNS         6055
PDT           21
POS          824
PRP         1716
PRP$         766
RB          2800
RBR          130
RBS           33
RP           213
SYM            1
TO          2180
UH             3
VB          2562
VBD         3035
VBG         1458
VBN         2145
VBP         1318
VBZ         2124
WDT          440
WP           241
WP$           14
WRB          178
``           712
=======  =========
If you want to analyze the coverage of your own pickled tagger, use --tagger PATH/TO/TAGGER.pickle. You can also get detailed metrics on Found vs Actual counts, as well as Precision and Recall for each tag, by using the --metrics argument with a corpus that provides a tagged_sents method, like treebank:
$ python analyze_tagger_coverage.py treebank --metrics
loading tagger taggers/maxent_treebank_pos_tagger/english.pickle
analyzing tag coverage of treebank with ClassifierBasedPOSTagger
Accuracy: 0.995689
Unknown words: 440

  Tag      Found     Actual    Precision      Recall
=======  =========  ========  =============  ==========
#             16        16    1.0            1.0
$            724       724    1.0            1.0
''           694       694    1.0            1.0
,           4887      4886    1.0            1.0
-LRB-        120       120    1.0            1.0
-NONE-      6591      6592    1.0            1.0
-RRB-        126       126    1.0            1.0
.           3874      3874    1.0            1.0
:            563       563    1.0            1.0
CC          2271      2265    1.0            1.0
CD          3547      3546    0.99895833333  0.99895833333
DT          8170      8165    1.0            1.0
EX            88        88    1.0            1.0
FW             4         4    1.0            1.0
IN          9880      9857    0.99130434782  0.95798319327
JJ          5803      5834    0.99134948096  0.97892938496
JJR          386       381    1.0            0.91489361702
JJS          185       182    0.96666666666  1.0
LS            12        13    1.0            0.85714285714
MD           927       927    1.0            1.0
NN         13166     13166    0.99166034874  0.98791540785
NNP         9427      9410    0.99477911646  0.99398073836
NNPS         246       244    0.99029126213  0.95327102803
NNS         6055      6047    0.99515235457  0.99722414989
PDT           21        27    1.0            0.66666666666
POS          824       824    1.0            1.0
PRP         1716      1716    1.0            1.0
PRP$         766       766    1.0            1.0
RB          2800      2822    0.99305555555  0.975
RBR          130       136    1.0            0.875
RBS           33        35    1.0            0.5
RP           213       216    1.0            1.0
SYM            1         1    1.0            1.0
TO          2180      2179    1.0            1.0
UH             3         3    1.0            1.0
VB          2562      2554    0.99142857142  1.0
VBD         3035      3043    0.990234375    0.98065764023
VBG         1458      1460    0.99650349650  0.99824868651
VBN         2145      2134    0.98852223816  0.99566473988
VBP         1318      1321    0.99305555555  0.98281786941
VBZ         2124      2125    0.99373040752  0.990625
WDT          440       445    1.0            0.83333333333
WP           241       241    1.0            1.0
WP$           14        14    1.0            1.0
WRB          178       178    1.0            1.0
``           712       712    1.0            1.0
=======  =========  ========  =============  ==========
These additional metrics can be quite useful for identifying which tags a tagger has trouble with. Precision answers the question "for each word that was given this tag, was it correct?", while Recall answers the question "for all words that should have gotten this tag, did they get it?". If you look at PDT, you can see that Precision is 100%, but Recall is 66%, meaning that every word that was given the PDT tag was correct, but 6 out of the 27 words that should have gotten PDT were mistakenly given a different tag. Or if you look at JJS, you can see that Precision is 96.6% because the tagger gave JJS to 3 words that should have gotten a different tag, while Recall is 100% because all words that should have gotten JJS got it.
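If you want to compute numbers like these yourself, here's a toy sketch of per-tag precision and recall with nltk.metrics; the tiny reference/test sets below are made up for illustration, while the real script builds them from a whole corpus:

from nltk.metrics import precision, recall

# each element is (token position, tag); reference holds the corpus tags,
# test holds the tagger's output for the same tokens
reference = set([(0, 'PDT'), (1, 'DT'), (2, 'PDT')])
test = set([(0, 'PDT'), (1, 'DT'), (2, 'NN')])

# restrict both sets to the tag we care about
ref_pdt = set(item for item in reference if item[1] == 'PDT')
test_pdt = set(item for item in test if item[1] == 'PDT')

print(precision(ref_pdt, test_pdt))  # 1.0 - every predicted PDT was correct
print(recall(ref_pdt, test_pdt))     # 0.5 - only half of the actual PDT words were found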
NLTK trainer makes it easy to train part-of-speech taggers with various algorithms using train_tagger.py.
Training Sequential Backoff Taggers
The fastest algorithms are the sequential backoff taggers. You can specify the backoff sequence using the --sequential argument, which accepts any combination of the following letters:

- a: AffixTagger
- u: UnigramTagger
- b: BigramTagger
- t: TrigramTagger
For example, to train the same kinds of taggers that were used in Part of Speech Tagging with NLTK Part 1 - Ngram Taggers, you could do the following:
python train_tagger.py treebank --sequential ubt
You can rearrange ubt any way you want to change the order of the taggers (though ubt is generally the most accurate order).
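For reference, here's roughly what a ubt backoff chain corresponds to in plain NLTK code; this is a sketch of the equivalent, not nltk-trainer's actual implementation:

from nltk.corpus import treebank
from nltk.tag import UnigramTagger, BigramTagger, TrigramTagger

train_sents = treebank.tagged_sents()

# u -> b -> t, each tagger backing off to the previous one
unigram = UnigramTagger(train_sents)
bigram = BigramTagger(train_sents, backoff=unigram)
trigram = TrigramTagger(train_sents, backoff=bigram)

print(trigram.tag(['And', 'now', 'for', 'something', 'completely', 'different']))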
Training Affix Taggers
The --sequential argument also recognizes the letter a, which will insert an AffixTagger into the backoff chain. If you do not specify the --affix argument, then it will include one AffixTagger with a 3-character suffix. However, you can change this by specifying one or more --affix N options, where N should be a positive number for prefixes, and a negative number for suffixes. For example, to train an aubt tagger with 2 AffixTaggers, one that uses a 3-character suffix and another that uses a 2-character prefix, specify the --affix argument twice:
python train_tagger.py treebank --sequential aubt --affix -3 --affix 2
The order of the --affix arguments is the order in which each AffixTagger will be trained and inserted into the backoff chain.
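As a rough sketch of what that affix setup means in plain NLTK terms (again, an assumed equivalent rather than nltk-trainer's actual code), the two AffixTaggers might be built like this, with a negative affix_length meaning a suffix and a positive one meaning a prefix:

from nltk.corpus import treebank
from nltk.tag import AffixTagger

train_sents = treebank.tagged_sents()

suffix3 = AffixTagger(train_sents, affix_length=-3)                  # 3-character suffix
prefix2 = AffixTagger(train_sents, affix_length=2, backoff=suffix3)  # 2-character prefix
# the u, b, and t taggers from the previous section would then be chained on top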
Training Brill Taggers
python train_tagger.py treebank --sequential aubt --brill
The default training options are a maximum of 200 rules with a minimum score of 2, but you can change these with the --max_rules and --min_score arguments. You can also change the rule template bounds, which default to 1.
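For comparison, here's a sketch of training a BrillTagger directly with NLTK 2's brill module; the templates below are commonly used examples and an assumption on my part, not necessarily the exact templates train_tagger.py uses:

from nltk.corpus import treebank
from nltk.tag import UnigramTagger, brill

train_sents = treebank.tagged_sents()
initial = UnigramTagger(train_sents)   # stand-in for the full aubt backoff chain

templates = [
    brill.SymmetricProximateTokensTemplate(brill.ProximateTagsRule, (1, 1)),
    brill.SymmetricProximateTokensTemplate(brill.ProximateTagsRule, (2, 2)),
    brill.SymmetricProximateTokensTemplate(brill.ProximateWordsRule, (1, 1)),
]

trainer = brill.FastBrillTaggerTrainer(initial, templates)
brill_tagger = trainer.train(train_sents, max_rules=200, min_score=2)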
Training Classifier Based Taggers
Many of the arguments used by train_classifier.py can also be used to train a ClassifierBasedPOSTagger. If you don't want this tagger to back off to a sequential backoff tagger, be sure to specify --sequential ''. Here's an example for training a NaiveBayesClassifier based tagger, similar to what was shown in Part of Speech Tagging Part 4 - Classifier Taggers:
python train_tagger.py treebank --sequential '' --classifier NaiveBayes
If you do want to backoff to a sequential tagger, be sure to specify a cutoff probability, like so:
python train_tagger.py treebank --sequential ubt --classifier NaiveBayes --cutoff_prob 0.4
Any of the NLTK classification algorithms can be used for the --classifier argument, such as MEGAM, and every algorithm other than NaiveBayes has specific training options that can be customized.
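In plain NLTK code, the two commands above correspond roughly to the following sketch (an assumed equivalent; the UnigramTagger stands in for the full ubt chain):

from nltk.corpus import treebank
from nltk.tag import UnigramTagger
from nltk.tag.sequential import ClassifierBasedPOSTagger

train_sents = treebank.tagged_sents()

# --sequential '' --classifier NaiveBayes: no backoff, default NaiveBayesClassifier
nb_tagger = ClassifierBasedPOSTagger(train=train_sents)

# --sequential ubt --classifier NaiveBayes --cutoff_prob 0.4: fall back to the
# sequential tagger whenever the classifier's confidence is below the cutoff
backoff = UnigramTagger(train_sents)
nb_backoff_tagger = ClassifierBasedPOSTagger(train=train_sents, backoff=backoff,
                                             cutoff_prob=0.4)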
Phonetic Feature Options
You can also include phonetic algorithm features using the following arguments:
- --metaphone: Use metaphone feature
- --double-metaphone: Use double metaphone feature
- --soundex: Use soundex feature
- --nysiis: Use NYSIIS feature
- --caverphone: Use caverphone feature
These options create phonetic codes that will be included as features along with the default features used by the ClassifierBasedPOSTagger. The --double-metaphone algorithm comes from metaphone.py, while all the other phonetic algorithms have been copied from the advas project (which appears to be abandoned).
I created these options after discussions with Michael D Healy about Twitter Linguistics, in which he explained the prevalence of regional spelling variations. These phonetic features may be able to reduce that variation where a tagger is concerned, as slightly different spellings might generate the same phonetic code.
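To make the idea concrete, here's a simplified soundex-style sketch (my own illustration, not the advas or metaphone.py code that nltk-trainer actually uses): regional spelling variants like "color" and "colour" collapse to the same code, so a feature built from that code treats them identically.

def soundex(word, length=4):
    # simplified American Soundex, for illustration only; assumes a non-empty word
    digits = {}
    for letters, digit in [('bfpv', '1'), ('cgjkqsxz', '2'), ('dt', '3'),
                           ('l', '4'), ('mn', '5'), ('r', '6')]:
        for letter in letters:
            digits[letter] = digit

    word = word.lower()
    encoded = word[0].upper()
    prev = digits.get(word[0], '')

    for char in word[1:]:
        digit = digits.get(char, '')
        if digit and digit != prev:
            encoded += digit
        if char not in 'hw':   # h and w do not separate duplicate codes
            prev = digit

    return (encoded + '000')[:length]

print('%s %s' % (soundex('color'), soundex('colour')))   # both produce C460
# a PhoneticClassifierBasedPOSTagger adds codes like this to each word's feature dict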
A tagger trained with any of these phonetic features will be an instance of nltk_trainer.tagging.taggers.PhoneticClassifierBasedPOSTagger, which means nltk_trainer must be included in your PYTHONPATH in order to load & use the tagger. The simplest way to do this is to install nltk-trainer using python setup.py install.
Following up on the previous post showing the tag coverage of the NLTK 2.0b9 default tagger on the treebank corpus, below are the same metrics applied to the conll2000 corpus, using the analyze_tagger_coverage.py script from nltk-trainer.
NLTK Default Tagger Performance on CoNLL2000
The default tagger is 93.9% accurate on the conll2000 corpus, which is to be expected since both treebank and conll2000 are based on the Wall Street Journal. You can see all the metrics shown below for yourself by running python analyze_tagger_coverage.py conll2000 --metrics. In many cases, the Precision and Recall metrics are significantly lower than 1, even when the Found and Actual counts are similar. This happens when words are given the wrong tag (creating false positives and false negatives) while the overall tag frequency remains about the same. The CC tag is a great example of this: the Found count is only 3 higher than the Actual count, yet Precision is 68.75% and Recall is 73.33%. This tells us that the number of words that were mis-tagged as CC, and the number of CC words that were not given the CC tag, are approximately equal, creating similar counts despite the false positives and false negatives.
Unknown Words in CoNLL2000
The conll2000 corpus has 0 words tagged with -NONE-, yet the default tagger is unable to identify 50 unique words. Here's a sample: boiler-room, so-so, Coca-Cola, top-10, AC&R, F-16, I-880, R2-D2, mid-1992. For the most part, the unknown words are symbolic names, acronyms, or two separate words combined with a "-". You might think this could be solved with better tokenization, but for words like F-16 and I-880, tokenizing on the "-" would be incorrect.
Missing Symbols and Rare Tags
The default tagger apparently does not recognize parentheses or the SYM tag, and has trouble with many of the rarer tags, such as UH. These failures highlight the need for training a part-of-speech tagger (or any NLP object) on a corpus that is as similar as possible to the corpus you are analyzing. At the very least, your training corpus and testing corpus should share the same set of part-of-speech tags, and in similar proportion. Otherwise, mistakes will be made, such as not recognizing common symbols, or finding -RRB- tags where they do not exist.
For some research I'm doing with Michael D. Healy, I need to measure part-of-speech tagger coverage and performance. To that end, I've added a new script to nltk-trainer: analyze_tagger_coverage.py. This script will tag every sentence of a corpus and count how many times it produces each tag. If you also use the --metrics option, and the corpus reader provides a tagged_sents() method, then you can get detailed performance metrics by comparing the tagger's results against the actual tags.
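The core of the idea is simple; here's a minimal sketch (not the script's actual code) of tagging every sentence and counting tags:

from collections import defaultdict
import nltk.data
from nltk.corpus import treebank

tagger = nltk.data.load('taggers/maxent_treebank_pos_tagger/english.pickle')
tag_counts = defaultdict(int)

for sent in treebank.sents():
    for word, tag in tagger.tag(sent):
        tag_counts[tag] += 1

for tag in sorted(tag_counts):
    print('%s\t%d' % (tag, tag_counts[tag]))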
NLTK Default Tagger Performance on Treebank
Below is a table showing the performance details of the NLTK 2.0b9 default tagger on the treebank corpus, which you can see for yourself by running python analyze_tagger_coverage.py treebank --metrics. The default tagger is 99.57% accurate on treebank, and below you can see exactly on which tags it fails. The Found column shows the number of occurrences of each tag produced by the default tagger, while the Actual column shows the actual number of occurrences in the treebank corpus. Precision and Recall, which I've explained in the context of classification, show the performance for each tag. If the Precision is less than 1, that means the tagger gave the tag to a word that it shouldn't have (a false positive). If the Recall is less than 1, it means the tagger did not give the tag to a word that it should have (a false negative).
Unknown Words in Treebank
Surprisingly, the treebank corpus contains 6592 words tagged with -NONE-. But it's not that bad, since it's only 440 unique words, and they are not regular words at all: *-106, and many more similar looking tokens.
If you liked the NLTK demos, then you'll love the text processing APIs. They provide all the functionality of the demos, plus a little bit more, and return results in JSON. Requests can contain up to 10,000 characters, instead of the 1,000 character limit on the demos, and you can do up to 100 calls per day. These limits may change in the future depending on usage & demand. If you'd like to do more, please fill out this survey to let me know what your needs are.
If you want to see what NLTK can do, but don't want to go through the effort of installation and learning how to use it, then check out my Python NLTK demos.
It currently demonstrates the following functionality:
- part-of-speech tagging with the default NLTK pos tagger
- chunking and named entity recognition with the default NLTK chunker
- sentiment analysis with a combination of a naive bayes classifier and a maximum entropy classifier, both trained on the movie reviews corpus
If you like it, please share it. If you want to see more, leave a comment below. And if you are interested in a service that could apply these processes to your own data, please fill out this NLTK services survey.
Other Natural Language Processing Demos
Here's a list of similar resources on the web:
With the latest 2.0 beta releases (2.0b8 as of this writing), NLTK has included a ClassifierBasedTagger as well as a pre-trained tagger used by the nltk.tag.pos_tag method. Based on the name, the pre-trained tagger appears to be a ClassifierBasedTagger trained on the treebank corpus using a MaxentClassifier. So let's see how a classifier tagger compares to the brill tagger.
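For reference, the pos_tag function is the simplest way to use that pre-trained tagger:

import nltk

print(nltk.pos_tag(nltk.word_tokenize('And now for something completely different')))
# e.g. [('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'),
#       ('completely', 'RB'), ('different', 'JJ')]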
NLTK Training Sets
For the brown corpus, I trained on 2/3 of the reviews, lore, and romance categories, and tested against the remaining 1/3. For conll2000, I used the standard train.txt vs test.txt. And for treebank, I again used a 2/3 vs 1/3 split.
import itertools
from nltk.corpus import brown, conll2000, treebank

brown_reviews = brown.tagged_sents(categories=['reviews'])
brown_reviews_cutoff = len(brown_reviews) * 2 / 3
brown_lore = brown.tagged_sents(categories=['lore'])
brown_lore_cutoff = len(brown_lore) * 2 / 3
brown_romance = brown.tagged_sents(categories=['romance'])
brown_romance_cutoff = len(brown_romance) * 2 / 3

brown_train = list(itertools.chain(brown_reviews[:brown_reviews_cutoff],
    brown_lore[:brown_lore_cutoff], brown_romance[:brown_romance_cutoff]))
brown_test = list(itertools.chain(brown_reviews[brown_reviews_cutoff:],
    brown_lore[brown_lore_cutoff:], brown_romance[brown_romance_cutoff:]))

conll_train = conll2000.tagged_sents('train.txt')
conll_test = conll2000.tagged_sents('test.txt')

treebank_cutoff = len(treebank.tagged_sents()) * 2 / 3
treebank_train = treebank.tagged_sents()[:treebank_cutoff]
treebank_test = treebank.tagged_sents()[treebank_cutoff:]
Naive Bayes Classifier Taggers
There are 3 new taggers referenced below:

- cpos is an instance of ClassifierBasedPOSTagger using the default NaiveBayesClassifier. It was trained by calling ClassifierBasedPOSTagger(train=train_sents) on each training set.
- craubt is like cpos, but has the raubt tagger from part 2 as a backoff tagger.
- bcpos is a BrillTagger using cpos as its initial tagger instead of raubt.
- postag is NLTK's pre-trained tagger used by the pos_tag function. It can be loaded using nltk.data.load(nltk.tag._POS_TAGGER).
Tagger accuracy was determined by calling the evaluate method with the test set on each trained tagger. Here are the results:
The above results are quite interesting, and lead to a few conclusions:
- Training data is hugely significant when it comes to accuracy. This is why postag takes a huge nose dive on brown, while getting near 100% accuracy on treebank.
- A ClassifierBasedPOSTagger does not need a backoff tagger, since cpos accuracy is exactly the same as for craubt across all corpora.
- The ClassifierBasedPOSTagger is not necessarily more accurate than the bcraubt tagger from part 3 (at least with the default feature detector). It also takes much longer to train and tag (more details below), and so may not be worth the tradeoff in efficiency.
- Using a brill tagger will nearly always increase the accuracy of your initial tagger, but not by much.
I was also surprised at how much more accurate postag was compared to cpos. Thinking that postag was probably trained on the full treebank corpus, I did the same, and re-evaluated:
from nltk.tag.sequential import ClassifierBasedPOSTagger

# treebank and treebank_test come from the earlier setup
cpos = ClassifierBasedPOSTagger(train=treebank.tagged_sents())
cpos.evaluate(treebank_test)
The result was 98.08% accuracy. So the remaining 2% difference must be due to the MaxentClassifier being more accurate than the naive bayes classifier, and/or the use of a different feature detector. I tried again with classifier_builder=MaxentClassifier.train and only got to 98.4% accuracy. So I can only conclude that a different feature detector is used. Hopefully the NLTK leaders will publish the training method so we can all know for sure.
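For completeness, here's a sketch of that re-evaluation with a maxent classifier_builder (treebank_test is the held-out split defined earlier):

from nltk.classify import MaxentClassifier
from nltk.corpus import treebank
from nltk.tag.sequential import ClassifierBasedPOSTagger

me_cpos = ClassifierBasedPOSTagger(train=treebank.tagged_sents(),
                                   classifier_builder=MaxentClassifier.train)
print(me_cpos.evaluate(treebank_test))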
On the nltk-users list, there was a question about which tagger is the most computationally economic. I can't tell you the right answer, but I can definitely say that ClassifierBasedPOSTagger is the wrong answer. During accuracy evaluation, I noticed that the cpos tagger took a lot longer than braubt. So I ran timeit on the tag method of each tagger, and got the following results:
This was run with Python 2.6.4 on an Athlon 64 Dual Core 4600+ with 3GB RAM, but the important thing is the relative times. braubt is over 246 times faster than cpos! To put it another way, braubt can process over 66666 words/sec, whereas cpos can only do 270 words/sec and postag only 483 words/sec. So the lesson is: do not use a classifier based tagger if speed is an issue.
import nltk, timeit

text = nltk.word_tokenize('And now for something completely different')
setup = 'import nltk.data, nltk.tag; tagger = nltk.data.load(nltk.tag._POS_TAGGER)'
t = timeit.Timer('tagger.tag(%s)' % text, setup)

print 'timing postag 1000 times'
spent = t.timeit(number=1000)
print 'took %.5f secs/pass' % (spent / 1000)
There's also a significant difference in the file size of the pickled taggers (trained on treebank):
I think there's a lot of room for experimentation with classifier based taggers and their feature detectors. But if speed is an issue for you, don't even bother. In that case, stick with a simpler tagger that's nearly as accurate and orders of magnitude faster.
There are a number of options for distributed processing and mapreduce in Python. Before execnet surfaced, I'd been using Disco to do distributed NLTK. Now that I've happily switched to distributed NLTK with execnet, I can explain some of the differences and why execnet is so much better for my purposes.
Disco is a mapreduce framework for python, with an erlang core. This is very cool, but unfortunately introduces overhead costs when your functions are not pure (meaning they require external code and/or data). And part of speech tagging with NLTK is definitely not pure; the map function requires a part of speech tagger in order to do anything. So to use a part of speech tagger within a Disco map function, it must be loaded inline, which means unpickling the object before doing any work. And since a pickled part of speech tagger can easily exceed 500K, unpickling it can take over 2 seconds. When every map call has a fixed overhead of 2 seconds, your mapreduce task can take orders of magnitude longer to complete.
As an example, let's say you need to do 6000 map calls, at 1 second of pure computation each. That's 100 minutes, not counting overhead. Now add in the 2s fixed overhead on each call, and you're at 300 minutes. What should be just over 1.6 hours of computation has jumped to 5 hours.
execnet provides a very different computational model: start some gateways and communicate through message channels. In my case, all the fixed overhead can be done up-front, loading the part of speech tagger once per gateway, resulting in greatly reduced compute times. I did have to change my old Disco-based code to work with execnet, but I actually ended up with less code that's easier to understand.
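Here's a bare-bones sketch of that model (assuming execnet is installed and NLTK plus its data are available on the remote side); the tagger is unpickled once per gateway, and every sentence sent afterwards reuses it:

import execnet

remote_code = """
import nltk.data
tagger = nltk.data.load('taggers/maxent_treebank_pos_tagger/english.pickle')
while True:
    sentence = channel.receive()   # a tokenized sentence, or None to stop
    if sentence is None:
        break
    channel.send(tagger.tag(sentence))
"""

gw = execnet.makegateway()   # local gateway; use e.g. 'ssh=otherhost' for remote machines
channel = gw.remote_exec(remote_code)
channel.send(['And', 'now', 'for', 'something', 'completely', 'different'])
print(channel.receive())
channel.send(None)
gw.exit()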
If you're just doing pure mapreduce computations, then consider using Disco. After the one time setup (which can be non-trivial), writing the functions will be relatively easy, and you'll get a nice web UI for configuration and monitoring. But if you're doing any dirty operations that need expensive initialization procedures, or can't quite fit what you need into a pure mapreduce framework, then execnet is for you.