
Python 3 Text Processing with NLTK 3 Cookbook


After many weekend writing sessions, the 2nd edition of the NLTK Cookbook, updated for NLTK 3 and Python 3, is available at Amazon and Packt. Code for the book is on github at nltk3-cookbook. Here are some details on the changes & updates in the 2nd edition:

First off, all the code in the book is for Python 3 and NLTK 3. Most of it should work in Python 2, but not all of it, and NLTK 3 has made many backwards-incompatible changes since version 2.0.4. One of the nice things about Python 3 is that it’s Unicode all the way: no more issues with ASCII versus unicode strings, though you do still have to deal with byte strings in a few cases. Another interesting change is that hash randomization is on by default, which means that if you don’t set the PYTHONHASHSEED environment variable, training accuracy can change slightly on each run, because the iteration order of dictionaries is no longer consistent.
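For reproducible runs, set the environment variable before launching the interpreter (the script name below is just a placeholder):

    # Hash randomization can only be disabled before Python starts, so set
    # PYTHONHASHSEED in the shell rather than inside the script:
    #
    #   $ PYTHONHASHSEED=0 python train_script.py
    #
    # Inside Python, you can only verify that it was set:
    import os
    print(os.environ.get('PYTHONHASHSEED'))  # '0' if set as above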

In Chapter 1, Tokenizing Text and WordNet Basics, I added a recipe for training a sentence tokenizer using the PunktSentenceTokenizer. This is surprisingly easy, and you can find the code in chapter1.py.
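The heart of the recipe looks something like this (a minimal sketch, training on NLTK’s webtext corpus; see chapter1.py for the full version):

    from nltk.corpus import webtext
    from nltk.tokenize import PunktSentenceTokenizer

    # Punkt learns abbreviations and sentence boundaries from raw text
    # in an unsupervised way, so training is just construction.
    text = webtext.raw('overheard.txt')
    sent_tokenizer = PunktSentenceTokenizer(text)
    sentences = sent_tokenizer.tokenize(text)
    print(sentences[0])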

Chapter 2, Replacing and Correcting Words, shows the additional languages supported by the SnowballStemmer. An unfortunate removal from this chapter is babelizer, which was a fun library to use, but is no longer supported by Yahoo.
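To get a flavor of the stemmer (a minimal sketch, not the book’s exact code):

    from nltk.stem import SnowballStemmer

    # The class attribute lists every supported language
    print(SnowballStemmer.languages)

    # Pick a language and stem
    stemmer = SnowballStemmer('spanish')
    print(stemmer.stem('hablando'))  # expect something like 'habl'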

NLTK 3 replaced simplify_tags with universal tagset mappings, so I updated Chapter 3, Creating Custom Corpora, to show how to use these tagset mappings to get the universal tags.
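For example, most tagged corpus readers now accept a tagset argument (a minimal sketch):

    from nltk.corpus import treebank

    # Original Penn Treebank tags vs. the simplified universal tags
    print(treebank.tagged_words()[:3])
    print(treebank.tagged_words(tagset='universal')[:3])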

In Chapter 4, Part-of-Speech Tagging, the last recipe shows how to use train_tagger.py from NLTK-Trainer to replicate most of the tagger training recipes detailed earlier in the chapter. NLTK-Trainer was largely inspired by my experience writing Python Text Processing with NLTK 2.0 Cookbook, after realizing that many aspects of training part-of-speech taggers could be encapsulated in a command line script.
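Typical usage is a single command naming a corpus, something like this (check the NLTK-Trainer docs for the full set of options):

    $ python train_tagger.py treebank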

Chapter 5, Extracting Chunks, adds examples for using train_chunker.py to train phrase chunkers.
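The chunker script mirrors the tagger script (again, a hedged example; see the NLTK-Trainer docs for current options):

    $ python train_chunker.py treebank_chunk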

Chapter 7, Text Classification, adds coverage of train_classifier.py, along with examples of using the SklearnClassifier, which provides access to many of the scikit-learn classification algorithms. The scikit-learn classifiers tend to be at least as accurate as NLTK’s classifiers, are often faster to train, and have much smaller memory & disk footprints. And since NLTK 3 removed support for the scipy based MaxentClassifier algorithms and SVM classifiers, the choice of which classifier to use has become very easy: when in doubt, choose SklearnClassifier (code examples can be found in chapter7.py).
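Using it looks roughly like this (a sketch; the tiny training set here is purely illustrative):

    from nltk.classify.scikitlearn import SklearnClassifier
    from sklearn.naive_bayes import MultinomialNB

    # Labeled featuresets: (dict of features, label) pairs
    train_feats = [
        ({'great': True, 'fun': True}, 'pos'),
        ({'boring': True, 'slow': True}, 'neg'),
    ]

    classifier = SklearnClassifier(MultinomialNB())
    classifier.train(train_feats)
    print(classifier.classify({'great': True}))  # 'pos'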

There are a few library changes in Chapter 9, Parsing Specific Data Types:

  • the timex and SimpleParse recipes have been removed due to lack of Python 3 compatibility
  • the recipes now use beautifulsoup4, with examples of UnicodeDammit for decoding to unicode
  • chardet was replaced with charade, which is compatible with both Python 2 & 3. But since publication, charade was merged back into chardet and is no longer maintained. I recommend installing chardet and replacing all instances of the charade module name with chardet (see the sketch after this list).
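Here’s a minimal sketch of both tools, with chardet swapped in for charade per the recommendation above:

    from bs4 import UnicodeDammit
    import chardet

    data = 'Sacr\xe9 bleu!'.encode('latin-1')

    # UnicodeDammit guesses the encoding and decodes to unicode
    dammit = UnicodeDammit(data)
    print(dammit.unicode_markup, dammit.original_encoding)

    # chardet just reports its best guess
    print(chardet.detect(data))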

So if you want to learn the latest & greatest NLTK 3, pick up your copy of Python 3 Text Processing with NLTK 3 Cookbook, and check out the code at nltk3-cookbook. If you like the book, please review it at Amazon or goodreads.

Instant PyGame Book Review

This is a review of the book Instant Pygame for Python Game Development How-to, by Ivan Idris. Packt asked me to review the book, and I agreed because, like many developers, I’ve thought about writing my own game, and I’ve been curious about the capabilities of pygame. It’s a short book, ~120 pages, so this is a short review.

The book covers pygame basics like drawing images, rendering text, playing sounds, creating animations, and altering the mouse cursor. The author has helpfully posted some video demos of some of the exercises, which are linked from the book. I think this is a great way to show what’s possible, while also giving the reader a clear idea of what they are creating & what should happen. After the basic intro exercises, I think the best content was how to manipulate pixel arrays with numpy (the author has also written two books on numpy: NumPy Beginner’s Guide & NumPy Cookbook), how to create & use sprites, and how to make your own version of the game of life.

There were 3 chapters whose content puzzled me. When you’ve got such a short book on a specific topic, why bring up matplotlib, profiling, and debugging? These chapters seemed off-topic and just thrown in there randomly. The organization of the book could have been much better too, leading the reader from the basics all the way to a full-fledged game, with each chapter adding to the previous chapters. Instead, the chapters sometimes felt like unrelated low-level examples.

Overall, the book was a quick & easy read that rapidly introduces you to basic pygame functionality and leads you on to more complex activities. My main takeaway is that pygame provides an easy-to-use, low-level framework for building simple games, and can be used to create more complex games (but probably not an FPS or similarly graphically intensive games). The ideal games would probably be puzzle based and/or dialogue heavy, requiring only simple interactions from the user. So if you’re interested in building such a game in Python, you should definitely get a copy of Instant Pygame for Python Game Development How-to.

Avogadro Corp Book Review / AI Speculation

Avogadro Corp: The Singularity Is Closer Than It Appears, by William Hertling, is the first sci-fi book I’ve read with a semi-plausible AI origin story. That’s because the premise isn’t as simple as “increased computing power -> emergent AI”. It’s a much more well defined formula: ever increasing computing power + powerful language processing + never ending stream of training data + goal oriented behavior + deep integration into internet infrastructure -> AI. The AI in the story is called ELOPe, which stands for Email Language Optimization Program, and its function is essentially to improve the quality of emails. WARNING: there will be spoilers below, but only enough to describe ELOPe and speculate about how it might be implemented.

What is ELOPe?

The idea behind ELOPe is to provide writing suggestions as a feature of a popular web-based email service. These writing suggestions are designed to improve the outcome of your email, whatever that may be. To take an example from the book, if you’re requesting more compute resources for a project, then ELOPe’s job is to offer writing suggestions that are most likely to get your request approved. By taking into account your own past writings, who you’re sending the email to, and what you’re asking for, it can go as far as completely re-writing the email to achieve the optimal outcome.

Taking the existence of ELOPe as a given, the author writes an enjoyable story that is (mostly) technically accurate with plenty of details, without being boring. If you liked Daemon by Daniel Suarez, or you work with any kind of natural language / text-processing technology, you’ll probably enjoy the story. I won’t get into how an email writing suggestion program goes from that to full AI & takes over the world as a benevolent ghost in the wires – for that you need to read the book. What I do want to talk about is how this email optimization system could be implemented.

How ELOPe might work

Let’s start by defining the high-level requirements. ELOPe is an email optimizer, so we have the sender, the receiver, and the email being written as inputs. The output is a re-written email that preserves the “voice” of the sender while using language that will be much more likely to achieve the sender’s desired outcome, given who they’re sending the email to. That means we need the following:

  1. ability to analyze the email to determine what outcome is desired
  2. prior knowledge of how the receiver has responded to other emails with similar outcome topics, in order to know what language produced the best outcomes (and what language produced bad outcomes)
  3. ability to re-write (or generate) an email whose language is consistent with the sender, while also using language optimized to get the best response from the receiver

Topic Analysis

Determining the desired outcome for an email seems to me like a sophisticated combination of topic modeling and deep linguistic parsing. The goal would be to identify the core reason for the email: what is the sender asking for, and what would be an optimal response?

Being able to do this from a single email is probably impossible, but if you have access to thousands, or even millions of email chains, accurate topic modeling is much more do-able. Nearly every email someone sends will have some similarity to past emails sent by other people in similar situations. So you could create feature vectors for every email chain (using deep semantic parsing), then cluster the chains using feature similarity. Now you have topic clusters, and from that you could create training data for thousands of topic classifiers. Once you have the classifiers, you can run those in parallel to determine the most likely topic(s) of a single email.
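As a toy illustration of that pipeline (hypothetical emails, with scikit-learn’s TF-IDF features standing in for deep semantic parsing):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Hypothetical email snippets standing in for millions of real chains
    emails = [
        "Could we get more compute resources for the analytics project?",
        "Requesting additional server capacity for the data pipeline.",
        "Lunch on Friday to celebrate the launch?",
        "Team lunch this week, who's in?",
    ]

    vectorizer = TfidfVectorizer(stop_words='english')
    X = vectorizer.fit_transform(emails)

    # Step 1: cluster chains by feature similarity to discover rough topics
    topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Step 2: treat the cluster assignments as labels for a topic classifier
    clf = LogisticRegression().fit(X, topics)
    print(clf.predict(vectorizer.transform(["We need more compute for training"])))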

Obviously it would be very difficult to create accurate clusters, and even harder to do so at scale. Language is very fuzzy, humans are inconsistent, and a huge fraction of email is spam. But the core of the necessary technology exists, and can work very well in limited conditions. The ability to parse emails, extract textual features, and cluster & classify feature vectors is available in at least a few modern programming libraries today (e.g. Python with NLTK & scikit-learn). These are areas of software technology that are getting a lot of attention right now, and all signs indicate that attention will only increase over time, so it’s quite likely that the difficulty level will decrease significantly over the next 10 years. Moving on, let’s assume we can do accurate email topic analysis. The next hurdle is outcome analysis.

Outcome Analysis

Once you can determine topics, now you need to learn about outcomes. Two email chains about acquiring compute resources have the same topic, but one chain ends with someone successfully getting access to more compute resources, while the other ends in failure. How do you differentiate between these? This sounds like next-generation sentiment analysis. You need to go deeper than simple failure vs. success, positive vs. negative, since you want to know which email chains within a given topic produced the best responses, and what language they have in common. In other words, you need a language model that weights successful outcome language much higher than failure outcome language. The only way I can think of doing this with a decent level of accuracy is massive amounts of human verified training data. Technically do-able, but very expensive in terms of time and effort.
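A toy version of that idea, using NLTK’s NaiveBayesClassifier on hypothetical human-labeled reply snippets within a single topic:

    from nltk.classify import NaiveBayesClassifier

    def bag_of_words(text):
        return {word: True for word in text.lower().split()}

    # Hypothetical human-verified outcome labels for one topic cluster
    train = [
        (bag_of_words("happy to approve the extra servers"), "success"),
        (bag_of_words("approved go ahead and provision them"), "success"),
        (bag_of_words("no budget for this right now"), "failure"),
        (bag_of_words("denied revisit next quarter"), "failure"),
    ]

    classifier = NaiveBayesClassifier.train(train)
    print(classifier.classify(bag_of_words("sure approved")))
    classifier.show_most_informative_features()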

What really pushes the bounds of plausibility is that the language model can’t be universal. Everyone has their own likes, dislikes, biases, and preferences. So you need language models that are specific to individuals, or clusters of individuals that respond similarly on the same topic. Since these clusters are topic specific, every individual would belong to many (topic, cluster) pairs. Given N topics and an average of M clusters within each topic, that’s N*M language models that need to be created. And one of the major plot points of the book falls out naturally: ELOPe needs access to huge amounts of high end compute resources.

This is definitely the least do-able aspect of ELOPe, and I’m ignoring all the implicit conceptual knowledge that would be required to know what an optimal outcome is, but let’s move on 🙂

Language Generation

Assuming that we can do topic & outcome analysis, the final step is using language models to generate more persuasive emails. This is perhaps the simplest part of ELOPe, assuming everything else works well. That’s because natural language generation is the kind of technology that works much better with more data, and it already exists in various forms. Google translate is a kind of language generator, chatbots have been around for decades, and spammers use software to spin new articles & text based on existing writings. The differences in this case are that every individual would need their own language generator, and it would have to be parameterized with pluggable language models based on the topic, desired outcome, and receiver. But assuming we have good topic & receiver specific outcome analysis, plus hundreds or thousands of emails from the sender to learn from, then generating new emails, or just new phrases within an email, seems almost trivial compared to what I’ve outlined above.
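As a crude illustration of how little machinery basic data-driven generation needs, here’s a toy bigram generator over a hypothetical sender’s past emails:

    import random
    from collections import defaultdict

    # Hypothetical pre-tokenized corpus of one sender's past emails
    past_emails = ("thanks for the quick turnaround . could we get the "
                   "extra resources approved this week ? thanks again "
                   "for all the help .")
    words = past_emails.split()

    # Map each word to every word that has followed it
    bigrams = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        bigrams[w1].append(w2)

    # Ramble from a seed word, preferring seen continuations
    word, output = 'thanks', ['thanks']
    for _ in range(10):
        word = random.choice(bigrams[word] or words)
        output.append(word)
    print(' '.join(output))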

Final Words

I’m still highly skeptical that strong AI will ever exist. We humans barely understand the mechanisms of our own intelligence, so to think that we can create comparable artificial intelligence smells of hubris. But it can be fun to think about, and the point of sci-fi is to tell stories about possible futures, so I have no doubt various forms of AI will play a strong role in sci-fi stories for years to come.

Python 3 Web Development Review

The problem with Python 3 Web Development Beginner’s Guide, by Michel Anders, is one of expectations (disclaimer: I received a free eBook from Packt for review). Let’s start with the title… First we have Python 3 Web Development. This immediately sets the wrong expectations because:

  1. There’s almost as much jQuery & Javascript as there is Python.
  2. Most of the Python code is not Python 3 specific, and the code that is could easily be translated to Python 2.
  3. Much of the Python code either uses CherryPy or is for generating HTML. This is not immediately obvious, but becomes apparent in Chapter 3 (which is available as a free PDF download: Chapter 3 – Tasklist I Persistence).

Second, this book is also supposed to be a Beginner’s Guide, but that is definitely not the case. To really grasp what’s going on, you need to already know the basics of HTML, jQuery interaction, and how HTTP works. Chapter 1 is an excellent introduction to HTTP and web application development, but the book as a whole is not beginner material. I think that anything that uses Python metaclasses automatically becomes at least intermediate level, if not expert, and the main thrust of Chapter 7 is refactoring all your straightforward database code to use complicated metaclasses.

However, if you mentally rewrite the title to be “Web Framework Development from scratch using CherryPy and jQuery”, then you’ve got the right idea. The book steps you through web app development with CherryPy, database models with sqlite3, and plenty of HTML and jQuery for interface generation and interaction. While creating example applications, you slowly build up a re-usable framework. It’s an interesting approach, but unfortunately it gets muddied up with inline HTML rendering. I never thought a language as simple and elegant as Python could be reduced to the ugliness of common PHP, but generating HTML with string interpolation inside the same functions that are accessing the database gets pretty close. I kept expecting the author to introduce template rendering, which is a major part of most modern web development frameworks, but it never happened, despite the plethora of excellent Python templating libraries.

While reading this book, I often had the recurring thought “I’m so glad I use Django”. If your aim is rapid application development, this is not the book for you. However, if you’re interested in creating your own web development framework, or would at least like to understand how a framework like Django could be created, then buy a copy of Python 3 Web Development Beginner’s Guide.

Python Testing Cookbook Review

Python Testing Cookbook, by Greg L Turnquist (@gregturn), goes far beyond unit testing, but overall it’s a mixed bag. Here’s a breakdown for each chapter (disclaimer: I received a free eBook from Packt for review):

  1. Basic introduction to testing with unittest, which is great if you’re just starting with Python and testing.
  2. Good coverage of nose. I was pleasantly surprised at how easy it is to write nose plugins.
  3. Deep coverage of using doctest and writing testable docstrings. You can download a free PDF of Chapter 3 here.
  4. BDD with a cool nose plugin, and how to use mock or mockito for testing with mock objects. I wish the author had expressed an opinion in favor of either mock or mockito, but he didn’t, so I will: use Fudge. Chapter 4 also covers the Lettuce DSL, which I think is pretty neat, but since every step requires writing a function handler, I’m not convinced it’s actually easier or better than writing doctests or unit tests.
  5. Acceptance testing with Pyccuracy and Robot Framework, which both give you a way to use Selenium from Python. I thought the DSLs used seemed too “magic”, but that’s probably because I didn’t know the command words, and they weren’t highlighted or adequately explained.
  6. How to install and use Jenkins and TeamCity, and how to display XML reports produced using NoseXUnit. This is a very useful chapter for anyone thinking about or setting up continuous integration.
  7. This chapter is supposed to be about test coverage, and does introduce coverage, but the examples get needlessly complicated. Previous chapters used a simple shopping cart example, but this chapter uses network events, which really distracts from the tests. The author also writes unit tests that just print the results instead of actually testing them with assertions.
  8. More network event complexity while trying to demonstrate smoke testing and load testing. This chapter would have made a lot more sense in a book about network programming and how to test network events. Pyro is used with very little explanation, and MySQL and SQLite are brought in too, increasing the complexity even more.
  9. This chapter is filled with useful advice, but there are no actual code examples. Instead, the advice is shoehorned into the cookbook format, which I felt detracted from the otherwise great content.

Throughout the book, the author presents a kind of “main script” that he updates at the end of many of the chapters. However, the script also contains stub functions that are never touched and barely explained, making their existence completely unnecessary. There are also far too many instances of import *, which can make it difficult to understand the code. But I did learn enough new things that I think Python Testing Cookbook is worth buying and reading. Leaving out Chapters 7 and 8, I think the book is a great resource if you’re just getting started with testing, you want to do continuous integration, and/or you want to get non-programmers involved in the testing process. There’s also a blog about the book, which has links to other reviews.

Upcoming Python Book Reviews

Programming Collective Intelligence

I recently finished reading Programming Collective Intelligence and will be posting a review soon. The TL;DR review is: get it if you want a great introduction to machine learning with Python. It covers a lot of complex algorithms in a simple way, and provides some great example use cases.

Python Testing Cookbook

Testing is something nearly every developer can do more of, and this Python Testing Cookbook looks to be full of techniques for integrating testing at various levels of a project. As a preview, you can download a PDF of Chapter 3 – Creating Testable Documentation with doctest.

Python 3 Web Development Beginner’s Guide

I haven’t used Python 3 yet, so Python 3 Web Development Beginner’s Guide is a good excuse to do so. I also haven’t done any web development outside of Django in a few years, and I’m interested to see how it compares to doing it from scratch. As a preview, you can download a PDF of Chapter 3 – Tasklist I Persistence.

Kindle 3

I’m reading all of these on a Kindle 3, which has worked out surprisingly well. It’s obviously not good for copying & pasting code snippets, but that’s generally a bad idea anyway. And if you don’t want to type the code in yourself, you can always download it from the publisher’s site.

Python Text Processing with NLTK Cookbook Chapter 2 Errata

It has come to my attention that there are two errors in Chapter 2, Replacing and Correcting Words of Python Text Processing with NLTK Cookbook. My thanks to the reader who went out of their way to verify my mistakes and send in corrections.

In Lemmatizing words with WordNet, on page 29, under How it works…, I said that “cooking” is not a noun and does not have a lemma. In fact, cooking is a noun, and as such is its own lemma. Of course, “cooking” is also a verb, and the verb form has the lemma “cook”.
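You can see both behaviors with the WordNetLemmatizer (the default part of speech is noun):

    from nltk.stem import WordNetLemmatizer

    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize('cooking'))           # 'cooking' (noun)
    print(lemmatizer.lemmatize('cooking', pos='v'))  # 'cook' (verb)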

In Removing repeating characters, on page 35, under How it works…, I explained the repeat_regexp match groups incorrectly. The actual match grouping of the word “looooove” is (looo)(o)o(ve) because the pattern matching is greedy. The end result is still correct.
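You can verify the greedy grouping directly (the pattern below is how I recall the recipe’s repeat_regexp; see the book for the exact version):

    import re

    # A prefix, a character matched again by the backreference \2,
    # and the rest of the word
    repeat_regexp = re.compile(r'(\w*)(\w)\2(\w*)')
    print(repeat_regexp.match('looooove').groups())  # ('looo', 'o', 've')

    # Repeated substitution still reduces the word correctly
    word = 'looooove'
    while repeat_regexp.search(word):
        word = repeat_regexp.sub(r'\1\2\3', word)
    print(word)  # 'love'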

Python Text Processing with NLTK Book Reviews

If you’ve been considering buying Python Text Processing with NLTK 2.0 Cookbook, but haven’t yet, below are a couple reviews that may help convince you how awesome it is 🙂

Jaganadh says in his review of Python Text Processing with NLTK Cookbook at Jaggu’s World:

The eight chapter a revolutionary one which deals with Distributed data processing and handling large scale data with NLTK. (…) This chapter will be really helpful for industry people who is looking for to adopt NLTK in to NLP projects.

I give 9 out of 10 for the book. Natural Language Processing students, teachers, professional hurry and bag a copy of this book.

Sum-Wai says in his review of Python Text Processing with NLTK Cookbook at Tips Tank:

I like it where in each recipe, the author provides extra knowledge on the particular problem, like how a problem can be enhance and solve in another way, or what we need to do if the problem on hand changed, and some extra technical tips, which is very nice and useful.

If you’re thinking about the O’Reilly’s NLTK book – Natural Language Processing with Python, IMHO this book and the O’Reilly NLTK book complements each other. The O’Reilly NLTK book focuses more on getting you to know NLP and the features and usage of NLTK , while Python Text Processing with NLTK teaches us how we would implement NLP/NLTK with tools like MongoDB into solving real world problems.

And Neil Kodner, @neilkod, says:

I’m loving python text processing with nltk cookbook by @japerk, its an excellent companion to the O’Reilly NLTK book

Christmas is coming up, and who doesn’t think about python text processing during the holidays?

If you want a reviewer copy to write your own review, contact Packt at reviewrequest@packtpub.com. And if you do write a review and want to let me know about it, leave a comment here, or contact me on twitter.

The Beginning of Python Text Processing with NLTK Cookbook

It all started with an email to the baypiggies mailing list. An acquisition editor for Packt was looking for authors to expand their line of python cookbooks. For some reason I can’t remember, I thought they wanted to put together a multi-author cookbook, where each author contributes a few recipes. That sounded doable, because I’d already written a number of articles that could serve as the basis for a few recipes. So I replied with links to the following articles:

The reply back was:

The next step is to come up with around 8-14 topics/chapters and around 80-100 recipes for the book as a whole.

My first reaction was “WTF?? No way!” But luckily, I didn’t send that email. Instead, I took a couple days to think it over, and realized that maybe I could come up with that many recipes, if I broke my knowledge down into small pieces. I also decided to choose recipes that I didn’t already know how to write, and use them as motivation for learning & research. So I replied back with a list of 92 recipes, and got to work. Not surprisingly, the original list of 92 changed significantly while writing the book, and I believe the final recipe count is 81.

I was keenly aware that there’d be some necessary overlap with the original NLTK book, Natural Language Processing with Python. But I did my best to minimize that overlap, and to present a different take on similar content. And there’s a number of recipes that (as far as I know) you can’t find anywhere else, the largest group of which can be found in Chapter 6, Transforming Chunks and Trees. I’m very pleased with the result, and I hope everyone who buys the book is too. I’d like to think that Python Text Processing with NLTK 2.0 Cookbook is the practical companion to the more teaching oriented Natural Language Processing with Python.

If you’d like a taste of the book, check out the online sample chapter (pdf), Chapter 3, Creating Custom Corpora, which details how many of the included corpus readers work, how to use them, and how to create your own corpus readers. The last recipe shows you how to create a corpus reader on top of MongoDB, and it should be fairly easy to modify for use with any other database.

Packt has also published two excerpts from Chapter 8, Distributed Processing and Handling Large Datasets, which are partially based on those original 2 articles:

Python Text Processing with NLTK Cookbook

My new book, Python Text Processing with NLTK 2.0 Cookbook, has been published. You can find it at both Packt and Amazon. For those of you that pre-ordered it, thank you, and I hope you receive your copy soon.

The Packt page has a lot more details, including the Table of Contents and a sample chapter (pdf). The sample chapter is Chapter 3, Creating Custom Corpora, which covers the following:

  • creating your own corpora
  • using many of the included corpus readers
  • creating custom corpus readers
  • creating a corpus reader on top of MongoDB

I hope you find Python Text Processing with NLTK Cookbook useful, informative, and maybe even fun.