This is a review of the book Instant Pygame for Python Game Development How-to, by Ivan Idris. Packt asked me to review the book, and I agreed because like many developers, I've thought about writing my own game, and I've been curious about the capabilities of pygame. It's a short book, ~120 pages, so this is a short review.
The book covers pygame basics like drawing images, rendering text, playing sounds, creating animations, and altering the mouse cursor. The author has helpfully posted some video demos of some of the exercises, which are linked from the book. I think this is a great way to show what's possible, while also giving the reader a clear idea of what they are creating & what should happen. After the basic intro exercises, I think the best content was how to manipulate pixel arrays with numpy (the author has also written two books on numpy: NumPy Beginner's Guide & NumPy Cookbook), how to create & use sprites, and how to make your own version of the game of life.
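The numpy pixel-array material and the game of life exercise fit together naturally. As a rough sketch of the idea (my own minimal numpy version for illustration, not code from the book), one generation of the game of life can be computed with array shifts:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a 2D array of 0s and 1s."""
    # count live neighbors by summing the 8 shifted (wrapping) copies of the grid
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # a cell lives next step if it has 3 neighbors, or is alive with exactly 2
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)
```

In pygame, a grid like this can then be scaled up and drawn to the screen, which is essentially the pixel-array manipulation the book demonstrates.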
There were 3 chapters whose content puzzled me. When you've got such a short book on a specific topic, why bring up matplotlib, profiling, and debugging? These chapters seemed off-topic and just thrown in there randomly. The organization of the book could have been much better too, leading the reader from the basics all the way to a full-fledged game, with each chapter adding to the previous chapters. Instead, the chapters sometimes felt like unrelated low-level examples.
Overall, the book was a quick & easy read that rapidly introduces you to basic pygame functionality and leads you on to more complex activities. My main takeaway is that pygame provides an easy to use & low-level framework for building simple games, and can be used to create more complex games (but probably not FPS or similar graphically intensive games). The ideal games would probably be puzzle based and/or dialogue heavy, and only require simple interactions from the user. So if you're interested in building such a game in Python, you should definitely get a copy of Instant Pygame for Python Game Development How-to.
Avogadro Corp: The Singularity Is Closer Than It Appears, by William Hertling, is the first sci-fi book I've read with a semi-plausible AI origin story. That's because the premise isn't as simple as "increased computing power -> emergent AI". It's a much more well-defined formula: ever increasing computing power + powerful language processing + never ending stream of training data + goal oriented behavior + deep integration into internet infrastructure -> AI. The AI in the story is called ELOPe, which stands for Email Language Optimization Program, and its function is essentially to improve the quality of emails. WARNING: there will be spoilers below, but only enough to describe ELOPe and speculate about how it might be implemented.
What is ELOPe?
The idea behind ELOPe is to provide writing suggestions as a feature of a popular web-based email service. These writing suggestions are designed to improve the outcome of your email, whatever that may be. To take an example from the book, if you're requesting more compute resources for a project, then ELOPe's job is to offer writing suggestions that are most likely to get your request approved. By taking into account your own past writings, who you're sending the email to, and what you're asking for, it can go as far as completely re-writing the email to achieve the optimal outcome.
Using the existence of ELOPe as a given, the author writes an enjoyable story that is (mostly) technically accurate with plenty of details, without being boring. If you liked Daemon by Daniel Suarez, or you work with any kind of natural language / text-processing technology, you'll probably enjoy the story. I won't get into how an email writing suggestion program goes from that to full AI & takes over the world as a benevolent ghost in the wires - for that you need to read the book. What I do want to talk about is how this email optimization system could be implemented.
How ELOPe might work
Let's start by defining the high-level requirements. ELOPe is an email optimizer, so we have the sender, the receiver, and the email being written as inputs. The output is a re-written email that preserves the "voice" of the sender while using language that will be much more likely to achieve the sender's desired outcome, given who they're sending the email to. That means we need the following:
- ability to analyze the email to determine what outcome is desired
- prior knowledge of how the receiver has responded to other emails with similar outcome topics, in order to know what language produced the best outcomes (and what language produced bad outcomes)
- ability to re-write (or generate) an email whose language is consistent with the sender, while also using language optimized to get the best response from the receiver
Determining the desired outcome for an email seems to me like a sophisticated combination of topic modeling and deep linguistic parsing. The goal would be to identify the core reason for the email: what is the sender asking for, and what would be an optimal response?
Being able to do this from a single email is probably impossible, but if you have access to thousands, or even millions of email chains, accurate topic modeling is much more do-able. Nearly every email someone sends will have some similarity to past emails sent by other people in similar situations. So you could create feature vectors for every email chain (using deep semantic parsing), then cluster the chains using feature similarity. Now you have topic clusters, and from that you could create training data for thousands of topic classifiers. Once you have the classifiers, you can run those in parallel to determine the most likely topic(s) of a single email.
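As a toy sketch of that clustering step (my own illustration using scikit-learn, with tf-idf vectors standing in for deep semantic parsing, and four made-up emails standing in for millions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# a tiny stand-in for millions of email chains
emails = [
    "please approve more compute resources for the project",
    "requesting approval for additional compute resources",
    "lunch meeting moved to noon on friday",
    "rescheduling friday lunch to next week",
]

# create feature vectors for every email, then cluster by feature similarity
vectors = TfidfVectorizer(stop_words='english').fit_transform(emails)
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
```

The resulting cluster labels could then serve as training data for the per-topic classifiers described above.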
Obviously it would be very difficult to create accurate clusters, and even harder to do so at scale. Language is very fuzzy, humans are inconsistent, and a huge fraction of email is spam. But the core of the necessary technology exists, and can work very well in limited conditions. The abilities to parse emails, extract textual features, and cluster & classify feature vectors are all available in modern programming libraries today (e.g. Python with NLTK & scikit-learn). These are areas of software technology that are getting a lot of attention right now, and all signs indicate that attention will only increase over time, so it's quite likely that the difficulty level will decrease significantly over the next 10 years. Moving on, let's assume we can do accurate email topic analysis. The next hurdle is outcome analysis.
Once you can determine topics, now you need to learn about outcomes. Two email chains about acquiring compute resources have the same topic, but one chain ends with someone successfully getting access to more compute resources, while the other ends in failure. How do you differentiate between these? This sounds like next-generation sentiment analysis. You need to go deeper than simple failure vs. success, positive vs. negative, since you want to know which email chains within a given topic produced the best responses, and what language they have in common. In other words, you need a language model that weights successful outcome language much higher than failure outcome language. The only way I can think of doing this with a decent level of accuracy is massive amounts of human verified training data. Technically do-able, but very expensive in terms of time and effort.
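A crude sketch of such outcome weighting (my own toy illustration, with four hand-labeled chains standing in for the massive human-verified training data):

```python
from collections import Counter

# (email text, outcome) pairs -- in ELOPe's case, human-verified at huge scale
chains = [
    ("i would appreciate your approval whenever convenient", True),
    ("happy to discuss tradeoffs if resources are tight", True),
    ("i need these servers immediately", False),
    ("this is urgent so approve it now", False),
]

success, failure = Counter(), Counter()
for text, succeeded in chains:
    (success if succeeded else failure).update(text.split())

def outcome_score(phrase):
    """Weight language seen in successful chains above failure language."""
    return sum(success[w] - failure[w] for w in phrase.split())
```

A real system would need smoothing, per-topic models, and vastly more data, but the weighting idea is the same: language from successful chains scores high, language from failed chains scores low.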
What really pushes the bounds of plausibility is that the language model can't be universal. Everyone has their own likes, dislikes, biases, and preferences. So you need language models that are specific to individuals, or clusters of individuals that respond similarly on the same topic. Since these clusters are topic specific, every individual would belong to many (topic, cluster) pairs. Given N topics and an average of M clusters within each topic, that's N*M language models that need to be created. And one of the major plot points of the book falls out naturally: ELOPe needs access to huge amounts of high end compute resources.
This is definitely the least do-able aspect of ELOPe, and I'm ignoring all the implicit conceptual knowledge that would be required to know what an optimal outcome is, but let's move on.
Assuming that we can do topic & outcome analysis, the final step is using language models to generate more persuasive emails. This is perhaps the simplest part of ELOPe, assuming everything else works well. That's because natural language generation is the kind of technology that works much better with more data, and it already exists in various forms. Google translate is a kind of language generator, chatbots have been around for decades, and spammers use software to spin new articles & text based on existing writings. The differences in this case are that every individual would need their own language generator, and it would have to be parameterized with pluggable language models based on the topic, desired outcome, and receiver. But assuming we have good topic & receiver specific outcome analysis, plus hundreds or thousands of emails from the sender to learn from, then generating new emails, or just new phrases within an email, seems almost trivial compared to what I've outlined above.
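To make the generation step concrete, here's a toy bigram-chain generator in the spirit of those article spinners (my own sketch; a real system would condition on the topic, receiver, and outcome models described above):

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic output for the example

# a sender's past emails would supply the training text
past_text = ("thanks for your help . i appreciate your quick response . "
             "thanks for the quick help .")

# map each word to the words observed to follow it
model = defaultdict(list)
tokens = past_text.split()
for w1, w2 in zip(tokens, tokens[1:]):
    model[w1].append(w2)

def generate(start, length=8):
    """Walk the bigram chain, picking a random observed successor each step."""
    out = [start]
    while len(out) < length and model[out[-1]]:
        out.append(random.choice(model[out[-1]]))
    return ' '.join(out)
```

Every phrase it emits is stitched from language the sender has actually used, which is exactly the "preserve the sender's voice" property ELOPe needs, just at a laughably smaller scale.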
I'm still highly skeptical that strong AI will ever exist. We humans barely understand the mechanisms of our own intelligence, so to think that we can create comparable artificial intelligence smells of hubris. But it can be fun to think about, and the point of sci-fi is to tell stories about possible futures, so I have no doubt various forms of AI will play a strong role in sci-fi stories for years to come.
When I first created text-processing.com, in the summer of 2010, my initial intention was to provide an online demo of NLTK's capabilities. I trained a bunch of models on various NLTK corpora using nltk-trainer, then started making some simple Django forms to display the results. But as I was doing this, I realized I could fairly easily create an API based on these models. Instead of rendering HTML, I could just return the results as JSON.
I wasn't sure if anyone would actually use the API, but I knew the best way to find out was to just put it out there. So I did, initially making it completely open, with a rate limit of 1000 calls per day per IP address. I figured at the very least, I might get some PHP or Ruby users that wanted the power of NLTK without having to interface with Python. Within a month, people were regularly exceeding that limit, and I quietly increased it to 5000 calls/day, while I started searching for the simplest way to monetize the API. I didn't like what I found.
Before Mashape, your options for monetizing APIs were either building a custom solution for authentication, billing, and tracking, or paying thousands of dollars a month for an "enterprise" solution from Mashery or Apigee. While I have no doubt Mashery & Apigee provide quality services, they are not in the price range for most developers. And building a custom solution is far more work than I wanted to put into it. Even now, when companies like Stripe exist to make billing easier, you'd still have to do authentication & call tracking. But Stripe didn't exist 2 years ago, and the best billing option I could find was Paypal, whose API documentation is great at inducing headaches. Lucky for me, Mashape was just opening up for beta testing, and appeared to be in the process of solving all of my problems.
Mashape was just what I needed to monetize the text-processing API, and it's improved tremendously since I started using it. They handle all the necessary details, like integrated billing, plus a lot more, such as usage charts, latency & uptime measurements, and automatic client library generation. This last is one of my favorite features, because the client libraries are generated using your API documentation, which provides a great incentive to accurately document the ins & outs of your API. Once you've documented your API, downloadable libraries in 5 different programming languages are immediately available, making it that much easier for new users to consume your API. As of this writing, those languages are Java, PHP, Python, Ruby, and Objective C.
Here's a little history for the curious: Mashape originally did authentication and tracking by exchanging tokens thru an API call. So you had to write some code to call their token API on every one of your API calls, then check the results to see if the call was valid, or if the caller had reached their limit. They didn't have all of the nice charts they have now, and their billing solution was the CEO manually handling Paypal payments. But none of that mattered, because it worked, and from conversations with them, I knew they were focused on more important things: building up their infrastructure and positioning themselves as a kind of app-store for APIs.
Mashape has been out of beta for a while now, with automated billing, and a custom proxy server for authenticating, routing, and tracking all API calls. They're releasing new features on a regular basis, and sponsoring events like MusicHackDay. I'm very impressed with everything they're doing, and on top of that, they're good hard-working people. I've been over to their "hacker house" in San Francisco a few times, and they're very friendly and accommodating. And if you're ever in the neighborhood, I'm sure they'd be open to a visit.
Once I had integrated Mashape, which was maybe 20 lines of code, the money started rolling in :). Just kidding, but using the typical definition of profit, when income exceeds costs, the text-processing API was profitable within a few months, and has remained so ever since. My only monetary cost is a single Linode server, so as long as people keep paying for the API, text-processing.com will remain online. And while it has a very nice profit margin, total monthly income barely approaches the cost of living in San Francisco. But what really matters to me is that text-processing.com has become a self-sustaining excuse for me to experiment with natural language processing techniques & data sets, test my models against the market, and provide developers with a simple way to integrate NLP into their own projects.
So if you've got an idea for an API, especially if it's something you could charge money for, I encourage you to build it and put it up on Mashape. All you need is a working API, a unique image & name, and a Paypal account for receiving payments. Like other app stores, Mashape takes a 20% cut of all revenue, but I think it's well worth it compared to the cost of replicating everything they provide. And unlike some app stores, you're not locked in. Many of the APIs on Mashape also provide alternative usage options (including text-processing), but they're on Mashape because of the increased exposure, distribution, and additional features, like client library generation. SaaS APIs are becoming a significant part of modern computing infrastructure, and Mashape provides a great platform for getting started.
Now that NLTK versions 2.0.1 & higher include the SklearnClassifier (contributed by Lars Buitinck), it's much easier to make use of the excellent scikit-learn library of algorithms for text classification. But how well do they work?
Below is a table showing both the accuracy & F-measure of many of these algorithms using different feature extraction methods. Unlike the standard NLTK classifiers, sklearn classifiers are designed for handling numeric features. So there are 3 different values under the feats column for each algorithm: bow means bag-of-words feature extraction, where every word gets a 1 if present, or a 0 if not; int means word counts are used, so if a word occurs twice, it gets the number 2 as its feature value (whereas with bow it would still get a 1); and tfidf means the TfidfTransformer is used to produce a floating point number that measures the importance of a word, using the tf-idf algorithm.
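The difference between bow and int values is easy to see on a tiny example (tfidf additionally discounts words that are common across the whole corpus, so it needs more than one document to illustrate):

```python
from collections import Counter

words = "great great movie".split()

bow = {w: 1 for w in words}      # presence only: repeated words still get 1
counts = dict(Counter(words))    # occurrence counts: repeated words add up
```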
All numbers were determined using nltk-trainer, specifically: python train_classifier.py movie_reviews --no-pickle --classifier sklearn.ALGORITHM --fraction 0.75. For int features, the option --value-type int was used, and for tfidf features, the options --value-type float --tfidf were used. This was with NLTK 2.0.3 and sklearn 0.12.1.
|algorithm|feats|accuracy|neg f-measure|pos f-measure|
As you can see, the best algorithms are BernoulliNB, MultinomialNB, LogisticRegression, LinearSVC, and NuSVC. Surprisingly, tfidf features either provide a very small performance increase, or significantly decrease performance. So let's see if we can improve performance with the same techniques used in previous articles in this series, specifically bigrams and high information words.
Below is a table showing the accuracy of the top 5 algorithms using just unigrams (the default, a.k.a. single words), and using unigrams + bigrams (pairs of words) with the option --ngrams 1 2.
MultinomialNB got a modest boost in accuracy, putting it on par with the rest of the algorithms. But we can do better than this using feature scoring.
As I've shown previously, eliminating low information features can have significant positive effects. Below is a table showing the accuracy of each algorithm at different score levels, using the option --min_score SCORE (and keeping the --ngrams 1 2 option to get bigram features).
|algorithm|score 1|score 2|score 3|
NuSVC all get a nice gain of ~4-5%, but the most interesting results are from the MultinomialNB algorithms, which drop down significantly at --min_score 1, but then skyrocket up to 97% with --min_score 2. The only explanation I can offer for this is that Naive Bayes classification, because it does not weight features, can be quite sensitive to changes in training data (see Bayesian Poisoning for an example).
If you haven't yet tried using scikit-learn for text classification, then I hope this article convinces you that it's worth learning. NLTK's SklearnClassifier makes the process much easier, since you don't have to convert feature dictionaries to numpy arrays yourself, or keep track of all known features. The Scikits classifiers also tend to be more memory efficient than the standard NLTK classifiers, due to their use of sparse arrays.
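To see how little code that is, here's a minimal sketch with a toy two-document training set (assuming NLTK >= 2.0.1 and scikit-learn are installed; a real experiment would of course use a corpus like movie_reviews):

```python
from nltk.classify import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB

# the usual NLTK (featureset, label) pairs -- no numpy conversion needed
train = [({'great': True, 'movie': True}, 'pos'),
         ({'boring': True, 'movie': True}, 'neg')]

classifier = SklearnClassifier(MultinomialNB()).train(train)
```

The SklearnClassifier handles mapping feature dictionaries to the sparse numeric arrays scikit-learn expects, which is exactly the bookkeeping you'd otherwise do by hand.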
NLTK 2.0.1, a.k.a NLTK 2, was recently released, and what follows are my favorite changes, new features, and highlights from the ChangeLog.
The SVMClassifier adds support vector machine classification thru SVMLight with PySVMLight. This is a much needed addition to the set of supported classification algorithms. But even more interesting...
The SklearnClassifier provides a general interface to text classification with scikit-learn. While scikit-learn is still pre-1.0, it is rapidly becoming one of the most popular machine learning toolkits, and provides more advanced feature extraction methods for classification.
NLTK has moved development and hosting to github, replacing google code and SVN. The primary motivation is to make new development easier, and already a Python 3 branch is under active development. I think this is great, since github makes forking & pull requests quite easy, and it's become the de-facto "social coding" site.
Coinciding with the github move, the documentation was updated to use Sphinx, the same documentation generator used by Python and many other projects. While I personally like Sphinx and reStructuredText (which I used to write this post), I'm not thrilled with the results. The new documentation structure and NLTK homepage seem much less approachable. While it works great if you know exactly what you're looking for, I worry that new/interested users will have a harder time getting started.
Since the 0.9.9 release, a number of new corpora and corpus readers have been added:
And here are a few final highlights:
- The HunposTagger, which wraps hunpos.
- The StanfordTagger plus 2 subclasses for NER and POS tagging with the Stanford POS Tagger.
- The SnowballStemmer, which supports 13 different languages. You can try it out at my online stemming demo.
I think NLTK's ideal role is to be a standard interface between corpora and NLP algorithms. There are many different corpus formats, and every algorithm has its own data structure requirements, so providing common abstract interfaces to connect these together is very powerful. It allows you to test the same algorithm on disparate corpora, or try multiple algorithms on a single corpus. This is what NLTK already does best, and I hope that becomes even more true in the future.
I've given a few talks & presentations recently, so for anyone that doesn't follow japerk on twitter, here are some links:
- Weotta's MongoDB presentation from Tuesday, Feb 21 at the SF MongoDB meetup
- Corpus Bootstrapping with NLTK from Tuesday, Feb 28, during the Deep Data session at Strata
- PyCon NLTK Tutorial code from Thursday, March 8 at PyCon 2012
I also want to recommend 2 books that helped me mentally prepare for these talks:
My PyCon tutorial, Introduction to NLTK, now has over 40 people registered. This is about twice as many people as I was expecting, but I'm glad so many people want to learn NLTK. Because of the large class size, it'd be really helpful to have a couple of assistants with at least some NLTK experience, including, but not limited to:
* installing NLTK
* installing & using NLTK on Windows
* installing & using nltk-trainer
* creating custom corpora
* using WordNet
If you're interested in helping out, please read Tutorial Assistants and contact me, japerk -- at -- gmail. Thanks!
At the end of February and the beginning of March, I'll be giving 3 talks in the SF Bay Area and one in St Louis, MO. In chronological order...
How Weotta uses MongoDB
Grant and I will be helping 10gen celebrate the opening of their new San Francisco office on Tuesday, February 21, by talking about How Weotta uses MongoDB. We'll cover some of our favorite features of MongoDB and how we use it for local place & events search. Then we'll finish with a preview of Weotta's upcoming MongoDB powered local search APIs.
NLTK Jam Session at NICAR 2012
On Thursday, February 23, in St Louis, MO, I'll be demonstrating how to use NLTK as part of the NewsCamp workshop at NICAR 2012. This will be a version of my PyCon NLTK Tutorial with a focus on news text and corpora like treebank.
Corpus Bootstrapping with NLTK at Strata 2012
As part of the Strata 2012 Deep Data program, I'll talk about Corpus Bootstrapping with NLTK on Tuesday, February 28. The premise of this talk is that while there's plenty of great algorithms and methods for natural language processing, most of them require a training corpus, and chances are the training corpus you really need doesn't exist. So how can you quickly create a quality corpus at minimal cost? I'll cover specific real-world examples to answer this question.
NLTK Tutorial at PyCon 2012
Introduction to NLTK will be a 3 hour tutorial at PyCon on Thursday, March 8th. You'll get to know NLTK in depth, learn about corpus organization, and train your own models manually & with nltk-trainer. My goal is that you'll walk out with at least one new NLP superpower that you can put to use immediately.
Fuzzy matching is a general term for finding strings that are almost equal, or mostly the same. Of course almost and mostly are ambiguous terms themselves, so you'll have to determine what they really mean for your specific needs. The best way to do this is to come up with a list of test cases before you start writing any fuzzy matching code. These test cases should be pairs of strings that either should fuzzy match, or not. I like to create doctests for this, like so:
def fuzzy_match(s1, s2):
    '''
    >>> fuzzy_match('Happy Days', ' happy days ')
    True
    >>> fuzzy_match('happy days', 'sad days')
    False
    '''
    # TODO: fuzzy matching code
    return s1 == s2
Once you've got a good set of test cases, then it's much easier to tailor your fuzzy matching code to get the best results.
The first step before doing any string matching is normalization. The goal with normalization is to transform your strings into a normal form, which in some cases may be all you need to do. While 'Happy Days' != ' happy days ', with simple normalization you can get 'Happy Days'.lower() == ' happy days '.strip().
The most basic normalization you can do is to lowercase and strip whitespace. But chances are you'll want to do more. For example, here's a simple normalization function that also removes all punctuation in a string.
import string

def normalize(s):
    for p in string.punctuation:
        s = s.replace(p, '')
    return s.lower().strip()
Using this normalize function, we can make the above fuzzy matching function pass our simple tests.
def fuzzy_match(s1, s2):
    '''
    >>> fuzzy_match('Happy Days', ' happy days ')
    True
    >>> fuzzy_match('happy days', 'sad days')
    False
    '''
    return normalize(s1) == normalize(s2)
If you want to get more advanced, keep reading...
Beyond just stripping whitespace from the ends of strings, it's also a good idea to replace all whitespace occurrences with a single space character. The regex call for doing this is re.sub(r'\s+', ' ', s). This will replace every occurrence of one or more spaces, newlines, tabs, etc., essentially eliminating the significance of whitespace for matching.
You may also be able to use regular expressions for partial fuzzy matching. Maybe you can use regular expressions to identify significant parts of a string, or perhaps split a string into component parts for further matching. If you think you can create a simple regular expression to help with fuzzy matching, do it, because chances are, any other code you write to do fuzzy matching will be more complicated, less straightforward, and probably slower. You can also use more complicated regular expressions to handle specific edge cases. But beware of any expression that takes puzzling out every time you look at it, because you'll probably be revisiting this code a number of times to tweak it for handling new cases, and tweaking complicated regular expressions is a sure way to induce headaches and eyeball-bleeding.
The edit distance (aka Levenshtein distance) is the number of single character edits it would take to transform one string into another. Therefore, the smaller the edit distance, the more similar two strings are.
If you want to do edit distance calculations, check out the standalone editdist module. Its distance function takes 2 strings and returns the Levenshtein edit distance. It's also implemented in C, and so is quite fast.
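If you'd rather not add a dependency, the algorithm is short enough to write yourself. Here's a straightforward dynamic-programming version for illustration (the C module above will be much faster on large inputs):

```python
def edit_distance(s1, s2):
    """Levenshtein distance: minimum number of single-character
    inserts, deletes, and substitutions to turn s1 into s2."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(prev[j] + 1,                  # delete c1
                           cur[j - 1] + 1,               # insert c2
                           prev[j - 1] + (c1 != c2)))    # substitute c1 -> c2
        prev = cur
    return prev[-1]
```

For example, edit_distance('kitten', 'sitting') is 3: substitute k->s, substitute e->i, and insert g.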
Fuzzywuzzy is a great all-purpose library for fuzzy string matching, built (in part) on top of Python's difflib. It has a number of different fuzzy matching functions, and it's definitely worth experimenting with all of them. I've personally found token_set_ratio to be the most useful.
If you want to do some custom fuzzy string matching, then NLTK is a great library to use. There are word tokenizers, stemmers, and it even has its own edit distance implementation. Here's a way you could combine all 3 to create a fuzzy string matching function.
from nltk import metrics, stem, tokenize

stemmer = stem.PorterStemmer()

def normalize(s):
    words = tokenize.wordpunct_tokenize(s.lower().strip())
    return ' '.join([stemmer.stem(w) for w in words])

def fuzzy_match(s1, s2, max_dist=3):
    return metrics.edit_distance(normalize(s1), normalize(s2)) <= max_dist
Finally, an interesting and perhaps non-obvious way to compare strings is with phonetic algorithms. The idea is that 2 strings that sound the same may be the same (or at least similar enough). One of the most well known phonetic algorithms is Soundex, with a python soundex algorithm here. Another is Double Metaphone, with a python metaphone module here. You can also find code for these and other phonetic algorithms in the nltk-trainer phonetics module (copied from a now defunct sourceforge project called advas). Using any of these algorithms, you get an encoded string, and then if 2 encodings compare equal, the original strings match. Theoretically, you could even do fuzzy matching on the phonetic encodings, but that's probably pushing the bounds of fuzziness a bit too far.
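Soundex itself is simple enough to sketch in a few lines. Here's a simplified version of the American Soundex rules, written for illustration rather than taken from any of the modules above:

```python
def soundex(word):
    """Encode a word as its first letter plus 3 digits, so that
    similar-sounding words get equal codes."""
    codes = {c: d for letters, d in [('bfpv', '1'), ('cgjkqszx', '2'),
                                     ('dt', '3'), ('l', '4'),
                                     ('mn', '5'), ('r', '6')]
             for c in letters}
    word = word.lower()
    encoded = word[0].upper()
    prev = codes.get(word[0], '')
    for c in word[1:]:
        digit = codes.get(c, '')
        if digit and digit != prev:
            encoded += digit
        if c not in 'hw':  # h and w don't break a run of the same code
            prev = digit
    return (encoded + '000')[:4]
```

Both 'Robert' and 'Rupert' encode to R163, so they compare equal even though their edit distance is 2.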
On September 14, 2011, I'll be giving a 20 minute overview of NLTK for the San Francisco Python Meetup Group. Since it's only 20 minutes, I can't get into too much detail, but I plan to quickly cover the basics of:
- tokenization and why it's not as easy as
- part-of-speech tagging and why it's important
- chunking and named entity recognition
- text classification and how it works for sentiment analysis
- training your own models with nltk-trainer
I'll also be soliciting feedback for a NLTK Tutorial at PyCON 2012. So if you'll be at the meetup and are interested in attending a NLTK tutorial, come find me and tell me what you'd want to learn.
Updated 9/15/2011: Slides from the talk are online - NLTK in 20 minutes