Fuzzy String Matching in Python

Fuzzy matching is a general term for finding strings that are almost equal, or mostly the same. Of course almost and mostly are ambiguous terms themselves, so you’ll have to determine what they really mean for your specific needs. The best way to do this is to come up with a list of test cases before you start writing any fuzzy matching code. These test cases should be pairs of strings that either should fuzzy match, or not. I like to create doctests for this, like so:

def fuzzy_match(s1, s2):
	'''
	>>> fuzzy_match('Happy Days', ' happy days ')
	True
	>>> fuzzy_match('happy days', 'sad days')
	False
	'''
	# TODO: fuzzy matching code
	return s1 == s2

Once you’ve got a good set of test cases, then it’s much easier to tailor your fuzzy matching code to get the best results.

Normalization

The first step before doing any string matching is normalization. The goal with normalization is to transform your strings into a normal form, which in some cases may be all you need to do. While 'Happy Days' != ' happy days ', with simple normalization you can get 'Happy Days'.lower() == ' happy days '.strip().

The most basic normalization you can do is to lowercase and strip whitespace. But chances are you’ll want to do more. For example, here’s a simple normalization function that also removes all punctuation in a string.

import string

def normalize(s):
	for p in string.punctuation:
		s = s.replace(p, '')

	return s.lower().strip()

Using this normalize function, we can make the above fuzzy matching function pass our simple tests.

def fuzzy_match(s1, s2):
	'''
	>>> fuzzy_match('Happy Days', ' happy days ')
	True
	>>> fuzzy_match('happy days', 'sad days')
	False
	'''
	return normalize(s1) == normalize(s2)

If you want to get more advanced, keep reading…

Regular Expressions

Beyond just stripping whitespace from the ends of strings, it’s also a good idea to replace every whitespace run with a single space character. The regex call for doing this is re.sub(r'\s+', ' ', s). This will replace every occurrence of one or more spaces, newlines, tabs, etc, essentially eliminating the significance of whitespace for matching.
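For example, here’s one way to fold whitespace collapsing into the earlier normalize function (a sketch, building on the version above):

import re
import string

def normalize(s):
	# collapse all runs of whitespace into a single space
	s = re.sub(r'\s+', ' ', s)
	for p in string.punctuation:
		s = s.replace(p, '')

	return s.lower().strip()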

You may also be able to use regular expressions for partial fuzzy matching. Maybe you can use regular expressions to identify significant parts of a string, or perhaps split a string into component parts for further matching. If you think you can create a simple regular expression to help with fuzzy matching, do it, because chances are, any other code you write to do fuzzy matching will be more complicated, less straightforward, and probably slower. You can also use more complicated regular expressions to handle specific edge cases. But beware of any expression that takes puzzling out every time you look at it, because you’ll probably be revisiting this code a number of times to tweak it for handling new cases, and tweaking complicated regular expressions is a sure way to induce headaches and eyeball-bleeding.

Edit Distance

The edit distance (aka Levenshtein distance) is the number of single character edits it would take to transform one string into another. Therefore, the smaller the edit distance, the more similar two strings are.

If you want to do edit distance calculations, check out the standalone editdist module. Its distance function takes 2 strings and returns the Levenshtein edit distance. It’s also implemented in C, and so is quite fast.
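Here’s a minimal distance-based fuzzy_match sketch, assuming the editdist module’s distance function and the normalize function from above; the max_dist threshold is an arbitrary starting point to tune against your test cases:

import editdist

def fuzzy_match(s1, s2, max_dist=2):
	# smaller distances mean more similar strings
	return editdist.distance(normalize(s1), normalize(s2)) <= max_dist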

Fuzzywuzzy

Fuzzywuzzy is a great all-purpose library for fuzzy string matching, built (in part) on top of Python’s difflib. It has a number of different fuzzy matching functions, and it’s definitely worth experimenting with all of them. I’ve personally found ratio and token_set_ratio to be the most useful.
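For example, here’s a minimal score-based fuzzy_match sketch; the min_ratio threshold of 80 is an arbitrary value to tune against your test cases, not a recommendation:

from fuzzywuzzy import fuzz

def fuzzy_match(s1, s2, min_ratio=80):
	# token_set_ratio returns an integer score from 0 to 100
	return fuzz.token_set_ratio(s1, s2) >= min_ratio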

NLTK

If you want to do some custom fuzzy string matching, then NLTK is a great library to use. There are word tokenizers, stemmers, and it even has its own edit distance implementation. Here’s a way you could combine all 3 to create a fuzzy string matching function.

from nltk import metrics, stem, tokenize

stemmer = stem.PorterStemmer()

def normalize(s):
	words = tokenize.wordpunct_tokenize(s.lower().strip())
	return ' '.join([stemmer.stem(w) for w in words])

def fuzzy_match(s1, s2, max_dist=3):
	return metrics.edit_distance(normalize(s1), normalize(s2)) <= max_dist

Phonetics

Finally, an interesting and perhaps non-obvious way to compare strings is with phonetic algorithms. The idea is that 2 strings that sound the same may be the same (or at least similar enough). One of the most well known phonetic algorithms is Soundex, with a python soundex algorithm here. Another is Double Metaphone, with a python metaphone module here. You can also find code for these and other phonetic algorithms in the nltk-trainer phonetics module (copied from a now defunct sourceforge project called advas). Using any of these algorithms, you get an encoded string, and then if 2 encodings compare equal, the original strings match. Theoretically, you could even do fuzzy matching on the phonetic encodings, but that’s probably pushing the bounds of fuzziness a bit too far.
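To illustrate the idea, here’s a simplified Soundex sketch; it omits the standard rule that consonants separated by ‘h’ or ‘w’ are coded as a single digit, so treat it as a starting point rather than a faithful implementation:

def soundex(s):
	'''
	>>> soundex('Robert') == soundex('Rupert')
	True
	'''
	codes = {'b': '1', 'f': '1', 'p': '1', 'v': '1',
		'c': '2', 'g': '2', 'j': '2', 'k': '2', 'q': '2',
		's': '2', 'x': '2', 'z': '2', 'd': '3', 't': '3',
		'l': '4', 'm': '5', 'n': '5', 'r': '6'}
	s = s.lower()
	encoded, prev = s[0].upper(), codes.get(s[0], '')

	for c in s[1:]:
		digit = codes.get(c, '')
		if digit and digit != prev:
			encoded += digit
		prev = digit
	# pad with zeros and truncate to the standard 4 characters
	return (encoded + '000')[:4]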

Python Testing Cookbook Review

Python Testing Cookbook, by Greg L Turnquist (@gregturn), goes far beyond Unit Testing, but overall it’s a mixed bag. Here’s a breakdown for each chapter (disclaimer: I received a free eBook from Packt for review):

  1. Basic introduction to testing with unittest, which is great if you’re just starting with Python and testing.
  2. Good coverage of nose. I was pleasantly surprised at how easy it is to write nose plugins.
  3. Deep coverage of using doctest and writing testable docstrings. You can download a free PDF of Chapter 3 here.
  4. BDD with a cool nose plugin, and how to use mock or mockito for testing with mock objects. I wish the author had expressed an opinion in favor of either mock or mockito, but he didn’t, so I will: use Fudge. Chapter 4 also covers the Lettuce DSL, which I think is pretty neat, but since every step requires writing a function handler, I’m not convinced it’s actually easier or better than writing doctests or unit tests.
  5. Acceptance testing with Pyccuracy and Robot Framework, which both give you a way to use Selenium from Python. I thought the DSLs used seemed too “magic”, but that’s probably because I didn’t know the command words, and they weren’t highlighted or adequately explained.
  6. How to install and use Jenkins and TeamCity, and how to display XML reports produced using NoseXUnit. This is a very useful chapter for anyone thinking about or setting up continuous integration.
  7. This chapter is supposed to be about test coverage, and does introduce coverage, but the examples get needlessly complicated. Previous chapters used a simple shopping cart example, but this chapter uses network events, which really distracts from the tests. The author also writes unittests that just print the results instead of actually testing them with assertions.
  8. More network event complexity while trying to demonstrate smoke testing and load testing. This chapter would have made a lot more sense in a book about network programming and how to test network events. Pyro is used with very little explanation, and MySQL and SQLite are brought in too, increasing the complexity even more.
  9. This chapter is filled with useful advice, but there’s no actual code examples. Instead, the advice is shoehorned into the cookbook format, which I felt detracted from the otherwise great content.

Throughout the book, the author presents a kind of “main script” that he updates at the end of many of the chapters. However, the script also contains stub functions that are never touched and barely explained, making their existence completely unnecessary. There are also far too many import * statements, which can make it difficult to understand the code. But I did learn enough new things that I think Python Testing Cookbook is worth buying and reading. Leaving out Chapters 7 and 8, I think the book is a great resource if you’re just getting started with testing, you want to do continuous integration, and/or you want to get non-programmers involved in the testing process. There’s also a blog about the book, which has links to other reviews.

Django Application Conventions

A Django application is really just a python package with a few conventionally named modules. Most apps will not need all of the modules described below, but it’s important to follow the naming conventions and code organization because it will make your application easier to use. Following these conventions gives you a common model for understanding and building the various pieces of a Django application. It also makes it possible for others who share the same common model to quickly understand your code, or at least have an idea of where certain parts of code are located and how everything fits together. This is especially important for reusable applications. For examples, I highly recommend browsing through the code of applications in django.contrib, as they all (mostly) follow the same conventional code organization.

models.py

models.py is the only module that’s required by Django, even if you don’t have any code in it. But chances are that you’ll have at least 1 database model, signal handler, or perhaps an API connection object. models.py is the best place to put these because it is the one app module that is guaranteed to be imported early. This also makes it a good location for connection objects to NoSQL databases such as Redis or MongoDB. Generally, any code that deals with data access or storage should go in models.py, except for simple lookups and queries.

managers.py

Model managers are sometimes placed in a separate managers.py module. This is optional, and often overkill, as it usually makes more sense to define custom managers in models.py. However, if there’s a lot going on in your custom manager, or if you have a ton of models, it might make sense to separate the manager classes for clarity’s sake.
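For example, a custom manager might add a convenience query method like this (the Article model and status field are hypothetical):

from django.db import models

class ArticleManager(models.Manager):
	def published(self):
		# convenience wrapper around a common filter
		return self.filter(status='published')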

admin.py

To make your models viewable within Django’s Admin system, create an admin.py module with ModelAdmin objects for each necessary model. These models can then be autodiscovered if you use the admin.autodiscover() call in your top level urls.py.
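A typical admin.py looks something like this (the app, model, and field names are hypothetical):

from django.contrib import admin
from myapp.models import Article

class ArticleAdmin(admin.ModelAdmin):
	list_display = ('title', 'created')

admin.site.register(Article, ArticleAdmin)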

views.py

View functions (or classes) have 3 responsibilities:

  1. request handling
  2. form processing
  3. template rendering

If a view function is doing anything else, then you’re doing it wrong. There are many things that fall under request handling, such as session management and authentication, but any code that does not directly use the request object, or that will not be used to render a template, does not belong here. One valid exception is sending signals, but I’d argue that a form or models.py is a better location. View functions should be short & simple, and any data access should be primarily read-only. Code that updates data in a database should either be in models.py or the save() method of a form.

Keep your view functions short & simple – this will make it clear how a specific request will produce a corresponding response, and where potential bottlenecks are. Speed has business value, and the easiest way to speed up code is to make it simpler. Do less, and move the complexity elsewhere, such as forms.py.

Use decorators generously for validating requests. require_GET, require_POST, or require_http_methods should go first. Next, use login_required or permission_required as necessary. Finally, use ajax_request or render_to from django-annoying so that your view can simply return a dict of data that will be translated into a JSON response or a RequestContext. It’s not unheard of to have view functions with more decorators than lines of code, and that’s ok because the process flow is still clear, since each decorator has a specific purpose. However, if you’re distributing a pluggable app, then do not use render_to. Instead, use a template_name keyword argument, which will allow developers to override the default template name if they wish. This template name should be prefixed by an appropriate subdirectory. For example, django.contrib.auth.views uses the template subdirectory registration/ for all its templates. This encourages template organization to mirror application organization.
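Here’s a sketch of what such a decorated view might look like (the view and template names are made up):

from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_GET
from annoying.decorators import render_to

@require_GET
@login_required
@render_to('accounts/profile.html')  # hypothetical template name
def profile(request):
	# render_to renders the returned dict into the template
	return {'user': request.user}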

If you have lots of views that can be grouped into separate functionality, such as account management vs everything else, then you can create separate view modules. A good way to do this is to create a views subpackage with separate modules within it. The comments contrib app organizes its views this way, with the user facing comments views in views/comments.py, and the moderator facing moderation views in views/moderation.py.

decorators.py

Before you write your own decorators, check out the http decorators, admin.views.decorators, auth.decorators, and annoying.decorators. What you want may already be implemented, but if not, you’ll at least get to see a bunch of good examples for how to write useful decorators.

If you do decide to write your own decorators, put them in decorators.py. This module should contain functions that take a function as an argument and return a new function, making them higher order functions. This enables you to attach many decorators to a single view function, since each decorator wraps the function returned from the next decorator, until the final view function is reached.

You can also create functions that take arguments, then return a decorator. So instead of being a decorator itself, this kind of function generates and returns a decorator based on the arguments provided. render_to is such a higher order function: it takes a template name as an argument, then returns a decorator that renders that template.
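For example, here’s a hypothetical decorator-generating function that checks for a required GET parameter:

from functools import wraps
from django.http import HttpResponseBadRequest

def require_param(name):
	# takes an argument, then returns the actual decorator
	def decorator(view):
		@wraps(view)
		def wrapper(request, *args, **kwargs):
			if name not in request.GET:
				return HttpResponseBadRequest('missing parameter: %s' % name)
			return view(request, *args, **kwargs)
		return wrapper
	return decorator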

middleware.py

Any custom request/response middleware should go in middleware.py. Two commonly used middleware classes are AuthenticationMiddleware and SessionMiddleware. You can think of middleware as global view decorators, in that a middleware class can pre-process every request or post-process every response, no matter what view is used.
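For illustration, here’s a minimal middleware sketch that times every request (the response header name is made up):

import time

class RequestTimingMiddleware(object):
	def process_request(self, request):
		request._start_time = time.time()

	def process_response(self, request, response):
		elapsed = time.time() - getattr(request, '_start_time', time.time())
		response['X-Request-Time'] = '%.3fs' % elapsed
		return response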

urls.py

It’s good practice to define urls for all your application’s views in their own urls.py. This way, these urls can be included in the top level urls.py with a simple include call. Naming your urls is also a good idea – see django.contrib.comments.urls for an example.
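A minimal application urls.py might look like this (using the URLconf style of this era of Django, with hypothetical view and url names):

from django.conf.urls.defaults import patterns, url

urlpatterns = patterns('myapp.views',
	url(r'^$', 'index', name='myapp_index'),
	url(r'^(?P<pk>\d+)/$', 'detail', name='myapp_detail'),
)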

forms.py

Custom forms should go in forms.py. These might be model forms, formsets, or any kind of data validation & transformation that needs to happen before storing or passing on request data. The incoming data will generally come from a request QueryDict, such as request.GET or request.POST, though it could also come from url parameters or view keyword arguments. The main job of forms.py is to transform that incoming data into a form suitable for storage, or for passing on to another API.

You could have this code in a view function, but then you’d be mixing data validation & transformation in with request processing & template rendering, which just makes your code confusing and more deeply nested. So the secondary job of forms.py is to contain complexity that would otherwise be in a view function. Since form validation is often naturally complicated, this is appropriate, and keeps the complexity confined to a well defined area. So if you have a view function that’s accessing more than one variable in request.GET or request.POST, strongly consider using a form instead – that’s what they’re for!

Forms often save data, and the convention is to use a save method that can be called after validation. This is how model forms behave, but you can do the same thing in your own non-model forms. For example, let’s say you want to update a list in Redis based on incoming request data. Instead of putting the code in a view function, create a Form with the necessary fields, and implement a save() method that updates the list in redis based on the cleaned form data. Now your view simply has to validate the form and call save() if the data is valid.
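Here’s a rough sketch of that Redis example, assuming the redis-py client (the field names are made up):

from django import forms
import redis

class ListAppendForm(forms.Form):
	key = forms.CharField(max_length=100)
	value = forms.CharField(max_length=200)

	def save(self):
		# push the cleaned value onto the end of the Redis list
		r = redis.Redis()
		r.rpush(self.cleaned_data['key'], self.cleaned_data['value'])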

There should generally be no template rendering in forms.py, except for sending emails. All other template rendering belongs in views.py. Email template rendering & sending should also be implemented in a save() method. If you’re creating a pluggable app, then the template name should be a keyword argument so that developers can override it if they want. The PasswordResetForm in django.contrib.auth.forms provides a good example of how to do this.

tests.py

Tests are always a good idea (even if you’re not doing TDD), especially for reusable apps. There are 2 places that Django’s test runner looks for tests:

  1. doctests in models.py
  2. unit tests or doctests in tests.py

You can put doctests elsewhere, but then you have to define your own test runner to run them. It’s often easier to just put all non-model tests into tests.py, either in doctest or unittest form. If you’re testing views, be sure to use Django’s TestCase, as it provides easy access to the test client, making view testing quite simple. For a complete account of testing Django, see Django Testing and Debugging.
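For example, a minimal view test might look like this (the url is hypothetical):

from django.test import TestCase

class IndexViewTest(TestCase):
	def test_index(self):
		response = self.client.get('/')
		self.assertEqual(response.status_code, 200)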

backends.py

If you need custom authentication backends, such as using an email address instead of a username, put these in backends.py. Then include them in the AUTHENTICATION_BACKENDS setting.
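Here’s a sketch of an email-based backend, assuming the standard User model:

from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User

class EmailBackend(ModelBackend):
	def authenticate(self, username=None, password=None):
		# treat the incoming username as an email address
		try:
			user = User.objects.get(email=username)
		except User.DoesNotExist:
			return None
		if user.check_password(password):
			return user
		return None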

signals.py

If your app is defining signals that others can connect to, signals.py is where they should go. If you look at django.contrib.comments.signals, you’ll see it’s just a few lines of code with many more lines of comments explaining when each signal is sent. This is about right, as signals are essentially just global objects, and what’s important is how they are used, and in what context they are sent.
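Defining a signal is nearly a one-liner; for example (the signal name and arguments are hypothetical):

import django.dispatch

# sent after an article is published, with the article as an argument
article_published = django.dispatch.Signal(providing_args=['article'])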

management.py

The post_syncdb signal is a management signal that can only be connected to within a module named management.py. So if you need to connect to the post_syncdb signal, management.py is the only place to do it.
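Connecting a handler might look like this (the app and handler are hypothetical):

from django.db.models.signals import post_syncdb
from myapp import models

def create_defaults(sender, **kwargs):
	# create any default rows the app needs after syncdb
	pass

post_syncdb.connect(create_defaults, sender=models)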

feeds.py

To define your own syndication feeds, put the subclasses in feeds.py, then import them in urls.py.
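A minimal feed sketch, assuming the class-based Feed API and a hypothetical Article model:

from django.contrib.syndication.views import Feed
from myapp.models import Article

class LatestArticlesFeed(Feed):
	title = 'Latest articles'
	link = '/articles/'
	description = 'The most recent articles.'

	def items(self):
		return Article.objects.order_by('-created')[:10]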

sitemaps.py

Custom Sitemap classes should go in sitemaps.py. Much like the classes in admin.py, Sitemap subclasses are often fairly simple. Ideally, you can just use GenericSitemap and bypass custom Sitemap objects altogether.
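For example, with a hypothetical Article model:

from django.contrib.sitemaps import GenericSitemap
from myapp.models import Article

sitemaps = {
	'articles': GenericSitemap({'queryset': Article.objects.all()}, priority=0.5),
}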

context_processors.py

If you need to write custom template context processors, put them in context_processors.py. A good case for a custom context processor is to expose a setting to every template. Context processors are generally very simple, as they only return a dict with no more than a few key-value pairs. And don’t forget to add them to the TEMPLATE_CONTEXT_PROCESSORS setting.
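For example, here’s a processor that exposes a hypothetical setting to every template:

from django.conf import settings

def analytics(request):
	return {'ANALYTICS_ID': getattr(settings, 'ANALYTICS_ID', '')}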

templatetags

The templatetags subpackage is necessary when you want to provide custom template tags or filters. If you’re only creating one templatetag module, give it the same name as your app. This is what django.contrib.humanize does, among others. If you have more than one templatetag module, then you can namespace them by prefixing each module with your app name followed by an underscore. And be sure to create __init__.py in templatetags/, so python knows it’s a proper subpackage.
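A minimal custom filter might look like this (the module and filter names are hypothetical):

# myapp/templatetags/myapp_tags.py
from django import template

register = template.Library()

@register.filter
def shout(value):
	# usage in a template: {{ name|shout }}
	return value.upper()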

management/commands

If you want to provide custom management commands that can be used through manage.py or django-admin.py, these must be modules within the commands/ subdirectory of a management/ subdirectory. Both of these subdirectories must have __init__.py to make them python subpackages. Each command should be a separate module whose name will be the name of the command. This module should contain a single class named Command, which must inherit from BaseCommand or a BaseCommand subclass. For example, django.contrib.auth provides 2 custom management commands: changepassword and createsuperuser. Both of these commands are modules of the same name within django.contrib.auth.management.commands. For more details, see creating Django management commands.
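A skeleton command might look like this (the command name and behavior are hypothetical):

# myapp/management/commands/cleanup.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
	help = 'Removes stale records.'

	def handle(self, *args, **options):
		self.stdout.write('cleaning up...\n')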

Test Driven Development in Python

One of my favorite aspects of Python is that it makes practicing TDD very easy. What makes it so frictionless is the doctest module. It allows you to write a test at the same time you define a function. No setup, no boilerplate, just write a function call and the expected output in the docstring. Here’s a quick example of a fibonacci function.

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        >>> fib(1)
        1
        >>> fib(2)
        1
        >>> fib(3)
        2
        >>> fib(4)
        3
        '''
        if n == 0:
                return 0
        elif n == 1:
                return 1
        else:
                return fib(n - 1) + fib(n - 2)

If you want to run your doctests, just add the following three lines to the bottom of your module.

if __name__ == '__main__':
        import doctest
        doctest.testmod()

Now you can run your module to run the doctests, like python fib.py.

So how well does this fit in with the TDD philosophy? Here’s the basic TDD practices.

  1. Think about what you want to test
  2. Write a small test
  3. Write just enough code to fail the test
  4. Run the test and watch it fail
  5. Write just enough code to pass the test
  6. Run the test and watch it pass (if it fails, go back to step 4)
  7. Go back to step 1 and repeat until done

And now a step-by-step breakdown of how to do this with doctests, in excruciating detail.

1. Define a new empty method

def fib(n):
        '''Return the nth fibonacci number.'''
        pass

if __name__ == '__main__':
        import doctest
        doctest.testmod()

2. Write a doctest

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        '''
        pass

3. Run the module and watch the doctest fail

python fib.py
**********************************************************************
File "fib1.py", line 3, in __main__.fib
Failed example:
    fib(0)
Expected:
    0
Got nothing
**********************************************************************
1 items had failures:
   1 of   1 in __main__.fib
***Test Failed*** 1 failures.

4. Write just enough code to pass the failing doctest

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        '''
        return 0

5. Run the module and watch the doctest pass

python fib.py

6. Go back to step 2 and repeat

Now you can start filling in the rest of the function, one test at a time. In practice, you may not write code exactly like this, but the point is that doctests provide a really easy way to test your code as you write it.

Unit Tests

Ok, so doctests are great for simple tests. But what if your tests need to be a bit more complex? Maybe you need some external data, or mock objects. In that case, you’ll be better off with more traditional unit tests. But first, take a little time to see if you can decompose your code into a set of smaller functions that can be tested individually. I find that code that is easier to test is also easier to understand.
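For example, here’s a minimal unittest version of a fib test, assuming the fib function above lives in fib.py:

import unittest
from fib import fib

class FibTest(unittest.TestCase):
        def test_fib(self):
                # 0, 1, 1, 2, 3, 5 -> fib(5) == 5
                self.assertEqual(fib(5), 5)

if __name__ == '__main__':
        unittest.main()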

Running Tests

For running my tests, I use nose. I have a tests/ directory with a simple configuration file, nose.cfg:

[nosetests]
verbosity=3
with-doctest=1

Then in my Makefile, I add a test command so I can run make test.

test:
        @nosetests --config=tests/nose.cfg tests PACKAGE1 PACKAGE2

PACKAGE1 and PACKAGE2 are optional paths to your code. They could point to unit test packages and/or production code containing doctests.

And finally, if you’re looking for a continuous integration server, try Buildbot.