metaprogramming and politics

Decentralize. Take the red pill.

Parametrizing Python tests, generalized.

Parametrizing test runs is a hot topic among Python test tools. py.test recently grew a new pytest_generate_tests hook for parametrizing tests. I am going to introduce it by porting Michael Foord’s recent experiments with parametrizing unittest.py test cases and an example from Rob Collins’ testscenarios unittest extension. The gist of the new hook is that it makes it easy to implement and combine such schemes. It builds on the general idea of letting Python test functions receive function arguments ("funcargs") and of defining mechanisms for providing them.
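If you have not seen funcargs before, the core idea fits in a few lines. Here is a minimal sketch using the pytest_funcarg__NAME provider convention of py.test at the time (the name myarg is just an illustration):

#./test_funcarg_sketch.py
def pytest_funcarg__myarg(request):
    # provider: computes the value injected for the "myarg" argument
    return 42

def test_receives_funcarg(myarg):
    # py.test matches the parameter name and calls the provider above
    assert myarg == 42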

The parametrizer example, ported

The idea of Michael Foord’s Parametrizer example is to define multiple sets of parameters and have specified test functions receive those arguments. Here is a direct port of Michael’s example to use py.test’s new hook:

#./test_parametrize.py
import py

def pytest_generate_tests(metafunc):
    # called once for each test function
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        # schedule a new test function run with applied **funcargs
        metafunc.addcall(funcargs=funcargs)

class TestClass:
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)],
        'test_zerodivision': [dict(a=1, b=0), dict(a=3, b=2)],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

py.test automatically discovers both the pytest_generate_tests hook and the two test functions. For each test function it calls the hook, passing it a metafunc object which provides meta information about the test function and lets the hook add new test runs during collection.
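The metafunc API used throughout this post is small. Here is an annotated sketch of the attributes involved (the getattr guard is my addition, so the hook also tolerates classes without a params mapping):

def pytest_generate_tests(metafunc):
    # metafunc.function -- the test function object being collected
    # metafunc.cls      -- the class it is defined in, or None for plain functions
    # metafunc.module   -- the module containing the test function
    # metafunc.addcall(funcargs=..., id=...) -- schedule one test run
    params = getattr(metafunc.cls, 'params', {})
    for funcargs in params.get(metafunc.function.__name__, ()):
        metafunc.addcall(funcargs=funcargs)

Let’s see what just collecting the tests produces: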

$ py.test --collectonly test_parametrize.py

<Module 'test_parametrize.py'>
  <Class 'TestClass'>
    <Instance '()'>
      <FunctionCollector 'test_equals'>
        <Function 'test_equals[0]'>
        <Function 'test_equals[1]'>
        <Function 'test_equals[2]'>
      <FunctionCollector 'test_zerodivision'>
        <Function 'test_zerodivision[0]'>
        <Function 'test_zerodivision[1]'>

So we collected 5 actual runs of test functions. Let’s now run them:

$ py.test test_parametrize.py

========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: test_parametrize.py

test_parametrize.py F.F.F

============================== FAILURES ===============================
________________ TestClass.test_equals.test_equals[0] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:14: AssertionError
________________ TestClass.test_equals.test_equals[2] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 5, b = 4

    def test_equals(self, a, b):
>       assert a == b
E       assert 5 == 4

test_parametrize.py:14: AssertionError
__________ TestClass.test_zerodivision.test_zerodivision[1] ___________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 3, b = 2

    def test_zerodivision(self, a, b):
>       py.test.raises(ZeroDivisionError, "a/b")
E       ExceptionFailure: 'DID NOT RAISE'

test_parametrize.py:17: ExceptionFailure
================= 3 failed, 2 passed in 0.13 seconds =================

You can easily see the failing tests and the parameters that each test received. The output also showcases py.test’s traceback reporting, but that’s for another discussion.

The parametrizer example, decorated

So, you say, what about having a decorator specifying test parameters? Here is the same example, letting our hook implement a decorator scheme:

#./test_parametrize2.py

import py

def params(funcarglist):
    def wrapper(function):
        function.funcarglist = funcarglist
        return function
    return wrapper

def pytest_generate_tests(metafunc):
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)

# actual test code, above support code can live elsewhere

class TestClass:
    @params([dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)])
    def test_equals(self, a, b):
        assert a == b

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

This variant keeps the "test specification" tightly coupled to the test function. Running it with py.test test_parametrize2.py produces the same output as the first example port.
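Since the hook is plain Python, nothing stops you from combining both schemes in one implementation. A sketch, reusing the params mapping and the funcarglist attribute from the two examples above (the precedence rule is my choice):

def pytest_generate_tests(metafunc):
    # prefer parameters attached by the @params decorator ...
    funcarglist = getattr(metafunc.function, 'funcarglist', None)
    if funcarglist is None:
        # ... and fall back to a class-level 'params' mapping
        params = getattr(metafunc.cls, 'params', {})
        funcarglist = params.get(metafunc.function.__name__, ())
    for funcargs in funcarglist:
        metafunc.addcall(funcargs=funcargs)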

A quick port of "testscenarios"

Finally, let’s also port Rob Collins’ testscenarios example. Here is the full mechanism implemented with py.test, with the tests written in funcarg style:

#./test_parametrize3.py

def pytest_generate_tests(metafunc):
    for scenario in metafunc.cls.scenarios:
        metafunc.addcall(id=scenario[0], funcargs=scenario[1])

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo(self, attribute):
        assert isinstance(attribute, str)

Let’s run it:

$ py.test -v test_parametrize3.py

================================ test session starts ================================
python: platform linux2 -- Python 2.6.2 -- /usr/bin/python
test object 1: test_parametrize3.py

test_parametrize3.py:14: TestSampleWithScenarios.test_demo[basic] PASS
test_parametrize3.py:14: TestSampleWithScenarios.test_demo[advanced] PASS

============================= 2 passed in 0.06 seconds ==============================
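Note that the hook runs for every test function in the class, so any additional test method automatically runs once per scenario as well. A sketch (the second method is a hypothetical addition):

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo(self, attribute):
        assert isinstance(attribute, str)

    def test_another(self, attribute):
        # also collected as test_another[basic] and test_another[advanced]
        assert attribute.startswith('value')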

Easy, isn’t it?

Playing with it yourself

If you want to play with the examples yourself, you can hg clone https://bitbucket.org/hpk42/py-trunk/ and setup.py install it. In the example/parametrize/ directory you can then tweak and run the test examples. Let me know about any comments or problems you encounter.

Conclusion: deprecating "yield"

The three ports show that pytest_generate_tests is a hook that lets you implement many custom parametrization schemes. You can implement the hook in a test module or in a local or global plugin and share it within your project or with the community. The hook also integrates well with other uses of funcargs; see the extensive pytest funcarg documentation.
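A "local plugin" here usually means a conftest.py file next to your tests; putting the decorator-based hook there makes it available to every test module in that directory and below. A sketch, with the same hook body as in test_parametrize2.py:

#./conftest.py
def pytest_generate_tests(metafunc):
    # applies to all test functions collected in this directory and below
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)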

The new way to parametrize tests is meant to replace the use of yield in test functions, a.k.a. "generative tests", which nosetests also supports. Yield-style generative tests have received criticism, and despite being the one who invented them, I mostly agree and recommend not using them anymore.
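For reference, a yield-style generative test looks roughly like this (shown only for contrast, not as a recommendation):

def check_even(n):
    assert n % 2 == 0

def test_evens():
    # each yield schedules a separate call: check_even(0), check_even(2), check_even(4)
    for i in range(0, 6, 2):
        yield check_even, i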

I’d like to thank Samuele Pedroni and Ronny Pfannschmidt, who helped evolve the new hook and pushed me to implement it. Oh, and did I emphasize that working feedback-based and documentation-driven is so much better than going wild on hypothetical usages?

have fun, holger

Written by holger krekel

May 13, 2009 at 2:55 pm
