metaprogramming and politics

Decentralize. Take the red pill.

Archive for May 2009

code-centered issue tracking?

with 12 comments

Is there anything that allows code-centered issue tracking? Recently, Gustavo Niemeyer had an interesting piece up in which he suggests private/protected syntax for Python. His point is that code collaboration doesn’t otherwise work in larger, uncontrolled groups of developers. I agree with most of his observations but not with his conclusion. I’d rather see reduced communication costs for changing code. Here is an example of what i mean. If i see a code fragment like this:


def somefunc(self, x, y, z):
    self._cache = func(x) + other(y)
    self.z = self._cache + third(z)

i want to be able to visually mark this code, write a comment like:

hey, i need the third(z) value, and doing self.z-self._cache feels bad – can you help?

and i want my development environment to automatically route this question with exact code refs to the maintainers of the code. This should not take longer than 20 seconds and be automatically managed.

On the receiving side, as the maintainer, i want to get notified and be able to say:

getissues mypkg/subpkg

and have it automatically list all files and issues for that package. This way i easily see the above question, do a patch, and issue:

sendpatch PATCHNAME

and type in a message. Maybe it gets automatically CCed to other library maintainers, a mailing list etc. As the original sender i get back a mail and can use a cmdline tool to apply the patch, give feedback and so forth …

IOW, i want tools that automatically manage addressing the issue, referencing the code, finding out package info and release numbers, formatting the comment relative to the cited code, sending the mail, registering an issue and so on. Is there something like this?
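To pin down what i mean by “automatically managed”, here is a rough sketch of the sending side. Everything here is hypothetical – the codequestion.py name, the hard-coded addresses and the idea that a maintainer address could be looked up from package metadata are just assumptions to illustrate the workflow, not an existing tool:

#./codequestion.py -- hypothetical sketch, not an existing tool
import sys
import smtplib
from email.mime.text import MIMEText

MAINTAINER = "maintainer@example.com"   # would normally come from package metadata
SENDER = "me@example.com"

def format_question(path, firstline, lastline, question):
    # cite the exact lines so the receiver sees precisely which code is meant
    with open(path) as f:
        lines = f.readlines()[firstline - 1:lastline]
    cited = "".join("    %d: %s" % (firstline + i, line)
                    for i, line in enumerate(lines))
    return "code question for %s, lines %d-%d:\n\n%s\n%s\n" % (
        path, firstline, lastline, cited, question)

def main():
    path, firstline, lastline, question = (
        sys.argv[1], int(sys.argv[2]), int(sys.argv[3]), sys.argv[4])
    msg = MIMEText(format_question(path, firstline, lastline, question))
    msg["Subject"] = "[codequestion] %s:%d-%d" % (path, firstline, lastline)
    msg["From"] = SENDER
    msg["To"] = MAINTAINER
    # a real tool would also record package/revision info and register an issue;
    # this sketch only hands the mail to a local MTA
    smtplib.SMTP("localhost").sendmail(SENDER, [MAINTAINER], msg.as_string())

if __name__ == "__main__":
    main()

The receiving-side commands (getissues, sendpatch) would be similarly thin wrappers around an issue store and the version control tool.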

If not: don’t we already have most of the pieces? What makes sense to use for it? Mercurial and patch queues? Maybe the new Bitbucket API? Integration with an existing issue tracker? The first goal would be to have it manage itself, i guess 🙂

cheers, holger

Written by holger krekel

May 18, 2009 at 11:35 am

Posted in metaprogramming

Putting test-hooks into local and global plugins

with one comment

(updated to match the 1.0 API and features) I’d like to clarify py.test’s hook lookup and showcase how easy it is to put things into per-project or global plugins. The pytest_generate_tests hook was discussed in the last blog post on parametrizing tests. Here is where you can write this hook down:

  • in the test module
  • in a local plugin
  • or in a global plugin

The last blog post showed how to put the hook directly into the test module. Let’s now take a look at what putting a hook into a "local or global plugin" means.

Putting a hook into a local plugin

Putting the generate-hook into a local plugin means creating a conftest.py file in your test directory or package root directory with these contents:


#./conftest.py
def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one

As with test files, functions and function arguments, conftest.py files and the exact name ConftestPlugin are discovered automatically and the plugin is instantiated.
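To make this concrete, here is a sketch of the params-driven hook from the parametrizing post living in a conftest.py, so the test module itself carries no generation logic. The file name test_example.py and the reduced params dict are just for illustration:

#./conftest.py
def pytest_generate_tests(metafunc):
    # only act on test classes that carry a 'params' mapping
    # (as in the parametrizing post); other tests pass through untouched
    params = getattr(metafunc.cls, "params", None)
    if params and metafunc.function.__name__ in params:
        for funcargs in params[metafunc.function.__name__]:
            # schedule one test run per parameter set
            metafunc.addcall(funcargs=funcargs)

#./test_example.py -- the test module now contains only test code
class TestClass:
    params = {'test_equals': [dict(a=1, b=2), dict(a=3, b=3)]}

    def test_equals(self, a, b):
        assert a == b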

Putting a hook into a global plugin

Putting the generate-hook into a global cross-project plugin requires inventing a file or package name with the fixed pytest_ prefix. Here is how you would write the generate-hook down in a self-contained pytest_mygen.py file:

#./pytest_mygen.py

def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one

The hook name, including its metafunc argument, needs to be used exactly as described – loading your plugin will otherwise result in an error.

Activating a global plugin

While local plugins are discovered automatically, global plugins need to be specified. To activate a global plugin pytest_mygen you can use any of the following three ways:

py.test -p mygen             # for command line activation

export PYTEST_PLUGINS=mygen  # for shell/env activation

pytest_plugins = "mygen"     # in a test module or conftest.py

py.test loads command-line or environment-specified plugins very early so that plugins can add command line options.

multiple pytest_generate_tests implementations

All existing pytest_generate_tests hooks will be called once for each test function. You can have multiple hooks, but a generate-hook usually only acts on a specific funcarg by doing a check like this:

if "myarg" in metafunc.funcargnames:
    ...
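As a sketch of how that plays out, here are two independent generate-hooks living in different plugins; the funcarg names "myarg" and "otherarg" and the plugin/file names are made up. Both hooks get called for every test function, but each one only parametrizes the tests that ask for its funcarg:

#./pytest_gen_myarg.py -- hypothetical global plugin, activated e.g. with -p gen_myarg
def pytest_generate_tests(metafunc):
    # acts only on tests that request the "myarg" funcarg
    if "myarg" in metafunc.funcargnames:
        for value in (1, 2, 3):
            metafunc.addcall(funcargs=dict(myarg=value))

#./conftest.py -- local plugin with its own, independently guarded hook
def pytest_generate_tests(metafunc):
    # acts only on tests that request the "otherarg" funcarg
    if "otherarg" in metafunc.funcargnames:
        metafunc.addcall(funcargs=dict(otherarg="a"))
        metafunc.addcall(funcargs=dict(otherarg="b"))

#./test_independent.py
def test_with_myarg(myarg):
    assert myarg in (1, 2, 3)

def test_with_otherarg(otherarg):
    assert otherarg in ("a", "b")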

So, you say, what about a test function with multiple arguments – could each function argument come from a different generating provider factory?

This would mean that multiple generators act independently but need to collaborate and combine their values for a given test function. Well, if you encounter a real need for it, please come forward and we’ll think up a fitting API extension. A couple of days ago i had a "combining funcargs" API call implemented but decided to remove it, because these days i try hard to only add features that have a proven need.

Putting test support code into plugins FTW

Lastly, let me point out that putting the pytest_generate_tests hook into a plugin allows the actual test code to stay ignorant of how or where parametrization is implemented. For example, adding command line options for influencing the generation or selection of parameter sets, including randomization, would not change a single character in the test module or test code.
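As a sketch of that idea (the --numvalues option, the "num" funcarg and the pytest_numgen plugin name are my own assumptions for illustration), a global plugin could add a command line option and use it inside its generate-hook, while the test module stays unchanged:

#./pytest_numgen.py -- hypothetical global plugin

def pytest_addoption(parser):
    # add a command line option that controls how many values get generated
    parser.addoption("--numvalues", action="store", default="3",
                     help="number of generated values for the 'num' funcarg")

def pytest_generate_tests(metafunc):
    # only parametrize tests that ask for the 'num' funcarg
    if "num" in metafunc.funcargnames:
        for i in range(int(metafunc.config.option.numvalues)):
            metafunc.addcall(funcargs=dict(num=i))

#./test_num.py -- the test code stays ignorant of the option

def test_num_is_small(num):
    assert num < 100

Running something like py.test -p numgen --numvalues=5 would then produce five runs of test_num_is_small without touching the test file.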

have fun and let me know what you think, holger

Written by holger krekel

May 14, 2009 at 12:01 pm

Posted in metaprogramming

Parametrizing Python tests, generalized.

with one comment

Parametrizing test runs is something of a hot topic among Python test tools. py.test recently grew a new pytest_generate_tests hook to parametrize tests. I am going to introduce it by providing ports of Michael Foord’s recent experiments with parametrizing unittest.py test cases and an example from Rob Collins’ testscenarios unittest extension. The gist of the new hook is that it lets you easily implement and combine these schemes. It builds on the general idea of allowing Python test functions to receive function arguments ("funcargs") – and of defining mechanisms for how to provide them.

The parametrizer example, ported

The idea of Michael Foord’s Parametrizer example is to define multiple sets of parameters and have specified test functions receive those arguments. Here is a direct port of Michael’s example to use py.test’s new hook:

#./test_parametrize.py
import py

def pytest_generate_tests(metafunc):
    # called once per each test function
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        # schedule a new test function run with applied **funcargs
        metafunc.addcall(funcargs=funcargs)

class TestClass:
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)],
        'test_zerodivision': [dict(a=1, b=0), dict(a=3, b=2)],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

py.test automatically discovers both the pytest_generate_tests hook and the two test functions. For each test function it calls the hook, passing it a metafunc object which provides meta information about the test function and allows adding new test runs during collection. Let’s see what just collecting the tests produces:

$ py.test --collectonly test_parametrize.py

<Module 'test_parametrize.py'>
  <Class 'TestClass'>
    <Instance '()'>
      <FunctionCollector 'test_equals'>
        <Function 'test_equals[0]'>
        <Function 'test_equals[1]'>
        <Function 'test_equals[2]'>
      <FunctionCollector 'test_zerodivision'>
        <Function 'test_zerodivision[0]'>
        <Function 'test_zerodivision[1]'>

So we collected 5 actual test function runs. Let’s now run them:

$ py.test test_parametrize.py

========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: test_parametrize.py

test_parametrize.py F.F.F

============================== FAILURES ===============================
________________ TestClass.test_equals.test_equals[0] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:14: AssertionError
________________ TestClass.test_equals.test_equals[2] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 5, b = 4

    def test_equals(self, a, b):
>       assert a == b
E       assert 5 == 4

test_parametrize.py:14: AssertionError
__________ TestClass.test_zerodivision.test_zerodivision[1] ___________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 3, b = 2

    def test_zerodivision(self, a, b):
>       py.test.raises(ZeroDivisionError, "a/b")
E       ExceptionFailure: 'DID NOT RAISE'

test_parametrize.py:17: ExceptionFailure
================= 3 failed, 2 passed in 0.13 seconds =================

You can easily see the failing tests and the parameters that the tests received. It also showcases py.test traceback reporting but that’s for another discussion.

The parametrizer example, decorated

So, you say, what about having a decorator specifying test parameters? Here is the same example, letting our hook implement a decorator scheme:

#./test_parametrize2.py

import py

def params(funcarglist):
    def wrapper(function):
        function.funcarglist = funcarglist
        return function
    return wrapper

def pytest_generate_tests(metafunc):
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)

# actual test code, above support code can live elsewhere

class TestClass:
    @params([dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)], )
    def test_equals(self, a, b):
        assert a == b

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

This variant keeps the "test specification" tightly coupled to the test function. Running it with py.test test_parametrize2.py produces the same output as the first example port.

A quick port of "testscenarios"

Finally, let’s also port Rob Collins’ testscenarios example. Here is the implementation of the full mechanism with py.test, with the tests in funcarg style:

#./test_parametrize3.py

def pytest_generate_tests(metafunc):
    for scenario in metafunc.cls.scenarios:
        metafunc.addcall(id=scenario[0], funcargs=scenario[1])

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo(self, attribute):
        assert isinstance(attribute, str)

Let’s run it:


$ py.test -v test_parametrize3.py

================================ test session starts ================================
python: platform linux2 -- Python 2.6.2 -- /usr/bin/python
test object 1: test_parametrize3.py

test_parametrize3.py:14: TestSampleWithScenarios.test_demo[basic] PASS
test_parametrize3.py:14: TestSampleWithScenarios.test_demo[advanced] PASS

============================= 2 passed in 0.06 seconds ==============================

Easy, isn’t it?

Playing with it yourself

If you want to play with the examples yourself, you can hg clone https://bitbucket.org/hpk42/py-trunk/ and setup.py install it. In the example/parametrize/ directory you can tweak and run the test examples. Let me know about comments or problems you may encounter.

Conclusion: deprecating "yield"

The three ports show that pytest_generate_tests is a hook that allows implementing many custom parametrization schemes. You can implement the hook in a test module or in a local or global plugin, sharing it within your project or with the community. The hook also integrates well with other usages of funcargs; see the extensive pytest funcarg documentation.

The new way to parametrize tests is meant to substitute the yield usage in test functions, aka "generative tests", also used by nosetests. yield-style generative tests have received criticism and, despite being the one who invented them, i mostly agree and recommend not using them anymore.
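For comparison, here is a minimal sketch of what is being replaced; the function names are made up, but the shape is the classic yield-style generative test next to a hook-based equivalent:

#./test_yield_style.py -- the old generative style
def check_even(n):
    assert n % 2 == 0

def test_evens():
    for i in (2, 4, 6):
        yield check_even, i

#./test_hook_style.py -- the same cases with the new hook
def pytest_generate_tests(metafunc):
    if "n" in metafunc.funcargnames:
        for i in (2, 4, 6):
            metafunc.addcall(funcargs=dict(n=i))

def test_even(n):
    assert n % 2 == 0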

I’d like to thank Samuele Pedroni and Ronny Pfannschmidt, who helped to evolve the new hook and pushed me to implement it. Oh, and did i emphasize that working feedback-based and documentation-driven is so much better than going wild on hypothetical usages?

have fun, holger

Written by holger krekel

May 13, 2009 at 2:55 pm