metaprogramming and politics

Decentralize. Take the red pill.

Archive for the ‘metaprogramming’ Category

pylib 1.0.0 released: the testing-with-python innovations continue



It took a few betas, but i finally uploaded the 1.0.0 py lib release, featuring the mature and powerful py.test tool and "execnet-style" elastic distributed programming. The new release brings many new advanced automated testing features – here is a quick summary:

  • funcargs – pythonic zero-boilerplate fixtures for Python test functions (see the sketch after this list):
    • totally separates test code, test configuration and test setup
    • ideal for integration and functional tests
    • allows for flexible and natural test parametrization schemes
  • new plugin architecture, allowing easy-to-write project-specific and cross-project single-file plugins. The most notable new external plugin is oejskit, which enables running and reporting of JavaScript unit tests in real-life browsers.
  • many new features done in easy-to-improve default plugins, highlights:
    • xfail: mark tests as "expected to fail" and report separately.
    • pastebin: automatically send tracebacks to the pocoo paste service
    • capture: flexibly capture stdout/stderr of subprocesses, per-test …
    • monkeypatch: safely monkeypatch modules/classes from within tests
    • unittest: run and integrate traditional unittest.py tests
    • figleaf: generate html coverage reports with the figleaf module
    • resultlog: generate buildbot-friendly reporting output
  • distributed testing and elastic distributed execution:
    • new unified "TX" URL scheme for specifying remote processes
    • new distribution modes "--dist=each" and "--dist=load"
    • new sync/async ways to handle 1:N communication
    • improved documentation
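
To give a flavour of the funcargs mechanism from the first bullet, here is a minimal sketch using the 1.0 funcarg provider naming convention – the "myserver" name and its values are invented for illustration:

#./test_funcarg_demo.py
# hypothetical example of the funcarg provider protocol

def pytest_funcarg__myserver(request):
    # setup lives here, completely separate from the test code
    return {"host": "localhost", "port": 8888}

def test_connect(myserver):
    # py.test matches the argument name to the provider above
    assert myserver["port"] == 8888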

The py lib continues to offer most of the functionality used by the testing tool in independent namespaces.
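
For instance, the path-handling namespace works completely independently of the testing tool – a minimal sketch:

import py

# py.path.local objects behave like enhanced filesystem paths
tmpdir = py.path.local.mkdtemp()
target = tmpdir.join("hello.txt")
target.write("hello world")
assert target.read() == "hello world"
tmpdir.remove()  # clean up the temporary directory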

Some non-test-related code, notably greenlets/co-routines and api-generation, now lives in projects of its own, which simplifies the installation procedure because no C-extensions are required anymore.

The whole package should work well on Linux, Win32 and OSX, with Python 2.3, 2.4, 2.5 and 2.6. (Expect Python3 compatibility soon!)

For more info, see the py.test and py lib documentation:

http://pytest.org

http://pylib.org

have fun, holger


Written by holger krekel

August 4, 2009 at 10:05 am

code-centered issue tracking?


Is there anything that allows code-centered issue tracking? Recently, Gustavo Niemeyer had an interesting piece up where he suggests private/protected syntax for Python. His point is that without such protection, code collaboration doesn’t work in larger, uncontrolled groups of developers. I agree with most of his observations but not with his conclusion. I’d rather like to see reduced communication costs for changing code. Here is an example of what i mean. If i see a code fragment like this:


def somefunc(self, x, y, z):
    self._cache = func(x) + other(y)
    self.z = self._cache + third(z)

i want to be able to visually mark this code, write a comment like:

hey, i need the third(z) value, and doing self.z-self._cache feels bad – can you help?

and i want my development environment to automatically route this question, with exact code references, to the maintainers of the code. This should not take longer than 20 seconds and should be managed automatically.

On the receiving side, as the maintainer, i want to get notified and be able to say:

getissues mypkg/subpkg

and have it automatically list all files and issues for that package. So i can easily see the above question, prepare a patch, and issue:

sendpatch PATCHNAME

and type in a message. Maybe it gets automatically CCed to other library maintainers, a mailing list, etc. As the original sender, i get back a mail and can use a command line tool to apply the patch, give feedback and so forth …

IOW, i want tools that automatically manage the issue-addressing, code-referencing and finding-out about package info/release numbers, format the comment related to the cited code, send the mail, register an issue automatically and so on. Is there something like this?
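
To make the idea concrete, here is a purely hypothetical sketch of such a helper – none of these names exist in any real tool, and mail is just one conceivable routing backend:

#./sendissue.py
# hypothetical sketch: format a code-citing question and mail it off
import smtplib

def send_issue(path, firstline, lastline, comment, maintainer):
    # cite the exact code lines the comment refers to
    cited = "".join(open(path).readlines()[firstline - 1:lastline])
    body = ("Subject: code issue: %s:%d\nTo: %s\n\n%s\n\n"
            "code reference %s:%d-%d:\n%s"
            % (path, firstline, maintainer, comment,
               path, firstline, lastline, cited))
    server = smtplib.SMTP("localhost")
    server.sendmail("me@example.com", [maintainer], body)
    server.quit()

# usage, routing the question from the example above:
# send_issue("mypkg/subpkg/mod.py", 10, 12,
#            "hey, i need the third(z) value ... can you help?",
#            "maintainer@example.com")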

If not: don’t we already have most of the pieces? What makes sense to use for it? Mercurial and patch queues? Maybe the new Bitbucket API? Integrate with an existing issue tracker? The first goal would be to have it manage itself, i guess 🙂

cheers, holger

Written by holger krekel

May 18, 2009 at 11:35 am

Posted in metaprogramming

Putting test-hooks into local and global plugins


(updated to match the 1.0 API and features) I’d like to clarify py.test’s hook lookup and showcase how easy it is to write things into per-project or global plugins. The pytest_generate_tests hook was discussed in the last blog post on parametrizing tests. Here is where you can write this hook down:

  • in the test module
  • in a local plugin
  • or in a global plugin

The last blog post showed how to put the hook directly into the test module. Let’s now take a look at what putting a hook into a "local or global plugin" means.

Putting a Hook in a local plugin

Putting the generate-hook into a local plugin means creating a conftest.py file in your test directory or package root directory with these contents:


#./conftest.py
def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one,
    # e.g. the params-based hook from the previous post:
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        metafunc.addcall(funcargs=funcargs)

As with test files, functions and arguments, conftest.py files are automatically discovered and loaded as plugins.

Putting a hook into a global plugin

Putting the generate-hook into a global cross-project plugin requires inventing a file or package name with the fixed pytest_ prefix. Here is how you would write down the generate hook in a self-contained pytest_mygen.py file:

#./pytest_mygen.py

def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one,
    # e.g. the params-based hook from the previous post:
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        metafunc.addcall(funcargs=funcargs)

The hook name, including its metafunc argument, needs to be used exactly as described – loading your plugin will otherwise result in an error.

Activating a global plugin

While local plugins are automatically discovered, global plugins need to be specified. To activate a global plugin pytest_mygen you can use any of the following three ways:

py.test -p mygen             # for command line activation

export PYTEST_PLUGINS=mygen  # for shell/env activation

pytest_plugins = "mygen"     # in a test module or conftest.py

py.test loads command-line or environment-specified plugins very early so that plugins can add command line options.

multiple pytest_generate_tests implementations

All existing pytest_generate_tests hooks will be called once for each test function. You can have multiple hooks, but a generate-hook usually only acts on a specific funcarg by doing a check like this:

if "myarg" in metafunc.funcargnames:
    ...

So, you say, what about a test function with multiple arguments – could each function argument come from a different generating provider factory?

This would mean that multiple generators act independently but want to collaborate and combine their values for a given test function. Well, if you encounter a real need for it, please come forward and we’ll think up a fitting API extension. A couple of days ago i had a "combining funcargs" API call implemented but decided to remove it, because i try hard these days to only add features that have a proven need.

Putting test support code into plugins FTW

Lastly, let me point out that putting the pytest_generate_tests hook into a plugin allows the actual test code to stay ignorant of exactly how or where parametrization is implemented. For example, adding command line options for influencing the generation or selection of parameter sets, including randomizing, would not change a single character in the test module and test code.
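
As a sketch of that idea (the plugin name, option name and parameter values are made up; this assumes the pytest_addoption hook and the config attribute on metafunc):

#./pytest_randgen.py
# hypothetical plugin: a command line option influences parameter generation
import random

def pytest_addoption(parser):
    parser.addoption("--shuffle-params", action="store_true",
                     help="randomize the order of generated parameter sets")

def pytest_generate_tests(metafunc):
    if "myarg" in metafunc.funcargnames:
        params = [1, 2, 3]  # made-up parameter sets
        if metafunc.config.option.shuffle_params:
            random.shuffle(params)
        for value in params:
            metafunc.addcall(funcargs=dict(myarg=value))

The test module keeps its plain def test_something(myarg) signature throughout.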

have fun and let me know what you think, holger

Written by holger krekel

May 14, 2009 at 12:01 pm

Posted in metaprogramming

Parametrizing Python tests, generalized.


Parametrizing test runs is something of a hot topic with Python test tools. py.test recently grew a new pytest_generate_tests hook to parametrize tests. I am going to introduce it by providing ports of Michael Foord’s recent experiments with parametrizing unittest.py test cases and an example from Rob Collins’ testscenarios unittest extension. The gist of the new hook is that it allows you to easily implement and combine such schemes. It builds on the general idea of allowing python test functions to receive function arguments ("funcargs") – and defining mechanisms for how to provide them.

The parametrizer example, ported

The idea of Michael Foord’s Parametrizer example is to define multiple sets of parameters and have specified test functions receive those arguments. Here is a direct port of Michael’s example to use py.test’s new hook:

#./test_parametrize.py
import py

def pytest_generate_tests(metafunc):
    # called once per each test function
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        # schedule a new test function run with applied **funcargs
        metafunc.addcall(funcargs=funcargs)

class TestClass:
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)],
        'test_zerodivision': [dict(a=1, b=0), dict(a=3, b=2)],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

py.test automatically discovers both the pytest_generate_tests hook and the two test functions. For each test function it calls the hook, passing it a metafunc object which provides meta information about the test function and allows adding new test calls during collection. Let’s see what just collecting the tests produces:

$ py.test --collectonly test_parametrize.py

<Module 'test_parametrize.py'>
  <Class 'TestClass'>
    <Instance '()'>
      <FunctionCollector 'test_equals'>
        <Function 'test_equals[0]'>
        <Function 'test_equals[1]'>
        <Function 'test_equals[2]'>
      <FunctionCollector 'test_zerodivision'>
        <Function 'test_zerodivision[0]'>
        <Function 'test_zerodivision[1]'>

So we collected 5 actual test function runs. Let’s now run them:

$ py.test test_parametrize.py

========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: test_parametrize.py

test_parametrize.py F.F.F

============================== FAILURES ===============================
________________ TestClass.test_equals.test_equals[0] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:14: AssertionError
________________ TestClass.test_equals.test_equals[2] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 5, b = 4

    def test_equals(self, a, b):
>       assert a == b
E       assert 5 == 4

test_parametrize.py:14: AssertionError
__________ TestClass.test_zerodivision.test_zerodivision[1] ___________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 3, b = 2

    def test_zerodivision(self, a, b):
>       py.test.raises(ZeroDivisionError, "a/b")
E       ExceptionFailure: 'DID NOT RAISE'

test_parametrize.py:17: ExceptionFailure
================= 3 failed, 2 passed in 0.13 seconds =================

You can easily see the failing tests and the parameters each test received. The output also showcases py.test’s traceback reporting, but that’s for another discussion.

The parametrizer example, decorated

So, you say, what about having a decorator that specifies test parameters? Here is the same example, letting our hook implement a decorator scheme:

#./test_parametrize2.py

import py

def params(funcarglist):
    def wrapper(function):
        function.funcarglist = funcarglist
        return function
    return wrapper

def pytest_generate_tests(metafunc):
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)

# actual test code, above support code can live elsewhere

class TestClass:
    @params([dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)])
    def test_equals(self, a, b):
        assert a == b

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

This variant keeps the "test specification" tightly coupled to the test functions. Running it with py.test test_parametrize2.py produces the same output as the first example port.

A quick port of "testscenarios"

Finally, let’s also port Rob Collins’ testscenarios example. Here is the implementation of the full mechanism with py.test, with the tests in funcarg style:

#./test_parametrize3.py

def pytest_generate_tests(metafunc):
    for scenario in metafunc.cls.scenarios:
        metafunc.addcall(id=scenario[0], funcargs=scenario[1])

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo(self, attribute):
        assert isinstance(attribute, str)

Let’s run it:


$ py.test -v test_parametrize3.py

================================ test session starts ================================
python: platform linux2 -- Python 2.6.2 -- /usr/bin/python
test object 1: test_parametrize3.py

test_parametrize3.py:14: TestSampleWithScenarios.test_demo[basic] PASS
test_parametrize3.py:14: TestSampleWithScenarios.test_demo[advanced] PASS

============================= 2 passed in 0.06 seconds ==============================

Easy, isn’t it?

Playing yourself

If you want to play with the examples yourself, you can use hg clone https://bitbucket.org/hpk42/py-trunk/ and setup.py install it. In the example/parametrize/ directory you can tweak and run the test examples. Let me know about any comments or problems you encounter.

Conclusion: deprecating "yield"

The three ports show that pytest_generate_tests is a hook that allows implementing many custom parametrization schemes. You can implement the hook in a test module or in a local or global plugin, sharing it within your project or with the community. The hook also integrates well with other usages of funcargs; see the extensive pytest funcarg documentation.

The new way to parametrize tests is meant to replace the yield usage in test functions, aka "generative tests", which nosetests also supports. Yield-style generative tests have received criticism, and despite being the one who invented them, i mostly agree and recommend not using them anymore.
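
For reference, here is a minimal sketch of the old yield idiom that this replaces:

#./test_generative.py
# the old "generative test" idiom, now discouraged

def check_even(x):
    assert x % 2 == 0

def test_even_numbers():
    # each yielded (function, argument) pair becomes a test call
    for x in [2, 4, 6]:
        yield check_even, x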

I’d like to thank Samuele Pedroni and Ronny Pfannschmidt, who helped to evolve the new hook and pushed me to implement it. Oh, and did i emphasize that working feedback-based and documentation-driven is so much better than going wild on hypothetical usages?

have fun, holger

Written by holger krekel

May 13, 2009 at 2:55 pm

py.test: shrinks code, grows plugins and integration testing


Just before Pycon i uploaded the lean and mean py.test 1.0.0b1 beta release. A lot of code got moved out, most notably the greenlets C-extension. This simplifies packaging and increases the py lib’s focus on test facilities. It now has a pluginized architecture and provides funcargs, which tremendously help with writing functional and integration tests. One such example is py.test’s own set of acceptance tests, which check the behaviour of the command line tool from a user perspective. Other features include a zero-install mechanism for distributing tests, which also allows conveniently driving cross-platform integration tests.
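
To give a flavour of such funcarg-based acceptance testing, here is a hedged sketch assuming the bundled pytester plugin and its testdir funcarg, along the lines of what py.test’s own test suite does:

#./test_acceptance.py
pytest_plugins = "pytester"  # provides the "testdir" funcarg

def test_failure_is_reported(testdir):
    # write an example test file into a fresh temporary directory
    testdir.makepyfile("""
        def test_fail():
            assert 0
    """)
    # run the command line tool just like a user would
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*1 failed*"])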

Unittesting, functional and integration testing are now official targets. No doubt, test category naming is a slippery subject, and it’s a good idea to consider test category names as labels rather than "either-or" categories. In the end, tests are about being useful for software development, which today means coding for a wide variety of environments and facing integration and deployment issues at every corner. I think that testing tools have yet to develop their full potential. In my opinion, automated testing and deployment techniques ought to fully integrate with each other, and i consider the coding of distributed integration test scenarios as key to that.

I’d like to make the upcoming final py.test 1.0 release a starting point for facilitating the integration of many more test methods and test mechanisms via plugins. Some people have already contributed a pytest_figleaf (for coverage testing) and a pytest_twisted (for running twisted-style tests), although i am still finalizing API details and writing up docs. So i am very happy with how things are turning out, and also motivated by the positive feedback on the two testing tutorials that Brian Dorsey and i gave at Pycon (see his writeup).

Btw, if you use the quickstart and encounter any problems, please use the brand new issue tracker on bitbucket. I started hosting a mercurial py.test trunk repository, and so far it’s been a positive experience; i guess i’ll fully switch to mercurial soon. Alas, i’ll probably drop setuptools before i go 1.0 with py.test – it simply causes too many troubles. py.test’s trunk has a straightforward setup.py, and i intend to release a second beta with refined docs and setuptools removed. Stay tuned for many more news in May – right now, i am looking forward to a 1-week offline holiday 🙂

cheers, holger

Written by holger krekel

April 18, 2009 at 9:05 pm

new: simultaneously test your code on all platforms


It is now convenient to do test runs that transparently distribute a normal test run to multiple CPUs or processes. You can make local code changes and immediately run the tests of your choice simultaneously on all available platforms and python versions.

A small example. Suppose you have a Python pkg directory containing the typical __init__.py and a test_something.py test file with this content:

import sys, os

def test_pathsep():
    assert os.sep == "/"

def test_platform():
    assert sys.platform != "darwin"

Without further ado (no special configuration files, no remote installations) i can now run:

py.test pkg/test_something.py --dist=each --rsyncdir=pkg --tx socket=192.168.1.102:8888  --tx ssh=noco --tx popen//python=python2.4

This will rsync the "pkg" directory to the specified test execution places and then run the tests on my windows machine (reachable through the specified socket connection), on my mac laptop (reachable through ssh) and in a local python 2.4 process. Here is the full output of the distributed test run, which shows 6 tests (4 passed, 2 failed), i.e. our 2 tests multiplied by 3 platforms. It shows one expected failure like so:

[2] ssh=noco -- platform darwin, Python 2.5.1-final-0 cwd: /Users/hpk/pyexecnetcache

    def test_platform():
>       assert sys.platform != "darwin"
E       assert 'darwin' != 'darwin'
E        +  where 'darwin' = sys.platform

/Users/hpk/pyexecnetcache/pkg/test_something.py:8: AssertionError

Hope you find the output obvious enough. I’ve written up some docs in the new distributing tests section.

I also just uploaded a 1.0 alpha py lib release, so you can type "easy_install -U py" to get the alpha release (use at your own risk!). Did i mention that it passes all of its >1000 tests on all platforms simultaneously? 🙂

This is all made possible by py.execnet, which is the underlying mechanism for instantiating local and remote processes through SSH- or socket servers and executing code in them, without requiring any prior installation on the remote sides. Zero installation really means that you only need a working python interpreter, nothing more. You can make use of this functionality without using py.test. In fact, i plan to soon separate out py.test and make the py lib smaller and smaller …
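
A minimal sketch of using py.execnet directly, assuming the 1.0-era gateway API:

#./execnet_demo.py
import py

# PopenGateway starts a local subprocess; SshGateway("host") and
# SocketGateway(host, port) work the same way, with no remote installation
gw = py.execnet.PopenGateway()
channel = gw.remote_exec("""
    import sys
    channel.send(sys.platform)
""")
print channel.receive()   # the remote side's platform string
gw.exit()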

so, enjoy, and i hope it makes as much sense to you as it does to me 🙂 And i hope to see some of you at Pycon … btw, anybody interested in joining me for a Drum and Bass party thursday night? cheers, holger

Written by holger krekel

March 23, 2009 at 6:12 pm

Python, connectivity, diving … bits from OpenBossa



There is a lot of Python usage on Nokia Internet Tablets and among INDT developers. When asked, about half the audience, something like 50 people, raised their hands for using and preferring Python. The new iphone-like Canola2 interface was done in 4 months of development, one of the developers, Gustavo Barbieri, told me – realizing more features in less time compared to Canola 1, which was written in C. That doesn’t mean there aren’t problems, but overall they were quite happy. Kate Alhola also presented "Fremantle", the new alpha software release, which apparently does all its rendering via OpenGL-ES 2.0 (OpenGL for embedded systems). This was a talk packed with technical details – what i got is that Nokia is being quite open about their intentions and software.

Marcel Holtmann presented a new linux connection manager which completely does away with networking scripts and automates a lot of the management. One of its goals is to let a device get an IP address and work efficiently with DNS without needing 5 processes to start up first. It can handle IP connections via wlan, ethernet, gsm and bluetooth, and works with a plugin architecture. All core components are GPLed, and user interfaces communicate with it via D-Bus.

Then Hadi Nahari presented his view on mobile and cloud security. In his view, these two worlds have many security aspects in common. Hadi currently works as a security architect for PayPal and previously did the same for eBay. He pushes for talking and thinking about "end to end" security and "security assets in motion". Looking at how my private information on a mobile phone is secured, and how it moves from one execution environment to the next, is as important as looking at how the backends handle this data. True enough, many of the recent problems actually arose in the backends, not in the end user devices.

my diving group

Apart from the relaxed schedule, i am enjoying the views and the people here. I spontaneously joined a group with Stefan Seefeld from CodeSourcery (he does Boost+Python bindings – another interesting talk!) and had my first diving session. And i am scheduled to give my PyPy talk in 2 hours. Curious to see how that goes.

cheers,

holger

Written by holger krekel

March 11, 2009 at 11:09 am

Posted in metaprogramming