metaprogramming and politics

Decentralize. Take the red pill.

Archive for the ‘metaprogramming’ Category

new: simultaneously test your code on all platforms

with 7 comments

It is now convenient to do test runs that transparently distribute a normal test run to multiple CPUs or processes. You can do local code changes and immediately distribute tests of your choice simultaneously on all available platforms and python versions.

A small example. Suppose you have a Python pkg directory containing the usual package files and a test file with this content:

import sys, os

def test_pathsep():
    assert os.sep == "/"

def test_platform():
    assert sys.platform != "darwin"

Without further ado (no special configuration files, no remote installations) i can now run:

py.test pkg/ --dist=each --rsyncdir=pkg --tx socket=  --tx ssh=noco --tx popen//python=python2.4

This will rsync the "pkg" directory to the specified test execution places and then run tests on my windows machine (reachable through the specified socket-connection), my mac laptop (reachable through ssh) and in a local python 2.4 process. Here is the full output of the distributed test run which shows 6 tests (4 passed, 2 failed), i.e. our 2 tests multiplied by 3 platforms. It shows one expected failure like so:

[2] ssh=noco -- platform darwin, Python 2.5.1-final-0 cwd: /Users/hpk/pyexecnetcache

    def test_platform():
>       assert sys.platform != "darwin"
E       assert 'darwin' != 'darwin'
E        +  where 'darwin' = sys.platform

/Users/hpk/pyexecnetcache/pkg/ AssertionError

Hope you find the output obvious enough. I’ve written up some docs in the new distributing tests section.

I also just uploaded a 1.0 alpha py lib release so that you might type "easy_install -U py" to get the alpha release (use at your own risk!). Did i mention that it passes all of its >1000 tests on all platforms simultaneously? 🙂

This is all made possible by py.execnet, which is the underlying mechanism for instantiating local and remote processes through SSH or socket servers and executing code in them, without requiring any prior installation on the remote sides. Zero installation really means that you only need a working python interpreter, nothing more. You can make use of this functionality without using py.test. In fact, i plan to soon separate out py.test and make the py lib smaller and smaller …
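To make the zero-installation point concrete, here is a toy sketch (in modern Python, with illustrative names, and deliberately much simpler than py.execnet's bidirectional channel protocol over pipes, sockets or SSH): the "remote" side needs nothing but a Python interpreter, because the code to execute is shipped over at call time.

```python
import subprocess
import sys

def remote_exec(source):
    """Run Python source in a freshly started interpreter and return
    its stdout.  `remote_exec` is just an illustrative name here; the
    real py.execnet keeps a live channel open instead of doing a
    one-shot stdout capture."""
    proc = subprocess.run([sys.executable, "-c", source],
                          capture_output=True, text=True, check=True)
    return proc.stdout.strip()

# ship a computation to a separate interpreter process
answer = remote_exec("print(6 * 7)")
```

Nothing needed installing on the "remote" side: the source travels with the request, which is exactly what makes running tests on a bare python on another machine possible.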

so, enjoy, hope it makes as much sense to you as it makes to me 🙂 And hope to see some of you at Pycon … btw, anybody interested in joining me for a Drum and Bass party Thursday night? cheers, holger

Written by holger krekel

March 23, 2009 at 6:12 pm

Python, connectivity, diving … bits from OpenBossa

with 3 comments


There is a lot of Python usage on Nokia Internet Tablets and among INDT developers. When asked, about half the audience, something like 50 people, raised their hands for using and preferring Python. The new iphone-like Canola2 interface was done in 4 months of development, one of its developers, Gustavo Barbieri, told us: more features realized in less time than Canola 1, which was written in C. That doesn't mean there aren't problems, but overall they were quite happy. Kate Alhola also presented "Fremantle", the new alpha software release, which apparently does all its rendering via OpenGL ES 2.0 (OpenGL for embedded systems). This was a talk packed with technical details – what i got is that Nokia is being quite open about its intentions and software.

Marcel Holtmann presented a new Linux connection manager which completely does away with networking scripts and automates a lot of management. One of its goals is to let a device get an IP address and work efficiently with DNS, without needing 5 processes to start up first. It can handle IP connections via WLAN, ethernet, GSM and bluetooth, and works with a plugin architecture. All core components are GPLed, and user interfaces communicate with it via DBus.

Then Hadi Nahari presented his view on mobile and cloud security. In his view, these two worlds have many security aspects in common. Hadi currently works as a security architect for PayPal and previously did the same for eBay. He pushes for talking and thinking about "end to end" security and "security assets in motion". Looking at how my private information on a mobile phone is secured, and how it moves from one execution environment to the next, is as important as looking at how the backends handle this data. True enough, many of the recent problems actually arose in the backends, not on the end user devices.

my diving group

Apart from the relaxed schedule, i am enjoying the views and people here. I spontaneously joined a group with Stefan Seefeld from CodeSourcery (he works on Boost.Python bindings, another interesting talk!) and had my first diving session. And i am scheduled to do my PyPy talk in 2 hours. Curious to see how that goes.



Written by holger krekel

March 11, 2009 at 11:09 am

Posted in metaprogramming

Girls dancing, Openbossa 2009 opening

leave a comment »

openbossa dancing girls entering

So the third Openbossa conference was just opened by a girls' group playing an African rhythm, "marakatou" i was told. Reminds me that i'd like to go out with people for some Drum and Bass dancing, at the latest in Chicago. The Openbossa conference is organised by INDT, Nokia's Brazilian Instituto Nokia de Tecnologia. There is hardly anything here that i would naturally associate with Finland, though 🙂 Well, in fact i don't know very much about Finland except for the nice people i've met so far and the Helsinki episode of Jim Jarmusch's "Night on Earth" movie.

Looking forward to some talks on the Programme, a number of which deal with Python. Most attendees are male, come from Brazil, and are working for or have some relation to Nokia. Seems like kind of a Nokia developers meetup with invites to people they find interesting. Not the worst way to do a company outing.

Drumming girls at Openbossa

Written by holger krekel

March 9, 2009 at 12:56 pm

Posted in metaprogramming

Mobiles, Python, PyPy and the Zone VM

with 2 comments

Next week i am going to OpenBossa, a developer conference organised by Nokia’s research institute INDT in Brazil. It’s about free and open source developments on small internet devices. I’ll be talking about PyPy on Maemo. In a nutshell, last year we made PyPy cross-compile for the Maemo/Linux platform. It turned out to use less RAM than CPython for virtually all Python objects. It also starts new Python processes faster. We have ideas to make it feasible to run hundreds of small isolated python processes on machines like my 190 Euro mobile or even cheaper ones. I’ll post my slides with some more details next week and probably also twitter and blog a bit about my conference experiences.

To be honest, I’ve long ignored developments in the mobile device world. Maybe mostly because I want to interact with people through open and freely available networks. The way in which mobile networks are run for commercial purposes hampered my use of mobiles and my general interest.

But things seem to be changing rapidly, and I appreciate that. There are more and more phones that connect both to mobile networks and to WLANs. The sudden possibility to use the internet wherever i have WLAN access opens up the device for me. My small mobile has 128 MByte of RAM and a 250 MHz ARM CPU. It runs Python. I can freely download tools from other people and use them. Or I can hack something up or get in contact with developers. I can use email, internet radio, Twitter or browse the web, to begin with. I find the software generally likeable, although a bit shaky to install and control sometimes.

Well, I'd like to change various aspects of the user interface, but they are probably quite hard-wired, e.g. text completion. Nevertheless, i am thrilled by the overall technical opportunities. This device could soon host my working environment (it has a 16 Gig SD card for 15 Euros) and become a centerpiece of collaboration and communication. For this i'd like to be able to trust the software and the people behind it. So I look forward to meeting up with developers more familiar with the mobile world. And to hanging around Porto de Galinhas, which i have been told is a beautiful place.

Of course, I am also curious to present and discuss how PyPy, with some more effort, could provide a perfectly fitting Pythonic environment for mobiles. My vision is to have a Python environment for driving all technical aspects of the phone and to instantly share scripts and apps with fellows worldwide. All on the basis of a decentralized network comprised of the devices participating in it. Similar to what Charles Stross describes in his books as the "Zone VM": a distributed virtual machine and environment.

cheers, holger

Written by holger krekel

March 4, 2009 at 8:35 pm

Posted in metaprogramming


Monkeypatching in unit tests, done right

with 16 comments

[updated, thanks to marius].

I am currently preparing my testing tutorials for Pycon and here is an example i'd like to share already.

The problem: in a test function we want to patch an environment variable and see if our application handles something related to it correctly. The direct approach for doing this in a test function might look like this:

def test_envreading(self):
    old = os.environ['ENV1']
    os.environ['ENV1'] = 'myval'
    try:
        val = myapp().readenv()
        assert val == "myval"
    finally:
        os.environ['ENV1'] = old

If we needed to do this for several test functions we'd have a lot of repetitive, boilerplate-heavy code. The try-finally and undo-related code does not even take into account that ENV1 might not have been set originally.
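For completeness, a manual version that also handles an initially unset ENV1 needs still more boilerplate; here is a sketch (with a trivial stand-in `myapp`, since the real application class is not shown in the post):

```python
import os

class myapp:
    # stand-in for the application under test (hypothetical: the
    # real myapp is not shown in the post)
    def readenv(self):
        return os.environ['ENV1']

def test_envreading():
    missing = object()
    old = os.environ.get('ENV1', missing)   # ENV1 may not be set at all
    os.environ['ENV1'] = 'myval'
    try:
        val = myapp().readenv()
        assert val == "myval"
    finally:
        if old is missing:
            del os.environ['ENV1']          # restore the "unset" state
        else:
            os.environ['ENV1'] = old

test_envreading()
```

All of this undo logic is exactly the kind of repetition the approaches below get rid of.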

Most experienced people would use setup/teardown methods to get less repetitive testing code. We might end up with something slightly more general like this:

def setup_method(self, method):
    self._oldenv = os.environ.copy()

def teardown_method(self, method):
    os.environ.clear()
    os.environ.update(self._oldenv)

def test_envreading(self):
    os.environ['ENV1'] = "myval"
    val = myapp().readenv()
    assert val == "myval"

This avoids repetition of setup code, but it scatters what belongs to the test function across three functions. All other test functions in the class will get the service of a preserved environment although they might not need it. If i want to move this test function to another file, i need to take care to copy the setup code as well. Or i start subclassing test cases to share code. If we then need to modify other dicts or classes, we have to add code in three places.

Monkeypatching the right way

Here is a version of the test function which uses pytest's `monkeypatch` plugin. The plugin does one thing: it provides a monkeypatch object to each test function that needs it. The resulting test function code then looks like this:

def test_envreading(self, monkeypatch):
    monkeypatch.setitem(os.environ, 'ENV1', 'myval')
    val = myapp().readenv()
    assert val == "myval"

Here monkeypatch.setitem() will memorize old settings and modify the environment. When the test function finishes the monkeypatch object restores the original setting. This test function is free to get moved across files. No other test function or code place is affected or required to change when it moves.

Let's take a quick look at the "providing" side, i.e. the plugin which provides MonkeyPatch instances to test functions. It makes use of pytest's new pyfuncarg protocol.

The plugin itself is free to get refined and changed as well, without affecting the existing test code. The following lines of code make up the plugin, including tests:

class MonkeypatchPlugin:
    """ setattr-monkeypatching with automatic reversal after test. """
    def pytest_pyfuncarg_monkeypatch(self, pyfuncitem):
        monkeypatch = MonkeyPatch()
        pyfuncitem.addfinalizer(monkeypatch.finalize)
        return monkeypatch

notset = object()

class MonkeyPatch:
    def __init__(self):
        self._setattr = []
        self._setitem = []

    def setattr(self, obj, name, value):
        self._setattr.insert(0, (obj, name, getattr(obj, name, notset)))
        setattr(obj, name, value)

    def setitem(self, dictionary, name, value):
        self._setitem.insert(0, (dictionary, name, dictionary.get(name, notset)))
        dictionary[name] = value

    def finalize(self):
        for obj, name, value in self._setattr:
            if value is not notset:
                setattr(obj, name, value)
            else:
                delattr(obj, name)
        for dictionary, name, value in self._setitem:
            if value is notset:
                del dictionary[name]
            else:
                dictionary[name] = value

def test_setattr():
    class A:
        x = 1
    monkeypatch = MonkeyPatch()
    monkeypatch.setattr(A, 'x', 2)
    assert A.x == 2
    monkeypatch.setattr(A, 'x', 3)
    assert A.x == 3
    monkeypatch.finalize()
    assert A.x == 1

    monkeypatch.setattr(A, 'y', 3)
    assert A.y == 3
    monkeypatch.finalize()
    assert not hasattr(A, 'y')

def test_setitem():
    d = {'x': 1}
    monkeypatch = MonkeyPatch()
    monkeypatch.setitem(d, 'x', 2)
    monkeypatch.setitem(d, 'y', 1700)
    assert d['x'] == 2
    assert d['y'] == 1700
    monkeypatch.setitem(d, 'x', 3)
    assert d['x'] == 3
    monkeypatch.finalize()
    assert d['x'] == 1
    assert 'y' not in d

def test_monkeypatch_plugin(testdir):
    sorter = testdir.inline_runsource("""
        pytest_plugins = 'pytest_monkeypatch',
        def test_method(monkeypatch):
            assert monkeypatch.__class__.__name__ == "MonkeyPatch"
    """)
    res = sorter.countoutcomes()
    assert tuple(res) == (1, 0, 0), res

I can also imagine some nice plugin which supports mock objects – patching methods with some preset behaviour or tracing calls between components.

have fun, holger

Written by holger krekel

March 3, 2009 at 1:48 pm

New Plugin architecture and plugins for py.test

with one comment

I just merged the plugin branch and am very happy about it. Part of the effort was driven by moving core functionality into plugins: terminal reporting is now fully a plugin, contained in a single file including tests. It does its work solely by looking at testing events. Plugins can also add new aspects to test files – for example, one plugin adds ReST syntax, referential integrity and URL checking for text files. (I used it for checking my blog post and its links, btw).

Pytest's good old conftest files are still useful: you can define project- or directory-specific settings, including which plugins to use. For now, many old extensions should work unmodified, as exemplified by PyPy's extensive conftest files. It's easy to port a conftest file to a plugin. In fact, you can first define a local "ConftestPlugin" and later move it to become a cross-project one – a matter of renaming the file and the class, done!

To serve as guiding examples, I drafted some initial plugins and implemented the necessary hooks within the py.test core.

If you want to get a feel for how plugins are implemented, here is the plugin which adds a command line option to enable logging of all testing events. It's instructive to look at how it's done as well as at the output, because it shows which testing events are generated.

class EventlogPlugin:
    """ log pytest events to a file. """

    def pytest_addoption(self, parser):
        parser.addoption("--eventlog", dest="eventlog",
            help="write all pytest events to the given file.")

    def pytest_configure(self, config):
        eventlog = config.getvalue('eventlog')
        if eventlog:
            self.eventlogfile = open(eventlog, 'w')

    def pytest_unconfigure(self, config):
        if hasattr(self, 'eventlogfile'):
            self.eventlogfile.close()
            del self.eventlogfile

    def pyevent(self, eventname, *args, **kwargs):
        if hasattr(self, 'eventlogfile'):
            print >>self.eventlogfile, eventname, args, kwargs

This plugin code is complete, except that the original file contains tests. The eventlog plugin methods above are called in the following way:

  • def pytest_addoption(self, parser) is called before
    commandline arguments are parsed.
  • def pytest_configure(self, config) is called after parsing
    arguments and before any reporting, collection or running
    of tests takes place.
  • def pyevent(self, eventname, *args, **kwargs) is called
    for each testing event. Events have names and come with
    arguments which are supplied by the event-producing site.
  • def pytest_unconfigure(self, config) is called after
    all test items have been processed.
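The dispatch behind such hooks can be sketched in a few lines of plain Python (illustrative names and event names only, not py.test's actual plugin manager):

```python
class PluginManager:
    """Toy dispatcher: call a hook method on every registered plugin
    that implements it, in registration order."""
    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)

    def call(self, hookname, *args, **kwargs):
        # plugins that don't implement a hook are simply skipped
        for plugin in self._plugins:
            method = getattr(plugin, hookname, None)
            if method is not None:
                method(*args, **kwargs)

class RecordingPlugin:
    """Example plugin: remembers each event name it sees, much like
    the eventlog plugin above writes them to a file."""
    def __init__(self):
        self.events = []

    def pyevent(self, eventname, *args, **kwargs):
        self.events.append(eventname)

pm = PluginManager()
recorder = RecordingPlugin()
pm.register(recorder)
pm.call("pyevent", "testrunstart")            # event names are made up here
pm.call("pyevent", "itemtestreport", passed=True)
# recorder.events == ["testrunstart", "itemtestreport"]
```

Because hooks are looked up by name, a plugin only needs to define the methods it cares about, which is why the eventlog plugin above can get away with four short methods.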

If you want to start writing your own plugin, please use an svn checkout of:

and activate it by e.g. `python setup.py develop`.

If you want to write a plugin named pytest_XYZ, you can tell pytest to use it by setting the environment variable PYTEST_PLUGINS=XYZ or by putting pytest_plugins = 'xyz' into a test module or conftest file.

A good way to contribute is to copy an existing plugin file to your home dir and put it somewhere into your PYTHONPATH. py.test will use your version instead of the default one, and you can play with it until you are happy (and consider also adding some tests showing the new behaviour).

If you have questions or problems, you are invited to post here or to the py-dev mailing list. I’d definitely like to pluginize more of pytest and add hooks as needed and am happy for feedback and suggestions before i freeze the API for 1.0.


Written by holger krekel

February 27, 2009 at 11:22 am

New way to organize Python test code

with 5 comments

py.test just grew a new way to provide test state to a test function. First the problem: those of us dealing with tests in the hundreds or thousands usually set up test state at class, method or module level, then access it indirectly, through self, local or global helper functions. For larger applications, this usually leads to scattered, complex and boilerplate-heavy test code. This then stands in the way of refactoring the original code base … but wait, weren't tests meant to ease refactoring, not hinder it?

Here is the idea: Python test functions state their needs in their function signature, and the test tool calls a matching function that provides the value. For example, consider this test function:

def test_ospath(self, tempdir):
    # needs tempdir to create files etc.


py.test provides the value for tempdir by calling a matching method that looks like this:

def pytest_pyfuncarg_tempdir(pyfuncitem):
    # use pyfuncitem to access test context, cmdline opts etc.


This matching provider function returns a value for tempdir that is then supplied to the test function. For more complex purposes, the pyfuncitem argument provides full access to the test collection process, including cmdline options, test options and project-specific configuration. You can write down this provider method in the test module, in configuration files or in a plugin.

Once i started using this new paradigm, i couldn't resist refactoring pytest's own tests to use the new method everywhere. Here are my findings so far:

  • self-contained test functions: i don't need to wade through unnecessary layers and indirection of test setup.
  • fewer imports: my test modules don’t need to import modules that are only needed for setting up application state.
  • easy test state setup: I can place test support code in one place and grep for pytest_pyfuncarg_NAME. I can reuse this setup code easily across modules, directories or even projects. Think about providing a test database object or mock objects.
  • more flexible test grouping: I can logically group tests however i like, independently from test setup requirements. I found it very easy to shuffle test functions between classes or modules because they are rather self-contained.

Written by holger krekel

February 22, 2009 at 5:23 pm