metaprogramming and politics

Decentralize. Take the red pill.

Archive for the ‘metaprogramming’ Category

metaprogramming in Python: What CPython, PyPy, Pyramid, pytest and politics have in common …


Metaprogramming in Python too often revolves around metaclasses, which are just a narrow application of the "meta" idea and not a great one at that. Metaprogramming more generally deals with reasoning about program code, with taking a "meta" stance on it. A metaprogram takes a program as input, often just partial programs like functions or classes. Here are a few applications of metaprogramming:

  • CPython is a metaprogram written in C. It takes Python program code as input and interprets it, so that it runs at a higher level than C.
  • PyPy is a metaprogram written in Python. It takes RPython program code as input and generates a C-level metaprogram (the PyPy interpreter) which itself interprets Python programs and takes another meta stance by generating assembler pieces for parts of the interpreted program's execution. If you like, PyPy is a metaprogram generating metaprograms, whereas CPython and typical compilers like GCC are "just" metaprograms.
  • Pyramid is a metaprogram that takes view and model definitions and HTTP-handling code as input and executes them, thereby raising code to a higher level to implement the "Pyramid application" language.
  • pytest is a metaprogram written in Python, taking test, fixture and plugin functions as input and executing them in a certain manner, thereby implementing a testing language.
  • metaclasses: in Python they allow intercepting class creation and introspecting methods and attributes, amending their behaviour (see the sketch below). Because metaclass code usually executes at import time, it often uses global state for implementing non-trivial meta aspects.

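Since metaclasses are the most familiar entry point, here is a minimal, hypothetical sketch (Python 3 syntax, not from the original post) of that interception idea: a metaclass that runs at import time and amends the behaviour of all public methods of a class:

# a tracing metaclass: intercepts class creation at import time,
# introspects the new class's namespace and wraps its public methods
class TracingMeta(type):
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and not attr.startswith("_"):
                namespace[attr] = mcs._traced(attr, value)
        return super().__new__(mcs, name, bases, namespace)

    @staticmethod
    def _traced(name, func):
        def wrapper(*args, **kwargs):
            print("calling", name)
            return func(*args, **kwargs)
        return wrapper

class Service(metaclass=TracingMeta):
    def ping(self):
        return "pong"

Service().ping()  # prints "calling ping" before returning "pong"
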
Apart from these concrete examples, language compilers, testing tools and web frameworks all have metaprogramming aspects. Creating big or small "higher" level or domain-specific languages within Python is a typical example of metaprogramming. Python is actually a great language for metaprogramming, although it could be better.

In future blog posts i plan to talk about some good metaprogramming practices, particularly:

  • keep the layers/levels separate by good naming and API design
  • define a concise “language” for the programs you take as input
  • avoid creating global state in your metaprograms (and elsewhere), which can easily happen with metaclasses executing at import time

Lastly, i see metaprogramming at work not only when coding in a computer language. Discussing the legal framing for executing programs on the internet is some kind of metaprogramming, especially if you consider licensing and laws as human-interpreted code which affects how programs can be written, constructed and executed. In reverse, web applications increasingly affect how we interact with each other, thereby implementing rules formerly dealt with in the arena of politics. Therefore, metaprogramming and politics are fundamentally connected topics.

have metafun, i.e. take fun stuff as input to generate more of it 🙂 holger

Written by holger krekel

November 22, 2012 at 3:04 pm

execution locals: better than thread locals/globals


While many agree that global state is evil, the so-called "thread locals" are not much better. Even though they help to separate state on a per-thread or per-greenlet basis, they are still global within that context. In particular, (thread-)global state means that:

  • Invoked functions can change bindings of an invoking function as a side effect
  • thread locals may linger around even if their state is unused or has become invalid (see the sketch below)

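To make the second point concrete, here is a small hypothetical sketch showing how a threading.local binding, set as a side effect deep inside a call, lingers on afterwards:

import threading

state = threading.local()

def handler():
    # set as a side effect of handling some request ...
    state.user = "alice"

handler()
# ... and the binding lingers within this thread, long after
# handler() finished and the state lost its meaning:
print(state.user)  # prints "alice"
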
Meet “execution locals” which avoid these problems. Find the code released on PyPI:

http://pypi.python.org/pypi/xlocal

It’s some 60 lines of code, tested on python2.5 up to python3.3 and pypy, and ready to be played with. I inline its README.txt below in case you can’t or don’t want to switch reading context. One more note: If I were to design a new language i’d probably remove "globals" altogether and only offer something like the "xlocal" type with a more straightforward syntax.

execution locals: killing global state (including thread locals)

The xlocal module provides execution locals, aka "xlocal" objects, which implement a more restricted variant of "thread locals". An "xlocal" instance allows managing its attributes on a per-execution basis, in a manner similar to how real locals work:

  • Invoked functions cannot change the binding for the invoking function
  • existence of a binding is local to a code block (and everything it calls)

Attribute bindings for an xlocal object will not leak outside a context-managed code block, and they will not leak to other threads or greenlets. By contrast, neither process-globals nor "thread locals" implement these properties.

Let’s look at a basic example:

# content of example.py

from xlocal import xlocal

xcurrent = xlocal()

def output():
    print "hello world", xcurrent.x

if __name__ == "__main__":
    with xcurrent(x=1):
        output()

If we execute this module, the output() function will see an xcurrent.x==1 binding:

$ python example.py
hello world 1

Here is what happens in detail: xcurrent(x=1) returns a context manager which sets/resets the x attribute on the xcurrent object. While remaining in the same thread/greenlet, all code triggered by the with-body (in this case just the output() function) can access xcurrent.x. Outside the with-body, xcurrent.x would raise an AttributeError. It is also not allowed to directly set xcurrent attributes; you always have to explicitly mark their life-cycle with a with-statement. This means that invoked code:

  • cannot rebind xlocal state of its invoking functions (no side effects, yay!)
  • xlocal state does not leak outside the with-context (life-cycle control); a short demonstration follows below

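A short demonstration of the life-cycle property (a sketch using the same API as above, not part of the original README):

from xlocal import xlocal

xcurrent = xlocal()

with xcurrent(x=1):
    assert xcurrent.x == 1  # bound inside the managed block

try:
    xcurrent.x              # the binding ended with the block
except AttributeError:
    print("no binding outside the with-block")
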
Another module may now reuse the example code:

# content of example_call.py
import example

with example.xcurrent(x=3):
    example.output()

which when running …:

$ python example_call.py
hello world 3

will cause the example.output() function to print the xcurrent.x binding as defined at the invoking with xcurrent(x=3) statement.

Other threads or greenlets will never see this xcurrent.x binding; they may even set and read their own distinct xcurrent.x binding. This means that all threads/greenlets can concurrently call into a function which will always see the execution-specific x attribute.

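For illustration, here is a hypothetical sketch (not from the README) with several threads concurrently binding their own xcurrent.x:

import threading
from xlocal import xlocal

xcurrent = xlocal()

def worker(n):
    with xcurrent(x=n):  # each thread binds its own execution-local x
        print("thread %d sees x = %d" % (n, xcurrent.x))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every thread printed its own n; no binding leaked across threads
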
Usage in frameworks and libraries invoking “handlers”

When invoking plugin code or handler code to perform work, you may not want to pass around all state that might ever be needed. Instead of using a global or thread local you can safely pass around such state in execution locals. Here is a pseudo example:

xcurrent = xlocal()

def with_xlocal(func, **kwargs):
    with xcurrent(**kwargs):
        func()

def handle_request(request):
    func = gethandler(request)  # some user code
    spawn(with_xlocal(func, request=request))

handle_request will run a user-provided handler function in a newly spawned execution unit (for example spawn might map to threading.Thread(…).start() or to gevent.spawn(…)). The generic with_xlocal helper wraps the execution of the handler function so that it will see a xcurrent.request binding. Multiple spawns may execute concurrently and xcurrent.request will carry the execution-specific request object in each of them.

Issues worth noting

If a method decides to memorize an attribute of an execution local, for example the above xcurrent.request, then it will keep a reference to that exact request object, not a per-execution reference. If you want to keep a per-execution reference, you can do it this way, for example:

class Renderer:
    @property
    def request(self):
        return xcurrent.request

This means that Renderer instances will have an execution-local self.request object even if the life-cycle of the instance crosses execution units.

Another issue is that if you spawn new execution units, they will not implicitly inherit execution locals. Instead you have to wrap your spawning function to explicitly set execution locals, similar to what we did in the above "invoking handlers" section.

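A possible shape for such a wrapper (a sketch, assuming the xcurrent object from the earlier examples):

import threading

def spawn_with_xlocal(func, **bindings):
    # explicitly re-establish the wanted xlocal bindings
    # inside the newly spawned thread
    def run():
        with xcurrent(**bindings):
            func()
    threading.Thread(target=run).start()
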
Written by holger krekel

November 16, 2012 at 2:22 pm

If i were to design a new programming language …


I’d base syntax and semantics on Python3, but strip and rebase it:

  • no C: implement the interpreter in RPython, get a JIT for free and implementation bits from PyPy’s Python interpreter (parsing, IO, etc.)
  • no drags-you-down batteries: lean interpreter core and a standard battery distro which is tested against the last N interpreter versions + current
  • no yield: use greenlets to implement all of what yield provides and more (see the sketch after this list)
  • no underlying blocking on IO: base it all on event loop, yet provide synchronous programming model through greenlets
  • no c-level API nor ctypes: use cffi to interface with c-libraries
  • no global state: just support state bound to execution context/stack
  • no GIL: support free threading and Automatic Mutual Exclusion for dealing with shared state
  • no setup.py: have a thought-through story and tools from the start for packaging, installation, depending/interfacing between packages
  • no import, no sys.modules: provide an object with which you can access other packages’ objects and introspect/interact with one’s own package
  • no testing as an afterthought: everything needs to be easily testable, empowered assert statement and branch-coverage supported from the core.
  • no extensibility as an afterthought: support plugins and loose coupling through builtin 1:N calling mechanism (event notification on steroids)
  • no unsafe code: support IO/CPU/RAM sandboxing as a core feature
  • no NIH syndrome: provide a bridge to a virtualenv’ed Python interpreter allowing to leverage existing good crap

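To illustrate the "no yield" point from the list above, here is a rough, hypothetical sketch (Python 3, using the existing greenlet package) of emulating a simple generator through greenlet switching; it is meant to show viability, not a finished design:

from greenlet import greenlet

class glgen:
    # the producer greenlet switches back to its parent
    # (the consumer) whenever it emits a value
    def __init__(self, func):
        self.func = func
        self.producer = greenlet(self._run)
        self.value = None
        self.done = False

    def _run(self):
        self.func(self._emit)
        self.done = True

    def _emit(self, value):
        self.value = value
        greenlet.getcurrent().parent.switch()  # hand value to consumer

    def __iter__(self):
        return self

    def __next__(self):
        self.producer.switch()  # resume the producer
        if self.done:
            raise StopIteration
        return self.value

def count_to(emit, n):
    for i in range(n):
        emit(i)

for x in glgen(lambda emit: count_to(emit, 3)):
    print(x)  # 0, 1, 2
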
Anything else? Probably! Discussion needed? Certainly. Unrealistic? Depends on who would participate — almost all of the above has projects, PEPs and code showcasing viability.

Btw, did you know that when we started PyPy we initially did this under the heading of "Minimal Python"? Some of the ideas above and their underlying motivations were already mentioned when i invited people to the first PyPy sprint almost 10 years ago:

http://mail.python.org/pipermail/python-dev/2003-January/032427.html

I learned since then that Python has more complex innards than it seems, but i still believe it could be both simpler and more powerful.

holger

Written by holger krekel

November 13, 2012 at 3:29 pm

Ring of Python talk at pycon


Just did my Ring of Python talk at Pycon US 2010, discussing competition and features of Python interpreters and co-operation issues around what is, in my opinion, the most important issue: deployment. Also showcased execnet as a generic Python2Python bridge, connecting Python2.4, Python 3.1, Jython and IronPython. Got some nice feedback, also about the presentation style; i was actually using Prezi. You may go to the following page, click into the flash app and hit "cursor right": Ring of Python.

Written by holger krekel

February 19, 2010 at 11:06 pm


Elastic Python deployment networks


Time for a bit of fiction on distributed Python deployment. As some of you know, py.execnet imperatively and elastically executes code in local or remote python processes, maintaining channels for exchanging data. Execnet has the wonderful zero-install feature, which means no software except a Python interpreter is required remotely. The connection between Python interpreters is direct, i.e. the connecting side needs to know how to start the remote side. And it’s non-transitive, meaning: given an A->B and a B->C connection, there is no support for getting an A->C connection mediated by B.

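For readers new to py.execnet, here is a minimal sketch of today’s direct-connection style (using the py lib API of that era; treat the details as illustrative):

import py

# open a gateway to a fresh local Python subprocess; zero-install
# means nothing but a Python interpreter is needed on the other side
gw = py.execnet.PopenGateway()
channel = gw.remote_exec("""
    import sys
    channel.send(sys.version)
""")
print(channel.receive())  # version string of the remote interpreter
gw.exit()
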
I’d like to lift these restrictions and introduce the concept of a deployment network through which execnet connections can be mediated. I am pondering a command line tool that creates a network of Python interpreters on multiple hosts, like this:

execnet start mynet ssh=linuxbox.org socket=windowsbox.com ssh=osx.com

This would create a "mynet" deployment network of four Python interpreters running on different hosts and platforms: one local process and three remote processes connected to it. Let’s add a remote Jython process to the "mynet" deployment network:

execnet addhost mynet ssh=remote//python=jython

We can generally use the ‘mynet’ handle to work with this newly instantiated deployment network. For example, to get a fresh process on a certain platform from a Python program:

mynet = execnet.connect('mynet')
gateway = mynet.makegateway(platform="java")

The first line connects to our local ‘mynet’ process. The second line creates a gateway to a fresh Python interpreter, in this case a Jython process. The bootstrapping of the Jython-side gateway object is determined by the initiating client side. The two subprocesses communicate through an IO-connection that is mediated by the ‘mynet’ deployment network.

This is very exciting because the zero-installation feature is preserved on two levels: the deployment processes work on software coming from a single point, the command line above. And our "on-top" gateways operate with software determined from the initiating side, from the python code above. Interaction between the two worlds is limited to a connect operation and providing IO mediation. This means the deployment network facilities can evolve independently from the "on-top" execnet-elastic programs.

Conceptually it’s a very reliable and robust setting. The mynet processes should be able to run as robustly as unix shells. They are to provide a solid base for writing and deploying Python applications that span multiple interpreters. They don’t run applications in-process.

This is not all fiction. The current development version of py.execnet already works across and between CPython2.4 through to CPython 3.1, Jython and PyPy. And i intend to release execnet as a separate package soon, providing the basis for implementing the above fiction and lots of other on-top fun 🙂

Written by holger krekel

September 26, 2009 at 9:29 pm


pylib 1.0.0 released: the testing-with-python innovations continue


Took a few betas but finally i uploaded a 1.0.0 py lib release, featuring the mature and powerful py.test tool and "execnet-style" elastic distributed programming. The new release brings many new advanced automated testing features; here is a quick summary:

  • funcargs – pythonic zero-boilerplate fixtures for Python test functions (see the sketch after this list):
    • totally separates test code, test configuration and test setup
    • ideal for integration and functional tests
    • allows for flexible and natural test parametrization schemes
  • new plugin architecture, allowing easy-to-write project-specific and cross-project single-file plugins. The most notable new external plugin is oejskit which naturally enables running and reporting of javascript-unittests in real-life browsers.
  • many new features done in easy-to-improve default plugins, highlights:
    • xfail: mark tests as "expected to fail" and report separately.
    • pastebin: automatically send tracebacks to pocoo paste service
    • capture: flexibly capture stdout/stderr of subprocesses, per-test …
    • monkeypatch: safely monkeypatch modules/classes from within tests
    • unittest: run and integrate traditional unittest.py tests
    • figleaf: generate html coverage reports with the figleaf module
    • resultlog: generate buildbot-friendly reporting output
  • distributed testing and elastic distributed execution:
    • new unified "TX" URL scheme for specifying remote processes
    • new distribution modes "--dist=each" and "--dist=load"
    • new sync/async ways to handle 1:N communication
    • improved documentation

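As a taste of funcargs, here is a minimal sketch; the pytest_funcarg__NAME factory convention is the 1.0-era API, while FakeDB and the test itself are invented for illustration:

# content of conftest.py
class FakeDB:
    # stand-in for some expensive-to-set-up resource
    def __init__(self):
        self.data = {}
    def insert(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data[key]
    def close(self):
        self.data.clear()

def pytest_funcarg__tmpdb(request):
    # provides the "tmpdb" argument to any test function requesting it
    db = FakeDB()
    request.addfinalizer(db.close)  # teardown runs after the test
    return db

# content of test_db.py
def test_insert(tmpdb):
    tmpdb.insert("key", "value")
    assert tmpdb.get("key") == "value"
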
The py lib continues to offer most of the functionality used by the testing tool in independent namespaces.

Some non-test related code, notably greenlets/co-routines and API generation, now lives in projects of its own, which simplifies the installation procedure because no C-extensions are required anymore.

The whole package should work well with Linux, Win32 and OSX, on Python 2.3, 2.4, 2.5 and 2.6. (Expect Python3 compatibility soon!)

For more info, see the py.test and py lib documentation:

http://pytest.org

http://pylib.org

have fun, holger

Written by holger krekel

August 4, 2009 at 10:05 am

code-centered issue tracking?


Is there anything that allows code-centered issue tracking? Recently, Gustavo Niemeyer had an interesting piece up where he suggests private/protected syntax for Python. His point is that code collaboration doesn’t otherwise work in larger uncontrolled environments of dev groups. I agree with most of his observations but not his conclusion. I’d rather like to see reduced communication costs for changing code. Here is an example of what i mean. If i see a code fragment like this:


def somefunc(self, x, y, z):
    self._cache = func(x) + other(y)
    self.z = self._cache + third(z)

i want to be able to visually mark this code, write a comment like:

hey, i need the third(z) value, and doing self.z-self._cache feels bad – can you help?

and i want my development environment to automatically route this question with exact code refs to the maintainers of the code. This should not take longer than 20 seconds and be automatically managed.

On the receiving side, as the maintainer, i want to get notified and be able to say:

getissues mypkg/subpkg

and have it automatically list me all files and issues for it. So i easily see the above question, do a patch, and issue:

sendpatch PATCHNAME

and type in a message. Maybe automatically CCed to other library maintainers, a mailing list etc. As the original sender i get back a mail and can use a cmdline tool to apply the patch, give feedback and so forth …

IOW, i want to have tools that automatically manage the issue-addressing, code-referencing, finding-out about package info/release numbers, format the comment related to the cited code, send the mail, register an issue automatically and so on. Is there something like this?

If not: don’t we already have most of the pieces? What makes sense to use for it? Mercurial and patch queues? Maybe the new Bitbucket API? Integrate with an existing issue tracker? First goal would be to have it manage itself, i guess 🙂

cheers, holger

Written by holger krekel

May 18, 2009 at 11:35 am


Putting test-hooks into local and global plugins


(updated to match 1.0 API and features) I’d like to clarify py.test’s hook lookup and showcase the ease of writing things into per-project or global plugins. The pytest_generate_tests hook was discussed in the last blog post on parametrizing tests. Here is where you can write this hook down:

  • in the test module
  • in a local plugin
  • or in a global plugin

The last blog post showed how to put the hook directly into the test module. Let’s now take a look at what putting a hook into a "local or global plugin" means.

Putting a hook into a local plugin

Putting the generate-hook into a local plugin means creating a conftest.py file in your test directory or package root directory with these contents:


#./conftest.py
def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        metafunc.addcall(funcargs=funcargs)

As with test files, functions and arguments, conftest.py files and the hooks they contain will be automatically discovered and registered.

Putting a hook into a global plugin

Putting the generate-hook into a global cross-project plugin requires inventing a file or package name with the fixed pytest_ prefix. Here is how you would write the generate hook into a self-contained pytest_mygen.py file:

#./pytest_mygen.py

def pytest_generate_tests(metafunc):
    # exactly the same implementation as the module-level one
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        metafunc.addcall(funcargs=funcargs)

The hook name, including its metafunc argument, needs to be used exactly as described; loading your plugin will otherwise result in an error.

Activating a global plugin

While local plugins are automatically discovered, global plugins need to be specified. To activate a global plugin pytest_mygen you can use any of the following three ways:

py.test -p mygen             # for command line activation

export PYTEST_PLUGINS=mygen  # for shell/env activation

pytest_plugins = "mygen"     # in a test module or conftest.py

py.test loads command-line or environment-specified plugins very early so that plugins can add command line options.

multiple pytest_generate_tests implementations

All existing pytest_generate_tests hooks will be called once for each test function. You can have multiple hooks, but a generate-hook usually only acts on a specific funcarg by doing a check like this:

if "myarg" in metafunc.funcargnames:
    ...

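A complete guard sketch (the funcarg name "myarg" and its values are invented):

def pytest_generate_tests(metafunc):
    # only act on test functions that actually request "myarg"
    if "myarg" in metafunc.funcargnames:
        for value in (1, 2, 3):
            metafunc.addcall(funcargs=dict(myarg=value))
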
So, you say, what about a test function with multiple arguments – could each function argument come from a different generating provider factory?

This would mean that multiple generators act independently but want to collaborate and combine their values for a given test function. Well, if you encounter a real need for it, please come forward and we’ll think up a fitting API extension. A couple of days ago i had a "combining funcargs" API call implemented, but decided to remove it because i try hard these days to only add features that have a proven need.

Putting test support code into plugins FTW

Lastly, let me point out that putting the pytest_generate_tests hook into a plugin allows the actual test code to stay ignorant of exactly how or where parametrization is implemented. For example, adding command line options for influencing the generation or selection of parameter sets, including randomizing, would not change a single character in the test module and test code.

have fun and let me know what you think, holger

Written by holger krekel

May 14, 2009 at 12:01 pm


Parametrizing Python tests, generalized.


Parametrizing test runs is kind of a hot topic with Python test tools. py.test recently grew a new pytest_generate_tests hook to parametrize tests. I am going to introduce it by providing ports of Michael Foord‘s recent experiments with parametrizing unittest.py test cases and an example from Rob Collins’ testscenarios unittest extension. The gist of the new hook is that it makes it easy to implement and combine such schemes. It builds on the general idea of allowing python test functions to receive function arguments ("funcargs") and of defining mechanisms for how to provide them.

The parametrizer example, ported

The idea of Michael Foord‘s Parametrizer example is to define multiple sets of parameters and have specified test functions receive those arguments. Here is a direct port of Michael’s example to use py.test’s new hook:

#./test_parametrize.py
import py

def pytest_generate_tests(metafunc):
    # called once per each test function
    for funcargs in metafunc.cls.params[metafunc.function.__name__]:
        # schedule a new test function run with applied **funcargs
        metafunc.addcall(funcargs=funcargs)

class TestClass:
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)],
        'test_zerodivision': [dict(a=1, b=0), dict(a=3, b=2)],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

py.test automatically discovers both the pytest_generate_tests hook and the two test functions. For each test function it calls the hook, passing it a metafunc object which provides meta information about the test function and allows adding new tests during collection. Let’s see what just collecting the tests produces:

$ py.test --collectonly test_parametrize.py

<Module 'test_parametrize.py'>
  <Class 'TestClass'>
    <Instance '()'>
      <FunctionCollector 'test_equals'>
        <Function 'test_equals[0]'>
        <Function 'test_equals[1]'>
        <Function 'test_equals[2]'>
      <FunctionCollector 'test_zerodivision'>
        <Function 'test_zerodivision[0]'>
        <Function 'test_zerodivision[1]'>

So we collected 5 actual runs of test functions. Let’s now run the test functions:

$ py.test test_parametrize.py

========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: test_parametrize.py

test_parametrize.py F.F.F

============================== FAILURES ===============================
________________ TestClass.test_equals.test_equals[0] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:14: AssertionError
________________ TestClass.test_equals.test_equals[2] _________________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 5, b = 4

    def test_equals(self, a, b):
>       assert a == b
E       assert 5 == 4

test_parametrize.py:14: AssertionError
__________ TestClass.test_zerodivision.test_zerodivision[1] ___________

self = <test_parametrize.TestClass instance at 0x994f8ac>, a = 3, b = 2

    def test_zerodivision(self, a, b):
>       py.test.raises(ZeroDivisionError, "a/b")
E       ExceptionFailure: 'DID NOT RAISE'

test_parametrize.py:17: ExceptionFailure
================= 3 failed, 2 passed in 0.13 seconds =================

You can easily see the failing tests and the parameters that the tests received. It also showcases py.test traceback reporting but that’s for another discussion.

The parametrizer example, decorated

So, you say, what about having a decorator specifying test parameters? Here is the same example, letting our hook implement a decorator scheme:

#./test_parametrize2.py

import py

def params(funcarglist):
    def wrapper(function):
        function.funcarglist = funcarglist
        return function
    return wrapper

def pytest_generate_tests(metafunc):
    for funcargs in getattr(metafunc.function, 'funcarglist', ()):
        metafunc.addcall(funcargs=funcargs)

# actual test code, above support code can live elsewhere

class TestClass:
    @params([dict(a=1, b=2), dict(a=3, b=3), dict(a=5, b=4)])
    def test_equals(self, a, b):
        assert a == b

    @params([dict(a=1, b=0), dict(a=3, b=2)])
    def test_zerodivision(self, a, b):
        py.test.raises(ZeroDivisionError, "a/b")

This variant keeps the "test specification" tightly coupled to the test functions. Running it with py.test test_parametrize2.py provides the same output as the first example port.

A quick port of "testscenarios"

Finally, let’s also port Rob Collins’ testscenarios example. Here is the implementation of the full mechanism with py.test, with the tests in funcarg-style:

#./test_parametrize3.py

def pytest_generate_tests(metafunc):
    for scenario in metafunc.cls.scenarios:
        metafunc.addcall(id=scenario[0], funcargs=scenario[1])

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo(self, attribute):
        assert isinstance(attribute, str)

Let’s run it:


$ py.test -v test_parametrize3.py

================================ test session starts ================================
python: platform linux2 -- Python 2.6.2 -- /usr/bin/python
test object 1: test_parametrize3.py

test_parametrize3.py:14: TestSampleWithScenarios.test_demo[basic] PASS
test_parametrize3.py:14: TestSampleWithScenarios.test_demo[advanced] PASS

============================= 2 passed in 0.06 seconds ==============================

Easy, isn’t it?

Playing yourself

If you want to play with the examples yourself, you can use hg clone https://bitbucket.org/hpk42/py-trunk/ and setup.py install it. In the example/parametrize/ directory you can tweak and run the test examples. Let me know of comments or problems you may encounter.

Conclusion: deprecating "yield"

The three ports show that pytest_generate_tests is a hook that allows implementing many custom parametrization schemes. You can implement the hook in a test module or in a local or global plugin, sharing it within your project or with the community. The hook also integrates well with other usages of funcargs, see the extensive pytest funcarg documentation.

The new way to parametrize tests is meant to substitute the yield usage of test functions, aka "generative tests", also used by nosetests. Yield-style generative tests have received criticism, and despite being the one who invented them, i mostly agree and recommend not using them anymore.

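For reference, here is the yield style being deprecated, in a generic sketch (not from this post):

# "generative test": the test function yields (callable, argument)
# tuples which the tool then runs as separate tests
def check_even(n):
    assert n % 2 == 0

def test_evens():
    for i in (2, 4, 6):
        yield check_even, i
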
I’d like to thank Samuele Pedroni and Ronny Pfannschmidt, who helped to evolve the new hook and pushed me to implement it. Oh, and did i emphasize that working feedback-based and documentation-driven is so much better than going wild on hypothetical usages?

have fun, holger

Written by holger krekel

May 13, 2009 at 2:55 pm

py.test: shrinks code, grows plugins and integration testing


Just before Pycon i uploaded the lean and mean py.test 1.0.0b1 beta release. A lot of code got moved out, most notably the greenlets C-extension. This simplifies packaging and increases the py lib’s focus on test facilities. It now has a pluginized architecture and provides funcargs, which tremendously help with writing functional and integration tests. One such example is py.test’s own acceptance test suite, which checks the behaviour of the command line tool from a user perspective. Other features include a zero-install mechanism for distributing tests, which also allows conveniently driving cross-platform integration tests.

Unittesting, functional and integration testing are now official targets. No doubt, test category naming is a slippery subject and it’s a good idea to consider test category names as labels rather than "either-or" categories. In the end, tests are about being useful for software development, which today means coding for a wide variety of environments and involving integration and deployment issues at every corner. I think that testing tools have yet to develop their full potential. In my opinion, automated testing and deployment techniques are to fully integrate with each other, and i consider coding of distributed integration test scenarios as key to that.

I’d like to make the upcoming final py.test 1.0 release a starting point for facilitating the integration of many more test methods and test mechanisms via plugins. Some people have already contributed a pytest_figleaf (for coverage testing) and a pytest_twisted (for running twisted style tests) plugin, although i am still finalizing API details and writing up docs. So i am very happy with how things are turning out, and also motivated by the positive feedback on the two testing tutorials that Brian Dorsey and i gave at Pycon (see his writeup).

Btw, if you use the quickstart and encounter any problems, please use the brand new issue tracker on bitbucket. I started hosting a mercurial py.test trunk repository and so far it’s been a positive experience; i guess i’ll fully switch to mercurial soon. Alas, i’ll probably drop setuptools before i go 1.0 with py.test; it simply causes too much trouble. py.test’s trunk has a straightforward setup.py and i intend to release a second beta with refined docs and setuptools removed. Stay tuned for many more news in May. Right now, i am looking forward to a 1-week offline holiday 🙂

cheers, holger

Written by holger krekel

April 18, 2009 at 9:05 pm