metaprogramming and politics

Decentralize. Take the red pill.

Thoughts on arguing end-to-end crypto and surveillance


Many Western governors are pushing for laws mandating that all private communication be secretly readable and analyzable for them. The latest attack targets the one technology that still enables some privacy on a massively surveilled internet: end-to-end encryption. As hackers or IT people we cannot afford to lament that the public doesn’t understand the significance of end-to-end crypto or privacy if we don’t ourselves appreciate its value for societies at home and abroad.

Responding to the renewed surveillance attacks with quick technical or narrow economic counter-arguments is not going to work. An appropriate response needs to consider the political history and context of the current crypto and surveillance debates. Moreover, re-framing the common security debates is crucial if we want to stem the waves of new secret-agency laws and the never-ending succession of new powers they grant governments.

Let me start by rejecting the idea that governmental surveillance attacks have anything to do with fighting ruthless killers (“terrorists”), however often this claim is repeated in broadcast media. This is not to disregard the power of repetition; see the endlessly repeated claims about “weapons of mass destruction” as a pretext for the Iraq war, or the simple fact that advertising works. But despite endless repetition, governmental surveillance attacks don’t have anything to do with fighting terrorists. To turn it around, and I think the burden ought to be on the framers: where is the hard evidence that mass surveillance of civilians has any significant effect on preventing terrorist attacks against civilians? And even if surveillance prevented a few attacks, how would that gain compare to the dangers of more government power?

The “fight terrorists with surveillance” framing is seriously flawed for another reason as well. Within it you are always going to lose the argument against more surveillance, if not now then after the next terror event, because proponents can always argue they were right: if no attack happens, that proves surveillance works and we need more of it; if an attack happens, that also proves we need more surveillance. Within this framed logic there can never be any rolling back of government powers.

The way out is to unframe the discussion and discuss the political and historical contexts of “terror attacks” and “expanding surveillance” separately. Let’s start with surveillance. If fighting terrorism is a red herring, what are the motivations and politics of expanding government surveillance?

Governors worry about their power base

Governors of all kinds worry that people will decide to change things in ways which endanger the power their associated networks hold. And they are particularly afraid today because they know there are many reasons why people want to change things in more fundamental ways. As much as people have lost trust in governors, governors have lost trust in people to keep them and their ilk in power.

The fear of governors seems justified if you look at the example of Spain in 2015: big parts of Spain’s social movements associate with a very new party on the block, Podemos. It aims to win the election in December and is currently leading the polls against the two parties which have governed Spain since 1975. It could actually happen, despite the German chancellor Merkel supporting the Spanish president Rajoy, who just introduced draconian laws against protesters and is generally sending his troops everywhere to avert the decline of his power network. Having to resort to direct repression is a sign of lost political power and, in the case of Spain, panic. If you remember that Spain is a major EU country, it’s understandable that many other governors in the West are worried something similar might happen to them soon.

Governors are always afraid of losing sight of and grip on what people in their constituency are up to. Today it is not enough to have buddies in broadcast media who frame the discussion and interpretation of events to the governor’s liking. You also need to understand and, if possible, contain wider internet discussions before they can effect change you don’t want. Governors learned from Hannah Arendt that private discussions form the basis for public opinions, which in turn structure and determine governmental power. If that weren’t the case, how could feminist struggles, and really any social struggle, have succeeded? It certainly was never the broadcast media or governors who first talked about and demanded rights for women or other oppressed groups.

How to contain decentralized communication?

New realities are co-created in a more decentralized manner and more quickly than ever. Communication platforms grew in the last decade because of people’s interest in communicating and connecting with one another. Maybe it is a lost sense of community in disintegrating city neighborhoods that makes people use “social media”. But in any case, YouTube, Twitter, Gmail, Facebook and the iOS/Android app platforms became big because they facilitated decentralized communication and sharing between people. This presents a problem for governors because web communications are harder to contain in acceptable ways.

For a typical broadcast media discussion format you can send allied experts and construct “position” and “opposition” and thus frame the discussion. For example, it’s acceptable to discuss the virtues and dangers of “press freedom”, how to deal with “islamist militants” or how to “defend our values and rights”. Western governors find it much less acceptable to link the Charlie Hebdo killings or the rise of the “Islamic State” to the recent Western wars in Iraq, Libya and Syria, or to the everyday killing of civilians through Western drones and torture. Governors can’t yet directly contain such unacceptable linking activities, and they are worried about it. For the time being, they try to frame it as irrelevant and repeat the “we are being attacked by ruthless killers” line on broadcast media some more. It still kind of works, but it’s unclear for how long.

What helps to contain discussions is to implant “You are being watched!” into the minds of people discussing the future of their governance. Putting up some public examples of punishment for unacceptable dissent refines the message into “Watch your words (and internet links!)”, also known as internalized censorship or self-censorship. That’s not just effective for governors in Saudi Arabia but for their Western allies as well. The recent US sentencing of journalist Barrett Brown to 63 months of prison, in a case centering on his posting of a link to some leaked data on an IRC channel, can be seen as an example of a public punishment with chilling effects.

Arguments and national tactics against crypto attacks

Governors have long realized they can exploit central communication platform ownership to tap into most private communications. But to their apparent shock, many IT companies in the post-Snowden era are implementing decentralized encryption because they in turn want to assure users that they cannot surveil their private messages. As a reaction, governors are conspiring to prevent decentralized encryption from reaching the masses, which would see them lose their current in-depth access to private communication. Psychologically speaking, losing power is always harder to accept than not having had it in the first place.

A response to the crypto attacks which I consider optimistic, if not shallow, is “it’s not technically feasible to regulate or ban end-to-end crypto”. It underestimates the ability of governors to write laws which drastically change the playing field, even if in an incremental manner. To begin with, why shouldn’t it be possible to prevent companies from distributing apps which incorporate decentralized encryption? Google and Apple already apply their own regulations to what kinds of apps are distributed through their stores. A further regulation against decentralized-crypto apps could probably be added by the governors in the US. And that would prevent decentralized encryption from reaching the masses, at least in the short term.

As to government access to end-to-end encryption, it’s true that backdooring crypto would make people more vulnerable to all kinds of exploiting attacks, not just governmental ones. Governors might frame this dilemma by claiming that security against physical attacks is more important than security against someone reading your messages. Such an argument already incorporates the flawed “it’s all about anti-terror” framing. The increased vulnerability of everyone’s devices is a bit of a tricky issue for governors, given that they couldn’t even protect their own data against Snowden. If necessary, governors will try to make concessions. Some applications such as online banking could be allowed to use non-backdoored crypto; they have all the banking data already, anyway. They will probably want to exempt governmental communication itself as well. With that we’d end up with a complete reversal of the democratic principle: public government acting in secret while private communication is constantly surveilled.

Western governors have learned from the last Crypto Wars battles. They know full well that they can only break private communication encryption if they outlaw it in a synchronized, international manner. Otherwise they would have a harder time overcoming national arguments like “companies are going to leave the country if you ban decentralized encryption”. Therefore, we need to fend off attacks on decentralized crypto in at least some Western countries to keep such commercial arguments potent. Concretely, US companies like Google and Apple will resist more strongly if the EU does not also outlaw decentralized crypto.

It is as crucial to prevent EU crypto regulations now as it was two decades ago. During the crypto battles of the 1990s I studied with the deeply inspiring Prof. Andreas Pfitzmann, who advised the German government on crypto regulation. Along with other colleagues and groups he worked tirelessly and finally turned the tide, preventing Germany, and thus the EU, from introducing government backdoors into crypto algorithms. This in turn led France and then the US to drop their plans and eventually relax crypto export regulations to keep their companies competitive. Today, we are back to square one and must again convince some EU governments or parliaments to refrain from crypto-banning laws. It’s a fight we had better not lose.

Lastly, I’d like to be clear, if perhaps controversial, on the dreadful anti-terror topic: if Western governments want to stop killers from targeting Western individuals, they first need to stop ruthlessly killing and terrorizing individuals abroad. Nothing else will bring more physical security against terrorist attacks. It reminds me of the 2500-year-old question from the Chinese politician and philosopher Confucius: “The way out is via the door. Why is it that no one will use this method?”

Written by holger krekel

January 24, 2015 at 9:02 pm

Running tests against multiple devices/resources (in parallel)


How to best distribute tests against multiple devices or resources with pytest? This interesting question came up during my training in Lviv (Ukraine) at an embedded systems company. Distributing tests to processes can serve two purposes:

  • running the full test suite against each device to verify they all work according to the test specification
  • distributing the test load to several devices of the same type in order to minimize overall test execution time.

The solution to both problems is easy if you use two pytest facilities:

  • the general fixture mechanism: we write a fixture function which provides a device object which is pre-configured for use in tests.
  • the pytest-xdist plugin: we use it to run subprocesses and communicate configuration data for the device fixture from the master process to the subprocesses.

To begin with, let’s configure three devices, each reachable by a separate IP address. We create a list of IP addresses in a file:

# content of devices.json
["192.168.0.1", "192.168.0.2", "192.168.0.3"]

We now create a local pytest plugin which reads the configuration data, implements a per-process device fixture, and handles the master-to-slave communication to configure each subprocess according to our device list:

# content of conftest.py

import json

import pytest

def read_device_list():
    with open("devices.json") as f:
        return json.load(f)

def pytest_configure(config):
    # read the device list if we are on the master process
    if not hasattr(config, "slaveinput"):
        config.iplist = read_device_list()

def pytest_configure_node(node):
    # for each node, the master fills the slaveinput dictionary,
    # which pytest-xdist will transfer to the subprocess
    node.slaveinput["ipadr"] = node.config.iplist.pop()

@pytest.fixture(scope="session")
def device(request):
    slaveinput = getattr(request.config, "slaveinput", None)
    if slaveinput is None: # single-process execution
        ipadr = read_device_list()[0]
    else: # running in a subprocess here
        ipadr = slaveinput["ipadr"]
    return Device(ipadr)

class Device:
    def __init__(self, ipadr):
        self.ipadr = ipadr

    def __repr__(self):
        return "<Device ip=%s>" % (self.ipadr)

We can now write tests that simply make use of the device fixture by using its name as an argument to a test function:

# content of test_device.py
import time

def test_device1(device):
    time.sleep(2)  # simulate long test time
    assert 0, device

def test_device2(device):
    time.sleep(2)  # simulate long test time
    assert 0, device

def test_device3(device):
    time.sleep(2)  # simulate long test time
    assert 0, device

Let’s first run the tests in a single process, using only a single device (and some reporting options to shorten the output):

$ py.test test_device.py -q --tb=line
FFF
================================= FAILURES =================================
/tmp/doc-exec-9/test_device.py:5: AssertionError: <Device ip=192.168.0.1>
/tmp/doc-exec-9/test_device.py:9: AssertionError: <Device ip=192.168.0.1>
/tmp/doc-exec-9/test_device.py:13: AssertionError: <Device ip=192.168.0.1>
3 failed in 6.02 seconds

As expected, we get six seconds of execution time (3 tests times 2 seconds each).

Now let’s run the same tests in three subprocesses, each using a different device:

$ py.test --tx 3*popen --dist=each test_device.py -q --tb=line
gw0 I / gw1 I / gw2 I
gw0 [3] / gw1 [3] / gw2 [3]

scheduling tests via EachScheduling
FFFFFFFFF
================================= FAILURES =================================
E   AssertionError: <Device ip=192.168.0.1>
E   AssertionError: <Device ip=192.168.0.3>
E   AssertionError: <Device ip=192.168.0.2>
E   AssertionError: <Device ip=192.168.0.1>
E   AssertionError: <Device ip=192.168.0.3>
E   AssertionError: <Device ip=192.168.0.2>
E   AssertionError: <Device ip=192.168.0.3>
E   AssertionError: <Device ip=192.168.0.1>
E   AssertionError: <Device ip=192.168.0.2>
9 failed in 6.52 seconds

We just created three subprocesses, each running the three tests. Instead of 18 seconds of execution time (9 test runs times 2 seconds each) we got roughly 6 seconds, a threefold speedup: each subprocess ran its three tests sequentially against “its” device, and the subprocesses ran in parallel.

Let’s also run with load-balancing, i.e. distributing the tests against three different devices so that each device executes one test:

$ py.test --tx 3*popen --dist=load test_device.py -q --tb=line
gw0 I / gw1 I / gw2 I
gw0 [3] / gw1 [3] / gw2 [3]

scheduling tests via LoadScheduling
FFF
================================= FAILURES =================================
E   AssertionError: <Device ip=192.168.0.3>
E   AssertionError: <Device ip=192.168.0.2>
E   AssertionError: <Device ip=192.168.0.1>
3 failed in 2.50 seconds

Here each test runs in a separate process against its own device, overall more than halving the test time compared to what it would take in a single process (3*2=6 seconds). If we had many more tests than subprocesses, load-scheduling would distribute tests in real time to whichever process has finished executing its previous tests.

Note that the tests themselves do not need to be aware of the distribution mode. All configuration and setup is contained in the conftest.py file.

To summarize the behaviour of the hooks and fixtures in conftest.py:

  • pytest_configure(config) is called both on the master and in each subprocess. We can distinguish where we are by checking for the presence of config.slaveinput.
  • pytest_configure_node(node) is called on the master for each subprocess node. We can fill the slaveinput dictionary, which the subprocess slave can then read via its config.slaveinput dictionary.
  • the device fixture is only called when a test needs it. In distributed mode, tests are only collected and executed in a subprocess; in non-distributed mode, tests run in a single process. The Device class is just a stub; it will need to grow methods for actual device communication (see the sketch below). The tests can then simply use those device methods.
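As a rough illustration of how the stub might grow, here is a minimal sketch of a Device with one communication method. The port number and the line-based protocol are made-up assumptions; a real device would dictate its own transport and protocol:

import socket

class Device:
    def __init__(self, ipadr, port=4711):  # port 4711 is a made-up example
        self.ipadr = ipadr
        self.port = port

    def send_command(self, cmd):
        # hypothetical line-based protocol: send one command line,
        # read back a single reply line
        sock = socket.create_connection((self.ipadr, self.port), timeout=5)
        try:
            sock.sendall(cmd.encode() + b"\n")
            return sock.makefile().readline().strip()
        finally:
            sock.close()

    def __repr__(self):
        return "<Device ip=%s>" % self.ipadr

A test could then simply call device.send_command("status") and assert on the reply.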

I’d like to thank Anton and the participants of my three-day testing training in Lviv (Ukraine) for bringing up this and many other interesting questions.


I am giving another such professional testing course on 25-27 November at the Python Academy in Leipzig. There are still two seats available. Other trainers and I can also be booked for on-site/in-house trainings worldwide.

Written by holger krekel

November 12, 2013 at 7:43 am

Defeating Sauron with the “Trust on first use” principle


(photo by Alexandre Duret-Lutz) Gandalf and Frodo did the right thing when they set out to destroy the power of the all-seeing eye. The idea of a central power that knows everything undermines our ability to self-govern and to effect important changes in society; it undermines a foundation of democracy.

As against Sauron, it seems an impossible fight to try to protect our communication against present-day espionage cartels. I see glimmers of hope, though, certainly not many in the political space. Somehow our politicians are themselves too interested in using the eye on select targets, even if only on the ones which Sauron allows them to see.

My bigger hope lies with the technologists who are working on designing better communication systems. We still have time in which we can reduce Sauron’s sight. But to begin with, how do we prevent passive spying attacks against our communications?

A good part of the answer lies in the trust-on-first-use principle. The mobile Threema application is a good example: when two people first connect with each other, they exchange communication keys and afterwards use them to perform end-to-end encrypted communication. The key exchange can happen in full sight of the eye, yet the subsequent communication will be illegible. No question, the eye can notice that the two are communicating with unknown content, but if enough people do that, this fact becomes less significant.
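To make the principle concrete, here is a minimal sketch of the pinning side of trust on first use in Python. This is not Threema’s actual protocol, just the bare idea: remember a peer’s key fingerprint on first contact and raise an alarm if it ever changes. The pins-file location and the notion that you obtain one fingerprint per peer are assumptions for illustration:

import json
import os

PINS_FILE = os.path.expanduser("~/.tofu_pins.json")  # made-up location

def load_pins():
    if os.path.exists(PINS_FILE):
        with open(PINS_FILE) as f:
            return json.load(f)
    return {}

def check_peer(peer_id, fingerprint):
    # trust on first use: pin an unknown peer's key fingerprint,
    # raise an alarm if a known peer's fingerprint ever changes
    pins = load_pins()
    known = pins.get(peer_id)
    if known is None:
        # first contact: trust and remember the fingerprint
        pins[peer_id] = fingerprint
        with open(PINS_FILE, "w") as f:
            json.dump(pins, f)
        return "trusted-on-first-use"
    if known == fingerprint:
        return "ok"
    # a changed key may mean a Nazgul-in-the-middle: verify out of band
    raise ValueError("key for %s changed, verify out of band!" % peer_id)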

Of course, the all-seeing eye can send a Nazgul to stand in the middle of the communication to deceive both ends and listen in. But it needs to do so from the beginning, and continuously, if it wants to keep the victims from noticing. And the two can at any time meet to verify their encryption keys, and would then realize there was a Nazgul-in-the-middle attack.

By contrast, both SSL and GPG operate with trust models where we can hear Sauron’s distant laughter. The one is tied to a thousand or so “root authorities”, which can easily be reined in as needed. The other mandates and propagates such a high level of initial mistrust between us that we find it simply too inconvenient to use.

Societies and our social interactions are fundamentally built on trust. Let’s design systems which build on initial trust and which help to identify after the fact when it was compromised. If the eye has bad dreams, then I am sure massively deployed trust-on-first-use communication systems are among them.

Written by holger krekel

October 26, 2013 at 7:04 am

pytest-2.4.0: new fixture features, many bug fixes


The just-released pytest-2.4.0 brings many improvements and numerous bug fixes while remaining plugin- and test-suite compatible (apart from a few supposedly very minor incompatibilities). See below for a full list of details. New feature highlights:
  • new yield-style fixtures via pytest.yield_fixture, allowing you to use existing with-style context managers in fixture functions (see the sketch after this list).
  • improved pdb support: import pdb ; pdb.set_trace() now works without requiring prior disabling of stdout/stderr capturing. Also the --pdb option now works on collection and internal errors, and we introduced a new experimental hook for IDEs/plugins to intercept debugging: pytest_exception_interact(node, call, report).
  • a shorter monkeypatch variant that allows specifying an import path as a target, for example: monkeypatch.setattr("requests.get", myfunc)
  • better unittest/nose compatibility: all teardown methods are now only called if the corresponding setup method succeeded.
  • integrate tab-completion on command-line options if you have argcomplete configured.
  • allow boolean expressions directly with skipif/xfail if a “reason” is also specified.
  • a new hook pytest_load_initial_conftests allows plugins like pytest-django to influence the environment before conftest files import django.
  • reporting: color the last line red or green depending on whether failures/errors occurred or everything passed.
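Here is a minimal sketch of the yield-style fixture from the first bullet, reusing a standard-library context manager; the fixture and test names are made up:

# content of test_yieldfix.py
import tempfile

import pytest

@pytest.yield_fixture
def tmp_file():
    # reuse an existing with-style context manager directly;
    # everything after the yield runs as teardown
    with tempfile.TemporaryFile() as f:
        yield f

def test_write(tmp_file):
    tmp_file.write(b"hello")  # the file is closed automatically after the test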

The documentation has been updated to accommodate the changes, see http://pytest.org

To install or upgrade pytest:

pip install -U pytest # or
easy_install -U pytest

Many thanks to all who helped, including Floris Bruynooghe, Brianna Laugher, Andreas Pelme, Anthon van der Neut, Anatoly Bubenkoff, Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian Theunert and many others.

may nice fixtures and passing tests be with you,

holger krekel

Changes between 2.3.5 and 2.4

known incompatibilities:

  • if calling --genscript from Python 2.7 or above, you only get a standalone script which works on Python 2.7 or above. Use Python 2.6 to also get a Python 2.5 compatible version.
  • all xunit-style teardown methods (nose-style, pytest-style, unittest-style) will not be called if the corresponding setup method failed, see issue322 below.
  • the pytest_plugin_unregister hook was never properly called and there is no known implementation of the hook, so it got removed.
  • pytest.fixture-decorated functions cannot be generators (i.e. use yield) anymore. This change might be reversed in 2.4.1 if it causes unforeseen real-life issues. However, you can always write and return an inner function/generator and change the fixture consumer to iterate over the returned generator (a sketch of this workaround follows this list). This change was made in favour of the new pytest.yield_fixture decorator, see below.
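Here is a minimal sketch of that workaround, with made-up fixture and test names:

import pytest

@pytest.fixture
def numbers():
    def gen():
        yield 1
        yield 2
    return gen()  # return the generator instead of being one

def test_numbers(numbers):
    # the consumer iterates over the returned generator itself
    assert list(numbers) == [1, 2]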

new features:

  • experimentally introduce a new pytest.yield_fixture decorator which accepts exactly the same parameters as pytest.fixture but mandates a yield statement instead of a return statement in fixture functions. This allows direct integration with with-style context managers in fixture functions and generally avoids registering finalization callbacks in favour of treating the code after the yield as teardown code. Thanks Andreas Pelme, Vladimir Keleshev, Floris Bruynooghe, Ronny Pfannschmidt and many others for discussions.

  • allow boolean expression directly with skipif/xfail if a “reason” is also specified. Rework skipping documentation to recommend “condition as booleans” because it prevents surprises when importing markers between modules. Specifying conditions as strings will remain fully supported.

  • reporting: color the last line red or green depending on whether failures/errors occurred or everything passed. Thanks Christian Theunert.

  • make “import pdb ; pdb.set_trace()” work natively wrt capturing (no “-s” needed anymore), making pytest.set_trace() a mere shortcut.

  • fix issue181: --pdb now also works on collect errors (and on internal errors). This was implemented by a slight internal refactoring and the introduction of the new pytest_exception_interact hook (see next item).

  • fix issue341: introduce new experimental hook for IDEs/terminals to intercept debugging: pytest_exception_interact(node, call, report).

  • new monkeypatch.setattr() variant to provide a shorter invocation for patching out classes/functions from modules:

    monkeypatch.setattr("requests.get", myfunc)

    will replace the “get” function of the “requests” module with myfunc.

  • fix issue322: tearDownClass is not run if setUpClass failed. Thanks Mathieu Agopian for the initial fix. Also make all pytest/nose finalizers mimic the same generic behaviour: if a setupX exists and fails, don’t run teardownX. This internally introduces a new “node.addfinalizer()” helper method which can only be called during the setup phase of a node.

  • simplify the pytest.mark.parametrize() signature: allow passing a comma-separated string to specify argnames. For example: pytest.mark.parametrize("input,expected", [(1,2), (2,3)]) works as well as the previous: pytest.mark.parametrize(("input", "expected"), ...).

  • add support for setUpModule/tearDownModule detection, thanks Brian Okken.

  • integrate tab-completion on options through use of “argcomplete”. Thanks Anthon van der Neut for the PR.

  • change option names to be hyphen-separated long options but keep the old spellings backward compatible. py.test -h will only show the hyphenated version, for example “--collect-only”, but “--collectonly” will remain valid as well (for backward-compatibility reasons). Many thanks to Anthon van der Neut for the implementation and to Hynek Schlawack for pushing us.

  • fix issue 308 – allow marking/xfailing/skipping individual parameter sets when parametrizing. Thanks Brianna Laugher.

  • call new experimental pytest_load_initial_conftests hook to allow 3rd party plugins to do something before a conftest is loaded.

Bug fixes:

  • fix issue358 – capturing options are now parsed more properly by using a new parser.parse_known_args method.
  • pytest now uses argparse instead of optparse (thanks Anthon) which means that “argparse” is added as a dependency if installing into python2.6 environments or below.
  • fix issue333: fix a case of bad unittest/pytest hook interaction.
  • PR27: correctly handle nose.SkipTest during collection. Thanks Antonio Cuni, Ronny Pfannschmidt.
  • fix issue355: junitxml puts name=”pytest” attribute to testsuite tag.
  • fix issue336: autouse fixture in plugins should work again.
  • fix issue279: improve object comparisons on assertion failure for standard datatypes and recognise collections.abc. Thanks to Brianna Laugher and Mathieu Agopian.
  • fix issue317: assertion rewriter support for the is_package method
  • fix issue335: document py.code.ExceptionInfo() object returned from pytest.raises(), thanks Mathieu Agopian.
  • remove implicit distribute_setup support from setup.py.
  • fix issue305: ignore any problems when writing pyc files.
  • SO-17664702: call fixture finalizers even if the fixture function partially failed (finalizers would not always be called before)
  • fix issue320 – fix class scope for fixtures when mixed with module-level functions. Thanks Anatoly Bubenkoff.
  • you can specify “-q” or “-qq” to get different levels of “quieter” reporting (thanks Katarzyna Jachim)
  • fix issue300 – Fix order of conftest loading when starting py.test in a subdirectory.
  • fix issue323 – sorting of many module-scoped arg parametrizations
  • make sessionfinish hooks execute with the same cwd-context as at session start (helps fix the behaviour of plugins which write output files with relative paths, such as pytest-cov)
  • fix issue316 – properly reference collection hooks in docs
  • fix issue 306 – cleanup of -k/-m options to only match markers/test names/keywords respectively. Thanks Wouter van Ackooy.
  • improved doctest counting for doctests in python modules — files without any doctest items will not show up anymore and doctest examples are counted as separate test items. thanks Danilo Bellini.
  • fix issue245 by depending on the released py-1.4.14 which fixes py.io.dupfile to work with files with no mode. Thanks Jason R. Coombs.
  • fix junitxml generation when test output contains control characters, addressing issue267, thanks Jaap Broekhuizen
  • fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.
  • fix issue307 – use yaml.safe_load in example, thanks Mark Eichin.
  • better parametrize error messages, thanks Brianna Laugher
  • pytest_terminal_summary(terminalreporter) hooks can now use “.section(title)” and “.line(msg)” methods to print extra information at the end of a test run (a minimal sketch follows this list).
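For example, a conftest.py plugin could append a custom section at the end of the run; the section content here is a made-up example:

# content of conftest.py
def pytest_terminal_summary(terminalreporter):
    # print an extra section after the normal test summary
    terminalreporter.section("build info")
    terminalreporter.line("tested against firmware 1.2 (example value)")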

Written by holger krekel

October 1, 2013 at 9:40 am

Posted in metaprogramming


PEP438 is live: speed up python package installs now!


My “speed up pypi installs” PEP438 has been accepted, and transition phase 1 is live: as a package maintainer you can speed up the installation of your packages for all your users now, with the click of a button. Log in to https://pypi.python.org, go to the URLs page of each of your packages, and specify that all release files are hosted from pypi.python.org, or add explicit download URLs with an MD5 checksum. Tools such as pip or easy_install will then avoid any slow crawling of third-party sites.

Many thanks to Carl Meyer, who helped me write the PEP, to Donald Stufft for implementing most of it, and to Richard Jones, who accepted it today! Thanks also to the distutils-sig discussion participants, in particular Phillip Eby and Marc-Andre Lemburg.


Written by holger krekel

May 19, 2013 at 7:49 am

Posted in metaprogramming


If I were to tweet a misogynist joke …


If a man were to tweet a misogynist joke and his followers were men, would that be an issue? What if one of them re-tweets it and one of his female followers complains on Twitter? And then many other people start tweeting and re-tweeting this or that, and what if this all got the initial tweeter fired from his company? And what if her company then fired her as well?

Quite a mess, obviously. However, I think everyone had their reasons for talking and acting the way they did. And it boils down to which perspective you are able to feel empathy for. Here is a possible set of perspectives:

Perspective M: “The other day I was ridiculed by a bunch of girls at the office. I wanted to pay back with a little joke in an environment where I felt safe to do so.”

Perspective F: “Again a tweet with bad misogynist jokes. I’ve had enough. This time I won’t sit quietly but will call it out.”

Perspective C1: “Damn, look what this guy caused. His twitter profile is directly associated with our company, and now he tells bad misogynist jokes and it’s all over the internet. We cannot let it go this time.”

Perspective C2: “Damn it, look what she caused. She works in public relations and doesn’t know better than to cause a shitstorm which comes directly back to us as a company? We cannot let it go.”

I can understand each of these perspectives, though I suspect that the companies chose a bit of an easy way out. Had they put the issue of misogyny at the center of their positioning and communication, rather than focusing just on keeping damage away from the company, I am sure everybody would have learned a lesson and the incident could have contributed to a more enjoyable environment.

Written by holger krekel

March 23, 2013 at 12:16 pm

Packaging, testing, pypi and my Pycon Russia adventures


A few days ago I talked at PyCon Russia on packaging and testing and on a new PyPI server implementation and workflow tool I am working on, codenamed devpi. See the slides and the video. The slides are converted from my hovercraft-based presentation which you can find here (needs JavaScript). devpi tries to solve the “standardization” problem around Python packaging by offering a good index server and a “meta approach” to configuring and invoking setup.py/easy_install/pip, incorporating existing practices and facilitating new ones. The slides and the talk hopefully clarify a bit of the reasoning behind it.
Besides the good feedback and discussions around my talk, I just had a great few days. It was my first time in Russia and I saw and learned a lot. One unexpected event was going to a Russian sauna with Amir Salihefendic, Russel Keith-Magee and Anton, a main conference organizer. Between rounds in the sauna we had glasses of nice Irish whiskey or walked outside into the snowy, freezing cold. Afterwards some of us went to the conference party and had good (despite being somewhat drunk) discussions with people from Yandex, the biggest Russian search engine, and several Russian devs. All very friendly, competent and funny. The party lasted until 5:30am, with my fellow English-speaking speakers Armin Ronacher, David Cramer (a weekend in Russia) and me being among the very last.


David, Amir, Russel, and our Russian hosts

The next day's evening saw Amir, David, Armin and two Russian guys visiting an Irish pub past midnight. It turned out there is no such thing as a “Russian pub”; the concept of a “pub” was imported in the last decade, mostly in the form of English or Irish ones. And it seems IT/Python guys can meet anywhere on the planet and have a good time :)


Ice, Ekaterinburg at night, and an anonymous shop

Going back to content, I felt particularly inspired by Jeff Lindsay’s talk on autosustainable services. He described how he tries to provide several small web services, and how to organize cost sharing among their users. As services need resources to run, this is a different issue from open-source collaboration, which does not require them to exist.

I heard several good sentences from my fellow speakers, for example one from Russel Keith-Magee describing a dilemma of open-source communities: “There are many people who can say ‘No’ but few who can say ‘Yes’ to something”. Amir Salihefendic described how the Redis database solved many problems for him, and some interesting concrete usages of bitmaps in his current endeavours like bitmapist.cohort. And of course Armin Ronacher and David Cramer also gave good talks related to their experience, on advanced Flask patterns and scalable web services respectively. With Armin I also had a good private discussion about the issue of code signing and verification; we drafted what we think could work for Python packaging (more on that separately). With David, I discussed workflow commands for Python packaging, as he offered some good thoughts on the matter.

Around the whole conference we were warmly cared for by Yulia’s company it-people.ru, who handled the physical organisation, and by Anton and his friends, who organized the program. Maria Kalinina in particular cared for the keynote speakers and many other aspects of the conference; without her, I wouldn’t have made it. Anton drove us to the Asian-European geographic border, and Yulia to the skyscraper of Ekaterinburg, overlooking the third-largest city in Russia. Russel and I also took the opportunity to walk around Ekaterinburg, looking at Lenin sculptures, buildings made of ice, frozen lakes, and the many shops and noises of the city.


Iced lake, Lenin forever, the Asia/Europe border

Lastly I went to the university with Russel to talk for two hours to students about “How open source can help your career”, and we had a lively discussion with them and the lecturer who invited us. I offered my own background and stated that the very best people in the IT world today collaborate through open source. It’s a totally dominant model for excellence. (Which doesn’t mean there are no good proprietary projects; they are just fewer, I’d say.)

So I can join the many Russian participants who thought PyCon Russia was a very good conference. It’s of course mostly interesting for people speaking Russian, as only seven talks were in English. For my part, the intense time I had with both the Russian hosts and developers and the English-speaking talkers was very much worth it; I think there might be a few new collaborations coming from it. More on that in later blog posts, hopefully :)

Two days ago I left Ekaterinburg and felt a bit sad because of the many contacts I had made, which almost felt like the beginnings of friendships.

Written by holger krekel

March 1, 2013 at 12:38 pm
