24 Jun 2017

Planet Python

Stephen Ferg: Python Decorators

In August 2009, I wrote a post titled Introduction to Python Decorators. It was an attempt to explain Python decorators in a way that I (and I hoped, others) could grok.

Recently I had occasion to re-read that post. It wasn't a pleasant experience - it was pretty clear to me that the attempt had failed.

That failure - and two other things - have prompted me to try again.

There is an old saying to the effect that "Every stick has two ends, one by which it may be picked up, and one by which it may not." I believe that most explanations of decorators fail because they pick up the stick by the wrong end.

In this post I will show you what the wrong end of the stick looks like, and point out why I think it is wrong. And I will show you what I think the right end of the stick looks like.

The wrong way to explain decorators

Most explanations of Python decorators start with an example of a function to be decorated, like this:

def aFunction():
    print("inside aFunction")

and then add a decoration line, which starts with an @ sign:

@myDecorator
def aFunction():
    print("inside aFunction")

At this point, the author of the introduction often defines a decorator as the line of code that begins with the "@". (In my older post, I called such lines "annotation" lines. I now prefer the term "decoration" line.)

For instance, in 2008 Bruce Eckel wrote on his Artima blog

A function decorator is applied to a function definition by placing it on the line before that function definition begins.

and in 2004, Phillip Eby wrote in an article in Dr. Dobb's Journal

Decorators may appear before any function definition…. You can even stack multiple decorators on the same function definition, one per line.

Now there are two things wrong with this approach to explaining decorators. The first is that the explanation begins in the wrong place. It starts with an example of a function to be decorated and a decoration line, when it should begin with the decorator itself. The explanation should end, not start, with the decorated function and the decoration line. The decoration line is, after all, merely syntactic sugar - it is not at all an essential element in the concept of a decorator.

The second is that the term "decorator" is used incorrectly (or ambiguously) to refer both to the decorator and to the decoration line. For example, in his Dr. Dobb's Journal article, after using the term "decorator" to refer to the decoration line, Phillip Eby goes on to define a "decorator" as a callable object.

But before you can do that, you first need to have some decorators to stack. A decorator is a callable object (like a function) that accepts one argument-the function being decorated.

So… it would seem that a decorator is both a callable object (like a function) and a single line of code that can appear before the line of code that begins a function definition. This is sort of like saying that an "address" is both a building (or apartment) at a specific location and a set of lines (written in pencil or ink) on the front of a mailing envelope. The ambiguity may be almost invisible to someone familiar with decorators, but it is very confusing for a reader who is trying to learn about decorators from the ground up.

The right way to explain decorators

So how should we explain decorators?

Well, we start with the decorator, not the function to be decorated.

One
We start with the basic notion of a function - a function is something that generates a value based on the values of its arguments.

Two
We note that in Python, functions are first-class objects, so they can be passed around like other values (strings, integers, objects, etc.).

Three
We note that because functions are first-class objects in Python, we can write functions that both (a) accept function objects as argument values, and (b) return function objects as return values. For example, here is a function foobar that accepts a function object original_function as an argument and returns a function object new_function as a result.

def foobar(original_function):

    # make a new function
    def new_function():
        # some code
        pass

    return new_function

Four
We define "decorator".

A decorator is a function (such as foobar in the above example) that takes a function object as an argument, and returns a function object as a return value.

So there we have it - the definition of a decorator. Anything else that we say about decorators is a refinement of, or an expansion of, or an addition to, this definition of a decorator.

Five
We show what the internals of a decorator look like. Specifically, we show different ways that a decorator can use the original_function in the creation of the new_function. Here is a simple example.

def verbose(original_function):

    # make a new function that prints a message when original_function starts and finishes
    def new_function(*args, **kwargs):
        print("Entering", original_function.__name__)
        result = original_function(*args, **kwargs)
        print("Exiting ", original_function.__name__)
        return result  # pass the original function's return value through

    return new_function

Six
We show how to invoke a decorator - how we can pass into a decorator one function object (its input) and get back from it a different function object (its output). In the following example, we pass the widget_func function object to the verbose decorator, and we get back a new function object to which we assign the name talkative_widget_func.

def widget_func():
    # some code

talkative_widget_func = verbose(widget_func)

Seven
We point out that decorators are often used to add features to the original_function. Or more precisely, decorators are often used to create a new_function that does roughly what original_function does, but also does things in addition to what original_function does.

And we note that the output of a decorator is typically used to replace the original function that we passed in to the decorator as an argument. A typical use of decorators looks like this. (Note the change to line 4 from the previous example.)

def widget_func():
    # some code

widget_func = verbose(widget_func)

So for all practical purposes, in a typical use of a decorator we pass a function (widget_func) through a decorator (verbose) and get back an enhanced (or souped-up, or "decorated") version of the function.

Eight
We introduce Python's "decoration syntax" that uses the "@" to create decoration lines. This feature is basically syntactic sugar that makes it possible to re-write our last example this way:

@verbose
def widget_func():
    # some code

The result of this example is exactly the same as the previous example - after it executes, we have a widget_func that has all of the functionality of the original widget_func, plus the functionality that was added by the verbose decorator.
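
To make this result concrete, here is a small sketch of what calling the decorated function prints. The body of widget_func is a stand-in, since the examples above leave it as a placeholder.

@verbose
def widget_func():
    print("inside widget_func")   # stand-in body, for illustration only

widget_func()

# Entering widget_func
# inside widget_func
# Exiting  widget_func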

Note that in this way of explaining decorators, the "@" and decoration syntax is one of the last things that we introduce, not one of the first.

And we absolutely do not refer to line 1 as a "decorator". We might refer to line 1 as, say, a "decorator invocation line" or a "decoration line" or simply a "decoration"… whatever. But line 1 is not a "decorator".

Line 1 is a line of code. A decorator is a function - a different animal altogether.


Nine
Once we've nailed down these basics, there are a few advanced features to be covered.
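
One such refinement, sketched here as an illustration rather than as part of the basic definition, is that a decorator usually copies the original function's name and docstring onto the new function it returns, so that introspection and help() still report widget_func rather than new_function. The standard library's functools.wraps does this for us:

import functools

def verbose(original_function):

    # functools.wraps copies __name__, __doc__, etc. from the original
    # function onto the new function.
    @functools.wraps(original_function)
    def new_function(*args, **kwargs):
        print("Entering", original_function.__name__)
        result = original_function(*args, **kwargs)
        print("Exiting ", original_function.__name__)
        return result

    return new_function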

Ten - A decorators cookbook

The material that we've covered up to this point is what any basic introduction to Python decorators would cover. But a Python programmer needs something more in order to be productive with decorators. He (or she) needs a catalog of recipes, patterns, examples, and commentary that describes / shows / explains when and how decorators can be used to accomplish specific tasks. (Ideally, such a catalog would also include examples and warnings about decorator gotchas and anti-patterns.) Such a catalog might be called "Python Decorator Cookbook" or perhaps "Python Decorator Patterns".


So that's it. I've described what I think is wrong (well, let's say suboptimal) about most introductions to decorators. And I've sketched out what I think is a better way to structure an introduction to decorators.

Now I can explain why I like Matt Harrison's e-book Guide to: Learning Python Decorators. Matt's introduction is structured in the way that I think an introduction to decorators should be structured. It picks up the stick by the proper end.

The first two-thirds of the Guide hardly talk about decorators at all. Instead, Matt begins with a thorough discussion of how Python functions work. By the time the discussion gets to decorators, we have been given a strong understanding of the internal mechanics of functions. And since most decorators are functions (remember our definition of decorator), at that point it is relatively easy for Matt to explain the internal mechanics of decorators.

Which is just as it should be.


Revised 2012-11-26 - replaced the word "annotation" with "decoration", following terminology ideas discussed in the comments.


24 Jun 2017 6:52pm GMT

Janusworx: On Starting with Summer Training at #dgplug

I started out with a very vague idea of learning programming last year.

I went to Pycon India, fell in love with the community, decided to learn software, and came home all charged up.
(Btw, I was so intimidated, I did not speak to a single soul)

The plan was to sort personal issues, tackle a couple of major work projects so that I could then focus on learning, clear the decks and go full steam ahead come April.

While I made headway, I was also missing the hum and bustle of Pycon that had so charged me. But I did remember one session I attended that had left me smiling - a sponsored talk, of all things, by a certain Mr. Das. Off the cuff, natural, warmly delivered.

So as I was looking for … someone to talk to, somewhere to belong, who comes along but Santa Das.

While that trip didn't quite happen due to personal reasons, we still kept in touch.
(Why he would do that with a newbie, know nothing like me, I do not know. The man has a large heart)

And when the new session of #dgplug was announced, I jumped at the chance!

To close, here's a little something, something about me 1

  1. Yes, I am obviously hiding my big, fat tummy in the pic :) 2
  2. I'm like a poor man's, still failing James Altucher.
  3. Yes, I'm a lot older than most of you :) 3
  4. I've been at this IT thing a long time. (since 1997, in fact) 4
  5. And yes, only now do I get the bright idea to learn software.
  6. I love the fact, that I get you to be my plus-minus-equal
  7. You folks make me feel all warm and enthusiastic and welcoming and make me feel like I found my tribe!
  8. I'm still head over heels in love with my better half

I look forward to learning so much from you, and to getting to know you better, over the coming months.
I wish you all make good art!

  1. My grandma says that :)

  2. dropped 7 kgs to 89. Only another 20 to go!

  3. not necessarily wiser :P

  4. land line telephone fixer man,hardware tech support at small firm, hardware tech support at huge firm, freelance engineer, consulting engineer, consulting manager.

24 Jun 2017 6:33pm GMT

Weekly Python StackOverflow Report: (lxxix) stackoverflow python report

These are the ten most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2017-06-24 17:55:47 GMT


  1. Loop through a list in Python and modify it - [40/4]
  2. How can I take two tuples to produce a dictionary? - [12/2]
  3. How to receive an update notification when a user enables 2-step verification? - [10/0]
  4. Python: Byte code of a compiled script differs based on how it was compiled - [7/1]
  5. Check if a string contains the list elements - [6/6]
  6. Creating a dictionary for each word in a file and counting the frequency of words that follow it - [6/5]
  7. Pandas: Count the first consecutive True values - [6/4]
  8. Does declaring variables in a function called from __init__ still use a key-sharing dictionary? - [6/2]
  9. What exactly does super() return in Python 3? - [6/2]
  10. Scipy sparse matrix exponentiation: a**16 is slower than a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a? - [6/1]

24 Jun 2017 6:11pm GMT

23 Jun 2017

Django community aggregator: Community blog posts

django-debreach + DRF = sadness

I sunk 4 hours of my life into this problem yesterday so I thought I might post it here for future frustrated nerds like myself.

If you're using django-debreach and Django REST Framework, you're going to run into all kinds of headaches regarding CSRF. DRF will complain with "CSRF Failed: CSRF token missing or incorrect.", and if you're like me, you'll be pretty confused, since there's nothing wrong with the request. My token was being sent, but it appeared longer than it should be.

So here's what was happening and how I fixed it. Hopefully it'll be useful to others.

Django-debreach encrypts the csrf token, which is normally just fine because it does so as part of the chain of middleware layers in every request. However, DRF doesn't respect the csrf portion of that chain. Instead it sets csrf_exempt() on all of its views and then relies on SessionAuthentication to explicitly call CSRFCheck().process_view(). Normally this is ok, but with a not-yet-decrypted csrf token, this process will always fail.

So to fix it all, I had to implement my own authentication class and use that in all of my views. Basically all this does is override SessionAuthentication's enforce_csrf() to first decrypt the token:

# Imports: SessionAuthentication comes from DRF; CSRFCryptMiddleware is
# django-debreach's middleware (import path assumed here).
from debreach.middleware import CSRFCryptMiddleware
from rest_framework.authentication import SessionAuthentication


class DebreachedSessionAuthentication(SessionAuthentication):

    def enforce_csrf(self, request):

        # Build a minimal stand-in "request" and let django-debreach decrypt
        # the token on it, then copy the decrypted token back onto the real POST.
        faux_req = {"POST": request.POST}

        CSRFCryptMiddleware().process_view(faux_req, None, (), {})
        request.POST["csrfmiddlewaretoken"] = faux_req["csrfmiddlewaretoken"]

        # Finally run DRF's normal CSRF enforcement against the decrypted token.
        SessionAuthentication.enforce_csrf(self, request)
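
To actually use the class, point DRF at it instead of the stock SessionAuthentication, either per-view or via DEFAULT_AUTHENTICATION_CLASSES in the REST_FRAMEWORK settings. A minimal per-view sketch (the myapp.authentication module path is just an assumption for illustration; import from wherever you defined the class):

# views.py
from rest_framework.response import Response
from rest_framework.views import APIView

from myapp.authentication import DebreachedSessionAuthentication

class WidgetView(APIView):
    # Swap in the debreach-aware session authentication for this view.
    authentication_classes = (DebreachedSessionAuthentication,)

    def post(self, request, format=None):
        return Response({"ok": True})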

Of course, none of this is necessary if you're running Django 1.10+ and already have Breach attack protection, but if you're stuck on 1.8 (as we are for now) this is the best solution I could find.

23 Jun 2017 9:12pm GMT

Build GraphQL Web APIs with Django Graphene

In the previous tutorial we introduced GraphQL for building Web APIs. In this tutorial we are going to build a real-world example web application with Django, which makes use of GraphQL and its Python implementation, Graphene.

Tutorial Parts :

Building Better Django WEB APIs with GraphQL Tutorial

Build GraphQL Web APIs with Django Graphene

Let's start by creating a new virtual environment and installing the required packages, including Django.

Head over to your terminal and enter:

virtualenv graphqlenv 
source graphqlenv/bin/activate 

This will create a new virtual environment and activate it.

Next, install the django and graphene_django packages with pip:

pip install django 
pip install graphene_django

GraphiQL, a browser-based user interface for testing GraphQL queries against your server, comes bundled with graphene_django, so there is nothing extra to install for it.

Next, let's create a Django project with a single application:

django-admin startproject product_inventory_manager
cd product_inventory_manager
python manage.py startapp inventory

Open settings.py and add inventory and graphene_django to the INSTALLED_APPS list:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'graphene_django',
    'inventory'
]

Then create your database:

python manage.py migrate 

Create models

Open inventory/models.py then add:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models

class Product(models.Model):

    sku = models.CharField(max_length=13,help_text="Enter Product Stock Keeping Unit")
    barcode = models.CharField(max_length=13,help_text="Enter Product Barcode (ISBN, UPC ...)")

    title = models.CharField(max_length=200, help_text="Enter Product Title")
    description = models.TextField(help_text="Enter Product Description")

    unitCost = models.FloatField(help_text="Enter Product Unit Cost")
    unit = models.CharField(max_length=10,help_text="Enter Product Unit ")

    quantity = models.FloatField(help_text="Enter Product Quantity")
    minQuantity = models.FloatField(help_text="Enter Product Min Quantity")

    family = models.ForeignKey('Family')
    location = models.ForeignKey('Location')


    def __str__(self):

        return self.title


class Family(models.Model):

    reference = models.CharField(max_length=13, help_text="Enter Family Reference")
    title = models.CharField(max_length=200, help_text="Enter Family Title")
    description = models.TextField(help_text="Enter Family Description")

    unit = models.CharField(max_length=10,help_text="Enter Family Unit ")

    minQuantity = models.FloatField(help_text="Enter Family Min Quantity")


    def __str__(self):

        return self.title


class Location(models.Model):


    reference = models.CharField(max_length=20, help_text="Enter Location Reference")
    title = models.CharField(max_length=200, help_text="Enter Location Title")
    description = models.TextField(help_text="Enter Location Description")

    def __str__(self):

        return self.title


class Transaction(models.Model):

    sku = models.CharField(max_length=13,help_text="Enter Product Stock Keeping Unit")
    barcode = models.CharField(max_length=13,help_text="Enter Product Barcode (ISBN, UPC ...)")

    comment = models.TextField(help_text="Enter Product Stock Keeping Unit")

    unitCost = models.FloatField(help_text="Enter Product Unit Cost")

    quantity = models.FloatField(help_text="Enter Product Quantity")

    product = models.ForeignKey('Product')

    date = models.DateField(null=True, blank=True)

    REASONS = (
        ('ns', 'New Stock'),
        ('ur', 'Usable Return'),
        ('nr', 'Unusable Return'),
    )


    reason = models.CharField(max_length=2, choices=REASONS, blank=True, default='ns', help_text='Reason for transaction')

    def __str__(self):

        return 'Transaction :  %d' % (self.id)

Next, create the migrations and apply them:

python manage.py makemigrations
python manage.py migrate

Adding an Admin Interface

The next thing is to add these models to the admin interface so we can add some data.

Open inventory/admin.py and add:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.contrib import admin

from .models import Product ,Family ,Location ,Transaction  
# Register your models here.

admin.site.register(Product)
admin.site.register(Family)
admin.site.register(Location)
admin.site.register(Transaction)

Next, create a superuser login to be able to access the admin app:

python manage.py createsuperuser 

Enter a username and password when prompted and hit Enter.

Now run your web application with:

python manage.py runserver

Then go to http://127.0.0.1:8000/admin in your browser, log in, and submit some data for the created models.

Adding GraphQL support: Schema and Object Types

To be able to execute GraphQL queries against our web application, we need to add a schema, object types, and a view function which receives GraphQL queries.

Create the app schema.py

Create inventory/schema.py. We first create a subclass of DjangoObjectType for each model we want to query with GraphQL:

import graphene

from graphene_django.types import DjangoObjectType

from .models import Family , Location , Product , Transaction 

class FamilyType(DjangoObjectType):
    class Meta:
        model = Family 

class LocationType(DjangoObjectType):
    class Meta:
        model = Location 

class ProductType(DjangoObjectType):
    class Meta:
        model = Product 

class TransactionType(DjangoObjectType):
    class Meta:
        model = Transaction

Then we create an abstract query, a subclass of AbstractType. It's abstract because it's an app-level query. For each app you have, you need to create an app-level abstract query and then combine all the abstract queries into a concrete project-level query.

You need to create a graphene.List field for each DjangoObjectType, then create a resolve_xxx() method for each Query member:

class Query(graphene.AbstractType):
    all_families = graphene.List(FamilyType)
    all_locations = graphene.List(LocationType)
    all_products = graphene.List(ProductType)
    all_transactions = graphene.List(TransactionType)

    def resolve_all_families(self, args, context, info):
        return Family.objects.all()

    def resolve_all_locations(self, args, context, info):
        return Location.objects.all()

    def resolve_all_products(self, args, context, info):
        return Product.objects.all()

    def resolve_all_transactions(self, args, context, info):
        return Transaction.objects.all()

Create the project-level schema.py

Next, create a project-level Query.

Create a project-level schema.py file then add:

import graphene

import inventory.schema 


class Query(inventory.schema.Query, graphene.ObjectType):
    # This class extends all abstract apps level Queries and graphene.ObjectType
    pass

schema = graphene.Schema(query=Query)

So we first create a Query class which extends all the abstract queries as well as ObjectType, then we create a graphene.Schema object which takes the Query class as a parameter.

Now we need to add a GRAPHENE config object in settings.py:

GRAPHENE = {
    'SCHEMA': 'product_inventory_manager.schema.schema'
} 

Create the GraphQL view

With GraphQL you don't need multiple endpoints, just one, so let's create it.

Open urls.py then add:

from django.conf.urls import url
from django.contrib import admin

from graphene_django.views import GraphQLView

from product_inventory_manager.schema import schema

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^graphql', GraphQLView.as_view(graphiql=True)),
]

GraphiQL provides the user interface for testing GraphQL queries; to enable it, you just set the graphiql parameter to True: graphiql=True

Serving the app and testing GraphQL

Now you are ready to test the GraphQL API, so start by serving your Django app:

python manage.py runserver

Then go to localhost:8000/graphql and run some queries:

query {
allProducts {
    id
    sku
}
}   

You should get something like the following, depending on the data you have:

{
"data": {
    "allProducts": [
    {
        "id": "1",
        "sku": "Product001"
    }
    ]
}
}   

You can experiment with other models and you can also add fields, but how do you know the name of the query? It's simple: just take the name of the field you created in the abstract query and transform it to camel case.

For example:

all_families = graphene.List(FamilyType) => allFamilies

all_locations = graphene.List(LocationType) => allLocations

all_products = graphene.List(ProductType) => allProducts

all_transactions = graphene.List(TransactionType) => allTransactions

Then for each query, specify the model fields you want to retrieve.

You can also query relationships.

Suppose we want all families with their products; you just need to tell GraphQL what you need:

query {
allFamilies {
    id
    reference 
    productSet {
        id
        sku 
    }
}
}

In my case I got:

{
"data": {
    "allFamilies": [
    {
        "id": "1",
        "reference": "FM001",
        "productSet": [
        {
            "id": "1",
            "sku": "Product001"
        }
        ]
    },
    {
        "id": "2",
        "reference": "FM001",
        "productSet": []
    }
    ]
}
}

Now what if you need the parent family and location of each product? That's also easily doable with GraphQL:

query {
    allProducts {
        id
        sku 
        family {
            id
        }
        location {
            id
        }

    }
}

Querying for single items

We have seen how to query all items, but what if we need just one item, for example by its id? How can we achieve that?

Go back to your abstract query in the app's schema.py file, then update it to be able to query for a single product.

Add:

product = graphene.Field(ProductType,id=graphene.Int())

Then add a resolve_xxx() method:

def resolve_product(self, args, context, info):
    id = args.get('id')

    if id is not None:
        return Product.objects.get(pk=id)

    return None

Now you can query for a single product by its id:

query {
product(id: 1) {
    sku
    barcode
}
}

In the same way, you can add support for getting single families, locations, and transactions.
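
For example, a sketch of the family case, following the same pattern as the product field above (added to the app-level Query class):

family = graphene.Field(FamilyType, id=graphene.Int())

def resolve_family(self, args, context, info):
    id = args.get('id')

    if id is not None:
        return Family.objects.get(pk=id)

    return None

which can then be queried with:

query {
family(id: 1) {
    reference
    title
}
}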

Conclusion


GraphQL is a very powerful technology for building Web APIs, and thanks to Django Graphene you can easily add GraphQL support to your Django project.

You can find the code in this GitHub repository.

23 Jun 2017 5:00am GMT

Planet Twisted

Hynek Schlawack: Sharing Your Labor of Love: PyPI Quick and Dirty

A completely incomplete guide to packaging a Python module and sharing it with the world on PyPI.

23 Jun 2017 12:00am GMT

22 Jun 2017

Django community aggregator: Community blog posts

Setup Git & A Github Repo

Git is a version control syste...

22 Jun 2017 10:04pm GMT

21 Jun 2017

Planet Twisted

Itamar Turner-Trauring: The bad reasons you're forced to work long hours

Working long hours is unproductive, unhealthy, and unfortunately common. I strongly believe that working fewer hours is good for you and your employer, yet many companies and managers force you to work long hours, even as it decreases worker productivity.

So why do they do it? Let's go over some of the reasons.

Leading by example

Some managers simply don't understand that working long hours is counter-productive. Consider the founders of a startup. They love their job: the startup is their baby, and they are happy to work long hours to ensure it succeeds. That may well be inefficient and counter-productive, but they won't necessarily realize this.

The employees that join afterwards take their cue from the founders: if the boss is working long hours, it's hard not to do so yourself. And since the founders love what they're building, it never occurs to them that long hours might not be for everyone, or even might be an outright negative for the company. Similar situations can also happen in larger organizations, when a team lead or manager puts in long hours out of a sense of dedication.

A sense of entitlement

A less tractable problem is a manager who thinks they own your life. Jason Fried describes this as a Managerial Entitlement Complex: the idea that if someone is paying you a salary they are entitled to every minute of your time.

In this situation the problem isn't ignorance on the part of your manager. The problem is that your manager doesn't care about you as a human being or even as an employee. You're a resource provided by the human resources department, just like the office printer is provided by the IT department.

Control beats profits

Another problem is the fact that working hours are easy to measure, and therefore easy to control. When managers or companies see their employees as a cost center (and at least in the US the corporate culture is heavily biased against labor costs) the temptation to "control costs" by measuring and maximizing hours can be hard to resist.

Of course, this results in less output, and so it is not rational behavior if the goal is to maximize profits. Would companies actually choose labor control over productivity? Evidence from other industries suggests they would.

Up until the 1970s many farms in California forced their workers to use a short hoe, which involved bending over continuously. The result was a high rate of worker injuries. Employers liked the short hoe because they could easily control farm workers' labor: because of the way the workers bent over when using the short hoe it was easy to see whether or not they were working.

After a series of strikes and lawsuits by the United Farm Workers the short hoe was banned. The result? According to the CEO of a large lettuce grower, productivity actually went up.

(I learned this story from the book Solving the Climate Crisis through Social Change, by Gar W. Lipow. The book includes a number of other examples and further references.)

Bad incentives, or Cover Your Ass

Bad incentives in one part of the company can result in long hours in another. Consider this scenario: the sales team, which is paid on commission, has promised a customer to deliver a series of features in a month. Unfortunately implementing those features will take 6 months. The sales team doesn't care: they're only paid for sales, and delivering the product isn't their problem.

Now put yourself in the place of the tech lead or manager whose team has to implement those features. You can try to push back against the sales team's promises, but in many companies that will result in being seen as "not a team player." And when the project fails you and your team will be blamed by sales for not delivering on the company's promises.

When you've been set up to fail, your primary goal is to demonstrate that the inevitable failure was not your fault. The obvious and perhaps only way for you to do this is to have your team work long hours, a visible demonstration of commitment and effort. "We did everything we could! We worked 12 hour days, 6 days a week but we just couldn't do it."

Notice that in this scenario the manager may be good at their job; the issue is the organization as a whole.

Hero syndrome

Hero syndrome is another organizational failure that can cause long working hours. Imagine you're an engineer working for a startup that's going through a growth spurt. Servers keep going down under load, the architecture isn't keeping up, and there are lots of other growing pains. One evening the whole system goes down, and you stay up until 4AM bringing it back up. At the next company event you are lauded as a hero for saving the day... but no one devotes any resources to fixing the underlying problems.

The result is hero syndrome: the organization rewards those who save the day at the last minute, rather than work that prevents problems in the first place. And so they end up with a cycle of failure. Tired engineers making mistakes, lack of resources to build good infrastructure, and rewards for engineers who work long hours to try to duct tape a structure that is falling apart.

Avoiding bad companies

Working long hours is not productive. But since many companies don't understand this, when you're looking for a new job be on the lookout for the problems I described above. And if you'd like more tips to help you work a sane, productive workweek, check out my email course, the Programmer's Guide to a Sane Workweek.

21 Jun 2017 4:00am GMT

19 Jun 2017

Planet Twisted

Hynek Schlawack: Why Your Dockerized Application Isn’t Receiving Signals

Proper cleanup when terminating your application isn't less important when it's running inside of a Docker container. Although it only comes down to making sure signals reach your application and handling them, there's a bunch of things that can go wrong.

19 Jun 2017 12:00am GMT

02 May 2017

Planet Plone

Reinout van Rees: HTTPS behind your reverse proxy

We have a setup that looks (simplified) like this:

https://abload.de/img/screenshot2017-05-02a69bku.png

HTTP/HTTPS connections from browsers ("the green cloud") go to two reverse proxy servers on the outer border of our network. Almost everything is https.

Nginx then proxies the requests towards the actual webservers. Those webservers also have nginx on them, which proxies the request to the actual django site running on some port (8000, 5010, etc.).

Until recently, the https connection was only between the browser and the main proxies. Internally inside our own network, traffic was http-only. In a sense, that is OK as you've got security and a firewall and so on. But... actually it is not OK. At least, not OK enough.

You cannot trust in only a solid outer wall. You need defense in depth. Network segmentation, restricted access. So ideally the traffic between the main proxies (in the outer "wall") to the webservers inside it should also be encrypted, for instance. Now, how to do this?

It turned out to be pretty easy, but figuring it out took some time. Likewise finding the right terminology to google with :-)

  • The main proxies (nginx) terminate the https connection. Most of the ssl certificates that we use are wildcard certificates. For example:

    server {
      listen 443;
      server_name sitename.example.org;
      location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://internal-server-name;
        proxy_http_version 1.1;
      }
      ssl on;
      ....
      ssl_certificate /etc/ssl/certs/wildcard.example.org.pem;
      ssl_certificate_key /etc/ssl/private/wildcard.example.org.key;
    }
    
  • Using https instead of http towards the internal webserver is easy. Just use https instead of http :-) Change the proxy_pass line:

    proxy_pass https://internal-server-name;
    

    The google term here is re-encrypting, btw.

  • The internal webserver has to allow an https connection. This is where we initially made it too hard for ourselves. We copied the relevant wildcard certificate to the webserver and changed the site to use the certificate and to listen on 443, basically just like on the main proxy.

    A big drawback is that you need to copy the certificate all over the place. Not very secure. Not a good idea. And we generate/deploy the nginx config for on the webserver from within our django project. So every django project would need to know the filesystem location and name of those certificates... Bah.

  • "What about not being so strict on the proxy? Cannot we tell nginx to omit a strict check on the certificate?" After a while I found the proxy_ssl_verify nginx setting. Bingo.

    Only, you need 1.7.0 for it. The main proxies are still on ubuntu 14.04, which has an older nginx. But wait: the default is "off". Which means that nginx doesn't bother checking certificates when proxying! A bit of experimenting showed that nginx really didn't mind which certificate was used on the webserver! Nice.

  • So any certificate is fine, really. I did my experimenting with ubuntu's default "snakeoil" self-signed certificate (/etc/ssl/certs/ssl-cert-snakeoil.pem). Install the ssl-cert package if it isn't there.

    On the webserver, the config thus looks like this:

    server {
        listen 443;
        # ^^^ Yes, we're running on https internally, too.
        server_name sitename.example.org;
        ssl on;
        ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
        ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    
        ...
    }
    

    An advantage: the django site's setup doesn't need to know about specific certificate names, it can just use the basic certificate that's always there on ubuntu.

  • Now what about that "snakeoil" certificate? Isn't it some dummy certificate that is the same on every ubuntu install? If it is always the same certificate, you can still sniff and decrypt the internal https traffic almost as easily as plain http traffic...

    No it isn't. I verified it by uninstalling/purging the ssl-cert package and then re-installing it: the certificate changes. The snakeoil certificate is generated fresh when installing the package. So every server has its own self-signed certificate.

    You can generate a fresh certificate easily, for instance when you copied a server from an existing virtual machine template:

    $ sudo make-ssl-cert generate-default-snakeoil --force-overwrite
    

    As long as the only goal is to encrypt the https traffic between the main proxy and an internal webserver, the certificate is of course fine.

Summary: nginx doesn't check the certificate when proxying. So terminating the ssl connection on a main nginx proxy and then re-encrypting it (https) to backend webservers which use the simple default snakeoil certificate is a simple workable solution. And a solution that is a big improvement over plain http traffic!
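
If you want to double-check a backend directly, bypassing the proxy, a quick Python sketch with the requests library behaves the same way as nginx's default: verify=False skips certificate verification, so the self-signed snakeoil certificate is accepted. The hostnames below are the placeholders from the config above.

import requests

# Talk to the backend over https without verifying its self-signed certificate,
# and send the public Host header so nginx on the backend picks the right site.
response = requests.get(
    "https://internal-server-name/",
    headers={"Host": "sitename.example.org"},
    verify=False,
)
print(response.status_code)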

02 May 2017 1:13pm GMT

24 Apr 2017

Planet Plone

eGenix: PyDDF Python Spring Sprint 2017

The following is an announcement (originally posted in German) for a Python sprint in Düsseldorf, Germany.

Announcement

PyDDF Python Spring Sprint 2017 in
Düsseldorf


Saturday, 06.05.2017, 10:00-18:00
Sunday, 07.05.2017, 10:00-18:00

trivago GmbH, Karl-Arnold-Platz 1A, 40474 Düsseldorf

Information

The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend in May, with the kind support of trivago GmbH.

The sprint takes place on the weekend of 6/7 May 2017 at the trivago office at Karl-Arnold-Platz 1A (not at Bennigsen-Platz 1).

We have the following topic areas in mind as suggestions:

Openpyxl is a Python library for reading and writing Excel 2010+ files.

Charlie Clark is a co-maintainer of the package.

Telegram is a chat application used by many people. Telegram supports registering so-called bots - small programs that can be controlled from the chat, for example to retrieve information.

During the sprint we want to try to write a Telegram bot in Python.

Of course, every participant is welcome to propose further topics, e.g.

Registration and further information

Everything else, including registration, can be found on the sprint page:

Participants should also sign up on the PyDDF mailing list, since that is where we coordinate:

About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, gives a good overview of past talks.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-Andre Lemburg, eGenix.com

24 Apr 2017 11:00am GMT

03 Apr 2017

Planet Plone

eGenix: Python Meeting Düsseldorf - 2017-04-05

The following is an announcement (originally posted in German) for a regional user group meeting in Düsseldorf, Germany.

Announcement

The next Python Meeting Düsseldorf takes place on:

05.04.2017, 18:00
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


News

Talks registered so far

Stefan Richthofer
"pytypes"

André Aulich
"Distributing Python web applications as native desktop apps"

Charlie Clark
"Frankenstein - OO composition instead of inheritance"

Further talks are welcome and can still be registered. If interested, please contact info@pyddf.de.

Start time and location

We meet at 18:00 at the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the entrance of the underground car park of the Düsseldorfer Arcaden.

Above the entrance there is a large "Schwimm' in Bilk" logo. Behind the door, turn left to the two elevators and go up to the 2nd floor. The entrance to Room 1 is directly on the left as you come out of the elevator.

>>> Entrance in Google Street View

Introduction

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, gives a good overview of past talks.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:

Program

The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.

Talks can be registered in advance or brought in spontaneously during the meeting. A projector with XGA resolution is available.

To register a (lightning) talk, just send an informal email to info@pyddf.de

Cost contribution

The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access, and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all participants to bring the amount in cash.

Registration

Since we only have seats for about 20 people, we kindly ask you to register by email. This does not create any obligation, but it does make planning easier for us.

To register for the meeting, just send an informal email to info@pyddf.de

Further information

Further information can be found on the meeting's website:

http://pyddf.de/

Have fun!

Marc-Andre Lemburg, eGenix.com

03 Apr 2017 8:00am GMT

11 Feb 2016

Planet TurboGears

Christpher Arndt: Organix Roland JX-3P MIDI Expansion Kit

Foreign visitors: to download the Novation Remote SL template for the Roland JX-3P with the Organix MIDI Upgrade, see the link at the bottom of this post. For my last birthday I treated myself to a Roland JX-3P, including a DT200 programmer (a PG-200 clone). The JX-3P is a 6-voice analog polysynth from 1983 and […]

11 Feb 2016 8:42pm GMT

13 Jan 2016

Planet TurboGears

Christpher Arndt: Anmeldung für das PythonCamp 2016 ab Freitag, 15.1.2016

PythonCamp 2016 - free knowledge exchange all around Python. (The following is an announcement for a Python "un-conference" in Cologne, Germany and is therefore directed at a German-speaking audience.) Dear Python fans, it's that time again: on Friday, 15 January we will open online registration for participants of PythonCamp 2016! The now seventh edition of PythonCamp will once again be […]

13 Jan 2016 3:00pm GMT

02 Nov 2015

Planet TurboGears

Matthew Wilson: Mary Dunbar is the best candidate for Cleveland Heights Council

I'll vote for Mary Dunbar tomorrow in the Cleveland Heights election.

Here's why:

02 Nov 2015 5:14pm GMT

03 Aug 2012

PySoy Blog

Juhani Åhman: YA Update

Managed to partially fix the shading rendering issues with the examples. I reckon the rest of the rendering issues are OpenGL ES related, and not something on the libsoy side.
I don't know OpenGL (ES) very well, so I didn't attempt to fix it any further.

I finished implementing a rudimentary pointer controller in pysoy's Client.
There is a pointer.py example program for testing it. Unfortunately it keeps crashing once in a while.
I reckon the problem is something with soy.atoms.Position. Regardless, the pointer controller works.

I started to work on getting the keyboard controller to work too, and of course mouse buttons for the pointer,
but I got stuck when writing the Python bindings for Genie's events (signals). There's no connect method in pysoy, so maybe that needs to be implemented, or some other solution is needed. I will look into this later.

Plan for this week is to finish documenting bodies, scenes and widgets. I'm about 50% done, and it should be straightforward. Next week I'm finally going to attempt to set up Sphinx and generate readable documentation. I reckon I need to refactor many of the docstrings as well.

03 Aug 2012 12:27pm GMT

10 Jul 2012

PySoy Blog

Mayank Singh: Mid-term and dualshock 3

Now that the SoC mid-term has arrived, here's a bit of an update about what I have done so far. The wiimote xinput driver IR update is almost done. Though, just as can be said about any piece of software, it's never fully complete.
I also corrected the code for Sphere in the libsoy repository to render an actual sphere.
For now I have started on integrating the DualShock 3 controller. I am currently studying the code given here: http://www.pabr.org/sixlinux/sixlinux.en.html and trying to understand how the DualShock works. I also need to write a controller class to be able to grab and move objects around without help from the physics engine.

10 Jul 2012 3:00pm GMT

04 Jul 2012

PySoy Blog

Juhani Åhman: Weeks 5-7 update

I have mostly finished writing unit tests for the atoms now.
I didn't write tests for Morphs, though, since those still seem to be a work in progress.
However, I did encounter a rare memory corruption bug that I'm unable to fix at this point,
because I don't know how to debug it properly.
I can't find the location where the error occurs.

I'm going to spend the rest of this week writing doctests and hopefully getting more examples to work.

04 Jul 2012 9:04am GMT

10 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: King William's Town railway station

Yesterday morning I had to go to the station in KWT to pick up our reserved bus tickets for the Christmas holidays in Cape Town. The station itself has had no train service since December for cost reasons, but Translux and co. - the long-distance bus companies - have their offices there.






© benste CC NC SA

10 Nov 2011 10:57am GMT

09 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein

Nobody is worried about something like this - you just drive through by car, and in the city, near Gnobie: "nah, it only gets dangerous once the fire brigade is there" - 30 minutes later, on the way back, the fire brigade was there.




© benste CC NC SA

09 Nov 2011 8:25pm GMT

feedPlanet Zope.org

Updated MiniPlanet, now with meta-feed

My MiniPlanet Zope product had been working steadily and stably for some years, when suddenly a user request came along: would it be possible to get a feed of all the items in a MiniPlanet? With this update it became possible. MiniPlanet is an old-styl...

09 Nov 2011 9:41am GMT

08 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Brai Party

Brai (braai) = a barbecue evening or the like.

They would like a technician's help mending their SpeakOn / jack plug splitter connections...

The ladies - the "mamas" of the settlement - during the official opening speech

Even though fewer people came than expected: loud music and lots of people ...

And of course a fire with real wood for grilling.

© benste CC NC SA

08 Nov 2011 2:30pm GMT

07 Nov 2011

feedPlanet Zope.org

Welcome to Betabug Sirius

It has been quite some time since I announced that I'd be working as a freelancer. Lots of stuff had to be done in that time, but finally things are ready. I've founded my own little company and set up a small website: Welcome to Betabug Sirius!

07 Nov 2011 9:26am GMT

03 Nov 2011

feedPlanet Zope.org

Assertion helper for zope.testbrowser and unittest

zope.testbrowser is a valuable tool for integration tests. Historically, the Zope community used to write quite a lot of doctests, but we at gocept have found them to be rather clumsy and too often yielding neither good tests nor good documentation. That's why we don't use doctest much anymore, and prefer plain unittest.TestCases instead. However, doctest has one very nice feature, ellipsis matching, that is really helpful for checking HTML output, since you can only make assertions about the parts that interest you. For example, given this kind of page:

>>> print browser.contents
<html>
  <head>
    <title>Simple Page</title>
  </head>
  <body>
    <h1>Simple Page</h1>
  </body>
</html>

If all you're interested in is that the <h1> is rendered properly, you can simply say:

>>> print browser.contents
<...<h1>Simple Page</h1>...

We've now ported this functionality to unittest, as assertEllipsis, in gocept.testing. Some examples:

self.assertEllipsis('...bar...', 'foo bar qux')
# -> nothing happens

self.assertEllipsis('foo', 'bar')
# -> AssertionError: Differences (ndiff with -expected +actual):
     - foo
     + bar

self.assertNotEllipsis('foo', 'foo')
# -> AssertionError: "Value unexpectedly matches expression 'foo'."

To use it, inherit from gocept.testing.assertion.Ellipsis in addition to unittest.TestCase.
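
For instance, a minimal sketch of such a test case might look like this (the render_simple_page helper is invented here just to give the assertion something to check):

import unittest

import gocept.testing.assertion


def render_simple_page():
    # Stand-in for whatever view or template produces the HTML under test.
    return ('<html><head><title>Simple Page</title></head>'
            '<body><h1>Simple Page</h1></body></html>')


class SimplePageTest(gocept.testing.assertion.Ellipsis, unittest.TestCase):

    def test_heading_is_rendered(self):
        # Only the part between the ellipses needs to match.
        self.assertEllipsis(
            '<...<h1>Simple Page</h1>...', render_simple_page())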


03 Nov 2011 7:19am GMT

19 Nov 2010

feedPlanet CherryPy

Robert Brewer: logging.statistics

Statistics about program operation are an invaluable monitoring and debugging tool. How many requests are being handled per second, how much of various resources are in use, how long we've been up. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. It would be nice if we had 1) a centralized place for gathering statistical performance data, 2) a system for extrapolating that data into more useful information, and 3) a method of serving that information to both human investigators and monitoring software. I've got a proposal. Let's examine each of those points in more detail.

Data Gathering

Just as Python's logging module provides a common importable for gathering and sending messages, statistics need a similar mechanism, and one that does not require each package which wishes to collect stats to import a third-party module. Therefore, we choose to re-use the logging module by adding a statistics object to it.

That logging.statistics object is a nested dict:

import logging
if not hasattr(logging, 'statistics'): logging.statistics = {}

It is not a custom class, because that would 1) require apps to import a third-party module in order to participate, 2) inhibit innovation in extrapolation approaches and in reporting tools, and 3) be slow. There are, however, some specifications regarding the structure of the dict.

    {
   +----"SQLAlchemy": {
   |        "Inserts": 4389745,
   |        "Inserts per Second":
   |            lambda s: s["Inserts"] / (time() - s["Start"]),
   |  C +---"Table Statistics": {
   |  o |        "widgets": {-----------+
 N |  l |            "Rows": 1.3M,      | Record
 a |  l |            "Inserts": 400,    |
 m |  e |        },---------------------+
 e |  c |        "froobles": {
 s |  t |            "Rows": 7845,
 p |  i |            "Inserts": 0,
 a |  o |        },
 c |  n +---},
 e |        "Slow Queries":
   |            [{"Query": "SELECT * FROM widgets;",
   |              "Processing Time": 47.840923343,
   |              },
   |             ],
   +----},
    }

The logging.statistics dict has strictly 4 levels. The topmost level is nothing more than a set of names to introduce modularity. If SQLAlchemy wanted to participate, it might populate the item logging.statistics['SQLAlchemy'], whose value would be a second-layer dict we call a "namespace". Namespaces help multiple emitters to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as 'SQLAlchemy ORM').

Each namespace, then, is a dict of named statistical values, such as 'Requests/sec' or 'Uptime'. You should choose names which will look good on a report: spaces and capitalization are just fine.

In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a "collection". For example, the CherryPy StatsTool keeps track of what each worker thread is doing (or has most recently done) in a 'Worker Threads' collection, where each key is a thread ID; each value in the subdict MUST be a fourth dict (whew!) of statistical data about each thread. We call each subdict in the collection a "record". Similarly, the StatsTool also keeps a list of slow queries, where each record contains data about each slow query, in order.
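
To make the shape of the dict concrete, here is a small sketch of how a library might populate a namespace with scalars and one collection of records (the 'Acme DB' name and its keys are invented purely for illustration):

import logging
import time

if not hasattr(logging, 'statistics'):
    logging.statistics = {}

# Top level: a name that introduces modularity; its value is the namespace.
ns = logging.statistics.setdefault('Acme DB', {})
ns['Start'] = time.time()
ns['Inserts'] = 0

# A collection: a third-layer dict whose values are records, i.e.
# fourth-layer dicts of statistical data, one per table.
tables = ns.setdefault('Table Statistics', {})
tables['widgets'] = {'Rows': 0, 'Inserts': 0}
tables['froobles'] = {'Rows': 0, 'Inserts': 0}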

Values in a namespace or record may also be functions, which brings us to:

Extrapolation

def extrapolate_statistics(scope):
    """Return an extrapolated copy of the given scope."""
    c = {}
    for k, v in scope.items():
        if isinstance(v, dict):
            v = extrapolate_statistics(v)
        elif isinstance(v, (list, tuple)):
            v = [extrapolate_statistics(record) for record in v]
        elif callable(v):
            v = v(scope)
        c[k] = v
    return c

The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. That requires us to minimize I/O, for example, but in Python it also means we need to minimize function calls. So when you are designing your namespace and record values, try to insert the most basic scalar values you already have on hand.

When it comes time to report on the gathered data, however, we usually have much more freedom in what we can calculate. Therefore, whenever reporting tools fetch the contents of logging.statistics for reporting, they first call extrapolate_statistics (passing the whole statistics dict as the only argument). This makes a deep copy of the statistics dict so that the reporting tool can both iterate over it and even change it without harming the original. But it also expands any functions in the dict by calling them. For example, you might have a 'Current Time' entry in the namespace with the value "lambda scope: time.time()". The "scope" parameter is the current namespace dict (or record, if we're currently expanding one of those instead), allowing you access to existing static entries. If you're truly evil, you can even modify more than one entry at a time.

However, don't try to calculate an entry and then use its value in further extrapolations; the order in which the functions are called is not guaranteed. This can lead to a certain amount of duplicated work (or a redesign of your schema), but that's better than complicating the spec.
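
As a small illustration, using the extrapolate_statistics function above and an invented 'My App' namespace:

import time

stats = {
    'My App': {
        'Start Time': time.time(),
        # Each function receives the enclosing namespace (or record) as
        # 'scope', so it can read existing static entries.
        'Current Time': lambda scope: time.time(),
        'Uptime': lambda scope: time.time() - scope['Start Time'],
    },
}

report = extrapolate_statistics(stats)
# report['My App']['Uptime'] is now a plain number; the original stats
# dict (including its lambdas) is left untouched.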

After the whole thing has been extrapolated, it's time for:

Reporting

A reporting tool would grab the logging.statistics dict, extrapolate it all, and then transform it to (for example) HTML for easy viewing, or JSON for processing by Nagios etc (and because JSON will be a popular output format, you should seriously consider using Python's time module for datetimes and arithmetic, not the datetime module). Each namespace might get its own header and attribute table, plus an extra table for each collection. This is NOT part of the statistics specification; other tools can format how they like.
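
A minimal reporting hook along those lines might look like the following sketch, which reuses the extrapolate_statistics function above and picks JSON as one possible output format:

import json
import logging

def statistics_report():
    # Expand every callable first, then serialize the copy for a
    # dashboard or a monitoring tool such as Nagios.
    stats = extrapolate_statistics(getattr(logging, 'statistics', {}))
    return json.dumps(stats, indent=2, default=str)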

Turning Collection Off

It is recommended each namespace have an "Enabled" item which, if False, stops collection (but not reporting) of statistical data. Applications SHOULD provide controls to pause and resume collection by setting these entries to False or True, if present.

Usage

    import logging
    import time
    # Initialize the repository
    if not hasattr(logging, 'statistics'): logging.statistics = {}
    # Initialize my namespace
    mystats = logging.statistics.setdefault('My Stuff', {})
    # Initialize my namespace's scalars and collections
    mystats.update({
        'Enabled': True,
        'Start Time': time.time(),
        'Important Events': 0,
        'Events/Second': lambda s: (
            (s['Important Events'] / (time.time() - s['Start Time']))),
        })
    ...
    for event in events:
        ...
        # Collect stats
        if mystats.get('Enabled', False):
            mystats['Important Events'] += 1

Original post blogged on b2evolution.

19 Nov 2010 7:08am GMT

12 Nov 2010

feedPlanet CherryPy

Kevin Dangoor: Paver is now on GitHub, thanks to Almad

Paver, the project scripting tool for Python, has just moved to GitHub thanks to Almad. Almad has stepped forward and offered to properly bring Paver into the second decade of the 21st century (doesn't have the same ring to it as bringing something into the 21st century, does it? :)

Seriously, though, Paver reached the point where it was good enough for me and did what I wanted (and, apparently, what a good number of other people wanted as well). Almad has some thoughts on where the project should go next, and I'm looking forward to hearing more about them. Sign up for the Google group to see where Paver is going next.

12 Nov 2010 3:11am GMT

09 Nov 2010

feedPlanet CherryPy

Kevin Dangoor: Paver: project that works, has users, needs a leader

Paver is a Python project scripting tool that I initially created in 2007 to automate a whole bunch of tasks around projects that I was working on. It knows about setuptools and distutils, and it has some ideas on handling documentation with example code. It also has users who occasionally like to send in patches. The latest release has had more than 3700 downloads on PyPI.

Paver hasn't needed a lot of work, because it does what it says on the tin: helps you automate project tasks. Sure, there's always more that one could do. But, there isn't more that's required for it to be a useful tool, day-to-day.

Here's the point of my post: Paver is in danger of being abandoned. At this point, everything significant that I am doing is in JavaScript, not Python. The email and patch traffic is low, but it's still too much for someone that's not even actively using the tool any more.

If you're a Paver user and either:

1. want to take the project in fanciful new directions or,

2. want to keep the project humming along with a new .x release every now and then

please let me know.

09 Nov 2010 7:44pm GMT