30 Jun 2015

Planet Python

Graham Dumpleton: Proxying to a Python web application running in Docker.

I have seen a few questions of late being posed about how to set up Apache to proxy to a Python web application running in a Docker container. The questions seem to come from people who have previously run Apache/mod_wsgi on the host, with potentially multiple Python web applications, and are now making use of Docker to break out the individual web applications into separate containers.

30 Jun 2015 8:48pm GMT

Python Software Foundation: CSA goes to PSF Brochure Creators

RESOLVED, that the Python Software Foundation award Armin Stross-Radschinski and Jan Ulrich Hasecke the 1st Qtr 2015 PSF Community Service Award for their work on creating the PSF Python Brochure.

For the last several years, a dedicated team has toiled in obscurity on a task they knew to be important for the future of a programming language they loved, but at the same time, one that many thought would be a fool's errand and would never pay off. These intrepid visionaries kept going, through thick and thin; through difficulties getting stories, legal permissions, and sponsors; through naysayers and those who said, again and again, that it was useless, since winter is coming (or something similar); through lions, and tigers and . . . ! Ultimately, they produced (drumroll, please) the PSF Brochure!
All kidding aside, the PSF brochure took an enormous amount of work and has been a huge success. It stands as a real-world ambassador for Python, for which we should all be grateful, and of which we should all be aware and proud! The next time one of your relatives, or friend of a friend, or a new acquaintance asks "so why is this open source language you're spending so much time on such a big deal?" (see fn.* below), you needn't break a sweat explaining; just hand them the brochure.
And beyond saving individual Pythonistas a lot of time and effort, the brochure, more importantly, conveys to "CIOs and chief developers, scientists and programmers, university lecturers, teachers and students, customers, clients, managers and employees" the benefits, functions, uses, applications, advantages, features, potential, and ease of using Python.
Armin worked on the design and layout of the brochure and managed the visual aspects of the project: getting the sponsor ads into the brochure, managing the print runs, the project support website, the ordering and payment systems, and finally all the shipping of the brochures to various conferences and user groups around the world.
Jan Ulrich was the main editor of the brochure content and worked with the sponsor story authors to create interesting stories. He also wrote the editorial parts of the brochure: the intro and the import success sections.
They both also helped with finding good success stories and sponsors, a task which took more time and effort than originally anticipated. According to PSF Director, Marc-Andre Lemburg, who headed up the project,
"Armin and Jan Ulrich both put a huge amount of work into the creation of the brochure. Armin on the visual and production side, Jan Ulrich on the editorial and content side. Without their efforts and passion, we would not have succeeded running this four-year project to completion."
You can find more information about the project on the wiki page, the support website, and by reading previous posts to this blog: PSF Brochure, Brochure Sold Out.
footnote*: a real question really asked by real relatives!
I would love to hear from readers. Please send feedback, comments, or blog ideas to me at msushi@gnosis.cx.

30 Jun 2015 8:11pm GMT

Mauveweb: Pygame Zero in MagPi

Pygame Zero has been featured in this month's MagPi, the official Raspberry Pi Magazine. There's a double page spread including an interview with me:

MagPi issue 35 Page 8 MagPi issue 35 Page 9

Download the full PDF of Issue 35 here (it is CC BY-NC-SA 3.0). The article is on pages 8 and 9.

30 Jun 2015 5:20pm GMT

Omaha Python Users Group: July 15 Meeting Details

Topic/Speaker - "Integrating Python Into Other Code Types" / Adam Shaver
It will cover direct C/C++ integration, use of Boost for integration, and use of an Enterprise Service Bus (ESB) - Zato.io - to integrate Python into the workflow.

Location - Alley Poyner Macchietto Architecture Office in the Tip Top Building at 1516 Cuming Street.

Meeting starts at 6:30 pm, Wednesday, 7/15/2015

30 Jun 2015 2:19pm GMT

Europython: EuroPython 2015: Call for On-site Volunteers

EuroPython is organized and run by volunteers from the Python community, but we're only a few and we will need more help to make the conference run smoothly.

We need your help!

We will need help with the conference and registration desk, giving out the swag bags and t-shirts, session chairing, entrance control, set up and tear down, etc.

Perks for Volunteers

In addition to endless fame and glory as an official EuroPython Volunteer, we have also added a few real-life perks for you:

Register as Volunteer

Please see our EuroPython Volunteers page for details and the registration form:


If you have questions, please write to our help desk at helpdesk@europython.eu.

Hope to see you in Bilbao :-)

Enjoy,
-
EuroPython 2015 Team

30 Jun 2015 11:02am GMT

"Menno's Musings": IMAPClient 0.13

I'm chuffed to announce that IMAPClient 0.13 is out!

Here's what's new:

See the NEWS.rst file and manual for more details.

IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.
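Basic usage follows the pattern below (a quick sketch; the host, credentials and folder are placeholders):

from imapclient import IMAPClient

server = IMAPClient('imap.example.com', use_uid=True, ssl=True)
server.login('someone@example.com', 'secret')

# Select a folder and report how many messages it holds.
select_info = server.select_folder('INBOX')
print('%d messages in INBOX' % select_info['EXISTS'])

# Search for messages that are not flagged as deleted.
messages = server.search(['NOT', 'DELETED'])
print("%d messages that aren't deleted" % len(messages))

server.logout()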

I'm also excited to announce that Nylas (formerly Inbox) has now employed me to work on IMAPClient part time. There should be a significant uptick in the development of IMAPClient.

The next major version of IMAPClient will be 1.0.0, and will be primarily focussed on enhancing TLS/SSL support.

30 Jun 2015 10:38am GMT

"Menno's Musings": IMAPClient 0.13

I'm chuffed to announce that IMAPClient 0.13 is out!

Here's what's new:

See the NEWS.rst file and manual for more details.

IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.

I'm also excited to announce that Nylas (formerly Inbox) has now employed me to work on IMAPClient part time. There should be a significant uptick in the development of IMAPClient.

The next major version of IMAPClient will be 1.0.0, and will be primarily focussed on enhancing TLS/SSL support.

30 Jun 2015 10:38am GMT

Nicola Iarocci: Cerberus 0.9 has been released

A few days ago Cerberus 0.9 was released. It includes a bunch of cool new features; let's browse through some of them. Collection rules: first up is the new set of anyof, allof, noneof and oneof validation rules. anyof allows you to list multiple sets of rules to validate against. The field will be considered […]
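As a quick sketch of anyof (the schema and values here are only illustrative):

from cerberus import Validator

# 'prop' must be an integer falling in either the 0-10 or the 100-110 range.
schema = {'prop': {'type': 'integer',
                   'anyof': [{'min': 0, 'max': 10},
                             {'min': 100, 'max': 110}]}}
v = Validator(schema)

print(v.validate({'prop': 5}))    # True: matches the first rule set
print(v.validate({'prop': 105}))  # True: matches the second rule set
print(v.validate({'prop': 50}))   # False: matches neither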

30 Jun 2015 9:16am GMT

Talk Python to Me: #14 Moving from PHP to Python 3 with Patreon

It's uncommon when technology and purpose combine to create something amazing. But that's exactly what's happening here at Patreon. Learn how they are using Python to enable an entirely new type of crowdsourcing for creative endeavours (podcasting, art, open source, and more). In this episode, I speak with Albert Shue from Patreon about their journey of converting patreon.com from PHP to Python 3. You will learn some practical techniques for setting up such a project for success and avoiding some of the biggest risks.

Links from the show:

Patreon: patreon.com
Michael's Campaign: patreon.com/mkennedy
How to write a spelling corrector: norvig.com/spell-correct.html
Rollbar: rollbar.com
Albert on Twitter: @146
Patreon Hiring (1): medium.com/@jackconte/patreon-needs-data-scientists-c667d6fa2b4a
Patreon Hiring (2): patreon.com/careers
Stackoverflow 2015 developer survey: stackoverflow.com/research/developer-survey-2015
IPython Keynote: youtube.com/watch?v=2NSbuKFYyvc
Talk Python T-Shirt: talkpythontome.com/home/shirt
Sponsor: Codeship: codeship.com
Sponsor: Hired: hired.com/talkpythontome

30 Jun 2015 8:00am GMT

29 Jun 2015

Planet Python

Davide Moro: Pip for buildout folks

... or buildout for pip folks.

In this article I'm going to talk about how to manage software (Python) projects with buildout or pip.

What do you mean by project?

A package that contains all the application-specific settings, the database configuration, which packages your project will need and where they live.

Projects should be managed like software if you want to assure the needed quality:

This blog post is not:

Buildout

I've been using buildout for many years and we are still good friends.
Buildout definition (from http://www.buildout.org):

"""
Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later.
"""

With buildout you can build and share reproducible environments, not only for Python based components.

Before buildout (if I remember correctly, the first time I got started with buildout was in 2007, probably during the very first Plone Sorrento sprint) it was a real pain to share a complete and working development environment pointing to the right versions of several repositories, etc. With buildout it became a question of minutes.

From https://pypi.python.org/pypi/mr.developer.
Probably with pip there is less fun because there isn't a funny picture that celebrates it?!

Buildout configuration files are modular and extensible (not only on a per-section basis). There are a lot of buildout recipes; probably the one I prefer is mr.developer (https://pypi.python.org/pypi/mr.developer). It allowed me to fetch different versions of the repositories depending on the buildout profile in use, for example:

You can accomplish this by creating different configurations for different profiles, like this:


[buildout]
...

[sources]
your_plugin = git git@github.com:username/your_plugin.git
...


I don't like calling ./bin/buildout -c [production|devel].cfg with the -c syntax because it is too error prone. I prefer to create a symbolic link to the right buildout profile (called buildout.cfg), so that you perform the same command both in production and during development, always typing:


$ ./bin/buildout


This way you'll avoid nasty errors like launching the wrong profile in production. So use just the plain ./bin/buildout command and live happy.
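For example, assuming the two profiles are named production.cfg and devel.cfg (hypothetical names), the idea is:

# Done once per environment: point buildout.cfg at the right profile.
$ ln -s production.cfg buildout.cfg    # on the production host
$ ln -s devel.cfg buildout.cfg         # on a development checkout

# From then on the command is the same everywhere:
$ ./bin/buildout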

With buildout you can show and freeze all the installed versions of your packages by providing a versions.cfg file.
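A minimal sketch of that mechanism (package names and versions are only illustrative):

# versions.cfg
[versions]
kotti_newsitem = 0.2
kotti_calendar = 0.8.2

# buildout.cfg
[buildout]
extends = versions.cfg
...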

Here you can see my preferred buildout recipes:

Buildout or not, one of the most common needs is the ability to switch from development branches to tags depending on whether you are in development or production mode, and to reproduce the same software later. I can't imagine managing software installations without this quality assurance.

More info: http://www.buildout.org

Pip

Let's see how to create reproducible environments with pip (https://pip.pypa.io/en/latest/), using develop dependencies or tag dependencies for production environments.

Basically you specify your devel requirements in a devel-requirements.txt file (the name doesn't matter), pointing to the develop/master/trunk of your repositories.

There is another file that I call production-requirements.txt (the file name doesn't matter) that is equivalent to the previous one, but:

This way it is quite simple to see which releases are installed in production mode, with no cryptic hash codes.

You can now use production-requirements.txt as a template for generating an easy-to-read requirements.txt. You'll use this file when installing in production.

You can create a regular Makefile if you don't want to repeat yourself, or write scripts if you prefer:

For example, if you are particularly lazy you can create a script that builds your requirements.txt file using production-requirements.txt as a template.
Here is a simple script, just an example, that shows how to build your requirements.txt omitting lines with grep, sed, etc.:

#!/bin/bash

# Install everything listed in the production requirements.
pip install -r production-requirements.txt
# Freeze the environment, drop the project's own package (mip_project)
# and the first two lines of output, and write requirements.txt.
pip freeze -r production-requirements.txt | grep -v mip_project | sed '1,2d' > requirements.txt

When running this script, you should activate another Python environment in order to not pollute the production requirements list with development stuff.

If you want to make your software reusable and as flexible as possible, you can add a regular setup.py module with optional dependencies that you can activate depending on what you need. For example, in devel mode you might want to activate an extra called docs (see -e .[docs] in devel-requirements.txt) with optional Sphinx dependencies, or in production you can install MySQL-specific dependencies (-e .[mysql]).
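To install such an extra directly from a source checkout, the usual pip invocation is:

$ pip install -e ".[docs]"      # editable install with the optional docs dependencies
$ pip install -e ".[mysql]"     # or with the MySQL-specific dependencies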

In the examples below I'll also show how to refer to external requirements files (a URL or a local file).

setup.py

You can define optional extra requirements in your setup.py module.

mysql_requires = [
    'MySQL-python',
]

docs_requires = [
    'Sphinx',
    'docutils',
    'repoze.sphinx.autointerface',
]
...

setup(
    name='mip_project',
    version=version,
    ...
    extras_require={
        'mysql': mysql_requires,
        'docs': docs_requires,
        ...
    },

devel-requirements.txt

Optional extra requirements can be activated using the [] syntax (see -e .[docs]).
You can also include external requirement files or URLs (see -r) and tell pip how to fetch some concrete dependencies (see -e git+...#egg=your_egg).

-r https://github.com/.../.../blob/VERSION/requirements.txt
 
# Kotti
Kotti[development,testing]==VERSION

# devel (not to be added in production)
zest.releaser

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git#egg=kotti_boxes
...

-e .[docs]

production-requirements.txt

The production requirements should point to tags (see @VERSION).

-r https://github.com/Kotti/Kotti/blob/VERSION/requirements.txt
Kotti[development,testing]==VERSION

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@0.1.1#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git@0.1.3#egg=kotti_boxes
...

-e .[mysql]

requirements.txt

The requirements.txt is autogenerated from the production-requirements.txt model file. All the installed versions are appended in alphabetical order at the end of the file; it can be a very long list.
All the tag versions provided in production-requirements.txt are automatically converted to hash values (@VERSION -> @3c1a191...).

Kotti==1.0.0a4

# Third party's eggs
kotti-newsitem==0.2
kotti-calendar==0.8.2
kotti-link==0.1
kotti-navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@3c1a1914901cb33fcedc9801764f2749b4e1df5b#egg=kotti_actions-dev
-e git+https://github.com/truelab/kotti_boxes.git@3730705703ef4e523c566c063171478902645658#egg=kotti_boxes-dev
...

## The following requirements were added by pip freeze:
alembic==0.6.7
appdirs==1.4.0
Babel==1.3
Beaker==1.6.4
... 

Final consideration

Use pip to install Python packages from PyPI.

If you're looking for management of fully integrated cross-platform software stacks, buildout is for you.

With buildout, no Python code is needed unless you are going to write new recipes (the plugin mechanism provided by buildout to add new functionality to your software build; see http://buildout.readthedocs.org/en/latest/docs/recipe.html).

With pip you can also manage cross-platform stacks, but you lose the flexibility of buildout recipes and inheritable configuration files.

Anyway, if you consider buildout too magical, or you just need a way to switch between production and development mode, you can use pip as well.

Links

If you need more info have a look at the following URLs:

Other useful links:

Update 20150629

If you want an example I've created a pip-based project for Kotti CMS (http://kotti.pylonsproject.org):

29 Jun 2015 11:26pm GMT

Amjith Ramanujam: FuzzyFinder - in 10 lines of Python

Introduction:

FuzzyFinder is a popular feature available in decent editors to open files. The idea is to start typing partial strings from the full path and the list of suggestions will be narrowed down to match the desired file.

Examples:

Vim (Ctrl-P)

Sublime Text (Cmd-P)

This is an extremely useful feature and it's quite easy to implement.

Problem Statement:

We have a collection of strings (filenames). We're trying to filter down that collection based on user input. The user input can be partial strings from the filename. Let's walk this through with an example. Here is a collection of filenames:

When the user types 'djm' we are supposed to match 'django_migrations.py' and 'django_admin_log.py'. The simplest route to achieve this is to use regular expressions.

Solutions:

Naive Regex Matching:

Convert 'djm' into 'd.*j.*m' and try to match this regex against every item in the list. Items that match are the possible candidates.
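The original code listings did not survive into this feed, so here is a minimal sketch of that naive approach (the function names below are mine):

import re

def fuzzyfinder_naive(user_input, collection):
    suggestions = []
    pattern = '.*'.join(user_input)   # 'djm' becomes 'd.*j.*m'
    regex = re.compile(pattern)
    for item in collection:
        if regex.search(item):        # keep every item that matches at all
            suggestions.append(item)
    return suggestions

# e.g. fuzzyfinder_naive('djm', ['django_migrations.py', 'django_admin_log.py'])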

This got us the desired results for input 'djm'. But the suggestions are not ranked in any particular order.

In fact, for the second example with user input 'mig' the best possible suggestion 'migrations.py' was listed as the last item in the result.

Ranking based on match position:

We can rank the results based on the position of the first occurrence of the matching character. For user input 'mig' the positions of the matching characters are as follows:

Here's the code:
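(Again a sketch, since the original listing is missing from this feed.)

import re

def fuzzyfinder_by_position(user_input, collection):
    suggestions = []
    pattern = '.*'.join(user_input)
    regex = re.compile(pattern)
    for item in collection:
        match = regex.search(item)
        if match:
            # Rank by the position of the first matching character.
            suggestions.append((match.start(), item))
    return [item for _, item in sorted(suggestions)]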

We made the list of suggestions a list of tuples, where the first item is the position of the match and the second item is the matching filename. When this list is sorted, Python will sort it based on the first item in the tuple and use the second item as a tie breaker. In the final line we use a list comprehension to iterate over the sorted list of tuples and extract just the second item, which is the file name we're interested in.

This got us close to the end result, but as shown in the example, it's not perfect. We see 'main_generator.py' as the first suggestion, but the user wanted 'migrations.py'.

Ranking based on compact match:

When a user starts typing a partial string they will continue to type consecutive letters in an effort to find the exact match. When someone types 'mig' they are looking for 'migrations.py' or 'django_migrations.py', not 'main_generator.py'. The key here is to find the most compact match for the user input.

Once again this is trivial to do in Python. When we match a string against a regular expression, the matched string is available via match.group().

For example, if the input is 'mig', the matching group from the 'collection' defined earlier is as follows:

We can use the length of the captured group as our primary rank and use the starting position as our secondary rank. To do that we add the len(match.group()) as the first item in the tuple, match.start() as the second item in the tuple and the filename itself as the third item in the tuple. Python will sort this list based on first item in the tuple (primary rank), second item as tie-breaker (secondary rank) and the third item as the fall back tie-breaker.
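Sketched out (still with the greedy pattern at this point):

import re

def fuzzyfinder_compact(user_input, collection):
    suggestions = []
    pattern = '.*'.join(user_input)
    regex = re.compile(pattern)
    for item in collection:
        match = regex.search(item)
        if match:
            # Primary rank: length of the matched text (compactness).
            # Secondary rank: match position. Final tie-breaker: the name itself.
            suggestions.append((len(match.group()), match.start(), item))
    return [item for _, _, item in sorted(suggestions)]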

This produces the desired behavior for our input. We're not quite done yet.

Non-Greedy Matching

There is one more subtle corner case that was caught by Daniel Rocco. Consider these two items in the collection ['api_user', 'user_group']. When you enter the word 'user' the ideal suggestion should be ['user_group', 'api_user']. But the actual result is:

Looking at this output, you'll notice that api_user appears before user_group. Digging in a little, it turns out the search 'user' expands to u.*s.*e.*r; notice that user_group has two rs, so the pattern matches user_gr instead of the expected user. The longer match length forces the ranking of this match down, which again seems counterintuitive. This is easy to change by using the non-greedy version of the regex (.*? instead of .*) when building the pattern.
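Putting the pieces together, a sketch of the final version looks like this:

import re

def fuzzyfinder(user_input, collection):
    suggestions = []
    pattern = '.*?'.join(user_input)   # non-greedy: 'user' becomes 'u.*?s.*?e.*?r'
    regex = re.compile(pattern)
    for item in collection:
        match = regex.search(item)
        if match:
            suggestions.append((len(match.group()), match.start(), item))
    return [item for _, _, item in sorted(suggestions)]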

Now that works for all the cases we've outlined. We've just implemented a fuzzy finder in 10 lines of code.

Conclusion:

That was the design process for implementing fuzzy matching for my side project pgcli, which is a REPL for PostgreSQL that can do auto-completion.

I've extracted fuzzyfinder into a stand-alone python package. You can install it via 'pip install fuzzyfinder' and use it in your projects.

Thanks to Micah Zoltu and Daniel Rocco for reviewing the algorithm and fixing the corner cases.

If you found this interesting, you should follow me on twitter.

Epilogue:

When I first started looking into fuzzy matching in Python, I encountered an excellent library called fuzzywuzzy. But the fuzzy matching done by that library is of a different kind. It uses Levenshtein distance to find the closest matching string from a collection, which is a great technique for auto-correcting spelling errors, but it doesn't produce the desired results for matching long names from partial sub-strings.

29 Jun 2015 6:29pm GMT

Mike Driscoll: Python 101: Episode #7 – Exception Handling

I recently recorded the next episode of Python 101. This one is on Exception Handling. I hope you like it:

29 Jun 2015 5:15pm GMT

Django Weblog: Security advisory: simple_tag does not do auto-escaping

As per our documentation, the simple_tag decorator used for creating custom template tags does not run auto-escaping on its contents (up to and including Django 1.8). The team has noticed, however, that this makes it very easy to introduce XSS vulnerabilities when using simple_tag, and we have found examples of vulnerable code in the wild.

For this reason, Django 1.9 will change this behavior to improve security. In the meantime, all users are encouraged to check every usage of simple_tag in their own template tags and ensure they are not vulnerable, as per the instructions in the 1.9 release notes.
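As an illustration of the kind of check to do (a hypothetical greet tag, not taken from the advisory):

from django import template
from django.utils.html import format_html

register = template.Library()

@register.simple_tag
def greet(name):
    # Up to Django 1.8 the return value of a simple_tag is not auto-escaped,
    # so building markup by plain interpolation, e.g.
    #     return '<b>Hello %s</b>' % name
    # allows XSS whenever `name` comes from user input.
    # Escaping explicitly (for example with format_html) avoids the problem:
    return format_html('<b>Hello {}</b>', name)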

29 Jun 2015 4:17pm GMT

Martijn Faassen: Build a better batching UI with Morepath and Jinja2

Introduction

This post is the first in what I hope will be a series on neat things you can do with Morepath. Morepath is a Python web micro framework with some very interesting capabilities. What we'll look at today is what you can do with Morepath's link generation in a server-driven web application. While Morepath is an excellent fit for creating REST APIs, it also works well for server applications. So let's look at how Morepath can help you to create a batching UI.

On the special occasion of this post we also released a new version of Morepath, Morepath 0.11.1!

A batching UI is a UI where you have a larger amount of data available than you want to show to the user at once. You instead partition the data in smaller batches, and you let the user navigate through these batches by clicking a previous and next link. If you have 56 items in total and the batch size is 10, you first see items 0-9. You can then click next to see items 10-19, then items 20-29, and so on until you see the last few items 50-55. Clicking previous will take you backwards again.

In this example, a URL to see a single batch looks like this:

http://example.com/?start=20

to see items 20-29.

http://example.com/

to start at the first batch.

I'm going to highlight the relevant parts of the application here. The complete example project can be found on Github. I have included instructions on how to install the app in the README.rst there.

Model

First we need to define a few model classes to define the application. We are going to go for a fake database of fake persons that we want to batch through.

Here's the Person class:

class Person(object):
    def __init__(self, id, name, address, email):
        self.id = id
        self.name = name
        self.address = address
        self.email = email

We use the neat fake-factory package to create some fake data for our fake database; the fake database is just a Python list:

fake = Faker()

def generate_random_person(id):
    return Person(id, fake.name(), fake.address(), fake.email())

def generate_random_persons(amount):
    return [generate_random_person(id) for id in range(amount)]

person_db = generate_random_persons(56)

So far nothing special. But next we create a special PersonCollection model that represents a batch of persons:

class PersonCollection(object):
    def __init__(self, persons, start):
        self.persons = persons
        if start < 0 or start >= len(persons):
            start = 0
        self.start = start

    def query(self):
        return self.persons[self.start:self.start + BATCH_SIZE]

    def previous(self):
        if self.start == 0:
            return None
        start = self.start - BATCH_SIZE
        if start < 0:
            start = 0
        return PersonCollection(self.persons, start)

    def next(self):
        start = self.start + BATCH_SIZE
        if start >= len(self.persons):
            return None
        return PersonCollection(self.persons, self.start + BATCH_SIZE)

To create an instance of PersonCollection you need two arguments: persons, which is going to be our person_db we created before, and start, which is the start index of the batch.

We define a query method that queries the persons we need from the larger batch, based on start and a global constant, BATCH_SIZE. Here we do this by simply taking a slice. In a real application you'd execute some kind of database query.

We also define previous and next methods. These give back the previous PersonCollection and next PersonCollection. They use the same persons database, but adjust the start of the batch. If there is no previous or next batch as we're at the beginning or the end, these methods return None.

There is nothing directly web related in this code, though of course PersonCollection is there to serve our web application in particular. But as you notice there is absolutely no interaction with request or any other parts of the Morepath API. This makes it easier to reason about this code: you can for instance write unit tests that just test the behavior of these instances without dealing with requests, HTML, etc.
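For example, a unit test along these lines needs no Morepath machinery at all (a sketch, assuming BATCH_SIZE is 10 as in the example above):

def test_person_collection_batching():
    persons = generate_random_persons(56)
    collection = PersonCollection(persons, start=20)
    assert len(collection.query()) == 10
    assert collection.next().start == 30
    assert collection.previous().start == 10
    assert PersonCollection(persons, start=0).previous() is None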

Path

Now we expose these models to the web. We tell Morepath what models are behind what URLs, and how to create URLs to models:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

Let's look at this in more detail:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

This is not a lot of code, but it actually tells Morepath a lot:

  • When you go to the root path / you get the instance returned by the get_person_collection function.
  • This URL takes a request parameter start, for instance ?start=10.
  • This request parameter is optional. If it's not given it defaults to 0.
  • Since the default is a Python int object, Morepath rejects any requests with request parameters that cannot be converted to an integer as a 400 Bad Request. So ?start=11 is legal, but ?start=foo is not.
  • When asked for the link to a PersonCollection instance in Python code, as we'll see soon, Morepath uses this information to reconstruct it.

Now let's look at get_person:

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

This uses a path with a parameter in it, id, which is passed to the get_person function. It explicitly sets the system to expect an int and reject anything else, but we could've used id=0 as a default parameter instead here too. Finally, get_person can return None if the id is not known in our Python list "database". Morepath automatically turns this into a 404 Not Found for you.

View & template for Person

While PersonCollection and Person instances now have a URL, we didn't tell Morepath yet what to do when someone goes there. So for now, these URLs will respond with a 404.

Let's fix this by defining some Morepath views. We'll do a simple view for Person first:

@App.html(model=Person, template='person.jinja2')
def person_default(self, request):
    return {
        'id': self.id,
        'name': self.name,
        'address': self.address,
        'email': self.email
    }

We use the html decorator to indicate that this view delivers data of Content-Type text/html, and that it uses a person.jinja2 template to do so.

The person_default function itself gets a self and a request argument. The self argument is an instance of the model class indicated in the decorator, so a Person instance. The request argument is a WebOb request instance. We give the template the data returned in the dictionary.

The template person.jinja2 looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Morepath batching demo</title>
  </head>
  <body>
    <p>
      Name: {{ name }}<br/>
      Address: {{ address }}<br/>
      Email: {{ email }}<br />
    </p>
  </body>
</html>

Here we use the Jinja2 template language to render the data to HTML. Morepath out of the box does not support Jinja2; it's template language agnostic. But in our example we use the Morepath extension more.jinja2 which integrates Jinja2. Chameleon support is also available in more.chameleon in case you prefer that.

View & template for PersonCollection

Here is the view that exposes PersonCollection:

@App.html(model=PersonCollection, template='person_collection.jinja2')
def person_collection_default(self, request):
    return {
        'persons': self.query(),
        'previous_link': request.link(self.previous()),
        'next_link': request.link(self.next()),
    }

It gives the template the list of persons that is in the current PersonCollection instance so it can show them in a template as we'll see in a moment. It also creates two URLs: previous_link and next_link. These are links to the previous and next batch available, or None if no previous or next batch exists (this is the first or the last batch).

Let's look at the template:

<!DOCTYPE html>
<html>
 <head>
   <title>Morepath batching demo</title>
  </head>
  <body>
    <table>
      <tr>
        <th>Name</th>
        <th>Email</th>
        <th>Address</th>
      </tr>
      {% for person in persons %}
      <tr>
        <td><a href="{{ request.link(person) }}">{{ person.name }}</a></td>
        <td>{{ person.email }}</td>
        <td>{{ person.address }}</td>
      </tr>
      {% endfor %}
    </table>
    {% if previous_link %}
    <a href="{{ previous_link }}">Previous</a>
    {% endif %}
    {% if next_link %}
    <a href="{{ next_link }}">Next</a>
    {% endif %}
  </body>
</html>

A bit more is going on here. First it loops through the persons list to show all the persons in a batch in an HTML table. The name in the table is a link to the person instance; we use request.link() in the template to create this URL.

The template also shows a previous and next link, but only if they're not None, so when there is actually a previous or next batch available.

That's it

And that's it, besides a few details of application setup, which you can find in the complete example project on Github.

There's not much to this code, and that's how it should be. I invite you to compare this approach to a batching UI to what an implementation for another web framework looks like. Do you put the link generation code in the template itself? Or as ad hoc code inside the view functions? How clear and concise and testable is that code compared to what we just did here? Do you give back the right HTTP status codes when things go wrong? Consider also how easy it would be to expand the code to include searching in addition to batching.

Do you want to try out Morepath now? Read the very extensive documentation. I hope to hear from you!

29 Jun 2015 2:48pm GMT

Mike Driscoll: PyDev of the Week: David Beazley

This week we welcome David Beazley (@dabeaz) as our PyDev of the Week! David is the author of the Python Essential Reference and the co-author of the Python Cookbook, Third edition. He also has a blog that I enjoyed when I was first learning Python, although I don't think it's updated much any more. You might find his Python talks of interest though. Let's spend some time getting to know him better!

Can you tell us a little about yourself (hobbies, education, etc):

I've lived in the Chicago area for the past 17 years where I've put down roots with my wife and kids. I'm currently self-employed and spend most of my time working on a variety of Python-related projects including training, consulting, and book writing. Besides coding, I enjoy playing music, working in the shop, and bike riding. I've recently been trying to bike outside throughout the Chicago winter in temperatures down to about -25C. There are a lot of things that aren't quite right about that, but it's a lot of fun in its own weird way. At the very least, it's a nice break from staring at a computer screen all day.

Regarding my education, I've been playing around with computer programming since about the 6th grade. I studied math as an undergraduate, but eventually went on to earn a Ph.D. in computer science. I also taught various computer science topics for about seven years as a university professor.

Why did you start using Python?

At the time I started using Python (1996), I was working on the problem of making it easier for computational physicists to interact with simulation software running on supercomputers. The standard practice had been to submit non-interactive batch jobs to the system, offload the resulting data many hours later, and then try to wrap your brain around what happened using various data analysis tools running on your desktop. The only problem is that none of this really worked-or certainly didn't work nearly as efficiently as it could have (often taking weeks of work to do the most simple kind of experiment). Around 1995, I had been experimenting with the idea of incorporating a scripting language directly into our physics software so that users could interact with it more directly. I was familiar with tools such as MATLAB and IDL and had been trying to make something more custom tailored to our specific problem (molecular dynamics). I had even created a home-grown scripting language as well as a code generation tool that later became known as Swig (yes, that Swig). The idea of creating my own programming language didn't sit so well with my PhD committee and they suggested that I look at some alternatives. So, I had started to explore other options including Tcl, Perl, and Guile. I discovered Python purely by chance after reading a mention of it in an article that Paul Dubois had written in Computers in Physics. As an aside, Paul had been thinking about the exact same kind of problems (making physics software programmable) and was active in the development of many tools that became precursors to NumPy and SciPy.

It didn't take long to sell me on Python. I loved the simple syntax and the interactive REPL was exactly the kind of thing I wanted to have with our physics software. Not only that, the implementation in C was quite clean and easy to work with. One of my earliest projects involved porting Python to run on the Connection Machine 5 and Cray T3D massively parallel systems. It was kind of insane-you could sit there at the interactive Python prompt like you do now, but whatever you typed would execute simultaneously on 1024 CPUs. The thing that really sold people was that my little Python hack could take problems that had previously required 5-6 hours of work and reduce them down to about 4 seconds. Minds were blown. Keep in mind, this was long before the existence of anything like NumPy, SciPy, Pandas, or any of the advanced data analysis tools that Python programmers take for granted now. The idea that you'd put a slow interpreted language on a supercomputer and interact with your data seemed wrong on so many levels of wrong to those with strong opinions on such matters.

What other programming languages do you know and which is your favorite?

I would consider myself to be pretty fluent in C and assembly programming. I have varying amounts of experience with other languages including C++, Objective C, Java, Perl, Tcl, Scheme, ML, PHP, and Javascript although you'd probably find my head buried in a book or on Stack Overflow if I had to do anything useful in those. I recently found myself messing around with Java because my kids have been bugging me about making Minecraft mods. I can get around, but I don't think I'll be seeking employment as a Java programmer anytime in the foreseeable future. I really need to explore a more Pythonic solution for that.

As for a favorite language, that's a hard one. For day-to-day tasks, Python is definitely my tool of choice. However, if I had to pick an all-time favorite language over my whole career, I think it would probably be assembly language. One of my first jobs involved writing device drivers for a graphics card. I also spent a lot of time buried in assembly language learning how to mod and reverse engineer arcade games on my Apple 2. There is a certain raw simplicity and beauty to assembly language. There are also unusual challenges such as only being able to debug your code through the use of blinking lights or beeps. It requires a certain combination of perseverance and ingenuity. However, it's also tremendously satisfying to see your code finally working when you figure it out. You also gain a much deeper awareness of how computers and algorithms work by coding in assembly.

What projects are you working on now?

I'm currently in the early stages of updating the Python Essential Reference book to a new edition. Over the past two years, I've also been doing some software development for a startup. Much of that code is pretty typical stuff involving databases, web services, testing, and so forth. I don't consider myself to be a web programmer so I've actually learned quite a few new things by working on that.

Which Python libraries are your favorite (core or 3rd party)?

As a general rule, my favorites tend to be the built-in libraries. It's hard to point at a favorite, but I think it might be the collections module. I feel like collections is this secret weapon that lets me solve all sorts of tricky data handling problems. If you're not using it, you're probably missing out. For pure fun, I also like the multiprocessing.connection submodule. There's some neat magic in there that can be used to set up authenticated network connections between Python interpreters running on different machines, pass Python objects around between those machines, and do other interesting distributed computing sorts of things.
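
For readers who haven't run into it, here is a small illustrative sketch of the kind of thing the multiprocessing.connection submodule allows; the address, port and authkey below are made-up values, not anything from the interview:

from multiprocessing.connection import Listener, Client

# Server side: accept one authenticated connection and echo objects back.
def serve(address=('localhost', 6000), authkey=b'not-so-secret'):
    listener = Listener(address, authkey=authkey)
    conn = listener.accept()
    obj = conn.recv()            # receives an arbitrary picklable Python object
    conn.send({'echo': obj})     # sends a Python object back
    conn.close()
    listener.close()

# Client side: connect with the same key and exchange Python objects.
def ask(address=('localhost', 6000), authkey=b'not-so-secret'):
    conn = Client(address, authkey=authkey)
    conn.send([1, 2, 3])
    reply = conn.recv()
    conn.close()
    return reply

Run serve() in one interpreter and ask() in another (or on another machine, adjusting the address) and picklable Python objects travel over the authenticated connection.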

For 3rd party modules, I've been spending most of my recent time using SqlAlchemy, Pandas, and requests. All of those are great. I've also spent a fair bit of time using modules related to Redis, ZeroMQ, AWS, and related bits of technology.

Why did you decide to write books about Python?

I got into book writing by refusing to say "no" when asked. I had served as the program chair of the Python conference in 1998 and shortly after the conference, an editor at New Riders publishing (since absorbed into Pearson) contacted me about the possibility of writing a Python book. At that time, there were only a handful of other Python books around. In past programming work, I'd found great use from texts such as the well-known Kernighan and Ritchie "C Programming Language" book as well as books by W. Richard Stevens on advanced Unix programming. I thought it might be interesting to try and write a Python book in a similar vein. So, this ultimately led to the publication of the Python Essential Reference.

The more recent Python Cookbook (O'Reilly) has a slightly different spin. I've been increasingly bothered by all of the excuses and apologies made concerning the use of Python 3. So, for that book, I decided I'd just embrace Python 3 to the greatest extent possible and try to write the most modern Python book I could-without concern for Python 2. There is a certain calculated risk in doing that, but if Python is going to have a bright future, I feel that it should be served by forward-looking books.

Where do you see Python going as a programming language?

Given the general turmoil surrounding Python 2 and 3, this is an interesting question. Having worked with Python 3 for about six years, there are a lot of really great things about it to like. However, it's also quite different in a few critical areas - and that's something that people will have to come to terms with if they choose to use it. Although there will certainly be projects that never migrate to Python 3, there's no reason why you can't start creating new projects in Python 3 now. In the big picture, the world of programming is always going to be filled with messy and annoying problems - things that Python has always excelled at. For that reason, I don't really see it going away anytime soon.

What is your take on the current market for Python programmers?

The Python job market is not something that I follow too closely. However, I don't know any unemployed Python programmers so that's probably a good sign.

Is there anything else you'd like to say?

Python has always been a language with a sense of practicality and fun-programming with it is supposed to be enjoyable. If not, you're probably doing something wrong.

The Last 10 PyDevs of the Week

29 Jun 2015 12:30pm GMT

Robin Wilson: IPython tips, tricks & notes – Part 1


During the last week, I attended the Next Generation Computational Modelling (NGCM) Summer Academy at the University of Southampton. Three days were spent on a detailed IPython course, run by MinRK, one of the core IPython developers, and two days on a Pandas course taught by Skipper Seaborn and Chris Fonnesbeck.
The course was very useful, and I'm going to post a series of blog posts covering some of the things I've learnt. All of the posts will be written for people like me: people who already use IPython or Pandas but may not know some of the slightly more hidden tips and techniques.

Useful Keyboard Shortcuts

Everyone knows the Shift-Return keyboard shortcut to run the current cell in the IPython Notebook, but there are actually three 'running' shortcuts that you should know:

  • Shift-Return: Run the current cell and move to the cell below
  • Ctrl-Return: Run the current cell and stay in that cell
  • Opt-Return: Run the current cell, create a new cell below, and move to it

Once you know these you'll find all sorts of useful opportunities to use them. I now use Ctrl-Return a lot when writing code, running it, changing it, running it again etc - it really speeds that process up!
Also, everyone knows that TAB does autocompletion in IPython, but did you know that Shift-TAB pops up a handy little tooltip giving information about the currently selected item (for example, the argument list for a function, the type of a variable, etc.)? This popup box can be expanded to its full size by clicking the + button on the top right - or by pressing Shift-TAB again.

Magic commands

Again, a number of IPython magic commands are well known: for example, %run and %debug. But there are loads more that can be really useful, and a couple that I wasn't aware of are:

%%writefile

This writes the contents of the cell to a file. For example:

%%writefile test.txt
This is a test file!
It can contain anything I want...

And more...
Writing test.txt
!cat test.txt
This is a test file!
It can contain anything I want...

And more...

%xmode

This changes the way that exceptions are displayed in IPython. It can take three options: plain, context and verbose. Let's have a look at these.
First we create a simple module with a couple of functions; this basically just gives us a way to produce a stack trace, spanning multiple functions, that leads to a ZeroDivisionError.

%%writefile mod.py

def f(x):
    return 1.0/(x-1)

def g(y):
    return f(y+1)
Writing mod.py
Now we'll look at what happens with the default option, context:
import mod
mod.g(0)
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-6-a54c5799f57e> in <module>()
      1 import mod
----> 2 mod.g(0)

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/mod.py in g(y)
      4 
      5 def g(y):
----> 6     return f(y+1)

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/mod.py in f(x)
      1 
      2 def f(x):
----> 3     return 1.0/(x-1)
      4 
      5 def g(y):

ZeroDivisionError: float division by zero
You're probably fairly used to seeing that: it's the standard IPython stack trace view. If we want to go back to the plain Python traceback, we can set it to plain. You can see that you don't get any context on the lines surrounding the exception - not so helpful!
%xmode plain
Exception reporting mode: Plain
import mod
mod.g(0)
Traceback (most recent call last):

  File "<ipython-input-8-a54c5799f57e>", line 2, in <module>
    mod.g(0)

  File "/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin&apossNotes/mod.py", line 6, in g
    return f(y+1)

  File "/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin&apossNotes/mod.py", line 3, in f
    return 1.0/(x-1)

ZeroDivisionError: float division by zero
The most informative option is verbose, which gives all of the information that is given by context but also gives you the values of local and global variables. In the example below you can see that g was called as g(0) and f was called as f(1).
%xmode verbose
Exception reporting mode: Verbose
import mod
mod.g(0)
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-10-a54c5799f57e> in <module>()
      1 import mod
----> 2 mod.g(0)
        global mod.g = <function g at 0x10899aa60>

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/mod.py in g(y=0)
      4 
      5 def g(y):
----> 6     return f(y+1)
        global f = <function f at 0x10899a9d8>
        y = 0

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/mod.py in f(x=1)
      1 
      2 def f(x):
----> 3     return 1.0/(x-1)
        x = 1
      4 
      5 def g(y):

ZeroDivisionError: float division by zero

%load

The load magic loads a Python file, from a filepath or URL, and replaces the contents of the cell with the contents of the file. One really useful application of this is to get example code from the internet. For example, the code %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py will create a cell containing that matplotlib example.

%connect_info & %qtconsole

IPython operates on a client-server basis, and multiple clients (which can be consoles, qtconsoles, or notebooks) can connect to one backend kernel. To get the information required to connect a new front-end to the kernel that the notebook is using, run %connect_info:

%connect_info
{
  "control_port": 49569,
  "signature_scheme": "hmac-sha256",
  "transport": "tcp",
  "stdin_port": 49568,
  "key": "59de1682-ef3e-42ca-b393-487693cfc9a2",
  "ip": "127.0.0.1",
  "shell_port": 49566,
  "hb_port": 49570,
  "iopub_port": 49567
}

Paste the above JSON into a file, and connect with:
    $> ipython <app> --existing <file>
or, if you are local, you can connect with just:
    $> ipython <app> --existing kernel-a5c50dd5-12d3-46dc-81a9-09c0c5b2c974.json 
or even just:
    $> ipython <app> --existing 
if this is the most recent IPython session you have started.
There is also a shortcut that will load a qtconsole connected to the same kernel:
%qtconsole

Stopping output being printed

This is a little thing that is rather reminiscent of Mathematica, but it can be quite handy. You can suppress the output of any cell by ending it with ;. For example:

5+10
15
5+10;
Right, that's enough for the first part - tune in next time for tips on figures, interactive widgets and more.

29 Jun 2015 10:12am GMT

Alex Clark: Pillow 2-9-0 Is Almost Out

Pillow 2.9.0 will be released on July 1, 2015.

Pre-release

Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

Report issues

As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

Thank you!

29 Jun 2015 12:01am GMT

10 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: King Willams Town Railway Station

Yesterday morning I had to go to the station in KWT to pick up the bus tickets we had reserved for the Christmas holidays in Capetown. The station itself has had no train service since December for cost reasons - but Translux and co, the long-distance bus companies, have their offices there.






© benste CC NC SA

10 Nov 2011 10:57am GMT

09 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein

Nobody is worried about something like that - you just drive through by car, and in the city - near Gnobie - "no, that only gets dangerous once the fire brigade is there" - 30 minutes later, on the way back, the fire brigade was there.




© benste CC NC SA

09 Nov 2011 8:25pm GMT

08 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Brai Party

Brai = a barbecue evening or something similar.

They would like a technician to help patch up their SpeakOn / jack plug splitters...

The ladies, the "Mamas" of the settlement, at the official opening speech

Even though fewer people turned up than expected: loud music and lots of people ...

And of course a fire with real wood for the barbecue.

© benste CC NC SA

08 Nov 2011 2:30pm GMT

07 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Lumanyano Primary

One of our missions was bringing Katja's Linux Server back to her room. While doing that we saw her new decoration.

Björn, Simphiwe carried the PC to Katja's school


© benste CC NC SA

07 Nov 2011 2:00pm GMT

06 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nelisa Haircut

Today I went with Björn to Needs Camp to visit Katja's guest family for a special party. First of all we visited some friends of Nelisa - yeah, the one I'm working with in Quigney - Katja's guest father's sister - who gave her a haircut.

African women usually get their hair done by adding extensions, not, like Europeans, by just cutting some hair.

In between she looked like this...

And then she was done - looks amazing considering the amount of hair she had last week - doesn't it ?

© benste CC NC SA

06 Nov 2011 7:45pm GMT

05 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: My Saturday

Somehow it struck me today that I need to restructure my blog posts a bit - if I only ever report on new places, I would have to be on a permanent round trip. So here are a few things from my everyday life today.

First of all: Saturday counts as a day off, at least for us volunteers.

This weekend only Rommel and I are on the farm - Katja and Björn are now at their placements, and my housemates Kyle and Jonathan are at home in Grahamstown - as is Sipho, who lives in Dimbaza.
Robin, Rommel's wife, has been in Woodie Cape since Thursday to take care of a few things there.
Anyway, this morning we first treated ourselves to a shared Weetbix/muesli breakfast and then set off for East London. Two things were on the checklist: Vodacom and Ethienne (the estate agent), plus taking the missing items to NeedsCamp on the way back.

Just after setting off on the dirt road we realised that we hadn't packed the things for Needscamp and Ethienne, but that we did have the pump for the water supply in the car.

So in East London we first drove to Farmerama - no, not the online game Farmville, but a shop with all sorts of things for a farm - in Berea, a northern part of town.

At Farmerama we got advice on a quick-release coupling that should make life with the pump easier, and we also dropped off a lighter pump for repair, so that it is not always such a big effort whenever the water has run out again.

Fego Caffé is in the Hemmingways Mall; there we had to get the PIN and PUK of one of our data SIM cards, because unfortunately two digits had been swapped when entering the PIN. Anyway, shops in South Africa store data as sensitive as a PUK - which in principle gives access to a locked phone.

In the cafe Rommel then carried out a few online transactions with the 3G modem, which was working again - and which, by the way, now works perfectly in Ubuntu, my Linux system.

On the side I went to 8ta to find out about their new deals, since we want to offer internet in some of Hilltops centres. The picture shows the UMTS coverage in NeedsCamp, Katja's village. 8ta is a new phone provider from Telkom; after Vodafone bought Telkom's share of Vodacom, they have to build everything up again from scratch.
We decided to get hold of a free prepaid card to test, because who knows how accurate the coverage map above is ... Before signing even the cheapest 24-month deal you should know whether it works.

After that we went to Checkers in Vincent, looking for two hotplates for WoodyCape - R 129.00 each, i.e. about 12€ for a double hotplate.
As you can see in the background, the Christmas decorations are already out - at the beginning of November, and that in South Africa at a sunny, warm 25°C minimum.

We treated ourselves to lunch at a Pakistani curry takeaway - highly recommended!
Well, and after we got back an hour or so ago, I cleaned the fridge, which I had simply put outside this morning to defrost. Now it is clean again and free of its 3 m thick layer of ice...

Tomorrow ... well, I'll report on that separately ... but probably not until Monday, because then I'll be back in Quigney (East London) and have free internet.

© benste CC NC SA

05 Nov 2011 4:33pm GMT

31 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Sterkspruit Computer Center

Sterkspruit is one of Hilltops Computer Centres in the far north of the Eastern Cape. On the trip to J'burg we used the opportunity to take a look at the centre.

Pupils in the big classroom


The Trainer


School in Countryside


Adult Class in the Afternoon


"Town"


© benste CC NC SA

31 Oct 2011 4:58pm GMT

Benedict Stein: Technical Issues

What do you do in an internet cafe when your ADSL and fax line have been cut off before the end of the month? Well, my idea was to sit outside and eat some ice cream.
At least it's sunny and not as rainy as on the weekend.


© benste CC NC SA

31 Oct 2011 3:11pm GMT

30 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nellis Restaurant

For those who are traveling through Zastron - there is a very nice restaurant which serves delicious food at reasonable prices.
In addition they sell home-made juices, jams and honey.




interior


home made specialities - the shop in the shop


the Bar


© benste CC NC SA

30 Oct 2011 4:47pm GMT

29 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: The way back from J'burg

On the 10 - 12h trip from J'burg back to ELS I was able to take a lot of pictures, including these different roadsides:

Plain Street


Orange River in its beginnings (near Lesotho)


Zastron Anglican Church


The Bridge in Between "Free State" and Eastern Cape next to Zastron


my new Background ;)


If you listen to GoogleMaps you'll end up traveling 50km of gravel road - as it had just been renewed we didn't have that many problems and saved 1h compared to going the official way with all its construction sites




Freeway


getting dark


© benste CC NC SA

29 Oct 2011 4:23pm GMT

28 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: How does a construction site actually work?

Sure, some things may be different and many the same - but a road construction site is an everyday sight in Germany - so how does that actually work in South Africa?

First of all - NO, there are no natives digging with their hands - even though more manpower is used here, they are busily working with technology.

A perfectly normal "main road"


and how it is being widened


looots of trucks


because here one side is completely closed over a long stretch, resulting in a traffic-light arrangement with a waiting time of 45 minutes


But at least they seem to be having fun ;) - as did we, because luckily we never had to wait longer than 10 minutes.

© benste CC NC SA

28 Oct 2011 4:20pm GMT