22 Nov 2014

feedPlanet Python

Invent with Python: IDLE Reimagined

I've started a wiki for an IDLE redesign project: https://github.com/asweigart/idle-reimagined/wiki

If you would like to help, please join the mailing list: https://groups.google.com/forum/#!forum/idle-reimagined/

IDLE Reimagined mockup screenshot

From the wiki:

IDLE Reimagined is the code name for a redesign of Python's default IDLE editor, with a focus on its use as an educational tool. IDLE's chief utility is that it comes installed with Python, making it simple for newbies to start programming. But professional software developers don't use IDLE as their IDE. So instead of turning IDLE into a sophisticated IDE for professionals, it can be tooled with features that specifically make it friendly to those learning to program.

Prime Directives for the new design:

  1. IR is designed not for experienced developers or those new to Python, but specifically for those new to programming.
  2. IR is meant to be a drop-in replacement for IDLE, installed with the default Python installer.
  3. IR's code will use the tkinter GUI toolkit (unless a better GUI toolkit is bundled with Python).
  4. IR is fully-featured offline, but also has features for finding help or sharing code online.
  5. "Simple is better than complex."

These are the features that will distinguish IR and make it a good candidate to replace IDLE:

  1. Single window design, with file editor on the upper pane and interactive shell on the lower pane. (No more confusing separate windows for shell & file editor.)
  2. Tabbed file editor.
  3. Foreign language support. (Though Python's keywords and standard library will still be in English, IDLE itself can be multi-lingual.)
  4. Tutorial plugin system for Codecademy-like tutorials.
  5. Integrated pip installer.
  6. Integrated pastebin feature. (Easily share code with those who can help you.)
  7. "Plain English" error message translations. Instant Google-search for error messages.
  8. Detects and warns if you are trying to run Python 2 code on Python 3.
  9. Lightweight real-time lint tool that will point out missing variables and syntax errors. (Checks for errors, but does not check for style or PEP8.)
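As a rough illustration of how the syntax-checking half of such a lint tool might work (this is my sketch, not IR's actual design): Python's built-in `compile()` already reports syntax errors with line numbers, and since a Python 2 `print` statement is a syntax error on Python 3, the same check doubles as a crude Python 2 detector.

```python
def check_syntax(source, filename="<editor>"):
    """Return None if source compiles cleanly, else (line, message).

    Hypothetical helper for an editor's real-time lint pass; it only
    catches syntax errors, not style or missing-variable issues.
    """
    try:
        compile(source, filename, "exec")
    except SyntaxError as e:
        return (e.lineno, e.msg)
    return None

# A Python 2 print statement fails to compile on Python 3, which is
# one way an editor could warn about Python 2 code:
print(check_syntax('print "hello"'))
print(check_syntax("x = 1\ny = x + 1\n"))  # None: valid code
```

An editor could run this on the buffer after each pause in typing and underline the reported line.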

22 Nov 2014 8:56pm GMT

Brett Cannon: The game mechanics of Destiny

My good friend Paul is about to start playing Destiny with me on the PS4, so I figured I should explain the game mechanics to him upfront so he can make wise decisions early on and maximize his playing time. And since it's me, I figured I might as well share my notes publicly. Do realize this focuses on the game mechanics and does not explain every minute detail of the game, so I will gloss over some bits.

Until you're level 20

Destiny's game mechanics are very different from when you start a character until you hit level 20 compared to level 20 and beyond (currently the level cap is 30).

Leveling up

Until level 20 you level up your character through experience; standard fare compared to other games.

Focuses

Beyond simply having a level, your character also has a focus. You start with one focus and unlock new abilities of your focus through experience. Once you hit level 15, though, you unlock a second focus that you can switch to. Note that when you switch, your new focus will have no abilities, so that means e.g. no grenades initially.

By the time you hit level 15 you will probably want to have decided if you want to stick with your initial focus or switch and start building up your new focus. Getting deep into your abilities takes a lot of time so you probably don't want to go too far into any one focus that you don't think you will care about.

I should also note that the level 15 unlock is only required for one character. After that any new character you start will be able to switch focuses from the beginning.

Reputation

Once you hit level 4 you unlock bounties. You complete bounties for reputation which you will care about once you are past level 20, so start completing bounties as soon as you can. Reputation is essentially a ranking amongst various factions in Destiny which sell different gear. For any faction, rank 1 lets you buy pretty stuff, rank 2 is armour, and rank 3 is weapons. When you start a character you earn reputation for the Vanguard for any non-PvP bounties, patrols, and strikes. Playing in the Crucible - which is the PvP multiplayer - earns you Crucible reputation. You can't buy any gear from a faction until you hit level 18, but you can at least earn reputation starting at level 4.

At level 18 you start earning marks which is the currency of the factions. You can get Vanguard marks for completing various kinds of strikes. Crucible marks come from playing in the Crucible.

One thing to realize about turning in bounties is that the reputation is earned not only by you but also by the items you have equipped. This means if you have maxed out an item you can swap in another one you use on occasion, turn in a bounty, and then have that item gain experience without ever using it to earn the bounty.

Items

Initially you get items either by buying them with glimmer (the basic currency, which is shared amongst your characters), through drops from enemies, or as rewards for completing things such as missions.

The rarity of items is:

  1. Common - white
  2. Uncommon - green
  3. Rare - blue
  4. Legendary - purple
  5. Exotic - gold

Until you hit level 20 you will only get common and uncommon items.

Starting with uncommon items, you can level up your items through experience and materials. The basic upgrades just use glimmer to unlock item abilities, but later on harder-to-obtain materials are necessary. Basically, anything that takes glimmer and/or weapon parts you can just upgrade without a second thought; all other upgrades should be thought through based on how much you like the item.

You can share items between your characters through the vault in the Tower. You should also realize that all drops you pick up in the game are unique to you, so there is no arguing over who gets what loot.

Level 20 and beyond

Items

Once you hit level 20 you start earning rare items. You also begin to see legendary items, as well as working towards exotic gear.

Exotic items

Exotic items are extremely quirky compared to the other item rarities. To begin with, you can only have a single exotic weapon and a single piece of exotic armor equipped at one time.

How you obtain exotic items is also quirky. You can collect motes of light, which you take to the Speaker for exotic gear. You can collect Strange Coins to trade with Xur (who only shows up on weekends). You can get exotic bounties with some ridiculous requirements. Or you can complete raids.

Factions

Remember how you have been earning reputation for the Vanguard and Crucible since you hit level 4? Well, now you have a bunch of factions to earn reputation for. There are 3 other core factions always in the Tower, along with unique ones that rotate in every so often for a limited time. If you wear a trinket from a faction then all of your reputation, Vanguard or Crucible, counts toward that faction (this only applies to the non-default factions, i.e. not Vanguard and Crucible). You will only use Crucible marks to buy items from these other factions, though. They all have different focuses in terms of stats, so that will influence which faction you focus on.

What all this means is that if you play both PvP and PvE then you may want to choose a non-default faction so as to pool your experience. It does mean, though, you will need to play enough multiplayer to earn the Crucible marks required to purchase items.

Leveling up

One of the biggest shifts in Destiny once you hit level 20 is that you level up not through experience but through light. Rare armour and higher have an amount of light, and the total light amount decides what level you are at. You can get to level 24 on rare items alone, and level 28 with bought legendary items. But to get past level 28 you will need to either earn higher-end legendary gear or get an exotic piece of armour.

Summary

22 Nov 2014 1:50am GMT

21 Nov 2014

feedPlanet Python

Ian Ozsvald: My Keynote at PyConIreland 2014 – “The Real Unsolved Problems in Data Science”

I've just given the opening keynote here at PyConIreland 2014 - many thanks to the organisers for letting me get on stage. This is based on 15 years' experience running my own consultancies in Data Science and Artificial Intelligence. (Small note - with the pic below, James mis-tweeted 'sexist' instead of 'sexiest' (from my opening slide) <sigh>)

The slides for "The Real Unsolved Problems in Data Science" are available on speakerdeck along with the full video. I wrote this for the more engineering-focused PyConIreland audience. These are the high level points, I did rather fill my hour:

From discussions afterwards it seems that my message "you need clean data to do neat data science stuff" was well received. I'm certainly not the only person in the room battling with Unicode foolishness (not in Python of course as Python 3+ solves the Unicode problem :-).
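To illustrate the sort of Unicode foolishness meant here (my example, not one from the talk): Python 3's strict bytes/str split forces you to decode explicitly, so mislabeled input fails loudly up front instead of silently corrupting data downstream.

```python
# Bytes as they might arrive from a messy data source; the second
# value is Latin-1 encoded but will be mislabeled as UTF-8.
good = "café".encode("utf-8")
bad = b"caf\xe9"

print(good.decode("utf-8"))  # 'café'

try:
    bad.decode("utf-8")  # raises UnicodeDecodeError
except UnicodeDecodeError:
    # A common cleanup step: fall back to Latin-1 (or replace bad bytes).
    print(bad.decode("latin-1"))  # 'café'
```

In Python 2 the implicit str/unicode coercions often deferred this error to some distant, unrelated line of code, which is much of what made the cleanup painful.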


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

21 Nov 2014 6:39pm GMT

Filipe Saraiva: Presenting a Season of KDE 2014 student – Minh Ngo

Season of KDE is an outreach program hosted by the KDE community. This year I am working as a mentor on a long-requested project related to Cantor - the development of a Python 3 backend. You can read more about Cantor in my blog (texts in English and Portuguese). So, let's say welcome and good luck to Minh Ngo, the student behind this project!

Hi,

My name is Minh,

Minh Ngo

I'm a BSc graduate. I'm Vietnamese, but unlike most other Vietnamese students I have spent most of my life in Ukraine. Currently, I'm preparing for my Master's degree, which will start next semester.

Open source is my free-time hobby, so I would like to make something that is useful for the community. Previously, I participated in the GSoC 2013 program and in several open source projects. Some of my personal projects are available on my github page https://github.com/Ignotus; they are not as popular as other cool projects, but several are used by other people, and this fact makes me very happy :) .

Cantor is one of those opportunities to spend time creating a useful thing and win an exclusive KDE T-shirt :). I decided to start my contribution with the Python 3 backend because a few months ago I studied several courses related to Machine Learning, and I was looking for a stable desktop frontend for IPython. I don't entirely like the notebook version of IPython, and its qtconsole version doesn't satisfy me in terms of functionality, so I decided to find an existing frontend for IPython that I could tune for myself. And that's where the story with Cantor began :)

Happy hacking!

21 Nov 2014 4:32pm GMT

Flavio Percoco: What's coming in Kilo for Glance, Zaqar and Oslo?

As usual, here's a write up of what happened last week during the OpenStack Summit. More than a summary, this post contains the plans we discussed for the next 6 months.

Glance

Lots of things happened in Juno for Glance. Work related to artifacts was done, async workers were implemented, and glance_store was created. If none of these things excites you, I'm sorry to tell you that you're missing the big picture.

The 3 features mentioned above are the basis of many things that will happen in Kilo. For a long time we've been waiting for async workers to land, and now that we have them we can't help but use them. One of the first things that will consume this feature is image introspection, which will allow Glance to read images' metadata and extract useful information from them. In addition to this, we'll be messing with images a bit more by implementing basic support for image conversion, allowing images to be converted automatically during uploads and also as a manual operation. There are many things to take care of here and tons of subtle corner cases, so please keep an eye on these things and help us out.
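To give a feel for what image conversion involves (a hypothetical sketch, not Glance's actual code): the heavy lifting is typically delegated to an external tool such as qemu-img, so the service's job is mostly to build and run the right conversion command on a worker.

```python
import subprocess

def build_convert_cmd(src, dst, src_fmt="qcow2", dst_fmt="raw"):
    """Build a qemu-img invocation converting src to dst.

    Hypothetical helper; the real implementation would also validate
    formats and handle temporary files.
    """
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst]

def convert_image(src, dst, **fmts):
    # An async worker would run this off the request path.
    subprocess.run(build_convert_cmd(src, dst, **fmts), check=True)

print(build_convert_cmd("upload.qcow2", "store.raw"))
```

Splitting command construction from execution keeps the interesting logic testable without requiring qemu-img to be installed.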

The work on artifacts is not complete; there are still many things to do there, and lots of patches and code are being written. This still seems to be the path the project is going down for Kilo, to allow more generic catalogs and support for storing data assets.

One more thing on Glance: all the work that happened in glance_store during Juno will finally pay off in Kilo. We'll start refactoring the library, and it'll likely be adopted by Nova in K-2. Notice I said likely? That's because before we get there we need to clean up the messy glance wrapper Nova has. In that same session we discussed what to do with that code and agreed to get rid of it and let Nova consume glanceclient directly, which will happen in kilo-1, before the glance_store adoption. Here's the spec.

Zaqar

When thinking about Zaqar and Kilo, you need to keep 3 things in mind:

  1. Notifications
  2. Persistent Transport
  3. Integration with other services

Notifications are something we've wanted to work on since Icehouse. We talked about them back in Hong Kong, then in Atlanta, and we finally have a good plan for them now. The team will put a lot of effort into this feature, and we'd love to get as much feedback as possible on the implementation, use cases, and targets. In order to implement notifications and mark a fresh start, the team has also decided to bump the API version number to 2 and use this chance to clean up the technical debt from previous versions. Some of the things that will go away from the API are:

One of the project's goals is to be easily consumed regardless of the device you're using. Moreover, the project wants to allow users to integrate with it. Therefore, the team is planning to start working on a persistent transport, in order to define a message-based protocol that is both stateless and persistent as far as the communication between the peers goes. The first target is websocket, which will allow users to consume Zaqar's API from a browser, or through a library, without having to go down to raw TCP connections (which was highly discouraged at the summit). This fits perfectly with the project's goals of being easily consumable and of reusing existing technologies and solutions as much as possible.

Although the above two features sound exciting, the ultimate goal is to integrate with other projects in the community. The team has long waited for this opportunity, and now that it has a stable API, it is the perfect time for this integration to happen. At our integration session folks from Barbican, Trove, Heat, and Horizon showed up - THANKS - and they all shared use cases, ideas, and interesting opinions about what they need and what they'd like to see happen in Kilo with regards to this integration. Based on the results of this session, Heat and Horizon are likely to be the first targets. The team is thrilled about this and we're all looking forward to this collaboration.

Oslo

No matter what I work on, I'll always have time for Oslo. Just like for the other projects I mentioned, there will be exciting things happening in Oslo as well.

Let me start by saying that new libraries will be released, but not many of them. This will give the team the time needed to focus on the existing ones and also to work on the other, perhaps equally important, items in the list. For example, we'll be moving away from using namespaces - YAY! - which means we'll be updating all the already-released libraries. Worth mentioning: the already-released libraries won't be renamed, and the ones to be released will follow the same naming standard; the difference is that they won't use namespaces internally at all.

Also related to library maintenance, the team has decided to stop using alpha versions for the libraries. One of the arguments against this was that we currently don't put caps on stable branches; however, this will change in Kilo. We will pin to MAJOR.MINOR+1 in stable, allowing bug fixes in MAJOR.MINOR.PATCH+1.
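In pip requirements terms, that pinning policy would look something like this (illustrative library and version numbers, not the actual caps):

```
# stable-branch cap: allow 1.4.x bug-fix releases, block 1.5.0
oslo.messaging>=1.4.0,<1.5.0
```

The lower bound keeps the tested feature set; the exclusive upper bound lets PATCH-level fixes in while blocking the next MINOR release.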

I unfortunately couldn't attend all the Oslo sessions, and I missed one that I really wanted to attend, about oslo.messaging. Reading the etherpad, it looks like great things will happen in the library during Kilo that will help grow its community. Drivers will be kept in-tree, and zmq won't be deprecated, yet. Some code de-duplication will happen, and the rabbit and qpid drivers will be merged into a single one now that kombu has support for qpid. Just like other projects throughout OpenStack, we'll be targeting full Py3K support like CRAZY!

Hopefully I didn't forget anything or, even worse, say something stupid. Now, if you'll excuse me, I've got to go offline for the next 6 months. Someone has to work on these things.

21 Nov 2014 3:33pm GMT

Flavio Percoco: What's coming in Kilo for Glance, Zaqar and Oslo?

As usual, here's a write up of what happened last week during the OpenStack Summit. More than a summary, this post contains the plans we discussed for the next 6 months.

Glance

Lots of things happened in Juno for Glance. Work related to artifacts was done, async workers were implemented and glance_store was created. If none of these things excite you, I'm sorry to tell you that you're missing the big picture.

The 3 features mentioned above are the bases of many things that will happen in Kilo. For long time, we've been waiting for async workers to land and now that we have them we can't but use them. One of the first things that will consume this feature is image introspection, which will allow glance to read image's metadata and extract useful information from them. In addition to this, we'll messing with images a bit more by implementing basic support for image conversion to allow for automatic conversion of images during uploads and also as a manual operation. There are many things to take care of here and tons of subtle corner cases so please, keep an eye on these things and help us out.

The work on artifacts is not complete, there are still many things to do there and lots of patches and code are being written. This still seems to be the path the project is going down to for Kilo to allow more generic catalogs and support for storing data assets.

One more thing on Glance, all the work that happened in glancestore during Juno, will finally pay off in Kilo. We'll start refactoring the library and it'll likely be adopted by Nova in K-2. Noticed I said likely? That's because before we get there, we need to clean up the messy glance wrapper nova has. In that same session we discussed what to do with that code and agreed on getting rid of it and let nova consume glanceclient directly, which will happen in kilo-1 before the glancestore adoption. Here's the spec.

Zaqar

When thinking about Zaqar and Kilo, you need to keep 3 things in mind:

  1. Notifications
  2. Persistent Transport
  3. Integration with other services

Notifications is something we've wanted to work on since Icehouse. We talked about them back in Hong Kong, then in Atlanta and we finally have a good plan for them now. The team will put lots of efforts on this feature and we'd love to get as much feedback as possible on the implementation, use cases and targets. In order to implement notifications and mark a fresh start, the team has also decided to bump the API version number to 2 and use this chance to clean up the technical debt from previous versions. Some of the things that will go away from the API are:

One of the projects goal is to be easily consumed regardless of the device you're using. Moreover, the project wants to allow users to integrate with it. Therefore, the team is planning to start working on a persistent Transport in order to define a message-based protocol that is both stateless and persistent as far as the communication between the peers goes. The first target is websocket, which will allow users to consume Zaqar's API from a browser and even using a library without having to go down to raw TCP connections, which was highly discouraged at the summit. This falls perfectly in the projects goals to be easily consumable and to reuse existing technologies and solutions as much as possible.

Although the above two features sound exciting, the ultimate goal is to integrate with other projects in the community. The team has long waited for this opportunity and now that it has a stable API, it is the perfect time for this integration to happen. At our integration session folks from Barbican, Trove, Heat, Horizon showed up - THANKS - and they all shared use-cases, ideas and interesting opinions about what they need and about what they'd like to see happening for Kilo with regards to this integration. Based on the results of this session Heat and Horizon are likely to be the first targets. The team is thrilled about this and we're all looking forward for this collaboration to happen.

Oslo

No matter what I work on, I'll always have time for Oslo. Just like for the other projects I mentioned, there will be exciting things happening in Oslo as well.

Let me start by saying that new libraries will be released, but not many of them. This will give the team the time needed to focus on the existing ones and also to work on the other, perhaps equally important, items in the list. For example, we'll be moving away from using namespaces - YAY! - which means we'll be updating all the already-released libraries. Something worth mentioning is that the already-released libraries won't be renamed, and the ones still to be released will follow the same naming standard. The difference is that they won't be using namespaces internally at all.

Also related to library maintenance, the team has decided to stop using alpha versions for the libraries. One of the points against this is that we currently don't put caps on stable branches; however, this will change in Kilo. We will pin to MAJOR.MINOR+1 in stable, allowing bug fixes in MAJOR.MINOR.PATCH+1.
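
As a concrete illustration of that pinning scheme (library name and versions here are hypothetical, just to show the shape of the cap): a stable branch that shipped with version 1.4.0 of a library would carry a requirement like:

```
# Hypothetical requirements line illustrating the MAJOR.MINOR+1 cap:
# bug-fix releases 1.4.1, 1.4.2, ... are allowed; 1.5.0 is excluded.
oslo.example>=1.4.0,<1.5
```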

I unfortunately couldn't attend all the Oslo sessions, and I missed one that I really wanted to attend about oslo.messaging. By reading the etherpad, it looks like great things will happen in the library during Kilo that will help grow its community. Drivers will be kept in-tree, and zmq won't be deprecated, yet. Some code de-duplication will happen, and the rabbit and qpid drivers will be merged into a single one now that kombu has support for qpid. Just like other projects throughout OpenStack, we'll be targeting full Py3K support like CRAZY!

Hopefully I didn't forget anything or, even worse, say something stupid. Now, if you'll excuse me, I gotta go offline for the next 6 months. Someone has to work on these things.

21 Nov 2014 3:33pm GMT

Duncan McGreggor: ErlPort: Using Python from Erlang/LFE

This is a short little blog post I've been wanting to get out there ever since I ran across the erlport project a few years ago. Erlang was built for fault-tolerance. It had a goal of unprecedented uptimes, and these have been achieved. It powers 40% of our world's telecommunications traffic. It's capable of supporting amazing levels of concurrency (remember the 2007 announcement about the performance of YAWS vs. Apache?).

With this knowledge in mind, a common mistake by folks new to Erlang is to think these performance characteristics will be applicable to their own particular domain. This has often resulted in failure, disappointment, and the unjust blaming of Erlang. If you want to process huge files, do lots of string manipulation, or crunch tons of numbers, Erlang's not your bag, baby. Try Python or Julia.

But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python's the right tool for the jobs, but I wish I could manage them with Erlang.

(There are sooo many other options for the use cases above, many of them really excellent. But this post is about Erlang/LFE :-)).

Traditionally, if you want to run other languages with Erlang in a reliable way that doesn't bring your Erlang nodes down with badly behaved code, you use Ports (more info is available in the Interoperability Guide). This is what JInterface builds upon (and, incidentally, allows for some pretty cool integration with Clojure). However, this still leaves a pretty significant burden for the Python or Ruby developer for any serious application needs (quick one-offs that only use one or two data types are not that big a deal).

erlport was created by Dmitry Vasiliev in 2009 in an effort to solve just this problem, making it easier to use and integrate Erlang with more common languages like Python and Ruby. The project is maintained, and in fact has just received a few updates. Below, we'll demonstrate some usage in LFE with Python 3.
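
To give a flavor of the Python side, a module called from Erlang/LFE via erlport can be ordinary Python: a plain function whose arguments and return value erlport serializes between the two runtimes. (This sketch is illustrative and not taken from the demo repo; the function name is made up.)

```python
# A plain Python 3 module of the kind erlport can invoke from Erlang/LFE.
# The function name here is illustrative, not from the demo repo.

def sum_and_describe(numbers):
    """Return a (total, count) pair.

    erlport generally maps Python tuples to Erlang tuples and lists to
    lists, so the caller on the LFE side receives a native Erlang term.
    """
    total = sum(numbers)
    return (total, len(numbers))
```

From the LFE REPL, calling such a function goes through erlport's Python interface, with the result arriving as a regular Erlang tuple.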

If you want to follow along, there's a demo repo you can check out:
Change into the repo directory and set up your Python environment:
Next, switch over to the LFE directory, and fire up a REPL:
Note that this will first download the necessary dependencies and compile them (that's what the [snip] is eliding).

Now we're ready to take erlport for a quick trip down to the local:
And that's all there is to it :-)

Perhaps in a future post we can dive into the internals, showing you more of the glory that is erlport. Even better, we could look at more compelling example usage, approaching some of the functionality offered by such projects as Disco or Anaconda.


21 Nov 2014 3:08pm GMT

The Changelog: Want to get more out of Spotlight? Shine a Flashlight

Flashlight is a plugin system for Yosemite's (newly improved) Spotlight. It already supports weather, Wolfram|Alpha, terminal commands, and much more.

flashlight-example

Plugins are written in Python, so it should be pretty easy to hop in and code up your own!


Subscribe to The Changelog Weekly - our free weekly email covering everything that hits our open source radar.


The post Want to get more out of Spotlight? Shine a Flashlight appeared first on The Changelog.

21 Nov 2014 1:34pm GMT

Julien Danjou: Distributed group management and locking in Python with tooz

With OpenStack embracing the Tooz library more and more over the past year, I think it's a good time to write a bit about it.

A bit of history

A little more than a year ago, with my colleague Yassine Lamgarchal and others at eNovance, we investigated how to solve a problem often encountered inside OpenStack: synchronization of multiple distributed workers. And while many people in our ecosystem continue to drive development by adding new bells and whistles, we made a point of solving new problems with a generic solution able to address the technical debt at the same time.

Yassine wrote down the first ideas of what the group membership service needed by OpenStack should be, identifying several projects that could make use of it. I presented this concept during the OpenStack Summit in Hong Kong during an Oslo session. It turned out that the idea was well received, and the week following the summit we started the tooz project on StackForge.

Goals

Tooz is a Python library that provides a coordination API. Its primary goal is to handle groups and membership of these groups in distributed systems.

Tooz also provides another useful feature which is distributed locking. This allows distributed nodes to acquire and release locks in order to synchronize themselves (for example to access a shared resource).

The architecture

If you are familiar with distributed systems, you might be thinking that there are a lot of solutions already available to solve these issues: ZooKeeper, the Raft consensus algorithm or even Redis for example.

You'll be thrilled to learn that Tooz is not the result of the NIH syndrome, but is an abstraction layer on top of all these solutions. It uses drivers to provide the real functionality behind the scenes, and does not try to do anything fancy.

Not all drivers provide the same amount of functionality or robustness, but depending on your environment, any available driver might suffice. Like most of OpenStack, we let deployers/operators/developers choose whichever backend they want to use, informing them of the potential trade-offs they will make.

So far, Tooz provides drivers based on:

All drivers are distributed across processes. Some can be distributed across the network (ZooKeeper, memcached, redis…) and some are only available on the same host (IPC).

Also note that the Tooz API is completely asynchronous, allowing it to be more efficient, and potentially included in an event loop.

Features

Group membership

Tooz provides an API to manage group membership. The basic operations provided are: the creation of a group, the ability to join it, leave it and list its members. It's also possible to be notified as soon as a member joins or leaves a group.
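
The operations listed above make up a small API surface. As a toy, in-memory sketch of that shape (illustrative only, not Tooz's real implementation, which delegates everything to a driver and works across processes):

```python
class ToyCoordinator:
    """In-memory stand-in for the group-membership API described above."""

    def __init__(self):
        self._groups = {}  # group name -> set of member ids

    def create_group(self, group):
        self._groups.setdefault(group, set())

    def join_group(self, group, member):
        self._groups[group].add(member)

    def leave_group(self, group, member):
        self._groups[group].discard(member)

    def get_members(self, group):
        return set(self._groups[group])
```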

Leader election

Each group can have a leader elected. Each member can decide if it wants to run for the election. If the leader disappears, another one is elected from the list of current candidates. It's possible to be notified of the election result and to retrieve the leader of a group at any moment.
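
The re-election rule described above can be pictured as a pure function over the candidate set (an illustrative sketch; real drivers also handle failure detection and notifications):

```python
def elect_leader(candidates, current_leader=None):
    """Keep the current leader while it is still a candidate;
    otherwise pick a new one deterministically from those remaining."""
    if current_leader in candidates:
        return current_leader
    return min(candidates) if candidates else None
```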

Distributed locking

When trying to synchronize several workers in a distributed environment, you may need a way to lock access to some resources. That's what a distributed lock can help you with.

Adoption in OpenStack

Ceilometer is the first project in OpenStack to use Tooz. It has replaced part of the old alarm distribution system, where RPC was used to detect active alarm evaluator workers. The group membership feature of Tooz was leveraged by Ceilometer to coordinate between alarm evaluator workers.

Another new feature in the Juno release of Ceilometer is the distribution of the central agent's polling tasks among multiple workers. There's again a group membership problem, knowing which nodes are online and available to receive polling tasks, so Tooz is also being used here.

The Oslo team has accepted the adoption of Tooz during this release cycle. That means that it will be maintained by more developers, and will be part of the OpenStack release process.

This opens the door to push Tooz further in OpenStack. Our next candidate would be to write a service group driver for Nova.

The complete documentation for Tooz is available online and has examples for the various features described here, go read it if you're curious and adventurous!

21 Nov 2014 12:10pm GMT

Reinout van Rees: Summary of my "developer laptop automation" talk

Last week I gave a talk at a Python meetup in Eindhoven (NL). I summarized Therry van Neerven's Python desktop application development talk, but I didn't write a summary for my own "developer laptop automation" talk.

Turns out Therry returned the favour and made a summary of my talk. A good one!

21 Nov 2014 9:15am GMT

Martin Fitzpatrick: PyQtConfig: A simple API for keeping your PyQt Widgets and config in sync

Introducing PyQtConfig: a simple API for handling, persisting and synchronising configuration within PyQt applications. This module was built initially as part of the Pathomx data analysis platform but spun out into a standalone module when it became clear it was quite useful.

This post gives a brief overview of the API features and use. It is still in development so suggestions, comments, bug reports and pull-requests are very welcome.

Demo of config setting with widgets #1

Features

Introduction

The core of the API is a ConfigManager instance that holds configuration settings (either as a Python dict, or a QSettings instance) and provides standard methods to get and set values.

Configuration parameters can have Qt widgets attached as handlers. Once attached, the widget and the configuration value will be kept in sync. Setting the value on the ConfigManager will update any attached widgets, and changes to the value on the widget will be reflected immediately in the ConfigManager. Qt signals are emitted on each update.

Default values can be set and will be returned transparently if a parameter remains unset. The current state of config can be saved and reloaded via XML or exported to a flat dict.

A small application has been included in PyQtConfig to demonstrate these features (interaction with widgets requires a running QApplication). Go to the pyqtconfig install folder and run it with:

python -m pyqtconfig.demo

Demo of config setting with widgets #2

Demo of config setting with widgets #3

Demo of config setting with widgets #4

Simple usage (dictionary)

To store your settings you need to create a ConfigManager instance. This consists of a settings dictionary, a default settings dictionary and a number of helper functions to handle setting, getting and other functions.

from pyqtconfig import ConfigManager

config = ConfigManager()

config.set_defaults({
    'number': 13,
    'text': 'hello',
    'array': ['1','2'],
    'active': True,    
})

Before values are set the default value will be returned when queried.

config.get('number')
13

config.set('number', 42)
config.get('number')
42

Simple usage (QSettings)

The QSettingsManager provides exactly the same API as the standard ConfigManager; the only difference is in the storage of values.

from pyqtconfig import QSettingsManager

settings = QSettingsManager()

settings.set('number', 42)
settings.set('text', "bla")
settings.set('array', ["a", "b"])
settings.set('active', True)

settings.get('number')
>> 42

Note: On some platforms, versions of Qt, or Qt APIs, QSettings will return strings for all values, which can lead to complicated code and breakage. However, PyQtConfig is smart enough to use the type of the config parameter in defaults to auto-convert returned values.
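
The auto-conversion described here can be pictured as coercing the stored string back through the type of the registered default. This is a simplified sketch of the idea, not PyQtConfig's actual code:

```python
def coerce_to_default_type(value, default):
    """Convert a QSettings-returned string back to the default's type
    (simplified illustration of type-driven coercion)."""
    if isinstance(default, bool):
        # bool needs special care: bool('false') would be True in Python,
        # since any non-empty string is truthy.
        return value if isinstance(value, bool) else value == 'true'
    return type(default)(value)
```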

However, you do not have to set defaults manually. As of v0.7 default values are auto-set when attaching widgets (handlers) to the config manager if they're not already set.

From this point on we'll be referring to the ConfigManager class only, but all features work identically in QSettingsManager.

Adding widget handlers

So far we could have achieved the same thing with a standard Python dict/QSettings object. The real usefulness of PyQtConfig is in the ability to interact with QWidgets maintaining synchronisation between widgets and internal config, and providing a simple standard interface to retrieve values.

Note: It's difficult to demonstrate the functionality since you need a running QApplication to make it work, and you can't do that in the interactive interpreter. The examples that follow are contrived outputs that you would see if it were possible to do that. For a real example, see the demo included in the package.

lineEdit = QtGui.QLineEdit()
config.add_handler('text', lineEdit)

checkbox = QtGui.QCheckBox('active')
config.add_handler('active', checkbox)

Demo of config setting with widgets #2

The values of the widgets are automatically set to the pre-set defaults. Note that if we hadn't pre-set a default value the reverse would happen, and the default would be set to the value in the widget. This allows you to define the defaults in either way.

Next we'll change the value of both widgets.

We can read out the values of the widgets via the ConfigManager using the standard get interface rather than using the widget-specific access functions.

config.get('text')
>> 'hello'

config.get('active')
>> True

We can also update the widgets via the ConfigManager using set.

config.set('text', 'new value')
config.set('active', False)

Demo of config setting with widgets #2

Mapping

Sometimes you want to display a different value in a widget than you store in the configuration. The most obvious example would be in a combo box where you want to list nice descriptive names, but want to store short names or numbers in the configuration.

To enable this, PyQtConfig allows a mapper to be defined when attaching a widget to a config. Mappers are provided as a tuple of two functions, set and get, that each perform the conversion required when setting and getting the value from the widget. To simplify map creation, however, you can also specify the mapping as a dict, and PyQtConfig will create the necessary lambdas behind the scenes.

CHOICE_A = 1
CHOICE_B = 2
CHOICE_C = 3
CHOICE_D = 4

map_dict = {
    'Choice A': CHOICE_A,
    'Choice B': CHOICE_B,
    'Choice C': CHOICE_C,
    'Choice D': CHOICE_D,
}

config.set_default('combo', CHOICE_C)
config.get('combo')
>> 3

comboBox = QtGui.QComboBox()
comboBox.addItems( map_dict.keys() )
config.add_handler('combo', comboBox, mapper=map_dict)

Demo of config setting with widgets #2

Note how the config is set to 3 (the value of CHOICE_C) but displays "Choice C" as text.
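
Turning a mapping dict into the (set, get) pair is straightforward; here is a simplified sketch of the idea (illustrative, not PyQtConfig's actual internals):

```python
def mapper_from_dict(map_dict):
    """Build (set, get) functions from a {display text: stored value} dict."""
    inverse = {v: k for k, v in map_dict.items()}
    set_fn = lambda stored: inverse[stored]     # stored value -> display text
    get_fn = lambda display: map_dict[display]  # display text -> stored value
    return set_fn, get_fn
```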

Saving and loading data

QSettingsManager uses a QSettings object as a config store and so the saving of configuration is automatic through the Qt APIs. However, if you're using ConfigManager you will need another approach to load and save your settings (note that these functions are also available in QSettingsManager if you want them).

The simplest access is to output the stored data as a dict using as_dict().

config.as_dict()

This dict contains all values in the internal dictionary, with defaults used where values are not set. You can take this dict and set the defaults on a new ConfigManager to persist state.

config2 = ConfigManager()
config2.set_defaults( config.as_dict() )

config2.get('combo')
>> 3

You can also export and import data as XML. The two functions for handling XML import take an ElementTree root element and search for config settings under Config/ConfigSetting. This allows you to use PyQtConfig to write config into an XML file without worrying about the format.

import xml.etree.ElementTree as et

config.set('combo', CHOICE_D)

root = et.Element("MyXML")
root = config.getXMLConfig( root )

config2.setXMLConfig(root)
config2.get('combo')
>> 4

Finishing up

Hope you find PyQtConfig useful in your PyQt projects. Let me know if you have any comments, suggestions, bug reports or pull requests.

21 Nov 2014 8:17am GMT

Martin Fitzpatrick: PyQtConfig: A simple API for keeping your PyQt Widgets and config in sync

Introducing PyQtConfig: a simple API for handling, persisting and synchronising configuration within PyQt applications. This module was built initially as part of the Pathomx data analysis platform but spun out into a standalone module when it became clear it was quite useful.

This post gives a brief overview of the API features and use. It is still in development so suggestions, comments, bug reports and pull-requests are very welcome.

Demo of config setting with widgets #1

Features

Introduction

The core of the API is a ConfigManager instance that holds configuration settings (either as a Python dict, or a QSettings instance) and provides standard methods to get and set values.

Configuration parameters can have Qt widgets attached as handlers. Once attached the widget and the configuration value will be kept in sync. Setting the value on the ConfigManager will update any attached widgets and changes to the value on the widget will be reflected immmediately in the ConfigManager. Qt signals are emitted on each update.

Default values can be set and will be returned transparently if a parameter remains unset. The current state of config can be saved and reloaded via XML or exported to a flat dict.

A small application has been included in PyQtConfig to demonstrate these features (interaction with widgets requires a running QApplication). Go to the pyqtconfig install folder and run it with:

python -m pyqtconfig.demo

Demo of config setting with widgets #2

Demo of config setting with widgets #3

Demo of config setting with widgets #4

Demo of config setting with widgets #4

Simple usage (dictionary)

To store your settings you need to create a ConfigManager instance. This consists of a settings dictionary, a default settings dictionary and a number of helper functions to handle setting, getting and other functions.

from pyqtconfig import ConfigManager

config = ConfigManager()

config.set_defaults({
    'number': 13,
    'text': 'hello',
    'array': ['1','2'],
    'active': True,    
})

Before values are set the default value will be returned when queried.

config.get('number')
13

config.set('number', 42)
config.get('number')
42

Simple usage (QSettings)

The QSettingsManager provides exactly the same API as the standard QConfigManager, the only difference is in the storage of values.

from pyqtconfig import QSettingsManager

settings = QSettingsManager()

settings.set('number', 42)
settings.set('text', "bla")
settings.set('array', ["a", "b"])
settings.set('active', True)

settings.get('number')
>> 42

Note: On some platforms, versions of Qt, or Qt APIs QSettings will return strings for all values which can lead to complicated code and breakage. However, PyQtConfig is smart enough to use the type of the config parameter in defaults to auto-convert returned values.

However, you do not have to set defaults manually. As of v0.7 default values are auto-set when attaching widgets (handlers) to the config manager if they're not already set.

From this point on we'll be referring to the ConfigManager class only, but all features work identically in QSettingsManager.

Adding widget handlers

So far we could have achieved the same thing with a standard Python dict/QSettings object. The real usefulness of PyQtConfig is in the ability to interact with QWidgets maintaining synchronisation between widgets and internal config, and providing a simple standard interface to retrieve values.

Note: It's difficult to demonstrate the functionality since you need a running QApplication to make it work, and you can't do that in the interactive interpreter. The examples that follow are contrived outputs that you would see if it were possible to do that. For a real example, see the demo included in the package.

lineEdit = QtGui.QLineEdit()
config.add_handler('text', lineEdit)

checkbox = QtGui.QCheckBox('active')
config.add_handler('active', checkbox)

Demo of config setting with widgets #2

The values of the widgets are automatically set to the pre-set defaults. Note that if we hadn't pre-set a default value the reverse would happen, and the default would be set to the value in the widget. This allows you to define the defaults in either way.

Next we'll change the value of both widgets.

We can read out the values of the widgets via the ConfigManager using the standard get interface rather than using the widget-specific access functions.

config.get('text')
>> 'hello'

config.get('active')
>> True

We can also update the widgets via the ConfigManager using set.

config.set('text', 'new value')
config.set('active', False)

Demo of config setting with widgets #2

Mapping

Sometimes you want to display a different value in a widget than you store in the configuration. The most obvious example would be in a combo box where you want to list nice descriptive names, but want to store short names or numbers in the configuration.

To enable this PyQtConfig allows a mapper to be defined when attaching a widget to a config. Mappers are provided as tuple of 2 functions set and get that each perform the conversion required when setting and getting the value from the widget. To simplify map creation however you can also specify the mapping as a dict and PyQtConfig will create the necessary lambdas behind the scenes.

CHOICE_A = 1
CHOICE_B = 2
CHOICE_C = 3
CHOICE_D = 4

map_dict = {
    'Choice A': CHOICE_A,
    'Choice B': CHOICE_B,
    'Choice C': CHOICE_C,
    'Choice D': CHOICE_D,
}

config.set_default('combo', CHOICE_C)
config.get('combo')
>> 3

comboBox = QtGui.QComboBox()
comboBox.addItems( map_dict.keys() )
config.add_handler('combo', comboBox, mapper=map_dict)

Demo of config setting with widgets #2

Note how the config is set to 3 (the value of CHOICE_C) but displays "Choice C" as text.

Saving and loading data

QSettingsManager uses a QSettings object as a config store and so the saving of configuration is automatic through the Qt APIs. However, if you're using ConfigManager you will need another approach to load and save your settings (note that these functions are also available in QSettingsManager if you want them).

The simplest access is to output the stored data as a dict using as_dict().

config.as_dict()

This dict contains all values in the internal dictionary, with defaults used where values are not set. You can take this dict and set the defaults on a new ConfigManager to persist state.

config2 = ConfigManager()
config2.set_defaults( config.as_dict() )

config2.get('combo')
>> 3
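Since as_dict() returns a plain dict, any serializer will do for getting settings to disk. A hedged sketch using JSON (the helper names are made up for illustration, and plain dicts stand in for the output of config.as_dict()):

```python
import json

def save_config(config_dict, path):
    # Persist the output of config.as_dict() as JSON.
    with open(path, 'w') as f:
        json.dump(config_dict, f)

def load_config(path):
    # Returns a dict suitable for passing to config.set_defaults(...).
    with open(path) as f:
        return json.load(f)

# Usage, with a plain dict standing in for config.as_dict():
save_config({'combo': 3, 'text': 'hello'}, 'settings.json')
print(load_config('settings.json')['combo'])  # 3
```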

You can also export and import data as XML. The two functions for handling XML take an ElementTree root element and write or read config settings under Config/ConfigSetting. This allows you to use PyQtConfig to write config into an XML file without worrying about the format.

import xml.etree.ElementTree as et

config.set('combo', CHOICE_D)

root = et.Element("MyXML")
root = config.getXMLConfig( root )

config2.setXMLConfig(root)
config2.get('combo')
>> 4

Finishing up

I hope you find PyQtConfig useful in your PyQt projects. Let me know if you have any comments, suggestions, bug reports, or pull requests.

21 Nov 2014 8:17am GMT

20 Nov 2014

Django Weblog: DSF board election 2015 results

We're happy to announce the winners of the DSF board elections 2015:

President: Russell Keith-Magee

Board members: Karen Tracey and Ola Sitarska

Secretary: Andy McKay

Treasurer: Stacey Haysler

Feel free to let us know if you'd like to know the full voting results.

The board of the Django Software Foundation (DSF) has just met and voted unanimously to confirm Russell, Karen, Ola, Andy and Stacey for their seats on the DSF board.

We want to thank all the other candidates again for their participation and hope to see them running at the next annual board election.

Ola is a co-organizer of many community events like DjangoCon Europe in Warsaw (DjangoCircus) and Django: Under The Hood and a co-founder of Django Girls where she helps run non-profit and free events for hundreds of women who want to learn about building the web. She cares about bringing more inclusivity into the Django community and making it easier for beginners to start using and developing Django. She works as a Django Developer at Potato in London.

Stacey brings an incredible amount of experience with corporate administration and financial management to the board. She has experience managing 501(c)(3) organizations, and has been involved in the organisation of Open Source conferences such as PgCon. She works as a Client Services Director at PostgreSQL Experts, an OSS consultancy.

We look forward to seeing the contributions that Ola and Stacey can bring to the DSF board as new members of the board.

The DSF would like to thank Adrian, who has literally been there since its inception back in 2008, for his many years of service. He has been an amazing influence and in many cases the driving force behind what Django is today.

Our thanks also to Joseph Kocherhans who has served as DSF Treasurer similarly since 2008. Joseph has done an amazing job over the last 6 years, and especially over the last 12 months.

20 Nov 2014 5:20pm GMT

PyTennessee: Keynote: Chris Fonnesbeck

Chris Fonnesbeck

Chris Fonnesbeck is an Assistant Professor in the Department of Biostatistics at the Vanderbilt University School of Medicine. He specializes in computational statistics, Bayesian methods, evidence-based medicine, and infectious disease modeling. Chris created and continues to contribute to PyMC, a Python package for Bayesian statistical modeling. He originally hails from Vancouver, BC and received his Ph.D. from the University of Georgia.

I'm super excited to have Chris keynote for us this year after his fantastic presentation last year. The CFP closed last night, and Chris finishes out the powerful quadrant of keynotes this year. Get your tickets soon!

20 Nov 2014 1:53pm GMT

ShiningPanda: Production Server Monitoring

Today requires.io introduces Site Monitoring, a security feature to check that the dependencies of the Python apps deployed on your production servers are up-to-date and secure.

Requires.io can already monitor the requirements of your projects from their source code. We expanded the API so that by adding two lines to your deployment scripts you can now check that your production apps are secure:

$ pip install -U requires.io
$ requires.io update-site -t $MY_SECRET_TOKEN -r $MY_REPO

Step-by-step Tutorial

In this small tutorial we will set up Site Monitoring for the project requires/myapp. This tutorial assumes that you already have an account on requires.io... If you don't, just register!

1. Plan upgrade

First ensure that your plan supports the Site Monitoring feature. This can be done from the settings page. In this case I need an Indie+ account.

Plan upgrade

2. Upgrade your deployment script

Go to the "monitoring" section of your settings. There you can just copy the necessary line. In this case it is:

requires.io update-site -t 6ade5eb345d8a79ad69a9f868021e0210522aceb -r REPO

The token is valid for the account requires, so for the project requires/myapp we just need to replace REPO by myapp.

requires.io update-site -t 62717a87341c8500d316bf52635a9e40ced04ace -r myapp

Monitoring

For an app deployed with a simple fabric script (using fabtools to handle the virtualenv), the resulting script would look similar to this:

with fabtools.python.virtualenv(virtualenv):
    run('pip install -r requirements.txt')
    run('pip install requires.io')
    run('requires.io update-site -t 6ade5eb345d8a79ad69a9f868021e0210522aceb -r myapp')

Adapt for your own deployment scripts!

3. Check the result

Just go to your requirements page on requires.io: you will see a new section called "Sites" in the right column.

Sites

Notifications

Notifications for the Site Monitoring feature are coming very soon... The requires.io notification system is being thoroughly updated, but it is not quite ready yet.

Heroku

We are currently testing the requires.io Heroku app. So if you want to hook requires.io to your heroku account to use the Site Monitoring feature, let us know!

20 Nov 2014 12:58pm GMT

PyPy Development: Tornado without a GIL on PyPy STM

This post is by Konstantin Lopuhin, who tried PyPy STM during the Warsaw sprint.

Python has a GIL, right? Not quite - PyPy STM is a python implementation without a GIL, so it can scale CPU-bound work to several cores. PyPy STM is developed by Armin Rigo and Remi Meier, and supported by community donations. You can read more about it in the docs.

Although PyPy STM is still a work in progress, in many cases it can already run CPU-bound code faster than regular PyPy when using multiple cores. Here we will see how to slightly modify the Tornado IO loop to use the transaction module. This module is described in the docs and is really simple to use - please see the example there. The event loop of Tornado, or of any other asynchronous web server, looks like this (with some simplifications):

while True:
    for callback in list(self._callbacks):
        self._run_callback(callback)
    event_pairs = self._impl.poll()
    self._events.update(event_pairs)
    while self._events:
        fd, events = self._events.popitem()
        handler = self._handlers[fd]
        self._handle_event(fd, handler, events)

We get IO events and run handlers for all of them; these handlers can also register new callbacks, which we run too. When using such a framework, it is very nice to have a guarantee that all handlers run serially, so you do not have to use any locks. This is an ideal case for the transaction module - it guarantees that things appear to run serially, so in user code we do not need any locks. We just need to change the code above to something like:

while True:
    for callback in list(self._callbacks):
        transaction.add(                # added
            self._run_callback, callback)
    transaction.run()                   # added
    event_pairs = self._impl.poll()
    self._events.update(event_pairs)
    while self._events:
        fd, events = self._events.popitem()
        handler = self._handlers[fd]
        transaction.add(                # added
            self._handle_event, fd, handler, events)
    transaction.run()                   # added

The actual commit is here - we had to extract a little function to run the callback.
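The transaction.add / transaction.run interface used above can be approximated with a purely serial sketch. This is only an illustration of the API shape, not the real PyPy module, which runs the queued callables concurrently while keeping serializable semantics:

```python
# Serial stand-in for the `transaction` module interface.
_pending = []

def add(f, *args, **kwargs):
    # Queue a callable; work added while running is drained in the same run().
    _pending.append((f, args, kwargs))

def run():
    # Drain the queue; callbacks may add() more work as they execute.
    while _pending:
        f, args, kwargs = _pending.pop(0)
        f(*args, **kwargs)

# Usage: queue some work, including work that queues more work.
results = []
add(results.append, 1)
add(lambda: add(results.append, 2))
run()
print(results)  # [1, 2]
```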

Part 1: a simple benchmark: primes

Now we need a simple benchmark; let's start with this - just calculate a list of primes up to the given number and return it as JSON:

def is_prime(n):
    for i in xrange(2, n):
        if n % i == 0:
            return False
    return True

class MainHandler(tornado.web.RequestHandler):
    def get(self, num):
        num = int(num)
        primes = [n for n in xrange(2, num + 1) if is_prime(n)]
        self.write({'primes': primes})

We can benchmark it with siege:

siege -c 50 -t 20s http://localhost:8888/10000

But this does not scale. The CPU load sits at 101-104%, and we handle 30% fewer requests per second. The reason for the slowdown is the STM overhead, which needs to keep track of all writes and reads in order to detect conflicts. And the reason for using only one core is, obviously, conflicts! Fortunately, we can see what these conflicts are if we run the code like this (here 4 is the number of cores to use):

PYPYSTM=stm.log ./primes.py 4

Then we can use print_stm_log.py to analyse this log. It lists the most expensive conflicts:

14.793s lost in aborts, 0.000s paused (1258x STM_CONTENTION_INEVITABLE)
File "/home/ubuntu/tornado-stm/tornado/tornado/httpserver.py", line 455, in __init__
    self._start_time = time.time()
File "/home/ubuntu/tornado-stm/tornado/tornado/httpserver.py", line 455, in __init__
    self._start_time = time.time()
...

There are only three kinds of conflicts; they are described in the stm source. Here we see that two threads call into an external function to get the current time, and we cannot roll back either of them, so one of them must wait until the other transaction finishes. For now we can hack around this by disabling the timing - it is only needed for internal profiling in Tornado.

If we do it, we get the following results (but see caveats below):

Impl.         req/s
PyPy 2.4       14.4
CPython 2.7     3.2
PyPy-STM 1      9.3
PyPy-STM 2     16.4
PyPy-STM 3     20.4
PyPy-STM 4     24.2

As we can see, in this benchmark PyPy STM using just two cores can beat regular PyPy! This is not linear scaling, there are still conflicts left, and this is a very simple example but still, it works!

But it's not that simple yet :)

First, these are best-case numbers after a long warmup (much longer than for regular PyPy). Second, it can sometimes crash (although removing old .pyc files fixes it). Third, the benchmark meta-parameters are also tuned.

Here we get relatively good results only when there are a lot of concurrent clients - as a result, a lot of requests pile up, the server cannot keep up with the load, and the transaction module is busy running these piled-up requests. If we decrease the number of concurrent clients, results get slightly worse. Another thing we can tune is how heavy each request is - again, if we ask for primes up to a lower number, then less time is spent doing calculations, more time is spent in Tornado, and results get much worse.

Besides the time.time() conflict described above, there are a lot of others. The bulk of time is lost in these two conflicts:

14.153s lost in aborts, 0.000s paused (270x STM_CONTENTION_INEVITABLE)
File "/home/ubuntu/tornado-stm/tornado/tornado/web.py", line 1082, in compute_etag
    hasher = hashlib.sha1()
File "/home/ubuntu/tornado-stm/tornado/tornado/web.py", line 1082, in compute_etag
    hasher = hashlib.sha1()

13.484s lost in aborts, 0.000s paused (130x STM_CONTENTION_WRITE_READ)
File "/home/ubuntu/pypy/lib_pypy/transaction.py", line 164, in _run_thread
    got_exception)

The first one is presumably a call into some C function from the stdlib, and we get the same conflict as for time.time() above, but it can be fixed on the PyPy side, as we can be sure that computing a sha1 digest is pure.

It is easy to hack around this one too by simply removing etag support, but if we do, performance gets much worse, only slightly faster than regular PyPy, with the top conflict being:

83.066s lost in aborts, 0.000s paused (459x STM_CONTENTION_WRITE_WRITE)
File "/home/arigo/hg/pypy/stmgc-c7/lib-python/2.7/_weakrefset.py", line 70, in __contains__
File "/home/arigo/hg/pypy/stmgc-c7/lib-python/2.7/_weakrefset.py", line 70, in __contains__

Comment by Armin: It is unclear why this happens so far. We'll investigate...

The second conflict (without etag tweaks) originates in the transaction module, from this piece of code:

while True:
    self._do_it(self._grab_next_thing_to_do(tloc_pending),
                got_exception)
    counter[0] += 1

Comment by Armin: This is a conflict in the transaction module itself; ideally, it shouldn't have any, but in order to do that we might need a little bit of support from RPython or C code. So this is pending improvement.

The Tornado modification used in this blog post is based on 3.2.dev2. As of now, the latest version is 4.0.2, and if we apply the same changes to it, we no longer get any scaling on this benchmark, and there are no conflicts that take any substantial time.

Comment by Armin: There are two possible reactions to a conflict. We can either abort one of the two threads, or (depending on the circumstances) just pause the current thread until the other one commits, after which the thread will likely be able to continue. The tool ``print_stm_log.py`` did not report conflicts that cause pauses. It has been fixed very recently. Chances are that on this test it would report long pauses and point to locations that cause them.

Part 2: a more interesting benchmark: A-star

Although we have seen that PyPy STM is not all moonlight and roses, it is interesting to see how it works on a more realistic application.

astar.py is a simple game where several players move on a map (represented as a list of lists of integers), build and destroy walls, and ask the server to give them shortest paths between two points using A-star search, adapted from an ActiveState recipe.

The benchmark bench_astar.py simulates players and tries to put the main load on the A-star search, but it also does some wall building and destruction. There are no locks around map modifications, as normal Tornado executes all callbacks serially, and we can keep this guarantee with the atomic blocks of PyPy STM. This is also an example of a program that is not trivial to scale to multiple cores with separate processes (assuming more interesting shared state and logic).

This benchmark is very noisy due to the randomness of client interactions (it may also not be linear), so only lower and upper bounds for the number of requests are reported.

Impl.         req/s
PyPy 2.4      5 .. 7
CPython 2.7   0.5 .. 0.9
PyPy-STM 1    2 .. 4
PyPy-STM 4    2 .. 6

Clearly this is a very bad benchmark, but we can still see that scaling is worse and the STM overhead is sometimes higher. The bulk of the conflicts come from the transaction module (we have seen it above):

91.655s lost in aborts, 0.000s paused (249x STM_CONTENTION_WRITE_READ)
File "/home/ubuntu/pypy/lib_pypy/transaction.py", line 164, in _run_thread
    got_exception)

Although it is definitely not ready for production use, you can already try to run things, report bugs, and see what is missing in user-facing tools and libraries.

Benchmarks setup:

20 Nov 2014 10:10am GMT

Stefan Behnel: lxml christmas funding

My bicycle was recently stolen and since I now have to get a new one, here's a proposal.

From today on until December 24th, I will divert all donations that I receive for my work on lxml to help in restoring my local mobility.

If you do not like this 'misuse', do not donate in this time frame. I do hope, however, that some of you like the idea that the money they give for something they value is used for something that is of value to the receiver.

All the best -- Stefan

20 Nov 2014 6:59am GMT

Vasudev Ram: Find if a Python string is an anagram of a palindrome

By Vasudev Ram

I saw this interesting thread on Hacker News some 10-odd days ago:

HN: Rust and Go

Apart from being generally of interest, it had a sub-thread that was about finding if a given string is an anagram of a palindrome. A few people replied in the thread, giving solutions in different languages, such as Scala, JavaScript, Go and Python.

Some of the Python solutions were already optimized to some extent (e.g. using collections.Counter and functools.partial - it was a thread about the merits of programming languages, after all), so I decided to write one or two simple or naive solutions instead, and then see if those could be optimized some, maybe differently from the solutions in the HN thread.
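For reference, a Counter-based solution like the ones in the thread might look like this (a sketch, not the exact code from HN): a string is an anagram of a palindrome if and only if at most one character occurs an odd number of times.

```python
from collections import Counter

def anagram_of_palindrome(s):
    # At most one character may appear an odd number of times.
    return sum(count % 2 for count in Counter(s).values()) <= 1

print(anagram_of_palindrome('mmadaaimdam'))  # True (anagram of 'madamimadam')
print(anagram_of_palindrome('fool'))         # False
```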

Here is one such simple solution to the problem, of finding out if a string is an anagram of a palindrome. I've named it iaop_01.py (for Is Anagram Of Palindrome, version 01). The solution includes a scramble() function, to make an anagram of a palindrome, so that we have input for the test, and a main function to run the rest of the code to exercise things, for both the case when the string is an anagram of a palindrome, and when it is not.

The logic I've used is this (in pseudocode, even though Python is executable pseudocode, ha ha):


For each character c in the string s:
    If c already occurs as a key in dict char_counts,
        increment its count (the value corresponding to the key),
    else set its count to 1.
After the loop, the char_counts dict will contain the counts
of all the characters in the string, keyed by character.
Then we check how many of those counts are odd.
If at most one count is odd, the string is an anagram of
a palindrome, else not.


And here is the Python code for iaop_01.py:


"""
Program to find out whether a string is an anagram of a palindrome.
Based on the question posed in this Hacker News thread:
https://news.ycombinator.com/item?id=8575589
"""

from random import shuffle

def anagram_of_palindrome(s):
char_counts = {}
for c in s:
char_counts[c] = char_counts.get(c, 0) + 1
odd_counts = 0
for v in char_counts.values():
if v % 2 == 1:
odd_counts += 1
return odd_counts = 1

def scramble(s):
lis = [ c for c in s ]
shuffle(lis)
return ''.join(lis)

def main():
# First, test with a list of strings which are anagrams of palindromes.
aops = ['a', 'bb', 'cdc', 'foof', 'madamimadam', 'ablewasiereisawelba']
for s in aops:
s2 = scramble(s)
print "{} is an anagram of palindrome ({}): {}".format(s2, \
s, anagram_of_palindrome(s2))
print
# Next, test with a list of strings which are not anagrams of palindromes.
not_aops = ['ab', 'bc', 'cde', 'fool', 'padamimadam']
for s in not_aops:
s2 = scramble(s)
print "{} is an anagram of a palindrome: {}".format(s2, \
anagram_of_palindrome(s2))

main()

And here is the output of running it:


$ python iaop_01.py
a is an anagram of palindrome (a): True
bb is an anagram of palindrome (bb): True
ccd is an anagram of palindrome (cdc): True
ffoo is an anagram of palindrome (foof): True
daadmamimma is an anagram of palindrome (madamimadam): True
srewaeaawbeilebials is an anagram of palindrome (ablewasiereisawelba): True

ba is an anagram of a palindrome: False
bc is an anagram of a palindrome: False
dec is an anagram of a palindrome: False
loof is an anagram of a palindrome: False
ampdaaiammd is an anagram of a palindrome: False
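For comparison, the collections.Counter approach mentioned near the top can be sketched like this (my own sketch, not the actual code from the HN thread; the function name is made up):

```python
from collections import Counter

def anagram_of_palindrome_counter(s):
    # Same rule as above: a string is an anagram of a palindrome
    # exactly when at most one character has an odd count.
    odd_counts = sum(count % 2 for count in Counter(s).values())
    return odd_counts <= 1

print(anagram_of_palindrome_counter('ccd'))   # True
print(anagram_of_palindrome_counter('loof'))  # False
```

Counter does the same per-character counting as the char_counts dict above, just in one step.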

One simple optimization that can be made is to add these two lines:


        if odd_counts > 1:
            return False

just after the line "odd_counts += 1". What that does is stop early if it finds that the number of characters with odd counts is greater than 1, even if there are many more counts left to check, since the rule can no longer be satisfied. If I think up more optimizations to the above solution, or any alternative solutions, I'll show them in a future post.

Update: Since it is on a related topic, you may also like to check out this other post I wrote a while ago: A simple text file indexing program in Python.

BTW, the two longer palindromes are lower-cased, scrunched-together versions of these well-known palindromes: "Madam, I'm Adam." and "Able was I ere I saw Elba" (attributed to Napoleon).

- Vasudev Ram - Dancing Bison Enterprises

Signup for news about products from me. Contact Page

Vasudev Ram

20 Nov 2014 3:05am GMT

10 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: King Willams Town Station

Yesterday morning I had to go to the station in KWT to pick up our reserved bus tickets for the Christmas holidays in Cape Town. The station itself has been without train service since December for cost reasons - but Translux and co., the long-distance bus companies, have their offices there.






© benste CC NC SA

10 Nov 2011 10:57am GMT

09 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein

Nobody is worried about something like this - by car you simply drive through, and in the city - near Gnobie - "no, that's only dangerous once the fire brigade is there" - 30 minutes later, on the way back, the fire brigade was there.




© benste CC NC SA

09 Nov 2011 8:25pm GMT

08 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Brai Party

Brai = a braai, i.e. a barbecue evening or the like.

They would like a technician to help patch up their SpeakOn / jack plug splitter connections...

The ladies - the "Mamas" of the settlement - at the official opening speech

Even though fewer people came than expected: loud music and lots of people ...

And of course a fire with real wood for grilling.

© benste CC NC SA

08 Nov 2011 2:30pm GMT

07 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Lumanyano Primary

One of our missions was bringing Katja's Linux Server back to her room. While doing that we saw her new decoration.

Björn and Simphiwe carried the PC to Katja's school


© benste CC NC SA

07 Nov 2011 2:00pm GMT

06 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nelisa Haircut

Today I went with Björn to Needs Camp to visit Katja's guest family for a special party. First of all we visited some friends of Nelisa - yeah, the one I'm working with in Quigney - Katja's guest father's sister - who gave her a haircut.

African women usually get their hair done by arranging extensions, not just by cutting some hair like Europeans.

In between she looked like this...

And then she was done - looks amazing considering the amount of hair she had last week - doesn't it?

© benste CC NC SA

06 Nov 2011 7:45pm GMT

05 Nov 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: My Saturday

Somehow it occurred to me today that I need to restructure my blog posts a bit - if I only ever report on new places, I'd have to be on a round trip. So here are a few things from my everyday life today.

First of all: Saturday counts as a day off, at least for us volunteers.

This weekend only Rommel and I are on the farm - Katja and Björn are now at their placements, and my housemates Kyle and Jonathan are at home in Grahamstown - as is Sipho, who lives in Dimbaza.
Robin, Rommel's wife, has been in Woodie Cape since Thursday to take care of a few things there.
Anyway, this morning we first treated ourselves to a shared Weetbix/muesli breakfast and then set off for East London. Two things were on the checklist - Vodacom and Ethienne (the estate agent) - plus dropping off the missing things at NeedsCamp on the way back.

Just after setting off on the dirt road, we realized that we hadn't packed the things for Needscamp and Ethienne - but we did have the pump for the water supply in the car.

So in East London we first drove to Farmerama - no, not the online game Farmville - but a shop with all kinds of things for a farm - in Berea, a northern part of town.

At Farmerama we got advice on a quick-release coupling that should make life with the pump easier, and we also dropped off a lighter pump for repair, so that it isn't always such a big effort when the water has run out again.

Fego Caffé is in the Hemmingways Mall; there we had to get the PIN and PUK of one of our data SIM cards, because some digits unfortunately got transposed when entering the PIN. Anyway, shops in South Africa store data as sensitive as a PUK - which in principle gives access to a locked phone.

In the cafe, Rommel then did a few online transactions with the 3G modem, which was working again - and which, by the way, now works perfectly in Ubuntu, my Linux system.

Meanwhile I went to 8ta to find out about their new deals, since we want to offer internet in some of Hilltop's centres. The picture shows the UMTS coverage in NeedsCamp, Katja's place. 8ta is a new phone provider from Telkom; after Vodafone bought Telkom's shares in Vodacom, they have to build up their network completely from scratch.
We decided to organize a free prepaid card to test, because who knows how accurate the coverage map above is... Before signing even the cheapest 24-month deal, you should know whether it works.

After that we went to Checkers in Vincent, looking for two hotplates for WoodyCape - R 129.00 each - so about 12€ for a two-part hotplate.
As you can see in the background, there is already Christmas decoration - at the beginning of November, and that in South Africa at a sunny, warm 25°C minimum.

For lunch we treated ourselves to a Pakistani curry takeaway - highly recommended!
Well, and after we got back an hour or so ago, I cleaned the fridge, which I had simply put outside this morning to defrost. Now it's clean again, too, and without a 3m-thick layer of ice...

Tomorrow... I'll report on that separately... but probably not until Monday, because then I'll be back in Quigney (East London) and have free internet.

© benste CC NC SA

05 Nov 2011 4:33pm GMT

31 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Sterkspruit Computer Center

Sterkspruit is one of Hilltop's Computer Centres in the far north of the Eastern Cape. On the trip to J'burg we used the opportunity to take a look at the centre.

Pupils in the big classroom


The Trainer


School in Countryside


Adult Class in the Afternoon


"Town"


© benste CC NC SA

31 Oct 2011 4:58pm GMT

Benedict Stein: Technical Issues

What do you do in an internet cafe if your ADSL and fax line have been discontinued before month's end? Well, my idea was sitting outside and eating some ice cream.
At least it's sunny and not as rainy as on the weekend.


© benste CC NC SA

31 Oct 2011 3:11pm GMT

30 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: Nellis Restaurant

For those who are traveling through Zastron - there is a very nice restaurant which serves delicious food at reasonable prices.
In addition they're selling home-made juices, jams and honey.




interior


home made specialities - the shop in the shop


the Bar


© benste CC NC SA

30 Oct 2011 4:47pm GMT

29 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: The way back from J'burg

Having the 10-12h trip from J'burg back to ELS, I was able to take a lot of pictures, including these different roadsides.

Plain Street


Orange River in its beginnings (near Lesotho)


Zastron Anglican Church


The Bridge in Between "Free State" and Eastern Cape next to Zastron


my new Background ;)


If you listen to GoogleMaps you'll end up traveling 50 km of gravel road - as it was just renewed we didn't have that many problems, and we saved 1h compared to going the official way with all its construction sites.




Freeway


getting dark


© benste CC NC SA

29 Oct 2011 4:23pm GMT

28 Oct 2011

feedPython Software Foundation | GSoC'11 Students

Benedict Stein: How does a construction site actually work?

Sure, some things may be different and much the same - but a road construction site, an everyday sight in Germany - how does that actually work in South Africa?

First of all - NO, no natives digging with their hands - even though more manpower is used here, they are busily working with technology.

A perfectly normal "federal highway"


and how it is being widened


lots and lots of trucks


because here one side is completely closed over a long stretch, resulting in a traffic-light setup with a wait of 45 minutes here


But at least they seem to be having fun ;) - as did we, since luckily we never had to wait longer than 10 min.

© benste CC NC SA

28 Oct 2011 4:20pm GMT