27 Apr 2015

feedPlanet Plone

Davide Moro: kotti_multilingual

kotti_multilingual is a package, still at an early stage of development, that adds multilingual capabilities to the Kotti CMS (http://kotti.pylonsproject.org/). It is neither feature complete nor API stable, so things will change!

You can find various fixes on this fork (a PR is still under review).

How it works

First of all, you should add a LanguageRoot folder to your site root. It is like the standard folderish Document, but with an editable Language attribute where you set the language code (e.g. en, it).

Once you have created two or more language folders (at the time of writing there is a problem with the translation link actions when you have more than two languages), you can add your contents and translate them.

The translate menu offers a "translate into italian" action from /en/welcome

If you click on the "translate into" action, it will create a translated instance at /it/welcome (you can rename it later to /it/benvenuto or whatever you like) and you'll be redirected to a regular edit form prefilled with the English field values.

Once saved, you can switch to the existing translation and navigate among languages as shown in the following picture:

You can switch to the existing English translation

kotti_multilingual supports the quite advanced concept of language-independent fields: fields whose values are inherited by translations and are editable only on the root translation.
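To make the concept concrete, here is a tiny, hypothetical sketch; the Content class and helper below are invented for illustration and are not kotti_multilingual's API:

```python
# Hypothetical illustration only: the mechanics in kotti_multilingual
# differ, but conceptually a language-independent field's value always
# comes from the root translation.

class Content(object):
    def __init__(self, **fields):
        self.__dict__.update(fields)

def sync_language_independent(root, translation, independent_fields):
    """Copy language-independent values from the root translation."""
    for name in independent_fields:
        setattr(translation, name, getattr(root, name))

en = Content(title=u"Welcome", course_sku=u"SKU-001")   # root translation
it = Content(title=u"Benvenuto", course_sku=u"stale")   # translation
sync_language_independent(en, it, ['course_sku'])
print(it.course_sku)  # prints SKU-001: the root's value wins
```

Language-dependent fields like title are left alone; only the listed fields are forced to mirror the root translation.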

You can see for example a select widget in edit mode on the root translation:

And the same field in readonly mode on the translated object:

See the kotti_multilingual.widget.i10n_widget_factory code for more info.
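As a rough mental model (this is a simplified, hypothetical sketch, not the package's actual code), such a factory wraps widget creation and flips the widget into readonly mode whenever the context is a translation, while add forms keep the normal edit widget:

```python
# Simplified, hypothetical sketch of a readonly-switching widget factory.

class Widget(object):
    readonly = False

def i10n_factory(make_widget, has_source):
    """Return a deferred-style factory: a fresh widget per call,
    readonly whenever the context is a translation (has a source)."""
    def factory(node, kw):
        widget = make_widget()  # new instance each time
        if not kw.get('addform') and has_source(kw):
            widget.readonly = True  # translated object: show read-only
        return widget
    return factory

factory = i10n_factory(Widget,
                       has_source=lambda kw: kw.get('source') is not None)
w = factory(None, {'source': object()})  # a translated object
print(w.readonly)  # prints True
```

Creating a fresh widget on every call, deferred-style, avoids sharing widget state between requests.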

Code examples



You can define language-independent fields via the type_info attribute on your resource:

class YourResource(...):

    type_info = Document.type_info.copy(
        language_independent_fields=['course_sku'],
    )


The edit form does not require changes; you just need to apply i10n_widget_factory to your language-independent fields. In some particular cases you'll need a slightly more complex setup, for example when you have to deal with non-null columns, required fields, and so on. In those cases you'll have to play with get_source (from kotti_multilingual.api) and put the widget in readonly mode. If you experience problems cloning select widgets, you may have to switch to deferred widgets (which create a new widget instance each time) and set the widget mode to readonly when needed.

from deform.widget import SelectWidget
from kotti_multilingual.widget import i10n_widget_factory
from kotti_multilingual.api import get_source

def deferred_widget(node, kw):
    request = kw['request']
    context = request.context
    # available_tags is assumed to be defined elsewhere
    widget = SelectWidget(values=available_tags, multiple=True)
    if get_source(context) is not None:
        # the context is a translation: show the field read-only
        widget.readonly = True
    return widget

import colander

class YourResourceSchema(colander.Schema):

    course_sku = colander.SchemaNode(
        colander.String(),  # assumed type; adjust to your column
        title=_(u"Course SKU"),
    )

class YourResourceAddForm(ImageAddForm):
    schema_factory = YourResourceSchema

    def get_bind_data(self):
        bind_data = super(YourResourceAddForm, self).get_bind_data()
        # tell the i10n_widget_factory that this is an add form,
        # so our widgets will be shown in edit mode as usual
        bind_data['addform'] = True
        return bind_data

Final thoughts

Yes, it is a very young package, but a very promising one!
It is not complete, and it probably never will be: SQLAlchemy is huge, and I don't think it is possible to cover every possible SQLAlchemy combination.

For example, this fork adds support for SQLAlchemy's association_proxy feature in combination with language-independent fields (in this case the copy_properties_blacklist attribute on your resource is your friend).

This is open source, dude: if you need something that is not yet covered, just fork kotti_multilingual, implement the missing parts and share with others!

Update 20150427

Merged and released a new version of kotti_multilingual on PyPI: https://pypi.python.org/pypi/kotti_multilingual/0.2a3.
Development now happens at https://github.com/Kotti/kotti_multilingual.

27 Apr 2015 4:16pm GMT

26 Apr 2015

feedPlanet Plone

Alex Clark: Plock Rocks

Plock Meme

Plock is a Plone installer for the pip-loving crowd. Plone is the ultimate open source enterprise CMS.

Understanding Plock

To understand Plock you must understand a few things:

  • The complexity of the Plone stack [1].
  • My desire to simplify, clarify and reduce-to-bare-elements everything I touch.
  • My willingness to mask complexity when eliminating it is not possible, despite the risk of contributing to it.

Pyramid author Chris McDonough [2] long ago made a comment to the effect of "Let's stop piling more crap on top of Plone", and that sentiment still resonates today. That's why, even though I love small and useful tools like Plock, it pains me to know what Plock is doing "under the hood" [7]. Nevertheless, I felt compelled to write it and make it work well, because not having it is even more painful.

Before I tell you what Plock is [8], let me briefly describe what Plone is.

What is Plone, really?

What is the complexity I mention above? Briefly, with as few loaded statements as possible:

  • Zope2 "application server". This is something you can pip install but the results will not be usable [3].

  • Zope2 add-ons AKA "products", most notably the Zope2 Content Management Framework (CMF). This is something you install on top of Zope2 (conceptually but not literally, pip install Products.CMFCore) that provides typical content management features e.g. personalization, workflow, cataloging, etc.

  • Zope3 technologies e.g. the Zope Component Architecture (ZCA). These are things that are included-or-integrated with Zope2 and Plone. [4]

  • Buildout technologies e.g. setuptools, console scripts, recipes, extensions, etc. You can't easily build Plone without them, so we may as well declare them as dependencies.

  • Plone technologies. Plone was originally known as a "skin for CMF" but has become much more than that.

    • Archetypes - Legacy content type framework.
    • Dexterity - Modern content type framework based on modern Zope concepts, e.g. "Reuse over reinvention".
    • Diazo - Modern theming engine based on XSLT that "maps Plone content to generic website themes."

In total, if you pip install Plone, over 200 Python packages are installed [5].

What is Plock, really?

OK now it's time to explain Plock. Plock is something:

  • you install from PyPI via pip install plock. "Pip installs packages. Plock installs Plone."
  • you use to install Plone without having to know about tarballs or Buildout.
  • you use to install Plone add-ons without having to know about Buildout.

In one sentence: Plock runs Buildout so you don't have to, at least initially.

First steps with Plock

Step #1

The first step with Plock [9] is that light bulb moment when you say to yourself: "I've heard that Plone is the ultimate open source enterprise CMS and I'd love to try it!" But you aren't willing to download a compressed archive and run the installer nor are you willing to pip install zc.buildout and figure the rest out for yourself. Enter Plock.

Step #2

The second step with Plock is knowing that it exists; then you can install it with pip install plock.

Step #3

The third step with Plock is using it to install Plone:

$ plock plone
Creating virtualenv... (plone)
Installing buildout...
Downloading installer (https://launchpad.net/plone/4.3/4.3.4/+download/Plone-4.3.4-r1-UnifiedInstaller.tgz)
Unpacking installer...
Unpacking cache...
Installing eggs...
Installing cmmi & dist...
Configuring cache...
Running buildout...
Done, now run:
  plone/bin/plone fg

Now Plock's work is done; visit http://localhost:8080 and you should see:

Plock Screen 1

Create a Plone site:

Plock Screen 2

Start using Plone:

Plock Screen 3

Next steps with Plock

Plock is more than just a way to install the latest stable version of Plone quickly and easily. It's also a way to find and install Plone add-ons quickly and easily, and a way to install almost any version of Plone including the upcoming Plone 5 release.

Installing Add-ons

Step #1

List all Plone-related packages on PyPI:

$ plock -l
1) 73.unlockItems                           - A small tool for unlocking web_dav locked item in a plone portal.
2) actionbar.panel                          - Provides a (old) facebook style action panel at the bottom of your  Plone site
3) adi.init                                 - Deletes Plone's default contents
4) adi.samplecontent                        - Deletes Plone's default content and adds some sample content
5) adi.slickstyle                           - A slick style for Plone portals, easily extendable for your own styles.
6) affinitic.simplecookiecuttr              - Basic integration of jquery.cookiecuttr.js for Plone 3
7) anthill.querytool                        - GUI for AdvancedQuery with some extensions - searching the easy way for Plone
8) anthill.skinner                          - Skinning for plone made easy
9) anz.dashboard                            - Plone netvibes like dashboard implementation
10) anz.ijabbar                              - Integrate iJab(an open source XMPP web chat client recommended by xmpp.org) to your plone site.
...
1,352) zopeskel.diazotheme                      - Paster templates for Plone Diazo theme package
1,353) zopeskel.niteoweb                        - Paster templates for standard NiteoWeb Plone projects
1,354) zopyx.ecardsng                           - An ECard implementation for Plone
1,355) zopyx.existdb                            - Plone-ExistDB integration
1,356) zopyx.ipsumplone                         - Lorem ipsum text and image demo content for Plone
1,357) zopyx.multieventcalendar                 - A multi-event calendar for Plone 3.X
1,358) zopyx.plone.cassandra                    - Show all assigned local roles within a subtree for any Plone 4 site
1,359) zopyx.plone.migration                    - Export/import scripts for migration Plone 2+3 to Plone 4
1,360) zopyx.smartprintng.plone                 - Produce & Publisher server integration with Plone
1,361) zopyx.together                           - Plone integration with together.js

Step #2


Plock currently only supports the initial creation of buildout.cfg, so if you have already run plock once and you want to install add-ons, you'll have to use -f to overwrite buildout.cfg.

Pick a few interesting things and install them:

$ plock plone -i "Products.PloneFormGen collective.plonetruegallery eea.facetednavigation"
Creating virtualenv... (plone)
Installing buildout...
Downloading installer (https://launchpad.net/plone/4.3/4.3.4/+download/Plone-4.3.4-r1-UnifiedInstaller.tgz)
Unpacking installer...
Unpacking cache...
Installing eggs...
Installing cmmi & dist...
Configuring cache...
Installing addons...
- https://pypi.python.org/pypi/Products.PloneFormGen
- https://pypi.python.org/pypi/collective.plonetruegallery
- https://pypi.python.org/pypi/eea.facetednavigation
Running buildout...
Done, now run:
  plone/bin/plone fg

Now you should see your add-ons available in Plone:

Plock Screen 6

Upgrading Plone

Step #1

Realize Plock has created a buildout.cfg file you can edit with a text editor.

Step #2

Also realize that Plock hosts Buildout configuration files, called "pins", which you can extend from your local buildout.cfg file [10].

Step #3

Edit your buildout.cfg file. Uncomment the first extends URL, changing:

extends =
#    https://raw.github.com/plock/pins/master/dev


to:

extends =
    https://raw.github.com/plock/pins/master/dev

Run Buildout and start Plone:

$ bin/buildout
$ bin/plone fg

Enjoy the Plone 5 running man:

Plock Screen 5


Cut and paste this into a terminal:

pip install plock; plock plone; plone/bin/plone fg

Now open http://localhost:8080 and happy Ploning.

Plock 0.3.0 is out! Install with pip install plock and report issues here: https://github.com/plock/plock/issues.


[1] Whether or not dealing with that complexity is "worth it" I will not address here. Suffice it to say people still use and care about Plone and with Plone 5 coming "real soon now" there is some excitement building.
[2] He probably made it many times, and rightfully so.
[3] You can create an "instance" after pip install zope2 with bin/mkzopeinstance but $INSTANCE/bin/runzope fails with ImportError: cannot import name _error_start probably due to mismanaged package versions. Maybe we can fix this with version specs included in a dummy package's setup.py?
[4] The integration is not seamless; an undisputed fact as far as I know.
[5] 235
[7] Creating and executing a buildout.cfg file for the end user. Buildout configuration files are written in INI-style text. Ideally the end user sees this file and says "Ah, now I understand how this works."
[8] I've also covered Plock before here.
[9] As someone familiar with Python and a UNIX shell already, because that is the market I like to serve.
[10] Yes, there is a security and/or reliability issue with doing this; you are clearly trading security and reliability for convenience.

26 Apr 2015 10:10pm GMT

Josh Johnson: DevOps Is Bullshit: Why One Programmer Doesn’t Do It Anymore

I've always been handy with hardware. I was one of "those kids" you hear about that keeps taking things apart just to see how they work - and driving their parents nuts in the process. When I was a teenager, I toyed with programming but didn't get serious with it until I decided I wanted to get into graphic design. I found out that you don't have to write HTML yourself, you can use programming to do it for you!

But I never stopped tinkering with hardware and systems. I used Linux and BSD on my desktop for years, built my LAMP stacks from source, and simulated the server environment when I couldn't: when I used Windows for work, and when I eventually adopted Apple as my primary platform, I first started with cross-compiled versions of the components, and eventually got into virtualization.

In the early days (maybe 10 years ago) there seemed to be few programmers like me - or if there were, they never took "operations" or "sysadmin" jobs, and neither did I. So there was always a natural divide. Aside from being a really nice guy who everyone likes, I had a particular rapport with my cohorts who specialized in systems.

I'm not sure exactly what it was. It may have been that I was always interested in the finer details of how a system works. It may have been my tendency to document things meticulously, or my interest in automation and risk reduction. It could have just been that I was willing to take the time to cross the divide and talk to them, even when I didn't need something. It may have just boiled down to the fact that when they were busy, I could do things myself, and I wanted to follow their standards, and get their guidance. It's hard to tell, even today, as my systems skills have developed beyond what they ever were before, but the rapport has continued on.

And then something happened. As my career progressed, I took on more responsibilities and did more and more systems work. This was partly because of the divide widening to some extent at one particular job, but mostly because, I could. Right around this time the "DevOps Revolution" was beginning.

Much like when I was a teenager and everyone needed a web site, suddenly everyone needed DevOps.

I didn't really know what it was. I was aware of the term, but being a smart person, I tend to ignore radical claims of great cultural shifts, especially in technology. In this stance, I find myself feeling a step or two behind at times, but it helps keep things in perspective. Over time, technology changes, but true radicalism is rare. Most often, a reinvention or revisiting of past ideas forms the basis for such claims. This "DevOps" thing was no different. Honestly, at the time it seemed like a smoke screen; a flashy way to save money for startups.

I got sick of tending systems - when you're doing it properly, it can be a daunting task. Dealing with storage, access control, backups, networking, high availability, maintenance, security, and all of the domain-specific aspects can easily become overwhelming. But worse, I was doing too much front-line support, which honestly, at the time was more important than the programming it was distracting me from. I love my users, and I see their success as my success. I didn't mind it, but the bigger problems I wanted to solve were consistently being held above my head, just out of my grasp. I could ignore my users or ignore my passion, and that was a saddening conundrum. I felt like all of the creativity I craved was gone, and I was being paid too much (or too little depending on if you think I was an over paid junior sysadmin or an under paid IT manager with no authority) to work under such tedium. So I changed jobs.

I made the mistake of letting my new employer decide where they wanted me to go in the engineering organization.

What I didn't know about this new company was that it had been under some cultural transition just prior to bringing me on board. Part of that culture shift was incorporating so-called "DevOps" into the mix. By fiat or force.

Because of my systems experience, I landed on the front line of that fight: the "DevOps Team". I wasn't happy.

But as I dug in, I saw some potential. We had the chance to really shore up the development practices, reduce risk in deployments, make the company more agile, and ultimately make more money.

We had edicts to make things happen, under the assumption that if we built it, the developers would embrace it. These things included continuous integration, migrating from subversion to git, building and maintaining code review tools, and maintaining the issue tracking system. We had other, less explicit responsibilities that became central to our work later on, including developer support, release management, and interfacing with the separate, segregated infrastructure department. This interaction was especially important, since we had no systems of our own, and we weren't allowed to administer any machines. We didn't have privileged access to any of the systems we needed to maintain for a long time.

With all the hand wringing and flashing of this "DevOps" term, I dug in and read about it, and what all the hubbub was about. I then realized something. What we were doing wasn't DevOps.

Then I realized something else. I was DevOps. I always had been. The culture was baked into the kind of developer I was. Putting me, and other devs with similar culture on a separate team, whether that was the "DevOps" team or the infrastructure team was a fundamental mistake.

The developers didn't come around. At one point someone told a teammate of mine that they thought we were "IT support". What needed to happen was for the developers to embrace the idea that they were capable of doing at least some systems things themselves, in a safe and secure manner, and for the infrastructure team to let them do it. But my team just sat there in the middle, doing what we could to keep the lights on and get the code out, but ultimately just wasting our time. Some developers started using AWS, with the promise of it being a temporary solution, but in a vacuum nonetheless. We were not having the impact that management wanted us to have.

My time at this particular company ended in a coup of sorts. This story is worthy of a separate blog post some day, when it hurts a little less to think about. But let's just say I was on the wrong side of the revolution and left as quickly as I could when it was over.

In my haste, I took another "DevOps" job. My manager there assured me that it would be a programming job first, and a systems job second. "We need more 'dev' in our 'devops'", he told me.

What happened was very similar to my previous "DevOps" experience, but more acute. Code, and often requirements, were thrown over the wall at the last minute. As it fell in our laps, we scrambled to make it work, and work properly, as it seemed no one would think of things like fail over or backups or protecting private information when they were making their plans. Plans made long ago, far away, and without our help.

This particular team was more automation focused. We had two people who were more "dev" than "ops", and the operations people were no slouches when it came to scripting or coding in their own right.

It was a perfect blend, and as a team we got along great and pulled off some miracles.

But ultimately, we were still isolated. We, and our managers tried to bridge the gap to no avail. Developers, frustrated with our sizable backlog, went over our heads to get access to our infrastructure and started doing it for themselves, often with little or no regard for our policies or practice. We would be tasked with cleaning up their mess when it was time for production deployment - typically in a major hurry after the deadline had passed.

The original team eventually evaporated. I was one of the last to leave, as new folks were brought into a remote office. I stuck it out for a lot of reasons: I was promised transfer to NYC, I had good healthcare, I loved my team. But ultimately what made me stick around was the hope, that kept getting rebuilt and dashed as management rotated in and out above me, that we could make it work.

I took the avenue of providing automated tools to let the developers have freedom to do as they pleased, yet we could ensure they were complying with company security guidelines and adhering to sane operations practices.

Sadly, politics and priorities kept my vision from coming to reality. It's OK, in hindsight, because so much more was broken about so-called "DevOps" at this particular company. I honestly don't think that it could have made that much of a difference.

Near the end of my tenure there, I tried to help some of the developers help themselves by sitting with them and working out how to deploy their code properly side-by-side. It was a great collaboration, but it fell short. It represented a tiny fraction of the developers we supported. Even with those really great developers finally interfacing with my team, it was too little, too late.

Another lesson learned: you can't force cultural change. It has to start from the bottom up, and it needs breathing room to grow.

I had one final "DevOps" experience before I put my foot down and made the personal declaration that "DevOps is bullshit", and I wasn't going to do it anymore.

Due to the titles I had taken, and the experiences of the last couple of years, I found myself in a predicament. I was seen by recruiters as a "DevOps guy" and not as a programmer. It didn't matter that I had 15 years of programming experience in several languages, or that I had focused on programming even in these so-called "DevOps" jobs. All that mattered was that, as a "DevOps Engineer" I could be easily packaged for a high-demand market.

I went along with the type casting for a couple of reasons. First, as I came to realize, I am DevOps - if anyone was going to come into a company and bridge the gap between operations and engineering, it'd be me. Even if the company had a divide, which every company I interviewed with had, I might be able to come on board and change things.

But there was a problem. At least at the companies I interviewed at, it seemed that "DevOps" really meant "operations and automation" (or more literally "AWS and configuration management"). The effect this had was devastating. The somewhat superficial nature of parts of my systems experience got in the way of landing some jobs I would have been great at. I was asked questions about things that had never been a problem for me in 15 years of building software and systems to support it, and being unable to answer, but happy to talk through the problem, would always end in a net loss.

When I would interview at the few programming jobs I could find or the recruiters would give me, they were never for languages I knew well. And even when they were, my lack of computer science jargon bit me - hard. I am an extremely capable software engineer, someone who learns quickly and hones skills with great agility. My expertise is practical, however, and it seemed that the questions that needed to be asked, that would have illustrated my skill, weren't. I think to them, I looked like a guy who was sick of systems that was playing up their past dabbling in software to change careers.

So it seemed "DevOps", this great revolution, and something that was baked into my very identity as a programmer, had left me in the dust.

I took one final "DevOps" job before I gave up. I was optimistic, since the company was growing fast and I liked everyone I met there. Sadly, it had the same separations, and was subject to the same problems. The developers, who I deeply respected, were doing their own thing, in a vacuum. My team was unnecessarily complicating everything and wasting huge amounts of time. Again, it was just "ops with automation" and nothing more.

So now let's get to the point of all of this. We understand why I might think "DevOps is bullshit", and why I might not want to do it anymore. But what does that really mean? How can my experiences help you, as a developer, as an operations person, or as a company with issues they feel "DevOps" could address?

Don't do DevOps. It's that simple. Apply the practices and technology that comprise what DevOps is to your development process, and stop putting up walls between different specialties.

A very wise man once said "If you have a DevOps team, you're doing it wrong". If you start doing that, stop it.

There is some nuance here, and my experience can help save you some trouble by identifying some of the common mistakes.

Let me stop for one moment and share another lesson I've learned: if it ain't broke, don't fix it.

If you have a working organization that seems old fashioned, leave it alone. It's possible to incorporate the tech, and even some of the cultural aspects of DevOps without radically changing how things work - it's just not DevOps anymore, so don't call it that. Be critical of your process and practices, kaizen and all that, but don't sacrifice what works just to join the cargo cult. You will waste money, and you will destroy morale. The pragmatic operations approach is the happiest one.

Beware of geeks bearing gifts.

So let's say you know why you want DevOps, and you're certain that the cultural shift is what's right for your organization. Everyone is excited about it. What might a proper "DevOps" team look like?

I can speak to this, because I currently work in one.

First, never call it "DevOps". It's just what you do as part of your job. Some days you're writing code, other days you're doing a deployment, or maintenance. Everyone shares all of those responsibilities equally.

People still have areas of experience and expertise. This isn't pushing people into a lukewarm, mediocre dilution of their skills - this is passionate people doing what they love. It's just that part of that is launching a server or writing a chef recipe or debugging a production issue.

As such you get a truly cross functional team. Where expertise differs, first, there's a level of respect and trust. So if someone knows more about a topic than someone else, they will likely be the authority on it. The rest of the team trusts them to steer the group in the right direction.

This means that you can hire operations people to join your team. Just don't give them exclusive responsibility for what they're best at - integrate them. The same goes for any "non-developer" skillset, be that design, project management or whatever.

Beyond that, everyone on the team has a thirst to develop new skills and look at their work in different ways. This is when the difference in expertise provides an opportunity to teach. Teaching brings us closer together and helps us all gain better understanding of what we're doing.

So that's what DevOps really is. You take a bunch of really skilled, passionate, talented people who don't have their heads shoved so far up their own asses that they can take the time to learn new things. People who see the success of the business as a combined responsibility that is equally shared. "That's not my job" is not something they are prone to saying, but they're happy to delegate or share a task if need be. You give them the infrastructure, and time (and encouragement doesn't hurt), to build things in a way that makes the most sense for their productivity, and the business, embracing that equal, shared sense of responsibility. Things like continuous integration and zero-downtime deployments just happen as a function of smart, passionate people working toward a shared goal.

It's an organic, culture-driven process. We may start doing continuous deployment, or utilize "the cloud" or treat our "code as infrastructure", but only if it makes sense. The developers are the operations people and the operations people are the developers. An application system is seen in a holistic manner and developed as a single unit. No one is compromising, we all get better as we all just fucking do it.

DevOps is indeed bullshit. What matters is good people working together without artificial boundaries. Tech is tech. It's not possible for everyone to share like this, but when it works, it's amazing - but is it really DevOps? I don't know, I don't do that anymore.

26 Apr 2015 7:38pm GMT

Josh Johnson: Raspberry Pi Build Environment In No Time At All

Leveraging PRoot and qemu, it's easy to configure Raspberry Pis and build and install packages without needing physical hardware. It's especially nice if you have to work with many disk images at once, create specialized distributions, reset passwords, or install/customize applications that aren't yet in the official repositories.

I've recently dug into building apps and doing fun things with the Raspberry Pi. With the recent release of the Raspberry Pi 2, it's an even more exciting platform. I've documented what I've been using to make my workflow more productive.

Table Of Contents


We'll use a Linux machine. Below are setup instructions for Ubuntu and Arch. I prefer Arch for desktop and personal work; I use Debian or Ubuntu for production deployments.

Arch Linux is a great "tinkerer's" distribution - if you haven't used it before it's worth checking out. It's great on the Raspberry Pi.

Debian and Ubuntu have some differences, but share the same base and use the same package management system (apt). I've included instructions for Ubuntu in particular, since it's the most similar to Raspbian, the default Raspberry Pi operating system, and folks may be more familiar with that environment.

Generally speaking, you'll need the handful of packages installed in the distribution-specific sections below.

Once the packages are installed, the commands and processes for building and working with Raspberry Pi boot disks are the same.

NOTE: we assume you have sudo installed and configured.

Virtual Machine Notes

If you're using an Apple (Mac OS X) computer or Windows, the easiest way to work with Linux systems is via virtualization. VirtualBox is available for most platforms and is easy to work with.

The VirtualBox documentation can walk you through the installation of VirtualBox and creating your first virtual machine.

When working with an SD card, you might want to follow the instructions for "Access to entire physical hard disk" to make the card accessible to the virtual machine. As an alternative, you could use a USB SD card reader and USB pass-through to present the entire USB device (rather than the disk) to the virtual machine, and let the virtual machine deal with mounting it.

Both of these approaches can be (very) error prone, but provide the most "native" way of working.

Instead I'd recommend installing guest additions. With guest additions installed in your virtual machine, you can use the shared folders feature of VirtualBox. This makes it easy to copy disk images created in your virtual machine to your host machine, and then you can use the standard instructions for Windows and Mac OS to copy the disks images to your SD cards.

Advanced Usage Note: Personally, my usual method of operations with VirtualBox VMs is to set up Samba in my virtual machine and share a folder over a host-only network (or I'll use bridged networking so I can connect to it from any machine on my LAN) - I'd consider this a more "advanced" approach but I've had more consistent results for day-to-day work than using guest additions or mounting host disks. However, for the simple task of just copying disk images back and forth to the virtual machine, the shared folders feature should suffice.

Arch Linux

We'll use pacman and wget to procure and install most of the tools we need:

$ sudo pacman -S dosfstools wget qemu unzip pv
$ wget http://static.proot.me/proot-x86_64
$ chmod +x proot-x86_64
$ sudo mv proot-x86_64 /usr/local/bin/proot

First, we install the following packages:

dosfstools: gives us the ability to create FAT filesystems, required for making a disk bootable on the Raspberry Pi.
wget: general-purpose file grabber - used for downloading installation files and PRoot.
qemu: QEMU emulator - allows us to run Raspberry Pi executables.
unzip: decompresses ZIP archives.
pv: pipeline middleware that shows a progress bar (we'll be using it to make copying disk images with dd a little easier for the impatient).

Then we download PRoot, make the file executable, and copy it to /usr/local/bin, a common location for global executables that everyone on a machine can access. This location is just a suggestion - to follow along with the examples in this article, you just need to put the proot executable somewhere on your $PATH.

Finally, we'll use an AUR package to obtain the kpartx tool.

kpartx wraps a handful of tasks required for creating loopback devices into a single action.

If you haven't used the AUR before, check out the documentation first for an overview of the process, and to install prerequisites.

$ wget https://aur.archlinux.org/packages/mu/multipath-tools/multipath-tools.tar.gz
$ tar -zxvf multipath-tools.tar.gz
$ cd multipath-tools
$ makepkg
$ sudo pacman -U multipath-tools-*.pkg.tar.xz


Ubuntu

Ubuntu Desktop comes with most of the tools we need (in particular, wget, the ability to mount DOS filesystems, and unzip). As such, the process of getting set up for using PRoot is a bit simpler compared to Arch.

Ubuntu uses apt-get for package installation.

$ sudo apt-get install qemu kpartx pv
$ wget http://static.proot.me/proot-x86_64
$ chmod +x proot-x86_64
$ sudo mv proot-x86_64 /usr/local/bin/proot

First, we install the following packages:

qemu: QEMU emulator - allows us to run Raspberry Pi executables.
kpartx: helper tool that wraps a handful of tasks required for creating loopback devices into a single action.
pv: pipeline middleware that shows a progress bar (we'll be using it to make copying disk images with dd a little easier for the impatient).

Then, we install PRoot by downloading the binary from proot.me, making it executable, and putting it somewhere on our $PATH, /usr/local/bin, making it available to all users on the system. This location is merely a suggestion, but putting the proot executable somewhere on your $PATH will make it easier to follow along with the examples below.

Working With A Disk Image

A disk (in the Raspberry Pi's case, we're talking about an SD card) is just an arrangement of blocks for data storage. On top of those blocks is a description of how files are represented in those blocks, or a filesystem (for more detail, see the Wikipedia articles on Disk Storage and File System).

Disks can exist in the physical world, or can be represented by a special file, called a disk image. We can download pre-made images with Raspbian already installed from the official Raspberry Pi downloads page.

$ wget http://downloads.raspberrypi.org/raspbian_latest -O raspbian_latest.img.zip
$ unzip raspbian_latest.img.zip
Archive:  raspbian_latest.img.zip
  inflating: 2015-02-16-raspbian-wheezy.img

Take note of the name of the img file - it will vary depending on the current release of Raspbian at the time.
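Since the name changes between releases, one small convenience (entirely optional) is to capture it in a shell variable instead of typing the dated filename each time:

```shell
# Grab the most recently modified .img file in the current directory so
# later commands can refer to $IMG instead of the dated filename.
IMG=$(ls -t ./*.img 2>/dev/null | head -n 1)
if [ -n "$IMG" ]; then
    echo "Using image: $IMG"
else
    echo "No .img file found in the current directory"
fi
```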

At this point we have a disk image we can mount by creating a loopback device. Once we have it mounted, we can use QEMU and PRoot to run commands within it without fully booting it.

We'll use kpartx to set up a loopback device for each partition in the disk image:

$ sudo kpartx -a -v 2015-02-16-raspbian-wheezy.img 
add map loop0p1 (254:0): 0 114688 linear /dev/loop0 8192
add map loop0p2 (254:1): 0 6277120 linear /dev/loop0 122880

The -a command line switch tells kpartx to create new loopback devices. The -v switch asks kpartx to be more verbose and print out what it's doing.

We can do a dry-run and inspect the disk image using the -l switch:

$ sudo kpartx -l 2015-02-16-raspbian-wheezy.img
loop0p1 : 0 114688 /dev/loop0 8192
loop0p2 : 0 6277120 /dev/loop0 122880
loop deleted : /dev/loop0

To be sure, we can inspect the partitions using fdisk -l:

$ sudo fdisk -l /dev/loop0

Disk /dev/loop0: 3.1 GiB, 3276800000 bytes, 6400000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device       Boot  Start     End Sectors Size Id Type
/dev/loop0p1        8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/loop0p2      122880 6399999 6277120   3G 83 Linux

We can also see them using lsblk:

$ lsblk
sda         8:0    0 14.9G  0 disk 
└─sda1      8:1    0 14.9G  0 part /
sdc         8:32   0 29.8G  0 disk 
└─sdc1      8:33   0 29.8G  0 part /run/media/jj/STEALTH
loop0       7:0    0  3.1G  0 loop 
├─loop0p1 254:0    0   56M  0 part 
└─loop0p2 254:1    0    3G  0 part 

Generally speaking, the first, smaller partition will be the boot partition, and the others will hold data. It's typical with RaspberryPi distributions to use a simple 2-partition scheme like this.

The new partitions will end up in /dev/mapper:

$ ls /dev/mapper
control  loop0p1  loop0p2

Now we can mount our partitions. We'll first make a couple of descriptive directories for mount points:

$ mkdir raspbian-boot raspbian-root
$ sudo mount /dev/mapper/loop0p1 raspbian-boot
$ sudo mount /dev/mapper/loop0p2 raspbian-root

At this point we can go to the next section where we will run PRoot and start doing things "inside" the disk image.

Working With An Existing Disk

We can use PRoot with an existing disk (SD card) as well. The first step is to insert the disk into your computer. Your operating system will likely mount it automatically. We also need to find out which device the disk is registered as.

lsblk can answer both questions for us:

$ lsblk
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part /run/media/jj/boot
└─sdb2   8:18   1    3G  0 part /run/media/jj/f24a4949-f4b2-4cad-a780-a138695079ec
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

On my system, the SD card I inserted (a Raspbian disk I pulled out of a Raspberry Pi) came up as /dev/sdb. It has two partitions, sdb1 and sdb2. Both partitions were automatically mounted, to /run/media/jj/boot and /run/media/jj/f24a4949-f4b2-4cad-a780-a138695079ec, respectively.

Typically, the first, smaller partition will be the boot partition. To verify this, we'll again use fdisk -l:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device     Boot  Start     End Sectors Size Id Type
/dev/sdb1         8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/sdb2       122880 6399999 6277120   3G 83 Linux

Here we see that /dev/sdb1 is 56 megabytes in size, and is of type "W95 FAT32 (LBA)". This is typically indicative of a Raspberry Pi boot partition, so /dev/sdb1 is our boot partition, and /dev/sdb2 is our root partition.

We can use the existing mounts that the operating system set up automatically for us, if we want, but it's a bit easier to un-mount the partitions and mount them somewhere more descriptive, like raspbian-boot and raspbian-root:

$ sudo umount /dev/sdb1 /dev/sdb2
$ mkdir -p raspbian-boot raspbian-root
$ sudo mount /dev/sdb1 raspbian-boot
$ sudo mount /dev/sdb2 raspbian-root

Note: The -p switch causes mkdir to ignore already-existing directories. We've added it here in case you were following along in the previous section and already have these directories handy.

A call to lsblk will confirm that we've mounted things as we expected:

$ lsblk
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part /run/media/jj/STEALTH/raspbian-boot
└─sdb2   8:18   1    3G  0 part /run/media/jj/STEALTH/raspbian-root
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

Now we can proceed to the next section, and run the same PRoot command to configure, compile and/or install things - but this time we'll be working directly on the SD card instead of inside of a disk image.

Basic Configuration/Package Installation

Now that we've got either a disk image or a physical disk mounted, we can run commands within those filesystems using PRoot.

NOTE: The following command line switches worked for me, but took some experimentation to figure out. Please take some time to read the PRoot documentation so you understand exactly what the switches mean.

We can run any command directly (like say, apt-get) but it's useful to be able to "log in" to the disk image (run a shell), and then perform our tasks:

$ sudo proot -q qemu-arm -S raspbian-root -b raspbian-boot:/boot /bin/bash

This mode of PRoot forces the root user inside of the disk image. The -q switch wraps every command in the qemu-arm emulator program, making it possible to run code compiled for the Raspberry Pi's ARM processor. The -S parameter sets the directory that will be the "root" - essentially, raspbian-root will map to /. -S also fakes the root user (id 0) and adds some protections for us in the event we've mixed in files from our host system that we don't want the disk image code to modify. -b splices in additional directories - we add the /boot partition, since that's where new kernel images and other boot-related files get installed. This isn't strictly necessary, but it's useful for system upgrades and for making changes to boot settings. Finally, we tell PRoot which command to run - in this case /bin/bash, the BASH shell.

Now that we're "in" the disk image, we can update and install new packages.

Since root is not a "normal" user in the default Raspbian installation, the path needs to be adjusted:

# export PATH=$PATH:/usr/sbin:/sbin:/bin:/usr/local/sbin

Now we can do the update/upgrade, and install any additional packages we might want (for example, the samba file sharing server):

# apt-get update
# apt-get upgrade
# apt-get install samba

Check out the man page for apt-get for full details (type man apt-get at a shell prompt).

You will likely see a lot of warnings and possibly errors when installing packages - these can usually be ignored, but make note of them - there may be some environmental tweaks that need to be made.

We can do almost anything in the PRoot environment that we could do logged into a running Raspberry Pi.

We can edit config.txt and change settings (for an explanation of the settings, see the documentation):

# vi /boot/config.txt

We can add a new user:

# adduser jj
Adding user `jj' ...
Adding new group `jj' (1004) ...
Adding new user `jj' (1001) with group `jj' ...
Creating home directory `/home/jj' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for jj
Enter the new value, or press ENTER for the default
        Full Name []: Josh Johnson
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 

We can grant a user sudo privileges (the default sudo configuration allows anyone in the sudo group to run commands as root via sudo):

# usermod -a -G sudo jj
# groups jj
jj : jj sudo

You can reset someone's password, or change the password of the default pi user:

# passwd pi
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

The possibilities here are nearly endless, with a few exceptions: anything that needs real hardware, a running kernel, or live system services (starting daemons, loading kernel modules, and so on) won't work under emulation.

Compiling For The RPi

Raspbian comes with most of the tools we'll need (in particular, the build-essential package). Let's build and install the nginx web server - a relatively easy-to-build package.

If you've never compiled software on Linux before, most (but not all!) source code packages are provided as tarballs, and include some scripts that help you build the software in what's known as the "configure, make, make install" (or CMMI) procedure.

Note: For a great explanation (with examples you can follow to build your own CMMI package), George Brocklehurst wrote an excellent article explaining the details behind CMMI called "The magic behind configure, make, make install".

First we'll need to obtain the nginx tarball:

# wget http://nginx.org/download/nginx-1.7.12.tar.gz
# tar -zxvf nginx-1.7.12.tar.gz

Next we'll look for a README or INSTALL file, to check for any extra build dependencies:

# cd nginx-1.7.12
# ls -l
total 660
-rw-r--r-- 1 jj   indiecity 249016 Apr  7 15:35 CHANGES
-rw-r--r-- 1 jj   indiecity 378885 Apr  7 15:35 CHANGES.ru
-rw-r--r-- 1 jj   indiecity   1397 Apr  7 15:35 LICENSE
-rw-r--r-- 1 root root          46 Apr 18 10:21 Makefile
-rw-r--r-- 1 jj   indiecity     49 Apr  7 15:35 README
drwxr-xr-x 6 jj   indiecity   4096 Apr 18 10:21 auto
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 conf
-rwxr-xr-x 1 jj   indiecity   2478 Apr  7 15:35 configure
drwxr-xr-x 4 jj   indiecity   4096 Apr 18 10:21 contrib
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 html
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 man
drwxr-xr-x 2 root root        4096 Apr 18 10:23 objs
drwxr-xr-x 8 jj   indiecity   4096 Apr 18 10:21 src
# view README

We'll note that, helpfully (cue eye roll), all nginx has put into the README is:

Documentation is available at http://nginx.org

A more direct link gives us a little more useful information. Scanning this, there aren't any obvious dependencies or features we want to add/enable, so we can proceed.

We can also find out which options are available by running ./configure --help.

Note: There are several configuration options that control where files are put when the compiled code is installed - they may be of use, in particular the standard --prefix. This can help segregate multiple versions of the same application on a system, for example if you need to install a newer/older version and already have one installed via the apt package. It is also useful for building self-contained directory structures that you can easily copy from one system to another.

Run ./configure and note any warnings or errors. There may be some modules or other things not found - that's typically OK, but it can help explain an eventual error toward the end of the configure script or during compilation:

# cd nginx-1.7.12
# ./configure
checking for PCRE library ... not found
checking for PCRE library in /usr/local/ ... not found
checking for PCRE library in /usr/include/pcre/ ... not found
checking for PCRE library in /usr/pkg/ ... not found
checking for PCRE library in /opt/local/ ... not found

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.

Whoa, we ran into a problem! For our use case (just showing off how to do a CMMI build in a PRoot environment) we probably don't need the rewrite module, so we can re-run ./configure with the --without-http_rewrite_module switch.

However, it's useful to understand how to track down dependencies like this, and rewriting is a pretty killer feature of any http server, so lets install the dependency.

The configure script mentions the "PCRE library". PCRE stands for "Perl Compatible Regular Expressions". Perl is a venerable scripting language with hard-core text-processing capabilities. It's particularly known for its regular expression support and syntax. The Perl regular expression syntax is so useful, in fact, that some folks built a library allowing other programmers to use it without having to use Perl itself.

Note: This information can be found by using your favorite search engine!

There are two ways libraries like PCRE are installed. The first, and easiest, is that a system package will be available with the library pre-compiled and ready to go. The second will require the same steps we're following to install nginx - download a tarball, extract, and configure, make, make install.

To find a package, you can use apt-cache search or aptitude search.

I prefer aptitude, since it will tell us what packages are already installed:

# aptitude search pcre
v   apertium-pcre2                                     -                                                             
p   cl-ppcre                                           - Portable Regular Express Library for Common Lisp            
p   clisp-module-pcre                                  - clisp module that adds libpcre support                      
p   gambas3-gb-pcre                                    - Gambas regexp component                                     
p   haskell-pcre-light-doc                             - transitional dummy package                                  
p   libghc-pcre-light-dev                              - Haskell library for Perl 5-compatible regular expressions   
v   libghc-pcre-light-dev-0.4-4f534                    -                                                             
p   libghc-pcre-light-doc                              - library documentation for pcre-light                        
p   libghc-pcre-light-prof                             - pcre-light library with profiling enabled                   
v   libghc-pcre-light-prof-0.4-4f534                   -                                                             
p   libghc-regex-pcre-dev                              - Perl-compatible regular expressions                         
v   libghc-regex-pcre-dev-0.94.2-49128                 -                                                             
p   libghc-regex-pcre-doc                              - Perl-compatible regular expressions; documentation          
p   libghc-regex-pcre-prof                             - Perl-compatible regular expressions; profiling libraries    
v   libghc-regex-pcre-prof-0.94.2-49128                -                                                             
p   libghc6-pcre-light-dev                             - transitional dummy package                                  
p   libghc6-pcre-light-doc                             - transitional dummy package                                  
p   libghc6-pcre-light-prof                            - transitional dummy package                                  
p   liblua5.1-rex-pcre-dev                             - Transitional package for lua-rex-pcre-dev                   
p   liblua5.1-rex-pcre0                                - Transitional package for lua-rex-pcre                       
p   libpcre++-dev                                      - C++ wrapper class for pcre (development)                    
p   libpcre++0                                         - C++ wrapper class for pcre (runtime)                        
p   libpcre-ocaml                                      - OCaml bindings for PCRE (runtime)                           
p   libpcre-ocaml-dev                                  - OCaml bindings for PCRE (Perl Compatible Regular Expression)
v   libpcre-ocaml-dev-werc3                            -                                                             
v   libpcre-ocaml-werc3                                -                                                             
i   libpcre3                                           - Perl 5 Compatible Regular Expression Library - runtime files
p   libpcre3-dbg                                       - Perl 5 Compatible Regular Expression Library - debug symbols
p   libpcre3-dev                                       - Perl 5 Compatible Regular Expression Library - development f
p   libpcrecpp0                                        - Perl 5 Compatible Regular Expression Library - C++ runtime f
p   lua-rex-pcre                                       - Perl regular expressions library for the Lua language       
p   lua-rex-pcre-dev                                   - PCRE development files for the Lua language                 
v   lua5.1-rex-pcre                                    -                                                             
v   lua5.1-rex-pcre-dev                                -                                                             
v   lua5.2-rex-pcre                                    -                                                             
v   lua5.2-rex-pcre-dev                                -                                                             
p   pcregrep                                           - grep utility that uses perl 5 compatible regexes.           
p   pike7.8-pcre                                       - PCRE module for Pike                                        
p   postfix-pcre                                       - PCRE map support for Postfix       

See man aptitude for full details, but the gist is that p means the package is available but not installed, v is a virtual package that points to other packages, and i means the package is installed.

What we want is a package with header files and modules we can compile against - these are usually named lib[SOMETHING]-dev.
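Rather than scanning the long listing by eye, grep can do the narrowing for us. A sketch, run here against a captured fragment of the listing above (in practice you'd pipe aptitude search pcre straight into grep):

```shell
# Filter a package listing down to development packages (names ending in
# -dev). The sample input below is a fragment of the aptitude output above.
listing='p   libpcre++-dev   - C++ wrapper class for pcre (development)
i   libpcre3        - Perl 5 Compatible Regular Expression Library - runtime files
p   libpcre3-dev    - Perl 5 Compatible Regular Expression Library - development f'
printf '%s\n' "$listing" | grep -E '[^ ]-dev[[:space:]]'
```

This prints just the libpcre++-dev and libpcre3-dev lines.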

Scanning the list, we see a package named libpcre3-dev - this is probably what we want; we can find out by installing it:

# apt-get install libpcre3-dev

Now we can re-run ./configure and see if it works:

# ./configure
checking for PCRE library ... found
Configuration summary
  + using system PCRE library
  + OpenSSL library is not used
  + using builtin md5 code
  + sha1 library is not found
  + using system zlib library

  nginx path prefix: "/usr/local/nginx"
  nginx binary file: "/usr/local/nginx/sbin/nginx"
  nginx configuration prefix: "/usr/local/nginx/conf"
  nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
  nginx pid file: "/usr/local/nginx/logs/nginx.pid"
  nginx error log file: "/usr/local/nginx/logs/error.log"
  nginx http access log file: "/usr/local/nginx/logs/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"
  nginx http uwsgi temporary files: "uwsgi_temp"
  nginx http scgi temporary files: "scgi_temp"

The library was found, the error is gone, and so now we can proceed with compilation.

To build nginx, we simply run make:

# make

If all goes well, then you can install it:

# make install

This same basic process can be used to build custom applications written in C/C++, to build applications that aren't yet in the package repository, or build applications with specific features or optimizations enabled that the standard packages might not have.

Using Apt To Install Build Dependencies

One more useful thing that apt-get can do for us: it can install the build dependencies for any given package in the repository. This allows us to get most, if not all, potentially missing dependencies to build a known application.

We could have started off with our nginx exploration by first installing its build dependencies:

# apt-get build-dep nginx

This won't solve every dependency issue, but it's a useful tool in getting all of your ducks in a row for building, especially for more complex things like desktop applications.

Be careful with build-dep - it can bring in a lot of things, some you may not really need. In our case it's not really a problem, but be aware of space limitations.

Umount and Clean Up

Once we've gotten our disk image configured as we like, we need to un-mount it.

First, we need to exit the bash shell we started with PRoot, then we'll call sync to ensure all data is flushed to any disks:

# exit
$ sync

Now we can un-mount the partitions (the command is the same whether we're using a disk image or an SD card):

$ sudo umount raspbian-root raspbian-boot

We can double-check that the disk is no longer mounted by calling mount without any additional parameters, or by using lsblk:

$ mount

With lsblk, we'll still see the disks (or loopback devices) present, but not mounted:

$ lsblk
sda         8:0    0 14.9G  0 disk 
└─sda1      8:1    0 14.9G  0 part /
sdc         8:32   0 29.8G  0 disk 
└─sdc1      8:33   0 29.8G  0 part /run/media/jj/STEALTH
loop0       7:0    0  3.1G  0 loop 
├─loop0p1 254:0    0   56M  0 part 
└─loop0p2 254:1    0    3G  0 part 

If we're using a disk image, we'll want to destroy the loopback devices. This is accomplished with kpartx -d:

$ sudo kpartx -d 2015-02-16-raspbian-wheezy.img

We can verify that it's gone using lsblk again:

$ lsblk
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

At this point we can write the disk image to an SD card, or eject the SD card and insert it into a Raspberry Pi.

Writing a Disk Image to an SD Card

We'll use the dd command, which writes raw blocks of data from one block device to another, to copy the disk image we made into an SD card.

NOTE: The SD card you use will be COMPLETELY erased. Proceed with caution.

First, insert the SD card into your computer (or card reader, etc). Depending on your system, it may be automatically mounted. We can find out the device name and if its mounted using lsblk:

$ lsblk
sda      8:0    0  14.9G  0 disk 
└─sda1   8:1    0  14.9G  0 part /
sdb      8:16   1  14.9G  0 disk 
├─sdb1   8:17   1 114.3M  0 part 
├─sdb2   8:18   1     1K  0 part 
└─sdb3   8:19   1    32M  0 part /run/media/jj/SETTINGS
sdc      8:32   0  29.8G  0 disk 
└─sdc1   8:33   0  29.8G  0 part /run/media/jj/STEALTH

We can see the new disk came up as sdb. It has three partitions, sdb1, sdb2, and sdb3. Looking at the MOUNTPOINT column, we can tell that my operating system auto-mounted sdb3 into the /run/media/jj/SETTINGS directory.

Note: The partition layout may vary depending on what was on the SD card before you inserted it. My SD card had a fresh copy of NOOBS that hadn't yet installed an OS.

We can double-check that sdb is the right disk with fdisk:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000cb53d

Device     Boot    Start      End  Sectors   Size Id Type
/dev/sdb1           8192   242187   233996 114.3M  e W95 FAT16 (LBA)
/dev/sdb2         245760 31225855 30980096  14.8G 85 Linux extended
/dev/sdb3       31225856 31291391    65536    32M 83 Linux

fdisk tells us that this is a 16GB drive. The exact amount cited by some drive manufacturers is not in "real" gigabytes (a power of 2[*]), but in billions of bytes - note the byte count: 16,021,192,704.
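The difference is easy to check with shell arithmetic, using the byte count fdisk reported:

```shell
# 16,021,192,704 bytes is 16 "marketing" gigabytes (10^9 bytes each) but
# only about 14.9 real gibibytes (2^30 bytes each) - which is why fdisk
# reports the card as 14.9 GiB.
bytes=16021192704
echo "decimal GB: $((bytes / 1000000000))"          # prints 16
echo "GiB x 1000: $((bytes * 1000 / 1073741824))"   # prints 14920, i.e. ~14.9 GiB
```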

We can see the three partitions, and what format they are in. The small FAT filesystem is a good indication that this is a bootable Raspberry Pi disk.

With a fresh SD card, the call to fdisk may look more like this:

Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdb1        8192 31291391 31283200 14.9G  c W95 FAT32 (LBA)

Most SD cards are pre-formatted with a single partition containing a FAT32 filesystem.

It's important to be able to differentiate between your system drives and the target for copying over your disk image - if you point dd at the wrong place, you can destroy important things, like your operating system!
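One extra safeguard is to ask lsblk explicitly for the RM column, which flags removable devices (a suggestion; column names as in util-linux's lsblk):

```shell
# RM=1 marks removable devices (SD card readers, USB sticks); internal
# system drives show RM=0. The output depends on your hardware.
lsblk -o NAME,SIZE,RM,TYPE,MOUNTPOINT
```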

Now that we're sure that /dev/sdb is our SD card, we can proceed.

Since lsblk indicated that at least one of the partitions was mounted (sdb3), we will first need to un-mount it:

$ sudo umount /dev/sdb3

Now we can verify it's indeed not mounted:

$ lsblk
sda      8:0    0  14.9G  0 disk 
└─sda1   8:1    0  14.9G  0 part /
sdb      8:16   1  14.9G  0 disk 
├─sdb1   8:17   1 114.3M  0 part 
├─sdb2   8:18   1     1K  0 part 
└─sdb3   8:19   1    32M  0 part 
sdc      8:32   0  29.8G  0 disk 
└─sdc1   8:33   0  29.8G  0 part /run/media/jj/STEALTH

And copy the disk image:

$ sudo dd if=2015-02-16-raspbian-wheezy.img of=/dev/sdb bs=4M
781+1 records in
781+1 records out
3276800000 bytes (3.3 GB) copied, 318.934 s, 10.3 MB/s

This will take some time, and dd gives no output until it's finished. Be patient.

dd has a fairly simple interface. The if option indicates the in file, or the disk (or disk image in our case) that is being copied. The of option sets the out file, or the disk to write to. bs sets the block size, which indicates how big of a piece of data to write at a time.

The bs value can be tweaked to get faster or more reliable performance in various situations - we're using 4M (four megabytes) as recommended by raspberrypi.org. The larger the value, the faster dd will run, but there are physical limits to what your system can handle, so it's best to stick with the recommended value.
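As a sanity check, the "781+1 records" figure in dd's output above falls straight out of this block size; a sketch of the arithmetic:

```shell
# 3,276,800,000 bytes (the image size) divided into 4 MiB blocks gives
# 781 full blocks plus one partial block - hence dd's "781+1 records".
image_bytes=3276800000
block_size=$((4 * 1024 * 1024))
echo "$((image_bytes / block_size)) full blocks"       # prints 781
echo "$((image_bytes % block_size)) bytes left over"   # prints 1048576 (the "+1" partial block)
```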

As we saw, dd gives us no output until it's completed. This is kind of an annoying thing about dd, but it can be remedied. The easiest way is to install a tool called pv and split the command - pv acts as an intermediary between two commands and displays a progress bar as data moves along. dd can read and write data to a pipe (details). So we can use two dd commands, put pv in the middle, and get a nice progress bar.

Here's the same copy as before, but using pv:

Note: Here we're using sh -c to wrap the command pipeline in quotes. This allows us to provide the entire pipeline as a single unit. If we didn't, the shell would interpret the first pipe in the pipeline as part of the call to sudo, and not what we want to run as root.

$ ls -l 2015-02-16-raspbian-wheezy.img 
-rw-r--r-- 1 jj jj 3276800000 Apr 18 07:58 2015-02-16-raspbian-wheezy.img
$ sudo sh -c "dd if=2015-02-16-raspbian-wheezy.img bs=4M | pv --size=3276800000 | dd of=/dev/sdb"
 613MiB 0:02:31 [4.22MiB/s] [===========>                                                      ] 19% ETA 0:10:04

We pass pv a --size argument to give it an idea of how big the file is, so it can provide accurate progress. We found out the size of our disk image using ls -l, which shows the size of the file in bytes.
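Instead of copying the byte count out of ls -l by hand, stat can print it directly (GNU coreutils; demonstrated here on a throwaway file - substitute your image):

```shell
# stat -c %s prints a file's size in bytes - exactly the value pv's
# --size option expects, e.g. pv --size=$(stat -c %s your.img).
printf 'hello' > demo.img   # a 5-byte stand-in for a real disk image
stat -c %s demo.img         # prints 5
rm demo.img
```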

If we run lsblk again, we'll see the different partition arrangement now on sdb:

$ lsblk
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part 
└─sdb2   8:18   1    3G  0 part 
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

fdisk -l gives a bit more detail:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device     Boot  Start     End Sectors Size Id Type
/dev/sdb1         8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/sdb2       122880 6399999 6277120   3G 83 Linux

Now we can sync the disks:

$ sync

At this point we have an SD card we can put into a Raspberry Pi and boot.

[*] (1GB = 1 byte * 1024 (kilobyte) * 1024 (megabyte) * 1024, or 1,073,741,824 bytes)
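
That footnote arithmetic is easy to check in a shell:

```shell
# 1 GB (strictly, 1 GiB) = 1024 * 1024 * 1024 bytes
echo $((1024 * 1024 * 1024))   # 1073741824
```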

Extra Credit: Making our own disk image

Some distributions, such as Arch, don't distribute disk images, but instead distribute tarballs of files. They let you set up the disk however you want, then copy the files over to install the operating system.

We can create our own disk images using fallocate, and then use fdisk or parted (or if you prefer a GUI, gparted) to partition the disk.

We'll create a disk image for the latest Arch Linux ARM distribution for the Raspberry Pi 2.

Note: You must create the disk image file on a compatible filesystem, such as ext4, for this to work. This is the default system disk filesystem for most modern Linux distributions, including Arch and Ubuntu, so for most people this isn't a problem. The implication is that this will not work on, say, an external hard drive formatted in an incompatible format, such as FAT32.

First we'll create an 8 gigabyte empty disk image:

$ fallocate -l 8G arch-latest-rpi2.img
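
fallocate returns almost instantly because it only reserves blocks rather than writing zeroes; you can confirm the apparent size afterwards:

```shell
# Create the empty image and verify its size in bytes (8 * 1024^3).
fallocate -l 8G arch-latest-rpi2.img
stat -c %s arch-latest-rpi2.img   # 8589934592
```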

We'll use fdisk to partition the disk. We need two partitions. The first will be 100 megabytes, formatted as FAT32. We'll need to set the partition's system id to correspond to FAT32 with LBA so that the Raspberry Pi's boot firmware knows how to read it.

Note: I've had trouble finding documentation as to exactly why FAT + LBA is required; the assumption is that it has something to do with how the ARM processor loads the operating system in the earliest boot stages. If anyone knows more detail or can point me to the documentation about this, it would be greatly appreciated!

The offset for the partition will be 2048 blocks - this is the default that fdisk will suggest (and what the Arch installation instructions tell us to do).

Note: This seems to work well; however, there is some confusion about partition alignment. The Raspbian disk images use an 8192-block offset, and there is a lot of information available explaining how bad alignment can cause quicker SD card degradation and hurt write performance. I'm still trying to figure out the best way to address this; it's another area where community help would be appreciated :) Here are a few links that dig into the issue: http://wiki.laptop.org/go/How_to_Damage_a_FLASH_Storage_Device, http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/, http://3gfp.com/wp/2014/07/formatting-sd-cards-for-speed-and-lifetime/.

The second partition will be ext4, and use the rest of the available disk space.

We'll start fdisk and get the initial prompt. No changes will be saved until we instruct fdisk to do so:

$ fdisk arch-latest-rpi2.img
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x152a22d4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):

Most of the information here is just telling us that this is a block device with no partitions. If you need help, as indicated, you can type m:

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

First, we need to create a new disk partition table. This is done by entering o:

Command (m for help): o
Building a new DOS disklabel with disk identifier 0xa8e8538a.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Next, we'll create our first primary partition, the boot partition, at 2048 blocks offset, 100MB in size.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-16777215, default 2048): 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215): +100M

By using the relative size +100M, we save ourselves the trouble of doing the math to figure out how many sectors we need.

We can see what we have so far, by using the p command:

Command (m for help): p

Disk arch-latest-rpi2.img: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa8e8538a

               Device Boot      Start         End      Blocks   Id  System
arch-latest-rpi2.img1            2048      206847      102400   83  Linux

Next, we need to set the partition type (system id) by entering t:

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1 80  Old Minix
Hex code (type L to list codes): c
Changed system type of partition 1 to c (W95 FAT32 (LBA))

After the t command, we opted to enter L to see the list of possible codes. We then see that W95 FAT32 (LBA) corresponds to the code c.

Now we can make our second primary partition for data storage, utilizing the rest of the disk. We again use the n command:

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (206848-16777215, default 206848):
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-16777215, default 16777215):
Using default value 16777215

We accepted the defaults for all of the prompts.

Now, entering p again, we can see the state of the partition table:

Command (m for help): p

Disk arch-latest-rpi2.img: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa8e8538a

               Device Boot      Start         End      Blocks   Id  System
arch-latest-rpi2.img1            2048      206847      102400    c  W95 FAT32 (LBA)
arch-latest-rpi2.img2          206848    16777215     8285184   83  Linux

Now we can write out the table (w), which will exit fdisk:

Command (m for help): w
The partition table has been altered!

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.

Syncing disks.
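
The interactive session above can also be scripted. Modern util-linux ships sfdisk, which accepts a declarative description of the partition table on stdin; the numbers below mirror the fdisk session (a sketch, assuming sfdisk from util-linux 2.26 or later):

```shell
# Two partitions: a 100 MiB (204800-sector) FAT32-LBA boot partition
# starting at sector 2048, and a Linux partition filling the rest.
sfdisk arch-latest-rpi2.img <<'EOF'
label: dos
unit: sectors

start=2048,   size=204800, type=c
start=206848, type=83
EOF
```

This is handy when you want to build images repeatably from a script rather than by hand.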

Now we need to format the partitions. We'll use kpartx to create block devices for us that we can format:

$ sudo kpartx -av arch-latest-rpi2.img
add map loop0p1 (252:0): 0 204800 linear /dev/loop0 2048
add map loop0p2 (252:1): 0 16570368 linear /dev/loop0 206848

As we saw earlier, the devices will show up in /dev/mapper, as /dev/mapper/loop0p1 and /dev/mapper/loop0p2.

First we'll format the boot partition, loop0p1, as FAT32:

$ sudo mkfs.vfat /dev/mapper/loop0p1
mkfs.fat 3.0.26 (2014-03-07)
unable to get drive geometry, using default 255/63

Next the data partition, in ext4 format:

$ sudo mkfs.ext4 /dev/mapper/loop0p2
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
518144 inodes, 2071296 blocks
103564 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2122317824
64 block groups
32768 blocks per group, 32768 fragments per group
8096 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

At this point we just need to mount the new filesystems, download the installation tarball and use tar to extract and copy the files:

First we'll grab the installation files:

$ wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz

Next we'll mount the new filesystems:

$ mkdir arch-root arch-boot
$ sudo mount /dev/mapper/loop0p1 arch-boot
$ sudo mount /dev/mapper/loop0p2 arch-root
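
If kpartx isn't available, mount can attach the partitions straight from the image using loop offsets; each offset is just the partition's start sector from the fdisk output multiplied by the 512-byte sector size (a sketch using the sector numbers from above):

```shell
# Byte offsets: start sector * 512 bytes per sector.
BOOT_OFFSET=$((2048 * 512))      # 1048576
ROOT_OFFSET=$((206848 * 512))    # 105906176
sudo mount -o loop,offset=$BOOT_OFFSET arch-latest-rpi2.img arch-boot
sudo mount -o loop,offset=$ROOT_OFFSET arch-latest-rpi2.img arch-root
```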

And finally populate the disk image with the system files, and move the boot directory to the boot partition:

$ sudo tar -xpf ArchLinuxARM-rpi-2-latest.tar.gz -C arch-root
$ sync
$ sudo mv arch-root/boot/* arch-boot/

We're using a few somewhat less common parameters for tar. Typically we'll use -xvf to tell tar to extract (-x), be verbose (-v) and specify the file (-f). We've added the -p switch to preserve permissions. This is especially important with system files.

The -C switch tells tar to change to the arch-root directory before extraction, effectively extracting the files directly to the root filesystem.

You may see some warnings about extended header keywords; these can be ignored.
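
You can see the effect of -p and -C on a throwaway archive (the file names here are made up for the demo):

```shell
# Pack a file with a distinctive mode, extract it elsewhere with -p and -C,
# and confirm the permissions survived the round trip.
mkdir -p src dst
touch src/hello && chmod 750 src/hello
tar -cf demo.tar -C src hello
tar -xpf demo.tar -C dst
stat -c %a dst/hello   # 750
```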

Now we just need to clean up (unmount, remove the loopback devs):

$ sudo umount arch-root arch-boot
$ sudo kpartx -d arch-latest-rpi2.img

Now we've got our own Arch disk image we can distribute, or copy onto SD cards. We can also mount it on the loopback and use PRoot to further configure it, as we did above with Raspbian.

Where To Go From Here

With this basic workflow, we can do all sorts of interesting things. A few ideas:

So there we go - now you can customize the Raspberry Pi operating system with impunity, on your favorite workstation or laptop machine. If you have any questions, corrections, or suggestions for ways to streamline the process, please leave a comment!

26 Apr 2015 7:35pm GMT

25 Apr 2015


Gil Forcada: 1st WPOD recap

Last Friday the first WPOD happened around the globe.

Here is what was done on the Berlin gathering:

Hopefully other participants around the globe will share their achievements as well!

Are you already planning the next WPOD? Mark it on your calendar: May 29th.

Happy hacking!

25 Apr 2015 8:45pm GMT

24 Apr 2015


Abstract Technology: Plone Open Garden 2015 - the right chance for discussing and planning

Reaching its 9th edition, the PLOG is growing and getting wiser. This was the year of the Strategic Summit with 5 days of talking, planning and preparing the future of Plone in the next 5 years.

24 Apr 2015 3:34pm GMT

20 Apr 2015


eGenix: eGenix mxODBC Zope DA 2.2.1 GA


The eGenix mxODBC Zope DA allows you to easily connect your Plone CMS or Zope installation to just about any database backend on the market today, giving you the reliability of the commercially supported eGenix product mxODBC and the flexibility of the ODBC standard as middle-tier architecture.

The mxODBC Zope Database Adapter is highly portable, just like Zope itself and provides a high performance interface to all your ODBC data sources, using a single well-supported interface on Windows, Linux, Mac OS X, FreeBSD and other platforms.

This makes it ideal for deployment in ZEO Clusters and Zope hosting environments where stability and high performance are a top priority, establishing an excellent basis and scalable solution for your Plone CMS.

>>> mxODBC Zope DA Product Page


The 2.2.1 release of our mxODBC Zope/Plone Database Adapter product is a patch level release of the popular ODBC database interface for Plone and Zope. It includes these enhancements and fixes:

Feature Updates:

Driver Compatibility Enhancements:


mxODBC Zope DA 2.2.0 was released on 2014-12-11. Please see the mxODBC Zope DA 2.2.0 release announcement for all the new features we have added.

The complete list of changes is available on the mxODBC Zope DA changelog page.


Users are encouraged to upgrade to this latest mxODBC Plone/Zope Database Adapter release to benefit from the new features and updated ODBC driver support. We have taken special care not to introduce backwards incompatible changes, making the upgrade experience as smooth as possible.

Customers who have purchased mxODBC Plone/Zope DA 2.2 licenses can continue to use their licenses with this patch level release.

For major and minor upgrade purchases, we will give out 20% discount coupons going from mxODBC Zope DA 1.x to 2.2 and 50% coupons for upgrades from mxODBC Zope DA 2.x to 2.2. After upgrade, use of the original license from which you upgraded is no longer permitted. Patch level upgrades (e.g. 2.2.0 to 2.2.1) are always free of charge.

Please contact the eGenix.com Sales Team with your existing license serials for details for an upgrade discount coupon.

If you want to try the new release before purchase, you can request 30-day evaluation licenses by visiting our web-site or writing to sales@egenix.com, stating your name (or the name of the company) and the number of eval licenses that you need.


Please visit the eGenix mxODBC Zope DA product page for downloads, instructions on installation and documentation of the packages.

If you want to try the package, please jump straight to the download instructions.

Fully functional evaluation licenses for the mxODBC Zope DA are available free of charge.


Commercial support for this product is available directly from eGenix.com.

Please see the support section of our website for details.

More Information

For more information on eGenix mxODBC Zope DA, licensing and download instructions, please write to sales@egenix.com.

Enjoy !

Marc-Andre Lemburg, eGenix.com

20 Apr 2015 8:00am GMT

17 Apr 2015


Gil Forcada: Testing pull requests and multi-repository changes

At Plone we use Continuous Integration (with Jenkins) to keep us aware of any change made on any of our 200+ packages that breaks the tests.

This makes it feasible to spot where the problem was introduced, find the changes that were made, and warn the developer who made them.

A more elaborate step by step description is on our CI rules, be sure to read them!

At the same time though, we use GitHub pull requests to make code reviews easy and have a way to provide feedback and context to let everyone give their opinion/comment on changes that can be non-trivial.

Sadly, pull requests and multi-repository changes cannot be tested directly with Jenkins. Yes, there is a plugin for that, but our CI setup is a bit (note the emphasis) more complex than that…

Fortunately we came up with two solutions (it's Plone after all, we cannot have only one solution :D)

Single pull requests

If you have a pull request on a core package that you want to test follow these steps:

  1. Get the pull request URL
  2. Go to http://jenkins.plone.org and login with your GitHub user
  3. Go to pull-request job: http://jenkins.plone.org/job/pull-request (you can see it always at the front page of jenkins.plone.org)
  4. Click on the Build with Parameters link on the left column
  5. Paste the pull request URL from step 1
  6. Click on Build

Once it runs you will get an email with the result of the build. If everything is green you can add a comment on the pull request to let everyone know that tests pass.

Note: it's your responsibility to run that job for your pull request. Changes made on other packages after the tests started running can still make your pull request fail later on, so even if the pull-request job is green, be sure to keep an eye on the main Jenkins jobs as soon as you merge your pull request.

Example: Remove Products.CMFDefault from Products.CMFPlone (by @tomgross)

Pull request: https://github.com/plone/Products.CMFPlone/pull/438

Jenkins job: http://jenkins.plone.org/job/pull-request/80

Multi-repository changes

When the changes, like massive renamings for example, are spread over more than one repository the approach taken before doesn't work, as the pull-request Jenkins job is only able to change one single repository.

But we, the CI/testing team, have another ace up our sleeve: create a buildout configuration in the plips folder on buildout.coredev (branch 5.0) that lists all your repositories and which branch should be used, see some examples.

Once you have that cfg file, you can politely ask the CI team to create a Jenkins job for you. They will do a lot of clever things to make that work on Jenkins (a 3-line change plus following some instructions), and sooner or later a new Jenkins job will show up on the PLIPs tab on jenkins.plone.org.

Rinse and repeat!

Extra bonus and caveats

All Jenkins jobs, be it the pull-request job, the PLIP jobs, and of course the core jobs, are configured to send an email to the one that triggered the job, so don't worry about how long they take to run; once they are finished you will get notified.

The caveat is that the above is only valid for changes targeting Plone 5. We didn't put in the extra effort to make sure it also works for pull requests (either single or multi-repository) aimed at Plone 4.3. It's quite trivial to add for multi-repositories, and a bit more work to make it run on single pull requests, but still feasible if there are enough people asking for it.

Hopefully the amount of pull requests for Plone 4.3 will decrease more and more as Plone 5 is getting closer and closer one pull request at a time :)

Now there's no excuse for pushing changes to master without having tested them first on jenkins.plone.org!

Proposals for improvements and suggestions are always welcome on the issue tracker of the jenkins.plone.org GitHub repository. Help with handling all those issues is, of course, also welcome!

Happy testing!

17 Apr 2015 8:40pm GMT

eGenix: Python Meeting Düsseldorf - 2015-04-29

The following announces a regional user group meeting in Düsseldorf, Germany.


The next Python Meeting Düsseldorf will take place on:

Wednesday, 29 April 2015, 6:00 pm
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


Talks already registered

Johannes Spielmann
"Messaging Protocols in Python"

Matthias Endler
"The State of PyPy"

Charlie Clark
"The Art of Doing Nothing: An Introduction to Profiling"
"et_xmlfile: Writing Valid XML with a Low Memory Footprint"

Marc-Andre Lemburg
"SSL in Python 2.7.9"
"Parsing a YouTube Feed with feedparser"

Further talks are still welcome. If interested, please get in touch at info@pyddf.de.

Start time and location

We meet at 6:00 pm at the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the entrance of the underground car park of the Düsseldorfer Arcaden.

Above the entrance there is a large "Schwimm'in Bilk" logo. Once through the door, turn immediately left to the two elevators and ride up to the 2nd floor. The entrance to Room 1 is directly on the left as you step out of the elevator.

>>> Entrance in Google Street View


The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, gives a good overview of past presentations.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:


The Python Meeting Düsseldorf uses a mix of open space and lightning talks, although our lightning can occasionally last a good 20 minutes :-)

Lightning talks can be registered in advance or brought up spontaneously during the meeting. A projector with XGA resolution is available. Please bring your slides as a PDF on a USB stick.

To register a lightning talk, just send an informal email to info@pyddf.de


The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access and drinks incur costs, we ask attendees for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all attendees to bring the amount in cash.


Since we only have seats for about 20 people, we ask you to register by email. This does not create any obligation; it simply makes our planning easier.

To register for the meeting, just send an informal email to info@pyddf.de

More information

More information can be found on the meeting's website:


Have fun!

Marc-Andre Lemburg, eGenix.com

17 Apr 2015 8:00am GMT

16 Apr 2015


Mikko Ohtamaa: Inspecting thread dumps of hung Python processes and test runs

Sometimes, moderately complex Python applications with several threads tend to hang on exit. The application refuses to quit and just idles there waiting for something. Often this is because when the process tries to exit, Python waits for every thread that is still alive to terminate, unless the thread's Thread.daemon attribute is set to true.

In the past, it used to be a little painful to figure out which thread and function cause the application to hang, but no longer! Since Python 3.3, the CPython interpreter comes with a faulthandler module. faulthandler is a mechanism to tell the Python interpreter to dump the stack trace of every thread upon receiving an external UNIX signal.

Here is an example of how to figure out why a unit test run, executed with pytest, does not exit cleanly. All tests finish, but the test suite refuses to quit.

First we run the tests, setting the special environment variable PYTHONFAULTHANDLER to tell the CPython interpreter to activate the fault handler. This environment variable works regardless of how your Python application is started (you run the python command, you run a script directly, etc.)


Then the test suite finishes, printing out the last dot… but nothing happens, despite our ferocious sipping of coffee.


How to proceed:

Press CTRL-Z to suspend the current active process in UNIX shell.

Use the following command to send SIGABRT signal to the suspended process.

kill -SIGABRT %1

Voilà - you get the traceback. In this case, it instantly tells us SQLAlchemy is waiting for something and most likely the database has deadlocked due to open conflicting transactions.

Fatal Python error: Aborted

Thread 0x0000000103538000 (most recent call first):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socketserver.py", line 154 in _eintr_retry
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socketserver.py", line 236 in serve_forever
  File "/Users/mikko/code/trees/pyramid_web20/pyramid_web20/tests/functional.py", line 40 in run
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 921 in _bootstrap_inner
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 889 in _bootstrap

Current thread 0x00007fff75128310 (most recent call first):
  File "/Users/mikko/code/trees/venv/lib/python3.4/site-packages/SQLAlchemy-1.0.0b5-py3.4-macosx-10.9-x86_64.egg/sqlalchemy/engine/default.py", line 442 in do_execute
  File "/Users/mikko/code/trees/venv/lib/python3.4/site-packages/SQLAlchemy-1.0.0b5-py3.4-macosx-10.9-x86_64.egg/sqlalchemy/sql/schema.py", line 3638 in drop_all
  File "/Users/mikko/code/trees/pyramid_web20/pyramid_web20/tests/conftest.py", line 124 in teardown
  File "/Users/mikko/code/trees/venv/lib/python3.4/site-packages/_pytest/config.py", line 41 in main
  File "/Users/mikko/code/trees/venv/bin/py.test", line 9 in <module>
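
You don't need a hung process to try faulthandler; it can be enabled the same way and asked for a dump explicitly (a sketch, assuming python3 is on the PATH):

```shell
# PYTHONFAULTHANDLER=1 enables faulthandler at interpreter startup;
# dump_traceback() then prints every thread's stack to stderr on demand.
PYTHONFAULTHANDLER=1 python3 -c '
import faulthandler, sys
print(faulthandler.is_enabled())              # True
faulthandler.dump_traceback(file=sys.stderr)  # stack of every thread
'
```

The same dump is what you get when the interpreter receives SIGABRT, as in the pytest session above.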


16 Apr 2015 8:39pm GMT

13 Apr 2015


Gil Forcada: WPOD

WPOD: World Plone Office Day

During this year's PLOG I presented the simple idea behind WPOD:

That's it, as simple and as easy as it can be.

Mark on your calendars every last Friday of the month, you have an appointment with the Plone community to bring Plone one step further ahead!

WPOD in Berlin

Preparations are being made for the first ever WPOD in Berlin that my company will gladly host. If you happen to be around Berlin, please contact me telling that you are coming!

The location is Hegelplatz 1, 10117 Berlin.

You are welcome all day long. Plonistas are expected to come during the morning, enjoy some lunch together, and hack away until late afternoon.

WPOD around the world

If you happen to not be in Berlin, fear not: an IRC channel will be available (#sprint on irc.freenode.net) so you can share the same experience as in any other city hosting a WPOD.

Credit where credit's due

That's not an original idea of mine, nor is it something that I thought of alone; Philip Bauer already tried to present the very same idea at last year's Plone Conference in Bristol.

Later on, during the Bicycle sprint in Berlin, Stefania and I discussed it and defined the format it will start with.

Thanks to them for their bright minds and clever ideas!


Within 10 days the first WPOD will happen, Plonistas will hack/plan/design away and Plone will get better and better.

I hope that other cities and individuals alike will start participating in WPOD; the more we are, the bolder the change we will make.

There are some plans to put all the relevant information regarding WPOD on plone.org, either on the current website, or even better on the newer plone.org that is in the making (watch here for tickets ready to be fixed by any of you!).

Happy hacking!

Update: a meetup has been created, please RSVP there.

13 Apr 2015 9:04pm GMT

10 Apr 2015


Maurits van Rees: PLOG Friday evening report-out

Report-out of today

  • Some cleanup tasks for the future identified, fifteen to twenty. Made list of benefits connected to each task. Like: pep8, remove skins usage from core Plone. Risks. Probably setup web page for people or companies to donate money to spend on cleanup, bit like Plone Intranet Consortium. Workers that work for, for example, half the normal wage. Because these are mostly boring tasks, that no one really wants to pick up, not very rewarding of its own. Individual, or group who organises itself. Sponsor can pay you as normal. Do not set up a big organisation. Trust each other.
  • Through the web customization, without being horrible. Looked at collective.jbot. Some security issues that need to be considered. That package needs Plone 5 support. ACE editor is useful. Resource folder to add. jbot folder in your diazo theme. Advertise as add-on, not in core, as the approved way to allow these kinds of hacks, with round-tripping. Maybe move to core in the end.
  • Increase number of Asian Canadians in Plone. :-) So: diversity meeting. Some are under represented. White males dominate the Plone scene, or at least at the PLOG. But there are some families here, which is very good. Non-native English speakers should feel welcome. At future events, we ask you to look at for newcomers. Connect new people with old-timers. Prioritize first-time speakers at events, help and encourage them. Expand range of talks, like how to run a small business, be a free-lancer. Simple things to make ourselves more attractive to new people.
  • Documentation. Explained how we do it.
  • Trainings. Three people are willing to give a training at the next conference. Fulvio, Fred, Philip. Beginner, integrator, developer. Master class maybe? Training the trainers. Enable new trainers, tips and tricks, how to set it up and market it. So: we are going to have a Plone Training Team, with me as lead. Increase visibility when people give trainings. Monthly hangouts.
  • Translations. Fixed lots of i18n issues. You can start to translate now for Plone 5. We need help figuring out how to extract messages from Javascript.
  • Communication with the community. Collection of activity to get into newsletter. Get teams to report regularly and consistently, also about help they may need. Teams should fill out a document about themselves on plone.org. New information in newsletter. Job openings. Recent launches. Contact Christina. Sponsorship. Social media plan, record upcoming events in a calendar. We like to avoid twisting your arm for information that should be shared with the outside world.
  • Mosaic is working in Plone 5. Want to do a release this weekend. Alpha release. Various things that do not work yet, but at least you can try it. It changes various things in the rendering engine, like with tiles. Philip: I was mind blown again and again when seeing it today, lost for words.
  • Release team. Commit to doing bugfix releases monthly. Let other people release packages. Write nicer combined changelog for each release, more welcoming, more exciting for new developers.
  • Plone RESTapi. Created package to use http verbs in the ZPublisher, delegating to browser views. plone.restapi can build on top of that.

General report-out of this week

  • Cleaning up bits of the backend, like portal skins, tools, and also simply pep8.
  • RESTapi, preparation for frontend.

A bit scary at the beginning of the week, complaining about what does not work, or all the big work that still needs to be done. But there is a plan for the long-term future, with sane steps for the short and middle term. So a rough roadmap is possible and totally compelling. More energy for people who contribute. We can be brave for the next five years, to see a brighter future.

Big cheer for everybody!

Tomorrow: overflow day. You can have more discussions. Or visit Sorrento or other bits of the surroundings. Paul will start compiling some documents in a kind of roadmap, and people are invited to help. Open space about Plone booth at Bilbao. Plone RESTapi.

Maurizio: the board really helped this year, with declaring this a Strategic Summit, and helping in organizing. Thank you all!


[Image by Fred van Dijk.]

10 Apr 2015 5:55pm GMT

Maurits van Rees: PLOG Friday morning talks

On the Trello board there is now a column for teams. Feel free to edit it. And add yourself to a team if you feel lonely and want to join the fun. :-)

Report-outs from yesterday:

Gil: Berlin Sprint

With Stefania we plan to organize a sprint in Berlin (not the one Guido mentioned above). Working on Mosaic. The coming weeks we will send a formal announcement. Middle of September this year.

Fred: Six Thinking Hats

See presentation at Trello.

This is about Six Thinking Hats, by Edward de Bono. Edward has thought a lot about thinking. Gotta love the meta approach. Creative, lateral, structured thinking, brainstorming, etcetera. Some say it is pseudo science. See for yourself.

Meetings can be discussions, about arguments, ending in 'yes, but...'. A suggestion, followed by a risk, an idea, an emotion, a fact, cause and effect, all mixed together. Familiar? We have seen it. You cannot compare a fact with an emotion. Six Thinking Hats is about parallel thinking: one thing at a time. First all think about ideas, without facts or emotions. Do not feed the egos. No showing off.

So what are those hats?

  • White: facts, numbers, what do or don't we know, which questions need asking, pretend you are a computer. Examples of data, research. No argumentation, no judgement. Need research, then store it for later. Somebody's facts can be another person's opinion; can you explain, based on your experience why something is 'for a fact' wrong?
  • Red: emotion, gut feeling. Fire, warmth, intuition. We don't know why we don't agree, but we just don't. You don't have to prove what you are feeling, or argue about it. Emotions are there; the red hat just lets them surface.
  • Black: risk, disadvantage. Critical thinking. Caution. Being critical is easy, a basic survival instinct to avoid getting eaten by a lion; you will not argue with that. It is important. Sit down with six optimists and pick a new framework... not so good.
  • Yellow: optimism, benefits. Look at the bright side. How can we do this. How will the future then look in one year. Proposals, suggestions, constructive thinking. No arguments, but do ground it with facts. Best case scenario.
  • Green: creative solutions. Edward de Bono has written twenty other books about this. Get new ideas. Brainstorming, no judgement. Thought experiments. Postpone judgement. Come up with at least five different alternatives, like five Javascript frameworks. Reverse the proposal to come up with a new proposal. Provocation: choose a random word (banana) and associate with the current proposal in mind.
  • Blue: control, meta. Blue sky. Overview. Helicopter view. Think about thinking. Meta. Organise the meeting. Which hat order do you start with in this meeting? Role of the chairman probably, or some other dedicated blue person. Observation.

Deal or no deal: if you wear the hat, stick to that thinking direction. Everybody wears the same hat at the same time. Do not say "you are too emotional", but say "you are wearing the red hat". It comes across as less hostile.

How do you use it? You can use it alone or in a group. Start without hats, but then separate the hats when you are stuck: you do not have to use it all the time. Limit the time per hat.

Why use it? It makes things easier. No mix-up of emotions. Think outside your own comfort zone: you may naturally be more black or more yellow. And of course shorter meetings.

The group leader watches the process and decides that people should now put on a specific color.

White hat: what about opinions presented as facts? Use "E-prime": English without the verb "to be". So not "no one is reading newsletters", but "in my experience, this does not happen." Start with yourself.

Let's try it! Now!

First try: Let's say Plone wants a new logo. There is a proposal. Discuss it now. Proposal is four squares of blue, yellow, red, green...

Second try: we support Python 3 at the end of 2016.

[Can't summarize these tries, but it was fun and interesting to do.]

Alexander Loechel: Patterns for working Project Teams

IT project management, team motivation. A novel by Tom DeMarco: The Deadline. In a fictional way it describes software projects and what can go wrong. Other books: Peopleware - Productive Projects and Teams. And Adrenaline Junkies and Template Zombies.

Conclusion on why so many IT projects fail: the major problems of our work are not so much technological as sociological in nature.

He makes lots of points, with patterns and anti-patterns.

My personal conclusion: Plone community intuitively does most of his points right. Keep calm and carry on.

Eric Steele: Hey, when is Plone version X going to be released?

I get this question all the time. It mostly takes so long because I am busy releasing Plone...

Check Jenkins, auto-checkouts, check changelog, etc. By the time I am through the long list of checks for the long list of to-be-released packages, the list has grown by at least ten...

By 2020, Plone will dominate PyPI with over 99 percent of the packages being for Plone, and our cyborgs will take over the world.

Nathan: Security Team

Some of the core people are on it. There is some fatigue on the team, because it is a lot of work when there really is a problem. If your company can help, that would be cool and smart. We need someone who knows Plone really well.

10 Apr 2015 9:48am GMT

09 Apr 2015


Maurits van Rees: PLOG Thursday RESTapi current status

RESTapi current status

Timo started with some proof of concept implementations. See https://github.com/plone/plone.restapi

If it would not work with, for example, the ZPublisher, that would be bad, so we should look into that. Let it support HTTP verbs like POST, GET, PUT, DELETE, instead of assuming a request is WebDAV whenever it is not POST or GET.
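The dispatch idea can be sketched in plain Python. This is not real ZPublisher code; the handler names and the dict-based lookup are invented for illustration. The point is that the verb selects the handler, and only WebDAV-specific requests fall through to the old machinery.

```python
# Hypothetical sketch of verb-based dispatch: the HTTP method picks
# the handler, instead of "not GET/POST means WebDAV".
# None of these names come from the actual ZPublisher.

def view_content(context):
    return ("view", context)

def create_content(context):
    return ("create", context)

def update_content(context):
    return ("update", context)

def delete_content(context):
    return ("delete", context)

def webdav_fallback(context):
    return ("webdav", context)

def get_handler(request_method):
    handlers = {
        "GET": view_content,
        "POST": create_content,
        "PUT": update_content,
        "DELETE": delete_content,
    }
    # Anything outside the REST verbs falls back to WebDAV handling.
    return handlers.get(request_method, webdav_fallback)
```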

Aren't people moving away from that, just using GET parameters? Staying close to REST seems best, Angular and other frameworks can handle it. Workflow actions will use POST anyway.

You will always transform stuff between the saved data and presented data, like saving a uuid and presenting a normal url. You save something and then may get back an object with different values.

Several levels of RESTfulness (the Richardson maturity model).

  1. Resources
  2. RPC calls
  3. HTTP verbs
  4. Hypermedia

If we only go for the second level, we could just use the json api. We should play around with the third level, to see if we can make it work.
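The difference between levels two and three can be shown with two made-up requests against a hypothetical site (the URLs are invented, not actual Plone endpoints):

```python
# Level 2, RPC style: the action is encoded in the URL,
# and everything tends to go through POST or GET.
rpc_call = ("POST", "/site/front-page/delete_document")

# Level 3, HTTP verbs: the URL names only the resource,
# and the verb carries the action.
rest_call = ("DELETE", "/site/front-page")
```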

There is a risk that we break WebDAV when we fix the ZPublisher. We may have to accept that. WebDAV works according to some, is buggy for others, or does not work at all. For WebDAV you could look at Accept headers, or discover WebDAV in some similar way.
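One way to discover WebDAV without breaking the verb-based dispatch is to check for the WebDAV-specific methods (the method names come from RFC 4918; the function itself is a sketch, not existing Plone code):

```python
# Methods defined by WebDAV (RFC 4918). A request using one of these
# can be routed to the WebDAV machinery; everything else goes to the
# REST views. This is a sketch, not actual ZPublisher logic.
WEBDAV_METHODS = {
    "PROPFIND", "PROPPATCH", "MKCOL", "COPY", "MOVE", "LOCK", "UNLOCK",
}

def is_webdav_request(method):
    return method.upper() in WEBDAV_METHODS
```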

Take a look at the dexterity transmogrify code and see if we can take some export code from that. Also https://github.com/collective/plone.jsonapi.core. And look at json schema.

We thought about authentication, but the first phase is just about reading. In a web browser the current authentication will work fine. For non browser visits we need something else, but that can be done later.

The edit schema may differ from the add schema or the view schema. David Glick has written code in javascript for creating a form based on such a schema, using ReactJS and ReactForms.

So we may not need z3c.form then. But z3c.form also does data transformation and validation, you would still need that. If your schema is defined in json, you could use some json schema handling and validation in the backend as well. That is long term.
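If schemas were expressed in JSON, the backend could validate against the same schema the frontend uses to render the form. A toy validator, checking only required fields and primitive types (a real implementation would use a proper JSON Schema library; the field names in the example are invented):

```python
def validate(data, schema):
    """Minimal JSON-Schema-like check: required keys and primitive types.
    Returns a list of error strings; empty list means valid."""
    errors = []
    type_map = {"string": str, "integer": int, "boolean": bool}
    for name, spec in schema["properties"].items():
        if name not in data:
            if name in schema.get("required", []):
                errors.append("missing required field: %s" % name)
            continue
        expected = type_map.get(spec["type"])
        if expected and not isinstance(data[name], expected):
            errors.append("field %s is not a %s" % (name, spec["type"]))
    return errors

# Hypothetical schema for a simple content type.
schema = {
    "properties": {
        "title": {"type": "string"},
        "review_state": {"type": "string"},
    },
    "required": ["title"],
}
```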

If you GET a page, you want a json with all the data you might want to see there, so title and fields of this object, list of items if it is a folder, portlets.
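Such a response could look like the JSON below. All keys and URLs here are invented for illustration; the actual representation was still being designed at the time.

```python
import json

# Invented example of what a GET on a folderish page could return:
# the object's own fields plus links to the contained items.
response = {
    "url": "http://example.org/plone/news",
    "title": "News",
    "portal_type": "Folder",
    "items": [
        {"url": "http://example.org/plone/news/first-item",
         "title": "First item"},
    ],
}
print(json.dumps(response, indent=2))
```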

Timo: I have limited time to work on this. I do have a project where I am using it. Good if it can work on Plone 4.3 for this. But if it would only work on Plone 5 that would not be a deal breaker.

Hypermedia is there: you can click through the site with json. The json exposes other urls that you could click.

There is a live demo linked on the github page: https://github.com/plone/plone.restapi. You can install a Mozilla json plugin to look at it.
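Clicking through the site with JSON amounts to following embedded links. A sketch of that traversal, assuming each node carries a `url` key and a `fetch` callable that resolves a URL to the next JSON node (both are invented conventions, not the demo's actual format):

```python
def walk(node, fetch):
    """Depth-first walk over a hypothetical hypermedia structure:
    yield this node's URL, then follow the links it embeds."""
    yield node["url"]
    for child in node.get("items", []):
        for url in walk(fetch(child["url"]), fetch):
            yield url

# A tiny in-memory "site" standing in for HTTP fetches.
site = {
    "/": {"url": "/", "items": [{"url": "/news"}]},
    "/news": {"url": "/news", "items": []},
}
urls = list(walk(site["/"], site.get))
```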

If companies would be willing to give developers money or time for this, that could be helpful. Maybe there is appetite to pool resources. The API design needs to be done before we can ask someone to really dive in and commit. It could feel strange that one person gets paid and others work on it for free, although I wouldn't mind, for me it is a lack of time.

Javascript front-end

Good to get some options out there, get understanding with more people about what we are actually talking about, so if we make a decision it is more informed, and knowingly agreed upon by more people. What are the limitations of Angular, React, Patternslib, etcetera? What do we expect from a javascript front-end?

Plone Intranet is using Patternslib and it will be around in 2020.

People will build multiple javascript front-ends for Plone, with whatever framework they like.

Can we come up with a matrix of several frameworks in the next session?

[Well, we tried, but your note taker gave up.]

09 Apr 2015 4:01pm GMT