29 May 2016

feedPlanet Plone

Vikas Parashar: Google Summer of Code — Week 1

One week into coding period.

Continue reading on Medium »

29 May 2016 5:41pm GMT

T. Kim Nguyen: ansible-playbook error "sudo: no tty present"

"sudo: no tty present and no askpass program specified"

29 May 2016 12:26am GMT

27 May 2016


Ramon Navarro Bosch: What are we doing in PloneNG - Barcelona Sprint 2016

One week later I finally found some time to write down what we did and which ideas were behind the Barcelona Sprint 2016...

It all started two months ago, when we were thinking about how to organize the Barcelona Sprint so it would be productive. We had already decided to create three teams (REST API, experimental new backend, and frontend), so we needed to see how a week-long sprint could be organized to reach some goals. At that point I contacted Asko, Timo and Nathan to ask if they could each lead a team and prepare a pre-sprint discussion (so we did not need a pillow fight over React vs. Angular, ...) and the goals for the sprint. They did a great job and the result was a document:


With that document in mind, we discussed it with all the sprinters that were coming and defined our goals:

So we started a really nice week with a lot of grey matter, nice weather and energy! I really want to thank Barcelona Activa and Startup Bootcamp for hosting our sprint in their facilities! It's been great to have so much space and resources to work and concentrate!

I also want to thank all the sprinters, because they did it! We accomplished and overcame all the goals! The faces of all the sprinters by the end of the week showed joy and pride, so it's been great!

Finally, and not least, a special thanks to the whole Iskra/Intranetum team for helping to make it possible: the ones who attended the sprint (Aleix, Alex, Berta and Marc) and the ones who stayed at the office (especially Eudald).

I've been talking with all three groups and was mostly involved with the backend team, so I'll try to explain some backend decisions and results.


Right now the plone.server package on github.com/plone/plone.server is a WIP backend that has:

There are two missing parts that are covered by external tools for now:

There are lots of things to work on: workflows, improving the request, improving transactions on ZODB, ... but it's a long-term project!

Opinion on some concerns

About MVCC: we can maintain it in the new core with three different approaches (thanks Asko!)

It's clear that lots of websockets, utilities and requests will cost memory, depending on the connection cache size.

This approach is still not implemented; we are discussing the different options.

About the Elasticsearch cataloging strategy: indexing is triggered on commit success, so search will not be available until the commit finishes. That is a change from the current stack, and my opinion is that we are overusing catalog searches for navigation and rendering. As there are no templates, my opinion is that navigation can be handled with the BTree.
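The commit-success trigger described above can be sketched in plain Python. This is only a sketch mirroring the shape of the `transaction` package's after-commit hook API, not the actual plone.server code:

```python
class Transaction:
    """Toy transaction: runs registered hooks once the commit succeeds."""

    def __init__(self):
        self._after_commit = []

    def add_after_commit_hook(self, hook):
        self._after_commit.append(hook)

    def commit(self):
        # ... objects would be written to the database here ...
        for hook in self._after_commit:
            hook(True)  # True = the commit succeeded

indexed = []

def index_in_elasticsearch(success):
    # Stand-in for an Elasticsearch index call; only runs on success,
    # so a search issued before the commit cannot see the document.
    if success:
        indexed.append("document")

txn = Transaction()
txn.add_after_commit_hook(index_in_elasticsearch)
txn.commit()
```

The point of the pattern is exactly the trade-off the paragraph describes: the index never contains uncommitted data, at the cost of search lagging the transaction.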


plone.server, plone_client and plone.oauth are tested on Travis CI with the needed backend services, run via docker-compose to avoid mocking them.

plone.server, plone_client and plone.oauth are built as Docker containers on Docker Hub for each commit.

After each plone.server build on Docker Hub, a deployment is done to a sandbox cluster.

The main idea is to provide a continuous integration and continuous deployment story.

Try the sandbox:

git clone https://github.com/pyrenees/docker.git
cd docker
python get_token.py
# Choose any RAW token
# Call the service with your HTTP tool, sending the header:
ACCEPT: application/json

Soon an integration with plone_client, a roadmap for plone.server and tests will be included!


After deploying the application I ran a small read-performance test to compare plone.server against the current stack (Plone 5 with plone.restapi):

Operation: GET /dexterity_object with authentication
Result: the same response on both stacks

plone.server is 241% faster right now from 10 to 100 concurrent users
plone.server is 342% faster right now at 700 concurrent users

27 May 2016 4:04pm GMT

Wildcard Corp: Wildcard Featured in Stevens Point Journal

As a growing company that doesn't see itself slowing down any time soon, Wildcard is starting to grab the attention of people in the local Stevens Point area.

An interviewer from the Stevens Point Journal had the opportunity to interview Director of Operations Gregg Gokey about the status of the company and what we are trying to build toward. Among the key goals outlined is our focus on pointing young technologists, especially local students, toward open source programs.

It's truly a tremendous sight to see the publicity we are getting as a Plone provider. Hopefully this recognition will help open the door to new opportunities to spread Plone in the local community. Follow the link below to check out the article:

[Stevens Point tech company Wildcard serves FBI]

27 May 2016 10:00am GMT

23 May 2016


Asko Soukka: Plone Barcelona Sprint 2016 Report

For the last week, I was lucky enough to be allowed to participate in the Plone community sprint at Barcelona. The sprint was about polishing the new RESTful API for Plone, and experimenting with new frontend and backend ideas, to prepare Plone for the next decade (as envisioned in its roadmap). And once again, the community proved the power of its deeply rooted sprinting culture (adopted from the Zope community in the early 2000s).

Just think about this: You need to get some new features for your sophisticated software framework, but you don't have the resources to do it on your own. So, you set up a community sprint: reserve the dates and the venue, choose the topics for the sprint, advertise it or invite the people you want, and get a dozen experienced developers to enthusiastically work on your topics for a full week, mostly at their own cost. It's a crazy bargain. More than too good to be true. Yet, that's just what seems to happen in the Plone community, over and over again.

To summarize, the sprint had three tracks. First there was the completion of plone.restapi: a high-quality and fully documented RESTful hypermedia API for all of the currently supported Plone versions. And after this productive sprint, the first official release should be out any time now.

Then there was the research and prototyping of a completely new REST-API-based user interface for Plone 5 and 6: an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new RESTful API, and would universally support both server-side and browser-side rendering for fast response times, SEO and accessibility. These goals were also reached: all the major blockers were resolved, and the chosen technologies were proven to work together. To pick my favorite side product from that track: Albert Casado, the designer of the Plone 5 default theme in LESS, showed up to migrate the theme to SASS.

Finally, there was our small backend moonshot team: Ramon and Aleix from Iskra / Intranetum (Catalonia), Eric from AMP Sport (U.S.), Nathan from Wildcard (U.S.) and yours truly from the University of Jyväskylä (Finland). Our goal was to start an alternative lightweight REST backend for the new experimental frontend, re-using the best parts of the current Plone stack where possible. Eventually, to meet our goals within the given time constraints, we agreed on the following stack: an aiohttp based HTTP server, the Plone Dexterity content-type framework (without any HTML views or forms) built around the Zope Toolkit, and ZODB as our database, all on Python 3.5 or greater. Yet Pyramid remains a possible alternative to ZTK later.


I was responsible for preparing the backend track in advance, and got us started with a simple aiohttp based HTTP backend with an experimental ZODB connection supporting multiple concurrent transactions (when handled with care). Most of my actual sprint time went into upgrading the Plone Dexterity content-type framework (and its tests) to support Python 3.5. That also resulted in backwards-compatible fixes and pull requests for Python 3.5 support for all its dependencies in the plone.* namespace.

Ramon took the lead in integrating ZTK into the new backend, implemented content-negotiation and content-language aware traversal, and kept us motivated by raising the sprint goal once features started clicking together. Aleix implemented an example docker-compose setup for everything being developed at the sprint, and open-sourced their in-house OAuth server as plone.oauth. Nathan worked originally in the frontend team, but joined us for the last third of the sprint for a pytest-based test setup and asyncio-integrated Elasticsearch support. Eric replaced the Zope 2 remains in our Dexterity fork with ZTK equivalents, and researched all the available options for integrating the content serialization of plone.restapi into our independent backend, eventually leading to a new package called plone.jsonserializer.

The status of our backend experiment after the sprint? Surprisingly good. We got far enough that it's almost easier to point out the missing and incomplete pieces that still remain on our to-do list:

So, that was a lot of checkboxes ticked in a single sprint, really something to be proud of. And if that was not enough, an overlapping Plone sprint in Berlin pushed the Python 3.5 upgrades of our stack even further, my favorite result being a helper tool for migrating Python 2 ZODB databases to Python 3. These two sprints really transformed the nearing end-of-life of Python 2 from a threat into a possibility for our community, and confirmed that Plone has a viable roadmap well beyond 2020.

Personally, I just cannot wait for a suitable project with Dexterity based content-types on a modern asyncio based HTTP server, or the next chance to meet our wonderful Catalan friends! :)

23 May 2016 4:13am GMT

22 May 2016


Mikko Ohtamaa: Python standard logging pattern

(this article originally appeared in Websauna documentation)

1. Introduction

The Python standard library provides the logging module as the de facto solution for libraries and applications to log their behavior. logging is used extensively by Websauna, Pyramid, SQLAlchemy and other Python packages.

  • The Python logging subsystem can be configured using an external configuration file, and the logging configuration format is specified in the Python standard library.
  • Python loggers can be individually turned on and off and their verbosity adjusted on a per-module basis. For example, by default the Websauna development server sets the SQLAlchemy logging level to INFO instead of DEBUG to avoid flooding the console with verbose SQL logs. However, if you are debugging database-related issues, you might want to set the SQLAlchemy logging back to DEBUG.
  • Logging is the preferred diagnostic method over print statements scattered around the source code. Well designed logging calls can be left in the source code and turned back on later if problems must be diagnosed further.
  • Python logging output can be directed to console, file, rotating file, syslog, remote server, email, etc.

2. Log colorization

  • Websauna uses rainbow_logging_handler, which colorizes the logs, making them easier to read in the console of the development web server.


3. Standard logging pattern

A common logging pattern in Python is:

import logging

logger = logging.getLogger(__name__)

def my_view(request):
    logger.debug("my_view got request: %s", request)
    logger.info("my_view got request: %s", request)
    logger.error("my_view got request: %s and BAD STUFF HAPPENS", request)

    try:
        raise RuntimeError("OH NOES")
    except Exception:
        # Let's log the full traceback even when we ignore this exception
        # and it's not raised again
        logger.exception("Something bad happened in my_view")
  • This names the logger after the module, so you can switch loggers on and off on a per-module basis.
  • Pass logged objects to logging.Logger.debug() and co. as arguments, and let the logger handle the string formatting. This allows intelligent display of logged objects when using non-console logging solutions like Sentry.
  • Use logging.Logger.exception() to report exceptions. This records the full traceback of the exception, not just the error message.

Please note that although this logging pattern is common, it's not a universal solution. For example, if you are creating third-party APIs, you might want to pass the logger to a class instance of the API, so that the API consumer can take over the logger setup and there is no inversion of control.
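A minimal sketch of that consumer-controlled style, using a hypothetical API class invented for this example:

```python
import logging

class PaymentAPI:
    """Hypothetical third-party API: the consumer supplies the logger."""

    def __init__(self, logger=None):
        # Fall back to a module-level logger only when the consumer
        # does not take over the logging setup.
        self.logger = logger or logging.getLogger(__name__)

    def charge(self, amount):
        self.logger.info("charging %s", amount)
        return amount

# The consumer owns the logger and its configuration:
consumer_logger = logging.getLogger("myapp.payments")
api = PaymentAPI(logger=consumer_logger)
```

Here the library never calls getLogger(__name__) behind the consumer's back unless asked to, which is the point of avoiding inversion of control.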

4. Changing logging level using INI settings

Websauna defines development web server log levels in its core development.ini. Your Websauna application inherits settings from this file and can override them for each logger in the conf/development.ini file of your application.

For example, to set the SQLAlchemy and transaction logging levels to be more verbose you can do:

[logger_sqlalchemy]
level = DEBUG

[logger_txn]
level = DEBUG

Now the console is flooded with very verbose logging:

[2016-05-22 20:39:55,429] [sqlalchemy.engine.base.Engine _begin_impl] BEGIN (implicit)
[2016-05-22 20:39:55,429] [txn.123145312813056 __init__] new transaction
[2016-05-22 20:39:55,429] [sqlalchemy.engine.base.Engine _execute_context] SELECT users.password AS users_password, users.id AS users_id, users.uuid AS users_uuid, users.username AS users_username, users.email AS users_email, users.created_at AS users_created_at, users.updated_at AS users_updated_at, users.activated_at AS users_activated_at, users.enabled AS users_enabled, users.last_login_at AS users_last_login_at, users.last_login_ip AS users_last_login_ip, users.user_data AS users_user_data, users.last_auth_sensitive_operation_at AS users_last_auth_sensitive_operation_at, users.activation_id AS users_activation_id

5. Initializing loggers from an INI file

If you need to initialize loggers in your own applications, see websauna.system.devop.cmdline.setup_logging() for how Websauna picks up loggers from the INI configuration file.
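As a sketch of what such initialization looks like with nothing but the standard library (setup_logging() itself does more; the logger names and format below are only illustrative):

```python
import logging
import logging.config
import tempfile
import textwrap

# A minimal fileConfig-style INI, similar in shape to the [logger_*]
# sections of a development.ini file.
INI = textwrap.dedent("""\
    [loggers]
    keys = root, sqlalchemy

    [handlers]
    keys = console

    [formatters]
    keys = generic

    [logger_root]
    level = INFO
    handlers = console

    [logger_sqlalchemy]
    level = DEBUG
    handlers =
    qualname = sqlalchemy

    [handler_console]
    class = StreamHandler
    args = (sys.stderr,)
    formatter = generic

    [formatter_generic]
    format = [%(asctime)s] [%(name)s %(funcName)s] %(message)s
    """)

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(INI)
    path = f.name

# Read logger levels and handlers from the INI file.
logging.config.fileConfig(path, disable_existing_loggers=False)
```

After this call, logging.getLogger("sqlalchemy") is at DEBUG while the root logger stays at INFO, exactly the per-logger control described in section 4.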

6. More information

How Websauna logs the username and email for every internal server error. It's impressive service if your devops team calls a customer the second an error happens and guides the customer around the error. As a bonus, if using Sentry you will see the Gravatar profile image of the user when viewing the exception.

Logbook is an alternative to the Python standard library logging if performance is critical or the application has more complex logging requirements.

Discussion about log message formatting and why we are still using old style string formatting.

The structlog package adds context to your logged messages, like the user id or the HTTP request URL.


22 May 2016 7:30pm GMT

20 May 2016


Gil Forcada: Berlin 2016 sprint update

This week some Plonistas, unfortunately fewer than expected, met at IN-Berlin to work towards a brighter future for Plone.

Some of us resumed work already started at the Innsbruck sprint early this year, while other topics grew out of discussions and trying things out.

I will never get to write as well as Paul did when reporting on Barcelona's sprint, but hey, I will do my best:

In no special order:

Just by trying out the shiny new plone.org website (congrats plone.org team!!), some bugs were discovered and reported; work done by Stephan Klinger.

The PLIP about assimilating collective.indexing into Plone core finally got all its tests green and is about ready for review; work done by Maik Derstappen and Gil Forcada.

Python 3 support was a hot topic during the sprint, so with that in mind Florian Pilz and Michael Howitz created a new tool, zodb.py3migrate, that allows you to convert existing ZODB databases to Python 3! Best of all is its great documentation! Be aware that the migration is done in place!

Our Jens2 team was not quadratic this time around, but Jens Vagelpohl continued trying to tame Pluggable Authentication Service (aka PAS).

Did I say that Python 3 was a hot topic? Thomas Lotze, with googling support from Gil Forcada, took the problem head on and decided that, since Thomas did not have that much time (unfortunately he was around for only 3 days), he would work on cracking the hard nuts: C extensions. And indeed he did! AccessControl and DocumentTemplate at least compile on Python 3.5 (throwing quite some warnings, but hey, have you ever seen Archetypes being installed?). Best of all, Python 2.6 and 2.7 already have quite some macros for forward compatibility, so the bits that are compatible were pull-requested and are already merged!

Unfortunately that work on C extensions came to a halt when we hit RestrictedPython. After long discussions between Thomas, Gil and Maik, we decided to hold a video conference with the Barcelona sprint (Eric Bréhault), as well as Alexander Loechel, who couldn't join us in Berlin, but who had already worked on the topic in Innsbruck early this year (read about his findings). The discussion went really well and Alexander offered to continue working on it, so stay tuned!

Maik and Stephan, meanwhile, were not happy with the constellation of form packages that we currently have (z3c.form, plone.autoform, plone.supermodel, plone.z3cform, plone.app.z3cform…) and made yet another package1… just kidding! No, instead they worked on improving the forms-related documentation: getting rid of grok2 and making sure that the documentation is consistent3. For that, Stephan created the package linked above and Maik worked on moving the remaining useful bits of plone.directives.form to plone.autoform. Stephan started a discussion on the topic on community.plone.org.

Lastly, I put my jenkins/mr.roboto hat on and added some more functionality to mr.roboto: warning which pull request Jenkins jobs need to run to know if a pull request can be safely merged, and auto-updating buildout.coredev's checkouts.cfg whenever a pull request gets merged. The logic is fairly trivial, but gathering all the pieces together to make it work is far from trivial, and testing is yet another matter entirely (thanks mock!!).

All in all we had fun, lots of things got done or discussed and, as with every sprint, everyone is looking forward to the next one!

Happy hacking!

  1. will be soon moved to collective
  2. and hidden grok dependencies lying in plone.directives.form and plone.directives.dexterity
  3. at the beginning there are some fields, and two pages later, magically, fields are different!

20 May 2016 9:20am GMT

19 May 2016


Paul Roeland: Barcelona 2016 sprint update


This week, a group of 15 people is gathering in lovely Barcelona to work, experiment and play with new concepts for our favorite CMS, Plone.

And we're not the only ones, a few countries further north a group of people are doing another sprint, in Berlin! But I'll let the Berlin people do their own reporting.

We divided up in three teams, delving into the front-end, restapi and back-end (more or less), with lots of interaction going on between the teams.

Since my strengths aren't exactly diving into the deep inward workings of Plone and underlying technologies, I had the nice job to try to summarize where the work is going for now, in terms that hopefully anybody with a keen interest in the community can understand.

Do note that in describing some of the work in terms that even I can understand, I may have grossly oversimplified or misrepresented some things. So, if your reaction is "huh?", it's probably wise to think "oh, polyester has gotten it all wrong" and ask accordingly, before putting on your angry hat and taking to da interwebz….

REST api

Probably the easiest to describe, as it is already quite well documented and used in production, is plone.restapi. It has clear goals and design ideas, and saw a lot of progress already this week.

Endpoints (that's functionality, for all folks new to the jargon: things you can get information from, or create content with, or change workflow state or permissions through) are being added as we speak. And some of the tougher issues with CORS and other security-relevant topics are being tackled.

One recurring theme is that one of the key strengths of Plone, real traversal, is not something that others do very well, or even reckon with. As an example, the current version of OpenAPI (formerly known as Swagger), a formal description language for APIs which would be great to use as a documentation and discovery tool, wants to know all endpoints in advance. That's all very fine and dandy if your site basically looks like index.php?incomprehensiblemumbojumbo, but ours are better behaved. In short, we need to parametrize the path in the URI, and do not know in advance how the paths will look or how many there will be. And that's a problem for the tooling right now, although they are aware of it.

On the plus side, the way Plone does real traversal makes it a really, really good fit for a proper REST interface. So, that will be the way we're moving forward.

Status: A beta version of plone.rest (the underlying HTTP verbs) should come out this week, and the actual plone.restapi will see a proper alpha release with many new endpoints and improved documentation.

Polyester Crystal Ball Assessment: expect this to be useful already in the very near future, and absolutely essential in the slightly longer run. The documentation-fu runs strongly in this one, too.


At the same time, a very talented bunch of people is hacking away at a prototype for a full, modern client based purely on said REST api.

To my slight disappointment, the Great JS Framework Wars were not decided by pillow fight, but by actual reasoned discussion. And that means that this client is being written in Angular 2. Of course, remember, anybody can always implement a client in their preferred framework-du-jour, but for this prototype this decision stands.

So, after a few days of intense discussions, frantic typing and good food, what is already working as ng2 components?

Of course there is still a long list of work in progress:

All in all, a pretty impressive list, and already enough to give you an understanding that such a one-page client is not only possible, but could be pretty cool and fun to use, both for end users and integrators.

Polyester Crystal Ball Assessment: well, color me impressed. It will take work, and we may run into some issues. But such a client seems eminently possible, and I'm pretty much convinced that the needs and wants of 'casual' integrators, themers and tinkerers will be met. I say: bring it on!


The team whose work was hardest to grasp for me was the group working on future & experimental backends. The ideas behind it are clear enough:

yet the devil is in the details, and un-tangling the many tangled layers of Zope, CMF and more that now form our mille-feuille of a stack is no easy task.

At the same time it is also crucial that we as a community start experimenting, and testing some of it in real world scenarios.

So, experimenting there was. And do note: none of this is final, we're definitely in the "let's try some crazy stuff and see how far we get " stage here…
Basically we took the top layer (Dexterity) and the bottom layer (ZTK), ripped out everything in between, and saw how far that would get us before adding layers back.

Some results so far:

The surprising part of this (and remember: this was meant to be experimental) is that quite a bit of the porting to Python 3 can already go back into mainstream Plone.

And of course that it shows there is an active interest, and developing ideas, in growing the next generation of Plone. All while keeping up with our proud heritage of allowing far-reaching customization and modularity. So if you need to plug in another search system or another source of user/group data, that will be possible.

Polyester Crystal Ball Assessment: expect this to be at least two years away for any simple mortal like me. Also expect lots of changes and lessons learnt. But do be very excited about the experiments going on, which will grow into real-life solutions being vetted with real-world sites, and getting us a clear path into the future. Oh, and part of the Python3 porting is usable far earlier than I had expected.

I never promised you a rose garden…

As you would expect, all groups also encountered complex problems.

Security is always hard. In a world with discoverable API's, and back-ends having to deal with potentially malicious (or just sloppily programmed, which might lead to the same results) front-ends this becomes ever more important, on every level. Remember: every new web technology leads to a new security acronym. Want websockets? Learn about CSWSH…

Luckily this is very much on everybody's mind here, and security is a core ingredient, not an afterthought, in what people are developing.

Traversal still remains as powerful an idea as when Jim Fulton created Zope in 1998. It's how the web should work. Yet many of the new tools and techniques that we are now incorporating do not deal with it very well. On the other hand, it's what sets us apart, and we may have to adapt the tools to do the right thing.

Untrusted Code (a.k.a. Restricted Python) is both pretty hard to deal with, from a security standpoint, yet also a reality for any kind of serious universal CMS with sophisticated users. The balance here is a fine one.

The JavaScript ecosystem still remains very much in flux.
If you ever thought "buildout" could be a bit of a diva, just wait until you meet the Nuclear Powered Mushroom. It throws tantrums like a three-year-old on a sugar rush.
And there have been 13 days since the last grunt-gulp-bower-webpack-rollup replacement…

FAQ / Panic Attack

I'm sure a lot of you will have many questions. As did I.

So I asked them ;-)

wait… there's more!

The sprint is not even over. And there will be many more sprints and discussions, both online and in real life. But let me say that this was one good way of giving all those future discussions a much more solid foundation of prototypes, experiments and performance statistics.

Time to get involved!

This particular experience was brought to you by the following people:

Ramon Navarro Bosch, Víctor Fernández de Alba, Timo Stollenwerk, Eric Bréhault, Asko Soukka, Nathan Van Gheem, Sam Schwartz, Lukas Graf, Thomas Buchberger, Eric Steele, Berta Capdevila Cano, Rob Gietema, Aleix Llusa Serrá, Albert Casado and Paul Roeland, with liberal helpings of sunshine, culture, cuisine and assorted friends & family who were visiting Barcelona…

19 May 2016 1:06pm GMT

17 May 2016


Maurits van Rees: Steven Pemberton: The future of programming

See the PyGrunn website for more info about this one-day Python conference in Groningen, The Netherlands.

I am from CWI, the Center for Mathematics and Informatics in Amsterdam, where Python was born, and where I worked on ABC, the basis for Python. I wrote parts of gcc. I ended up chairing the W3C HTML working group.

I will talk about Moore's switch and the future of programming.

We were developing ABC in the beginning of the eighties, when computers were really slow. However, we knew about Moore's law, that computers would become faster.

In the fifties, computers were really expensive. You could hire an hour of computer time for the amount of money you pay a developer in a year. What I call Moore's switch: this has gone the other way around. Earlier programming languages were geared towards making things easier for the computer, not the programmer.

Moore's law: computing power doubles every 18 months. 1977 was the first time I heard someone say that Moore's law was soon over. In 1988 my laptop had a power of 500; now it has doubled fifteen times.

By the 1970s, computers had become cheaper, but programmers had not: the software crisis. Ninety percent of the cost was in debugging. Fred Brooks wrote about this in The Mythical Man-Month. The larger the program, the more expensive it becomes.

An order of magnitude improvement would help a lot. What takes one week, would take a morning instead.

A declarative approach is much shorter, and therefore faster to write. Can this help? What does declarative programming mean? In the beginning of the 1990s I wrote a declarative clock program of twelve lines, instead of the 1000 lines of a procedural clock program.

Declarative: you specify what needs to be and remain true. This also means there can be no while loops.
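The contrast can be shown even in Python, with a toy example of my own (not from the talk): the procedural version spells out how to compute step by step, while the declarative one only states what the result is.

```python
# Procedural: step-by-step instructions for the machine.
total = 0
for n in range(1, 11):
    total = total + n

# Declarative-ish: state what the value is, not how to loop.
total_decl = sum(range(1, 11))

assert total == total_decl == 55
```

The declarative form is shorter and leaves no loop state to get wrong, which is the talk's point scaled down to one line.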

Look at XForms for a declarative language. A certain company went from five years and thirty people to finish a project, to one year and ten people, by using XForms. It shows that declarative programming is feasible and usable for real-world projects.

I believe that eventually everyone will switch to declarative programming.


"Some programmers move from Python to lower level languages to get more performance out of computers." They may not have done the numbers. Programmers would need to learn a new technique, which may make them reluctant to do so. Countries have held back the use of the Arabic numerals that we now all use.

In ABC we saw that people were mostly busy with sorting and searching. So we made this very fast.

"Do big companies use this?" Yes, various, like Yahoo, IBM. Usually in small groups, not company wide.

"What will then happen with Python?" What happened to Pascal?

"Where should I start?" Look at Xforms. That is the only standardised version that I know of that does this stuff.

Strictly speaking, spreadsheets are declarative.

"What books do you recommend?" There are no books on XForms yet.

"Do you consider Prolog a declarative language?" Not really, though I see what you mean.

[For more information, see the XForms article on wikipedia, Maurits.]

Twitter: @stevenpemberton

17 May 2016 12:41pm GMT

13 May 2016


Maurits van Rees: Martijn Faassen: Morepath under the hood

Python and web developer since 1998. I did Zope, and for a little while it was as popular as Python itself.

What is this about? Implementation details, concepts, creativity, software development.

Morepath is a web microframework. The planet Zope exploded and Morepath came out. It has a unique approach to routing and link generation with Traject. Easy and powerful generic code with Reg. Extensible and overridable with Dectate.

In the early nineties you had simple filesystem traversal to publish a file on the web. Zope 2, in 1998, had traversal through an object tree, conceptually similar to filesystem traversal. Drawback: all objects need to have code to support web stuff. Creativity: filesystem traversal is translated to an object tree. Similar: JavaScript client frameworks that mimic what used to be done on the server.

Zope 3 got traversal with components: adapt an object to an interface that knows how to publish to html, or to json. So the base object can be web agnostic again.

Pyramid simplified traversal, with __getitem__. So the object needs to be web aware again. Might not be an issue.
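A toy illustration of that __getitem__ style of traversal (the class and data are invented for the example):

```python
class Folder:
    """A web-aware container: path segments map to children."""

    def __init__(self, **children):
        self._children = children

    def __getitem__(self, name):
        return self._children[name]

def traverse(root, path):
    """Resolve /a/b/c by repeated item access, Pyramid-traversal style."""
    obj = root
    for segment in path.strip("/").split("/"):
        obj = obj[segment]
    return obj

site = Folder(news=Folder(sprint="Sprint report body"))
```

The framework only needs __getitem__ on each object, which is why the object itself has to be web-aware again.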

Routing: map a route to a view function. As developer you need to handle a 404 yourself, instead of letting the framework do this.

You can fight about this between frameworks. But Morepath has it all. It is a synthesis.

I experimented with a nicer developer API than Zope was offering to get a view for traversal. So I created experimental packages like iface and crom. I threw them together in Reg. It was just a rewrite of the Zope Component Architecture with a simpler API.

Class dispatch: foo.bar() has self as the first argument. Reg uses functools.singledispatch and builds multiple dispatch on top of it. But then I generalised it even more to predicate dispatch, as Pyramid had.
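The functools.singledispatch building block mentioned here looks like this (a sketch with invented names; Reg generalizes the same idea to multiple and predicate dispatch):

```python
from functools import singledispatch

@singledispatch
def render(obj):
    # Fallback for types with no registered implementation.
    return "cannot render %r" % (obj,)

@render.register(list)
def _render_list(obj):
    # Dispatches on the type of the first argument.
    return "rendering a list of %d items" % len(obj)

@render.register(str)
def _render_str(obj):
    return "rendering the string %r" % obj
```

Single dispatch picks an implementation from one argument's type; Reg's extension is to consult several arguments (multiple dispatch) or arbitrary predicates.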

Don't be afraid to break stuff when you refactor things.

Dectate is a meta framework for code configuration. Old history involving Zope, Grok, martian, venusian, but now Dectate. With this you can extend or override configuration in your app, for example when you need to change something for one website.

Detours are good for learning.

Splitting things off into a library helps for focus, testing, documentation.

Morepath uses all these superpowers to form a micro framework.

Twitter: @faassen

13 May 2016 2:42pm GMT

Reinout van Rees: Pygrunn keynote: the future of programming - Steven Pemberton

(One of my summaries of the one-day 2016 PyGrunn conference).

Steven Pemberton (https://en.wikipedia.org/wiki/Steven_Pemberton) is one of the developers of ABC, a predecessor of python.

He's a researcher at CWI in Amsterdam. It was the first non-military internet site in Europe in 1988 when the whole of Europe was still connected to the USA with a 64kb link.

When designing ABC they were considered completely crazy because it was an interpreted language. Computers were slow at that time. But they knew about Moore's law. Computers would become much faster.

At that time computers were very, very expensive. Programmers were basically free. Now it is the other way around: computers are basically free and programmers are very expensive. So, at that time, in the 1950s, programming languages were designed around the needs of the computer, not the programmer.

Moore's law is still going strong, despite many articles claiming its imminent demise (he heard the first such claim in 1977). Steven showed a graph of his own computers. It fits.

On modern laptops, the CPU is hardly doing anything most of the time. So why use programming languages optimized for giving the CPU a rest?

There's another cost. The more lines a program has, the more bugs there are in it. But it is not a linear relationship. More like lines ^ 1.5. So a program with 10x more lines probably has 30x more bugs.
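
A quick check of that superlinear claim: with bugs proportional to lines ** 1.5, ten times the lines gives roughly thirty times the bugs.

```python
# bugs ~ lines ** 1.5, so the ratio for 10x more lines is 10 ** 1.5.
ratio = 10 ** 1.5
assert 31 < ratio < 32  # roughly 30x more bugs, as claimed
```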

Steven thinks the future of programming is in declarative programming instead of in procedural programming. Declarative code describes what you want to achieve and not how you want to achieve it. It is much shorter.

Procedural code would have specified everything in detail. He showed a code example of 1000 lines. And a declarative one of 15 lines. Wow.

He also showed an example with XForms, which is declarative. Projects that use it regularly report a factor of 10 in savings compared to more traditional methods. He mentioned a couple of examples.

Steven doesn't necessarily want us all to jump on XForms. It might not fit with our usecases. But he does want us to understand that declarative languages are the way to go. The approach has been proven.

In response to a question he compared it to the difference between roman numerals and arabic numerals and the speed difference in using them.

(The sheets will be up on http://homepages.cwi.nl/~steven/Talks/2016/05-13-pygrunn/ later).

13 May 2016 2:24pm GMT

Reinout van Rees: Pygrunn keynote: Morepath under the hood - Martijn Faassen

(One of my summaries of the one-day 2016 PyGrunn conference).

Martijn Faassen is well-known from lxml, zope and grok, and from EuroPython and the Zope foundation. And he's written Morepath, a python web framework.

Three subjects in this talk:

  • Morepath implementation details.
  • History of concepts in web frameworks
  • Creativity in software development.

Morepath implementation details. A framework with super powers ("it was the last to escape from the exploding planet Zope")

Traversal. In the 1990s you'd have filesystem traversal. example.com/addresses/faassen would map to a file /webroot/addresses/faassen.

In zope2 (1998) you had "traversal through an object tree". So root['addresses']['faassen'] in python. The advantage is that it is all python. The drawback is that every object needs to know how to render itself for the web. It is an example of creativity: how do we map filesystem traversal to objects?
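
The zope2 style can be sketched in plain Python (a toy, not Zope's actual classes): each container implements __getitem__, and URL segments become item lookups.

```python
class Folder:
    """A toy container: URL segments become dictionary lookups."""
    def __init__(self, **children):
        self._children = children

    def __getitem__(self, name):
        return self._children[name]

root = Folder(addresses=Folder(faassen='page for faassen'))

def traverse(root, path):
    obj = root
    for segment in path.strip('/').split('/'):
        obj = obj[segment]
    return obj

# example.com/addresses/faassen maps to root['addresses']['faassen']
assert traverse(root, '/addresses/faassen') == 'page for faassen'
```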

In zope3 (2001) the goal was the zope2 object traversal, but with objects that don't need to know how to handle the web. A way of working called "component architecture" was invented to add traversal capabilities to existing objects. It works, but as a developer you need to do quite some configuration and registration. Creativity: "separation of concerns" and "lookups in a registry".

Pyramid sits somewhere in between. And has some creativity on its own.

Another option is routing. You map a url explicitly to a function. A @route('/addresses/{name}') decorator on a function (or a django urls.py). The creativity is that it is simple.

Both traversal and routing have their advantages. So Morepath has both of them. Simple routing to get to the content object and then traversal from there to the view.

The creativity here is "dialectic". You have a "thesis" and an "antithesis" and end up with a "synthesis". So a creative mix between two ideas that seem to be opposites.

Apart from traversal/routing, there's also the registry. Zope's registry (component architecture) is very complicated. He's now got a replacement called "Reg" (http://reg.readthedocs.io/).

He ended up with this after creatively experimenting with it. Just experimenting, nothing serious. Rewriting everything from scratch.

(It turned out there already was something that worked a bit the same in the python standard library: @functools.singledispatch.)

He later extended it from single dispatch to multiple dispatch. The creativity here was the freedom to completely change the implementation as he was the only user of the library at that moment. Don't be afraid to break stuff. Everything has been invented before (so research). Also creative: it is just a function.

A recent spin-off: "dectate". (http://dectate.readthedocs.io/). A decorator-based configuration system for frameworks :-) Including subclassing to override configuration.

Some creativity here: it is all just subclassing. And split something off into a library for focus, testing and documentation. Split something off to gain these advantages.
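
Not Dectate's real API, but the core trick, configuration collected by decorators and overridable by subclassing, can be sketched like this:

```python
class App:
    """Each subclass gets its own copy of the inherited configuration."""
    config = {}

    def __init_subclass__(cls):
        # Copy the parent's configuration so a subclass can override it
        # without touching the parent.
        cls.config = dict(cls.config)

    @classmethod
    def setting(cls, name):
        def register(func):
            cls.config[name] = func
            return func
        return register

class Base(App):
    pass

@Base.setting('greeting')
def base_greeting():
    return 'hello'

class Website(Base):  # override just one setting for this one website
    pass

@Website.setting('greeting')
def site_greeting():
    return 'hoi'

assert Base.config['greeting']() == 'hello'
assert Website.config['greeting']() == 'hoi'
```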

13 May 2016 1:45pm GMT

Reinout van Rees: Pygrunn: from code to config and back again - Jasper Spaans

(One of my summaries of the one-day 2016 PyGrunn conference).

Jasper works at Fox IT, one of the programs he works on is DetACT, a fraud detection tool for online banking. The technical summary would be something like "spamassassin and wireshark for internet traffic".

  • Wireshark-like: DetACT intercepts online bank traffic and feeds it to a rule engine that ought to detect fraud. The rule engine is the one that needs to be configured.
  • Spamassassin-like: rules with weights. If a transaction gets too many "points", it is marked as suspect. Just like spam detection in emails.
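
The spamassassin-like scoring can be sketched as a list of weighted predicates over a transaction (the rules and threshold below are made up for illustration):

```python
# Each rule is (weight, predicate); a transaction is suspect when the
# summed weight of the matching rules crosses a threshold.
RULES = [
    (3.0, lambda tx: tx['amount'] > 10000),
    (2.0, lambda tx: tx['country'] != tx['account_country']),
    (1.5, lambda tx: tx['hour'] < 6),
]
THRESHOLD = 4.0

def score(tx):
    return sum(weight for weight, matches in RULES if matches(tx))

def is_suspect(tx):
    return score(tx) >= THRESHOLD

suspect = {'amount': 15000, 'country': 'RU',
           'account_country': 'NL', 'hour': 3}
normal = {'amount': 50, 'country': 'NL',
          'account_country': 'NL', 'hour': 12}
assert is_suspect(suspect)      # 3.0 + 2.0 + 1.5 = 6.5 points
assert not is_suspect(normal)   # 0 points
```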

In the beginning of the tool, the rules were in the code itself. But as more and more rules and exceptions got added, maintaining it became a lot of work. And deploying takes a while as you need code review, automatic acceptance systems, customer approval, etc.

From code to config: they rewrote the rule engine from scratch to work based on a configuration. (Even though Joel Spolsky says totally rewriting your code is the single worst mistake you can make). They went 2x over budget. That's what you get when rewriting completely...

The initial test with hand-written json config files went OK, so they went to step two: make the configuration editable in a web interface. Including config syntax validation. Including mandatory runtime performance evaluation. The advantage: they could deploy new rules much faster than when the rules were inside the source code.

Then... they did a performance test at a customer.... It was 10x slower than the old code. They didn't have enough hardware to run it. (It needs to run on real hardware instead of in the cloud as it is very very sensitive data).

They fired up the profiler and discovered that only 30% of the time is spent on the actual rules; the other 70% is bookkeeping and overhead.

In the end they had the idea to generate python code from the configuration. They tried it. The generated code is ugly, but it works and it is fast. A 3x improvement. Fine, but not a factor of 10, yet.

They tried converting the config to AST (python's Abstract Syntax Tree) instead of to actual python code. Every block was turned into an AST and then combined based on the config. This is then optimized (which you can do with an AST) before generating python code again.
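
What compiling a rule to an AST instead of to source text can look like with Python's stdlib ast module (a toy rule, not DetACT's actual config format):

```python
import ast

# Build the rule `amount > 1000` directly as an AST, skipping source text.
rule = ast.Expression(
    body=ast.Compare(
        left=ast.Name(id='amount', ctx=ast.Load()),
        ops=[ast.Gt()],
        comparators=[ast.Constant(value=1000)],
    )
)
ast.fix_missing_locations(rule)  # fill in line/column info compile() needs
code = compile(rule, filename='<rule>', mode='eval')

assert eval(code, {'amount': 1500}) is True
assert eval(code, {'amount': 10}) is False
```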

This was fast enough!

Some lessons learned:

  • Joel Spolsky is right. You should not rewrite your software completely. If you do it, do it in very small chunks.
  • Write readable and correct code first. Then benchmark and profile.
  • Have someone on your team who knows about compiler construction if you want to solve these kinds of problems.

13 May 2016 12:56pm GMT

Reinout van Rees: Pygrunn: simple cloud with TripleO quickstart - K Rain Leander

(One of my summaries of the one-day 2016 PyGrunn conference).

What is openstack? A "cloud operating system". Openstack is an umbrella with a huge number of actual open source projects under it. The goal is a public and/or private cloud.

Just like you use "the internet" without concerning yourself with the actual hardware everything runs on, you should be able to use a private/public cloud on any regular hardware.

What is RDO? Exactly the same as openstack, but using RPM packages. Really, it is exactly the same. So a way to get openstack running on a Red Hat enterprise basis.

There are lots of ways to get started. For RDO there are three oft-used ones:

  • TryStack for trying out a free instance. Not intended for production.

  • PackStack. Install openstack-packstack with "yum". Then you run it on your own hardware.

  • TripleO (https://wiki.openstack.org/wiki/TripleO). It is basically "openstack on openstack". You install an "undercloud" that you use to deploy/update/monitor/manage several "overclouds". An overcloud is then the production openstack cloud.

    TripleO has a separate user interface that's different from openstack's own one. This is mostly done to prevent confusion.

    It is kind of heavy, though. The latest openstack release (mitaka) is resource-hungry and needs ideally 32GB memory. That's just for the undercloud. If you strip it, you could get the requirement down to 16GB.

To help with setting up there's now a TripleO quickstart shell script.

13 May 2016 11:56am GMT

Reinout van Rees: Pygrunn: Understanding PyPy and using it in production - Peter Odding/Bart Kroon

(One of my summaries of the one-day 2016 PyGrunn conference).

pypy is "the faster version of python".

There are actually quite a lot of python implementations. cpython is the main one. There are also JIT compilers. Pypy is one of them and is by far the most mature. PyPy is a python implementation, compliant with 2.7.10 and 3.2.5. And it is fast!

Some advantages of pypy:

  • Speed. There are a lot of automatic optimizations. It didn't use to be fast, but for the past five years it has actually been faster than cpython! It has a "tracing JIT compiler".
  • Memory usage is often lower.
  • Multi-core programming. Some stackless features. Some experimental work has been started ("software transactional memory") to get rid of the GIL, the infamous Global Interpreter Lock.

What does having a "tracing JIT compiler" mean? JIT means "Just In Time". It runs as an interpreter, but it automatically identifies the "hot path" and optimizes that a lot by compiling it on the fly.

It is written in RPython, which is a statically typed subset of python which translates to C and is compiled to produce an interpreter. It provides a framework for writing interpreters. "PyPy" really means "Python written in Python".

How to actually use it? Well, that's easy:

$ pypy your_python_file.py

Unless you're using C modules. Lots of python extension modules use C code that compiles against CPython... There is a compatibility layer, but that catches only 40-60% of the cases. Ideally, all extension modules would use "cffi", the C Foreign Function Interface, instead of "ctypes". CFFI provides an interface to C that allows lots of optimizations, especially by pypy.
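
For comparison, a minimal stdlib ctypes call into libc (cffi offers a friendlier, declaration-based take on the same idea that pypy can optimize much better); this assumes a Unix-like system:

```python
import ctypes
import ctypes.util

# Load the C library and call its atoi() directly from Python.
libc = ctypes.CDLL(ctypes.util.find_library('c'))
libc.atoi.argtypes = [ctypes.c_char_p]
libc.atoi.restype = ctypes.c_int

assert libc.atoi(b'42') == 42
```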

Peter and Bart work at paylogic. A company that sells tickets for big events. So you have half a million people trying to get a ticket to a big event. Opening multiple browsers to improve their chances. "You are getting DDOSed by your own customers".

Whatever you do: you still have to handle 500000 pageviews in just a few seconds. The solution: a CDN for the HTML and only small JSON requests to the servers. Even then you still need a lot of servers to handle the JSON requests. State synchronisation was a problem, as in the end you still had one single server for that single task.

Their results after using pypy for that task:

  • An 8-fold improvement. Initially 4x, but pypy has been optimized since, so they got an extra 2x for free. So: upgrade regularly.
  • Real savings on hosting costs
  • The queue has been tested to work for at least two million visitors now.

Guido van Rossum supposedly says "if you want your code to run faster, you should probably just use PyPy" :-)

Note: slides are online

13 May 2016 11:18am GMT