13 Feb 2017

feedPlanet Plone

Starzel.de: Towards Plone 6

A report from the Alpine City Sprint in Innsbruck

13 Feb 2017 10:46am GMT

06 Feb 2017

Starzel.de: Push coverage report out of a GitLab CI Runner

For our Plone/Python projects we often generate coverage reports as HTML sites; this post shows how you can push such a report out of the GitLab CI Runner.

06 Feb 2017 12:57pm GMT

31 Jan 2017

Affinitic: No more JS / CSS browser cache refresh pain

We often have the following problem:

  1. a CSS or a JavaScript file is changed in a package
  2. we make a release (or not) and put it in production
  3. some visitors still get old CSS and JavaScript versions (unless by chance we remembered to reinstall the package / re-save the portal_css and portal_javascripts configs) and need to flush their browser cache

As we do not have great memory and as our customer's Helpdesk was tired of doing remote cache flushes, we wanted to do something (automatic) about it.
Plone adds cache keys as suffixes for CSS and JS files, for example base-cachekey6247.css.
This cache key doesn't change, even after an instance restart. This is why browsers can keep the "old" versions cached.

To avoid that (and without touching Apache or anything else), we wrote a little script that executes at the end of the instance startup process. It forces the "cooking" of resources, which generates a new cache key so that browsers fetch the new CSS and JS instead of serving stale cached copies!

This script can of course be improved, but it fits our needs.

[Please note that this code is written for Plone 3.]
First, we subscribe to IDatabaseOpenedWithRootEvent (too bad we couldn't use IProcessStarting, because it gives us no context from which to get the Plone sites):
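A minimal ZCML sketch of such a subscription might look like the following (the handler path `.events.clear_cache` is an assumption about where the handler module lives):

```xml
<configure xmlns="http://namespaces.zope.org/zope">

  <!-- Run clear_cache once the database is opened at startup -->
  <subscriber
      for="zope.app.appsetup.interfaces.IDatabaseOpenedWithRootEvent"
      handler=".events.clear_cache"
      />

</configure>
```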


Then, our handler gets the work done :

# -*- coding: utf-8 -*-

from Products.CMFCore.utils import getToolByName
from zope.app.appsetup.bootstrap import getInformationFromEvent
import transaction

from our.package import logger


def clear_cache(event):
    """Force cache key renewal for CSS and JS on all Plone sites at startup.

    This avoids any caching of changed CSS or JS.
    """
    db, connection, root, root_folder = getInformationFromEvent(event)
    app = root_folder
    for obj_name, obj in app.items():
        if obj.get('portal_type') == 'Plone Site':
            p_css = getToolByName(obj, 'portal_css')
            p_js = getToolByName(obj, 'portal_javascripts')
            # Re-cook the resource registries, which generates new cache keys
            p_css.cookResources()
            p_js.cookResources()
            logger.info('Cache key changed for CSS/JS on Plone %s', obj_name)
    transaction.commit()

And voilà, no more browser cache flush hassle for our customer and their Helpdesk :-)

If you have any thoughts on this, or think that this should be packaged in a collective egg (useful for you?), feel free to comment.

31 Jan 2017 1:42pm GMT

30 Jan 2017

Jazkarta Blog: Pleiades Wins an Award and Gets an Upgrade

Via Appia entry on Pleiades

Here at Jazkarta we've been working with the Institute for the Study of the Ancient World (ISAW) for the past year on a project funded by the National Endowment for the Humanities to upgrade and improve Pleiades, a gazetteer of ancient places that is free and open to the public. Thus it was very gratifying to learn that Pleiades is the 2017 recipient of the Archaeological Institute of America's Award for Outstanding Work in Digital Archaeology. Congratulations to the Pleiades team, headed by ISAW's Tom Elliott!

Pleiades is the most comprehensive geospatial dataset for antiquity available today, giving scholars, students, and enthusiasts the ability to use, create, and share geographic information about the ancient world. It was developed between 2006 and 2008 on version 3 of the open source Plone content management system. Pleiades allows logged in users to define geographic locations and associate them with ancient places, names and scholarly references. The system remained in place from 2008 to 2016 without significant upgrades to the core Plone stack, despite the addition of a number of custom features. Over that time, over 35,000 places were added to the system - and performance degraded significantly as the content expanded.

Our most important NEH deliverable was improving site performance, which we accomplished through an upgrade from Plone 3 to Plone 4.3 and the elimination of performance bottlenecks identified with the help of New Relic monitoring. As of last September we had reduced the average page load time from 8.48 seconds before the upgrade to 2.1 seconds after. This 400% speed-up is even more impressive than it sounds because bots (search engine indexers and third-party data users) were severely restricted during the pre-upgrade measurements, and all restrictions were lifted after the upgrade.

Performance improvement was just the start. Here are some of the other changes we've made to the site, which you can read more about in the August NEH performance report.

Because of Jazkarta's high level of expertise in Plone and related technologies, we were able to deliver the Plone upgrade and related performance improvements 6 months ahead of schedule. This left more time for feature improvements than were originally envisioned. As Tom Elliott put it, "our investment in Jazkarta is paying dividends."

Tagged: award, bibliographies, cms, digital humanities, isaw, performance, pleiades, Plone

30 Jan 2017 8:18pm GMT

Mikko Ohtamaa: Simple loop parallelization in Python

Sometimes you are programming a loop to run over tasks that could easily be parallelized. Usual suspects include workloads that wait on IO, like calls to third-party API services.

Since Python 3.2, there has been an easy tool for this kind of job. The concurrent.futures standard library module provides thread and process pools for executing tasks in parallel. For older Python versions, a backport library (futures on PyPI) exists.

Consider a loop that waits on RPC traffic, where the RPC pipe is wide enough to handle multiple calls simultaneously:

def import_all(contract: Contract, fname: str):
    """Import all entries from a given CSV file."""

    for row in read_csv(fname):
        # This function performs multiple RPC calls
        # with wait between calls
        import_invoicing_address(contract, row)

You can create a thread pool that runs tasks on N worker threads. Tasks are wrapped in futures that call the worker function. Each thread keeps consuming tasks from the queue until all of the work is done.

import concurrent.futures

def import_all_pooled(contract: Contract, fname: str, workers=32):
    """Parallelized CSV import."""

    # Run the futures within this thread pool
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor:

        # Stream incoming data and build futures.
        # The execution of futures begins right away; the executor
        # does not wait for the loop to be completed.
        futures = [executor.submit(import_invoicing_address, contract, row) for row in read_csv(fname)]

        # This print may be slightly delayed, as futures start executing
        # as soon as the pool begins to fill, eating your CPU time
        print("Executing total", len(futures), "jobs")

        # Wait for the executor to complete each future;
        # the timeout is 180 seconds for the whole batch, not per job
        for idx, future in enumerate(concurrent.futures.as_completed(futures, timeout=180.0)):
            res = future.result()  # This will also raise any exceptions
            print("Processed job", idx, "result", res)
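The same submit/as_completed pattern can be reduced to a self-contained toy example; `slow_double` here stands in for the IO-bound RPC call, and all names are illustrative:

```python
import concurrent.futures
import time


def slow_double(x):
    # Stands in for an IO-bound task such as an RPC call
    time.sleep(0.05)
    return x * 2


def double_all(values, workers=4):
    """Run slow_double over values using a thread pool."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor:
        futures = [executor.submit(slow_double, v) for v in values]
        # as_completed yields futures in completion order, so sort the results
        return sorted(f.result() for f in concurrent.futures.as_completed(futures, timeout=30.0))


print(double_all([1, 2, 3, 4]))  # [2, 4, 6, 8]
```

Because the workers spend their time sleeping rather than computing, the four tasks overlap and the whole batch finishes in roughly the duration of a single task.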

If the work is not CPU intensive, then Python's infamous Global Interpreter Lock (GIL) will not become an issue either.


30 Jan 2017 11:04am GMT

25 Jan 2017

Martijn Faassen: Seven Years: A Very Personal History of the Web


Humans are storytellers. As anyone who knows me can confirm, I definitely enjoy the activity of telling stories. We process and communicate by telling stories. So let me tell you all a story about my life as a web developer the last 7 years. It may just be self-reflection, but hopefully it's also useful for some others. Perhaps we can see my story as a highly personal history of the web.

I always say that what pulls me to software development most is creativity. I am creative; I can't help myself. I enjoy thinking about creativity. So this is also going to be about my creative trajectory over the last 7 years.

Why 7 years? Because in early 2010 I decided to withdraw myself from a software development community I had been involved in for about 12 years previously, and I took new paths after that. It is now early 2017; time to take stock.

Letting go of an intense involvement with a software development project, up to the point where it became part of my identity, was difficult. Perhaps that's a geeky thing to say, but so it is. I needed to process it.

Life after Zope

Zope was a web framework before that concept existed. The modern web framework only materialized sometime in the year 2005, but Zope had been ahead of its time. I was involved with the Zope community from when it was first released, in 1998. I learned a lot from Zope and its community. I made many valued connections with people that last until this day.

Zope helped shape who I am and how I think, especially as a web developer. In 2013 I wrote a retrospective that went into the history of Zope and my involvement with it.

But I did not just process it by writing blog posts. I also processed it creatively.


So I am a web developer. Back in 2010 I saw some people argue that the time of the web framework had passed. Instead, developers should just gather together a collection of building blocks and hook it up to a server using a standard API (WSGI in the case of Python). This would provide more flexibility than a framework ever could.

In a "X considered Y" style post I argued that web frameworks should be considered useful. Not that many people needed convincing, but hey, why not?

I wrote:

The burden of assembling and integrating best of breed components can be shared: that's what the developers of a framework do. And if you base your work on a pre-assembled framework, it's likely to be less work for you to upgrade, because the framework developers will have taken care that the components in the framework work together in newer versions. There is also a larger chance that people will write extensions and documentation for this same assembly, and that is very likely something you may benefit from in the future.

I follow the ReactJS community. The React community definitely is a community that lets you self-assemble a framework out of many parts. This gives that ecosystem flexibility and encourages creativity -- new approaches can be tried and adopted quickly. I like it a lot.

But figuring out how to actually start a React-based project had become a major effort. To get a good development platform, you needed to learn not only about React but also about a host of packaging and build tools: CommonJS and Webpack and npm and Babel and so on. That's quite intimidating and plain work.

So some React developers realized this and created create-react-app which makes it easy to start a working example, with minimal boilerplate code, and with suggestions on how to expand from there. It's a build framework for React that gathers good software in one place and makes it easy to use. It demonstrates how frameworks can make life easier for developers. It even goes a step further and allows you to opt out of the framework once you need more control. Now that's an interesting idea!

Client-side JS as Servant to the Server UI

So frameworks are useful. And in late 2010 I had an idea for a new one. But before I go into it, I will go on a digression on the role of client-side JavaScript on the web.

This is how almost all JavaScript development used to be done, and how it's still done today in many cases: the server framework generates the HTML for the UI and handles all the details of UI interaction in request/response cycles. But sometimes this is not enough. More dynamic behavior is needed on the client side. You then write a little bit of JavaScript to do it, but only when absolutely necessary.

This paradigm makes JavaScript the ugly stepsister of whatever server-side programming language is used; a minor servant of Python, Ruby, Java, PHP or whatever. The framework is on the server. JavaScript is this annoying thing you have to use; a limited, broken language. As a web developer you spend as little time as possible writing it.

In short, in this paradigm JavaScript is the servant to the server, which is in charge of the UI and does the HTML generation.

But JavaScript had been gathering strength. The notion of HTTP-based APIs had attracted wide attention through REST. The term AJAX had been coined by 2005. Browsers had become a lot more capable. To exploit this, more and more JavaScript needed to be written.

jQuery was first released in 2006. jQuery provided better APIs over sucky browser APIs, and hid incompatibilities between them. Its core concept is the selector: you select things in the web page so you can implement your dynamic behavior. Selectors fit the server-dominant paradigm very well.

Client-side JS as Master of the UI

By 2010, the notion of the single-page web application (SPA) was in the air. SPAs promised more powerful and responsive UIs than server-side development could accomplish. The backend is an HTTP API.

This is a paradigm shift: the server framework lets go of its control of the UI entirely, and client-side JavaScript becomes its new master. It encourages a strong separation between UI on the client and business logic on the server. This brings a big benefit: the unit of UI reuse is on the client, not spread between client and server. This makes reuse of UI components a lot easier.

By 2010, I had played with client-side template languages already. I was about to build a few large web applications, and I wanted them to be dynamic and single page. But client-side JavaScript could easily become a mess. I wanted something that would help organize client-side code better. I wanted a higher-level framework.

The idea

So there we finally get to my idea: to create a client side web framework, by bringing over concepts from server frameworks to the client, and to see what happens to them. Cool things happen! We started with templates and then moved to MVC. We created a notion of components you could compose together. We created a client-side form library based on that. In 2011 we released this as Obviel.

For a little while in 2010, early 2011, I thought I was the only one with this cool idea of a client-side web framework. It turns out that I was not: it was a good idea, and so many people had the same idea at about the same time. Even before we released Obviel, I started to hear about Backbone. Ember and Angular soon followed.

I continued working on Obviel for some time. I created a template language with a built-in i18n system for it, and a client-side router. Almost nobody seemed to care.

In 2011 and 2012 we built a lot of stuff with Obviel. In the beginning of 2013 those projects were done. Obviel didn't get any traction in the wider community. It was a project I learned a lot from, so I don't regret it. I can claim deep experience in the area of client-side development.


I went to my first JS conference in September of 2013. I had originally submitted a talk about Obviel to it, but it wasn't accepted. Everybody was promoting their shiny new client-side framework by that time.

So was Facebook. Pete Hunt gave a talk about React. This was in fact only the second time React had been introduced to a wider audience. Apparently it went over a lot better than the first time. It certainly made an impression on me: there were some fascinating new ideas in it, and the React community was soon fermenting with more. At the conference I talked to people about another idea I'd had: a client framework that helps coordinate client/server communication; maybe sort of like a database, with transactions that commit UI state to the server? Nobody seemed to know of any at the time. Uh oh. If nobody else has the same idea at the same time, then it might be a bad one?

Then from the React community came Flux and Redux and Relay and Mobx. I let go of Obviel and started to use React. There is a little irony there: my move to client-side frameworks had started with templates, but React actually let go of them.

The Server in Modern Client-side times

In early 2013 I read an interesting blog post which prompted me to write Modern Client-Side Times, in which I considered the changed role of the server web framework if it was to be the servant to JavaScript instead of its master.

I wrote a list of what tasks remain for the server framework:

What remains is still significant, however:

  • serving up JSON on URLs with hyperlinks to other URLs
  • processing POSTs of JSON content (this may include parts of form validation)
  • traversal or routing to content
  • integrating with a database (SQLAlchemy in this case)
  • authentication - who are you?
  • authorization - who can access this URL?
  • serve up an initial web page pulling in a whole bunch of static resources (JS, CSS)

I also wrote:

Much of what was useful to a server side web framework is still useful. The main thing that changes is that what goes over the wire from server to client isn't rendered HTML anymore. This is a major change that affects everything, but much does stay the same nonetheless.

I didn't know at the time of writing that I would be working on just such a server web framework very soon.

On the Morepath

In 2013 I put some smaller pieces I had been playing with for a while together and created Morepath, a server Python web framework. I gave an over-long keynote at PyCON DE that year to describe the creative processes that had gone behind it. I gave a more focused talk at EuroPython 2014 that I think works better as an introduction.

I announced Morepath on my blog:

For a while now I've been working on Morepath. I thought I'd say a bit about it here.

Morepath is a Python web micro-framework with super powers. It looks much like your average Python micro-framework, but it packs some serious power beneath the hood.

One of the surprises of Morepath was the discovery that a web framework that tries to be good at being a REST web server actually works very well as a server web framework as well. That does make sense in retrospect: Morepath is good at letting you build REST services, therefore it needs to be good at HTTP, and any HTTP application benefits from that, no matter whether they render their UI on the client or the server. Still, it was only in early 2015 that Morepath gained official support for server templates.

2014 was full of Morepath development. I announced it at EuroPython. It slowed down a little in 2015, then picked up speed again in 2016.

I'm proud that Morepath is micro in implementation, small in its learning surface, but macro in power. The size of Morepath is another surprise: Morepath itself is currently a little over 2000 lines of Python code, but it does a lot, helped by the powerful Reg (<400 lines) and Dectate (<750 lines) libraries. Morepath offers composable, overridable, extensible applications, an extensible configuration system, an extensible view dispatch system, automated link generation, a powerful built-in permission rule system, and lots more. Morepath is like Flask, but with a nuclear fusion generator inside. Seriously. Take a look.

The Future?

Over the last few years Morepath has become a true open source project; we have a small team of core contributors now. And in late 2016 Morepath started to gain a bit more attention in the wider world. I hope that continues. Users that turn into contributors are invaluable for an open source project.

There was a mention of Morepath in an Infoworld article, I was interviewed about it for Podcast__init__, and was also interviewed about it for an upcoming episode of Talk Python to Me.

Ironically I've been writing some Django code lately. I'm new to Django (sort of). I have been reintroduced to the paradigm I started to leave behind 7 years ago. With standard Django, the server rules and JavaScript is this adjunct that you use when you have to. The paradigm works, and for some projects it may be the best approach, but it's definitely not my preferred way to work anymore. But I get to help with architecture and teach a bit, so I'll happily take Django on board.

The Django management UI is cool. It makes me want to implement the equivalent for Morepath with PonyORM and React and Mobx. Or something. Want to help?

I've been itching to do something significant on the client-side again. It's been a little while since I got to do React. I enjoyed attending React Europe 2015 and React Europe 2016. I played with React Native for a bit last year. I want to work with that stuff again.

The space where client and server interacts is fertile with creative potential. That's what I've found with Obviel and React on the client, and with Morepath on the server side. While GraphQL replaces the REST paradigm that Morepath is based around (oops!), I'd enjoy working with it too.

Where might the web be going? I like to think that by being creative I sometimes get to briefly peek into its near future. I hope I can continue to be creative for the next 7 years, as I really enjoy it.

I'm a freelancer, so the clients I work for in part shape my creative future. Hint. Let me know if you have something interesting for me to work on.

25 Jan 2017 10:55pm GMT

Starzel.de: Fix failing parallel browser tests with GitLab & Robot Framework

One of our projects is organized in sprints where all developers work on the same code at the same time. We use one GitLab CI server with a simple shell executor and had randomly failing builds with Robot Framework & Firefox.

25 Jan 2017 1:35pm GMT

16 Jan 2017

eGenix: Python Meeting Düsseldorf - 2017-01-18

We're announcing a regional Python user group meeting in Düsseldorf, Germany.


The next Python Meeting Düsseldorf will take place on:

18 January 2017, 6:00 pm
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


Talks already registered

Charlie Clark
"A short introduction to openpyxl and Pandas"

Jochen Wersdörfer

Marc-Andre Lemburg
"Optimization in Python with PuLP"

Further talks are welcome. If you are interested, please contact info@pyddf.de.

Start time and location

We meet at 6:00 pm at the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the underground parking entrance of the Düsseldorfer Arcaden.

A large "Schwimm' in Bilk" logo sits above the entrance. Behind the door, turn immediately left to the two elevators, then go up to the 2nd floor. The entrance to Room 1 is directly on the left as you come out of the elevator.

>>> Entrance in Google Street View


The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, offers a good overview of past talks.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:


The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.

Talks can be registered in advance or brought in spontaneously during the meeting. A projector with XGA resolution is available.

To register a (lightning) talk, just send an informal email to info@pyddf.de


The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all participants to bring the amount in cash.


Since we only have seats for about 20 people, we ask that you register by email. Registering does not create any obligation, but it makes planning easier for us.

To register for the meeting, just send an informal email to info@pyddf.de

Further information

Further information can be found on the meeting's website:


Have fun!

Marc-Andre Lemburg, eGenix.com

16 Jan 2017 9:00am GMT

30 Dec 2016

T. Kim Nguyen: How to monitor your Plone servers with Sentry

The Sentry.io real-time error tracking service gives you detailed insight into production errors, so that you can often fix them BEFORE your clients even notice a problem and BEFORE their users get upset.

What Sentry provides

Sentry provides a free plan that is more than enough to handle a large number of Plone sites (limited to 1 team member and 5,000 events per day).

Sentry notifies you of errors in a variety of ways (email, by default) and provides a dashboard of all your reported errors, which you can mark as resolved, ignore (for a set amount of time), link to an issue tracker such as GitHub, or share as a detailed error message (scrubbed of private data).

Effects of Sentry

If you are like me, once you start using Sentry, you will:

  • be amazed at how many errors your Plone site was logging but didn't know about before (not necessarily caused by bugs, but by search engine crawlers looking for things that aren't on the site, bad links from other sites, or insufficient privileges because items are still private)
  • realize that you *can* provide a 100% satisfactory product and service, now that you can see and can fix all the errors, whatever the cause, encountered by your clients and their users
  • find yourself tracking down and fixing bugs, not only in your sites and server, but sometimes in other code you're using

Boost user and client satisfaction

Sentry does other things too: you can easily customize Plone's default error message page to pop up a nice-looking dialog box in which a user can provide their name, email address, and a description of what they were trying to do when they encountered an error. Not only is this good for YOU (because it helps you understand what led to the error), it is also FANTASTIC FOR YOUR CLIENT, because their users now know that administrators have noticed their problem and are trying to fix it. This is a level of user satisfaction that goes above and beyond what most people have come to accept, which is to suffer in silence.

How to set up Sentry with Plone

How it works: you sign up with Sentry, you declare any number of "projects", each of which gets a unique ID that you use in your buildout.cfg configuration.

In buildout.cfg, you want to add custom error logging configuration so that when an error is logged, it not only gets logged normally to your event.log or instance.log but it also gets sent to Sentry with the unique ID you were assigned.

(There are even finer grained ways of tracking multiple "releases" of each project, but that probably is useful only if you're deploying versions of, say, Facebook.com, which, presumably YOU are not if you are reading my lowly blog!)

Two example buildout configurations

Whether you installed Plone with the Unified Installer or some other buildout-based method, you will have to modify both buildout.cfg and base.cfg to use Sentry.

Plone 4.3 ZEO

In a ZEO Plone 4.3 deployment, which I installed with the Unified Installer, buildout.cfg defines two ZEO clients, [client1] and [client2]. To do that cleanly, buildout.cfg extends base.cfg, which in turn contains a [client_base] section, which both [client1] and [client2] derive from.

We modify [client_base] by adding these lines:

event-log-custom =
    %import raven.contrib.zope
    <logfile>
      path ${buildout:var-dir}/${:_buildout_section_name_}/event.log
      level INFO
    </logfile>
    <sentry>
      dsn https://YOURUNIQUEID:MOREUNIQUEID@sentry.io/PROJECTID
      level ERROR
    </sentry>

Your specific "dsn" line containing YOURUNIQUEID and MOREUNIQUEID will have come from your Sentry project definition.

You want your "path" value to match that of the "event-log" value defined in [client_base]. I haven't tried this, but you may be able to use a line like path ${event-log} instead of path ${buildout:var-dir}/${:_buildout_section_name_}/event.log

Plone 4.2 ZEO

In contrast, a Plone 4.2 ZEO deployment (also using the Unified Installer) is slightly different: both [client1] and [client2] are defined in base.cfg, so you have to append your custom configuration to both, and the value for "path" is different:

event-log-custom =
    %import raven.contrib.zope
    <logfile>
      path ${buildout:directory}/var/client1/event.log
      level INFO
    </logfile>
    <sentry>
      dsn https://YOURUNIQUEID:MOREUNIQUEID@sentry.io/PROJECTID
      level ERROR
    </sentry>


event-log-custom =
    %import raven.contrib.zope
    <logfile>
      path ${buildout:directory}/var/client2/event.log
      level INFO
    </logfile>
    <sentry>
      dsn https://YOURUNIQUEID:MOREUNIQUEID@sentry.io/PROJECTID
      level ERROR
    </sentry>

In both cases, the key point here is to ensure your Sentry "path" value matches what was defined for each client. As in the previous section, I haven't tried this, but you may be able to use a line like path ${event-log} instead of path ${buildout:directory}/var/client1/event.log and path ${buildout:directory}/var/client2/event.log

Plone "instance" (non-ZEO) deployments

I haven't tested this myself but the idea is the same. You need to customize logging for the [instance], and that will depend on your buildout.cfg and base.cfg.

Add Raven

You also have to add "raven" to your buildout.cfg eggs:

eggs =
    raven

Run buildout and restart clients

Now run buildout. Once you restart your ZEO clients, when they log an error they will continue appending to the log file defined in the "path" above but will also send the error to Sentry. Depending on how you set up Sentry, you will receive email notifications and you can view a dashboard of your errors.

In all likelihood, you will begin to be able to fix errors (software, user, or other) BEFORE anyone reports them to you, if they even bother to report the errors at all.

Increased client and user satisfaction

Your clients and their users may not even realize how much better a service you are providing them, but you will know, and you will have greatly increased your ability to retain satisfied customers, and that is golden.

30 Dec 2016 5:10pm GMT

27 Dec 2016

feedPlanet Plone

T. Kim Nguyen: Quills blogging add-on for Plone gets some attention

The venerable Products.Quills blogging add-on for Plone has been neglected for some time. I'd last used it on some 4.x sites, but the eggs on the PyPI package index were missing some files.

Quills has been tested with Plone 4.3.11. It has not been tested with Plone 5.

Now Products.Quills 1.8.1 and the matching quills.app 1.8.1 have been released to PyPI:

Both add-ons have been updated with current Plone version PyPI classifiers.

How to Install Quills

To install them in your Plone site, add this to your buildout.cfg:

eggs =
    Products.Quills
    quills.app

then run bin/buildout

If you want to ensure you get the latest version of Products.Quills:

eggs =
    Products.Quills==1.8.1

I always pin egg versions, either there in the eggs lines or later in a [versions] section or in a separate versions.cfg file, e.g.

[versions]
Products.Quills = 1.8.1
quills.app = 1.8.1
Release Management

I used the lovely zest.releaser tool to do that. Among other things, it tags the release in the GitHub repo, so that later on we can all find the exact state of the source code used to release that particular egg, e.g.

Recently I ran into difficulties (still unresolved) with another project that did NOT tag its releases in GitHub, and I've been unable to branch off the correct past state of its code to try to fix some bugs.

27 Dec 2016 9:53pm GMT

T. Kim Nguyen: How to enable online reading of Taylor & Francis journals from your Plone site

For Plone 4.3 I implemented an External Method that obtains a special one-time access URL from Taylor & Francis' web site.

You can see this in action at the International Medieval Sermon Studies web site!

The External Method is secured with the ZMI's Security tab but it is also behind a couple of Plone pages, one of which requires logging in to read.

The Python script itself is:

from Products.CMFCore.utils import getToolByName
import urllib2

def mss_online(self):
    # check if we are logged in
    pm = getToolByName(self, 'portal_membership', None)
    if pm is None:
        return "Unable to check if you are logged in. Please notify a site administrator."
    user = pm.getAuthenticatedMember()
    if str(user) == "Anonymous User":
        return "You are not logged in."
    # fetch the one-time access URL from Taylor & Francis
    BIG_URL = "http://www.tandfonline.com/tps/requestticket?ru=http://www.tandfonline.com/biglongurlwithparameters"
    url = urllib2.urlopen(BIG_URL).read()
    # redirect the browser to the one-time URL
    return ('<META HTTP-EQUIV="Refresh" CONTENT="0; URL=%s">'
            '<html><head></head><body>You are being redirected to '
            '<a href="%s">tandfonline.com</a></body></html>' % (url, url))

I named the script mss_online.py and placed it in the Plone installation directory's "Extensions" subdirectory (e.g. /opt/Plone/zeocluster/Extensions/mss_online.py).

Then, using the Zope Management Interface, e.g. mysite.net/manage_main, I added an External Method, and set:

  • Id: mss_online
  • Title: (does not matter)
  • Module Name: mss_online
  • Function Name: mss_online

To protect it from access by users who are not logged in, I then used the Security tab to uncheck "Acquire permission settings" and check "Authenticated" for the View permission.

27 Dec 2016 5:24pm GMT

23 Dec 2016


Hector Velarde: Plone performance: threads or instances?

Recently we had a discussion on the Plone community forum about how to increase the performance of Plone-based sites.

I was arguing in favor of instances, because some time ago I read a blog post by davisagli talking about the impact of the Python GIL on the performance of multicore servers. Others, like jensens and djay, were skeptical of this argument and told me not to overestimate it.

So, I decided to test this using the production servers of one of our customers.

The site is currently running on 2 different DigitalOcean servers with 4 processors and 8GB RAM; we are using Cloudflare in front of it, and round-robin, DNS-based load balancing.

Prior to my changes, both servers were running with the same configuration:

Both servers were also running a ZEO server in a ZRS configuration, one as master and the other as slave, doing blob storage replication.

First, here we have some information from this morning, before I made the changes. Here are some graphics I obtained using New Relic on the master:

Here is the same information from the slave:

As you can see, everything is running smoothly: CPU consumption is low and memory consumption is high and… yes, we have an issue with some PhantomJS processes left behind.

This is what I did later:

I also stopped the memmon Supervisor plugin (since I had no idea how much memory the slave server's instance would be consuming after the changes), and killed all the PhantomJS processes.

The servers have been running for a couple of hours now and I can share the results. This is the master server:

And this is now the slave:

The only obvious change here is in memory consumption: wow! The sole instance on the slave server is consuming 1GB less than the 4 instances on the master server!

Let's do a little bit more research now. Here we have some information on database activity on the master server (just one instance for the sake of simplicity):

Now here is some similar information for the slave server:

I can say that I was expecting this: there's a lot more activity, and caching is not used as effectively on the slave server (see, smcmahon, that demonstrates the beauty of URL-based load balancing in Varnish).

Let's try a different look, now using the vmstat command:

Not many differences here: the CPU is idle most of the time and the interrupts and context switching are almost the same.

Now let's see how much our instances are being used, with the varnishstat command:

Here you can see why there's not too much difference: in fact Varnish is taking care of nearly 90% of the requests and we have only around 3 requests/second hitting the servers.
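That ~90% figure is just the hit rate computed from Varnish's cache_hit and cache_miss counters; with hypothetical counter values (varnishstat reports the real numbers) the arithmetic looks like this:

```python
# Hypothetical counter values standing in for what varnishstat reports;
# the calculation is the same with real MAIN.cache_hit / MAIN.cache_miss.
cache_hit = 9000
cache_miss = 1000

# fraction of client requests served straight from the cache
hit_rate = cache_hit / float(cache_hit + cache_miss)
print("hit rate: %.0f%%" % (hit_rate * 100))  # hit rate: 90%
```

Only the remaining fraction of requests ever reaches the Zope instances, which is why the backend configuration matters so little at this traffic level.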

Let's make another test to see how quickly we are responding to requests, using the varnishhist command:

Again, there's almost no difference here.

Conclusion: in our particular case, using threads does not seem to affect performance much and has a positive impact on memory consumption.

What I'm going to do now is change the production configuration to 2 instances with 2 threads each. Why? Because restarting a single instance on a server for maintenance would leave us without backends during the process if we were running only one instance.
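In buildout terms, that 2-instances/2-threads setup would look roughly like this (a sketch using plone.recipe.zope2instance; part names and ports are placeholders):

```
[instance1]
recipe = plone.recipe.zope2instance
zeo-client = true
zserver-threads = 2
http-address = 8081

[instance2]
<= instance1
http-address = 8082
```

The `<=` line makes instance2 inherit everything from instance1 except the options it overrides, so the thread setting only has to be maintained in one place.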

Share and enjoy!

23 Dec 2016 6:55pm GMT

22 Dec 2016


Plumi: Plumi now on Debian Jessie, Ubuntu 16.04 and Centos 7

We are very excited to announce that after much effort, Plumi is now available to install on Debian Jessie, Ubuntu 16.04 (latest stable) and Centos 7.

The latest code is available here on Github: https://github.com/plumi/plumi.app

Documentation on how to install is available here: https://github.com/plumi/plumi.app/blob/master/README.rst

Further documentation including an introduction, installation, theming and maintenance guide has been updated here: https://mgogoulos.trinket.io/plumi-4-5

This means our free open source video platform now works across these up-to-date and secure major Linux-based operating systems. Free community media infrastructure is needed now more than ever, and we are very proud to offer this with Plumi.

We want to heartily thank Markos Gogoulos for all his hard work to get us here, and Mist.io for supporting EngageMedia in this work.

Anna Helme

on behalf of EngageMedia

22 Dec 2016 6:41am GMT

Plumi: Ten years of Plumi and looking ahead to 2017

It's been ten years now since we released our first public version of Plumi with a vision to provide free democratic access to video distribution, and I'm very proud of EngageMedia's work to sustain the project through many successes and challenges.

I'd like to thank the whole team at EngageMedia, and all our visionaries, programmers, designers, testers, documenters, supporters and organisers over the years including Andrew Lowenthal, Dave Fregon, Andy Nicholson, Lachlan Musicman, Dimitris Moraitis, Chris Psaltis, Mike Muzurakis, Yiannis Chatzikonstantinou, Sam Stainsby, Jean Jordaan, Nate Aune, Rok Garbas, Steve Anderson, Giannis Stergiou and more. See also: https://github.com/plumi/plumi.app/blob/master/docs/CONTRIBUTORS.txt

Impact producers, video and technology activists, human rights defenders and social justice and environmental advocates across SE Asia and internationally continue to work with EngageMedia across a number of program areas, and contribute as always to the vision for and purpose of Plumi development. See more about EngageMedia's partnerships and projects here: http://engagemedia.org and learn about the Video for Change network here: http://v4c.org

Looking ahead, 2017 will hopefully see Plumi users join forces to take Plumi forward, with a particular eye on getting our "externally-hosted videos" feature out there, which we have done a lot of work on but isn't quite finished. We are also talking with leaders in the Plone community about ideas such as inviting students to work on particular code or documentation projects, and look towards merging some of Plumi's base video engine with Plone core and other major video products for Plone, which would help sustain Plumi's viability into the future.

At this point we'd like to put it out there to Plumi users that we are looking for contributions in order to help EngageMedia maintain the project on behalf of the Plumi community, which has depended on philanthropic funding and donations - never a steady source.

Maintaining the project includes the email lists, issues tracker, Plumi blog, coordination of development, development on core functionality such as recent operating system compatibility updates, updating components, UI bug fixes and improvements, attending Plone conferences and liaising with the Plone community to find and work towards fruitful partnerships.

As always we welcome Python developers to get involved, but we'd also love some financial contributions in 2017 to keep us moving steadily into the future together.

Get in touch on the lists or via the contact form if you think you can help, or want to get involved!

Happy holidays!

Anna Helme on behalf of EngageMedia http://engagemedia.org

22 Dec 2016 6:37am GMT

15 Dec 2016


Alex Clark: A Shout Out to Shout IRC

I'm back on IRC for the foreseeable future, and loving it. Thank you Shout IRC.



A few years ago, I got old and gave up on running command-line IRC clients. I've run them all, or at least a lot of them, including one whose name is almost certainly in the crosshairs of political correctness. Most recently I ran WeeChat, and irssi before that. For a while, I gave up IRC completely because I couldn't be bothered. But I missed it, and nothing else seemed to suffice. I tried Slack and thought it was OK, but not IRC. I tried various web clients, but couldn't find one I could stand to use long term. Then Shout IRC came along.

Stay online

I tried Shout for the first time over a year ago, but never bothered to create a Shout account on my server. This was a mistake, since user account creation enables one of Shout's most powerful features: Stay online on IRC even when you log out.


I had gotten annoyed with having to log in each time, so I stopped using Shout for a while. I heard good things about Kiwi, but was disappointed to see no npm release. This led me back to Shout, which does have an npm release. What follows are the configuration details for irc.aclark.net, for posterity. (I added Let's Encrypt at the last minute for good measure.)



  • EC2 t2.micro running Ubuntu 16.04.1 LTS


apt-get install aptitude
aptitude update; aptitude upgrade -y
aptitude install nginx nodejs-legacy npm python python-pip


sudo -H pip install dotfiles


sudo npm install -g shout

Certbot (Let's Encrypt)

sudo certbot certonly --manual


server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    root /var/www/html;
    server_name _;

    location / {
        proxy_pass http://localhost:9000;
    }

    location /.well-known/acme-challenge/AamTqX-Ic-YERnU0RWS2X_WpszSUsi2lIoXkMYOy_Fs {
        add_header Content-Type text/plain;
        return 200 "AamTqX-Ic-YERnU0RWS2X_WpszSUsi2lIoXkMYOy_Fs.gPCswvmAzfObWoqUg6d_…";
    }

    ssl                 on;
    ssl_certificate     /etc/ssl/fullchain.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;
}


(I store my .shout directory, which includes my Shout & Freenode credentials, in a private dotfiles repository.)

git clone git@bitbucket.org:aclark4life/dotfiles.git Dotfiles
dotfiles -s


I'm currently running shout --private in screen, but may eventually add a systemd service for it.
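Such a systemd service could look roughly like this (a sketch, not from the post; the unit name, user, and binary path are assumptions about this particular setup):

```
# /etc/systemd/system/shout.service
[Unit]
Description=Shout IRC web client
After=network.target

[Service]
ExecStart=/usr/local/bin/shout --private
User=shout
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Once installed, `systemctl enable shout && systemctl start shout` would replace the screen session and survive reboots.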

15 Dec 2016 12:00am GMT