04 Sep 2015

feedPlanet Plone

Starzel.de: Plone 5 Release Party and the Plone Theming Sprint

Eric Steele, our beloved release manager, will highlight some of the main features. Afterwards there will be food, drinks and fun.

The party is also the kick-off to the Plone 5 Theming Sprint in Munich on 16-20 September 2015.

The main task of the sprint is to document and improve the theming story for Plone 5. So...

...please consider coming to the Sprint.

During the sprint we will have several talks related to theming:


Details about the sprint: http://www.coactivate.org/projects/plone-5-theming-sprint-munich

Details about the Party: plone5.de


Test Plone 5 now: demo.plone.de
Learn more about Plone: plone.com

04 Sep 2015 1:53pm GMT

Four Digits: plone.api 1.4

What has changed in version 1.4?

Link integrity

plone.api.content.delete now supports the parameter check_linkintegrity. This raises an exception if deleting the object(s) would result in broken links.
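A minimal usage sketch (assuming a site with an 'about' page, in the spirit of the transition example below):

from plone import api

portal = api.portal.get()
# Raises an exception instead of deleting when other content still
# links to this page.
api.content.delete(obj=portal['about'], check_linkintegrity=True)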

kwargs for plone.api.content.transition

plone.api.content.transition now accepts kwargs that can be supplied to the workflow transition.

from plone import api
portal = api.portal.get()
api.content.transition(obj=portal['about'], transition='reject', comment='You had a typo on your page.')

Tuple support in content.find for object_provides

The object_provides parameter allows a tuple of interfaces, as well as a list.
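A quick sketch; the two plone.app.contenttypes interfaces are just example values:

from plone import api
from plone.app.contenttypes.interfaces import IDocument, INewsItem

# A tuple works just like a list for object_provides.
results = api.content.find(object_provides=(IDocument, INewsItem))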

All commits: https://github.com/plone/plone.api/compare/1.3.3...1.4
Changelog: https://github.com/plone/plone.api/blob/1.4/docs/CHANGES.rst

A big thank you to ale-rt, neilferreira and pbauer!

04 Sep 2015 8:33am GMT

03 Sep 2015

feedPlanet Plone

Plone.org: Plone 5 launching September 15, 2015

Mark your calendars, Plone 5 is almost here!

03 Sep 2015 11:45pm GMT

Benoît Suttor: Geonode, Geoserver, Postgis with Docker

A framework for map creation: how we dockerized our GeoNode project.

Intro

We have some clients who need a framework for map creation. We looked at the market of open source solutions for this kind of feature and became fans of the GeoNode project.

We were getting familiar with Docker, so we decided to create Docker images for GeoNode. We wanted to separate GeoServer from GeoNode; the goal is to be able to move GeoServer or GeoNode to a distinct server if the load increases. So we created different images for GeoNode and GeoServer. We also use an Nginx image to create the link between GeoNode and GeoServer (and a PostGIS image for development).

For this project, we customized GeoNode. We used a Django template for project creation, as explained in the documentation (http://docs.geonode.org/en/latest/tutorials/devel/projects/setup.html):

$ django-admin startproject imio_geonode --template=https://github.com/GeoNode/geonode-project/archive/master.zip -epy,rst

docker-compose

https://docs.docker.com/compose

We use two different docker-compose.yml files, one for production and one for development.

The differences are:

This is the development docker-compose.yml:
postgis:
  build: Dockerfiles/postgis/
  hostname: postgis
  volumes:
    - ./postgres_data:/var/lib/postgresql

geoserver:
  build: Dockerfiles/geoserver/
  hostname: geoserver
  links:
    - postgis
  ports:
    - 8080:8080
  volumes:
    - ./geoserver_data:/opt/geoserver/data_dir

geonode:
  build: .
  hostname: geonode
  links:
    - postgis
  ports:
    - 8000:8000
  volumes:
    - .:/opt/geonode/
  entrypoint:
    - /usr/bin/python
  command: manage.py runserver 0.0.0.0:8000

nginx:
  image: nginx:latest
  ports:
    - 80:80
  links:
    - geonode
    - geoserver
    - postgis
  volumes:
    - nginx-default.conf:/etc/nginx/conf.d/default.conf

Nginx

We need an Nginx image to make the link between GeoServer and GeoNode. With Docker >= 1.8 and docker-compose >= 1.4, a new 'network' option has arrived that seems to deprecate this Nginx workaround.

The default Nginx image (https://registry.hub.docker.com/_/nginx/) is used with this config:

upstream geonode {
    server geonode:8000;
}
upstream geoserver {
    server geoserver:8080;
}

server {
        listen   80;
        client_max_body_size 128m;

        location / {
            proxy_pass         http://geonode;
            proxy_set_header   Host $http_host;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /geoserver {
            proxy_pass         http://geoserver;
            proxy_set_header   Host $http_host;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_log /var/log/nginx/error.log warn;
        access_log /var/log/nginx/access.log combined;
}

Nginx is useful for the login connection from GeoServer to GeoNode, and for uploading layers (among other things) from GeoNode to GeoServer.

For production, we add rules for the static and upload folders:

location /static {
    alias /opt/geonode/static;
}

location /uploaded {
    alias /opt/geonode/uploaded;
}

Postgis

We use the PostGIS Docker image for development only; indeed, we have a dedicated server for our databases. For dev, we use this Dockerfile:

FROM postgres:9.4
RUN apt-get update && apt-get install -y postgresql-9.4-postgis-2.1
RUN mkdir /docker-entrypoint-initdb.d
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/initdb-postgis.sh

In images derived from the postgres image, all .sh files in the docker-entrypoint-initdb.d folder are run during Postgres initialization. The db init script for GeoNode looks like:

#!/bin/sh
POSTGRES="gosu postgres"
$POSTGRES postgres --single -E <<EOSQL
CREATE ROLE geonode ENCRYPTED PASSWORD 'geonode' LOGIN;
EOSQL
$POSTGRES postgres --single -E <<EOSQL
CREATE DATABASE geonode OWNER geonode ;
CREATE DATABASE "geonode-imports" OWNER geonode ;
EOSQL
$POSTGRES pg_ctl -w start
$POSTGRES psql -d geonode-imports -c 'CREATE EXTENSION postgis;'
$POSTGRES psql -d geonode-imports -c 'GRANT ALL ON geometry_columns TO PUBLIC;'
$POSTGRES psql -d geonode-imports -c 'GRANT ALL ON spatial_ref_sys TO PUBLIC;'

Geoserver

I created a Docker image from the 'tomcat:8-jre7' image and installed GeoServer from http://build.geonode.org/geoserver/latest/geoserver.war.

The Dockerfile looks like:

FROM tomcat:8-jre7

RUN apt-get update && apt-get install -y wget
RUN wget -O /usr/local/tomcat/webapps/geoserver.war http://build.geonode.org/geoserver/latest/geoserver.war
RUN apt-get remove -y wget
ENV GEOSERVER_DATA_DIR /opt/geoserver/data_dir

Geonode

Production

We use gunicorn for production (https://pypi.python.org/pypi/gunicorn/).

Development

We use 'python manage.py runserver' for development. As you can see in the docker-compose.yml file, the source code is mounted into the container with a volume, so when you change code on your local computer, it is directly updated inside the container.

The Dockerfile:

FROM ubuntu:14.04
RUN \
apt-get update && \
apt-get install -y build-essential && \
apt-get install -y libxml2-dev libxslt1-dev libjpeg-dev gettext git python-dev python-pip libgdal1-dev && \
apt-get install -y python-pillow python-lxml python-psycopg2 python-django python-bs4 python-multipartposthandler transifex-client python-paver python-nose python-django-nose python-gdal python-django-pagination python-django-jsonfield python-django-extensions python-django-taggit python-httplib2
RUN mkdir -p /opt/geonode
WORKDIR /opt/geonode
ADD requirements.txt /opt/geonode/
RUN pip install -r requirements.txt
ADD . /opt/geonode

COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]

Settings and local_settings

You have to update your local_settings. The settings have to match those in the Dockerfiles and docker-compose.yml. Here is an example of local_settings.py for OGC_SERVER and DATABASES:

SITENAME = 'GeoNode'
SITEURL = 'http://localhost'
GEOSERVER_URL = SITEURL + '/geoserver/'
GEOSERVER_BASE_URL = GEOSERVER_URL
# OGC (WMS/WFS/WCS) Server Settings
OGC_SERVER = {
    'default': {
        'BACKEND': 'geonode.geoserver',
        'LOCATION': 'http://172.17.42.1:80/geoserver/',  # Docker IP
        'PUBLIC_LOCATION': GEOSERVER_URL,
        'USER': 'admin',
        'PASSWORD': 'admin',
        'MAPFISH_PRINT_ENABLED': True,
        'PRINT_NG_ENABLED': True,
        'GEONODE_SECURITY_ENABLED': True,
        'GEOGIT_ENABLED': False,
        'WMST_ENABLED': False,
        'BACKEND_WRITE_ENABLED': True,
        'WPS_ENABLED': True,
        # Set to name of database in DATABASES dictionary to enable
        'DATASTORE': 'datastore',
    }
}

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'geonode',
        'USER': 'geonode',
        'PASSWORD': 'geonode',
        'HOST': 'postgis',
        'PORT': 5432,
    },
    'datastore': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'geonode-imports',
        'USER': 'geonode',
        'PASSWORD': 'geonode',
        'HOST': 'postgis',
        'PORT': 5432,
    }
}

You also have to set up the login link between GeoServer and GeoNode. Use the Docker IP to configure that in geoserver_data/security/auth/geonodeAuthProvider/config.xml:

<org.geonode.security.GeoNodeAuthProviderConfig>
  <id>-0000000000000</id>
  <name>geonodeAuthProvider</name>
  <className>org.geonode.security.GeoNodeAuthenticationProvider</className>
  <baseUrl>http://172.17.42.1:80/</baseUrl>
</org.geonode.security.GeoNodeAuthProviderConfig>

Conclusion

You can now start your GeoNode project with a simple:

$ docker-compose up

And run Django's syncdb on your database:

$ docker-compose run --rm --entrypoint='/usr/bin/python' geonode manage.py syncdb

03 Sep 2015 1:20pm GMT

Reinout van Rees: Automation for better behaviour

Now... that's a provocative title! In a sense, it is intended that way. Some behaviour is better than other behaviour. A value judgment! In the Netherlands, where I live, value judgments are suspect. If you have a comment on someone's behaviour, a common question is whether you're "better" than them. If you have a judgment, you apparently automatically think you've got a higher social status or a higher moral standing. And that's bad, apparently.

Well, I despise such thinking :-)

Absolute values

I think there are absolutes you can refer to, that you can compare to. Lofty goals you can try to accomplish. Obvious truths (which can theoretically be wrong...) that are recognized by many.

Nihilism is fine, btw. If you're a pure nihilist: I can respect that. It is an internally-logical viewpoint. Only you shouldn't complain if some other nihilist cuts you to pieces if that suits his purely-individual nihilistic purposes.

So for practical purposes I'm going to assume there's some higher goal/law/purpose/whatever that we should attain.

Take programming in python. PEP 8, python's official style guide, is recognized by most python programmers as the style guide they should adhere to. At least, nobody in my company complained when I adjusted/fixed their code to comply with PEP 8. And the addition of bin/pep8 in all of our software projects to make it easy to check for compliance didn't raise any protests. Pyflakes is even clearer, as it often points at real errors or obvious omissions.

For django projects, possible good things include:

  • Sentry integration for nicely-accessible error logging.
  • Using a recent and supported django version. So those 1.4 instances we still have at my workplace should go the way of the dodo.
  • Using proper releases instead of using the latest master git checkout.
  • Using migrations.
  • Tests.

Automation is central to good behaviour

My take on good behaviour is that you should either make it easy to do the good thing or you should make non-good behaviour visible.

As an example, take python releases. As a manager you can say "thou shalt make good releases". Oh wow. An impressive display of power. It reminds me of a certain SF comic where, to teach them a lesson, an entire political assembly was threatened with obliteration from orbit. Needless to say, the strong words didn't have a measurable effect.

You can say the same words at a programmer meeting, of course. "Let's agree to make proper releases". Yes. Right.

What do you have to do for a proper release?

  • Adjust the version in setup.py from 1.2.dev.0 to 1.2.
  • Record the release date in the changelog.
  • Tag the release.
  • Update the version number in setup.py to 1.3.dev.0.
  • Add a new header for 1.3 in the changelog.

Now... That's quite an amount of work. If I'm honest, I trust about 40% of my colleagues to make that effort every time they release a package.

There is a better way. Those very same colleagues can be relied on to make perfect releases all the time if all they have to do is to call bin/fullrelease and press ENTER a few times to do all of the above automatically. Thanks to zest.releaser.

Zest.releaser makes it easier and quicker to make good releases than it is to make bad/quick/sloppy releases by hand.

Further examples

Now... here are some further examples to get you thinking.

All of our projects are started with "nensskel", a tool to create a skeleton for a new project (python lib, django app, django site). It uses "paste script"; many people now use "cookie cutter", which serves the same purpose.

  • For all projects, a proper test setup is included. You can always run bin/test and your test case will run. You only have to fill it in.

  • bin/fullrelease, bin/pep8, bin/pyflakes: if you haven't yet installed those programs globally, how easy can I make it for you to use them???

  • If you want to add documentation, sphinx is all set up for you. The docs/source/ directory is there and sphinx is automatically run every time you run buildout.

  • The README.rst has some easy do-this-do-that comments in there for when you've just started your project. Simple quick things like "add your name in the setup.py author field". And "add a one-line summary to the setup.py and add that same one to the github.com description".

    I cannot make it much easier, right?

    Now... quite some projects still have this TODO list in their README.

Conclusion: you need automation to enable policy

You need automation to enable policy, but even that isn't enough. I cannot possibly automatically write a one-line summary for a just-generated project. So I have to make do with a TODO note in the README and in the setup.py. Which gets disregarded.

If even such simple things get disregarded, bigger things like "add a test" and "provide documentation" and "make sure there is a proper release script" will be hard to get right. I must admit to not always adding tests for functionality.

I'll hereby torture myself with a quote. "Unit testing is for programmers what washing your hands is for doctors before an operation". It is an essential part of your profession. If you go to the hospital, you don't expect to have to ask your doctor to disinfect their hands before the operation. That's expected. Likewise, you shouldn't expect your clients to explicitly ask you for software tests: those should be there by default!

Again, I admit to not always adding tests. That's bad. As a professional software developer I should make sure that at least 90% test coverage is considered normal at my company. In the cases where we measure it, coverage is probably around 50%. Which means "bad". Which also means "you're not measuring it all the time". 90% should also be normal for my own code and I also don't always attain that.

Our company-wide policy should be to get our test coverage to at least 90%. Whether or not that's our policy, we'll never reach 90% if we don't measure it.

And that is the point I want to make. You need tools. You need automation. If you don't measure your test coverage, any developer or management policy statement will be effectively meaningless. If you have a jenkins instance that's seriously neglected (70% of the projects red), you don't effectively have meaningful tests. Without a functioning jenkins instance (or travis-ci.org), you cannot properly say you're delivering quality software.

Without tooling and automation to prove your policy, your policy statements are effectively worthless. And that's quite a strong value statement :-)

03 Sep 2015 11:31am GMT

01 Sep 2015

feedPlanet Plone

Benoît Suttor: Plone and Docker

How we put new Plone sites into production in our organization (IMIO, Belgium), using tools like Puppet, Jenkins and Docker.

Introduction

In this post, I will try to explain how we put Plone sites into production in our organization (IMIO, Belgium).

For this process we use some software such as Puppet and Jenkins, but the process should be agnostic of these tools.

Short story: when a push is made on GitHub, Jenkins builds a new Docker image, pushes this image to a private Docker registry and updates the Docker image on the server.

Docker images

We create Docker images with Packer. We build .deb files with buildout, mr.bob and Jenkins. The "debian" folder used for .deb file creation is generated from a mr.bob template. We create 3 .deb files:

After creating the .deb files, Packer uses (and installs) them to create 2 Docker images:

We think this is a good way to get proper isolation.

Both images are based on a "base IMIO image", which is in turn based on the ubuntu image from the Docker hub. Each image has a size of +/- 530 MB because we have a lot of Plone eggs in our buildout/Plone site.

You could also create a simple Dockerfile which pulls a github repo and runs buildout to create your Docker image.

Once Packer has built the Docker images, Jenkins pushes them into a private Docker registry.

Private registry

For this post, imagine we have a private Docker registry at this URL: docker.private-registry.be.
We use the private registry to store our images.
Our images are created with the tags latest and YYYYMMDD-JENKINS_JOB_NUMBER (e.g. 20150127-97).
We use a private registry for each environment (staging, production, …) and we copy images between environments. Currently we automatically update the dev and staging environments, and when we see there are no problems, we copy the images to production.

Update production

We use fig to orchestrate our Docker containers (the zeo server must be started before the zeo clients). We use a script to update our Docker images. This script checks whether the currently running containers use the latest image. If not, the script downloads the latest image, stops and removes the running containers and restarts them from the new images (we use upstart scripts to start the Docker daemon).

cd /fig/directory; fig pull > /dev/null 2>&1
NAME='plone'
REGISTRY_URL='docker.private-registry.be'
INSTANCE="instance"
ZEO="zeo"
INSTANCE_NAME="instance_1"
INSTANCE_IMAGE="$REGISTRY_URL/$INSTANCE"
ZEO_IMAGE="$REGISTRY_URL/$ZEO"

LATEST_INSTANCE_IMAGE_ID=$(docker images | grep $INSTANCE_IMAGE | grep latest | awk '{print $3}')
LATEST_ZEO_IMAGE_ID=$(docker images | grep $ZEO_IMAGE | grep latest | awk '{print $3}')

TAG_INSTANCE_IMAGE_ID=$(docker images | grep $LATEST_INSTANCE_IMAGE_ID | grep -v latest | awk '{print $2}')
TAG_ZEO_IMAGE_ID=$(docker images | grep $LATEST_ZEO_IMAGE_ID | grep -v latest | awk '{print $2}')

if [ "$TAG_INSTANCE_IMAGE_ID" != "$TAG_ZEO_IMAGE_ID" ]; then
    echo "Error: instance and zeo images do not have the same tag!" 1>&2
    exit 1
fi
RUNNING=$(docker ps | grep $INSTANCE_NAME | awk '{print $2}')
LATEST="$INSTANCE_IMAGE:$TAG_INSTANCE_IMAGE_ID"
if [ "$RUNNING" != "$LATEST" ]; then
    echo "restarting $NAME"
    stop $NAME
    start $NAME
else
    echo "$NAME up to date"
fi

Storage and backup

We use Docker data containers (called "storage" in our case) for the filestorage, blobstorage and backup folders. We start Docker containers with the --volumes-from option. We have to be careful to NEVER delete a storage container (maybe we have to improve Docker for that).

We configure our buildouts to back up all data into the var/backups folder, so we launch Docker with the --volumes-from and -v options for backup and restore. Thanks to the -v option, backups are stored on the server and not in Docker. Later, backups are synced to our backup server.

With this zeo Docker image, it's easy to backup, pack and restore the ZODB. In the future, we envision using RelStorage instead of zeoserver. But currently there is no DB admin in the company (hint to our boss?).

Conclusion

Docker runs great in production!

I intend to follow docker machine, docker swarm and docker compose.

Thank you to my colleagues Cédric de Wilde and Jean-François Roche for having worked with me to set up our production Plone in Docker.

01 Sep 2015 3:22pm GMT

31 Aug 2015

feedPlanet Plone

Reinout van Rees: Lessons learned from building a django site with a reactjs front-end

My colleague Gijs Nijholt just posted his blog entry lessons learned from building a larger app with React.js, which is about the javascript/reactjs side of a django website we both (plus another colleague) recently worked on.

Simplified a bit, the origin is a big pile of measurement data, imported from csv and xml files. Just a huge list of measurements, each with a pointer to a location, parameter, unit, named area, and so on. A relatively simple data model.

The core purpose of the site is threefold:

  • Import, store and export the data. Csv/xml, basically.
  • Select a subset of the data.
  • Show the subset in a table, on a map or visualized as graphs.

The whole import, store, export is where Django shines. The model layer with its friendly and powerful ORM works fine for this kind of relational data. With a bit of work, the admin can be configured so that you can view and edit the data.

Mostly "view" as the data is generally imported automatically. Which means you discover possible errors like "why isn't this data shown" and "why is it shown in the wrong location". With the right search configuration and filters, you can drill down to the offending data and check what's wrong.

Import/export works well with custom django management commands, admin actions and celery tasks.

Now on to the front-end. With the basis being "select a subset" and then "view the subset", I advocated a simple interface with a sidebar. All selection would happen in the sidebar, the main content area would be for viewing. And perhaps some view-customization like sorting/filtering in a table column or adjusting graph colors. This is the mockup I made of the table screen:

http://reinout.vanrees.org/images/2015/efcis_tabel_mockup.png

In the sidebar you can select a period, locations/location groups and parameters. The main area is for one big table. (Or a map or a graph component).

To quickly get a first working demo, I initially threw together three django views, each with a template that extended one base template. Storing the state (=your selection) as a dict in the django session on the server side. A bit of bootstrap css and you've got a reasonable layout. Enough, as Gijs said in his blog entry, to sell the prototype to the customer and get the functional design nailed down.

Expanding the system. The table? That means javascript. And in the end, reactjs was handy to manage the table, the sorting, the data loading and so on. And suddenly state started spreading. Who manages the state? The front-end or the back-end? If it is half-half, how do you coordinate it?

Within a week, we switched the way the site worked. The state is now all on the client side. Reactjs handles the pages and the state and the table and the graph and the map. Everything on one side (whether client-side or server-side) is handiest.

Here's the current table page for comparison with the mockup shown above:

http://reinout.vanrees.org/images/2015/efcis_tabel_website.png

Cooperation is simple this way. The front-end is self-contained and simply talks to a (django-rest-framework) REST django backend. State is on the client (local storage) and the relevant parameters (=the selection) are passed to the server.

Django rest framework's class based views came in handy. Almost all requests, whether for the map, the table or the graph, are basically a filter on the same data, only rendered/serialized in a different way. So we made one base view that grabs all the GET/POST parameters and uses them for a big django query. All the methods of the subclassing views can then use those query results.

A big hurray for class based views that make it easy to put functionality like this in just one place. Fewer errors that way.
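As a sketch of that pattern (the model, field and serializer names are made up for illustration, not the actual project code):

from rest_framework.generics import ListAPIView

from myapp.models import Measurement            # hypothetical model
from myapp.serializers import TableSerializer   # hypothetical serializer


class FilteredMeasurementsView(ListAPIView):
    """Base view: turn the selection parameters into one big query."""

    def get_queryset(self):
        params = self.request.query_params
        queryset = Measurement.objects.all()
        locations = params.getlist('location')
        if locations:
            queryset = queryset.filter(location__in=locations)
        if params.get('start'):
            queryset = queryset.filter(timestamp__gte=params['start'])
        return queryset


class TableView(FilteredMeasurementsView):
    # Subclasses only decide how the shared query results are serialized.
    serializer_class = TableSerializer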

Some extra comments/tips:

  • Even with a javascript front-end, it is still handy to generate the homepage with a Django template. That way, you can generate the URLs to the various API calls as data attributes on a specific element. This prevents hard-coding in the javascript code:

    <body>
      <!-- React app renders itself into this div -->
      <div id="efcis-app"></div>
    
      <script>
        // An object filled with dynamic urls from Django (for XHR data retrieval in React app)
        var config = {};
        config.locationsUrl = '{% url 'efcis-locaties-list' %}';
        config.meetnetTreeUrl = '{% url 'efcis-meetnet-tree' %}';
        config.parameterGroupTreeUrl = '{% url 'efcis-parametergroep-tree' %}';
        config.mapUrl = '{% url 'efcis-map' %}';
        window.config = config;
      </script>
      ...
    </body>
    
  • Likewise, with django staticfiles and staticfiles' ManifestStaticFilesStorage, you get guaranteed unique filenames so that you can cache your static files forever for great performance. A one-line sketch follows below.
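    The storage class is part of Django itself (1.7 onwards):

    # settings.py: collectstatic writes hashed filenames (e.g. app.d41d8cd98f00.js),
    # so the files can safely be cached forever.
    STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'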

Lessons learned?

  • Splitting up the work is easy and enjoyable when there's a REST back-end and a javascript front-end and if the state is firmly handled by the front-end. Responsibility is clearly divided that way and adding new functionality is often a matter of looking where to implement it. Something that's hard in javascript is sometimes just a few lines of code in python (where you have numpy to do the calculation for you).

    Similarly, the user interface can boil down complex issues to just a single extra parameter sent to the REST API, making life easier for the python side of things.

  • When you split state, things get hard. In practice that means, in my experience, that the javascript front-end wins. It takes over the application and the django website is "reduced" to an ORM + admin + REST framework.

    This isn't intended as a positive/negative value statement, just as an observation. Though a javascript framework like reactjs can be used to just manage individual page elements, often after a while everything simply works better if the framework manages everything, including most/all of the state.

31 Aug 2015 4:38pm GMT

30 Aug 2015

feedPlanet Plone

Andreas Jung: collective.elasticindex: Plone integration with Elasticsearch

A better fulltext search for Plone based on Elasticsearch

30 Aug 2015 11:31am GMT

26 Aug 2015

feedPlanet Plone

Andreas Jung: Converting DITA to PDF using CSS Paged Media

Alternative approaches for the PDF generation from DITA maps.

26 Aug 2015 4:31pm GMT

25 Aug 2015

feedPlanet Plone

Reinout van Rees: Easy maintainance: script that prints out repair steps

At my work we have quite a number of different sites/apps. Sometimes it is just a regular django website. Sometimes django + celery. Sometimes it also has extra django management commands, running from cronjobs. Sometimes Redis is used. Sometimes there are a couple of servers working together....

Anyway, life is interesting if you're the one that people go to when something is (inexplicably) broken :-) What are the moving parts? What do you need to check? Running top to see if there's a stuck process running at 100% CPU. Or if something eats up all the memory. df -h to check for a disk that's full. Or looking at performance graphs in Zabbix. Checking our "sentry" instance for error messages. And so on.

You can solve the common problems that way. Restart a stuck server, clean up some files. But what about a website that depends on background jobs, run periodically from celery? If there are 10 similar processes stuck? Can you kill them all? Will they restart?

I had just such a problem a while ago. So I sat down with the developer. Three things came out of it.

  • I was told I could just kill the smaller processes. They can be re-run later. This means it is a good, loosely-coupled design: fine :-)

  • The README now has a section called "troubleshooting" with a couple of command line examples. For instance the specific celery command to purge a specific queue that's often troublesome.

    This is essential! I'm not going to remember that. There are too many different sites/apps to keep all those troubleshooting commands in my head.

  • A handy script (bin/repair) that prints out the commands that need to be executed to get everything right again. Re-running previously-killed jobs, for instance.

The script grew out of the joint debugging session. My colleague was telling me about the various types of jobs and celery/redis queues. And showing me redis commands that told me which jobs still needed executing. "Ok, so how do I then run those jobs? What should I type in?"

And I could check several directories to see which files were missing. Plus commands to re-create them. "So how am I going to remember this?"

In the end, I asked him if he could write a small program that did all the work we just did manually. Looking at the directories, looking at the redis queue, printing out the relevant commands?

Yes, that was possible. So a week ago, when the site broke down and the colleague was away on holiday, I could kill a few stuck processes, restart celery and run bin/repair. And copy/paste the suggested commands and execute them. Hurray!
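To give an idea, such a bin/repair script can stay really small. A sketch (the queue name, directories and management commands here are hypothetical, not the actual project's):

#!/usr/bin/env python
"""Print the commands needed to get everything OK again."""
import os

import redis

RESULTS_DIR = '/srv/app/var/results'            # hypothetical location
EXPECTED_FILES = ['summary.csv', 'totals.csv']  # hypothetical outputs


def main():
    client = redis.StrictRedis()
    # Jobs that were killed or never finished: print the re-run command.
    for job_id in client.lrange('pending-jobs', 0, -1):
        print('bin/django rerun_job %s' % job_id.decode('utf-8'))
    # Expected output files that are missing: print the re-create command.
    for name in EXPECTED_FILES:
        if not os.path.exists(os.path.join(RESULTS_DIR, name)):
            print('bin/django recreate_result %s' % name)


if __name__ == '__main__':
    main()

The script only prints suggestions: a human still decides what to copy/paste and execute.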

So... make your sysadmin/devops/whatever happy and...

  • Provide a good README with troubleshooting info. Stuff like "you can always run bin/supervisorctl restart all without everything breaking". Or warnings not to do that but to instead do xyz.
  • Provide a script that prints out what needs doing to get everything OK again.

25 Aug 2015 6:34pm GMT

24 Aug 2015

feedPlanet Plone

Reinout van Rees: Runs on python 3: checkoutmanager

Checkoutmanager is a five-year-old tool that I still use daily. The idea? A simple ~/.checkoutmanager.cfg ini file that lists your checkouts/clones. Like this (only much longer):

[local]
vcs = git
basedir = ~/local/
checkouts =
    git@github.com:nens/syseggrecipe.git
    git@github.com:buildout/buildout.git
    git@github.com:reinout/reinout-media.git
    git@github.com:rvanlaar/djangorecipe.git

[svn]
vcs = svn
basedir = ~/svn/
checkouts =
    svn+ssh://reinout@svn.zope.org/repos/main/z3c.recipe.usercrontab/trunk

In the morning, I'll normally do a checkoutmanager up and it'll go through the list and do svn up, git pull or hg pull -u, depending on the version control system. Much better than going through them all by hand!

Regularly, I'll do checkoutmanager st to see if I've got something I still need to commit. If you just work on one project, no problem. But if you need to do quick fixes on several projects and perhaps also store your laptop's configuration in git... it is easy to forget something:

$ checkoutmanager st
/Users/reinout/vm/veertien/efcis-site
 M README.rst

And did you ever commit something but forgot to push it to the server? checkoutmanager out tells you if you did :-)

Porting to python 3. The repo was originally on bitbucket, but nowadays I keep having to look all over my screen, looking for buttons, to get anything done there. I'm just too used to github, it seems. So after merging a pull request I finally got down to moving it to github.

I also copied over the issues and added one that told me to make sure it runs on python 3, too. Why? Well, it is the good thing to do. And... we had a work meeting last week where we said that ideally we'd want to run everything on python 3.

Two years ago I started a django site with python 3. No real problems there. I had to fix two buildout recipes myself. And the python LDAP package didn't work, but I could work around it. And supervisord didn't run so I had to use the apt-get-installed global one. For the rest: fine.

Recently I got zest.releaser to work on python 3 (that is: someone else did most of the hard work, I helped getting the pull request properly merged :-) ). For that, several test dependencies needed to be fixed for python 3 (which, again, someone else did). Checkoutmanager had the same test dependencies, so getting the test machinery to run was just a matter of updating dependencies.

What had to be done?

  • print 'something' is now a function: print('something'). Boring work, but easy.

  • Some __future__ imports, mostly for the print function and unicode characters.

  • Oh, and setting up travis-ci.org testing. Very easy to get both python 2.7 and 3.4 testing your software that way. Otherwise you keep on switching back/forth between versions yourself.

    (There's also 'tox' you can use for local multi-python-version testing in case you really really need that all the time, I don't use it myself though.)

  • Some from six.moves import xyz to work around changed imports between 2 and 3. Easy peasy, just look at the list in the documentation. (A combined snippet follows after this list.)

  • It is now try... except SomeError as e instead of try... except SomeError, e. The new syntax already works in 2.7, so there's no problem there.

  • The one tricky part was that checkoutmanager uses doctests instead of "regular" tests. And getting string comparison/printing right on both python 2 and 3 is a pain. You need an ugly change like this one to get it working. Bah.

    But: most people don't use doctests, so they won't have this problem :-)

  • The full list of changes is in this pull request: https://github.com/reinout/checkoutmanager/pull/9 .

  • A handy resource is http://python3porting.com/problems.html . Many common problems are mentioned there, including solutions.

    Django's porting tips at https://docs.djangoproject.com/en/1.8/topics/python3/ are what I recommended to my colleagues as a useful initial guide on what to do. Sane, short advice.
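Pulling a few of those idioms together in one tiny, illustrative snippet that runs unchanged on 2.7 and 3.4:

from __future__ import print_function, unicode_literals

from six.moves import urllib  # moved modules live under six.moves

try:
    response = urllib.request.urlopen('https://github.com/reinout/checkoutmanager')
except IOError as e:  # 'except ... as e' instead of 'except ..., e'
    print('Could not connect:', e)
else:
    print('HTTP status:', response.getcode())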

Anyway... Another python 3 package! (And if I've written something that's still used but that hasn't been ported yet: feel free to bug me or to send a pull request!)

24 Aug 2015 12:45pm GMT

T. Kim Nguyen: The open horizon

11 years of Plone at UW Oshkosh, and now a new start

24 Aug 2015 11:20am GMT

T. Kim Nguyen: How to make URLs clickable in PloneFormGen field help text

Plone description fields and PloneFormGen field help text are plain text, not rich text (HTML). Here's how to make URLs they contain clickable.

24 Aug 2015 11:19am GMT

23 Aug 2015

feedPlanet Plone

Davide Moro: Kotti CMS - frontend decoupled from backend. How we did it (part 2)


In the previous article http://davidemoro.blogspot.it/2015/07/kotti-cms-successful-story-part-1.html we have seen that:

Now we will see:

Here you can see some screenshots, implementation details and links.

Project setup

The installation folder is a package that contains all the application-specific settings, the database configuration, which packages your project will need and where they live.

From https://pypi.python.org/pypi/mr.developer.

The installation folder is a "one command install" meta package:

so let the computer work for us and have fun.
See:

Populators

Populators are functions with no arguments that get called on system startup; they may then make automatic changes to the database, like content initialization.

Populators are very important because when you install the project folder during development or on the very first production instance you'll find all the most important contents and sections by default. Things will be created automatically if the database is empty, so you don't obtain a blank site on the very first install.

Populators are also good for improving the end users' (I mean editors') first impression of the platform, because they see all the main sections already in place.
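A populator can be as small as this sketch (the 'courses' section is just an example name, and the function is hooked up via the kotti.populators setting):

from kotti.resources import Document, get_root

def populate():
    # Runs at startup; only fills an empty site, existing content is kept.
    root = get_root()
    if 'courses' not in root.keys():
        root['courses'] = Document(title=u'Courses')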

See:

Private backend area

Turning Kotti CMS into a private content administration area is quite easy:

Later I created a generic package that does all of this for you (kotti_backend):

so things are even easier now (install kotti_backend, done).

Multilingual

kotti_multilingual is your friend.

The go-to-frontend link, translation management and a link to the online technical documentation based on Sphinx

See:

Elastic search

kotti_es provides ElasticSearch integration for fulltext search. This plugin needs more love and a complete refactor (it was built in a hurry and I'm not yet satisfied), but it has proved to have no known issues after months of intensive usage.
Probably things will change; I hope others with the same needs will contribute.

See:

Main navigation and header/footer links

You can use the kotti_actions plugin if you want to implement footer links, header links or even nested main navigation menus. Obviously kotti_actions is meant to be used with a decoupled frontend.

As you can see, a custom colour can be assigned to courses, navigation links, sections and every kind of object, thanks to the json annotations column provided by default by Kotti. So you can add arbitrary fields.

How the multilevel menu looks on the public website

See:

Portlets

The main layout based on box managers for portlets

kotti_boxes is your friend. This plugin is meant to be used with a decoupled frontend. Implementing portlets was quite quick because we didn't need to customize the private backend area.

You can define different page layouts for each resource type (home page, news, courses, etc) and show boxes in well defined areas (box managers), for example LeftBoxManager, AboveFooterBoxManager and so on.

So boxes and box managers are just non-publishable contents, and you can:

Banner portlets with links

See:

Editor toolbar

As you can see, if you are logged in the frontend will show an editor toolbar with:

Info and links to the backend, edit and folder contents


or see the website exactly as an anonymous user does (a very common customer request):

Anonymous view

You can also add more features, for example direct edit links for images or portlets or live edit features.

Talking about a pure Python solution, you might implement this feature with a Pyramid Tween (I hope I'll have enough spare time to do that. Anyone willing to contribute? You are very welcome, contact me!):

Course types (custom content types)

The course view with portlets and collapsable paragraphs

They are a sort of rich document with an image attachment column, integrated with an external ecommerce site. When you add a course type, an event automatically initializes its subobjects and the main image attachment by default, so less work for editors.

In addition, all image content types and image attachments are addable or editable just by allowed users, thanks to custom roles and global or local permission sharing.

Collapsable paragraphs are implemented with custom content types not directly reachable on the frontend.

There are a lot of fields on this content type, so they are grouped together using fieldsets.
Editors can also create a base private course model and then copy and paste it when new courses should be added.

Sometimes you want to prevent particular object types from being addable on the site root; this way things will always remain tidy (why would you add a course at the very root of the site?).

See:

Windows and MySQL issues and tips

Kotti can be installed on Windows, but I strongly suggest adopting a Unix-like server with Postgresql instead of MySQL as the storage layer:

Tests

All the software is tested. Very happy with the py.test framework.

See:

Other third party Kotti plugins

I've used the following third party plugins that can be used on a standard Kotti environment:

See also the full list of available plugins:

Photos and credits

All the screenshots shown in this article are taken from the "MIP Politecnico di Milano's graduate school of business" website: http://www.mip.polimi.it/en

So the MIP's website backend is powered by Pylons/Pyramid and Kotti CMS; I'll write a non-technical case study soon. In the meantime, many thanks to:

Results

You can consider Kotti a very good, secure, flexible, battle-tested and easy-to-approach solution for important customers.

All Kotti posts published by @davidemoro

Next steps

Reading this article you should find all the bricks you need if you want to implement a public website decoupled from its backend with Kotti.

Now I'm assembling the above bricks in order to provide an "easy to install" solution with the same pattern I've described here. This is my roadmap:

It can be considered a good starting point for:

So stay tuned, and if you like this work please consider contributing with

or why not sponsorships!

And if you want to know more about Kotti and you are attending the EuroPython 2015 conference in Bilbao, don't miss Andreas Kaiser's talk "Standing on the Shoulders of Giants: The Kotti Web Application Framework". I'll join the sprint (remotely) at the end of EuroPython, so see you on IRC (http://webchat.freenode.net/?channels=kotti). If you want to stay tuned, follow https://twitter.com/KottiCMS.

23 Aug 2015 10:05pm GMT

Davide Moro: Introducing substancek. A Kotti project

Let me introduce substancek, a Kotti (http://kotti.pylonsproject.org) project.

What is substancek?

substancek is:

It is only an additional layer upon the following opinionated stack:

with the following motto:

"""(even) better development experience and complete frontend freedom"""

and introduces (or rather promotes) the concept of a private admin area (backend) decoupled from the public side (frontend) of your web applications built with Kotti.

In other words it is a set of technologies addressed under the substancek brand that let you extend Kotti in order to use it just as a private backend administration area for your application data.

So you are still using plain Kotti, with an additional package (at least kotti_backend, depending on what you need).

If you want to know more, I've discussed the benefits of the frontend-decoupled-from-backend pattern and the reasons behind it. See http://davidemoro.blogspot.it/2015/07/kotti-cms-successful-story-part-1.html

substancek name

Tribute to:

When substancek is for you

Any project of any size (from micro to XXL) involving content management that needs:

So if your project needs (now or in future iterations) one or more of:

you might consider substancek (kotti_backend + Kotti + Pyramid + SQLAlchemy) all the more.

For example:

Note well: if you don't need workflows, don't be scared: there is no overkill. You can use a one-state workflow or no workflow at all, for example. No hierarchical data? Use non-nestable resources, and so on. If it turns out later that you need them, it will be quite easy to convert your code.

Alternatives

You can use plain Kotti, without substancek's kotti_backend setup. Or if you prefer NoSQL, try the excellent substanced (substanced + Pyramid + ZODB). Both solutions are lightweight, tested, well documented and easy to learn. Alternatively, if you really need a minimal and unopinionated solution, you might use plain Pyramid.

Do you need something more? You might consider using Plone (https://plone.org) as a framework.

Anyway, the good news is that Python has plenty of good options.

substancek architecture details

As already mentioned, the private admin area (backend) and the rest of the application (frontend) are two completely different applications with different settings, different views and shared authentication.

Assuming you are going to use PasteDeploy to run your application, let's consider the following configuration file setup:

backend-dev.ini

[app:kotti]
use = egg:kotti

...
pyramid.includes =
    pyramid_debugtoolbar
    pyramid_tm
    kotti_backend.views.override_root_view

kotti.configurators =
    kotti_tinymce.kotti_configure
    kotti_backend.kotti_configure

kotti.use_workflow = kotti_backend:workflows/simple_backend.zcml

kotti_backend.goto_frontend = 1

This is a normal Kotti setup with:

See more options in the kotti_backend README file:

frontend-dev.ini

[app:main]
use = egg:Kotti
...

kotti.use_workflow = kotti_backend:workflows/simple_backend.zcml

kotti.configurators =
    your_package.kotti_configure

kotti.base_includes =
    kotti
    kotti.views

In the frontend configuration file we share the same workflow used in the admin interface (kotti.use_workflow).

One of the most important settings is the kotti.base_includes override: here we decide what will be loaded into our application. We omit all the Kotti views loaded by default in the standard setup and load only what we want to include, where:

kotti.configurators typically auto-includes your package and tells Kotti what should be included in your application (pyramid.includes). See the Kotti documentation for more info.

In other words:

"what is not loaded, it doesn't exist"

so the final result is that nothing is exposed on the frontend except what you decide to load, giving you extreme control. You can register just one view for your application or configure a complex setup for a CMS-like application: it's up to you to register only the views your application needs and no more. This way you can use completely different frontend frameworks and different versions of Javascript libraries; you have no css/js conflicts, no need to hide unneeded things, and you decide which resources will be published on the frontend.
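As a sketch of how small such a frontend can be, here is a hypothetical "one JSON view only" application (all names are illustrative):

from kotti.resources import Document

def frontpage_view(context, request):
    return {'title': context.title, 'body': context.body}

def includeme(config):
    # The only view this frontend exposes; every other Kotti view
    # stays private in the backend application.
    config.add_view(
        frontpage_view,
        context=Document,
        name='',
        renderer='json',
    )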

See also another advanced usage pattern, "Using Kotti as a library": http://kotti.readthedocs.org/en/latest/developing/advanced/as-a-library.html

development.ini

# See http://pythonpaste.org/deploy/#paste-composite-factory
[composite:main]
use = egg:Paste#urlmap
/ = config:frontend-dev.ini
/admin = config:backend-dev.ini


[server:main]
use = egg:waitress#main
host = 127.0.0.1
port = 5000

The (optional) development.ini shows how to configure a composite application with different mount points. You can change /admin to /cms or /whateveryouwant, depending on your needs.

Examples

You can check out the https://github.com/substancek/substancek_cms_theme package if you want to see a (quite complex) example in action.

I'm going to provide more and simpler examples (e.g. a pretend micro application); see the roadmap.

What are the substancek related packages

Here you can see the whole substancek ecosystem:

Who's using substancek technologies

MIP - graduate school of business

The MIP (Politecnico di Milano graduate school of business - www.mip.polimi.it/en) uses substancek technology for its private admin interface. This approach is so flexible that it lets you use Kotti as a private admin content management area and even implement your public views using other frameworks or non-Python languages (for example PHP+Symfony2).

See:

substancek_cms_theme

This is a work-in-progress, opinionated CMS frontend implementation that reuses existing Kotti templates and logic.

The look and feel is the same you get with a standard Kotti installation, but it shows how to distribute a Python package integrated with a Yeoman setup (http://yeoman.io) that provides:

Once installed, you'll see the admin interface by visiting http://localhost:5000/cms.
See the code here: https://github.com/substancek/substancek_cms_theme

Next steps

If you want to contribute, there is a lot to do:

Contributions, feedback or pings like "hey, I'm going to use Pyramid/Kotti for my next project" will be very much appreciated!

Documentation

All Kotti posts published by @davidemoro

Twitter links

23 Aug 2015 10:03pm GMT