03 Jul 2015

Planet Plone

Kees Hink: Removing a Dexterity behavior in Generic Setup isn't possible

Post retracted. As Tres remarks, the output posted originally does not indicate a problem in GS but an invalid XML file. I had tried setting the attribute remove="true" on the element value="plone.app.multilingual.dx.interfaces.IDexterityTranslatable" inside the property name="behaviors". If the problem does exist, the workaround might be to set purge="true" on the property and list all the behaviors.

03 Jul 2015 9:33am GMT

David "Pigeonflight" Bain: Help, my updated posts keep bubbling to the top of the Planet

I kept noticing that whenever I updated certain posts they would end up at the top of the Planet Plone RSS feed aggregator. I haven't dug too deeply into the issue, but it seems to be a mixture of the way the Planet is configured and the way default blog feeds are presented by Blogger. Thankfully, the default Blogger feed format can be easily changed. Previously the feed I used for Planet Plone

03 Jul 2015 4:11am GMT

01 Jul 2015

Planet Plone

Benoît Suttor: New recipe, collective.recipe.buildoutcache

This recipe generates a buildout-cache archive. We use a pre-generated buildout-cache folder to speed up buildout runs.

Introduction

This recipe generates a buildout-cache archive. We use a pre-generated buildout-cache folder to speed up buildout runs. The archive contains one single buildout-cache folder. In this folder there are two subfolders: eggs and downloads.

Before starting a buildout, we download and extract the buildout-cache and use it in our buildout. We add the eggs-directory and download-cache parameters to the [buildout] section like this:

[buildout]

eggs-directory = buildout-cache/eggs
download-cache = buildout-cache/downloads

Use case

In our organization we have a Jenkins server. We created a Jenkins job which generates buildout-cache.tar.gz every night and pushes it to a file server.

We also use Docker: our Dockerfiles download and untar the buildout-cache before starting buildout, so creating a Docker image becomes much faster!
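For illustration, a minimal Python 3 sketch of that download-and-extract step (the archive URL and paths are hypothetical, not taken from the post) could look like this:

import tarfile
import urllib.request

# Hypothetical location of the nightly archive produced by the Jenkins job.
ARCHIVE_URL = "https://files.example.org/buildout-cache.tar.gz"
ARCHIVE_PATH = "buildout-cache.tar.gz"

# Download the pre-generated cache...
urllib.request.urlretrieve(ARCHIVE_URL, ARCHIVE_PATH)

# ...and extract it next to buildout.cfg, so the eggs-directory and
# download-cache settings shown earlier point into the extracted folder.
with tarfile.open(ARCHIVE_PATH, "r:gz") as archive:
    archive.extractall(".")

After extraction, running ./bin/buildout picks up the cached eggs and downloads instead of fetching everything again.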

How it works

You simply have to add a part using this recipe to your buildout project.

Like this:

[buildout]

parts =
    ...
    makebuildoutcachetargz

[makebuildoutcachetargz]
recipe = collective.recipe.buildoutcache

You can use parameters to change the name of your archive, use a working directory other than ./tmp, or use a buildout file other than buildout.cfg for the egg downloads; see https://github.com/collective/collective.recipe.buildoutcache.

To install the recipe part, run:

./bin/buildout install makebuildoutcachetargz

And then start the recipe script:

./bin/makebuildoutcachetargz

Conclusion

Use collective.recipe.buildoutcache and decrease the time lost waiting for your buildout ;)

01 Jul 2015 2:37pm GMT

30 Jun 2015

Planet Plone

Davide Moro: Python mock library for ninja testing

If you are going to write unit tests with Python you should consider this library: Python mock (https://pypi.python.org/pypi/mock).

Powerful, elegant, easy, documented (http://www.voidspace.org.uk/python/mock/)...
and standard: mock is now part of the Python standard library, available as unittest.mock in Python 3.3 onwards.
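For code that has to run on both Python 2 and 3, a common import shim is to fall back to the standalone package when unittest.mock is unavailable:

try:
    # Python 3.3+ ships mock in the standard library.
    from unittest import mock
except ImportError:
    # Older Pythons fall back to the standalone mock package from PyPI.
    import mock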

Simple example

Let's suppose you have an existing validator function based on a dbsession import used for querying a relational database. If you are going to write unit tests, you should focus on units without involving real database connections.

validators.py
from yourpackage import DBSession

def validate_sku(value):
    ...
    courses = DBSession.query(Course).\
        filter(Course.course_sku == value).\
        filter(Course.language == context.language).\
        all()
    # validate data returning a True or False value
    ...

tests/test_validators.py
def test_validator():
    import mock
    with mock.patch('yourpackage.validators.DBSession') as dbsession:
        instance = dbsession
        instance.query().filter().filter().all.return_value = [mock.Mock(id=1)]
        from yourpackage.validators import validate_sku
        assert validate_sku(2) is True

In this case the chained query, the two filter calls and the final all invocation on DBSession will produce our mock result (a list with one mock item, an object with an id attribute).
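Mock objects also record how they were called, so the same test could verify the interaction as well; a minimal sketch of extra assertions that could go inside the with block, right after the existing assert:

        # still inside mock.patch, after calling validate_sku
        assert instance.query.called        # a query was built on DBSession
        assert instance.query().all.called  # ...and actually executed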

Brilliant! And this is just one simple example: check out the official documentation for further information:

30 Jun 2015 10:05pm GMT

29 Jun 2015

Planet Plone

Davide Moro: Pip for buildout folks

... or buildout for pip folks.

In this article I'm going to talk about how to manage software (Python) projects with buildout or pip.

What do I mean by "project"?

A package that contains all the application-specific settings, database configuration, which packages your project needs and where they live.

Projects should be managed like software if you want to assure the needed quality:

This blog post is not:

Buildout

I've been using buildout for many years and we are still good friends.
Buildout definition (from http://www.buildout.org):

"""
Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later.
"""

With buildout you can build and share reproducible environments, not only for Python based components.

Before buildout (if I remember well, I first started using buildout in 2007, probably during the very first Plone Sorrento sprint) it was a real pain to share a complete and working development environment pointing at the right versions of several repositories, etc. With buildout it became a matter of minutes.

Image from https://pypi.python.org/pypi/mr.developer.
Probably with pip there is less fun because there isn't a funny picture that celebrates it?!

Buildout configuration files are modular and extensible (not only on a per-section basis). There are a lot of buildout recipes; probably the one I prefer is mr.developer (https://pypi.python.org/pypi/mr.developer). It allowed me to fetch different versions of the repositories depending on the buildout profile in use, for example:

You can accomplish this by creating different configurations for different profiles, like this:


[buildout]

...

[sources]

your_plugin = git git@github.com:username/your_plugin.git

...


I don't like calling ./bin/buildout -c [production|devel].cfg with the -c syntax because it is too error prone. I prefer to create a symbolic link named buildout.cfg pointing to the right buildout profile, so you perform the same command both in production and during development, always typing:


$ ./bin/buildout


This way you'll avoid nasty errors like launching the wrong profile in production. So just use the plain ./bin/buildout command and live happily.

With buildout you can show and freeze all the installed versions of your packages by providing a versions.cfg file.

Here you can see my preferred buildout recipes:

Buildout or not, one of the most common needs is the ability to switch between development checkouts and tags depending on whether you are in development or production mode, and to reproduce the same software later. I can't imagine managing software installations without this quality assurance.

More info: http://www.buildout.org

Pip

Let's see how to create reproducible environments with develop or tags dependencies for production environments with pip (https://pip.pypa.io/en/latest/).

Basically you specify your devel requirements in a devel-requirements.txt file (the name doesn't matter), pointing to the develop/master/trunk of your repositories.

There is another file that I call production-requirements.txt (the file name doesn't matter), which is equivalent to the previous one, but:

This way it is quite simple to see which releases are installed in production mode, with no cryptic hash codes.

You can now use production-requirements.txt as a template for generating an easy-to-read requirements.txt. You'll use this file when installing in production.

You can create a regular Makefile if you don't want to repeat yourself, or write scripts if you prefer:

For example, if you are particularly lazy, you can create a script that builds your requirements.txt file using production-requirements.txt as a template.
Here is a simple script, just an example, that shows how to build your requirements.txt, omitting lines with grep, sed, etc.:

#!/bin/bash

pip install -r production-requirements.txt
pip freeze -r production-requirements.txt | grep -v mip_project | sed '1,2d' > requirements.txt

When running this script, you should activate another Python environment in order to not pollute the production requirements list with development stuff.

If you want to make your software reusable and as flexible as possible, you can add a regular setup.py module with optional dependencies that you can activate depending on what you need. For example, in devel mode you might want to activate an extra called docs (see -e .[docs] in devel-requirements.txt) with optional Sphinx dependencies, or in production you can install MySQL-specific dependencies (-e .[mysql]).

In the examples below I'll also show how to refer to external requirements files (a URL or a local file).

setup.py

You can define optional extra requirements in your setup.py module.

mysql_requires = [
    'MySQL-python',
]

docs_requires = [
    'Sphinx',
    'docutils',
    'repoze.sphinx.autointerface',
]
...

setup(
    name='mip_project',
    version=version,
    ...
    extras_require={
        'mysql': mysql_requires,
        'docs': docs_requires,
        ...
    },
    ...
)

devel-requirements.txt

Optional extra requirements can be activated using the [] syntax (see -e .[docs]).
You can also include external requirement files or URLs (see -r) and tell pip how to fetch specific dependencies from version control (see -e git+...#egg=your_egg).

-r https://github.com/.../.../blob/VERSION/requirements.txt
 
# Kotti
Kotti[development,testing]==VERSION

# devel (not to be added in production)
zest.releaser

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git#egg=kotti_boxes
...

-e .[docs]

production-requirements.txt

The production requirements should point to tags (see @VERSION).

-r https://github.com/Kotti/Kotti/blob/VERSION/requirements.txt
Kotti[development,testing]==VERSION

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@0.1.1#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git@0.1.3#egg=kotti_boxes
...

-e .[mysql]

requirements.txt

The requirements.txt file is autogenerated from the production-requirements.txt model file. All the installed versions are appended in alphabetical order at the end of the file; it can be a very long list.
All the tag versions provided in production-requirements.txt are automatically converted to commit hashes (@VERSION -> @3c1a191...).

Kotti==1.0.0a4

# Third party's eggs
kotti-newsitem==0.2
kotti-calendar==0.8.2
kotti-link==0.1
kotti-navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@3c1a1914901cb33fcedc9801764f2749b4e1df5b#egg=kotti_actions-dev
-e git+https://github.com/truelab/kotti_boxes.git@3730705703ef4e523c566c063171478902645658#egg=kotti_boxes-dev
...

## The following requirements were added by pip freeze:
alembic==0.6.7
appdirs==1.4.0
Babel==1.3
Beaker==1.6.4
... 

Final consideration

Use pip to install Python packages from PyPI.

If you're looking for management of fully integrated cross-platform software stacks, buildout is for you.

With buildout, no Python code is needed unless you are going to write new recipes (the plugin mechanism provided by buildout to add new functionality to your build; see http://buildout.readthedocs.org/en/latest/docs/recipe.html).

With pip you can also manage cross-platform stacks, but you lose the flexibility of buildout recipes and inheritable configuration files.

Anyway, if you consider buildout too magic, or you just need a way to switch between production and development mode, pip works as well.

Links

If you need more info, have a look at the following URLs:

Other useful links:

Update 20150629

If you want an example I've created a pip-based project for Kotti CMS (http://kotti.pylonsproject.org):

29 Jun 2015 9:26pm GMT

Martijn Faassen: Build a better batching UI with Morepath and Jinja2

Introduction

This post is the first in what I hope will be a series on neat things you can do with Morepath. Morepath is a Python web micro framework with some very interesting capabilities. What we'll look at today is what you can do with Morepath's link generation in a server-driven web application. While Morepath is an excellent fit for creating REST APIs, it also works well for server applications. So let's look at how Morepath can help you create a batching UI.

On the special occasion of this post we also released a new version of Morepath, Morepath 0.11.1!

A batching UI is a UI where you have a larger amount of data available than you want to show to the user at once. You instead partition the data in smaller batches, and you let the user navigate through these batches by clicking a previous and next link. If you have 56 items in total and the batch size is 10, you first see items 0-9. You can then click next to see items 10-19, then items 20-29, and so on until you see the last few items 50-55. Clicking previous will take you backwards again.
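Just to make the arithmetic concrete, a tiny sketch using the numbers above:

BATCH_SIZE = 10
TOTAL = 56

# Batches start at 0, 10, 20, 30, 40, 50; the last batch only holds items 50-55.
batch_starts = list(range(0, TOTAL, BATCH_SIZE))
assert batch_starts == [0, 10, 20, 30, 40, 50]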

In this example, a URL to see a single batch looks like this:

http://example.com/?start=20

to see items 20-29. You can also approach the application like this:

http://example.com/

to start at the first batch.

I'm going to highlight the relevant parts of the application here. The complete example project can be found on Github. I have included instructions on how to install the app in the README.rst there.

Model

First we need to define a few model classes to define the application. We are going to go for a fake database of fake persons that we want to batch through.

Here's the Person class:

class Person(object):
    def __init__(self, id, name, address, email):
        self.id = id
        self.name = name
        self.address = address
        self.email = email

We use the neat fake-factory package to create some fake data for our fake database; the fake database is just a Python list:

fake = Faker()

def generate_random_person(id):
    return Person(id, fake.name(), fake.address(), fake.email())

def generate_random_persons(amount):
    return [generate_random_person(id) for id in range(amount)]

person_db = generate_random_persons(56)

So far nothing special. But next we create a special PersonCollection model that represents a batch of persons:

class PersonCollection(object):
    def __init__(self, persons, start):
        self.persons = persons
        if start < 0 or start >= len(persons):
            start = 0
        self.start = start

    def query(self):
        return self.persons[self.start:self.start + BATCH_SIZE]

    def previous(self):
        if self.start == 0:
            return None
        start = self.start - BATCH_SIZE
        if start < 0:
            start = 0
        return PersonCollection(self.persons, start)

    def next(self):
        start = self.start + BATCH_SIZE
        if start >= len(self.persons):
            return None
        return PersonCollection(self.persons, self.start + BATCH_SIZE)

To create an instance of PersonCollection you need two arguments: persons, which is going to be our person_db we created before, and start, which is the start index of the batch.

We define a query method that queries the persons we need from the larger batch, based on start and a global constant, BATCH_SIZE. Here we do this by simply taking a slice. In a real application you'd execute some kind of database query.
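For instance, with SQLAlchemy (not used in this example, and assuming a hypothetical self.session attribute) the query method might look roughly like this:

    def query(self):
        # Hypothetical SQLAlchemy variant: fetch one batch from the database
        # instead of slicing an in-memory list.
        return (
            self.session.query(Person)
            .order_by(Person.id)
            .offset(self.start)
            .limit(BATCH_SIZE)
            .all()
        )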

We also define previous and next methods. These give back the previous PersonCollection and next PersonCollection. They use the same persons database, but adjust the start of the batch. If there is no previous or next batch as we're at the beginning or the end, these methods return None.

There is nothing directly web related in this code, though of course PersonCollection is there to serve our web application in particular. But as you notice there is absolutely no interaction with request or any other parts of the Morepath API. This makes it easier to reason about this code: you can for instance write unit tests that just test the behavior of these instances without dealing with requests, HTML, etc.

Path

Now we expose these models to the web. We tell Morepath what models are behind what URLs, and how to create URLs to models:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

Let's look at this in more detail:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

This is not a lot of code, but it actually tells Morepath a lot:

  • When you go to the root path / you get the instance returned by the get_person_collection function.
  • This URL takes a request parameter start, for instance ?start=10.
  • This request parameter is optional. If it's not given it defaults to 0.
  • Since the default is a Python int object, Morepath rejects any requests with request parameters that cannot be converted to an integer as a 400 Bad Request. So ?start=11 is legal, but ?start=foo is not.
  • When asked for the link to a PersonCollection instance in Python code, as we'll see soon, Morepath uses this information to reconstruct it.

Now let's look at get_person:

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

This uses a path with a parameter in it, id, which is passed to the get_person function. It explicitly sets the system to expect an int and reject anything else, but we could've used id=0 as a default parameter instead here too. Finally, get_person can return None if the id is not known in our Python list "database". Morepath automatically turns this into a 404 Not Found for you.

View & template for Person

While PersonCollection and Person instances now have a URL, we didn't tell Morepath yet what to do when someone goes there. So for now, these URLs will respond with a 404.

Let's fix this by defining some Morepath views. We'll do a simple view for Person first:

@App.html(model=Person, template='person.jinja2')
def person_default(self, request):
    return {
        'id': self.id,
        'name': self.name,
        'address': self.address,
        'email': self.email
    }

We use the html decorator to indicate that this view delivers data of Content-Type text/html, and that it uses a person.jinja2 template to do so.

The person_default function itself gets a self and a request argument. The self argument is an instance of the model class indicated in the decorator, so a Person instance. The request argument is a WebOb request instance. We give the template the data returned in the dictionary.

The template person.jinja2 looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Morepath batching demo</title>
  </head>
  <body>
    <p>
      Name: {{ name }}<br/>
      Address: {{ address }}<br/>
      Email: {{ email }}<br />
    </p>
  </body>
</html>

Here we use the Jinja2 template language to render the data to HTML. Morepath out of the box does not support Jinja2; it's template language agnostic. But in our example we use the Morepath extension more.jinja2 which integrates Jinja2. Chameleon support is also available in more.chameleon in case you prefer that.

View & template for PersonCollection

Here is the view that exposes PersonCollection:

@App.html(model=PersonCollection, template='person_collection.jinja2')
def person_collection_default(self, request):
    return {
        'persons': self.query(),
        'previous_link': request.link(self.previous()),
        'next_link': request.link(self.next()),
    }

It gives the template the list of persons that is in the current PersonCollection instance so it can show them in a template as we'll see in a moment. It also creates two URLs: previous_link and next_link. These are links to the previous and next batch available, or None if no previous or next batch exists (this is the first or the last batch).

Let's look at the template:

<!DOCTYPE html>
<html>
 <head>
   <title>Morepath batching demo</title>
  </head>
  <body>
    <table>
      <tr>
        <th>Name</th>
        <th>Email</th>
        <th>Address</th>
      </tr>
      {% for person in persons %}
      <tr>
        <td><a href="{{ request.link(person) }}">{{ person.name }}</a></td>
        <td>{{ person.email }}</td>
        <td>{{ person.address }}</td>
      </tr>
      {% endfor %}
    </table>
    {% if previous_link %}
    <a href="{{ previous_link }}">Previous</a>
    {% endif %}
    {% if next_link %}
    <a href="{{ next_link }}">Next</a>
    {% endif %}
  </body>
</html>

A bit more is going on here. First it loops through the persons list to show all the persons in a batch in an HTML table. The name in the table is a link to the person instance; we use request.link() in the template to create this URL.

The template also shows a previous and next link, but only if they're not None, so when there is actually a previous or next batch available.

That's it

And that's it, besides a few details of application setup, which you can find in the complete example project on Github.

There's not much to this code, and that's how it should be. I invite you to compare this approach to a batching UI to what an implementation for another web framework looks like. Do you put the link generation code in the template itself? Or as ad hoc code inside the view functions? How clear and concise and testable is that code compared to what we just did here? Do you give back the right HTTP status codes when things go wrong? Consider also how easy it would be to expand the code to include searching in addition to batching.

Do you want to try out Morepath now? Read the very extensive documentation. I hope to hear from you!

29 Jun 2015 2:48pm GMT

Alex Clark: Pillow 2.9.0 Is Almost Out

Pillow 2.9.0 will be released on July 1, 2015.

Pre-release

Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

Report issues

As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

Thank you!

29 Jun 2015 12:01am GMT

27 Jun 2015

Planet Plone

Alex Clark: Plone on Heroku

Dear Plone, welcome to 2015

Picture it. The year was 2014. I was incredibly moved and inspired by this blog entry:

Someone had finally done it. (zupo in this case, kudos!) Someone had finally beaten me to implementing the dream of git push heroku plone. And I could not have been happier.

But something nagging would not let go: I still didn't fully understand how the buildpack worked. Today I'm happy to say: that nag is gone and I now fully understand how Heroku buildpacks work… thanks to… wait for it… a Buildpack for Plock.

Plock Buildpack

There are a lot of the same things going on in both the Plone Buildpack and the Plock Buildpack, with some exceptions.

Experimental

The Plock buildpack is highly experimental, still in development and possibly innovative. Here's what it currently does:

  • Configures Python user site directory in Heroku cache
  • Installs setuptools in user site
  • Installs pip in user site
  • Installs Buildout in user site
  • Installs Plone in cache
  • Copies cache to build directory
  • Installs a portion of "user Plone" (the Heroku app's buildout.cfg) in the build directory (not the cache)
  • Relies on the app to install the remainder (the Heroku app's heroku.cfg). Most importantly, the app runs Buildout, which finishes quickly thanks to the cache, and configures the port, which is only available to the app (not the buildpack).

Here's an example:

# buildout.cfg
[buildout]
extends = https://raw.github.com/plock/pins/master/plone-4-3

[user]
packages = collective.loremipsum

# heroku.cfg
[buildout]
extends = buildout.cfg

[plone]
http-address = ${env:PORT}

# Procfile
web: buildout -c heroku.cfg; plone console

Opinionated

The Plock Buildpack is built on Plock, an "opinionated" installer for Plone. It may eventually use Plock itself, but currently only uses Plock Pins.

27 Jun 2015 11:09pm GMT

26 Jun 2015

Planet Plone

Blue Dynamics: Boosting Travis CI: Python and Buildout caching

zc.buildout is Python's swiss army knife for building complex environments. MacGyver would love it. Travis CI together with GitHub is a wonderful service for open source projects to do collaborative development hand in hand with Test Driven Development and Continuous Integration.

But complex Python builds take their time, mainly because of the long list of dependencies and the bunch of downloads. It is boring to wait 15 minutes for a test run.

So it was for collective.solr, a package that integrates the excellent Solr open source search platform (written in Java, from the Apache Lucene project) with the Plone Enterprise CMS. In addition to the complex Plone build, it downloads and configures a complete Solr for the test environment.

For a while now, Travis CI has offered caching on its container-based infrastructure.

Using it with buildout is easy once one knows how.

  1. The file .travis.yml configures Travis CI; open it.
  2. Set language: python if you have not already.
  3. An important setting is sudo: false, which switches explicitly to the container-based infrastructure. This is the default for projects created at Travis CI before 2015-01-01, but explicit is better than implicit!
  4. Next the caching is defined. We also enable pip caching. This looks like so:
    cache:
      pip: true
      directories:
        - $HOME/buildout-cache
    
  5. In order to create our caching directories a before_install step is needed. In this step we install buildout too. Note: there is no need to use the old and busted bootstrap.py any longer (except for old and busted Plone builds; since Plone 4.3 at least it works).
    before_install:
      - mkdir -p $HOME/buildout-cache/{eggs,downloads}
      - virtualenv .
      - bin/pip install --upgrade pip setuptools zc.buildout
    
  6. Next we need to tweak the file travis.cfg. This is the buildout configuration file used for Travis. Under the [buildout] section add the lines:

    eggs-directory = /home/travis/buildout-cache/eggs
    download-cache = /home/travis/buildout-cache/downloads
    
    Note: the $HOME environment variable is not available as a buildout variable, so we need to hard-code it to /home/travis; Travis CI cannot guarantee that this path will stay the same forever. So if there is a way to access environment variables before buildout runs the parts, please let me know.

The second time Travis CI built the project it took about 3 minutes instead of 15!
The full files as we use them for collective.solr:

.travis.yml

language: python
# with next we get on container based infrastructure, this enables caching
sudo: false
python:
  - 2.7
cache:
  pip: true
  directories:
    - $HOME/buildout-cache
env:
  - PLONE_VERSION=4.3.x SOLR_VERSION=4.10.x
  - PLONE_VERSION=5.0.x SOLR_VERSION=4.10.x
before_install:
  - mkdir -p $HOME/buildout-cache/{eggs,downloads}
  - virtualenv .
  - bin/pip install --upgrade pip setuptools zc.buildout
install:
  - sed -ie "s#plone-x.x.x.cfg#plone-$PLONE_VERSION.cfg#" travis.cfg
  - sed -ie "s#solr-x.x.x.cfg#solr-$SOLR_VERSION.cfg#" travis.cfg
  - bin/buildout -N -t 20 -c travis.cfg
script:
  - bin/code-analysis
  - bin/test
after_success:
  - pip install -q coveralls
  - coveralls

travis.cfg

[buildout]
extends =
    base.cfg
    plone-x.x.x.cfg
    solr.cfg
    solr-x.x.x.cfg
    versions.cfg
parts +=
    code-analysis

# caches, see also .travis.yml
# one should not depend on '/home/travis' but it seems stable in containers.
eggs-directory = /home/travis/buildout-cache/eggs
download-cache = /home/travis/buildout-cache/downloads

[code-analysis]
recipe = plone.recipe.codeanalysis
pre-commit-hook = False
# return-status-codes = True

We may enhance this, so you can always look at its current state at github/collective/collective.solr.

image by "Albert Einstein - Start 1" by DLR German Aerospace Center at Flickr

26 Jun 2015 7:35am GMT

25 Jun 2015

Planet Plone

Andreas Jung: The Docker way on dealing with "security"

RANT: The Docker developers are so serious about security

25 Jun 2015 4:04am GMT

23 Jun 2015

Planet Plone

Andreas Jung: Moving forward with Elasticsearch and Plone

23 Jun 2015 6:34pm GMT

Davide Moro: Kotti CMS - how to turn your Kotti CMS into an intranet

In the previous posts we have seen that Kotti is a minimal but robust high-level Pythonic web application framework based on Pyramid that includes an extensible CMS solution, both user and developer friendly. By developer friendly I mean that you can be productive in one or two days without any prior knowledge of Kotti or Pyramid if you already know the Python programming language.

If you have to work with relational databases, hierarchical data, workflows or complex security requirements, Kotti is your friend. It uses well-known Python libraries.

In this post we'll try to turn our Kotti CMS public site into a private intranet/extranet service.

I know, there are other solutions aimed at building intranet or collaboration portals, like Plone (I've worked for 8 years on large and complex intranets, with big public administration customers, thousands of active users, several editor teams, multiple migrations, etc.) or the KARL project. But let's pretend that in our use case we have simpler requirements and don't need such complex solutions or features like communities, email subscriptions and similar things.

Thanks to the Pyramid and Kotti's architectural design, you can turn your public website into an intranet without having to fork the Kotti code: no forks!

How to turn your site into an intranet

This could be a hard task with other CMS solutions, but with Kotti (or the heavier Plone) it requires just 4 steps:

  1. define a custom intranet workflow
  2. apply your custom workflows to images and files (by default they are not associated with any workflow, so once added they are immediately public)
  3. set a default fallback permission for all views
  4. override the default root ACL (populators)

1 - define a custom intranet workflow

Intranet workflows may differ depending on your organization's requirements. They might be very simple or have multiple review steps.

The important thing is: no more granting the view permission to anonymous users, unless you are willing to define an externally published state.

With Kotti you can design your workflow just editing an xml file. For further information you can follow the Kotti CMS - workflow reference article.

2 - apply your custom workflow to images and files

By default they are not associated with any workflow, so once added they are immediately public.

This step requires just two additional lines of code in your includeme or kotti_configure function.

Already described here: Kotti CMS - workflow reference, see the "How to enable the custom workflow for images and files" section.
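As a sketch, assuming the approach from the Kotti workflow docs referenced above (double-check the imports against your Kotti version), those two lines look roughly like this:

from kotti.interfaces import IDefaultWorkflow
from kotti.resources import File, Image
from zope.interface import classImplements

def includeme(config):
    # mark File and Image as workflow-aware, so the custom intranet
    # workflow also applies to newly added files and images
    classImplements(File, IDefaultWorkflow)
    classImplements(Image, IDefaultWorkflow)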

3 - set a default fallback permission

In your includeme function you just need to tell the configurator to set a default permission even for public views already registered.

I mean that if somewhere in the Kotti code there is a callable view not associated with a permission, it won't be accessible by anonymous users after this step.

In your includeme function you'll need to:

def includeme(config):
    ...
    # set a default permission even for public views already registered
    # without permission
    config.set_default_permission('view')

If you want to bypass the default permission for certain views, you can decorate them with a special permission (NO_PERMISSION_REQUIRED from pyramid.security) which indicates that the view should always be executable by entirely anonymous users, regardless of the default permission. See:
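For example, a minimal Pyramid sketch of a hypothetical view that should stay public:

from pyramid.security import NO_PERMISSION_REQUIRED
from pyramid.view import view_config

@view_config(name='public-info', permission=NO_PERMISSION_REQUIRED,
             renderer='json')
def public_info(context, request):
    # reachable by anonymous users even after set_default_permission('view')
    return {'status': 'ok'}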

4 - override the default root ACL (populators)

Kotti's default ACL associated with the root of the site

from kotti.security import SITE_ACL

gives view privileges to every user, including anonymous.
You can override this configuration to require users to log in before they can view any of your site's pages. To achieve this, you'll have to set your site's ACL as shown on the following url:

You'll need to add or override the default populator. See the kotti.populators option here:
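A rough sketch of such a custom populator, assuming Kotti's stock populate and get_root helpers and a simple "authenticated users can view, admins can do everything" policy:

from kotti.populate import populate
from kotti.resources import get_root
from pyramid.security import ALL_PERMISSIONS, Allow, Authenticated

def lockdown_populate():
    # run the stock populator first so the root and default content exist
    populate()
    root = get_root()
    # replace the permissive SITE_ACL: anonymous users get nothing and are
    # forced to log in, authenticated users may view, admins keep full control
    root.__acl__ = [
        (Allow, 'role:admin', ALL_PERMISSIONS),
        (Allow, Authenticated, ['view']),
    ]

Register it through the kotti.populators setting so it runs instead of (or in addition to) the default one.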

Results

After reading this article you should be able to close your Kotti site to anonymous users and obtain a simple, private intranet-like area.

Off-topic: you can also use Kotti as a content backend-only administration area for public websites, with a completely decoupled frontend solution.

UPDATE 20150623: now you can achieve the same goals described in this article installing kotti_backend. See https://github.com/Kotti/kotti_backend

Useful links

All posts about Kotti

All Kotti posts published by @davidemoro:


23 Jun 2015 3:44pm GMT

Blue Dynamics: Speedup "RelStorage with memcached" done right

RelStorage is a great option as a backend for ZODB. RelStorage uses a shared memcached as a second-level cache for all instances storing to the same database.

In comparison, a classical ZEO client with its ZEO server as backend uses one filesystem cache per running instance (shared by all connection pools of this instance). In both cases (ZEO / RelStorage) pickled objects are stored in the cache. ZEO writes the pickles to the filesystem, which takes its time unless you're using a RAM disk. So when reading back, the data is probably in RAM (OS-level disk caches), but you cannot know. Having enough free RAM helps here, but prediction is difficult. Also, the one-cache-per-instance limitation when running 2 or more instances for some larger site makes this way of caching less efficient.

Additionally, sysadmins usually hate the ZEO server (because it's exotic) and love PostgreSQL (well documented 'standard' tech they know how to work with), a good reason to use PostgreSQL. On the client side there are advantages too: while the first-level connection cache is the same as with a usual ZEO client, the second-level cache is shared between all clients.

[apt-get|yum] install memcached - done. Really?

No! We need to choose between pylibmc and python-memcached. But which one is better?

Also, memcached is running on the same machine as the instances! So we can use unix sockets instead of TCP/IP. But which is better?

Benchmarking FTW!

I did some benchmarks, assuming we have random data to write and read with different keys, and also wanting to check whether the overhead of accessing non-existent keys has an effect. I quickly put together a little script giving me numbers. After configuring two similar memcached instances, each with 1024MB, one with TCP and the other with a socket, I ran this script and got the following result:

[Chart: Benchmark pylibmc vs. python-memcached]
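The script itself isn't reproduced in the post; a rough sketch of this kind of timing comparison, assuming python-memcached, a TCP server on 127.0.0.1:11211 and a hypothetical unix socket path, could look like this:

import os
import time

import memcache  # python-memcached

PAYLOAD = os.urandom(4096)                   # random data to write and read
KEYS = ['key-%d' % i for i in range(10000)]

def bench(client):
    start = time.time()
    for key in KEYS:
        client.set(key, PAYLOAD)             # write
    for key in KEYS:
        client.get(key)                      # read back
    for key in KEYS:
        client.get(key + '-missing')         # overhead of non-existent keys
    return time.time() - start

tcp_client = memcache.Client(['127.0.0.1:11211'])
socket_client = memcache.Client(['unix:/tmp/memcached.sock'])  # hypothetical path

print('tcp:    %.2fs' % bench(tcp_client))
print('socket: %.2fs' % bench(socket_client))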

In short:

Now this was valid on my machine. How does it behave in different environments? If you want to try/tweak the script and get similar or different results, please let me know!

Overall, RelStorage will be faster if configured with sockets. If this is not possible, choosing the right library will speed things up at least a bit.

Picture at top by Gregg Sloan at Flickr

23 Jun 2015 12:55pm GMT

22 Jun 2015

Planet Plone

Paul Roeland: Tokyo symposium, a very Plone event

Last week, I had the pleasure to participate in the first Plone symposium held in Asia.

It started already on Friday, when we (Eric Steele, Alexander Loechel, Sven Strack and me) were invited into the CMScom offices by the organizers, Takeshi Yamamoto and Manabu Terada.

There, we met other people, like Masami Terada (whom I first met at the Bristol conference) and were treated to some great cakes. All the while having an inspired conversation on the Japanese Plone community, Pycon JP and Pycon APAC.

Later, at a rather nice restaurant, we were joined by more people, including Tres Seaver and several of the other speakers and staff of the Symposium.

The following morning we headed for Ochanomizu University, who had not only supplied the meeting space, but thoughtfully also some cute turtles and a sleepy red cat to enhance the breaks.

The Symposium itself was astounding, especially when you consider it was the first time it was held. With 67 people participating from as far away as Osaka and beyond and a wide range of talks, both in Japanese and English, there was something to be found for everyone.

Personal highlights:

That was also the feeling that ran through the day. Not only in lovely details like hand-made Plone cookies but mostly in the talks and in the hallway conversations, this is a community aimed at helping each other. Nice touch also to include talks on other Python frameworks and technologies.

After Lightning talks (always fun, even in a foreign language!) most of us headed for the afterparty at a nearby Izakaya. Where that curious phenomenon happened again: people trying to tell you that their English is not so good, and then conversing with you in super good English…

It was fun to meet such a diverse group of people, and see that the "put a bunch of Plone people in a room and good conversations happen" principle is universally applicable.

Next day was Sprinting day. Despite knowing that sprinting is not yet as popular as it should be within the Japanese community, a pretty nice number turned up, and we all set out on our various tasks.

As said before, I mostly worked with Max Nagane on accessibility issues. The good news: Plone 5 will be a massive step in the right direction. But obviously, there is still room for improvement. If you want to help, join the Anniversary sprint in Arnhem/NL when you can, or just start working on the relatively small list in the metaticket.

The next day unfortunately it was time already to fly back, feeling a bit sad to leave but very happy to have met the vibrant, kind and knowledgeable Plone Japan community. Of whom I hope to see many again in the future, both inside and outside Japan.

And who knows, apparently "Tokyo 2020″ is a thing now ;-)

22 Jun 2015 10:46am GMT

Plumi: Looking ahead to Plumi 4.5.2

EngageMedia have been working with Sam Stainsby of Sustainable Software on a 4.5.2 release of Plumi.

Our main aims with this release are:

We've been working on this release since late 2014. Please get in touch if you'd like to contribute.

Sam and Markos from Mist.io both have some really interesting ideas for the future of Plumi, which we'll be bouncing to the Plumi list soon for feedback.

22 Jun 2015 3:46am GMT