30 Sep 2014

Planet Plone

UW Oshkosh How-To's: A minimalist view for collections

This view lets you display minimalist collection results: just the title of the objects, no by-line, and no "Read more...". It also bumps up the font size of the object title.

  1. Go to ZMI -> portal_skins -> custom
  2. Add a Page Template
  3. Give it the ID "collection_minimal_view"
  4. Paste the page template code below into the body.
  5. Go to ZMI -> portal_types -> Collections
  6. Add "collection_minimal_view" to the available views
  7. When you are looking at a Collection, use the Display menu and choose "Minimal view".
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal"
      xmlns:i18n="http://xml.zope.org/namespaces/i18n"
      lang="en"
      metal:use-macro="here/main_template/macros/master"
      i18n:domain="plone">

<body>

<metal:content-core fill-slot="content-core">
<metal:block use-macro="context/standard_view/macros/content-core">

    <metal:entries fill-slot="entries">
    <metal:block use-macro="context/standard_view/macros/entries">
    <metal:entry fill-slot="entry">

        <div class="tileItem visualIEFloatFix"
             tal:define="obj item/getObject">
            <a href="#"
                  tal:condition="obj/image|nothing"
                  tal:attributes="href item/getURL">
                  <div class="tileImage">
                      <img src="" alt=""
                           tal:define="scales obj/@@images;
                                       scale python:scales.scale('image', 'thumb')"
                           tal:replace="structure python:scale and scale.tag(css_class='tileImage') or None" />
                  </div>
            </a>

            <h1 class="tileHeadline" metal:define-macro="listitem">
                <a href="#"
                   class="summary url"
                   tal:attributes="href python:item_type in use_view_action and item_url+'/view' or item_url;"
                   tal:content="item/Title">
                    Item Title
                </a>
            </h1>
            <div class="visualClear"><!-- --></div>
        </div>

    </metal:entry>
    </metal:block>
    </metal:entries>

</metal:block>
</metal:content-core>

</body>
</html>

The above page template was adapted from the original at buildout-cache/eggs/plone.app.collection-1.0.9-py2.7.egg/plone/app/collection/browser/templates/summary_view.pt

30 Sep 2014 8:13pm GMT

UW Oshkosh How-To's: A simple "question and answer" discussion forum using PloneFormGen, a mailer, a custom script adapter, and built-in workflow

A campus unit wanted to create a Q&A format discussion forum that would let anonymous users ask a question that would be answered by forum editors.

They started with a very simple PloneFormGen form that contained just one text field, into which the user types their question.

A mailer adapter sends an email to forum editors letting them know a question has been submitted.

A custom script adapter creates a Plone page in a specific folder (in this case, it's the folder containing the PloneFormGen form), and leaves it in the private state. A forum editor edits the page to include an answer to the question, then publishes the page.

The custom script adapter is given Manager proxy so it can create Plone pages even for non-logged in users.

The script is as follows.

# Available parameters:
#  fields  = HTTP request form fields as key value pairs
#  request = The current HTTP request. 
#            Access fields by request.form["myfieldname"]
#  ploneformgen = PloneFormGen object
# 
# Return value is not processed -- unless you
# return a dictionary with contents. That's regarded
# as an error and will stop processing of actions
# and return the user to the form. Error dictionaries
# should be of the form {'field_id':'Error message'}

from Products.CMFPlone.utils import normalizeString

def suggestId(parent, title):
    return normalizeString(title[:20])

question = request.form['comments']

parent = context.aq_inner.aq_parent
grandparent = parent.aq_inner.aq_parent

id = suggestId(grandparent, question)
while hasattr(grandparent, id):
  id += "_1" # make it unique

newItemId = grandparent.invokeFactory(id=id, type_name='Document')
newItem = grandparent[newItemId]
newItem.setTitle(question[:20])
newItem.setText(question)
newItem.reindexObject()

context.plone_utils.addPortalMessage("Question has been created (ID %s); it will be reviewed soon." % newItemId, 'info')
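The id-uniqueness pattern used above can be sketched in plain Python (a dict stands in for the Plone container, and `normalize` is a simplified stand-in for Plone's `normalizeString`):

```python
import re

def normalize(title):
    # Simplified stand-in for Plone's normalizeString:
    # lowercase the title and hyphen-join its alphanumeric words.
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

def unique_id(container, title, max_length=20):
    """Derive an id from the title, appending suffixes until it is unused."""
    candidate = normalize(title[:max_length])
    while candidate in container:
        candidate += "-1"
    return candidate

print(unique_id({"how-do-i-reset-my-pa": None}, "How do I reset my password?"))
# → how-do-i-reset-my-pa-1
```

As in the script, repeated collisions just keep appending the suffix; that is crude but guarantees the loop terminates with an unused id.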

30 Sep 2014 7:35pm GMT

UW Oshkosh How-To's: Using Custom Script Adapters with PloneFormGen forms to do a lot behind the scenes

One nice thing you can do with PloneFormGen is present the user with a simple form that does a bunch more in the back end, using custom script adapters.

In these cases we wanted to give users one form to add a story to a web site, including a headline, text, photo, etc., and have the form's custom script adapter create the necessary folder, then the news item, the page, etc., and apply keywords/tags/categories as needed.

Here are the scripts we used with the forms.

Script 1

This script creates a News Item in a particular folder.

# Available parameters:
#  fields  = HTTP request form fields as key value pairs
#  request = The current HTTP request. 
#            Access fields by request.form["myfieldname"]
#  ploneformgen = PloneFormGen object
# 
# Return value is not processed -- unless you
# return a dictionary with contents. That's regarded
# as an error and will stop processing of actions
# and return the user to the form. Error dictionaries
# should be of the form {'field_id':'Error message'}

from Products.CMFPlone.utils import normalizeString


def suggestId(parent, title):
    return normalizeString(title)


title = request.form['entry-title']
image = request.form['entry-image_file']
imageDescription = request.form['entry-image-description']
storyDescription = request.form['entry-description']
story = request.form['story']
category = request.form['entry-category']

parent = context.aq_inner.aq_parent
grandparent = parent.aq_inner.aq_parent
entries = grandparent.get('all-entries')

id = suggestId(entries, title)
while hasattr(entries, id):
  id += "_1" # make it unique

newNewsItemId = entries.invokeFactory(id=id, type_name='News Item')
newNewsItem = entries[newNewsItemId]
newNewsItem.setTitle(title)
newNewsItem.setDescription(storyDescription)
newNewsItem.setImage(image)
newNewsItem.setImageCaption(imageDescription)
newNewsItem.setText(story)
newNewsItem.setSubject(category)
newNewsItem.reindexObject()

context.plone_utils.addPortalMessage("Created a news item with ID %s" % newNewsItemId, 'info')

portal_state = ploneformgen.restrictedTraverse('@@plone_portal_state')
url = portal_state.portal_url() + '/search'
request.response.redirect('%s?Creator=%s&sort_on=created&sort_order=reverse' % (url, portal_state.member().getId()))

Script 2

This one is a bit more complicated. It creates a folder that holds a News Item and a Document (Page).

# Available parameters:
#  fields  = HTTP request form fields as key value pairs
#  request = The current HTTP request. 
#            Access fields by request.form["myfieldname"]
#  ploneformgen = PloneFormGen object
# 
# Return value is not processed -- unless you
# return a dictionary with contents. That's regarded
# as an error and will stop processing of actions
# and return the user to the form. Error dictionaries
# should be of the form {'field_id':'Error message'}

from Products.CMFPlone.utils import normalizeString

def suggestId(parent, title):
    return normalizeString(title)

title = request.form['story-title']
image = request.form['main-image_file']
imageDescription = request.form['image-description']
storyDescription = request.form['story-description']
category = request.form['category']

stories = context.stories
defaultPageContent = stories['default-page-content'].getText()

id = suggestId(stories, title)
while hasattr(stories, id):
  id += "_1" # make it unique

newStoryFolderId = stories.invokeFactory(id=id, type_name='Folder')
newStoryFolder = stories[newStoryFolderId]
newStoryFolder.setSubject(category)
newStoryFolder.setTitle(title)
newStoryFolder.reindexObject()

id = suggestId(newStoryFolder, title)
id = "%s-1" % id
newStoryPageId = newStoryFolder.invokeFactory(id=id, type_name='Document')
newStoryPage = newStoryFolder[newStoryPageId]
newStoryPage.setTitle(title)
newStoryPage.setDescription(storyDescription)
newStoryPage.setSubject(category)
newStoryPage.setText(defaultPageContent)
newStoryPage.reindexObject()

id = suggestId(newStoryFolder, title)
id = "%s-2" % id
newNewsItemId = newStoryFolder.invokeFactory(id=id, type_name='News Item')
newNewsItem = newStoryFolder[newNewsItemId]
newNewsItem.setTitle(title)
newNewsItem.setDescription(storyDescription)
newNewsItem.setImage(image)
newNewsItem.setImageCaption(imageDescription)
newNewsItem.setSubject(category)
newNewsItem.reindexObject()

id = suggestId(newStoryFolder, title)
id = "%s-3" % id
newStoryImageId = newStoryFolder.invokeFactory(id=id, type_name='Image')
newStoryImage = newStoryFolder[newStoryImageId]
newStoryImage.setTitle(title)
newStoryImage.setImage(image)
newStoryImage.setDescription(imageDescription)
newStoryImage.setSubject(category)
newStoryImage.reindexObject()

30 Sep 2014 4:50pm GMT

UW Oshkosh How-To's: How to export a collection of Dexterity objects to CSV

We wanted something like the SmartCSVExporter add-on (https://plone.org/products/smart-csv-exporter), but we wanted to output *all* the fields in the collection's returned objects, not just the ones in the catalog that are available to a collection's "display as table" view.

I also prefer to create External Methods rather than install new add-ons via buildout, so I created this External Method, based on SmartCSVExporter code.

This assumes the collection returns Dexterity objects.

It starts off by outputting all the columns you set in the collection's "Table Columns" field. Then it additionally looks at the first returned object to get the list of Dexterity fields to output.

Because this method looks at actual objects (not the catalog) it is slower ... but thorough since it outputs *every* field in the objects and does not require you to create and maintain catalog indexes or metadata (which slow down your site in other ways).

https://github.com/tkimnguyen/miscellaneous/blob/master/export_dexterity_collection_csv.py

This method respects the collection's search results limits (see the checkbox "Limit Search Results"). If your output contains only, say, 10 objects even though you know there are 48, it's because you've checked that box and you've set the "Number of items" to 10.
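The linked External Method is the actual implementation; as a rough, simplified sketch of the core idea (plain Python objects stand in for Dexterity content, and the field names are hypothetical), the export loop might look like this:

```python
import csv
import io

def export_objects_to_csv(objects, fieldnames):
    """Write one CSV row per object, one column per field name.

    Fields missing on an object are emitted as empty strings.
    """
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for obj in objects:
        writer.writerow({name: getattr(obj, name, "") for name in fieldnames})
    return out.getvalue()

class Page:  # hypothetical stand-in for a Dexterity object
    def __init__(self, title, description):
        self.title = title
        self.description = description

print(export_objects_to_csv([Page("Home", "Front page")], ["title", "description"]))
```

The real method differs in that it walks the first object's Dexterity schema to discover the field names instead of taking them as an argument.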

30 Sep 2014 3:23pm GMT

29 Sep 2014


eGenix: eGenix PyCon UK 2014 Talks & Videos

The PyCon UK Conference is the premier conference for Python users and developers in the UK. This year it was held from September 19-22 in Coventry, UK.

eGenix Talks at PyCon UK 2014

At this year's PyCon UK, Marc-André Lemburg, CEO of eGenix, gave the following talks at the conference. The presentations are available for viewing and download from our Presentations and Talks section.

When performance matters ...

Simple idioms you can use to make your Python code run faster and use less memory.

Python applications sometimes need all the performance they can get. Think of e.g. web, REST or RPC servers. There are several ways to address this: scale up by using more processes, use Cython, use PyPy, rewrite parts in C, etc.

However, there are also quite a few things that can be done directly in Python. This talk goes through a number of examples and showcases how sticking to a few idioms can easily enhance the performance of your existing applications without having to resort to more complex optimization strategies.

The talk was complemented with a lightning talk titled "Pythons and Flies", which addresses a memory performance idiom and answers one of the audience questions raised in the above talk.

Click to proceed to the PyCon UK 2014 talk and lightning talk videos and slides ...

Python Web Installer

Installing Python packages is usually done with one of the available package installation systems, e.g. pip, easy_install, zc.buildout, or manually by running "python setup.py install" in a package distribution directory.

These systems work fine as long as you have Python-only packages. For packages that contain binaries, such as Python C extensions or other platform dependent code, the situation is a lot less bright.

In this talk, we present a new web installer system that we're currently developing to overcome these limitations.

The system combines the dynamic Python installation interface supported by all installers ("python setup.py install"), with a web installer which automatically selects, downloads, verifies and installs the binary package for your platform.

Click to proceed to the PyCon UK 2014 talk video and slides ...

If you are interested in learning more about these idioms and techniques, eGenix now offers Python project coaching and consulting services to give your project teams advice on how to achieve best performance and efficiency with Python. Please contact our eGenix Sales Team for information.

Enjoy !

Charlie Clark, eGenix.com Sales & Marketing

29 Sep 2014 11:00am GMT

Four Digits: Report from the Silicon Alley Lightning Talks of 26 September

Last Friday our Lightning Talks took place, in collaboration with our neighbours at WebIQ.

[Photo: beanbags in the Concepts 26 space]
For the first time we were at a different location: WebIQ's conference room in the Concepts 026 building. The collaboration brought not only a new space but also a more diverse line-up of speakers.

[Photo: Tom Kortbeek]
Tom Kortbeek of Kunstlab Arnhem kicked things off with "Kunst & Techniek, the next frontier". His wish (shared by his colleague Stefanie Hesseling) is to turn an alley in Arnhem into a composition space for light and sound, and to connect it with the rest of the world.

Next, Reinier Meenhorst (DJUST) demonstrated Sketch, a design tool that promises to improve collaboration between designers and developers. Sebastian Kügler spoke about building platform-independent native mobile applications with Qt, which can considerably simplify the development pipeline for complex projects.

[Photo: Babo Koffie: Bonne talks while Rob pours]
After a short break, Bonne Postma and Rob Kerkhoff (Babo Koffie) told us about the origins of coffee and let us taste a few unusual drinks. Mirna van der Veen (IFO Fair Travelers) explained what she wants to achieve with Fair Travelers and what that involves. Finally, Rob Gietema (Four Digits) announced a Hackathon, which took place this past weekend.

Photos by Patrick Kreling (Catharsis Design)

29 Sep 2014 9:05am GMT

27 Sep 2014


CodeSyntax: Presenting Buildout at PySS 14

Buildout is a tool we use in all of the development and deployments of our applications, and we have given a talk about it at PySS 14.

27 Sep 2014 9:46am GMT

26 Sep 2014


Martijn Faassen: Life at the Boundaries: Conversion and Validation

In software development we deal with boundaries between systems.

Examples of boundaries are:

  • the user interface, where user input enters your application
  • the file system
  • the network, such as incoming HTTP requests
  • the database
  • the functions and classes within your own software

It's important to recognize these boundaries. You want to do things at the boundaries of your application: just after input has arrived into your application across an outer boundary, and just before you send output across an inner boundary.

If you read a file and what's in that file is a string representing a number, you want to convert the string to a number as soon as possible after reading it, so that the rest of your codebase can forget about the file and the string in it, and just deal with the number.

Because if you don't and pass a filename around, you may have to open that file multiple times throughout your codebase. Or if you read from the file and leave the value as a string, you may have to convert it to a number each time you need it. This means duplicated code, and multiple places where things can go wrong. All that is more work, more error prone, and less fun.
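In Python terms, the idea is to convert at the moment of reading, so the rest of the codebase only ever sees a number (the function name is illustrative):

```python
def read_count(path):
    """Read a file whose entire contents represent a number.

    The string-to-int conversion happens here, at the boundary;
    callers never deal with the filename or the raw string again.
    """
    with open(path) as f:
        return int(f.read().strip())
```

Everything past this one function deals with an int, not a file or a string.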

Boundaries are our friends. So much so that programming languages give us tools like functions and classes to create new boundaries in software. With a solid, clear boundary in place in the middle of our software, both halves can be easier to understand and easier to manage.

One of the most interesting things that happen on the boundaries in software is conversion and validation of values. I find it very useful to have a clear understanding of these concepts during software development. To understand each other better it's useful to share this understanding out loud. So here is how I define these concepts and how I use them.

I hope this helps some of you see the boundaries more clearly.

Following an HTML form submit through boundaries

Let's look at an example of a value going across multiple boundaries in software. In this example, we have a web form with an input field that lets the user fill in their date of birth as a string in the format 'DD-MM-YYYY'.

I'm going to give examples based on web development. I also give a few tiny examples in Python. The web examples and Python used here only exist to illustrate concepts; similar ideas apply in other contexts. You shouldn't need to understand the details of the web or Python to understand this, so don't go away if you don't.

Serializing a web form to a request string

In a traditional non-HTML 5 HTTP web form, the input type for dates is text. This means that dates are in fact not interpreted by the browser as dates at all. To the browser it's just a string, like adfdafd. The browser does not know anything else about the value, unless it has loaded JavaScript code that checks whether the input is really a date and shows an error message if it's not.

In HTML 5 there is a new input type called date, but for the sake of this discussion we will ignore it, as it doesn't change all that much in this example.

So when the user submits a form with the birth date field, the inputs in the form are serialized to a longer string that is then sent to the server as the body of a POST request. This serialization happens according to what's specified in the form tag's enctype attribute. When the enctype is multipart/form-data, the request to the server will be a string that looks a lot like this:

POST /some/path HTTP/1.1
Content-type: multipart/form-data, boundary=AaB03x

--AaB03x
content-disposition: form-data; name="birthdate"

21-10-1985
--AaB03x--

Note that this serialization of form input to the multipart/form-data format cannot fail; serialization always succeeds, no matter what form data was entered.

Converting the request string to a Request object

So now this request arrives at the web server. Let's imagine our web server is in Python, and that there's a web framework like Django or Flask or Pyramid or Morepath in place. This web framework takes the serialized HTTP request, that is, the string, and then converts it into a request object.

This request object is much more convenient to work with in Python than the HTTP request string. Instead of having one blob of a string, you can easily check individual aspects of the request -- what request method was used (POST), what path the request is for, what the body of the request was. The web framework also recognizes multipart/form-data and automatically converts the request body with the form data into a convenient Python dictionary-like data structure.

Note that the conversion of HTTP request text to request object may fail. This can happen when the client did not actually format the request correctly. The server should then return an HTTP error, in this case 400 Bad Request, so that the client software (or the developer working on the client software) knows something went wrong.

The potential that something goes wrong is one difference between conversion and serialization; both transform the data, but conversion can fail and serialization cannot. Or perhaps better said: if serialization fails it is a bug in the software, whereas conversion can fail due to bad input. This is because serialization goes from known-good data to some other format, whereas conversion deals with input data from an external source that may be wrong in some way.

Thanks to the web framework's parsing of web form into a Python data structure, we can easily get the field birthdate from our form. If the request object was implemented by the Webob library (like for Pyramid and Morepath), we can get it like this:

>>> request.POST['birthdate']
'21-10-1985'

Converting the string to a date

But the birthdate at this point is still a string 21-10-1985. We now want to convert it into something more convenient to Python. Python has a datetime library with a date type, so we'd like to get one of those.

This conversion could be done automatically by a form framework -- these are very handy as you can declaratively describe what types of values you expect and the framework can then automatically convert incoming strings to convenient Python values accordingly. I've written a few web form frameworks in my time. But in this example we'll do it manually, using functionality from the Python datetime library to parse the date:

>>> from datetime import datetime
>>> birthdate = datetime.strptime(request.POST['birthdate'], '%d-%m-%Y').date()
>>> birthdate
datetime.date(1985, 10, 21)

Since this is a conversion operation, it can fail if the user gave input that is not in the right format or is not a proper date; Python will raise a ValueError exception in this case. We need to write code that detects this and then signals to the HTTP client that there was a conversion error. The client needs to update its UI to inform the user of this problem. All this can get quite complicated, and here again a form framework can help you with this.
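A minimal sketch of detecting the failure (the error-dictionary shape is illustrative, not any particular framework's API):

```python
from datetime import datetime

def convert_birthdate(raw):
    """Convert 'DD-MM-YYYY' input to a date, or report a field error.

    Returns (date, None) on success, (None, errors) on failure.
    """
    try:
        return datetime.strptime(raw, '%d-%m-%Y').date(), None
    except ValueError:
        # Conversion failed on bad input: report it instead of crashing.
        return None, {'birthdate': 'Not a valid DD-MM-YYYY date'}
```

The caller at the boundary checks the error side once; the rest of the code only ever sees a date object.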

It's important to note that we should isolate this conversion to one place in our application: the boundary where the value comes in. We don't want to pass the birth date string around in our code and only convert it into a date when we need to do something with it that requires a date object. Doing conversion "just in time" like that has a lot of problems: code duplication is one of them, but even worse is that we would need to worry about conversion errors everywhere instead of in one place.

Validating the date

So now that we have the birth date our web application may want to do some basic checking to see whether it makes sense. For example, we probably don't expect time travellers to fill in the form, so we can safely reject any birth dates set in the future as invalid.

We've already converted the birth date from a string into a convenient Python date object, so validating that the date is not in the future is now easy:

>>> from datetime import date
>>> birthdate <= date.today()
True

Validation needs the value to be in a convenient form, so validation happens after conversion. Validation does not transform the value; it only checks whether the value is valid according to additional criteria.

There are a lot of possible validations:

  • validate that required values are indeed present.
  • check that a value is in a certain range.
  • relate the value to another value elsewhere in the input or in the database. Perhaps the birth date is not supposed to be earlier than some database-defined value, for instance.
  • etc.

If the input passes validation, the code just continues on its merry way. Only when the validation fails do we want to take special action. The minimum action that should be taken is to reject the data and do nothing, but it could also involve sending information about the cause of the validation failure back to the user interface, just like for conversion errors.

Validation should be done just after conversion, at the boundary of the application, so that after that we can stop worrying about all this and just trust the values we have as valid. Our life is easier if we do validation early on like this.

Serialize the date into a database

Now the web application wants to store the birth date in a database. The database sits behind a boundary. This boundary may be clever and allow you to pass in straight Python date objects and do a conversion to its internal format afterward. That would be best.

But imagine our database is dumb and expects our dates to be in a string format. Now the task is up to our application: we need to transform the date to a string before it crosses the database boundary.

Let's say the database layer expects date strings in the format 'YYYY-MM-DD'. We then have to serialize our Python date object to that format before we pass it into the database:

>>> birthdate.strftime('%Y-%m-%d')
'1985-10-21'

This is serialization and not conversion because this transformation always succeeds.

Concepts

So we have:

Transformation:
Transform data from one type to another. Transformation by itself cannot fail, as it is assumed to always get correct input. It is a bug in the software if it does not. Conversion and serialization both do transformation.
Conversion:
Transform input across a boundary into a more convenient form inside that boundary. Fails if the input cannot be transformed.
Serialization:
Transform valid data into a form convenient for output across a boundary. Cannot fail if there are no bugs in the software.
Validation:
Check whether input across a boundary that is already converted to convenient form is valid inside that boundary. Can fail. Does not transform.
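The four concepts line up in one small sketch (reusing the birth date example; the function names are illustrative):

```python
from datetime import date, datetime

def convert(raw):
    """Conversion: transform input; may fail (raises ValueError)."""
    return datetime.strptime(raw, '%d-%m-%Y').date()

def validate(birthdate):
    """Validation: check only; does not transform."""
    return birthdate <= date.today()

def serialize(birthdate):
    """Serialization: transform valid data; cannot fail."""
    return birthdate.strftime('%Y-%m-%d')

birthdate = convert('21-10-1985')   # at the incoming boundary
assert validate(birthdate)          # immediately after conversion
print(serialize(birthdate))         # → 1985-10-21, at the outgoing boundary
```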

Reuse

Conversion just deals with converting one value to another and does not interact with the rest of the universe. The implementation of a converter is therefore often reusable between applications.

The behavior of a converter typically does not depend on state or configuration. If conversion behavior does depend on application state, for instance because you want to parse dates as 'MM-DD-YYYY' instead of 'DD-MM-YYYY', it is often a better approach to just swap in a different converter based on the locale than to have the converter itself be aware of the locale.
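Swapping converters by locale, rather than making one locale-aware converter, might look like this (the locale keys and date formats are illustrative):

```python
from datetime import datetime

# One stateless converter per format; pick one at the boundary.
CONVERTERS = {
    'en_US': lambda raw: datetime.strptime(raw, '%m-%d-%Y').date(),
    'nl_NL': lambda raw: datetime.strptime(raw, '%d-%m-%Y').date(),
}

def convert_date(raw, locale):
    # Each converter stays simple and reusable; only the lookup
    # depends on application state (the locale).
    return CONVERTERS[locale](raw)

print(convert_date('10-21-1985', 'en_US'))  # → 1985-10-21
print(convert_date('21-10-1985', 'nl_NL'))  # → 1985-10-21
```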

Validation is different. While some validations are reusable across applications, a lot of them will be application specific. Validation success may depend on the state of other values in the input or on application state. Reusable frameworks that help with validation are still useful, but they do need additional information from the application to do their work.

Serialization and parsing

Serialization is transformation of data to a particular type, such as a string or a memory buffer. These types are convenient for communicating across the boundary: storing on the file system, storing data in a database, or passing data through the network.

The opposite of serialization is deserialization and this is done by parsing: this takes data in its serialized form and transforms it into a more convenient form. Parsing can fail if its input is not correct. Parsing is therefore conversion, but not all conversion is parsing.

Parsing extracts information and checks whether the input conforms to a grammar in one step, though if you treat the parser as a black box you can view these as two separate phases: input validation and transformation.

There are transformation operations in an application that do not serialize but can also not fail. I don't have a separate word for these besides "transformation", but they are quite common. Take for instance an operation that takes a Python object and transforms it into a dictionary convenient for serialization to JSON: it can only consist of dicts, lists, strings, ints, floats, bools and None.
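Such a transformation might look like this (the class and field names are illustrative); it cannot fail for a well-formed object, yet it is plain transformation rather than serialization, since the json.dumps call is what actually produces the string:

```python
import json
from datetime import date

class Person:
    def __init__(self, name, birthdate):
        self.name = name
        self.birthdate = birthdate

def to_jsonable(person):
    """Plain transformation: object to a JSON-ready dict of simple types."""
    return {'name': person.name,
            'birthdate': person.birthdate.strftime('%Y-%m-%d')}

# The actual serialization step, crossing the boundary:
print(json.dumps(to_jsonable(Person('Alice', date(1985, 10, 21)))))
```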

Some developers argue that data should always be kept in such a format instead of in objects, as it can encourage a looser coupling between subsystems. This idea is especially prevalent in Lisp-style homoiconic language communities, where even code is treated as data. It is interesting to note that JSON has made web development go in the direction of more explicit data structures as well. Perhaps it is as they say:

Whoever does not understand LISP is doomed to reinvent it.

Input validation

We can pick apart conversion and find input validation inside. Conversion does input validation before transformation, and serialization (and plain transformation) does not.

Input validation is very different from application-level validation. Input validation is conceptually done just before the convenient form is created, and is an inherent part of the conversion. In practice, a converter typically parses data, doing both in a single step.

I prefer to reserve the term "validation" for application-level validation and discuss input validation only when we talk about implementing a converter.

But sometimes conversion from one perspective is validation from another.

Take the example above where we want to store a Python date in a database. What if this operation does not work for all Python date objects? The database layer could accept dates in a different range than the one supported by the Python date object. The database may therefore be offered a date that is outside of its range and reject it with an error.

We can view this as conversion: the database converts a date value that comes in, and this conversion may fail. But we can also view this in another way: the database transforms the date value that comes in, and then there is an additional validation that may fail. The database is a black box and both perspectives work. That comes in handy a little bit later.

Validation and layers

Consider a web application with an application-level validation layer, and another layer of validation in the database.

Maybe the database also has a rule to make sure that the birth date is not in the future. It gives an error when we give a date in the future. Since validation errors can now occur at the database layer, we need to worry about properly handling them.

But transporting such a validation failure back to the user interface can be tricky: we are on the boundary between application code and database at this point, far from the boundary between application and user interface. And often database-level validation failure messages are in a form that is not very informative to a user; they speak in terms of the database instead of the user.

We can make our life easier. What we can do is duplicate any validation the database layer does at the outer boundary of our application, the one facing the web. Validation failures there are relatively simple to propagate back to the user interface. Since any validation errors that can be given by the database have already been detected at an earlier boundary before the database is ever reached, we don't need to worry about handling database-level validation messages anymore. We can act as if they don't exist, as we've now guaranteed they cannot occur.

We treat the database-level validation as an extra sanity check guarding against bugs in our application-level code. If validation errors occur on the database boundary, we have a bug, and this should not happen, and we can just report a general error: on the web this is a 500 internal server error. That's a lot easier to do.

The general principle is: if, at a higher layer, we do all the validations that the boundary to a deeper layer needs, we can effectively treat the inner boundary as having no validations. The validations in the deeper layer then only exist as extra checks that guard against bugs in the validations at the outer boundary.

We can also apply this to conversion errors: if we already make sure we clean up the data with validations at an outer boundary before it reaches an inner boundary that needs to do conversions, the conversions cannot fail. We can treat them as transformations again. We can do this because, treating the conversion as a black box, any conversion can be seen as a combination of transformation and validation.

Validation in the browser

Finally, let's return to the web browser.

We've seen that doing validation at an outer boundary can let us ignore validation done deeper down in our code. We do validation once when values come into the web server, and we can forget about doing it in the rest of our server code.

We can go one step further. We can lift our validation out of the server, into the client. If we do our validation in JavaScript when the user inputs values into the web form, we are in the right place to give really accurate user interface feedback in the easiest way possible. Validation failure information only has to cross from JavaScript to the browser DOM, and that's it. The server is not involved.

We cannot always do this. If our validation code needs information on the server that cannot be shared securely or efficiently with the client, the server is still involved in validation, but at least we can still do all the user interface work in the client.

Even if we do not need server-side validation for the user interface, we cannot ignore doing server-side validation altogether, as we cannot guarantee that our JavaScript program is the only program that sends information to the server. Through that route, or because of bugs in our JavaScript code, we can still get input that is potentially invalid. But now if the server detects invalid information, it does not need to do anything complicated to report validation errors to the client. Instead it can just generate an internal server error.
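A sketch of that server-side fallback (a hypothetical handler, not tied to any particular framework): valid input follows the normal path, while invalid input, which only a buggy or foreign client can produce, gets a bare 500 instead of a friendly validation message.

```python
def validate_birth_date_like(raw_value):
    # stand-in for the real validation; here it just rejects empty input
    return "missing value" if not raw_value else None

def handle_submission(raw_value):
    """Server-side guard behind a JavaScript client that already validates."""
    if validate_birth_date_like(raw_value) is not None:
        # Only a buggy or foreign client can reach this branch, so no
        # user-facing message is needed: report a generic server error.
        return (500, "Internal Server Error")
    return (200, "OK")
```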

If we could somehow guarantee that only our JavaScript program is the one that sends information to the server, we could forgo doing validation on the server altogether. Someone more experienced in the arts of encryption may be able to say whether this is possible. I suspect the answer will be "no", as it usually is with JavaScript in web browsers and encryption.

In any case, we may in fact want to encourage other programs to use the same web server; that's the whole idea behind offering HTTP APIs. If this is our aim, we need to handle validation on the server as well, and give decent error messages.

26 Sep 2014 1:50pm GMT

Davide Moro: collective.angularstarter (Plone + AngularJS + Yeoman kickstarter project)

Get started with Plone + AngularJS without any of the usual headaches of manually setting up the tools that improve your development experience and the deployment of your application.

Since I have been using AngularJS on Plone, I decided to create a reusable starter scaffold (or something similar) based on Yeoman that saves me precious time.

That's why I created:

This is a plugin that lets you bootstrap single page web applications (or heavy JavaScript logic) based on Plone + AngularJS + Yeoman.

Yeoman workflow benefits

collective.angularstarter is powered by the Yeoman workflow. If you want to see the benefits Yeoman brings when integrated with a framework, you might have a look at:

The next sections describe what you can build with Plone, if you are not familiar with it: in particular, heavily dynamic JavaScript-based verticalizations built with collective.angularstarter or similar techniques.

Plone

Plone is not only a CMS but a framework, built with the Python programming language, that lets you build complex web applications, intranets and websites with a strong focus on:

You can extend the features Plone provides by default thanks to a considerable number of third-party plugins.

You can see a couple of examples of what you can build with Plone verticalizations using collective.angularstarter.

Coworking application

You can create custom content types for meeting rooms, private desks or common desks seats.

Basically, the main object you are sharing is a folderish content type with metadata that lets you configure the resource, for example:

Once the resource has been published, users can buy the suitable time slots depending on the type of resource and availability.

For private or common desks you can choose to search for multiple seats for multiple days or months. For meeting rooms you can buy just one unit time slot, multiple hours, a group of hours (e.g. the morning) or the full day.

For example, if you are searching for a meeting room you can choose partial-days mode and pick the available day you want:



The infinite scrolling shows you only the days that fit what you are searching for (2 private desks).
Once you have selected the day, you can choose one or more available slots (for example the morning):

And then: buy it!

The reservation objects are based on event types, since a reservation has a start and end datetime. So you can easily perform inexpensive catalog queries to search for available slots.
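The availability check behind such queries can be sketched in plain Python (this is an illustration, not the actual Plone catalog API): a slot is free when no existing reservation's start/end interval overlaps the requested one.

```python
def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open intervals [start, end) overlap iff each one
    starts before the other ends."""
    return start_a < end_b and start_b < end_a

def slot_available(requested_start, requested_end, reservations):
    """reservations is an iterable of (start, end) pairs, e.g. datetimes."""
    return not any(
        overlaps(requested_start, requested_end, start, end)
        for start, end in reservations
    )
```

With a catalog, the same overlap condition maps onto an indexed range query over the start and end fields, which is why these lookups stay cheap.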

The full project is more complex, because Plone is integrated with external invoice management software, a PayPal interface that lets you buy more credits, and a personal user box that lets users see invoice PDF files and other notifications.

Advanced search forms

You can also use Plone as a backend for a highly dynamic single page web application.

You can mix Plone data with external resources provided by third-party servers to build a complex search form.

For example, the main search presents the user with a master-slave AJAX widget, where the vocabulary of each slave select depends on the value of the previous one:

or search for different criteria:

In this particular case, results appear after you fill in all the needed information, but it is quite easy to implement a live search.

If you need more info about how to create a master-select widget component with Angular you may have a look at this article http://davidemoro.blogspot.com/2014/09/angularjs-master-slave-select-with.html.

What collective.angularstarter is

The collective.angularstarter plugin is:

  1. a Plone + AngularJS kickstarter project. You can use this package when you want to develop a single page web application powered by Angular, using Plone as backend, with all the benefits of the Yeoman workflow
  2. a scaffolding tool that lets you extend this package, add more features, and then clone it, creating a more sophisticated application. You can redistribute it under another name. Or you can develop a rapid prototype of your reusable application and afterwards create a new zopeskel or yo package generator with one or more options. The clone hack might fail in some corner cases, but it should help you convert an existing package into another. Anyway, if something goes wrong you can easily correct the problems by hand. I have used a similar script whenever I or a colleague chose a very ugly package name and had to rename it. Maurizio, remember?! How many days we saved with this script?

Anyway, when you install collective.angularstarter and visit the @@angularstarter browser view, it shows an example AngularJS app with the following enabled by default:

Here you can see how the @@angularstarter view looks like:

collective.angularstarter screenshot. Fill the input text and you'll see the page instantly updated

After that it's up to you coding with AngularJS and Plone!

Results

The following screenshot shows what happens if you analyze the network section of Firebug when you are in development mode:

or in production mode:

Wait a moment! The resulting bootstrap.css weighs in at only 3.2 kB?! That's the power of the minification and uncss tasks.

As you can see, you'll get (see the part 1 article of the Pyramid starter seed project for further details about uncss and other tips):

How did I do it? Basically I played with Plone's resource registrations and layers. See https://github.com/collective/collective.angularstarter/blob/master/collective/angularstarter/browser/configure.zcml

collective.angularstarter wraps a modified Yeoman AngularJS project (browser/angular): asset paths modified, the bower_components folder renamed, and a couple of other local changes to the Gruntfile.js file.

I hope you'll find collective.angularstarter useful. Feedback will be much appreciated!

26 Sep 2014 7:31am GMT

Davide Moro: Pyramid starter seed template powered by Yeoman (part 1)

Book of the month I'm reading this summer: Pylons/Pyramid (http://docs.pylonsproject.org/en/latest).


Pyramid (http://www.pylonsproject.org) is a minimal Python-based web development framework that lets you "start small and finish big".

It borrows a lot of (good) ideas and concepts from other mature Python web frameworks and is built with pluggability and extensibility in mind. Read: no need to fork applications.

Furthermore Pyramid is database and template engine agnostic: you are free.

From the very beginning, Pyramid allows you to become productive quickly. So why not start with something useful?

Pyramid + Yeoman

The goal of this experiment is to integrate Yeoman with Pyramid (or other frameworks, like NodeJS/Express with AngularJS, or Plone as already done), preserving the Yeoman workflow.

UPDATE 20140926: here you can see a Plone + AngularJS + Yeoman article (collective.angularstarter)

In this article I'll talk about the benefits you get by integrating your Pyramid app with Yeoman; in future posts I'll discuss how things work under the hood, with additional technical details omitted here (each component used deserves an entire blog post).

Yeoman

You might wonder: why? Because of the importance of tooling. Since it is very important to build an effective developer tooling ecosystem, I want to integrate the simple starter demo app with commonly used tools that help you stay productive. So this simple application prototype is just an experiment that should help you integrate with the modern web development tools provided by the Yeoman workflow stack (http://yeoman.io).

Choosing the right tools is very important for the best development experience, and I cannot work without Yeoman anymore, especially when coding JavaScript.

Grunt

Yeoman is internally based on three important components (Node.js powered):

Bower


So with Yeoman's tools you can just code, avoiding annoying repetitive tasks, and not worry about:

So let's see together what happens to our Pyramid starter demo template, created with pcreate -t starter, when integrated with a Yeoman generator-webapp project.

The result will be a Pyramid starter seed project integrated with modern, non-Python-based web development tools.

Goals

Management of third party assets

With the Bower package manager, you no longer have to download and manage your scripts manually.

From http://bower.io:

"""Bower works by fetching and installing packages from all over, taking care of hunting, finding, downloading, and saving the stuff you're looking for."""

So just type something like bower install angular-translate --save and you'll get the right resource, with pinning support.

Tasks automation

Automation, automation, automation.

From http://gruntjs.com:

"""Why use a task runner? In one word: automation. The less work you have to do when performing repetitive tasks like minification, compilation, unit testing, linting, etc, the easier your job becomes. After you've configured it, a task runner can do most of that mundane work for you-and your team-with basically zero effort."""

Examples:

JSLint

No more deploying JavaScript code with bad indentation, syntax errors or bad coding practices.

All syntax errors or bad practices will be found.

Image minification

The build process will automatically detect and minify all your asset images.

Uncss task

Modern (and heavy) UI frameworks like Twitter Bootstrap provide an excellent solution for prototyping your initial project, but most of the time you use only a very minimal subset of their functionality.

This inspiring blog post by Addy Osmani helps you remove unused CSS from your pages with a Grunt task named grunt-uncss (https://github.com/addyosmani/grunt-uncss):

The original unminified bootstrap.css weighs in at 120 kB before removing unused rules.

Css concat and minification

You can split your CSS code into different files, and the build process will concat and minify them, creating a single app.css file. This way you write modular, more readable CSS files while reducing the number of browser requests.

The theme.css file is quite small, but in real projects you can save more. In this case:

The configured build pipeline is concat, uncss and cssmin: 122.85 kB (original bootstrap.css) -> 4.64 kB (uncss) -> 3.45 kB (minification).

Automatic CDN-ification

It is handy to use unminified versions of third-party JavaScript libraries during development and switch to CDN versions in production mode, with the well-known benefits for your website.

Don't worry: the cdnify task will take care of this boring issue. Automatically.

You save yourself a boring, error-prone manual configuration.

Composable bootstrap.js version

The Pyramid starter project is based on Twitter Bootstrap.

Twitter Bootstrap

Depending on your project, you can load the whole Twitter Bootstrap JavaScript code at once or include individual plugins.

As you can see, the JavaScript component of Twitter Bootstrap is very modular: http://getbootstrap.com/javascript. So if you don't use a particular feature, just don't include it.

This way, in development mode you have all the individual plugins split into different files, while in production a single concatenated and minified JavaScript file, built automatically, is served.

So if you just need alert.js and dropdown.js you can get a 2.79 kB plugins.js:

The concatenation of alert.js and dropdown.js produces a 7.06 kB file, which weighs in at 2.79 kB after minification, instead of the 8.9 kB (gzipped) bootstrap-min.js, which is 27.2 kB ungzipped.

Html (template) minification

Since the ZPT/Chameleon templating language is an extension of HTML with XML syntax (browsers are able to display unrendered ZPT/Chameleon templates), in theory it can play well with HTML minifiers.

I know, template minification can lead to potential unexpected problems due to minification issues on template files... but this is my personal playground, so let me play, please!

So... why not switch to a pre-compiled, minified version of your ZPT/Chameleon templates when you are in "production mode"?

Obviously during development you will use the original template files.

The interesting side of this approach is that there is no overhead at response time, since the minification task runs just once, before deploying your application. It might be an option if you just want your HTML minified and cannot feasibly add further optimization tools at the web server level.

Anyway, I have tried this mad experiment and... if you don't use too-aggressive minification parameters, it seems to work fine, with good results. Try it at your own risk, or just disable it. Here you can see the effects on the generated index.html used in production:

Template minified (7.62 kB -> 4.16 kB)

Result: a lighter Pyramid

Same results, but a lighter Pyramid app:

Let's see how the standard Pyramid starter project behaves:

Standard Pyramid starter project (production.ini)

And the Pyramid starter seed:

Pyramid starter seed (production.ini)

As you can see, the seed version is ~38 kB smaller and more performant.

Useful links

That's all?

No, you can do more, for example:

Let me know what you think about that, please. Hope soon I will manage to write the second part of this blog post explaining how I did it. In the mean time you can:

Links

26 Sep 2014 7:19am GMT

24 Sep 2014

feedPlanet Plone

Mikko Ohtamaa: Avoiding load average spike alerts on Munin monitoring

Munin is a server monitoring tool written in Perl. In this post I'll introduce some monitoring basics and how to avoid unnecessary monitoring alerts on temporary server conditions.

Munin has a vibrant plugin community. You can easily write your own plugins, even as shell scripts. Plugins are autodiscovered, so dropping a script file on the server is enough to create your own monitoring graph. Munin master data collection is driven by cron and by default outputs static HTML files. It is very easy to set up and secure, being immune to the web attacks that, e.g., most legacy PHP systems suffer from.

Munin has its downsides, too. As with most open source projects, the Munin documentation is rough and not very helpful. The configuration file has its own format, and discovering the possible variables, values and options is cumbersome. Munin's scaling might not be ideal for large operations, both computer-wise and management-wise, but it's OK for up to 10-20 servers. Also, the Munin alert mechanism is quite naive: you cannot have math functions in alert triggers as you can with, e.g., Zabbix.

This brings us to the problem: because Munin cannot perform any kind of math on monitoring data, it alerts immediately when some value is out of bounds. This may result in unnecessary alerts on temporary conditions which really don't need system administrator action.

1. Monitoring your server CPU resources


Server CPU resource sufficiency is best monitored with the load average. When the load average gets too high, your server tasks are delayed and your response times start to suffer. However, there can be various causes for temporary load average spikes which resolve themselves and should not trigger alerts to your devops team.

Most of these issues resolve automatically within seconds or minutes of their onset. However, because Munin naively monitors only the latest load value, it has no logic to determine whether a load alert is genuine or temporary. In Zabbix this problem can be avoided by monitoring the minimum load value over a time period.

2. Smoothing monitored value over a time period

Because Munin does not support trigger functions, one must calculate the load average smoothing on the server side. Below is a sample Python script which monitors the minimum load average over four Munin report cycles (20 minutes). Thus, it will alert you only if the load has stayed too high for 20 minutes.

There is a Python framework for building Munin plugins, but because this use case is so simple, the script can be self-contained.

#!/usr/bin/python
#
# Munin plugin for getting the minimum system load avg. of 20 minutes.
#
# We do not want to alert devops if the high load situation
# resolves itself in a few minutes.
#
# Compared to load avg (what is usually monitored by default),
# load minimum smoothes out load spikes caused by
#
# - Brute force attacks before security starts mitigating them
#
# - Timed jobs
#
# - Loss of network connection
#
# All these are visible in max load of time period, avg. load of time period,
# but do not affect min load (the base load level).
#
# Because Munin does not support trigger filtering functions, like e.g. Zabbix,
# we do the minimum load filtering on the server side by taking samples and
# then calculating out the minimum.
#
# Installation:
#
#   Put or symlink this script to /etc/munin/plugins/minload
#   chmod a+rx /etc/munin/plugins/minload
#   service munin-node restart
#
# Testing:
#
#   sudo -u munin munin-run minload
#
#

from __future__ import print_function

import os
import json

__author__ = "Mikko Ohtamaa <mikko@opensourcehacker.com>"
__license__ = "MIT"

# This is the file which tracks the load data
PERSISTENT_STORAGE = "/tmp/munin-minload.json"

# Read the persisted data, or start fresh if the file is missing or corrupt
try:
    with open(PERSISTENT_STORAGE, "r") as f:
        data = json.load(f)
except (IOError, ValueError):
    data = []

# munin-cron polls the node every 5 minutes;
# we keep the last 4 entries and take the min,
# so we get the minimum over 20 minutes

data.append(os.getloadavg()[0])
data = data[-4:]
min_load = min(data)

with open(PERSISTENT_STORAGE, "w") as f:
    json.dump(data, f)

print("""graph_title Load min 20 minutes
graph_vlabel load
graph_category system
rate.label rate
rate.value {}
rate.warning 20
rate.critical 40
graph_info The minimum load of the system for the last 20 minutes""".format(min_load))


24 Sep 2014 11:38am GMT

Davide Moro: Pyramid starter seed template powered by Yeoman (part 2)

In the previous blog post we saw the benefits of using the Yeoman workflow fully integrated with a web development framework like Pyramid. See:

Now we'll add more technical details about:

How to install pyramid_starter_seed

Prerequisites

As you can imagine, Node.js tools are required.

I strongly suggest to:

I won't cover the Node.js installation, but it is quite simple if you follow the instructions at this useful link:

nvm stands for Node Version Manager. Once nvm is installed, installing Node.js is as simple as typing nvm install VERSION (0.10.32 at the time of writing).

Node.js ships with a command line utility named npm (Node Package Manager), and we will use npm to install what we need.

We need to install our global (-g option) dev dependencies, so just type:

$ npm install -g bower
$ npm install -g grunt-cli
$ npm install -g karma

Starter seed project installation

Create an isolated Python environment, as explained in the official Pyramid documentation, and install Pyramid.

Once installed you can clone pyramid_starter_seed from github:

$ git clone git@github.com:davidemoro/pyramid_starter_seed.git
$ cd pyramid_starter_seed
$ YOUR_VIRTUALENV_PYTHON_PATH/bin/python setup.py develop

Not finished yet, continue.

Yeoman initialization

Go to the folder where our Yeoman project lives and initialize it.

These are the standard commands (but, wait a moment, see the "Notes and known issues" subsection):

$ cd pyramid_starter_seed/webapp
$ bower install
$ npm install --loglevel verbose

Known issues:

Build phase

Just type:

$ grunt

and... it will probably fail, because of a couple of known issues in the latest version of generator-webapp or its dependencies.

These issues will probably be fixed in newer generator-webapp releases. In the meantime, here is how to solve them, so don't worry:

  1. grunt-contrib-imagemin fix
    Problem with grunt:

    Warning: Running "imagemin:dist" (imagemin) task
    Warning: Bad argument Use --force to continue.

    Solution:

    $ npm cache clean
    $ rm -fR node_modules # not sure it is needed, don't remember
    $ npm install grunt-contrib-imagemin

  2. Mocha/PhantomJS issue

Problem with Mocha/PhantomJS launching grunt

Warning: PhantomJS timed out, possibly due to a missing Mocha run() call. Use --force to continue.

Solution:

$ cd test
$ bower install

Run bower install in the test directory of webapp (pyramid_starter_seed/webapp/test). This is a known issue, see https://github.com/yeoman/generator-webapp/issues/446.

Run your Pyramid app

Now you can choose to run Pyramid in development or production mode.
Just type:

$ YOUR_VIRTUALENV_PYTHON_PATH/bin/pserve development.ini

or:

$ YOUR_VIRTUALENV_PYTHON_PATH/bin/pserve production.ini

Done!



In the next blog post with topic Pyramid + Yeoman (coming soon) I'm going to talk about:

So stay tuned..

Links

24 Sep 2014 11:34am GMT

Davide Moro: Pyramid starter seed powered by Yeoman (part 3)

In the previous articles we have seen:

Having installed pyramid_starter_seed, we will now see:

How it works under the hood (narrative)

pyramid_starter_seed registers only one route (home -> /) and static asset views.

The home route is associated with a view callable using a webapp/%s/index.html renderer.

from pyramid.view import view_config

@view_config(route_name='home', renderer='webapp/%s/index.html')
def my_view(request):
    return {'project': 'pyramid_starter_seed'}

Views can be associated with routes imperatively or through a scan.

Wait, there is no .html renderer handled by default in Pyramid! pyramid_starter_seed registers a .html renderer that replaces the %s token with app or dist, depending on the production settings, and then calls the original .pt renderer.
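The token substitution at the heart of that renderer can be sketched in plain Python (resolve_template is a hypothetical helper name; the real plugin wires this into Pyramid's renderer machinery, see its source):

```python
def resolve_template(renderer_name, settings):
    """Substitute the 'minify' setting ('app' in development, 'dist' in
    production) into a renderer path like 'webapp/%s/index.html'."""
    return renderer_name % settings['minify']
```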

You can choose between production and development mode by running your Pyramid app with the appropriate .ini file provided by pyramid_starter_seed.

Here you can see the relevant parts of the production.ini:

[app:main]
use = egg:pyramid_starter_seed

PRODUCTION = true
minify = dist

...

and development.ini:

[app:main]
use = egg:pyramid_starter_seed

PRODUCTION = false
minify = app

...

As you can imagine, the PRODUCTION setting tells the application to switch between production and development mode. The minify setting tells the .html renderer how to construct template paths.

Let's see what the index.html template looks like. When you write CSS or JavaScript files you want to keep things separated in different modules while in development mode, but in production mode you might want a single concatenated and minified resource. Here you can see how to do that for the Bootstrap JavaScript modules you might want to enable:

<!doctype html>
<html class="no-js"
lang="${request.locale_name}"
tal:define="minify python:request.registry.settings['minify'];
production python:request.registry.settings.get('PRODUCTION', 'false') == 'true'">
...

<tal:production tal:condition="production">
<script src="${request.static_url('pyramid_starter_seed:webapp/%s/scripts/plugins.js' % minify)}"></script>
</tal:production>
<tal:not_production tal:condition="not:production">
<script src="${request.static_url('pyramid_starter_seed:webapp/%s/bower_components/bootstrap/js/alert.js' % minify)}"></script>
<script src="${request.static_url('pyramid_starter_seed:webapp/%s/bower_components/bootstrap/js/dropdown.js' % minify)}"></script>

</tal:not_production>
<!-- build:js scripts/plugins.js -->
<tal:comment replace="nothing">
<!-- DO NOT REMOVE this block (minifier) -->
<script src="./bower_components/bootstrap/js/alert.js"></script>
<script src="./bower_components/bootstrap/js/dropdown.js"></script>

</tal:comment>
<!-- endbuild -->
...
</html>

So in development mode you have two separate JavaScript files, alert.js and dropdown.js, while in production mode a single concatenated and uglified scripts/plugins.js is served. At first it might seem a bit complicated, or a setup that is too verbose, but it is a simple and very powerful mechanism.

For example: do you want to add another JavaScript file to the plugins.js bundle? Just add two lines and you are done!

It works without any other configuration thanks to the start and end comment markers build:js and endbuild, which group the assets.

Simple, isn't it? The same goes for CSS files.

And including images is even simpler:

<img class="logo img-responsive" src="${request.static_url('pyramid_starter_seed:webapp/%s/images/pyramid.png' % minify)}" alt="pyramid web framework">

How to manage things with grunt

Concatenation, minification/uglification and image optimization (along with the other tasks specified in the Grunt pipeline) are performed automatically just by running the following command:

$ grunt build

This is just one of the possible implementations. Feel free to contribute and improve pyramid_starter_seed.

How to clone pyramid_starter_seed

Fetch pyramid_starter_seed, personalize it and then clone it!

Pyramid starter seed can be fetched, personalized and released under another name, so other developers can bootstrap, build, release and distribute their own starter templates without having to write a new package template generator. For example, you could create a more opinionated starter seed based on SQLAlchemy, the ZODB NoSQL database, or one powered by a JavaScript framework like AngularJS, and so on.

The clone method should speed up the creation of new, more evolved packages based on Pyramid, even for people who are not keen on writing their own reusable scaffold templates.

So if you want to release your own customized template based on pyramid_starter_seed, you'll have to call a console script named pyramid_starter_seed_clone with the following syntax (obviously, call this command outside the root directory of pyramid_starter_seed):

$ YOUR_VIRTUALENV_PYTHON_PATH/bin/pyramid_starter_seed_clone new_template

and you'll get as a result a perfectly renamed clone, new_template:

A new starter template cloned from pyramid_starter_seed

If you provide tests, you can check immediately whether something went wrong during the cloning process.

The clone console script might not work in some corner cases, for example if you choose a new package name that contains reserved words or the name of a dependency of your plugin, but such problems should be quite easy to fix by hand or by improving the console script. Anyway, this mechanism has been tested for several years: I built the script years ago because I was fed up with ugly package names chosen by me or my colleagues, and it has saved me a lot of time.
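The heart of such a rename-and-clone script can be sketched as follows. This is an illustration with hypothetical names, not the actual pyramid_starter_seed_clone code, and it only handles text files: walk the package tree and rewrite the old name to the new one in directory names, file names and file contents.

```python
import os

def clone_package(src, dst, old_name, new_name):
    """Copy the tree at src to dst, renaming old_name to new_name
    everywhere it appears (paths and text file contents)."""
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel.replace(old_name, new_name))
        if not os.path.isdir(target):
            os.makedirs(target)
        for name in files:
            with open(os.path.join(root, name)) as f:
                content = f.read()
            out_path = os.path.join(target, name.replace(old_name, new_name))
            with open(out_path, "w") as f:
                f.write(content.replace(old_name, new_name))
```

A real clone script also has to skip binary files and avoid renaming dependencies that happen to contain the old name, which is exactly where the corner cases mentioned above come from.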


If you want to disable the console script in your new template (for example: new_template_clone), drop the console script configuration from setup.py.
So it sounds like a viral extension mechanism (I hope).

End of story?


In the near future I'd like to create a new Pyramid starter seed for single page web apps, based on SQLAlchemy and powered by AngularJS. So if you have similar plans... together is better: if you want to share your thoughts, improvements or feedback in general, or if you are going to create your own template based on pyramid_starter_seed, please contact me (Twitter, Google+, LinkedIn)!


Anyway I hope you'll save time with pyramid_starter_seed.

Links

24 Sep 2014 11:33am GMT

23 Sep 2014

feedPlanet Plone

eGenix: Python Meeting Düsseldorf - 2014-09-30

The following announces a regional Python user group meeting in Düsseldorf, Germany.

Announcement

The next Python Meeting Düsseldorf takes place on:

Tuesday, 30 Sep 2014, 6:00 pm
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


News

Talks already registered

Charlie Clark
"Generator Gotchas"

Marc-Andre Lemburg
"Pythons und Fliegen - Speicherbedarf von Python Objekten optimieren" (optimizing the memory footprint of Python objects)

We will also present the results of the PyDDF Sprint 2014 taking place this coming weekend.

We are still looking for more talks. If you are interested, please contact info@pyddf.de.

Start time and location

We meet at 6:00 pm at the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the underground garage entrance of the Düsseldorfer Arcaden.

A large "Schwimm'in Bilk" logo sits above the entrance. Behind the door, turn immediately left to the two elevators, then ride up to the 2nd floor. The entrance to Room 1 is directly on the left as you leave the elevator.

>>> Entrance on Google Street View

Introduction

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel offers a good overview of the talks; we publish videos of the talks there after each meeting.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:

Program

The Python Meeting Düsseldorf uses a mix of Open Space and lightning talks, although our "lightning" can sometimes last 20 minutes :-)

Lightning talks can be registered in advance or proposed spontaneously during the meeting. A projector with XGA resolution is available. Please bring your slides as a PDF on a USB stick.

To register a lightning talk, simply send an informal email to info@pyddf.de

Contribution towards costs

The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all participants to bring the amount in cash.

Registration

Since we only have seats for about 20 people, we ask you to register by email. Registration is not binding, but it makes planning easier for us.

To register for the meeting, simply send an informal email to info@pyddf.de

Further information

You can find further information on the meeting's website:

http://pyddf.de/

Have fun!

Marc-Andre Lemburg, eGenix.com

23 Sep 2014 8:00am GMT

Four Digits: Friday 26 September Silicon Alley Lightning Talks

This Friday, lightning talks are on the program again! This time at our neighbors'.

So far we have five speakers:

The program starts at four o'clock with walk-in.

Note: different location! We are organizing the lightning talks together with our neighbors at Concepts26, Jansbinnensingel 20.

23 Sep 2014 7:55am GMT