29 Jul 2016

Dries Buytaert: Setting higher standards for corporate contributions to Drupal

Last week I made a comment on Twitter that I'd like to see Pantheon contribute more to Drupal core. I wrote that in response to the announcement that Pantheon has raised a $30 million Series C. Pantheon has now raised between $50 and $60 million in working capital (depending on whether you count Industry Ventures' $8.5M) and is in a special class of companies. This is an amazing milestone. Pantheon and Acquia compete for business, and though it wasn't meant that way, my tweet could be read as a cheap attack on a competitor; it drew a fair amount of criticism. Admittedly, Pantheon was neither the best nor the only example to single out. Many companies don't contribute to Drupal at all, and Pantheon does contribute to Drupal in a variety of ways, such as sponsoring events and supporting the development of contributed modules. In hindsight, I recognize that my tweet was not one of my best, and for that I apologize.

Having said that, I'd like to reiterate something I've said before, in my remarks at DrupalCon Amsterdam and many times on this blog: I would like to see more companies contribute more to Drupal core - with the emphasis on "core". Drupal is now relied upon by many, and needs a strong base of commercial contributors. We have to build Drupal together. We need a bigger and more diverse base of organizations taking on both leadership and contribution.

Contribution to Drupal core is the most important type of contribution in terms of the impact it can make. It touches every aspect of Drupal and all users who depend on it. Long-term and full-time contribution to core is not within everyone's reach. It typically requires a larger investment due to a variety of factors: the complexity of the problems we are solving, our need for stringent security, and the importance of having a rigorous review process. So much is riding on Drupal for all of us today. While every module, theme, event and display of goodwill in our community is essential, contributions to core are quite possibly the hardest and most thankless, but also the most rewarding of all when it comes to Drupal's overall progress and success.

I believe we should have different expectations for different organizations based on their maturity, their funding, their profitability, how strategic Drupal is for them, etc. For example, sponsoring code sprints is an important form of contribution for small or mid-sized organizations. But for any organization that makes millions of dollars with Drupal, I would hope for more.

The real question that we have to answer is this: at what point should an organization meaningfully contribute to Drupal core? Some may say "never", and that is their Open Source right. But as Drupal's project lead it is also my right and responsibility to encourage those who benefit from Drupal to give back. It should not be taboo for our community to question organizations that don't pull their weight, or choose not to contribute at all.

For me, committing my workdays and nights to Drupal isn't the exhausting part of my job. It's dealing with criticism that comes from false or incomplete information, or tackling differences in ideals and beliefs. I've learned not to sweat the small stuff, but it's on important topics like giving back that my emotions and communication skills get tested. I will not apologize for encouraging organizations to contribute to Drupal core. It's a really important topic and one that I'm very passionate about. I feel good knowing that I'm pushing these conversations from inside the arena rather than from the sidelines, and for the benefit of the Drupal project at large.

29 Jul 2016 4:46pm GMT

Frank Goossens: Music from Our Tube; Floating Points live at KEXP

Floating Points may have gotten popular with his (great) laptop-electro tracks, and his DJ sets feature a lot of soul and disco, but his latest release is very jazzy and ... earthy. Enjoy this live show he did in the KEXP studios in May this year:

29 Jul 2016 1:50pm GMT

Mattias Geniar: Enable QUIC protocol in Google Chrome

Google has support for the QUIC protocol in the Chrome browser, but it's only enabled for their own websites by default. You can enable it for use on other domains too -- assuming the webserver supports it. At this time, it's a setting you need to explicitly enable.

To start, open a new tab and go to chrome://flags/. Find the Experimental QUIC protocol and change the setting to Enabled. After the change, restart Chrome.
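If you'd rather script this, Chromium also accepts a command-line switch for the same setting. As a sketch, launching Chrome with the --enable-quic flag enables it for that session (the binary name varies per platform; google-chrome is used here as an example):

$ google-chrome --enable-quic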

chrome_quic_support_setting

To find out if QUIC is enabled in your Chrome in the first place, go to chrome://net-internals/#quic.

In my case, it was disabled (which is the "default" value).

chrome_quic_internals_enabled

After changing the setting to enable QUIC support and restarting Chrome, the results were much better.

chrome_quic_internals_status_enabled

On the same page, you can also get a live list of which sessions are using the QUIC protocol. If it's enabled, it'll probably only be Google services for now.

chrome_quic_internals_sessions

I'm working on a blog post to explain the QUIC protocol and how it compares to HTTP/2, so stay tuned for more QUIC updates!

29 Jul 2016 9:55am GMT

28 Jul 2016

Philip Van Hoof: Truly huge files and the problem of continuous virtual address space

As we all know, mmap -- or, even worse, CreateFileMapping on Windows -- needs contiguous virtual address space for a given mapping size. That can become a problem when you want to load a gigabyte-sized file with mmap.

The solution is, of course, to mmap the big file using multiple mappings. For example, by adapting yesterday's demo this way:

void FileModel::setFileName(const QString &fileName)
{
    ...
    if (m_file->open(QIODevice::ReadOnly)) {
        // MAX_MAP_SIZE is the fixed size of one mapping chunk; m_file_maps
        // holds one (lazily created) mapping pointer per chunk.
        if (m_file->size() > MAX_MAP_SIZE) {
            m_mapSize = MAX_MAP_SIZE;
            m_file_maps.resize(1 + m_file->size() / MAX_MAP_SIZE, nullptr);
        } else {
            m_mapSize = static_cast<quint32>(m_file->size());
            m_file_maps.resize(1, nullptr);
        }
        ...
    } else { // else-branch of the elided if (!m_index->exists()) check above
        m_index->open(QFile::ReadOnly);
        m_rowCount = m_index->size() / 4;
    }
    // Only the first chunk is mapped up front; data() maps the rest on demand.
    m_file_maps[0] = m_file->map(0, m_mapSize, QFileDevice::NoOptions);
    qDebug() << "Done loading " << m_rowCount << " lines";
    map_index = m_index->map(0, m_index->size(), QFileDevice::NoOptions);

    beginResetModel();
    endResetModel();
    emit fileNameChanged();
}

And in the data() function:

QVariant FileModel::data( const QModelIndex& index, int role ) const
{
    QVariant ret;
    ... // role check and index-file offset lookups elided (see yesterday's demo)
    quint32 mapIndex = pos_i / MAX_MAP_SIZE;
    quint32 map_pos_i = pos_i % MAX_MAP_SIZE;
    quint32 map_end_i = end_i % MAX_MAP_SIZE;
    uchar* map_file = m_file_maps[mapIndex];
    // Lazily create the mapping for this chunk on first use.
    if (map_file == nullptr)
        map_file = m_file_maps[mapIndex] = m_file->map(mapIndex * m_mapSize, m_mapSize, QFileDevice::NoOptions);
    position = m_file_maps[mapIndex] + map_pos_i;
    if (position) {
            const int length = static_cast<int>(end_i - pos_i);
            char *buffer = (char*) alloca(length+1);
            if (map_end_i >= map_pos_i)
                // The whole line lives inside this one chunk.
                strncpy (buffer, (char*) position, length);
            else {
                // The line straddles a chunk boundary: copy the tail of this
                // chunk, then the head of the next chunk (mapping it if needed).
                const uchar *position2 = m_file_maps[mapIndex+1];
                if (position2 == nullptr) {
                    position2 = m_file_maps[mapIndex+1] = m_file->map((mapIndex+1) *
                         m_mapSize, m_mapSize, QFileDevice::NoOptions);
                }
                strncpy (buffer, (char*) position, MAX_MAP_SIZE - map_pos_i);
                strncpy (buffer + (MAX_MAP_SIZE - map_pos_i), (char*) position2, map_end_i);
            }
            buffer[length] = 0;
            ret = QVariant(QString(buffer));
        }
    } // closes the role check elided above
    return ret;
}

You could also skip mmap for the very big source text file altogether and use m_file->seek(pos_i) and m_file->read(buffer, length). The most important mapping is the index one: reading the individual lines can be done fast enough with normal read() calls, as long as you don't have to do it for each and every line of the very big file, and as long as you know in an O(1) way where the QAbstractListModel's index.row()'s data is.
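For illustration, here is a minimal sketch of that seek()/read() alternative. It reuses the read_uint32() helper and the members from the full FileModel listing further down this page; dataViaRead() is a hypothetical helper name, not part of the original demo:

QVariant FileModel::dataViaRead(const QModelIndex &index) const
{
    // Resolve the line's absolute byte range from the mmap'ed index file.
    quint32 pos_i = read_uint32(map_index + (4 * index.row()));
    quint32 end_i = (index.row() == m_rowCount - 1)
        ? static_cast<quint32>(m_file->size())
        : read_uint32(map_index + (4 * (index.row() + 1)));
    const int length = static_cast<int>(end_i - pos_i);

    // No mapping of the big file needed: seek to the line and read it.
    QByteArray line(length, '\0');
    if (!m_file->seek(pos_i) || m_file->read(line.data(), length) != length)
        return QVariant();
    return QVariant(QString::fromLatin1(line.constData(), length));
}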

But you already knew that. Right?

28 Jul 2016 12:42pm GMT

Mattias Geniar: Varnish Agent: an HTML frontend to manage & monitor your varnish installation

I've been using Varnish for several years, but I only just recently learned of the Varnish Agent. It's a small daemon that can connect to a running Varnish instance to help manipulate it: load new VCLs, see statistics, watch the varnishlog, flush caches, ...

If you're new to Varnish, this is an easier way of getting started than by learning all the CLI tools.

Installing Varnish Agent

The installation is pretty straightforward, assuming you're already using the Varnish repositories.

$ yum install varnish-agent

If you don't have the package available in your repositories, clone the source from varnish/vagent2 on GitHub and compile it yourself.

After that, start the service and it will listen on port 6085 by default.

$ systemctl start varnish-agent

By default, the web interface is protected by a simple HTTP authentication requiring username + password. Those get randomly generated during the installation and you can find them in /etc/varnish/agent_secret.

$ cat /etc/varnish/agent_secret
varnish:yourpass

After that, browse to $IP:6085, log in and behold Varnish Agent.

What does Varnish Agent look like?

To give you an idea, here's a screenshot of the Varnish agent running on this server.

(As you can see, it's powered by the Bootstrap CSS framework that I also used on this site.)

varnish_agent_demo

A couple of features are worth diving into even further.

Cache invalidation via Varnish Agent

One of the very useful features is that the Varnish Agent offers you a simple form to purge the cache for certain URLs. In Varnish terminology, this is called "banning".

varnish_agent_cache_invalidation

There are limits though: you pass the URL parameters, but you can't (yet?) pass the host. So if you want to ban the URL /index.html, you'll purge it for all the sites on that Varnish instance.
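For reference, the varnishadm CLI doesn't have that limitation: a ban expression can combine the host and the URL. Something along these lines, with the hostname and URL as made-up examples:

$ varnishadm "ban req.http.host == www.example.com && req.url == /index.html"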

See cache misses

Another useful one is the parsing of varnishtop right in the web frontend.

varnish_agent_varnishtop_cache_misses

It instantly shows you which URLs are being fetched from the backend and are thus cache misses. These are probably the URLs or HTTP calls to focus on and see where cacheability can be improved.

Inline VCL editor

I consider this a very dangerous feature but a lifesaver at the same time: the web frontend allows you to edit the VCL of Varnish and instantly load it in the running Varnish instance (without losing the cache). If you're hit by a sudden traffic spike or need to quickly manipulate the HTTP requests, having the ability to directly modify the Varnish VCL is pretty convenient.

Important to know is that the VCL configs aren't persisted on disk: they are passed to the running Varnish instance directly, but restarting the server (or the Varnish service) will cause the default .vcl file to be loaded again.
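If you want a change to survive a restart, save the VCL to disk yourself and load it via varnishadm, which is roughly what the agent does against the management interface under the hood. A sketch, with myvcl as an arbitrary label and a hypothetical file path:

$ varnishadm vcl.load myvcl /etc/varnish/my-changes.vcl
$ varnishadm vcl.use myvcl
$ varnishadm vcl.list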

Varnishstat: statistics, graphs & numbers

The CLI tool varnishstat shows you the number of hits/misses, connections per second, ... from the command line. But it isn't very useful for seeing historical data. That's usually handled by your monitoring system, which fetches those datapoints and shows them in a timeline.

The Varnish Agent can parse those numbers and show you a (limited) timeline about how they evolved. It looks like this.

varnish_agent_graphs

The use case is limited, but it helps for a quick glance at the state of your Varnish instance.

Conclusion

While I personally still prefer the command line, I see the benefits of a simple web interface to quickly assess the state of your Varnish instance.

Having a built-in form to perform cache invalidation is useful and prevents having to create your own varnish URL purger.

If you're going to run Varnish Agent, make sure to look into firewalling the Varnish Agent port so only you are allowed access.

28 Jul 2016 12:15pm GMT

27 Jul 2016

Wouter Verhelst: DebConf16 low resolution videos

By popular request...

If you go to the Debian video archive, you will notice the appearance of an "lq" directory in the debconf16 subdirectory of the archive. This directory contains low-resolution re-encodings of the same videos that are available in the toplevel.

The quality of these videos is obviously lower than that of the ones made available during debconf, but their file sizes should be about a quarter of those of the full-quality versions. This may make them more attractive as a quick download, as a version for a small screen, as a download over a mobile network, or something of the sort.

Note that the audio quality has not been reduced. If you're only interested in the audio of the talks, these files may be a better option.

27 Jul 2016 8:13pm GMT

26 Jul 2016

Philip Van Hoof: Loading truly truly huge text files with a QAbstractListModel

Sometimes people want to do crazy stuff like loading a gigabyte-sized plain text file into a Qt view that can handle a QAbstractListModel. Like, for example, a QML ListView. You know, the kind of file you generate with this command:

base64 /dev/urandom | head -c 100000000 > /tmp/file.txt

But, how do they do it?

FileModel.h

So we will make a custom QAbstractListModel. Its private member fields I will explain later:

#ifndef FILEMODEL_H
#define FILEMODEL_H

#include <QObject>
#include <QVariant>
#include <QAbstractListModel>
#include <QFile>

class FileModel: public QAbstractListModel {
    Q_OBJECT

    Q_PROPERTY(QString fileName READ fileName WRITE setFileName NOTIFY fileNameChanged )
public:
    explicit FileModel( QObject* a_parent = nullptr );
    virtual ~FileModel();

    int columnCount(const QModelIndex &parent) const;
    int rowCount( const QModelIndex& parent =  QModelIndex() ) const Q_DECL_OVERRIDE;
    QVariant data( const QModelIndex& index, int role = Qt::DisplayRole ) const  Q_DECL_OVERRIDE;
    QVariant headerData( int section, Qt::Orientation orientation,
                         int role = Qt::DisplayRole ) const  Q_DECL_OVERRIDE;
    void setFileName(const QString &fileName);
    QString fileName () const
        { return m_file->fileName(); }
signals:
    void fileNameChanged();
private:
    QFile *m_file, *m_index;
    uchar *map_file;
    uchar *map_index;
    int m_rowCount;
    void clear();
};

#endif // FILEMODEL_H

FileModel.cpp

We will basically scan the very big source text file for newline characters. We'll write the offsets of those to a file suffixed with ".mmap". We'll use that new file as a sort of "partition table" for the very big source text file, in the data() function of QAbstractListModel. But instead of sectors and files, it points to newlines.

The reason the scanner itself isn't using the mmap's address space is that reading blocks of 4kb is apparently faster than reading each and every byte from the mmap in search of \n characters. Or at least it was on my hardware.

You should probably do the scanning in small QEventLoop iterations (make sure to use non-blocking reads, then) or in a thread, as your very big source text file can be on an unreliable or slow I/O device. Plus it's very big, else you wouldn't be doing this (please promise me to just read the entire text file into memory unless it's hundreds of megabytes in size: don't micro-optimize your silly homework notepad.exe clone).

Note that this is demo code with a lot of bugs, like not checking for \r, and god knows what memory leaks and other issues remained when it suddenly worked. I leave it to the reader to improve it. One example: you should check the validity of the ".mmap" file, as your very big source text file might have changed since the newline partition table was made.

Knowing that I'll soon find this all over the place without any of its bugs fixed, here it comes ..

#include "FileModel.h"

#include <QDebug>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>   // memchr, strncpy, memset
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

FileModel::FileModel( QObject* a_parent )
    : QAbstractListModel( a_parent )
    , m_file (nullptr)
    , m_index(nullptr)
    , map_file (nullptr)  // start out null: clear() tests these before unmap
    , map_index(nullptr)
    , m_rowCount ( 0 ) { }

FileModel::~FileModel() { clear(); }

void FileModel::clear()
{
    if (m_file) {
        if (m_file->isOpen() && map_file != nullptr)
            m_file->unmap(map_file);
        delete m_file;
    }
    if (m_index) {
        if (m_index->isOpen() && map_index != nullptr)
            m_index->unmap(map_index);
        delete m_index;
    }
}

void FileModel::setFileName(const QString &fileName)
{
   clear();
   m_rowCount = 0;
   m_file = new QFile(fileName);
   int cur = 0;
   m_index = new QFile(m_file->fileName() + ".mmap");
   if (m_file->open(QIODevice::ReadOnly)) {
       if (!m_index->exists()) {
           char rbuffer[4096];
           m_index->open(QIODevice::WriteOnly);
           char nulbuffer[4];
           int idxnul = 0;
           memset( nulbuffer +0, idxnul >> 24 & 0xff, 1 );
           memset( nulbuffer +1, idxnul >> 16 & 0xff, 1 );
           memset( nulbuffer +2, idxnul >>  8 & 0xff, 1 );
           memset( nulbuffer +3, idxnul >>  0 & 0xff, 1 );
           m_index->write( nulbuffer, sizeof(quint32));
           qDebug() << "Indexing to" << m_index->fileName();
           while (!m_file->atEnd()) {
               int in = m_file->read(rbuffer, sizeof(rbuffer));
               if (in <= 0)
                   break;
               // Bounded search: rbuffer is not NUL-terminated, so strchr
               // could scan past the bytes we just read; memchr cannot.
               char *last = rbuffer;
               char *newline;
               while ((newline = (char*) memchr(last, '\n', (rbuffer + in) - last)) != nullptr) {
                  char buffer[4];
                  int idx = cur + (newline - rbuffer);
                  memset( buffer +0, idx >> 24 & 0xff, 1 );
                  memset( buffer +1, idx >> 16 & 0xff, 1 );
                  memset( buffer +2, idx >>  8 & 0xff, 1 );
                  memset( buffer +3, idx >>  0 & 0xff, 1 );
                  m_index->write( buffer, sizeof(quint32));
                  m_rowCount++;
                  last = newline + 1;
               }
               cur += in;
           }
           m_index->close();
           m_index->open(QFile::ReadOnly);
           qDebug() << "done";
       } else {
           m_index->open(QFile::ReadOnly);
           m_rowCount = m_index->size() / 4;
       }
       map_file= m_file->map(0, m_file->size(), QFileDevice::NoOptions);
       qDebug() << "Done loading " << m_rowCount << " lines";
       map_index = m_index->map(0, m_index->size(), QFileDevice::NoOptions);
   }
   beginResetModel();
   endResetModel();
   emit fileNameChanged();
}

static quint32
read_uint32 (const quint8 *data)
{
    return data[0] << 24 |
           data[1] << 16 |
           data[2] << 8 |
           data[3];
}

int FileModel::rowCount( const QModelIndex& parent ) const
{
    Q_UNUSED( parent );
    return m_rowCount;
}

int FileModel::columnCount(const QModelIndex &parent) const
{
    Q_UNUSED( parent );
    return 1;
}

QVariant FileModel::data( const QModelIndex& index, int role ) const
{
    if( !index.isValid() )
        return QVariant();
    if (role == Qt::DisplayRole) {
        QVariant ret;
        quint32 pos_i = read_uint32(map_index + ( 4 * index.row() ) );
        quint32 end_i;
        if ( index.row() == m_rowCount-1 )
            end_i = m_file->size();
        else
            end_i = read_uint32(map_index + ( 4 * (index.row()+1) ) );
        uchar *position;
        position = map_file +  pos_i;
        uchar *end = map_file + end_i;
        int length = end - position;
        char *buffer = (char*) alloca(length +1);
        memset (buffer, 0, length+1);
        strncpy (buffer, (char*) position, length);
        ret = QVariant(QString(buffer));
        return ret;
    }
    return QVariant();
}

QVariant FileModel::headerData( int section, Qt::Orientation orientation, int role ) const
{
    Q_UNUSED(section);
    Q_UNUSED(orientation);
    if (role != Qt::DisplayRole)
           return QVariant();
    return QString("header");
}

main.cpp

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>// qmlRegisterType

#include "FileModel.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    qmlRegisterType<FileModel>( "FileModel", 1, 0, "FileModel" );
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

main.qml

import QtQuick 2.3
import QtQuick.Window 2.2
import FileModel 1.0

Window {
    visible: true

    FileModel { id: fileModel }
    ListView {
        id: list
        anchors.fill: parent
        delegate: Text { text: display }
        MouseArea {
            anchors.fill: parent
            onClicked: {
                list.model = fileModel
                fileModel.fileName = "/tmp/file.txt"
            }
        }
    }
}

profile.pro

TEMPLATE = app
QT += qml quick
CONFIG += c++11
SOURCES += main.cpp \
    FileModel.cpp
RESOURCES += qml.qrc
HEADERS += \
    FileModel.h

qml.qrc

<RCC>
    <qresource prefix="/">
        <file>main.qml</file>
    </qresource>
</RCC>

26 Jul 2016 7:15pm GMT

Mattias Geniar: Why do we automate?

Well that's a stupid question, to save time obviously!

I know, it sounds obvious. But there's more to it.

The last few years I've been co-responsible for determining our priorities at Nucleus. Deciding what to automate and where to focus our development and sysadmin efforts. Which tasks do we automate first?

That turns out to be a rather complicated question with many if's and but's.

For a long time, I only looked at the time-saved metric to determine what we should do next. I should've been looking at many more criteria.

To save time

This is the most common reason to automate and it's usually the only factor that helps decide whether there should be an effort to automate a certain task.

Example: time-consuming capacity planning

Task: every week someone has to gather statistics about the running infrastructure to calculate free capacity in order to purchase new capacity in time. This task takes an hour, every week.

Efforts to automate: it takes a developer 2 days work to gather info via API's and create a weekly report to management.

Gain: the development efforts pay themselves back in about 16 weeks. Whether this is worth it or not depends on your organisation.

xkcd-automation

Source: XKCD: Automation

It's an image usually referenced when talking about automation, but it holds a lot of truth.

The "time gained" metric is multiplied by the people affected by it. If you can save 10 people 5 minutes every day, you've practically gained an extra workday every week.

To gain consistency

Sometimes a task is very complicated but doesn't need to happen very often. There are checklists and procedures to follow, but it's always a human (manual) action.

Example: complicated migrations

Task: an engineer sometimes has to move e-mail accounts from one server to another. This doesn't happen very often but consists of a large number of steps where human error is easily introduced.

Efforts to automate: it may take a sysadmin a couple of hours to create a series of scripts to help automate this task.

Gain: the value in automating this is in the quality of the work. It guarantees a consistent method of migrations that everyone can follow and creates a common baseline for clients. They know what to expect and the quality of the results is the same every time.

At the same time, this kind of automation reduces human error and leads to a combined knowledge set. If everyone who is an expert in his/her own domain contributes to the automation, it can bring together the skill sets of very different people to create a much bigger whole: a collection of experiences, knowledge and opinions that ultimately leads to better execution and higher quality.

To gain speed, momentum and velocity

There are times when things just take a really long time in between tasks. It's very easy to lose focus or forget about follow-up tasks because you get distracted in the meantime.

Example: faster server setups and deliveries

Task: An engineer needs to install a new Windows server. Traditionally, this takes many rounds of Windows Updates and reboots. Most software installations require even more reboots.

Efforts to automate: a combination of PXE servers or golden templates and a series of scripts or config management to get the software stack to a reasonable state. A sysadmin (or a team of them) can spend several days automating this.

Gain: the immediate gain is in peace of mind and speed of operations. It reduces the time of go-live from several hours to mere minutes. It allows an operation to move much faster and consider new installations trivial.

This same logic applies to automating deployments of code or applications. By taking away the burden of performing deploys, it becomes much cheaper and easier to deploy very simple changes instead of prolonging deploys and going for big waterfall-like go-lives with lots of changes at once.

To schedule tasks

Some things need to happen at ungodly hours or at such a rapid interval that it's either impossible or impractical for a human to do.

Example: nightly maintenances

Task: Either as a one-time task or a recurring event, a set of MySQL tables needs to be altered. Given the application impact, this needs to happen outside office hours.

Efforts to automate: It will depend on the task at hand, but it's usually more work to automate than it is to do manually.

Gain: No one has to look at the task anymore. The fact that the maintenance can now be scheduled during off hours without human intervention makes it so that all preparations can be done during office hours -- well in advance -- and won't cause anyone to lose sleep over it.

It's quite common to spend more time making the script or automation than the time you would spend on it manually. The benefit however is that you no longer need to do things at night and you can prepare things, ask feedback from colleagues and take your time to think about the best possible way to handle it.

There is an additional benefit too: you automate to make things happen when they should, not when you remember they should.
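As a trivial illustration, assuming the preparation produced a script (alter-tables.sh is a made-up name), the one-off variant can be handed to at during office hours, and a recurring variant to cron:

$ echo "/usr/local/bin/alter-tables.sh" | at 03:30 tomorrow

# crontab entry: every Sunday at 03:30, with output logged for review
30 3 * * 0 /usr/local/bin/alter-tables.sh >> /var/log/alter-tables.log 2>&1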

To reduce boring or less fun tasks

If there's a recurring task that no one likes to do but is crucial to the organisation, it's probably worth automating.

Example: combining and merging related support tickets

Task: In a support department, someone is tasked to categorise incoming support tickets, merge the same tickets or link related tickets and distribute tasks.

Efforts to automate: A developer may spend several days writing the logic and algorithms to find and merge tickets automatically, based on pre-defined criteria.

Gain: A task that may be put on hold for too long because no one likes to do it, suddenly happens automatically. While it may not have been time consuming, the fact that it was put on hold too often impacts the organisation.

The actual improvement is to reduce the mental burden of having to perform those tasks in the first place. If your job consists of a thousand little tasks every day, it becomes easy to lose track of priorities.

To keep sysadmins and developers happy

Sometimes you automate things, not necessarily for any of the reasons above, but because your colleagues have signalled that it would be fun to automate it.

The tricky part here is assessing the value for the business. In the end, there should be value for the company.

Example: creating a dashboard with status reports

Task: Create a set of dashboards to be shown on monitors and TVs in the office.

Efforts to automate: Some hardware hacking with Raspberry Pi's, scripts to gather and display data and visualise the metrics and graphs.

Gain: More visibility in open alerts and overall status of the technology in the company.

Everyone who has dashboards knows the value they bring, but assessing whether they're worth the time and energy put into creating them is a very hard thing to do. How much time can you afford to spend creating them?

Improvements like these often come from colleagues. Listen to them and give them the time and resources to help implement them.

When to automate?

Given all these reasons on why to automate, this leaves the most difficult question of all: when to automate?

How and when do you decide whether something is worth automating? The time spent vs. time gained metric is easy to calculate, but how do you define the happiness of colleagues? How much is speed worth in your organisation?

Those are the questions that keep me up.

26 Jul 2016 2:02pm GMT

25 Jul 2016

Mattias Geniar: A new website layout, focussed on speed and simplicity

Out with the old, in with the new!

After a couple of years I felt it was time for a refresh of the design of this blog. It's already been through a lot of iterations, as it usually goes with WordPress websites. It's so easy to download and install a theme you can practically switch every day.

But the downside of WordPress themes is also obvious: you're running the same website as thousands of others.

Not this time, though.

PS: if you're reading this in your RSS feed or mail client, consider clicking through to the website to see the actual results.

Custom theme & design

This time, I decided to do it myself. Well, sort of. The layout is based on the Bootstrap CSS framework by Twitter. The design is inspired by Troy Hunt's site. Everything else I hand-crafted with varying degrees of success.

In the end, it's a WordPress theme that started out like this.

<?php

?>

Pretty empty.

Focus on the content

The previous design was chosen with a single goal in mind: maximise advertisement revenue. There were distinct locations for Google AdSense banners in the sidebar and on the top.

This time, I'm doing things differently: fuck ads.

I'm throwing away around €1,000 per year in advertisement revenue, but what I'm gaining is more valuable to me: peace of mind. Knowing there are ads influences your writing and topic selection. You're all about the pageviews. More views = more money. You choose link-bait titles. You write as quickly as you can just to get the exclusive on a story, not always for the better.

So from now on, it's a much simpler layout: content comes first. No more ads. No more bloat.

Speed improvements

The previous site was -- rather embarrassingly -- loading over 100 resources on every pageview. From CSS to JavaScript to images to remote trackers to ... well, everything.

blog_performance_old_theme

The old design: 110 requests with a total of 1.7MB of content. The page took more than 2 seconds to fully render.

With this new design, I focussed on getting as few requests as possible. And I think it worked.

blog_performance_new_theme

Most pages load with 14 HTTP requests for a total of ~300KB. It also renders a lot faster.

There are still some requests being made that I'd like to get rid of, but they're well hidden inside plugins I use -- even though I don't need their CSS files.

A lot of the improvements came from not including the default Facebook & Twitter widgets but working with the Font Awesome icon set to render the same buttons & links, without 3rd party tools.

Social links

I used to embed the Twitter follow & Facebook share buttons on the site. It had a classic "like this page" at the right column. But those are loaded from a Twitter/Facebook domain and do all sorts of JavaScript and AJAX calls in the background, all slowing down the site.

Not to mention the tracking: just by including those pieces of JavaScript I made every visitor involuntarily give their browsing habits to those players, all for their advertisement gains. No more.

To promote my social media, you can now find all necessary links in the top right corner -- in pure CSS.

social_follow

Want to share a page on social media? Those links are embedded in the bottom, also in CSS.

social_share

While the main motivator was speed and reducing the number of HTTP requests, not exposing my visitors to tracking they didn't ask for feels like a good move.

Why no static site generator?

If I'm going for speed, why didn't I pick a static site generator like Jekyll, Hugo, or Octopress?

My biggest concern was comments.

With a statically generated site, I would have to embed some kind of 3rd party comment system like Disqus. I'm not a big fan for a couple of reasons:

So, no static generator for me.

I do however combine WordPress with a static HTML plugin (similar to Wordfence). For most visitors, this should feel like a static HTML page with fast response times. It also helps me against large traffic spikes so my server doesn't collapse.

Typography

I'm a bit of a font-geek. I was a fan of webfonts for all the freedom they offered, but I'm taking a step back now to focus on speed. You see, webfonts are rather slow.

An average webfont that isn't in the browser cache takes about 150-300ms to load. All that for some typography? Doesn't seem worth it.

Now I'm following Github's font choice.

font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";

In short: it takes the OS default wherever possible. This site will look slightly different on Mac OSX vs Windows.

For Windows, it looks like this.

font_windows.png

And for Mac OSX:

font_macosx

Despite being a Linux and open source focussed blog, hardly any of my visitors use a Linux operating system -- so I've decided to ignore those special fonts for now.

Having said that, I do still use the Font Awesome webfonts for all the icons and glyphs you see. In terms of speed, I found it to be faster & more responsive to load a single webfont than to load multiple images and icons. And since I'm no frontend-guru, sprites aren't my thing.

Large, per-post cover images

This post has a cover image at the very top that's unique to this post. I now have the ability to give each post (or page) a unique feeling and design, just by modifying the cover image.

For most of my post categories I have sane defaults in case older posts don't have a custom header image. I like this approach, as it gives a sense of familiarity to each post. For instance, have a look at the design of these pages;

I like how the design can be changed for each post.

At the same time, I'm sacrificing a bit of my identity. My previous layouts all had the same theme for each page, giving -- hopefully -- a sense of familiarity and known ground. I'll have to see how this goes.

There's a homepage

This blog has always been a blog, pur sang. Nothing more.

But as of today, there is an actual homepage! One that doesn't just list the latest blogposts.

I figured it was time for some kind of persona building and having a homepage to showcase relevant projects or activities might persuade more visitors to keep up with my online activities (aka: Twitter followers).

Feedback appreciated!

I'm happy with the current layout, but I want to hear what you think: is it better or worse?

There are a couple of things I'm considering but haven't quite decided on yet:

If there are pages that need some additional markup, I'm all ears. Ping me on Twitter with a link!

25 Jul 2016 8:30am GMT

24 Jul 2016

Frank Goossens: Preparing (for) Autoptimize 2.0.3 or 2.1.0

It's that time of the year again where I humbly ask Autoptimize's users to download and test the "beta"-version of the upcoming release. I'm not entirely sure whether this should be 2.0.3 (a minor release) or 2.1.0 (a major one), but I'll let you guys & girls decide, OK?

Anyway, the following changes are in said new release;

So, if you're curious about Pablo's beautiful menu or if you just want to help Autoptimize out, download the beta and provide me with your feedback. If all goes well, we'll be able to push it (2.1.0?) out in the first half of August!

24 Jul 2016 8:46am GMT

23 Jul 2016

Dieter Adriaenssens: 2016

23 Jul 2016 10:47am GMT

22 Jul 2016

Mattias Geniar: vsftpd on linux: 500 OOPS: vsftpd: refusing to run with writable root inside chroot()

The following error can occur when you have just installed vsftpd on a Linux server and try to FTP to it.

Command:   USER xxx
Response:       331 Please specify the password.
Command:        PASS ******************
Response:       500 OOPS: vsftpd: refusing to run with writable root inside chroot()
Error:          Critical error: Could not connect to server

This is caused by the fact that the home directory of the user you're connecting as is writable. In normal chroot() situations, the chroot's root directory needs to be read-only.

This means that for most accounts created with useradd, which makes a home directory owned and writable by the user, the above "vsftpd: refusing to run with writable root inside chroot()" error will be shown.

To fix this, modify the configuration as such.

$ cat /etc/vsftpd/vsftpd.conf
...
allow_writeable_chroot=YES

If that parameter is missing, just add it to the bottom of the config. Next, restart vsftpd.

$ service vsftpd restart

After that, FTP should run smoothly again.
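If you'd rather keep vsftpd's safety check intact, the other common fix is to remove the write bit on the chroot root itself and give the user a writable subdirectory instead. A sketch, with /home/youruser and the uploads directory as placeholders:

$ chmod a-w /home/youruser
$ mkdir /home/youruser/uploads
$ chown youruser:youruser /home/youruser/uploads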

Alternatively: please consider using sFTP (FTP over SSH) or FTPs (FTP via TLS) with a modified, non-writeable, chroot.

22 Jul 2016 6:30pm GMT

21 Jul 2016

Dries Buytaert: City of Boston launches Boston.gov on Drupal

The before and after of Boston.gov

Yesterday the City of Boston launched its new website, Boston.gov, on Drupal. Not only is Boston a city well-known around the world, it has also become my home over the past 9 years. That makes it extra exciting to see the city of Boston use Drupal.

As a company headquartered in Boston, I'm also extremely proud to have Acquia involved with Boston.gov. The site is hosted on Acquia Cloud, and Acquia led a lot of the architecture, development and coordination. I remember pitching the project in the basement of Boston's City Hall, so seeing the site launched less than a year later is quite exciting.

The project was a big undertaking, as the old website was 10 years old and running on Tridion. The city's digital team, Acquia, IDEO, Genuine Interactive, and others all worked together to reimagine how a government can serve its citizens better digitally. It was an ambitious project: the whole website was redesigned from scratch in 11 months, from creating a new identity, to interviewing citizens, to building, testing and launching the new site.

Along the way, the project relied heavily on feedback from a wide variety of residents. The openness and transparency of the whole process was refreshing. Even today, the city keeps its roadmap public at http://roadmap.boston.gov and actively encourages citizens to submit suggestions. This open process is one of the many reasons why I think Drupal is such a good fit for Boston.gov.

Boston gov tell us what you think

More than 20,000 web pages and one million words were rewritten in a more human tone to make the site easier to understand and navigate. For example, rather than organize information primarily by department (as is often the case with government websites), the new site is designed around how residents think about an issue, such as moving, starting a business or owning a car. Content is authored, maintained, and updated by more than 20 content authors across 120 city departments and initiatives.

Boston gov tools and apps

The new Boston.gov is absolutely beautiful, welcoming and usable. And, like any great technology endeavor, it will never stop improving. The City of Boston has only just begun its journey with Boston.gov - I'm excited to see how it grows and evolves in the years to come. Go Boston!

Boston gov launch event
Last night there was a launch party to celebrate the launch of Boston.gov. It was an honor to give some remarks about this project alongside Boston Mayor Marty Walsh (pictured above), as well as Lauren Lockwood (Chief Digital Officer of the City of Boston) and Jascha Franklin-Hodge (Chief Information Officer of the City of Boston).

21 Jul 2016 4:50pm GMT

Lionel Dricot: Printeurs 39

This is post 39 of 39 in the Printeurs series.

Nellio, Eva, Max and Junior flee the sex doll factory aboard a free automatic taxi.

The taxi whisks us along at full speed.
- Junior, are you sure we won't be traced?
- Not if we use the free mode. The data is aggregated and anonymized. An old relic of a former law. And since the computer system works, nobody dares to update it or poke around too much in the databases. On the other hand, if we buy anything at all in the tunnel, we'd be spotted immediately!

While answering, he gazes in wonder at the metal fingers Max grafted onto him.

- Wow, to think I waited all this time to get an ear implant! It's amazing!
- It was necessary to install the finger-management software, Max adds. But the ear implant does come with a slight euphoria to dull the pain.
- By the way, Max, where are we going?
- I contacted FatNerdz on the network. He gave me the coordinates of the conglomerate's headquarters in the industrial zone.
- Can we really trust this FatNerdz, whom nobody has ever seen or met?

Max seems to hesitate for a moment.

- To be honest, what worse can happen to us than being taken down by explosive drones? And that's what will happen if we do nothing. There is definitely a fight going on to capture you, Nellio. We might as well clear all this up once and for all...

I turn to Eva.

- Eva? Talk to me! Help us!

She fixes me with a cold, cruel stare.

- I think I know who FatNerdz is. I have no proof, but I have the deep conviction that I know him well. Too well, even...

I don't have time to express my astonishment before the car suddenly slows down. All the windows roll down and our seats automatically turn to face outward. Junior barks an order at us in an incredibly authoritative tone.

- Whatever you do, don't touch anything, don't buy anything! Keep your hands wedged under your backsides.

Before our eyes, dispensers begin to parade past, presenting all sorts of products: sugary bars, colored drinks, alcohol, clothes, accessories...

- Junior, I say, a little ashamed to admit my ignorance, I've never taken the free tunnels. I've always been able to afford individual rides...
- Lucky you! The free tunnels are free in name only. Used regularly, they end up costing the user far more than paying directly for individual rides. That's what makes the poor even poorer: they sell the only thing they have left, their personality and their free will, for an illusion of free.

Holograms start dancing before my eyes; naked women and men wriggle, sip enticing drinks and languorously hold out spoonfuls of yogurt or pieces of reconstituted fruit. I feel a mixture of appetite, sexual desire and craving rising in me... Instinctively, I reach out toward a deliciously refreshing bottle of juice...

- No! Junior screams at me, slapping my arm hard. If you touch a single object, it will be charged to you via a retinal scan. Since financial transactions are closely monitored under the anti-terrorism laws, we'd be pulverized within the second! Hang on!

The car seems slower and slower to me. This tunnel is endless.

- As long as we don't buy anything, the car slows down, Junior whispers to me. But there's a maximum duration. Hang on!

I close my eyes to ease my urges, but the synthetic pheromones tease my senses. My nerves are raw; I feel assaulted, flayed, violated. Desire rises in me, I want to scream, I bite my hands until they bleed. I...

Light!

- We're out!

The car picks up speed again. I breathe painfully. Big drops of sweat bead on my forehead. With his cybernetic hand, Junior strokes my shoulder.

- It's true that it must be brutal the first time. The problem is that when you're exposed to it as a child, you develop a kind of tolerance. Buying reflexes are the ones ingrained in early childhood. So advertisers are locked in ever more violent competition to override those habits.

I turn to Eva, who seems to have remained impassive.

- Eva, you too told me you had never been exposed to advertising. Even less than me! You told me your parents made enormous sacrifices for that.

She hesitates. Chews her lips. An awkward silence settles, and Max breaks it.
- Eva, maybe it's time to tell him the truth.
- I don't know if he's ready to hear it...

I scream!

- Damn it, I'm being manipulated, hunted and tracked, I have every right to know what's happening to me! Shit, Eva, I sincerely believed I could count on you.
- You have always been able to count on me, Nellio. Always! I only lied to you about one thing: my origin.
- Then tell me everything!
- I thought what you saw at the Toy & Sex factory was enough.
- Well, it wasn't! It made everything even more confusing for me! Why are those new-generation inflatable dolls made in your image?

Max emits a sound that, if he had a biological larynx, would probably pass for a slight cough.

- Nellio, Eva continues softly. Those dolls are not made in my image.
- But...
- It is I who am...

A tremendous explosion suddenly rings out. The car is blown over and violently thrown onto its side. The crackle of gunfire can be heard.

- They've spotted us, I scream!
- No, Junior replies. If that were the case, we'd be dead. It's most likely a terrorist attack.

The four of us are tangled together, head over heels. Max tries to extricate himself from the vehicle. His feet and knees crush my ribs, but the pain remains bearable.

- Oh shit, an attack, I sigh, raising my hand to my bloodied forehead. Those damned Islamic sultanate militants again!
- Or police officers under orders, Junior adds with a mocking smile.
- Huh?
- Yes, when there aren't enough attacks, small ones get organized to justify the budgets. Sometimes they're local initiatives. Sometimes the orders come straight from the top, to push laws through or justify new measures. Either way, it makes people consume news and keeps the télépass busy.

Max's voice reaches us from outside.

- Say, are you getting a move on? They're gunning down everyone on the other side of the street. But they may well come and pick off the survivors of the explosion.
- After you, I say to Junior with a blasé air, happy to finally live through an explosion in which I'm not the primary target.

Photo by Oriolus.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

21 Jul 2016 2:23pm GMT

20 Jul 2016

Joram Barrez: Benchmarking the message queue based Activiti Async Executor

A bit of history: one thing that never ceases to amaze me is how Activiti is being used in some very large organisations at some very impressive scales. In the past, this has led to various optimizations and refactorings, amongst which was the async executor -- a replacement for the old job executor. For the uninitiated: these executors handle […]

20 Jul 2016 11:45am GMT

19 Jul 2016

FOSDEM organizers: Next FOSDEM: 4 & 5 February 2017

FOSDEM 2017 will take place at ULB Campus Solbosch on Saturday 4 and Sunday 5 February 2017. Further details and calls for participation will be announced in the coming weeks and months. Have a nice summer!

19 Jul 2016 3:00pm GMT