24 Apr 2015

Planet Grep

Xavier Mertens: Challenge Ahead, Free Tickets for Hack in Paris 2015!

Like the previous two years, I'm happy to be a media partner of the French security conference "Hack in Paris". The schedule is now online, and some great talks are lined up! As a media partner, I received a bunch of coupons for you. They will allow you to attend the two-day event for free.

Wanna play? The challenge starts by downloading this file. Be curious!

As usual, every contest needs some rules and recommendations:

One coupon grants you:

Not included:

You don't want to play and directly register? The registration page is here.

24 Apr 2015 4:04pm GMT

Frederic Hornain: [PAAS] Openshift Enterprise @ University

Openshift @ University

The Middleware Services Group within Information Technology Services (ITS) at the University of North Carolina at Chapel Hill needed a comprehensive, dynamic solution for frequent server provisioning requests and, in particular, managed servers. Without such a solution, the likelihood that users would employ outside vendors significantly increased. Use of outside vendors would potentially increase security concerns, present additional costs, and further complicate system administration. Moving to a fully-interoperable Platform-as-a-Service (PaaS) offering, built on OpenShift Enterprise by Red Hat®, has allowed the middleware services team to deliver a flexible development and hosting environment that has fostered innovation and increased peace of mind.

24 Apr 2015 3:13pm GMT

Paul Cobbaut: Raspberry Pi case with Lego

In case you want to build a Raspberry Pi case out of Lego, here is mine.

24 Apr 2015 9:13am GMT

23 Apr 2015

Mattias Geniar: Using Zabbix To Notify If Your Site/Domain Makes It To The HackerNews Frontpage

The post Using Zabbix To Notify If Your Site/Domain Makes It To The HackerNews Frontpage appeared first on ma.ttias.be.

This is the Ops way to do it.

A few days ago, an article by Twilio made it to the HackerNews frontpage. Its goal: get a notification whenever their site was promoted to the HN frontpage, so they could prepare for and react to the incoming traffic spike.

Cool blog post. But I've got a much easier solution. Here's the Ops way to deal with it.

Zabbix + Webchecks = <3

I'm an avid Zabbix user. It's an advanced monitoring tool we use to keep our servers in check.

One of the most powerful features of Zabbix is its webchecks. They let you configure a URL, an update interval, an expected HTTP status code and a string that must be present in the source code.

If either the HTTP status code is wrong or the checkstring is missing, Zabbix considers the webcheck failed and can trigger an action (like sending an SMS, a push notification or a plain old email).

I'm using the reverse logic to monitor the HN frontpage: I get an alert whenever a checkstring is found on the frontpage.
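The pass/fail logic Zabbix applies can be sketched in a few lines of plain JavaScript (purely illustrative; Zabbix does all of this internally, and the function name here is made up):

```javascript
// Illustrative sketch of a webcheck's pass/fail logic (not Zabbix code).
function webcheckFailed(statusCode, body, expectedStatus, requiredString) {
  // A step fails when the status code is wrong OR the string is absent.
  return statusCode !== expectedStatus || !body.includes(requiredString);
}

// Reverse logic: alert when the check *succeeds*, i.e. the domain IS present.
const body = '<a href="https://ma.ttias.be/some-post">Some post</a>';
if (!webcheckFailed(200, body, 200, 'https://ma.ttias.be')) {
  console.log('ALERT: domain found on the frontpage!');
}
```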

Zabbix Config

Assuming you've already got Zabbix running, adding this kind of monitoring is dead simple.

First, add a webcheck to retrieve the HN frontpage every 5 minutes. You start by making a Scenario.


Once you've got a Scenario, add a single step in that scenario: retrieve the frontpage.

This is where you configure the expected HTTP status code and the string that should be present in the source code. Zabbix retrieves the raw HTML and does no parsing of it, which means you can include HTML characters in the checkstring as well.


If your goal is to be alerted when a traffic spike is possible, I think it's more valuable to monitor the New Links section, where all new submissions are sent. This is where your post either gets upvoted or where it goes to die.

So I'll add monitoring on the New Links page as well.


Once the scenario is created, I've added the single step to retrieve the New Links page.


Getting data in Zabbix

Since your website or domain won't be on the Hacker News frontpage about 99.99% of the time, Zabbix will retrieve the URLs and find that the expected pattern doesn't match. Your webchecks will look like this.


And this makes sense, since the string https://ma.ttias.be indeed isn't on the HN frontpage.

Triggers and Actions

Now to make sure I receive an alert whenever this blog is added, I create 2 triggers.

Zabbix has a concept of failed and successful webchecks. A webcheck fails when one of its steps doesn't match the expectation: either the HTTP status code is wrong or the required pattern is missing.

My triggers look different from most: I trigger when the webcheck succeeds instead of when it fails, since a successful webcheck means the domain name is present.

Here's my alert for when the domain name is found on the frontpage.


The trigger looks like this:

{ma.ttias.be:web.test.fail[Check if domain ma.ttias.be is on the HN homepage].last(0)}=0

The web.test.fail item will have the value 1 when the test failed, and 0 when the test contained no errors. Hence, I'm alerting when the value reaches 0.

The same check is added for the webcheck that monitors the New Links page.


Which looks like this:

{ma.ttias.be:web.test.fail[Check if domain ma.ttias.be is on the HN New Links page].last(0)}=0

What It Looks Like

As soon as the pattern is found on either the homepage or the New Links section, a trigger fires in the dashboard.


You'll notice the webcheck returns OK, because the HTTP status code matched and the expected string was found.


It hasn't made it to the frontpage. Yet.

Because I've got actions configured to e-mail me any alert, I got the following alert by mail as well.



It's a simple trick to (ab)use Zabbix like this, but it has its limits: if someone uses a bit.ly shortener or an alternative domain name to point to my site, the check won't fire.

The only reason this works on HackerNews is that they are kind enough to include the actual destination URL in the source code. They could have rewritten outgoing links to URLs like out.php?id=1234 to track the number of clicks and hide the destination URL.

This Zabbix webcheck monitoring technique will also work on Reddit, as the actual destination URLs are in the source code.

If your site doesn't have a unique domain name, you may get some false positive alerts.

The Ops Way

Granted, Twilio's solution is sexier: they've got NodeJS, APIs, hackers, ...

My solution uses old and boring technology. But it works like a charm.

Monitoring my domain on Hacker News took me less than 5 minutes. Writing this blogpost took far longer.

I like doing things The Ops Way.

Related posts:

  1. Zabbix: monitor a TCP port with the Zabbix Agent If you want to monitor a remote host from the...
  2. Zabbix: zabbix_agentd: Can't recreate Zabbix semaphores for IPC key 0x123456 Semaphore ID 123456. Operation not permitted. You can get the following error when you're switching between...
  3. MoZBX: The Mobile Zabbix Client I'm a pretty big fan of Zabbix in general, the...

23 Apr 2015 6:11pm GMT

Kristof Willen: A Pebble NMBS app


Since I have a Pebble smartwatch, I've always wanted to dive into Pebble programming and, of course, find at the same time a solution for one of my itches. When commuting by train, checking the NMBS Android app can sometimes be a hassle, certainly if you're carrying a laptop bag while descending the stairs. So a Pebble app for quickly checking when your train leaves would be great! It even has the advantage that it could be written in JavaScript, avoiding the default C coding, as my C skills have become quite rusty after all those years.

Developing a Pebble.js app turned out to be quite easy: the most difficult part was understanding JSON (I'd never used it before) and wrapping my head around the iRail API. After a few hours, I had a first prototype running, which showed me the next 5 trains leaving Brussels-South, together with the departure time, platform and duration of the trip. Today, I've added an option to choose your starting point, reaching a point at which this could be called a first alpha release.
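As a sketch of what the JSON-handling part of such an app could look like, here's a plain-JavaScript helper. The field names (departure.time as a unix-timestamp string, duration in seconds) follow my reading of the iRail connections API and should be treated as assumptions:

```javascript
// Hypothetical helper: turn one iRail connection object into a display line.
// Assumes unix-timestamp strings and durations in seconds (verify against the API).
function summarizeConnection(conn) {
  const dep = new Date(parseInt(conn.departure.time, 10) * 1000);
  const hh = String(dep.getUTCHours()).padStart(2, '0');
  const mm = String(dep.getUTCMinutes()).padStart(2, '0');
  const min = Math.round(parseInt(conn.duration, 10) / 60);
  return `${hh}:${mm} - platform ${conn.departure.platform} (${min} min)`;
}

// Example with a hand-made response fragment:
const sample = { departure: { time: '1429797600', platform: '12' }, duration: '3120' };
console.log(summarizeConnection(sample)); // → 14:00 - platform 12 (52 min)
```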

There's still more work to do: not everyone uses the same commute stations as I do (they are currently hardcoded into the app), so those need to become configuration items. This needs to be addressed before I can release it into the wild.
Also, the app currently only supports direct connections. There's some administrative work to do to release it on the Pebble appstore. The code currently lives in CloudPebble; I need to import it into my local git repo, as the code changes dramatically from day to day (I really need to install GitLab on my machine too). And finally, it needs converting to SDK3 for the new Pebble Time, so the departure times can appear in your timeline.

23 Apr 2015 3:14pm GMT

Mattias Geniar: Blogging Tip: Send Yourself Blogpost Anniversary Reminders

The post Blogging Tip: Send Yourself Blogpost Anniversary Reminders appeared first on ma.ttias.be.

This is something I've tried out for the last 2 weeks, and it seems to be working.

This won't work for every kind of blog. Mine is very technical and contains quite a few guides, howtos, debug-quests, ... and most of these are rather timeless. If they're true today, chances are they'll still be true next year.

But most of that older content goes to die a week after publishing. New posts arrive and the older ones disappear. This is where anniversary reminders of your own blogposts come in handy.

Revisit old content

For me, the benefit is twofold: I'm reminded of the content I wrote 1, 2, 3, ... 5 years ago, and I can keep it up to date. In technology, even though the general outline of a technical post remains the same, the future you may have different views or opinions on how to handle certain situations.

This gives me a chance to re-read what I once wrote, curse at myself for doing it in that particular way, and update the blogpost with more accurate details.

Spread the content again

I mostly publish my blogposts on Twitter. Facebook is for friends and family; they don't care about the technical posts I write here.

As my follower count grows, it can be interesting to tweet about a 3-year-old blogpost again. I probably didn't have that many followers back then, so I now have a chance to share the same post with a wider audience. Mostly effortless, as I don't have to write an entirely new post.

Those older posts only deserve attention today if they're still accurate, so I have to update them to today's standards. And not every post is worth posting again. You have to be selective.

Script: blogpost anniversary reminders

To help me send reminders to both myself and, eventually, my tweeps, I wrote a little script that mails me daily whenever one of my posts gets to celebrate a yearly anniversary.

You can grab it here: post_reminders.php.

Change the parameters at the top for your own database credentials, enter your e-mail address and add the script to cron.

$ crontab -l
0 9 * * * /usr/bin/php /path/to/post_reminders.php > /dev/null 2>&1

The above sends me an e-mail every day at 9AM if one of my blogposts was published on the same day, one or more years ago.
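The core of the script is just a date comparison: same day and month, but a different year. As a standalone sketch in JavaScript (the real script below does this in SQL):

```javascript
// Anniversary predicate: same calendar day and month, but an earlier year.
function isAnniversary(postDate, today) {
  return postDate.getDate() === today.getDate()
      && postDate.getMonth() === today.getMonth()
      && postDate.getFullYear() !== today.getFullYear();
}

// A post from 23 Apr 2012 triggers a reminder on 23 Apr 2015...
console.log(isAnniversary(new Date(2012, 3, 23), new Date(2015, 3, 23))); // true
// ...but not on 24 Apr 2015.
console.log(isAnniversary(new Date(2012, 3, 23), new Date(2015, 3, 24))); // false
```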

For completeness' sake, here's the full script. It's super short and easy to change.


<?php
# Parameters
$db_host = 'localhost';
$db_user = 'your_username';
$db_pass = 'your_password';
$db_name = 'your_db';
$mailto  = 'your@email';

try {
  $dbh = new PDO('mysql:host='. $db_host .';dbname='. $db_name, $db_user, $db_pass);
} catch (PDOException $e) {
  die('Sorry, database connection could not be made. Error: '. $e->getMessage() ."\n");
}

# Published posts from this day and month, in an earlier year
$posts = "SELECT *
            FROM blog_posts
            WHERE
              DAY(post_date) = '". date("j") ."'
              AND MONTH(post_date) = '". date("n") ."'
              AND YEAR(post_date) != '". date("Y") ."'
              AND post_status = 'publish'
            ORDER BY post_date ASC";

try {
  $rows = $dbh->query($posts)->fetchAll();
} catch (PDOException $e) {
  die('Sorry, the query could not be executed. Error: '. $e->getMessage() ."\n");
}

if (count($rows) > 0) {
  $body = "The following posts were published on the same day in the past:<br /><br />\n\n";
  foreach ($rows as $row) {
    $body .= date("Y", strtotime($row['post_date'])) .": <a href='". $row['guid'] ."'>". $row['post_title'] ."</a><br />\n";
  }

  # Mail results
  $headers  = 'MIME-Version: 1.0' . "\r\n";
  $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

  mail($mailto, 'Wordpress: blog post anniversaries! ('. count($rows) .')', $body, $headers);
}

This script works for me and I've gotten value (read: pageviews) out of it. Hopefully you can get the same!

Related posts:

  1. The 2014 Blog in Numbers I've taken a small break from blogging last year (and...
  2. 2010 in numbers: statistics, statistics & statistics I figure I'll share these numbers for a change. :-)...
  3. A good week for this blog! This has been an interesting week for this blog, in...

23 Apr 2015 11:38am GMT

Frank Goossens: Music from Our Boiler Room: The Gaslamp Killer

The Gaslamp Killer killing it in the Boiler Room (silly pun) with the most eclectic of musical styles. Love it!

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Possibly related twitterless twaddle:

23 Apr 2015 11:05am GMT

Frederic Hornain: Resource optimization capabilities introduced in latest JBoss BPM Suite and JBoss BRMS releases

Resources Optimization

Organizations of all sizes and across many industries must orchestrate and plan daily business operations, such as scheduling, vehicle routing, or timetabling. Many organizations must be able to cope with changing and increasing demands on the business with a limited set of resources, and quickly adapt plans when established processes are interrupted by unanticipated resource changes.

Red Hat's business resource planning technology helps enterprises use limited resources in a more cost effective manner. Applying rules-based technology in conjunction with business resource planning provides a unique and powerful combination that allows for greater scale and adaptability. The planner is included with a subscription to JBoss BRMS at no additional cost. All of the resource planning and business rules management components are included in Red Hat's business process management offering, JBoss BPM Suite.

23 Apr 2015 9:42am GMT

22 Apr 2015

Lionel Dricot: Writing a Book? What a Strange Idea!


Regularly, readers of my blog or people attending one of my talks ask me whether I have published any books compiling the ideas I develop here.

Unfortunately, I have to answer no. And it is not in my plans.

The reason is quite simple: if I published a book, it would already be obsolete before you could even hold it in your hands.

My ideas evolve constantly. I publish posts about whatever strikes me, whatever interests me. A new post may sometimes contradict an older one. Or complement it. Each post, moreover, finds a different, unforeseen readership.

A book freezes a past moment. It gets padded out to look more serious. While a book may suit fiction or timeless experiences, it no longer suits something as fluid as ideas and reflection. If, on top of that, you want it on dead trees, distributed by a traditional publishing house, its obsolescence will only be greater. What would be the point of reading a long-form version of the ideas I had nearly a year ago?

And yet the book keeps its aura. Publishing a book makes you someone important. The media make an enormous fuss about books. A book launch is an event. Being a published author is a badge of authority. It guarantees you will be invited onto TV panels as an expert, especially if the title is catchy: "And now we hand over to Ploum, author of the acclaimed 'The Internet and Its Dangers', published by Plouc."

No matter what nonsense you wrote, no matter that your book only sold 200 copies, you are an author, you are an expert, you hold the Truth. Because any printed text represents the Truth. A blogger, even one read by tens of thousands of readers, is an amateur. Nothing like that author nobody has read except the person assigned to write the review.

This is entirely logical because, as I explained in my post "Il faudra la construire sans eux", the media belong to the generation of centralized information whose core element remains the printing press. By publishing a book, you become a medium, you join their world, they support you. In their eyes, the web is merely a promotional tool for their books, their shows or their newspapers.

If I published a book, I would, on the contrary, see it as a promotional tool for this blog! A simple gateway inviting people to read me on the web, to learn a dynamic, changing, decentralized way of thinking.

If I published a book, it would be to gain the recognition of institutions I consider obsolete and harmful. Institutions that are brakes on progress.

Deep down, it is the web that nourishes me and makes me grow. It is the web that brings me ideas and makes me think. So the web is where I want to contribute and make my modest contribution.

Me, publish a non-fiction book? Why not ask me to write it with a quill on vellum while you're at it? That would have its charm, I admit, but in the meantime I warmly encourage you to read on the web. You'll see, it's a whole new world!

The illustration is entitled "Vanité", by Pieter Claesz, and was photographed by Thomas Hawk. You might also enjoy reading "La mort de la presse ? Tant mieux !" and my techniques for reading quickly on the web ("Lire rapidement sur le web").

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few millibitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

22 Apr 2015 8:28pm GMT

21 Apr 2015

Mattias Geniar: Using JavaScript To Read L3 CPU Cache

The post Using JavaScript To Read L3 CPU Cache appeared first on ma.ttias.be.

Remarkable. And dangerous (PDF).

Side channel analysis is a remarkably powerful class of cryptanalytic attack. It lets attackers extract secret information hidden inside a secure device by analyzing the physical signals (power, radiation, heat, etc.) the device emits as it performs a secure computation [15].


The attack code itself, executes a Javascript-based cache attack, which allows it to track accesses to the DUT's last-level cache (LLC) over time. Since this single cache is shared by all CPU cores and by all users, processes and protection rings, this information can provide the attacker with a detailed knowledge of the user and the system under attack.

The practical details and proof-of-concept are being withheld until all browsers have had a chance to push an update and patch this problem.

Using JavaScript to read data from the shared L3 CPU cache. Impressive.

The full research paper is available online: The Spy in the Sandbox - Practical Cache Attacks in Javascript.

Related posts:

  1. A Recipe For Disaster: XSS, Google-Analytics.com And DNS Cache Poisoning Here's a scary idea that popped up in the evil...
  2. Remote Code Execution Through Intel CPU Bugs Kris Kaspersky, who isn't related to the famous Anti-Virus company,...
  3. When Private Browsing Isn't Private On iOS: HTML5 And AirPlay Private Browsing: the illusion of privacy. This applies to mobile...

21 Apr 2015 9:05pm GMT

Dieter Adriaenssens: Gorges du Tarn 2015

It was an amazing week, climbing in Gorges du Tarn with Bleau Climbing team during the second week of the Easter holiday.
Beautiful weather, nice people, good atmosphere, a lot of climbing, some personal bests and climbing improvements on both a physical and mental level.

Some impressions:

Great trip, looking forward to the next one!

21 Apr 2015 3:03pm GMT

Mattias Geniar: Magento eCommerce PHP Remote Code Execution

The post Magento eCommerce PHP Remote Code Execution appeared first on ma.ttias.be.

The fun just never ends. A remote code execution exploit was found on February 9th, 2015.

Check Point released a blogpost yesterday with more details on that particular vulnerability.

Check Point researchers recently discovered a critical RCE (remote code execution) vulnerability in the Magento web e-commerce platform that can lead to the complete compromise of any Magento-based store, including credit card information as well as other financial and personal data, affecting nearly two hundred thousand online shops.
Analyzing the Magento Vulnerability

The patch to the Remote Code Execution vulnerability is available on the Magento site; Magento Downloads, patch SUPEE-5344.


Magento's Open Source Community Policy

One very annoying part of the Open Source edition of Magento is that the downloads available on the site do not contain the patches yet. You have to download the latest release, and then separately download and apply every available patch.

It's very common for users to just download the latest release, thinking it should be the up-to-date one, patches included. It boggles my mind why Magento would willingly distribute unsafe code this way, assuming users will just figure out that they need to download the patches separately.

Added to that, version numbers don't increase when patches are applied. Seriously, it's 2015, Magento, get your act together. This is a very lame tactic to push users toward the commercially supported version.

The patch

If you're wondering whether you should apply the patch to your Magento installation or not, let me answer with a very clear yes:

The vulnerability is actually comprised of a chain of several vulnerabilities that ultimately allow an unauthenticated attacker to execute PHP code on the web server.

Since the patch is behind a very annoying login-wall, I've mirrored it here: PATCH_SUPEE-5344_CE_1.8.0.0_v1-2015-02-10-08-10-38.sh

The patch contains a bunch of whitespace changes, but the actual fix is this:

--- app/code/core/Mage/Admin/Model/Observer.php
+++ app/code/core/Mage/Admin/Model/Observer.php
@@ -43,6 +43,10 @@ class Mage_Admin_Model_Observer
         $session = Mage::getSingleton('admin/session');
         /** @var $session Mage_Admin_Model_Session */
+        /**
+         * @var $request Mage_Core_Controller_Request_Http
+         */
         $request = Mage::app()->getRequest();
         $user = $session->getUser();
@@ -56,7 +60,7 @@ class Mage_Admin_Model_Observer
         if (in_array($requestedActionName, $openActions)) {
         } else {
-            if($user) {
+            if ($user) {
             if (!$user || !$user->getId()) {
@@ -67,13 +71,14 @@ class Mage_Admin_Model_Observer
                     $user = $session->login($username, $password, $request);
                     $request->setPost('login', null);
-                if (!$request->getParam('forwarded')) {
+                if (!$request->getInternallyForwarded()) {
+                    $request->setInternallyForwarded();
                     if ($request->getParam('isIframe')) {
                         $request->setParam('forwarded', true)
-                    } elseif($request->getParam('isAjax')) {
+                    } elseif ($request->getParam('isAjax')) {
                         $request->setParam('forwarded', true)
diff --git app/code/core/Mage/Core/Controller/Request/Http.php app/code/core/Mage/Core/Controller/Request/Http.php
index 368f392..123e89e 100644
--- app/code/core/Mage/Core/Controller/Request/Http.php
+++ app/code/core/Mage/Core/Controller/Request/Http.php
@@ -76,6 +76,13 @@ class Mage_Core_Controller_Request_Http extends Zend_Controller_Request_Http
     protected $_beforeForwardInfo = array();
+     * Flag for recognizing if request internally forwarded
+     *
+     * @var bool
+     */
+    protected $_internallyForwarded = false;
+    /**
      * Returns ORIGINAL_PATH_INFO.
      * This value is calculated instead of reading PATH_INFO
      * directly from $_SERVER due to cross-platform differences.
@@ -530,4 +537,27 @@ class Mage_Core_Controller_Request_Http extends Zend_Controller_Request_Http
         return false;
+    /**
+     * Define that request was forwarded internally
+     *
+     * @param boolean $flag
+     * @return Mage_Core_Controller_Request_Http
+     */
+    public function setInternallyForwarded($flag = true)
+    {
+        $this->_internallyForwarded = (bool)$flag;
+        return $this;
+    }
+    /**
+     * Checks if request was forwarded internally
+     *
+     * @return bool
+     */
+    public function getInternallyForwarded()
+    {
+        return $this->_internallyForwarded;
+    }
diff --git lib/Varien/Db/Adapter/Pdo/Mysql.php lib/Varien/Db/Adapter/Pdo/Mysql.php
index 7b903df..a688695 100644
--- lib/Varien/Db/Adapter/Pdo/Mysql.php
+++ lib/Varien/Db/Adapter/Pdo/Mysql.php
@@ -2651,10 +2651,6 @@ class Varien_Db_Adapter_Pdo_Mysql extends Zend_Db_Adapter_Pdo_Mysql implements V
         $query = '';
         if (is_array($condition)) {
-            if (isset($condition['field_expr'])) {
-                $fieldName = str_replace('#?', $this->quoteIdentifier($fieldName), $condition['field_expr']);
-                unset($condition['field_expr']);
-            }
             $key = key(array_intersect_key($condition, $conditionKeyMap));
             if (isset($condition['from']) || isset($condition['to'])) {

Please patch!

21 Apr 2015 9:33am GMT

20 Apr 2015

Mattias Geniar: Nginx Open Sources TCP Load Balancing

The post Nginx Open Sources TCP Load Balancing appeared first on ma.ttias.be.

A move we can only applaud.

Stream: port from NGINX+.

diffstat 20 files changed, 6079 insertions(+), 2 deletions(-) [+]
Changeset commit: changeset 6115:61d7ae76647d

A cryptic commit message for anyone who doesn't follow Nginx. But here's what it means: the TCP load balancing present in Nginx+ is now available in Nginx Open Source.

This kind of load balancing was reserved for paying Nginx+ customers, until now.

TCP Load Balancing

NGINX Plus terminates TCP connections, makes a load-balancing decision and then establishes a connection to the upstream server, relaying data between the client and server on demand. NGINX Plus delivers high availability using inline and synthetic health checks, slow-start for recovered servers, concurrency control, and the ability to designate servers as active, backup, or down.

Nginx+ Load Balancing

TCP load balancing allows setups to drop HAProxy or an alternative TCP load balancer and use Nginx for all of it. Previously, Nginx could do HTTP, POP3 and IMAP load balancing, but always within the protocol. Now it will support plain TCP connections as well.
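The commit itself doesn't ship documentation, but based on the Nginx+ docs for the stream module, a minimal TCP load balancing config would look something like this (my own sketch, with made-up backend addresses):

```nginx
stream {
    upstream backend_tcp {
        server 10.0.0.1:12345;
        server 10.0.0.2:12345;
    }

    server {
        listen 12345;
        proxy_pass backend_tcp;
    }
}
```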

More info on the TCP load balancing in Nginx+ can be found on the announcement of Nginx R6: Announcing NGINX Plus Release 6 with Enhanced Load Balancing.

A great move to make this Open Source; I can't wait to see it made available in their RPM and DEB packages.

Would I be too optimistic in hoping that the Nginx+ Application Health Checks will also be ported to Nginx Open Source? Because that would be awesome and would eliminate Varnish as an advanced health-check proxy for backends in some of my configs.

Related posts:

  1. Nginx: nginx: [warn] load balancing method redefined You may receive the following warning when reloading/configtesting an Nginx...
  2. Nginx HTTP/2 Support Coming Late 2015 As anticipated. We're pleased to announce that we plan to...
  3. Nginx Getting JavaScript Scripting Engine I missed the original hint in October 2014, so this...

20 Apr 2015 4:22pm GMT

19 Apr 2015

Wouter Verhelst: Youn Sun Nah 5tet: Light For The People

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the first world war, it has a museum dedicated to that subject that I remembered going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with jazz before, but it had always been something used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years: in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it was lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I hadn't listened to it in quite a while.

But that came to an end today. The record I thought I'd lost wasn't lost at all; it was just in a weird place, and while cleaning yesterday, I found it sitting among a bunch of old stuff I was going to throw out. Putting the record on today made me realize again how good it really is, and I thought I might want to see if she was still active, and whether she might have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she'd also become a lot more known in the Jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

19 Apr 2015 9:25am GMT

17 Apr 2015

Mattias Geniar: Double-clicking On The Web

The post Double-clicking On The Web appeared first on ma.ttias.be.

Here's a usability feature for the web: disable double-clicks on links and form submits.

Before you think I'm a complete idiot, allow me to talk some sense into the idea.

The Double-click Outside The Web

Everywhere in the operating system, whether it's Windows or Mac OS X, the default way to navigate between directories is double-clicking them. We're trained to double-click everything.

Want to open an application? Double-click the icon. Want to open an e-mail in your mail client? Double-click the subject. Double-clicks everywhere.

Except on the web. The web is a single-click place.

Double The Click, Twice The Fun

We know we should only single-click a link. We know we should only click a form submit once. But sometimes, we double-click. Not because we do so intentionally, but because our brains are just hardwired to double-click everything.

For techies like us, a double-click happens by accident. It's an automated double-click, one we don't really think about. One we didn't mean to do.

For lesser-techies, also known as the common man or woman, double-clicks happen all the time. The user doesn't have a technical background, so they don't know the web works with single-clicks. Or perhaps they do, and don't see the harm in double-clicking.

But default browser behaviour is to accept user input. However foolish it may be.

If you accidentally double-click a form submit, you submit it twice. It's that simple.

    - - [18/Apr/2015:00:37:06 +0400] "POST /index.php HTTP/1.1" 200 0
    - - [18/Apr/2015:00:37:07 +0400] "POST /index.php HTTP/1.1" 200 0

If you double-click a link, it opens twice.

    - - [18/Apr/2015:00:37:06 +0400] "GET /index.php HTTP/1.1" 200 9105
    - - [18/Apr/2015:00:37:07 +0400] "GET /index.php HTTP/1.1" 200 9104

The problem is sort of solved with fast servers. If the page loads fast enough, the next page may already be downloading/rendering, so the second click of that double-click is hitting some kind of void, the limbo in between the current and the next page.

For slower servers, that just take more time to generate a response, a double-click would still happen and re-submit or re-open a link.


I recently filed a feature request with our devs for a similar problem.

If you accidentally (and we've all done this) double-click a form submit, you submit it twice. That means whatever action was requested, is executed by the server twice.

The client-side fix is relatively simple: disable the form's submit button once the first submit has been registered. A short jQuery snippet can take care of that for you.

        // After the first submit fires, wait briefly so the request still
        // goes out, then disable inputs and links to swallow the repeat.
        $('form').on('submit', function() {
            setTimeout(function() {
                $('input').attr('disabled', 'disabled');
                $('a').attr('disabled', 'disabled');
            }, 50);
        });

Server-side, a fix could be to implement some kind of rate limiting or double-submit protection within a particular timeframe. That's a much harder problem to solve on the server than in the browser.
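As a rough sketch of what such server-side protection could look like (the function name, the fingerprint scheme and the 2-second window here are my own assumptions, not an existing API): fingerprint each submit by client and payload, and drop an identical submit that arrives within the window.

```javascript
// Hypothetical double-submit guard: remember when each (client, payload)
// pair was last accepted, and reject an identical repeat within the window.
const WINDOW_MS = 2000;
const recentSubmits = new Map(); // fingerprint -> timestamp of last accept

function acceptSubmit(clientIp, formBody, now) {
  const fingerprint = clientIp + "|" + formBody;
  const last = recentSubmits.get(fingerprint);
  if (last !== undefined && now - last < WINDOW_MS) {
    return false; // same submit within 2 seconds: treat as a double-click
  }
  recentSubmits.set(fingerprint, now);
  return true;
}
```

In a real application the map would also need pruning (or a shared TTL store when running multiple web servers), which is part of why this is harder to get right server-side.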

It's 2015, why is this even a thing to consider?

Proposed Solution

I can not think of a single reason why something like a form submit should have to be executed twice as a result of a double-click.

For a slow-responding server, it's reasonable for a user to hit submit again after more than a few seconds have passed without feedback. Without any visual indication that the request is still being processed, the user is left with the impression that the form submit did not work.

So the user submits again, thinking they must have made a mistake on the first attempt. But if the browser registers the same form submit twice in less than 2 seconds, surely that must be a mistake and count as an accidental double-click?

Why should every web service implement a double-click protection, either client-side or server-side, and reinvent the wheel? Wouldn't this make for a great browser feature?

What if a double-click is blocked by default, and can be enabled again by setting a new attribute on the form?

<form action="/something.php" allowmultiplesubmits>

Setting the allowmultiplesubmits attribute causes the browser to allow multiple submits to the same form in the same page, and by default the browser has some kind of flood/repeat/double-click protection to prevent this.
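To make that concrete, here's a sketch of the decision a browser (or a polyfill) could make on each submit. The attribute name and the opt-out behaviour follow the proposal above; the function name, the `lastSubmit` bookkeeping and the 2-second threshold are assumptions for illustration.

```javascript
// Sketch of the proposed default: block a repeat submit of the same form
// within 2 seconds, unless the form opted out via "allowmultiplesubmits".
function allowSubmit(form, now) {
  if (form.hasAttribute("allowmultiplesubmits")) {
    return true; // author explicitly allows repeat submits
  }
  if (form.lastSubmit !== undefined && now - form.lastSubmit < 2000) {
    return false; // looks like an accidental double-click: swallow it
  }
  form.lastSubmit = now; // remember when this form was last submitted
  return true;
}
```

A browser would call this from its form-submission code; a polyfill could call it from a capturing `submit` listener and `preventDefault()` when it returns false.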

Maybe I'm overthinking it and this isn't an issue. But anyone who's active on the web has, at one point, accidentally double-clicked. And I think we've got all the technology available to fix that, once and for all.

The post Double-clicking On The Web appeared first on ma.ttias.be.

Related posts:

  1. Server Side Push in HTTP/2 With nghttp2 At this pace of development, nghttp2 is a project to...
  2. HTTP/1 vs HTTP/2 Page Loading An interesting proof-of-concept: http2.golang.org. Especially with simulated latency, HTTP/2 shows...
  3. The Surprising Mixed Content Handling on SSL/HTTPS Enabled Websites I already mentioned mixed content warnings as one of the...

17 Apr 2015 8:52pm GMT

Philip Van Hoof: The zoo: birth control versus slaughter

Michel Vandenbosch versus Dirk Draulans: having once been a vegetarian for some 15 years myself, I was pleased today with this well-prepared and nicely balanced debate. My thanks to the editors of Terzake.

I agreed with both gentlemen. That is what made this a worthy philosophical discussion: birth control versus feeding surplus animals to the lions, plus the use and purpose of well-run zoos. That use is clear to me: education for foolish mankind (its children, in the hope that the next generation will be less foolish).

Although I agreed with both, I am currently more in favour of feeding surplus animals to the lions than of birth control for animal species, endangered or not. That, with great respect for Vandenbosch's view, seemed to me to be Draulans' position. Ethically I understood Vandenbosch too: isn't it better to practice birth control and so avoid the suffering of a slaughter?

I choose Draulans' position because it most closely mimics the real world. I also think it was very good that the zoo showed the slaughter of the giraffe to the children. Because that is reality. Our children should see reality. We must model reality with our minds. No euphemisms like calling the killing of a giraffe euthanasia. Let us raise our children with reality.

17 Apr 2015 7:40pm GMT