03 Aug 2015

Planet Grep

Lionel Dricot: Printeurs 35


This is post 35 of 35 in the Printeurs series

Nellio, Junior and Eva have squeezed themselves into intertube capsules taking them to the mysterious coordinates sent by Max.

The twists and turns of the human mind are impenetrable. While my body is uncomfortably compressed into a cramped space holding barely enough air to breathe, I can't help philosophizing.

How to explain that this part of the intertube is already operational, when a government project, almost by definition, comes with considerable delays?

The answer requires a certain mental journey. Apart from getting themselves elected, the role of the politicians who make up the government is to make sure that public money evaporates as quickly as possible.

Of course, nowadays there is no longer any question of direct embezzlement. The risk of getting caught and convicted would be far too great. A minimum of subtlety has become necessary.

As soon as a little public money becomes available, the politician will spend it in whatever way maximizes his visibility on the networks. Inaugurating the very first intertube link seems, on that score, an excellent idea. But the most important thing is surely to obtain, legally, a percentage of that spending. And what could be easier than financing major public works, an intertube link for example, using as an overpaid contractor a company in which you happen to be a shareholder? Or one that will hire you as a consultant after your well-deserved political retirement?

The fact that I am being jolted along this intertube therefore means that somewhere nearby a politician at the end of his career is emptying the coffers. By announcing an intertube station he will leave behind the image of a visionary, enterprising manager. His successor, on the other hand, will inherit the unpopularity that comes with a catastrophic budget situation.

As I am carried along at hundreds of kilometres per hour in absolute darkness, I can't help feeling indignant. How can our system of government be this corrupt?

But deep down, does it even matter anymore? Elections are experienced as entertainment, halfway between sports competitions and the series so dear to the télé-pass. The private police stations impose their own rules, and nobody really pays attention anymore to the laws the politicians debate, laws which in any case regulate fields in which they are utterly incompetent. We settle for paying them a tax in the sole hope that they will leave us alone. Those taxes fund an administration that now runs in a closed circuit: the various ministries work for one another, completely disconnected from the rest of the world.

In the sealed darkness of my projectile coffin, the absurdity of our society suddenly strikes me like lightning. I feel as though I am discovering the world, as though I were a newborn, an alien.

In an automated world, work no longer brings value but, on the contrary, inefficiency. From a quality it becomes a defect. Without a change of economic paradigm, value is no longer created, it dissipates. The only way to get rich is therefore to become an evaporation point yourself. Either by collecting value and pretending to redistribute it in the name of the public good, which is what politics does, or by convincing the public to buy some good or service from you, however useless it may be.

So it is no longer about being useful, but about convincing the world that you are. Appearance has taken precedence over essence, giving birth to advertising! Advertising! The central link! That is why I had never managed to take the necessary step back. Advertising formats us, keeps us from concentrating. Its omnipresence turns the brain into a mere receiver. It took this cure without contact lenses and this sensory isolation for my neurons to finally start working again.

Faced with this model of society, the printeur is the ultimate threat. By laying bare the uselessness of most current jobs, the printeur will push workers to question the usefulness of everyone, including their leaders. The moral rigidity that makes the télé-pass pariahs, subhumans, slackers, is only possible as long as they are a minority and as long as they are still offered a hope: that of one day becoming useful. If that hope disappears, if the competition between them no longer has any reason to exist, if the majority of the population becomes télé-pass…

I shiver. Never before had I considered the societal consequences of the printeur. Georges Farreck's motivations now seem less obscure to me: after all, despite his wealth and his fame, he has never been anything but a pawn, an advertising tool, a luxury sandwich-board man. The printeurs would inevitably have been invented and will end up, whatever happens, upending the social order. Might as well be on the right side…

I…

A jolt! I half knock myself out against the wall of my container before realizing that all vibration, all change of direction has ceased. I have most likely arrived at my destination.

Pushing open the hatch, I extract myself and set foot in a short, well-lit corridor. Not the slightest trace of Eva, who should nevertheless have arrived before me. She can only have left through that shiny red door. Everything is incredibly clean. That characteristic smell of new buildings hangs in the air.

A noise. Junior has just arrived. I open the hatch of his capsule and am immediately greeted by a scream. He is covered in blood and is clutching his right hand, moaning.
- My fingers! My fingers!
While hoisting him onto the corridor floor, I examine his wound. The fingers of his right hand have all been severed cleanly at the metacarpals. I shudder in horror. He made the whole journey in the dark, screaming and soaked in his own blood!
- What happened?
- The launch was too fast, I didn't have time to pull my hand in.
Tearing off a piece of my t-shirt, I make him a makeshift bandage.
- Damn lousy biological body! None of this would have happened with an avatar. And I won't even be able to type on a keyboard anymore!
- Don't you have a pill on you that could work as a painkiller?
- In my right pocket… Some tiroflan… Argh, it hurts!
Going through his trouser pockets, I quickly grab two orange capsules and make him swallow them.

His breathing becomes slower, more spaced out.
- Come on! We have to find a way to treat this a little better.

Grabbing him by the waist, I help him walk and we head for the red door. As we approach, it opens automatically, without the slightest sound.

The room we find ourselves in is filled with electronic measuring equipment and computer screens. I jump and almost let out a cry of terror. On a table lies Eva, completely naked, eyes wide open, her gaze empty. She does not make the slightest movement.

Standing between her legs, a man in a white coverall, his trousers around his ankles, is conscientiously raping her.

Photo by Glen Scott.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


03 Aug 2015 2:24pm GMT

02 Aug 2015

Planet Grep

Mattias Geniar: How To Increase Amount of Disk inodes in Linux

The post How To Increase Amount of Disk inodes in Linux appeared first on ma.ttias.be.

It doesn't happen often, but at times you may run out of inodes on a Linux system.

To find your current inode usage, run df -i.

$ df -i
Filesystem                Inodes   IUsed     IFree  IUse%   Mounted on
/dev/mapper/centos-root 19374080   19374080  0      100%   /

Out of the available 19,374,080 inodes, none were free. This is pretty much the equivalent of a full disk, except it doesn't show in terms of capacity but in terms of inodes.

If you're confused on what an inode exactly is, Wikipedia has a good description.

In a Unix-style file system, an index node, informally referred to as an inode, is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.
Wikipedia: inode

A disk with 0 available inodes is probably full of very small files, somewhere in a specific directory (applications, tmp-files, pid files, session files, ...). Each file uses (at least) 1 inode. Many million files would use many million inodes.
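
If you need to track down where those inodes went, a rough one-liner like this can help. It is only a sketch (not from the original post): it counts files per top-level directory, so adjust the starting path to your situation.

$ # count files per top-level directory to locate the inode hog
$ find / -xdev -type f | awk -F/ '{ print $2 }' | sort | uniq -c | sort -n | tail

The directory with the largest count is usually the culprit (think session files, tmp files or mail queues).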

If your disk's inodes are all used, how do you increase the count? The tricky answer is: you probably can't.

The number of inodes available is decided when the filesystem is created on the partition. For instance, a default EXT3/EXT4 filesystem has a bytes-per-inode ratio of one inode for every 16384 bytes (16 KB).

A 10GB partition would then have around 622,592 inodes, and a 100GB partition around 5,976,883 inodes (taking into account the reserved space for super-users/journalling).

Do you want to increase the number of inodes? Either increase the capacity of the disk entirely (Guide: Increase A VMware Disk Size (VMDK) LVM), or re-format the disk using mkfs.ext4 -i to manually overwrite the bytes-per-inode ratio.
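
For example, re-creating a filesystem with a denser inode ratio might look like the sketch below. Keep in mind that mkfs wipes the filesystem, so this only makes sense on a new or freshly backed-up disk, and /dev/sdb1 is a made-up device name.

$ # WARNING: destroys all data on the target device, example values only
$ mkfs.ext4 -i 4096 /dev/sdb1    # one inode per 4096 bytes instead of the default 16384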

As usual, the Archwiki has a good explanation on why we don't just make the default inode number 10x higher.

For partitions with size in the hundreds or thousands of GB and average file size in the megabyte range, this usually results in a much too large inode number because the number of files created never reaches the number of inodes.

This results in a waste of disk space, because all those unused inodes each take up 256 bytes on the filesystem (this is also set in /etc/mke2fs.conf but should not be changed). 256 * several millions = quite a few gigabytes wasted in unused inodes.
Archwiki: ext4

You may be able to create a new partition if you have spare disks/space in your LVM and choose a filesystem that's better suited to handling many small files, like ReiserFS.

The post How To Increase Amount of Disk inodes in Linux appeared first on ma.ttias.be.

Related posts:

  1. Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7 This guide will explain how to grow an XFS filesystem...
  2. Understanding the /bin, /sbin, /usr/bin and /usr/sbin Split A short but illuminating read [pdf]. When their root filesystem...
  3. Increase open-files-limit in MariaDB on CentOS 7 with systemd Gone are the days where simply changing /etc/my.cnf would be...

02 Aug 2015 7:52pm GMT

Mattias Geniar: How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7

The post How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7 appeared first on ma.ttias.be.

This guide will show you how to add an extra IP address to an existing interface in Red Hat Enterprise Linux / CentOS 7. It's slightly different than CentOS 6, so there may be some confusion if you're trying this on a CentOS 7 system for the first time.

You may be used to adding a new network-scripts file in /etc/sysconfig/network-scripts/, but you'll find that doesn't work in RHEL / CentOS 7 as you'd expect. Here's what a config would look like in CentOS 6:

$ cat ifcfg-eth0:0
NAME="eth0:0"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="10.50.10.5"
NETMASK="255.255.255.0"

On CentOS 7, after a network reload with a config like that, the primary IP address will be removed from the server and only the IP address from the alias interface will remain. That's not good.

The simplest/cleanest way to add a new IP address to an existing interface in CentOS 7 is to use the nmtui tool (Text User Interface for controlling NetworkManager).

$ nmtui

centos7_nmtui

Once nmtui is open, go to Edit a network connection and select the interface you want to add an alias to.

nmtui_select_interface

Click Edit and tab your way through to Add to add extra IP addresses.

nmtui_add_alias_interface

Save the configs and the extra IP will be added.

If you check the text-configs that have been created in /etc/sysconfig/network-scripts/, you can see how nmtui has added the alias.

$ cat /etc/sysconfig/network-scripts/ifcfg-ens192
...
# Alias on the interface
IPADDR1="10.50.23.11"
PREFIX1="32"

If you want, you can modify the text file, but I find using nmtui to be much easier.
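
If you prefer a one-shot command over the text UI, recent NetworkManager versions can presumably do the same with nmcli. A sketch using the interface and IP from the example above (the connection name may differ from the device name on your system, and the "+" syntax requires a fairly recent nmcli):

$ nmcli connection modify ens192 +ipv4.addresses 10.50.23.11/32
$ nmcli connection up ens192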

The post How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7 appeared first on ma.ttias.be.

Related posts:

  1. Re-enabling IPv6 support on CentOS kernels after update A Kernel update on one box led to the following...
  2. Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7 This guide will explain how to grow an XFS filesystem...
  3. Install VMware Tools via the Yum Package Manager on RHEL and CentOS The most common way to install the VMware Tools is...

02 Aug 2015 7:29pm GMT

Mattias Geniar: Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7

The post Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7 appeared first on ma.ttias.be.

This guide will explain how to grow an XFS filesystem once you've increased the underlying storage.

If you're on a VMware machine, have a look at this guide to increase the block device, partition and LVM volume first: Increase A VMware Disk Size (VMDK) Formatted As Linux LVM without rebooting. Once you reach the resize2fs command, return here, as that only applies to EXT2/3/4.

To see the info of your block device, use xfs_info.

$ xfs_info /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=1210880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=4843520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Once the volume group/logical volume has been extended (see this guide for increasing lvm), you can expand the partition using xfs_growfs.
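
As a minimal sketch of that LVM step, assuming the same centos-root volume and free extents available in the volume group:

$ # give all remaining free space in the volume group to the logical volume
$ lvextend -l +100%FREE /dev/mapper/centos-root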

$  xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=1210880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=4843520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

The increase will happen in near-realtime and probably won't take more than a few seconds.

Using just xfs_growfs, the filesystem will be grown to the maximum available size. If you only want to grow it to a specific size, pass the target size (in filesystem blocks) with the -D option.
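
For example, a hypothetical grow to a fixed size (the -D argument is the desired total size in filesystem blocks, 4096 bytes each in the output above):

$ xfs_growfs -D 6000000 /dev/mapper/centos-root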

If you don't see any increase in disksize using df, check this guide: Df command in Linux not updating actual diskspace, wrong data.

The post Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7 appeared first on ma.ttias.be.

Related posts:

  1. How To Increase Amount of Disk inodes in Linux It doesn't happen often, but at times you may run...
  2. Increase open-files-limit in MariaDB on CentOS 7 with systemd Gone are the days where simply changing /etc/my.cnf would be...
  3. How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7 This guide will show you how to add an extra...

02 Aug 2015 7:09pm GMT

Mattias Geniar: Apache 2.4: ProxyPass (For PHP) Taking Precedence Over Files/FilesMatch In Htaccess

The post Apache 2.4: ProxyPass (For PHP) Taking Precedence Over Files/FilesMatch In Htaccess appeared first on ma.ttias.be.

This one had me scratching my head for a while. If you're writing a PHP-FPM config for Apache 2.4, don't use the ProxyPassMatch directive to pass PHP requests to your FPM daemon.

This will cause you headaches:

# don't
<IfModule mod_proxy.c>
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/
</IfModule>

Instead, you'll want to use a FilesMatch block and refer those requests to a SetHandler that passes everything to PHP-FPM.

# do this instead
# Use SetHandler on Apache 2.4 to pass requests to PHP-FPM
<FilesMatch \.php$>
  SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>

Why is this? Because the ProxyPassMatch directives are evaluated first, before the FilesMatch configuration is applied.

That means if you use ProxyPassMatch, you can't deny/allow access to PHP files and can't manipulate your PHP requests in any way anymore.
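
For example, with the SetHandler approach a plain FilesMatch rule in your .htaccess or vhost still works as expected. A sketch with made-up file names:

# Deny direct access to a few sensitive PHP files, while the rest still goes to FPM
<FilesMatch "^(config|upgrade)\.php$">
  Require all denied
</FilesMatch>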

So for passing PHP requests to an FPM daemon, you'd want to use FilesMatch + SetHandler, not ProxyPassMatch.

The post Apache 2.4: ProxyPass (For PHP) Taking Precedence Over Files/FilesMatch In Htaccess appeared first on ma.ttias.be.

Related posts:

  1. CentOS, Apache & mod_fcgid: IPCCommTimeout not working as expected If you're running Apache with the mod_fcgid module to let...
  2. Porting standard Apache's mod_rewrite rules to Nginx Most webframeworks will provide you with a .htaccess file that...
  3. Combine Apache's HTTP authentication with X-Forwarded-For IP whitelisting in Varnish Such a long title for a post. If you want...

02 Aug 2015 8:05am GMT

01 Aug 2015

Planet Grep

Philip Van Hoof: Gebruik maken van verbanden tussen metadata

I recently claimed somewhere that a system which collects relationships about content (where, when, with whom, why) instead of mere metadata (title, date, author, etc.) could offer a solution to a problem that users of digital media will increasingly face: they will have collected so much material that they can no longer find anything in it quickly enough.

I think relationships should carry more weight than mere metadata, because it is through relationships that we humans store information in our brains. Not by means of facts (title, date, author, etc.) but by means of relationships (where, when, with whom, why).

As a hypothetical example, I wanted to find a video that I had watched with Erika while on holiday with her, and which she had marked as really great.

What are the relationships we need to collect? This is a simple analysis exercise: just underline the nouns and rewrite the problem:

So let me pour this use case into RDF and solve it with SPARQL. This is what we need to collect. I'll write it in pseudo TTL. Imagine for a moment that this ontology actually exists:

<erika> a Person ; name "Erika" .
<vakantiePlek> a PointOfInterest ; title "De vakantieplek" .
<filmA> a Movie ; lastSeenAt <vakantiePlek> ; sharedWith <erika>; title "The movie" .
<erika> likes <filmA> .

This is then the SPARQL query:

SELECT ?m { ?v a Movie ; title ?m . ?v lastSeenAt ?p . ?p title ?pt . ?v sharedWith <erika> . <erika> likes ?v . FILTER (?pt LIKE '%vakantieplek%') }

I leave it as an exercise to the reader to convert this to the Nepomuk ontology (I believe it can handle this whole use case). Then you can test it on your N9 or your standard GNOME desktop with the tracker-sparql tool. I bet it works. :-)
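
Running it on such a device could, presumably, be as simple as the sketch below. The query is still the pseudo-query from above, so it will only return results once it has been mapped onto the real ontology:

$ tracker-sparql -q "SELECT ?m { ?v a Movie ; title ?m . ?v lastSeenAt ?p . ?p title ?pt . ?v sharedWith <erika> . <erika> likes ?v . FILTER (?pt LIKE '%vakantieplek%') }"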

The big problem is indeed acquiring the data for the relationships. Writing the query is fairly easy. Settling on the ontology and agreeing on it with all parties, somewhat less so. Collecting the information is the real difficulty.

Oh, and once collected, keeping the information safe without my privacy being violated. That seems simply impossible nowadays. Unfortunately.

In any case, there is no need for a supercomputer or the like to solve this centrally (with AI and the whole horribly complex hype machinery of today).

Every small device can solve this kind of use case on its own. The inserts and query above are easy to handle. SQLite does this in a few milliseconds with a denormalized schema. Your fancy hipster NoSQL solution probably does too.

That is because the weight of data acquisition lies on the relationships rather than on the facts.

01 Aug 2015 2:48pm GMT

31 Jul 2015

Planet Grep

Frank Goossens: I Am A Cyclist, And I Am Here To Fuck You Up

I Am A Cyclist, And I Am Here To Fuck You Up

It is morning. You are slow-rolling off the exit ramp, nearing the end of the long-ass commute from your suburban enclave. You have seen the rise of the city grow larger and larger in your windshield as you crawled through sixteen miles of bumper-to-bumper traffic. You foolishly believed that, now that you are in the city, your hellish morning drive is coming to an end.

Just then! I emerge from nowhere to whirr past you at twenty-two fucking miles per hour, passing twelve carlengths to the stoplight that has kept you prisoner for three cycles of green-yellow-red. The second the light says go, I am GOING, flying, leaving your sensible, American, normal vehicle in my dust.


31 Jul 2015 7:34am GMT

30 Jul 2015

Planet Grep

Joram Barrez: The Activiti Performance Showdown Running on Amazon Aurora

Earlier this week, Amazon announced that Amazon Aurora is generally available on Amazon RDS. The Aurora website promises a lot: Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better […]

30 Jul 2015 3:09pm GMT

Frank Goossens: Vouwfiets-dilemma’s 2015

After 5 years of folding-bike riding, an estimated 23,000 km and 3 new handlebar hinges (!), I have finally replaced my Dahon Vitesse D7HG. I did wonder whether I should go for a Brompton after all, but even the base model costs nearly twice as much, and taking into account the options I would have to pay extra for, my cycling mileage, the terrain (Brussels is demanding for both bike and rider) and my … wear-inducing riding style, I really don't see myself getting that extra investment back.

Old (2010) and new (2015), together for a moment

So a new Dahon Vitesse D7HG it is (and yes, again with that Shimano Nexus 7-speed internal hub; who on earth would still want to ride around with a derailleur?), but the doubting didn't stop there: buy it online for more than 20% less (!), or choose the bike shop around the corner. It became the bike shop; for repairs under warranty (handlebar hinges, for example) you have to ship the bike back to the online store and you lose it for several weeks. And for ordinary repairs, over the past 5 years (and before that, with my other bikes) they have always helped me quickly, well and cheaply despite being busy. No, that 20% investment in the best after-sales service (and in the local economy) I will definitely earn back.


30 Jul 2015 7:21am GMT

29 Jul 2015

Planet Grep

Mattias Geniar: Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History)

The post Why We're Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on ma.ttias.be.

WordPress offers an API that can list the PHP versions used in the wild. It shows some interesting numbers that warrant some extra thoughts.

Here are the current statistics on PHP versions used in WordPress installations. The example uses jq for JSON formatting at the CLI.

$ curl http://api.wordpress.org/stats/php/1.0/ | jq '.'
{
  "5.2": 13.603,
  "5.3": 32.849,
  "5.4": 40.1,
  "5.5": 9.909,
  "5.6": 3.538
}

Two versions stand out: PHP 5.3 is used in 32.8% of all installations, PHP 5.4 on 40.1%.
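
Combined, that's the 72.9% mentioned below; you can pull the number straight from the same API call. A small sketch reusing jq from above:

$ curl -s http://api.wordpress.org/stats/php/1.0/ | jq '."5.3" + ."5.4"'
72.949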

Both of these versions are end of life. Only PHP 5.4 still receives security updates [2], and only until mid-September of this year. No more bug fixes; that's 1.5 months left on the counter.

But if they're both considered end of life, why do they still account for 72.9% of all WordPress installations?

Prologue: Shared Hosting

These stats are gathered by WordPress anonymously. Since most of the WordPress installations are on shared hosting, it's safe to assume they are done once, never looked at again. It's a good thing WordPress can auto-update, or the web would be doomed.

There are of course WordPress installations on custom servers, managed systems, etc., but those will only account for a small percentage of all WordPress installations. It's important to keep in mind that the rest of these numbers apply mostly to shared hosting only.

PHP Version Support

Here's a quick history of relevant PHP versions, meaning 5.0 and upwards. I'll ignore the small percentage of sites still running on PHP 4.x.

Version | Released            | End of life          | Total duration
5.0     | July 13th, 2004     | September 5th, 2005  | 419 days
5.1     | November 24th, 2005 | August 24th, 2006    | 273 days
5.2     | November 2nd, 2006  | January 6th, 2011    | 1526 days
5.3     | June 30th, 2009     | August 14th, 2014    | 1871 days
5.4     | March 1st, 2012     | September 14th, 2015 | 1292 days
5.5     | June 20th, 2013     | July 10th, 2016      | 1116 days
5.6     | August 28th, 2014   | August 28th, 2017    | 1096 days

It's no wonder we're still seeing PHP 5.3 in the wild: the version has been supported for more than 5 years. That means a lot of users will have installed WordPress on a PHP 5.3 host and simply never bothered updating, because of the install once, update never mentality.

As long as their WordPress continues to work, why would they -- right? [1]

If my research is correct, there were about 2 months in 2005 during which there was no supported version of PHP 5: support for 5.0 had been dropped, and 5.1 wasn't released until a couple of months later.

Versions vs. Server Setups

PHP has been around for a really long time and it's seen its fair share of server setups. It's been run as mod_php in Apache, CGI, FastCGI, embedded, CLI, litespeed, FPM and many more. We're now evolving to multiple PHP-FPM masters per server, each for its own site.

With the rise of HHVM, we'll see even more different types of PHP deployments.

From what I can remember of my earlier days in hosting, this was the typical PHP setup on shared hosting.

Version | Server setup
5.0     | Apache + mod_php
5.1     | Apache + mod_php
5.2     | Apache + suexec + CGI
5.3     | Apache + suexec + FastCGI
5.4     | Apache + FPM
5.5     | Apache + FPM
5.6     | Apache + FPM

The server-side has seen a lot of movement. The current method of running PHP as FPM daemons is far superior to running it as mod_php or CGI/FastCGI. But it took the hosting world quite some time to adopt this.

Even with FPM support coming to PHP 5.3, most servers were still running as CGI/FastCGI.

That was/is a terrible way to run PHP.

It's probably what made it take so long to adopt PHP 5.4 on shared hosting servers. It required a complete rewrite of everything that is shared hosting: no more CGI/FastCGI, but proxy setups that pass data to PHP-FPM. Since FPM support only came to PHP 5.3 a couple of minor versions in (5.3.3), most hosting providers only experienced FPM on 5.4. Once their FPM config was ready, adopting PHP 5.5 and 5.6 was trivial.

Only PHP 5.5's switch to the bundled OPcache required some configuration changes, but it didn't have any further server-side impact.

PHP 5.3 has been supported for a really long time. PHP 5.4 took ages to be implemented on most shared server setups, prolonging the life of PHP 5.3 even long past its expiration date.

If you're installing PHP on a new Red Hat Enterprise Linux/CentOS 7, you get version 5.4. RHEL still backports security fixes[2] from newer releases to 5.4 if needed, but it's essentially an end of life version. It may get security fixes[2], but it won't get bug fixes.

This causes the increase in PHP 5.4 worldwide. It's the default version on the latest RHEL/CentOS.

Moving PHP forward

In order to let these ancient versions of PHP finally rest in peace, a few things need to change drastically; they are the same things that have kept PHP 5.3 alive for so long.

  1. WordPress needs to bump its minimal PHP version from 5.2 to at least PHP 5.5 or 5.6
  2. Drupal 7 also runs on PHP 5.2, with Drupal 8 bumping the minimum version to 5.5.
  3. Shared Hosting providers need to drop PHP 5.2, 5.3 and 5.4 support and move users to 5.5 or 5.6.
  4. OS vendors and packagers need to make at least PHP 5.5 or 5.6 the default, instead of 5.4 that's nearly end of life.

We are doing what we can to improve point 3), by encouraging shared hosting users to upgrade to later releases. Fingers crossed WordPress and OS vendors do the same.

It's unfair to blame the PHP project for the fact that we're still seeing 5.3 and 5.4 in the wild today. But because both versions have been supported for such a long time, their install base is naturally large.

Later releases of PHP have seen shorter support cycles, which will make users think more about upgrading and schedule accordingly. Having a consistent release and deprecation schedule is vital for faster adoption rates.

[1] Well, if you ignore security, speed and scalability as added benefits.
[2] I've proclaimed PHP's CVE vulnerabilities as being irrelevant, and I still stand by that.

The post Why We're Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on ma.ttias.be.

Related posts:

  1. PHP's CVE vulnerabilities are irrelevant ircmaxell wrote a good blog post about usage of PHP...
  2. Clear the APC cache in PHP How do you clear the APC cache? There are basically...
  3. The PHP circle: from Apache to Nginx and back As with many technologies, the PHP community too evolves. And...

29 Jul 2015 7:32pm GMT

Frank Goossens: The 2 Bears Getting Together on Our Tube

The 2 Bears is a duo comprised of Hot Chip's Joe Goddard and Raf Rundell. "Get Together" is one of the songs on their 2012 debut album "Be Strong".

YouTube Video
Watch this video on YouTube or on Easy Youtube.


29 Jul 2015 6:05am GMT

28 Jul 2015

Planet Grep

Xavier Mertens: Integrating VirusTotal within ELK

[This blogpost has also been published as a guest diary on isc.sans.org]

Visualisation is key when you need to keep track of what's happening on networks that carry tons of malicious files every day. virustotal.com is a key player in the daily fight against malware. Not only can you submit and search for samples on their website, they also provide an API to integrate virustotal.com into your own software or scripts. A few days ago, Didier Stevens posted some SANS ISC diaries about the integration of VirusTotal into Microsoft Sysinternals tools (here, here and here). The most common API call is to query the database for a hash. If the file was already submitted by someone else and successfully scanned, you'll get back interesting results, the best known being the file score in the form "x/y". The goal of my setup is to integrate virustotal.com within my ELK stack. To feed VirusTotal, hashes of interesting files must be computed. I'm getting those hashes from my Suricata IDS, which inspects all the Internet traffic passing through my network.

The first step is to configure the MD5 hashes support in Suricata. The steps are described here. Suricata logs are processed by a Logstash forwarder and MD5 hashes are stored and indexed via the field 'fileinfo.md5':

MD5 Hash

Note: It is mandatory to configure Suricata properly to extract complete files from network flows; otherwise, the MD5 hashes won't be correct. It's the same idea as needing a snaplen of '0' (full packets) with tcpdump. In Suricata, have a look at the inspected response body size for HTTP requests and the stream reassembly depth. These settings can also have an impact on performance, so fine-tune them to match your network's behaviour.
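
Roughly, the relevant suricata.yaml settings are the ones sketched below. This is from memory of a Suricata 2.x configuration, so treat the exact keys as assumptions and double-check them against the guide linked above; a value of 0 means unlimited and comes with a memory cost.

# suricata.yaml (excerpts, example values)
# under the outputs: section
- file-store:
    enabled: yes
    force-md5: yes            # compute an MD5 for every extracted file

libhtp:
  default-config:
    response-body-limit: 0    # inspect full HTTP response bodies

stream:
  reassembly:
    depth: 0                  # reassemble complete streams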

To integrate VirusTotal within ELK, a Logstash filter already exists, developed by Jason Kendall. The code is available on github.com. To install it, follow this procedure:

# cd /data/src
# git clone https://github.com/coolacid/logstash-filter-virustotal.git
# cd logstash-filter-virustotal
# gem2.0 build logstash-filter-virustotal.gemspec
# cd /opt/logstash
# bin/plugin install /data/src/logstash-filter-virustotal/logstash-filter-virustotal-0.1.1.gem

Now, create a new filter which will call the plugin and restart Logstash.

filter {
    if ( [event_type] == "fileinfo" and
         [fileinfo][filename] =~ /(?i)\.(doc|pdf|zip|exe|dll|ps1|xls|ppt)/ ) {
        virustotal {
            apikey => '<put_your_vt_api_key_here>'
            field => '[fileinfo][md5]'
            lookup_type => 'hash'
            target => 'virustotal'
        }
    }
}

The filter above will query virustotal.com for the MD5 hash stored in 'fileinfo.md5' if the event contains file information generated by Suricata and the filename has an interesting extension. Of course, you can adapt the filter to your own environment and match only specific file formats using 'fileinfo.magic' or a minimum file size using 'fileinfo.size'. If the conditions match a file, a query is performed via the virustotal.com API and the results are stored in a new 'virustotal' field:

VirusTotal Results

Now it's up to you to build Elasticsearch queries and dashboards to detect suspicious activity in your network. During the implementation, I noticed that too many requests sent in parallel to virustotal.com might freeze my Logstash (mine is 1.5.1). Also, keep an eye on your API key consumption so you don't exceed your request rate or daily/monthly quota.

28 Jul 2015 5:57pm GMT

Xavier Mertens: The Rough Life of Defenders VS. Attackers

Yesterday was the first time I heard the expression "Social Engineering" in the Belgian public media! If this topic made the news, you can imagine that something weird (or juicy, from a journalist's perspective) happened. The Flemish administration had the good idea of testing the resistance of its 15K officials against a phishing attack. As people remain the weakest link, that sounds like a good initiative, right? But if it was disclosed in the news, you can imagine that it was in fact … a flop! (The article is available here in French)

The scenario was classic but well written. People received an email from Thalys, an international train operator (used by many Belgian travellers), reporting a billing issue with their last trip: if they did not provide their bank details, their credit card would be charged up to 20K EUR. The people behind this scenario had not thought about the possible side effects of such a massive mailing. People flooded the Thalys customer support centre with angry calls; others simply notified the police. Thalys, being a commercial company, complained about the lack of communication and the unauthorised use of its brand in the rogue email.

I have performed the same kind of social engineering attacks for customers and I know it's definitely not easy. Instead of breaking into computers, we are trying to break into human behaviour, and people's reactions can be very different: fear, shame, anger, … I suppose the Flemish government was working with a partner or contractor to organise the attack; they should have agreed on a few basic ground rules beforehand.

But a few hours ago, while driving back home and thinking about this sad story, I realised that it proves once again the big difference between defenders and attackers! Attackers use copyrighted material all the time; they build fake websites or compromise official ones to inject malicious payloads into visitors' browsers. They send millions of emails targeting everybody. On the other side, defenders have to perform their job while covering their ass at the same time! And recent changes like the updated Wassenaar Arrangement won't help in the future. I'm curious about the results of this giant test: how many people really clicked, opened a file or handed over their bank details? That was not reported in the news…

28 Jul 2015 8:37am GMT

Kris Buytaert: The power of packaging software, package all the things

Software delivery is hard; plenty of people all over this planet are struggling to deliver software in their own controlled environment. They have invented great patterns that will build an artifact, then do some magic, and the application is up and running.

When talking about continuous delivery, people invariably discuss their delivery pipeline and the different components that need to be in that pipeline.
Often, the focus on getting the application deployed or upgraded from that pipeline is so strong that teams
forget how to deploy their environment from scratch.

After running a number of tests on the code, compiling it where needed, people want to move forward quickly and deploy their release artifact on an actual platform.
This deployment is typically via a file upload or a checkout from a source-control tool from the dedicated computer on which the application resides.
Sometimes, dedicated tools are integrated to simulate what a developer would do manually on a computer to get the application running. Copy three files left, one right, and make sure you restart the service. Although this is obviously already a large improvement over people manually pasting commands from a 42 page run book, it doesn't solve all problems.

Take the guy who quickly makes a change on the production server and never commits it (say goodbye to git pull as your upgrade process).
If you package your software, there are a couple of things you get for free from your packaging system.
Questions like: has this file been modified since I deployed it? Where did this file come from, and when was it deployed?
What version of software X do I have running on all my servers? These are easily answered by the same
tools we already use for every other package on the system. Not only can you use existing tools, you are also using tools that are well known by your ops team and that they
already use for every other piece of software on your system.
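
On an RPM-based system, for instance, those questions map onto standard queries. A sketch, where yaja is the hypothetical application package from the example below and the file path is made up:

$ rpm -V yaja                  # which deployed files were modified since installation?
$ rpm -qf /srv/yaja/app.war    # which package does this file come from?
$ rpm -qi yaja                 # when was it installed and which version is it?
$ rpm -qa | grep yaja          # is it installed on this server at all?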

If your build process creates a package and uploads it to a package repository which is available to the hosts in the environment you want to deploy to, there is no need anymore for
a script that copies the artifact from a 3rd-party location, and even less for that 42 page text document which never gets updated and still tells you to download yaja.3.1.9.war from a location where you can only find
3.2 and 3.1.8, while the developer who knows whether you can use 3.2, or why 3.1.9 got removed, just left for the long weekend.

Another, and maybe even more important, thing is the sadly growing practice of having yet another tool in place that translates that 42 page text document into a bunch of shell scripts created from a drag-and-drop interface; typically that "deploy tool" is even triggered from within the pipeline. Apart from the fact that it usually encourages a pattern of non-reusable code, distributes even more ssh keys, or adds yet another agent on all systems, it doesn't take into account that you want to think of your servers as cattle and be able to deploy new instances of your application fast.
Do you really want to deploy your five new nodes on AWS with a full Apache stack ready for production, then reconfigure your load balancers, only to figure out that someone needs to go click in your continuous integration or deployment tool to deploy the application to the new hosts? That one manual action someone forgets?
IMHO, deployment tools are a phase in the maturity process of a product team: yes, it's a step up from manually deploying software, but it creates more and other problems. Once your team grows in maturity, refactoring out that tool is trivial.

The obvious and trivial approach to this problem, one that comes with even more benefits, is packaging. When you package your artifacts as operating system (e.g., .deb or .rpm) packages,
you can include that package in the list of packages to be deployed at installation time (via Kickstart or debootstrap). Similarly, when your configuration management tool
(e.g., Puppet or Chef) provisions the computer, you can specify which version of the application you want to have deployed by default.
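
A minimal sketch of that last point, assuming Puppet and an OS package called yaja built from the artifact mentioned earlier:

# Pin the application version as part of provisioning (Puppet sketch)
package { 'yaja':
  ensure => '3.1.9',
}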

So, when you're designing how you want to deploy your application, think about deploying new instances or deploying to existing setups (or rather, upgrading your application).
Doing so will make life so much easier when you want to deploy a new batch of servers.

28 Jul 2015 6:35am GMT

26 Jul 2015

Planet Grep

Mattias Geniar: This American Life: The DevOps Episode

The post This American Life: The DevOps Episode appeared first on ma.ttias.be.

If you're a frequent podcast listener, chances are you've heard of the This American Life podcast. It's probably the most listened-to podcast available.

While it normally features all kinds of content, from humorous stories to gripping drama, last week's episode felt a bit different.

They ran a story about NUMMI, a car plant where Toyota and GM worked together to improve productivity.

Throughout the story, a lot of topics are mentioned that can all be brought back to our DevOps ways.

There are a lot more details available in the podcast and you'd be amazed how many of them are analogies to our DevOps movement.

If you're using the Overcast podcast player (highly recommended), you can get the episode here: NUMMI 2015. Or you can grab it from the official website/itunes at ThisAmericanLife.org.

The post This American Life: The DevOps Episode appeared first on ma.ttias.be.

Related posts:

  1. Your Daily Read in December: Sysadvent It's that time of the year again. One article for...

26 Jul 2015 8:49am GMT

24 Jul 2015

Planet Grep

Frank Goossens: How technology (has not) improved our lives

The future is bright, you've got to wear shades? But why do those promises of a better future thanks to technology so often fail to materialize? And how is that linked with the history of human flight, folding paper and the web? Have a look at "Web Design: The First 100 Years", a presentation by Maciej Cegłowski (the guy behind pinboard.in). An interesting read!


24 Jul 2015 4:57pm GMT