29 Sep 2016

Planet Grep

Mattias Geniar: Heads up: CentOS 5 goes end-of-life in 6 months


This is a friendly reminder to let you know that any CentOS 5 systems you still have running will stop receiving updates from March 31st, 2017.

20. What is the support 'end of life' for each CentOS release?

CentOS-5: updates until March 31, 2017
CentOS-6: updates until November 30, 2020
CentOS-7: updates until June 30, 2024

So you have 6 months left to plan and execute a migration to a newer OS, like CentOS 7 (which'll get you to 2024).
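
Not sure which release a box is actually running? A quick check that works on any CentOS system:

$ cat /etc/redhat-release

Anything reporting a 5.x release is on the clock.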

Let's face it, CentOS 5 has had its prime and is long overdue for a more modern replacement. Go and get the benefits of a more up-to-date kernel and packages!

The post Heads up: CentOS 5 goes end-of-life in 6 months appeared first on ma.ttias.be.

29 Sep 2016 11:32am GMT

Xavier Mertens: [SANS ISC Diary] SNMP Pwn3ge

I published the following diary on isc.sans.org: "SNMP Pwn3ge".

Sometimes getting access to company assets is very complicated. Sometimes it is much easier (read: too easy) than expected. If one of the goals of a pentester is to get juicy information about the target, preventing the IT infrastructure from running efficiently (denial of service) is also a "win". Indeed, in some business fields, if the infrastructure is not running, the business is impacted and the company may lose a lot of money. Think about traders… [Read more]
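
To give an idea of how low the bar can sometimes be: with a default or guessable community string, anyone on the network can walk a device's MIB with the stock net-snmp tools (the target address and community string below are just placeholders):

$ snmpwalk -v2c -c public 192.0.2.1 system

And if a read-write community string is exposed, snmpset can be used to change the device's configuration, which is where the denial-of-service angle comes in.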

[The post [SANS ISC Diary] SNMP Pwn3ge has been first published on /dev/random]

29 Sep 2016 7:47am GMT

28 Sep 2016

Planet Grep

Dries Buytaert: The transformation of Drupal 8 for continuous innovation

In the past, after every major release of Drupal, most innovation would shift to two areas: (1) contributed modules for the current release, and (2) core development work on the next major release of Drupal. This innovation model was the direct result of several long-standing policies, including our culture of breaking backward compatibility between major releases.

In many ways, this approach served us really well. It put strong emphasis on big architectural changes, for a cleaner, more modern, and more flexible codebase. The downsides were lengthy release cycles, a costly upgrade path, and low incentive for core contributors (as it could take years for their contribution to be available in production). Drupal 8's development was a great example of this; the architectural changes in Drupal 8 really propelled Drupal's codebase to be more modern and flexible, but also came at the cost of four and a half years of development and a complex upgrade path.

As Drupal grows - in lines of code, number of contributed modules, and market adoption - it becomes harder and harder to rely purely on backward compatibility breaks for innovation. As a result, we decided to evolve our philosophy starting after the release of Drupal 8.

The only way to stay competitive is to have the best product and to help people adopt it more seamlessly. This means that we have to continue to be able to reinvent ourselves, but that we need to make the resulting changes less scary and easier to absorb. We decided that we wanted more frequent releases of Drupal, with new features, API additions, and an easy upgrade path.

To achieve these goals, we adopted three new practices:

  1. Semantic versioning: a major.minor.patch versioning scheme that allows us to add significant, backwards-compatible improvements in minor releases like Drupal 8.1.0 and 8.2.0.
  2. Scheduled releases: new minor releases are timed twice a year for predictability. To ensure quality, each of these minor releases gets its own beta releases and release candidates with strict guidelines on allowed changes.
  3. Experimental modules in core: optional alpha-stability modules shipped with the core package, which allow us to distribute new functionality, gather feedback, and iterate faster on the modules' planned path to stability.

Now that Drupal 8 has been released for about 10 months and Drupal 8.2 is scheduled to be released next week, we can look back at how this new process worked. Drupal 8.1 introduced two new experimental modules (the BigPipe module and a user interface for data migration), various API additions, and usability improvements like spell checking in CKEditor. Drupal 8.2 further stabilizes the migration system and introduces numerous experimental alpha features, including significant usability improvements (i.e. block placement and block configuration), date range support, and advanced content moderation - among a long list of other stable and experimental improvements.

It's clear that these regular feature updates help us innovate faster - we can now add new capabilities to Drupal that previously would have required a new major version. With experimental modules, we can get features in users' hands early, get feedback quickly, and validate that we are implementing the right things. And with the scheduled release cycle, we can deliver these improvements more frequently and more predictably. In aggregate, this enables us to innovate continuously; we can bring more value to our users in less time in a sustainable manner, and we can engage more developers to contribute to core.

It is exciting to see how Drupal 8 transformed our capabilities to continually innovate with core, and I'm looking forward to seeing what we accomplish next! It also raises questions about what this means for Drupal 9 - I'll cover that in a future blog post.

28 Sep 2016 8:46am GMT

27 Sep 2016

Planet Grep

Sven Vermeulen: We do not ship SELinux sandbox

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to grant applications the standard privileges needed for interacting with the user (so that the user can of course still use the application), but removes many permissions that might be abused to obtain information from the system or to exploit vulnerabilities and gain more privileges. It also hides a number of resources on the system through namespaces.
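
As a quick illustration of the basic invocation (not taken from the article, just the documented usage pattern of the utility): run a command through sandbox and ask it for its own SELinux context:

$ sandbox id -Z

The reported context should contain the sandbox_t type, plus a pair of random MCS categories that keeps concurrently running sandboxes isolated from each other.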

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not sufficiently shield off the user's terminal access. In the PoC post we see that characters could be injected into the terminal through the ioctl() function (which issues the ioctl system call used for input/output operations against devices) and are eventually executed when the application finishes.

That's bad of course. Hence the CVE-2016-7545 assignment; a fix has also been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There's some history involved why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command called sandbox, installed through the sys-apps/sandbox package. So back in the days when we still shipped the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for the sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with - again - a security issue with seunshare.

At that point, I had had enough of this sandbox utility. First of all, it never quite worked well on Gentoo as-is (it also requires a policy which is not part of the upstream release), and given its wide-open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users have ever asked me to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

27 Sep 2016 6:47pm GMT

26 Sep 2016

Planet Grep

Sven Vermeulen: Mounting QEMU images

While working on the second edition of my first book, SELinux System Administration - Second Edition, I had to test out a few commands on different Linux distributions to make sure that I don't create instructions that only work on Gentoo Linux. After all, as awesome as Gentoo might be, the Linux world is a bit bigger. So I downloaded a few live systems to run in Qemu/KVM.

Some of these systems however use cloud-init which, while interesting to use, is not set up on my system yet. And without support for cloud-init, how can I get access to the system?

Mounting qemu images on the system

To resolve this, I want to mount the image on my system, and edit the /etc/shadow file so that the root account is accessible. Once that is accomplished, I can log on through the console and start setting up the system further.

Images that are in the qcow2 format can be mounted through the nbd driver, but that would require some updates on my local SELinux policy that I am too lazy to do right now (I'll get to them eventually, but first need to finish the book). Still, if you are interested in using nbd, see these instructions or a related thread on the Gentoo Forums.
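
For completeness, the nbd route would look roughly like this (assuming the nbd kernel module and qemu-nbd are available, and that your SELinux policy allows it):

~# modprobe nbd max_part=16
~# qemu-nbd --connect=/dev/nbd0 root.qcow2
~# mount /dev/nbd0p1 /mnt

Tearing it down again is a matter of umount /mnt followed by qemu-nbd --disconnect /dev/nbd0.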

Luckily, storage is cheap (even SSD disks), so I quickly converted the qcow2 images into raw images:

~$ qemu-img convert root.qcow2 root.raw

With the image now available in raw format, I can use the loop devices to mount the image(s) on my system:

~# losetup /dev/loop0 root.raw
~# kpartx -a /dev/loop0
~# mount /dev/mapper/loop0p1 /mnt

The kpartx command will detect the partitions and ensure that those are available: the first partition becomes available at /dev/mapper/loop0p1, the second at /dev/mapper/loop0p2, and so forth.

With the image now mounted, let's update the /etc/shadow file.

Placing a new password hash in the shadow file

A Google search quickly revealed that the following command generates a shadow-compatible hash for a password:

~$ openssl passwd -1 MyMightyPassword
$1$BHbMVz9i$qYHmULtXIY3dqZkyfW/oO.
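
For reference, that hash goes into the second field of the root entry in /mnt/etc/shadow, so the line ends up looking roughly like this (the day counter and the trailing fields are illustrative defaults):

root:$1$BHbMVz9i$qYHmULtXIY3dqZkyfW/oO.:17000:0:99999:7:::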

The challenge wasn't to find the hash though, but to edit it:

~# vim /mnt/etc/shadow
vim: Permission denied

The image that I downloaded used SELinux (of course), which meant that the shadow file was labeled with shadow_t which I am not allowed to access. And I didn't want to put SELinux in permissive mode just for this (sometimes I /do/ have some time left, apparently).

So I remounted the image, but now with the context= mount option, like so:

~# mount -o context="system_u:object_r:var_t:s0" /dev/mapper/loop0p1 /mnt

Now all files are labeled with var_t which I do have permissions to edit. But I also need to take care that the files that I edited get the proper label again. There are a number of ways to accomplish this. I chose to create a .autorelabel file in the root of the partition. Red Hat based distributions will pick this up and force a file system relabeling operation.
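
Creating that flag file is a one-liner against the still-mounted image:

~# touch /mnt/.autorelabel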

Unmounting the file system

After making the changes, I can now unmount the file system again:

~# umount /mnt
~# kpartx -d /dev/loop0
~# losetup -d /dev/loop0

With that done, I had root access to the image and could start testing out my own set of commands.

It did trigger my interest in the cloud-init setup though...

26 Sep 2016 5:26pm GMT

25 Sep 2016

Planet Grep

Philip Van Hoof: The shared professional secrecy between doctors and police

I first thought about writing a brief summary, but why not just post the whole thing?

Why am I blogging this? Because it touches the edges of our freedoms, namely medical confidentiality. I don't believe that breaching doctors' professional secrecy will deliver us a single terrorist. Yet we are going to have to give it up. I wonder whether people who really have psychological problems will be served by this.

(read the e-mail thread from bottom to top for full context)

On Wed, 2016-09-07 at 15:04 +0000, FMF Kabinet Info (ACA) wrote:
> Dear Sir,
>
> The shared professional secrecy we want to introduce is an exception
> to the existing professional secrecy. It will make it possible for
> professions bound by confidentiality to share "secret information"
> within a specific context.

In your proposal, will the people in those professions be trained to handle
the decision to share this secret information correctly?

What will the criteria be?

> It is explicitly not the intention that doctors would simply get preventive
> access to judicial information or to the police databases.

And the other way around? Will the police be able to get preventive access to
medical records without clear prior approval and a mandate from an
investigating judge (who reviews proportionality and the like)?

> It is the intention that consultation can take place (on a structural basis),
> including between doctors, police, public prosecutor and administrative
> authorities, about e.g. at-risk patients, repeated incidents or hotspots,
> and that on that basis agreements can be made regarding security.

So "between doctors" means that this secret information will be shared with a
relatively large group of people? In other words, it will be leaked fairly
quickly, because securing all of those people's systems is an impossible task.
Doctors' computer systems are already being targeted by, among others,
cybercriminals, and there are already reports of these data being sold
internationally on black markets. Even on a massive scale.

If you do this recklessly, with computer files ending up scattered across all
kinds of networks and individual doctors' computers, then in a few years you
will have people who will never be able to find work again, because it is
almost certain that those data will be sold to HR companies.

Only recently a Belgian company that manages medical records for a Dutch firm
was hacked. Tens of thousands of confidential records were stolen.

How are you going to make sure this does not happen? The budget for
cybersecurity is, moreover, painfully low.

> I am of the opinion that we should relax the legal restrictions,

Yes, but will the oversight become stricter?

Merely relaxing them will, though this is just my opinion, lead to chaotic
abuse.

> so that it becomes possible to share absolutely necessary information with
> the right partners, even if this information was obtained within the
> relationship of trust inherent to the profession.

With the right partners.

Will there then also be a Comité M, next to Comité I and Comité P, to oversee
the doctors? Or will the severely understaffed Comité I do this?

> This should be possible not only in case of an acute emergency,
> but also to address recurring problems or risks. The assessment of the
> opportunity to share this information within that legal framework does,
> however, remain with the holder of the information.

So not with someone trained for this, such as an investigating judge?

> Your remarks are, for that matter, justified: the criterion "people who
> use(d) violence" would be far too vague to allow a sound risk analysis to
> be carried out.

Exactly. So the assessment of whether to share this information should be made
by someone who is trained for it?

> And this would, by the way, be very stigmatising and counter-productive with
> regard to the relationship of trust between doctor and patient and the
> willingness to undergo treatment.

So we agree that only someone who is trained for this can make that judgement?
In other words, an investigating judge.

Because that person can already make this judgement, as long as he or she
consults with the Order of Physicians.

Perhaps I have not properly understood the law on special intelligence
methods, of course …

Kind regards,

Philip

From: FMF Kabinet Info (ACA) <info.kabinet@just.fgov.be>
To: philip@codeminded.be <philip@codeminded.be>
Subject: RE: The shared professional secrecy between doctors and police
Date: Wed, 7 Sep 2016 15:04:15 +0000 (09/07/2016 05:04:15 PM)

Dear Sir,

The shared professional secrecy we want to introduce is an exception to the existing professional secrecy. It will make it possible for professions bound by confidentiality to share "secret information" within a specific context.

It is explicitly not the intention that doctors would simply get preventive access to judicial information or to the police databases.

It is the intention that consultation can take place (on a structural basis), including between doctors, police, public prosecutor and administrative authorities, about e.g. at-risk patients, repeated incidents or hotspots, and that on that basis agreements can be made regarding security.

I am of the opinion that we should relax the legal restrictions, so that it becomes possible to share absolutely necessary information with the right partners, even if this information was obtained within the relationship of trust inherent to the profession. This should be possible not only in case of an acute emergency, but also to address recurring problems or risks. The assessment of the opportunity to share this information within that legal framework does, however, remain with the holder of the information.

Your remarks are, for that matter, justified: the criterion "people who use(d) violence" would be far too vague to allow a sound risk analysis to be carried out.

And this would, by the way, be very stigmatising and counter-productive with regard to the relationship of trust between doctor and patient and the willingness to undergo treatment.

Kind regards,

On behalf of the Minister,

Trees Van Eykeren

Personal assistant to Minister Geens

Office of the Minister of Justice Koen Geens
Waterloolaan 115
1000 Brussel
Tel +32 2 542 8011

www.koengeens.be

--Original message--
From: Philip Van Hoof [mailto:philip@codeminded.be]
Sent: Saturday 20 August 2016 7:06 PM
To: FMF Kabinet Info (ACA)
Subject: The shared professional secrecy between doctors and police

Dear Koen,

If the trade-off for a shared professional secrecy means that, in return, the police must be able to get access in emergency situations to the medical information of people who use violence, I wonder what the criterion will be for "people who use violence". Which criteria will you, as a Belgian citizen, have to meet in order to be a person who uses violence?

In other words, what will the definition of "using violence" be, such that you become a citizen who used violence in the past?

In other words, from what point on are you a member of the group that doctors may consider undesirable?

And what happens to the sharing of a citizen's file once you have been tried and subsequently punished for "violent offences", and you have served your sentence?

Will doctors keep permanent access to that file? In other words, will these people remain punished permanently and forever? You know, of course, that quite a few doctors will refuse to offer these people help.

How will this help the reintegration of these people? I thought our society stood for the principle that once you are convicted, punished and have served your sentence, you are reintegrated into society. But does that then not apply, or does it, when it comes to medical care?

How will you ensure, with these bills, that people who need help will not refrain from seeking help from an expert in the field because these laws on file sharing between police and doctor exist?

In other words: when someone has psychological problems but is still lucid enough to realise that a psychologist or psychiatrist has a certain duty to report those psychological problems to the police, I think that person will refrain from seeking help. How will you ensure that your bills avoid this situation?

Do you, furthermore, expect to catch many psychopathic criminals with this new system?

Why would psychopathic people, who are usually intelligent, suddenly report their malicious thoughts to the doctor? Especially now that everyone (including psychopathic people with malicious thoughts) knows that the doctor is all but obliged to share such malicious thoughts with the police.

Kind regards,

Philip

25 Sep 2016 6:48pm GMT

23 Sep 2016

Planet Grep

Xavier Mertens: Go Hunt for Malicious Activity!

What do security analysts do when they aren't on fire? They hunt for malicious activity on networks and servers! A few days ago, some suspicious traffic was detected. It was an HTTP GET request to a URL like hxxp://xxxxxx.xx/south/fragment/subdir/… Let's try to access this site from a sandbox. Too bad: I landed on a login page which looked like a C&C. I tried some classic credentials and searched for the URL and some patterns on Google, in mailing lists and in private groups: nothing! Too bad…

Then you start some stupid tricks, like moving to the previous directory in the path (doing a "cd ..") again and again, to finally… find another (unprotected) page! This page was indexing screenshots sent by the malware from compromised computers. Let's do a quick 'wget -m' to recursively collect the data. I came back a few hours later, pressed 'F5', and the number of screenshots had increased. The malware was still in the wild. A few hours and some 'F5' later, again more screenshots! Unfortunately, the next day, the malicious content was removed from the server. Fortunately, I had already grabbed copies of the screenshots. Just based on them, it is possible to get interesting info about the attack and the malware.

Here is a selection of interesting screenshots (anonymized). The original screenshots were named "<hostname>_<month>_<day>_<hour>_<min>_<sec>.jpg". Based on the filename format, it seems that the malware is taking one screenshot per minute. I renamed all the files with their MD5 hash to prevent disclosure of sensitive info.
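
Renaming a directory of collected screenshots to their MD5 hashes can be done with a small loop along these lines (a sketch, not necessarily the exact command used):

$ for f in *.jpg; do mv "$f" "$(md5sum "$f" | cut -d' ' -f1).jpg"; done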

(A gallery of anonymized screenshots followed here, each file renamed to its MD5 hash.)

[The post Go Hunt for Malicious Activity! has been first published on /dev/random]

23 Sep 2016 12:52pm GMT

22 Sep 2016

Planet Grep

Claudio Ramirez: Intel NUC 5CPYH and Ubuntu 16.04

I decided to move my home backup drives to ZFS because I wanted built-in file checksumming as a prevention against silent data corruption. I chose ZFS over Btrfs because I have considerable experience with ZFS on Solaris.
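
For those wondering what that buys you in practice: once the backup drives sit in a mirrored pool, every block is checksummed and a periodic scrub will detect (and, thanks to the mirror, repair) silent corruption. A minimal sketch, with the device names obviously being placeholders:

# zpool create backup mirror /dev/sdb /dev/sdc
# zpool scrub backup
# zpool status -v backup

zpool status reports any checksum errors found during the scrub.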

I knew that ZFS loves RAM, hence I upgraded my home "server" (NFS/Samba/Docker) from an old laptop with 2GB of RAM to the cheapest Intel NUC I could find with USB3, Gigabit ethernet and 8GB of RAM. The NUC5CPYH model fitted the bill.

Two remarks for those that want to install Linux on this barebones mini-pc:



22 Sep 2016 1:10pm GMT

21 Sep 2016

Planet Grep

Fabian Arrotin: CentOS Infra public service dashboard

As soon as you're running some IT services, there is one thing that you already know: you'll have downtimes, despite all your efforts to avoid those...

As the old joke says: "What's up?" asked the Boss. "Hopefully everything!" answered the SysAdmin guy...

You probably know that the CentOS infra is itself widespread, and subject to quick changes too. Recently we had to announce an important DC relocation that impacts some of our crucial and publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared for. For such downtimes we always announce them through several channels, like sending a mail to the centos-announce and centos-devel mailing lists (and in this case, also to ci-users). But even when we announce them in advance, some people forget about it, or people using (sometimes "indirectly") the concerned service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).

In parallel to those scheduled outages, we also have the worst kind: the unscheduled ones. For those, depending on the impact/criticality of the affected service and the estimated RTO, we also send a mail to the concerned mailing lists (or not).

So we just decided to show a very simple and public dashboard for the CentOS Infra, but only covering the publicly facing services, to have a quick overview of that part of the Infra. It's now live and hosted on https://status.centos.org.

We use Zabbix to monitor our Infra (so we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote Zabbix proxies (because of our "distributed" network setup right now, with machines all around the world). For some of the services listed on status.centos.org we can "manually" announce a downtime/maintenance period, but Zabbix also updates that dashboard on its own. The simple way to link the two together was to use Zabbix custom alertscripts; you can even customize those to send specific macros and have the alertscript parse them and update the dashboard.
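
To make the alertscript part a bit more concrete: Zabbix simply executes a script from its AlertScriptsPath and passes the "send to", subject and message fields as three positional arguments, so a minimal sketch could look like the following (the endpoint and field names are assumptions, not the real status.centos.org API):

#!/bin/sh
# $1 = "send to", $2 = subject, $3 = message, as passed by Zabbix to a custom alertscript
SERVICE="$1"
STATUS="$2"
DETAILS="$3"
# Hypothetical dashboard endpoint
curl -s -X POST "https://status.example.org/api/update" \
     --data-urlencode "service=${SERVICE}" \
     --data-urlencode "status=${STATUS}" \
     --data-urlencode "details=${DETAILS}"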

We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again Patrick Uiterwijk who wrote that tool for Fedora initially (and that we adapted to our needs).

21 Sep 2016 10:00pm GMT

19 Sep 2016

Planet Grep

Dries Buytaert: Traveling through Tuscany

Four weeks ago we went on a vacation in Tuscany. I finally had some time to process the photos and write down our memories from the trip.

Day 1

Al magrini farmhouse

We booked a last-minute house in a vineyard called Fattoria di Fubbiano. The vineyard has been producing wine and olive oil since the 14th century. On the eastern edge of the estate is Al Magrini, a Tuscan farmhouse surrounded by vines and olive trees.

When we arrived, we were struck by the remoteness. We had to drive through dirt roads for 10 minutes to get to our house. But once we got there, we were awestruck. The property overlooks a valley of olive groves and vines. We could have lunch and dinner outside among the rose bushes, and enjoy our own swimming pool with its own sun beds, deck chairs and garden umbrellas.

While it was full of natural beauty, it was also very simple. We quickly realized there was no TV or internet, no living room, and only a basic kitchen; we couldn't run two appliances at the same time. But nothing some wine and cheese can't fix. After some local cheese, olives and wine, we went for a swim in the pool. Vacation had started!

We had dinner in a great little restaurant in the middle of nowhere. We ate some local, traditional food called "tordelli lucchesi". Nearly every restaurant in Lucca serves a version of this traditional Lucchesan dish. Tordelli look like ravioli, but that is where the resemblance ends. The filling is savory rather than cheesy, and the cinnamon- and sage-infused ragù with which the tordelli are served is distinctly Tuscan. The food was exceptional.

Day 2

Swimming pool

We were woken up by loud screaming from Stan: "Axl got hurt! He fell out of the window!". Our hearts skipped several beats because the bedrooms were on the second floor and we told them they couldn't go downstairs in the morning.

Turns out Axl and Stan wanted to surprise us by setting the breakfast table outside. They snuck downstairs and originally set the table inside, wrote a sweet surprise note in their best English, and made "sugar milk" for everyone -- yes, just like it sounds they added tablespoons full of sugar to the milk. Axl then decided he wanted to set the table outside instead. They overheard us saying how much we enjoyed eating breakfast outside last time we were in Italy. They couldn't open the door to the backyard so Axl decided to climb out of the window, thinking he could unlock the door from the outside. In the process, he fell out of the window from about one meter. Fortunately since it was a first floor window (ground level window), Axl got nothing but a few scratches. Sweet but scary.

Later on, we went to the grocery store and spent most of the day at the pool. The boys can't get enough of playing in the water with the inflatable crocodile "Crocky" raft Stan had received for his birthday two years ago. Vanessa can't get enough of the sun and she also confiscated my Kindle.

With no Kindle to read on, I discovered poop next to the pool. I thought it was from a wild horse and was determined to go to look for it in the coming days.

In the late afternoon, we had snacks and prosecco, something which became our daily tradition on vacation. The Italian cheese was great and the "meloni" was so sweet. The food was simple, but tasted so much better than at home. Maybe it's the taste of vacation.

Vanessa did our first load of laundry which needed to dry in the sun. The clothes were a little crunchy, but there was something fulfilling about the simplicity of it.

Day 3

Hike up the hill

In good tradition, I made coffee in the morning. As I head downstairs the morning light peeks through all the cracks of the house, and highlights the old brick and stone walls. The coffee machine is charmingly old school. We had to wait 20 minutes or so for the whole pot to brew.

Vanessa made french toast for breakfast. She liked to shout in Dutch "Het is vakantie!" during the breakfast preparation. Stan moaned repeatedly during breakfast - he loved the french toast! It made us laugh really hard.

Today was a national holiday in Italy so everything is closed. We decided to spend the time at the pool; no one was complaining about that. Most weeks feel like a marathon at work, so it was nice to do absolutely nothing for a few days, not keep track of time, and just enjoy our time together.

To take a break from the pool, we decided to walk through the olive groves looking for those wild horses. Axl and Stan weren't especially fond of the walk as it started off uphill. Stan told us "I'm sweating" as if we would turn back. Instead of wild horses we found a small mountain village. The streets were empty and the shutters were closed to keep the peak heat of the day out. It seemed like we had stepped back in time 30-40 years.

Sitting next to the pool gave me a lot of time to think and reflect. It's nice to have some headspace. Our afternoon treat by the pool was iced coffee! We kept the leftover coffee from the morning to pour over ice for a refreshing drink. One of Vanessa's brilliant ideas.

Our evening BBQs are pretty perfect. We made Spanish style bruschetta; first grilling the bread, then rubbing it with garlic and tomato, drizzling some local olive oil over it, and adding salt and pepper. After the first bite it was requested we make this more often.

We really felt we're all connecting. We even had an outdoor dance party as the sun was setting. Axl wrote in our diary: "Vanessa laughed so hard she almost peed her pants. LOL.". Stan wanted to know if his moves made her laugh the hardest.

Every evening we would shower to wash off the bug spray, because mosquitos were everywhere. When it was finally my time to shower, we ran out of water -- just when I was all soaped up. Fortunately, we had a bottle of Evian that I could use to rinse off (just like the Kardashians).

Day 4

Italian house

We set the alarm for 7:30am so we could head to Lucca, a small city 30 minutes from our house -- 15 minutes of that is spent getting out of the vineyard and mountain trails. We were so glad we rented "Renny", our 4x4 Jeep Renegade, as there are no real paved roads in the vineyard.

We visited "La Boutique Dei Golosi", a tiny shop that sold local wines, olive oils and other Italian goods. The shop owner, Alain, opened bottles of wine and let us taste different olive oils on bread. He offered the boys samples of everything the adults tried and was impressed that they liked it. Interestingly enough, all four of us preferred the same olive oil. We shipped 5 bottles home, along with several bottles of wine, limoncello and 3 jars of white truffle paste. It was fun knowing a big box of Italian goods would arrive once we were home.

When we got back from Lucca, we fired up the grill and drank our daily bottle of prosecco. Every hour we hear bells ring -- it's from the little town up on the hill. The bells are how we kept track of time. The go-at-your-own-pace lifestyle is something all North Americans should experience. The rhythm of Tuscany's countryside is refreshing -- the people there know how to live.

Axl and Stan enjoyed the yard. When they weren't playing soccer or hunting for salamanders, they played ninjas using broomsticks. Axl was "Poison Ivy" and Stan was "Bamboo Sham". Apparently, they each have special moves that they can use once every battle.

Day 5

Wine tasting fattoria di fubbiano

Today we went wine tasting at our vineyard, Fattoria di Fubbiano, and got a tour of the cellar. It was great that the tour was in "inglese". We learned that they manage 45 hectares and produce 100,000 bottles of wine annually. We bought 21 of them and shipped them home, so there are only 99,979 left. The best part? We could walk home afterwards. :)

Our charcoal reserves are running low; a sign of a great vacation.

Day 6

Funicular montecatini alto

We visited Montecatini Alto, about a 40 minute drive from our house. To get to Montecatini Alto, we took a funicular built in 1898. They claim it is the oldest working cable car in the world. I believe them.

Montecatini Alto is a small medieval village that dates back to 1016. It's up on a hill. The views from the village are amazing, overlooking a huge plain. I closed my eyes and let my mind wander, trying to imagine how life was back then, over a thousand years ago.

At the very top there was an old church where we lit a candle for Opa. I think about Opa almost every day. I imagined all of the stories and historic facts he would tell if he were still with us.

The city square was filled with typical restaurants, cafes and shops. We poked around in some of the shops and Stan found a wooden sword he wanted, but couldn't decide if that's what he wanted to spend his money on. To teach Axl and Stan about money, we let them spend €20 of their savings on vacation. Having to use their own money made them think long and hard on their purchases. Since the shops close from 1pm to 2:30pm, we went for lunch in one of the local restaurants on the central square while Stan contemplated his purchase. It's great to see Axl explore the menu and try new things. He ordered the carbonara and loved it. Stan finally decided he wanted the sword bad enough, so we went back and he bought it for €10.

When we got back to our vineyard, we spotted wild horses! Finally proof that they exist. Vanessa quickly named them Hocus, Pocus and Dominocus.

In the evening we had dinner in a nearby family restaurant called "Da Mi Pa". The boys had tordelli lucchesi and then tiramisu for dessert. Chances are slim but I hope that they will remember those family dinners. They talked about the things that are most important in life, as well as their passions (computer programming for Axl and soccer for Stan). The conversations were so fulfilling and a highlight of the vacation.

Day 7

Leaning tower of pisa

Spontaneous last minute decision on what to do today. We came up with a list of things to do and Axl came up with a voting system. We decided to visit the Leaning Tower of Pisa. We were all surprised how much the tower actually leans and of course we did the goofy photos to prove we were there. These won't be published.

Day 8

Ponte vecchio florence

Last day of the vacation. We're all a bit sad to go home. The longer we stay, the happier we get. Happier not because of where we were, but about how we connected.

Today, we're making the trek to Florence. One of the things Florence is known for is leather. Vanessa wanted to look for a leather jacket, and I wanted to look for a new duffel bag. We found a shop that was recommended to us; one of the shop owners is originally from the Greater Boston area. Enio, her husband, was very friendly and kind. He talked about swimming in Walden Pond, visiting Thoreau's house, etc. The boys couldn't believe he had been to Concord, MA. Enio really opened up and gave us a private tour of his leather workshop. His workshop consisted of small rooms filled with piles and piles of leather and all sorts of machinery and tools.

I had a belt made with my initials on it (on the back). Stan got a bracelet made out of the leftover leather from the belt. Axl also got a bracelet made, and both had their initials stamped on them. Vanessa bought a gorgeous brown leather jacket, a purse and funky belt. And last but not least, l found a beautiful handmade ram-skin duffel bag in a cool green color. Enio explained that it takes him two full days to make the bag. It was expensive but will hopefully last for many, many years. I wanted to buy a leather jacket but as usual they didn't have anything my size.

We strolled across the Ponte Vecchio and took some selfies (like every other tourist). We had a nice lunch. Pasta for Vanessa, Axl and myself. Stan still has an aversion to ragù even though he ate it 3 times that week and loved it every time. Then we had our "grand finale gelato" before we headed to the airport.

19 Sep 2016 1:38am GMT

16 Sep 2016

Planet Grep

Lionel Dricot: Leave me the night!


For Stoyan, my friend and a reader of this blog, who passed away on 8 September 2016.

Leave me the night!

Ever since I was very small,
I have been surrounded, and you taught me
That the sun is the source of all life.
Today, look, I have grown up.
So leave me the night.

Leave me my anger against traditions,
Against religions and all those stupid rules
We are supposed to obey without asking questions.
Let me howl at them, scream my name at them,
Shake them up with a great noise,
In an uproar, a commotion,
Yes, leave me the night.

Leave me the night
Let me go into the dark
Leave me untouched by your hopes
Let me howl
Let me shock
Leave me even if you do not like it
Leave me even if you do not understand
Let me choose
Let me hate
Let me be misunderstood
Leave me the night

Do not look for someone to blame
Do not ask yourselves how to change me
Do not try to alter the past!
Love, you bring me plenty of it,
Understand that no one failed; I chose.
So, leave me the night.

When the illusion of eternity fades away,
When the veil of naivety dies,
When light gives way to darkness,
All that remains is an insatiable quest for freedom.
Leave me the path that I have chosen.
Leave me the freedom of the night.


Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

16 Sep 2016 8:09pm GMT

15 Sep 2016

Planet Grep

Xavier Mertens: IP Address Open Source Intelligence for the Win

During the last edition of the Troopers security conference in March, I attended a talk about "JustMetaData". It's a tool developed by Chris Truncer to perform open source intelligence against IP addresses. Since then, I used this tool on a regular basis. Often when you're using a tool, you have ideas to improve it. JustMetaData being written in Python and based on "modules", it is easy to write your own. My first contribution to the project was a module to collect information from DShield.

Passive DNS is a nice technique to track the use of IP addresses, mainly to track malicious domains, but it can be combined with other techniques to get a better view of the "scope" of an IP address. My new contribution is a module which uses the Microsoft bing.com API to collect hostnames associated with an IP address. Indeed, bing.com has a nice search query 'ip:x.x.x.x'. Having a good view of an IP address is key when you collect addresses to build blacklists. Indeed, behind a single IP address, hundreds of websites can be hosted (in a shared environment). A good example is big WordPress providers. Blacklisting such an IP address might therefore have impacts well beyond the intended target.

Here is an example of the bing.com module:

$ ./Just-Metadata.py
################################################################################
# Just-Metadata #
################################################################################

[>] Please enter a command: load ip.tmp
[*] Loaded 1 systems
[>] Please enter a command: gather Bing_IP
Found 549 hostnames for 184.168.47.225
[>] Please enter a command: ip_info 184.168.47.225
...stuff deleted...
hostnames: la-reserve.be
cityplus.be
www.mintjens.be
consortiunews.com
www.marten-glas.be
...stuff deleted...
[>] Please enter a command: exit

I wrote a Dockerfile for Just-Metadata to run it in a container. To fully automate the gathering process and reporting, I wrote a second patch that allows the user to specify a filename to save the state of a research session, or to export the results into the right volume. The Docker image is built with the following command:

# docker build -t justmetadata --build-arg SHODAN_APIKEY=<yourkey> --build-arg BING_APIKEY=<yourkey> .

Now the analysis of IP addresses can be fully automated (with the target IP addresses stored in ip.tmp):

# docker run -it --rm -v /tmp:/data justmetadata \
             -l /data/ip.tmp -g All \
             -e /data/results.csv

As a bonus, the CSV files can be indexed by a Splunk instance (or any other tool) for further processing:

Splunk IP Address Details

Here is an example of complete OSINT details about an IP address:

Splunk IP OSINT

Based on these details, you can generate more accurate blacklists. I sent a pull request to Chris with my changes. In the meantime, you can use my repository to use the "Bing_IP" module.

[The post IP Address Open Source Intelligence for the Win has been first published on /dev/random]

15 Sep 2016 6:28pm GMT

Frank Goossens: Autoptimize cache size: the canary in the coal mine

Copy/pasted straight from a support question on wordpress.org:

Auto-deleting the cache would only solve one problem you're having (disk space), but there are 2 other problems -which I consider more important- that auto-cleaning can never solve:
1. you will be generating new autoptimized JS very regularly, which slows your site down for users who happen to be the unlucky ones requesting that page
2. a visitor going from page X to page Y will very likely have to request a different autoptimized JS file for page Y instead of using the one from page X from cache, again slowing your site down

So I actually consider the cache-size warning like a canary in the coal mines; if the canary dies, you know there's a bigger problem.

You don't (or shouldn't) really want me to take away the canary! :)


15 Sep 2016 4:11pm GMT

Mattias Geniar: Varnish Cache 5.0 Released


This just got posted to the varnish-announce mailing list.

We have just released Varnish Cache 5.0:

http://varnish-cache.org/docs/5.0/whats-new/relnote-5.0.html

This release comes almost exactly 10 years after Varnish Cache 1.0,
and also marks 10 years without any significant security incidents.

Next major release (either 5.1 or 6.0) will happen on March 15 2017.

Enjoy!

PS: Please help fund my Varnish habit: http://phk.freebsd.dk/VML/

---
Poul-Henning Kamp

varnish-announce mailing list

Lots of changes (HTTP/2, Shard director, ban lurker improvements, ...) and info on upgrading to Varnish 5!

I'll be working on updated configs for Varnish 5 (as I did for Varnish 4) as soon as I find some time for it.

The post Varnish Cache 5.0 Released appeared first on ma.ttias.be.

15 Sep 2016 11:31am GMT

Johan Van de Wauw: Foss4g.be - The Belgian conference on open-source geospatial software

Next week, the second edition of Foss4g.be, the Belgian conference on open-source geospatial software, will take place in Tour and Taxis in Brussels.

The conference will bring together developers and users of open source geomatics software from Belgium and all over the world. Participation is free of charge but registration is required.

We have a varied program for you: in the plenary session we will present the current state of many of the major Open source GIS applications - this includes novelties demonstrated at the last FOSS4G conference in Bonn.

Next, in the main track, the different regions of Belgium will present their open (geo)data sets. Moreover, Christian Quest from OpenStreetMap France will explain how the OpenStreetMap community was actually part of the creation and maintenance of the address database in France.

In our parallel sessions we have presentations covering all aspects of open source geospatial software: presentations on the usage of some of the most widely used programs, such as QGIS, OpenLayers 3 and PostGIS, but also less well known solutions for handling 3D data, building an SDI and doing advanced analyses. With the OpenStreetMap conference happening just the day after this conference, we also have a special track with specialised OpenStreetMap presentations.

Last but not least, conferences are all about networking and meeting new people, and you will get the chance to do so during our breaks!

Hope to see many of you in Brussels!

http://2016.foss4g.be

15 Sep 2016 9:17am GMT

14 Sep 2016

Planet Grep

Dries Buytaert: Can Drupal outdo native applications?

I've made no secret of my interest in the open web, so it won't come as a surprise that I'd love to see more web applications and fewer native applications. Nonetheless, many argue that "the future of the internet isn't the web" and that it's only a matter of time before walled gardens like Facebook and Google - and the native applications which serve as their gatekeepers - overwhelm the web as we know it today: a public, inclusive, and decentralized common good.

I'm not convinced. Native applications seem to be winning because they offer a better user experience. So the question is: can open web applications, like those powered by Drupal, ever match up to the user experience exemplified by native applications? In this blog post, I want to describe inversion of control, a technique now common in web applications and that could benefit Drupal's own user experience.

Native applications versus web applications

Using a native application - for the first time - is usually a high-friction, low-performance experience because you need to download, install, and open the application (Android's streamed apps notwithstanding). Once installed, native applications offer unique access to smartphone capabilities such as hardware APIs (e.g. microphone, GPS, fingerprint sensors, camera), events such as push notifications, and gestures such as swipes and pinch-and-zoom. Unfortunately, most of these don't have corresponding APIs for web applications.

A web application, on the other hand, is a low-friction experience upon opening it for the first time. While native applications can require a large amount of time to download initially, web applications usually don't have to be installed and launched. Nevertheless, web applications do incur the constraint of low performance when there is significant code weight or dozens of assets that have to be downloaded from the server. As such, one of the unique challenges facing web applications today is how to emulate a native user experience without the drawbacks that come with a closed, opaque, and proprietary ecosystem.

Inversion of control

In the spirit of open source, the Drupal Association invited experts from the wider front-end community to speak at DrupalCon New Orleans, including from Ember and Angular. Ed Faulkner, a member of the Ember core team and contributor to the API-first initiative, delivered a fascinating presentation about how Drupal and Ember working in tandem can enrich the user experience.

One of Ember's primary objectives is to demonstrate how web applications can be indistinguishable from native applications. And one of the key ideas of JavaScript frameworks like Ember is inversion of control, in which the client side essentially "takes over" from the server side by driving requirements and initiating actions. In the traditional page delivery model, the server is in charge, and the end user has to wait for the next page to be delivered and rendered through a page refresh. With inversion of control, the client is in charge, which enables fluid transitions from one place in the web application to another, just like native applications.

Before the advent of JavaScript and AJAX, distinct states in web applications could be defined only on the server side as individual pages and requested and transmitted via a round trip to the server, i.e. a full page refresh. Today, the client can retrieve application states asynchronously rather than depending on the server for a completely new page load. This improves perceived performance. I discuss the history of this trend in more detail in this blog post.
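
As a small, concrete illustration: with Drupal 8's core REST module enabled for content, a client can ask for just the data of a piece of content as JSON rather than a fully rendered page (the URL and node ID are placeholders):

$ curl -H 'Accept: application/json' 'https://example.com/node/1?_format=json'

The JavaScript running in the browser can then render that response in place, which is exactly the kind of client-driven state change described above.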

Through inversion of control, JavaScript frameworks like Ember provide much more than seamless interactions and perceived performance enhancements; they also offer client-side storage and offline functionality when the client has no access to the server. As a result, inversion of control opens a door to other features requiring the empowerment of the client beyond just client-driven interactions. In fact, because the JavaScript code is run on a client such as a smartphone rather than on the server, it would be well-positioned to access other hardware APIs, like near-field communication, as web APIs become available.

Inversion of control in end user experiences

Application-like animation using Ember and Drupal
When a user clicks a teaser image on the homepage of an Ember-enhanced Drupal.com, the page seamlessly transitions into the full content page for that teaser, with the teaser image as a reference point, even though the URL changes.

In response to our recent evaluation of JavaScript frameworks and their compatibility with Drupal, Ed applied the inversion of control principle to Drupal.com using Ember. Ed's goal was to enhance Drupal.com's end user experience with Ember to make it more application-like, while also preserving Drupal's editorial and rendering capabilities as much as possible.

Ed's changes are not in production on Drupal.com, but in his demo, clicking a teaser image causes it to "explode" to become the hero image of the destination page. Pairing Ember with Drupal in this way allows a user to visually and mentally transition from a piece of teaser content to its corresponding page via an animated transition between pages - all without a page refresh. The animation is very impressive and the animated GIF above doesn't do it full justice. While this transition across pages is similar to behavior found in native mobile applications, it's not currently possible out of the box in Drupal without extensive client-side control.

Rather than the progressively decoupled approach, which embeds JavaScript-driven components into a Drupal-rendered page, Ed's implementation inverts control by allowing Ember to render what is emitted by Drupal. Ember maintains control over how URLs are loaded in the browser by controlling URLs under its responsibility; take a look at Ed's DrupalCon presentation to better understand how Drupal and Ember interact in this model.

These impressive interactions are possible using the Ember plugin Liquid Fire. Fewer than 20 lines of code were needed to build the animations in Ed's demo, much like how SDKs for native mobile applications provide easy-to-implement animations out of the box. Of course, Ember isn't the only tool capable of this kind of functionality. The RefreshLess module for Drupal by Wim Leers (Acquia) also uses client-side control to enable navigating across pages with minimal server requests. Unfortunately, RefreshLess can't tap into Liquid Fire or other Ember plugins.

Inversion of control in editorial experiences

In-place editing using Ember and Drupal
In CardStack Editor, an editorial interface with transitions and animations is superimposed onto the content page in a manner similar to outside-in, and the editor benefits from an in-context, in-preview experience that updates in real time.

We can apply this principle of inversion of control not only to the end user experience but also to editorial experiences. The last demos in Ed's presentation depict CardStack Editor, a fully decoupled Ember application that uses inversion of control to overlay an administrative interface to edit Drupal content, much like in-place editing.

CardStack Editor communicates with Drupal's web services in order to retrieve and manipulate content to be edited, and in this example Drupal serves solely as a central content repository. This is why the API-first initiative is so important; it enables developers to use JavaScript frameworks to build application-like experiences on top of and backed by Drupal. And with the help of SDKs like Waterwheel.js (a native JavaScript library for interacting with Drupal's REST API), Drupal can become a preferred choice for JavaScript developers.

Inversion of control as the rule or exception?

Those of you following the outside-in work might have noticed some striking similarities between outside-in and the work Ed has been doing: both use inversion of control. The primary purpose of our outside-in interfaces is to provide for an in-context editing experience in which state changes take effect live before your eyes; hence the need for inversion of control.

Thinking about the future, we have to answer the following question: does Drupal want inversion of control to be the rule or the exception? We don't have to answer that question today or tomorrow, but at some point we should.

If the answer to that question is "the rule", we should consider embracing a JavaScript framework like Ember. The constellation of tools we have in jQuery, Backbone, and the Drupal AJAX framework makes using inversion of control much harder to implement than it could be. With a JavaScript framework like Ember as a standard, implementation could accelerate by becoming considerably easier. That said, there are many other factors to consider, including the costs of developing and hosting two codebases in different languages.

In the longer term, client-side frameworks like Ember will allow us to build web applications which compete with and even exceed native applications with regard to perceived performance, built-in interactions, and a better developer experience. But these frameworks will also enrich interactions between web applications and device hardware, potentially allowing them to react to pinch-and-zoom, issue native push notifications, and even interact with lower-level devices.

In the meantime, I maintain my recommendation of (1) progressive decoupling as a means to begin exploring inversion of control and (2) a continued focus on the API-first initiative to enable application-like experiences to be developed on Drupal.

Conclusion

I'm hopeful Drupal can exemplify how the open web will ultimately succeed over native applications and walled gardens. Through the API-first initiative, Drupal will provide the underpinnings for web and native applications. But is it enough?

Inversion of control is an important principle that we can apply to Drupal to improve how we power our user interactions and build robust experiences for end users and editors that rival native applications. Doing so will enable us to enhance our user experience long into the future in ways that we may not even be able to think of now. I encourage the community to experiment with these ideas around inversion of control and consider how we can apply them to Drupal.

Special thanks to Preston So for contributions to this blog post and to Angie Byron, Wim Leers, Kevin O'Leary, Matt Grill, and Ted Bowman for their feedback during its writing.

14 Sep 2016 6:54pm GMT