20 Feb 2017

Planet Grep

Wim Leers: OpenTracker

This is an ode to Dirk Engling's OpenTracker.

It's a BitTorrent tracker.

It's what powered The Pirate Bay in 2007-2009.

I've been using it to power the downloads on http://driverpacks.net since the end of November 2010; that's more than six years. It has facilitated 9,839,566 downloads from December 1, 2010 until today. That's almost 10 million downloads!


It's one of the most stable pieces of software I've ever encountered. I compiled it in 2010, and it has never once crashed. I've seen uptimes of hundreds of days.

wim@ajax:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist


The simplicity is fantastic. Getting up and running takes three commands: git clone git://erdgeist.org/opentracker .; make; ./opentracker. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all priviliges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can't emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That's also what the homepage said in 2010. It's one of the reasons why I dared to give it a try. I didn't test it on a "plastic WLAN-router", but everything I've seen confirms it.


Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.

Need to update this whitelist, for example for a new release of your software? Of course you don't want to restart your opentracker instance and lose all current state. It's got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist[1].
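The signal-driven reload can be sketched in a few lines of Python. This is only an illustration of the pattern (opentracker itself is C), and the file path and hashes are invented for the example:

```python
import os
import signal
import tempfile

# Sketch of opentracker's reload pattern: a SIGHUP handler re-reads the
# whitelist without restarting the process, so no tracker state is lost.
WHITELIST = os.path.join(tempfile.mkdtemp(), "whitelist")  # example path

def load_whitelist():
    with open(WHITELIST) as f:
        return {line.strip() for line in f if line.strip()}

def on_sighup(signum, frame):
    global whitelist
    whitelist = load_whitelist()

signal.signal(signal.SIGHUP, on_sighup)

# Initial whitelist: one 40-character torrent info hash per line.
with open(WHITELIST, "w") as f:
    f.write("0123456789abcdef0123456789abcdef01234567\n")
whitelist = load_whitelist()

# A new release: append its hash, then signal the process -- the moral
# equivalent of `kill -s HUP $(pidof opentracker)`.
with open(WHITELIST, "a") as f:
    f.write("deadbeefdeadbeefdeadbeefdeadbeefdeadbeef\n")
os.kill(os.getpid(), signal.SIGHUP)
print(len(whitelist))  # 2 hashes now served
```

Nothing is thrown away on reload; the handler only swaps in the new list.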


I've been in the process of moving off of my current (super reliable, but also expensive) hosting. There are plenty of specialized companies offering HTTP hosting[2] and even rsync hosting[3]. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no companies that offer that. I don't want to rely on another tracker, because I want there to be zero affiliation with illegal files[4]. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by http://driverpacks.net to be downloaded.

So, I found the cheapest VPS I could find, with the least amount of resources. For USD $13.50[5], I got the lowest-specced VPS from a reliable-looking provider, with 128 MB RAM. Then I set it up:

  1. ssh'd onto it.
  2. rsync'd over the files from my current server (alternatively: git clone and make).
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for tracker.driverpacks.net, and instead made it an A record pointing to my new VPS.
  5. watched http://tracker.driverpacks.net:6969/stats?mode=tpbs&format=txt on both the new and the old server, to verify that traffic was moving over to my new cheap opentracker VPS as the DNS change propagated.

Drupal module

Since driverpacks.net runs on Drupal, there of course is an OpenTracker Drupal module to integrate the two (I wrote it). It provides an API to:

You can see the live stats at http://driverpacks.net/stats.


opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, its simplicity also seems to lead to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet relies on only a single I/O library, and its source code is very readable. Not only that, it's also highly scalable.

It's the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed!

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you're obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I'd love to buy you sparkling beverages some time :)

  1. kill -s HUP $(pidof opentracker) ↩︎

  2. I'm using Gandi's Simple Hosting. ↩︎

  3. https://rsync.net ↩︎

  4. Also: all my existing torrents use http://tracker.driverpacks.net:6969/announce as the URL. The free trackers I could find were all using udp://. So new leechers would then no longer be able to find the hundreds of existing seeders. Adding trackers to already-downloaded torrents is not possible. ↩︎

  5. $16.34 including 21% Belgian VAT. ↩︎

  6. Rather than having the Drupal module send a SIGHUP from within PHP, which requires elevated rights, I instead opted for a cronjob that runs every 10 minutes: */10 * * * * kill -s HUP $(pidof opentracker). ↩︎

20 Feb 2017 9:26pm GMT

18 Feb 2017


Frank Goossens: Music from our Tube (& Nova); Sampha

Sampha, live on Radio Nova, keyboards only;

(Embedded YouTube video)


18 Feb 2017 5:26pm GMT

17 Feb 2017


Jeroen De Dauw: Why Every Single Argument of Dan North is Wrong

Alternative title: Dan North, the Straw Man That Put His Head in His Ass.

This blog post is a reply to Dan's presentation Why Every Element of SOLID is Wrong. It is crammed full of straw-man argumentation in which he misinterprets what the SOLID principles are about. After refuting each principle he proposes an alternative, typically a well-accepted non-SOLID principle that does not contradict SOLID. If you are not that familiar with the SOLID principles and cannot spot the bullshit in his presentation, this blog post is for you. The same goes if you enjoy bullshit being pointed out and broken down.

What follows are screenshots of select slides with comments on them underneath.

Dan starts by asking "What is a single responsibility anyway". Perhaps he should have figured that out before giving a presentation about how it is wrong.

A short (non-comprehensive) description of the principle: systems change for various different reasons. Perhaps a database expert changes the database schema for performance reasons, perhaps a User Interface person is reorganizing the layout of a web page, perhaps a developer changes business logic. What the Single Responsibility Principle says is that ideally, changes for such disparate reasons do not affect the same code. If they did, different people would get in each other's way. Possibly worse still, if the concerns are mixed together, and you want to change some UI code, suddenly you need to deal with, and thus understand, the business logic and database code.

How can we predict what is going to change? Clearly you can't, and this is simply not needed to follow the Single Responsibility Principle or to get value out of it.

Write simple code… no shit. One of the best ways to write simple code is to separate concerns. You can be needlessly vague about it and simply state "write simple code". I'm going to label this Dan North's Pointlessly Vague Principle. Congratulations sir.

The idea behind the Open Closed Principle is not that complicated. To partially quote the first line on the Wikipedia Page (my emphasis):

… such an entity can allow its behaviour to be extended without modifying its source code.

In other words, when you ADD behavior, you should not have to change existing code. This is very nice, since you can add new functionality without having to rewrite old code. Contrast this to shotgun surgery, where to make an addition, you need to modify existing code at various places in the codebase.

In practice, you cannot gain full adherence to this principle, and you will have places where you need to modify existing code. Full adherence to the principle is not the point. Like all engineering principles, it is a guideline that lives in a complex world of trade-offs. Knowing these guidelines is very useful.
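As a concrete sketch of what the principle buys you (all names below are invented for the example): the existing code never needs to change when a new behavior is added.

```python
# Existing code: each shipping method is its own class, and quote() only
# relies on the cost() method they all provide.
class Ground:
    def cost(self, weight):
        return 1.0 * weight

class Air:
    def cost(self, weight):
        return 3.0 * weight

def quote(shipper, weight):
    return shipper.cost(weight)

# Extension: ADDING behavior means adding a class. quote(), Ground and
# Air stay untouched -- no shotgun surgery across the codebase.
class Drone:
    def cost(self, weight):
        return 5.0 * weight

print(quote(Drone(), 2))  # 10.0
```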

Clearly it's a bad idea to leave in place code that is wrong after a requirement change. That's not what this principle is about.

Another very informative "simple code is a good thing" slide.

To be honest, I'm not entirely sure what Dan is getting at with his "is-a, has-a" vs "acts-like-a, can-be-used-as-a". It does make me think of the Interface Segregation Principle, which, coincidentally, is the next principle he misinterprets.

The remainder of this slide is about the "favor composition over inheritance" principle. This is really good advice, which has been well accepted in professional circles for a long time. This principle is about code sharing, which is generally better done via composition than inheritance (the latter creates very strong coupling). In the last big application I wrote, there are several hundred classes, and less than a handful inherit concrete code. Inheritance has a use completely different from code reuse: subtyping and polymorphism. I won't go into detail about those here, and will just say that this is at the core of what Object Orientation is about, and that even in the application I mentioned, this is used all over, making the Liskov Substitution Principle very relevant.
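To make the composition point concrete, here is a minimal sketch (names invented): the shared logging code is injected rather than inherited.

```python
class Logger:
    def __init__(self):
        self.lines = []

    def log(self, msg):
        self.lines.append(msg)

# OrderService HAS-A logger; it IS-NOT-A logger. The code sharing works
# without the strong coupling that subclassing Logger would create.
class OrderService:
    def __init__(self, logger):
        self.logger = logger

    def place(self, item):
        self.logger.log("placed " + item)

log = Logger()
OrderService(log).place("book")
print(log.lines)  # ['placed book']
```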

Here Dan is slamming the principle for being too obvious? Really?

"Design small, role-based classes". Here Dan changed "interfaces" into "classes", which results in a line that makes me think of the Single Responsibility Principle. More importantly, there is a misunderstanding about the meaning of the word "interface" here. This principle is about the abstract concept of an interface, not the language construct that you find in some programming languages such as Java and PHP. A class forms an interface. This principle applies to OO languages that do not have an interface keyword, such as Python, and even to those that do not have a class keyword, such as Lua.

If you follow the Interface Segregation Principle and create interfaces designed for specific clients, it becomes much easier to construct or invoke those clients. You won't have to provide additional dependencies that your client does not actually care about. In addition, if you are doing something with those extra dependencies, you know this client will not be affected.
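A small, hypothetical Python sketch of a client-specific interface (in Python, any object with the right methods satisfies the "interface"; no keyword needed):

```python
# A concrete store with a fat surface: it can both read and write.
class SqlStore:
    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data.get(key)

    def write(self, key, value):
        self._data[key] = value

# This client only depends on the narrow "can read" role. Anything with
# a read() method will do, and changes to write() cannot affect it.
def render_report(store):
    return "report: %s" % store.read("total")

store = SqlStore()
store.write("total", 42)
print(render_report(store))  # report: 42
```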

This is a bit bizarre. The definition Dan provides is good enough, even though it is incomplete, which can be excused by it being a slide. From the slide it's clear that the Dependency Inversion Principle is about dependencies (who would have guessed) and coupling. The next slide is about how reuse is overrated. As we've already established, this is not what the principle is about.

As to the DIP leading to DI frameworks that you then depend on… this is like saying that if you eat food, you might eat non-nutritious food, which is not healthy. The fix is to not eat non-nutritious food, not to reject food altogether. Remember the application I mentioned? It uses dependency injection all the way, without any framework or magic. In fact, 95% of the code does not bind to the web framework used, due to adherence to the Dependency Inversion Principle. (Read more about this application)
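A sketch of that framework-free dependency injection (all names invented for the example):

```python
# The high-level SignupService depends on an abstraction: "something
# with a send() method". The concrete mailer is wired in by hand at the
# entry point; no DI framework or magic is involved.
class SmtpMailer:
    def send(self, to, body):
        return "smtp->%s: %s" % (to, body)

class FakeMailer:  # trivially swappable, e.g. in tests
    def send(self, to, body):
        return "fake->%s: %s" % (to, body)

class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer  # injected from the outside

    def signup(self, email):
        return self.mailer.send(email, "welcome")

print(SignupService(FakeMailer()).signup("a@b.c"))  # fake->a@b.c: welcome
```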

That attitude explains a lot about the preceding slides.

Yeah, please do write simple code. The SOLID principles and many others can help you with this difficult task. There is a lot of hard-won knowledge in our industry and many problems are well understood. Frivolously rejecting that knowledge with "I know better" is an act of supreme arrogance and ignorance.

I do hope this is the category Dan falls into, because the alternative of purposefully misleading people for personal profit (attention via controversy) rustles my jimmies.

If you're not familiar with the SOLID principles, I recommend you start by reading their associated Wikipedia pages. If you are like me, it will take you practice to truly understand the principles and their implications and to find out where they break down or should be superseded. Knowing about them and keeping an open mind is already a good start, which will likely lead you to many other interesting principles and practices.

17 Feb 2017 12:34pm GMT

15 Feb 2017


Xavier Mertens: Integrating OpenCanary & DShield

Being a volunteer for the SANS Internet Storm Center, I'm a big fan of the DShield service. I've been feeding DShield with logs for eight or nine years now. In 2011, I wrote a Perl script to send my OSSEC firewall logs to DShield. This script has been running and pushing my logs every 30 minutes for years. Later, DShield was extended to collect other logs: SSH credentials collected by honeypots (if you have an unused Raspberry Pi, there is a nice honeypot setup available). I have my own network of honeypots spread here and there on the wild Internet, running Cowrie. But recently, I reconfigured all of them to use another type of honeypot: OpenCanary.

Why OpenCanary? Cowrie is a very nice honeypot which can emulate a fake vulnerable host, log the commands executed by attackers and also collect dropped files. Here is an example of a Cowrie session replayed in Splunk:

Splunk Honeypot Session Replay

It's nice to capture a lot of data, but most of it (not to say "all of it") is generated by bots. Honestly, I never detected a human attacker trying to abuse my SSH honeypots. That's why I decided to switch to OpenCanary. It does not record logs as detailed as Cowrie's, but it is very modular and supports the following protocols by default:

Writing extra modules is very easy; examples are provided. By default, OpenCanary is able to write logs to the console, a file, Syslog, a JSON feed over TCP or an HPFeed. There is no DShield support by default? Never mind, let's add it.

As I said, OpenCanary is very modular and a new logging capability is just a new Python class in the logger.py module:

class DShieldHandler(logging.Handler):
    def __init__(self, dshield_userid, dshield_authkey, allowed_ports):
        logging.Handler.__init__(self)
        self.dshield_userid = str(dshield_userid)
        self.dshield_authkey = str(dshield_authkey)
        try:
            # Extract the list of allowed ports
            self.allowed_ports = [int(p) for p in str(allowed_ports).split(',')]
        except ValueError:
            # By default, report only port 22
            self.allowed_ports = [ 22 ]

    def emit(self, record):
        # Filter on self.allowed_ports, then submit the event to DShield
        ...

The DShield logger needs three arguments in your opencanary.conf file:

"logger": {
    "class" : "PyLogger",
    "kwargs" : {
        "formatters": {
            "plain": {
                "format": "%(message)s"
            }
        },
        "handlers": {
            "dshield": {
                "class": "opencanary.logger.DShieldHandler",
                "dshield_userid": "xxxxxx",
                "dshield_authkey": "xxxxxxxx",
                "allowed_ports": "22,23"
            }
        }
    }
}

The DShield UserID and authentication key are available in your DShield account. I added an 'allowed_ports' parameter that contains the list of interesting ports that will be reported to DShield (by default, only SSH connections are reported). Now, I'm reporting many more connection attempts:

Daily Connections Report
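The port-filtering idea can be sketched as follows. This is not the actual OpenCanary code, and the `dst_port` field name is an assumption about the shape of the JSON events:

```python
import json

# Sketch: only events whose destination port is in allowed_ports would be
# forwarded to DShield; everything else is dropped by the handler.
def should_report(record_json, allowed_ports):
    event = json.loads(record_json)
    return event.get("dst_port") in allowed_ports

print(should_report('{"dst_port": 22}', [22, 23]))    # True
print(should_report('{"dst_port": 8080}', [22, 23]))  # False
```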

Besides DShield, JSON logs are processed by my Splunk instance to generate interesting statistics:

OpenCanary Splunk Dashboard

A pull request has been submitted to the authors of OpenCanary to integrate my code. In the meantime, the code is available in my GitHub repository.

[The post Integrating OpenCanary & DShield has been first published on /dev/random]

15 Feb 2017 5:04pm GMT

Xavier Mertens: [SANS ISC Diary] How was your stay at the Hotel La Playa?

I published the following diary on isc.sans.org: "How was your stay at the Hotel La Playa?".

I made the following demo for a customer in the scope of a security awareness event. When speaking to non-technical people, it's always difficult to demonstrate how easily attackers can abuse their devices and data. If successfully popping up a "calc.exe" with an exploit makes a room full of security people go crazy, it's not the case for "users". It is mandatory to demonstrate something that will ring a bell in their mind… [Read more]

[The post [SANS ISC Diary] How was your stay at the Hotel La Playa? has been first published on /dev/random]

15 Feb 2017 12:12pm GMT

14 Feb 2017


Dries Buytaert: Distributions remain a growing opportunity for Drupal

Yesterday, after publishing a blog post about Nasdaq's Drupal 8 distribution for investor relations websites, I realized I don't talk enough about "Drupal distributions" on my blog. The ability for anyone to take Drupal and build their own distribution is not only a powerful model, but something that is relatively unique to Drupal. To the best of my knowledge, Drupal is still the only content management system that actively encourages its community to build and share distributions.

A Drupal distribution packages a set of contributed and custom modules together with Drupal core to optimize Drupal for a specific use case or industry. For example, Open Social is a free Drupal distribution for creating private social networks. Open Social was developed by GoalGorilla, a digital agency from the Netherlands. The United Nations is currently migrating many of their own social platforms to Open Social.

Another example is Lightning, a distribution developed and maintained by Acquia. While Open Social targets a specific use case, Lightning provides a framework or starting point for any Drupal 8 project that requires more advanced layout, media, workflow and preview capabilities.

For more than 10 years, I've believed that Drupal distributions are one of Drupal's biggest opportunities. As I wrote back in 2006: "Distributions allow us to create ready-made downloadable packages with their own focus and vision. This will enable Drupal to reach out to both new and different markets."

To capture this opportunity we needed to (1) make distributions less costly to build and maintain and (2) make distributions more commercially interesting.

Making distributions easier to build

Over the last 12 years we have evolved the underlying technology of Drupal distributions, making them even easier to build and maintain. We began working on distribution capabilities in 2004, when the CivicSpace Drupal 4.6 distribution was created to support Howard Dean's presidential campaign. Since then, every major Drupal release has advanced Drupal's distribution building capabilities.

The release of Drupal 5 marked a big milestone for distributions as we introduced a web-based installer and support for "installation profiles", which was the foundational technology used to create Drupal distributions. We continued to make improvements to installation profiles during the Drupal 6 release. It was these improvements that resulted in an explosion of great Drupal distributions such as OpenAtrium (an intranet distribution), OpenPublish (a distribution for online publishers), Ubercart (a commerce distribution) and Pressflow (a distribution with performance and scalability improvements).

Around the release of Drupal 7, we added distribution support to Drupal.org. This made it possible to build, host and collaborate on distributions directly on Drupal.org. Drupal 7 inspired another wave of great distributions: Commerce Kickstart (a commerce distribution), Panopoly (a generic site building distribution), Opigno LMS (a distribution for learning management services), and more! Today, Drupal.org lists over 1,000 distributions.

Most recently we've made another giant leap forward with Drupal 8. There are at least 3 important changes in Drupal 8 that make building and maintaining distributions much easier:

  1. Drupal 8 has vastly improved dependency management for modules, themes and libraries thanks to support for Composer.
  2. Drupal 8 ships with a new configuration management system that makes it much easier to share configurations.
  3. We moved a dozen of the most commonly used modules into Drupal 8 core (e.g. Views, WYSIWYG, etc), which means that maintaining a distribution requires less compatibility and testing work. It also enables an easier upgrade path.

Open Restaurant is a great example of a Drupal 8 distribution that has taken advantage of these new improvements. The Open Restaurant distribution has everything you need to build a restaurant website and uses Composer when installing the distribution.

More improvements are already in the works for future versions of Drupal. One particularly exciting development is the concept of "inheriting" distributions, which allows Drupal distributions to build upon each other. For example, Acquia Lightning could "inherit" the standard core profile - adding layout, media and workflow capabilities to Drupal core, and Open Social could inherit Lightning - adding social capabilities on top of Lightning. In this model, Open Social delegates the work of maintaining Layout, Media, and Workflow to the maintainers of Lightning. It's not too hard to see how this could radically simplify the maintenance of distributions.

The less effort it takes to build and maintain a distribution, the more distributions will emerge. The more distributions that emerge, the better Drupal can compete with a wide range of turnkey solutions in addition to new markets. Over the course of twelve years we have improved the underlying technology for building distributions, and we will continue to do so for years to come.

Making distributions commercially interesting

In 2010, after having built a couple of distributions at Acquia, I used to joke that distributions are the "most expensive lead generation tool for professional services work". This is because monetizing a distribution is hard. Fortunately, we have made progress on making distributions more commercially viable.

At Acquia, our Drupal Gardens product taught us a lot about how to monetize a single Drupal distribution through a SaaS model. We discontinued Drupal Gardens but turned what we learned from operating Drupal Gardens into Acquia Cloud Site Factory. Instead of hosting a single Drupal distribution (i.e. Drupal Gardens), we can now host any number of Drupal distributions on Acquia Cloud Site Factory.

This is why Nasdaq's offering is so interesting; it offers a powerful example of how organizations can leverage the distribution "as-a-service" model. Nasdaq has built a custom Drupal 8 distribution and offers it as-a-service to their customers. When Nasdaq makes money from their Drupal distribution they can continue to invest in both their distribution and Drupal for many years to come.

In other words, distributions have evolved from an expensive lead generation tool to something you can offer as a service at a large scale. Since 2006 we have known that hosted service models are more compelling but unfortunately at the time the technology wasn't there. Today, we have the tools that make it easier to deploy and manage large constellations of websites. This also includes providing a 24x7 help desk, SLA-based support, hosting, upgrades, theming services and go-to-market strategies. All of these improvements are making distributions more commercially viable.

14 Feb 2017 7:40pm GMT

Julien Pivotto: Augeas resource for mgmt

Last week, I joined the mgmt hackathon, just after Config Management Camp Ghent. It helped me understand how mgmt actually works, and that allowed me to introduce two improvements into the codebase: Prometheus support, and an Augeas resource.

I will blog later about the Prometheus support; today I will focus on the Augeas resource.

Defining a resource

Currently, mgmt does not have a DSL; it only uses plain YAML.

Here is how you define an Augeas resource:

graph: mygraph
resources:
  augeas:
  - name: sshd_config
    lens: Sshd.lns
    file: "/etc/ssh/sshd_config"
    sets:
      - path: X11Forwarding
        value: no

As you can see, the augeas resource takes several parameters:

Setting file will create a Watcher, which means that each time you change that file, mgmt will check if it is still aligned with what you want.


The code can be found there: https://github.com/purpleidea/mgmt/pull/128/files

We are using the Go bindings for Augeas: https://github.com/dominikh/go-augeas/ Unfortunately, those bindings only support a recent version of Augeas, so we had to vendor them to make the build pass on Travis.

Future plans

Future plans for this resource include adding more parameters: probably one that behaves like Puppet's "onlyif" parameter, and an "rm" parameter.

14 Feb 2017 7:34pm GMT

Wim Leers: A career thanks to open source

Hasselt University Professor Frank Neven asked me to come and talk a bit about my experience in open source, and how it helped me. It helped me during my studies, in my career and even in life :)


Slides with transcript

14 Feb 2017 4:15pm GMT

13 Feb 2017


Dries Buytaert: How Nasdaq offers a Drupal distribution as-a-service

Nasdaq CIO and vice president Brad Peterson at the Acquia Engage conference showing the Drupal logo on Nasdaq's MarketSite billboard at Times Square NYC

Last October, I shared the news that Nasdaq Corporate Solutions has selected Acquia and Drupal 8 for its next generation Investor Relations and Newsroom Website Platforms. 3,000 of the largest companies in the world, such as Apple, Amazon, Costco, ExxonMobil and Tesla are currently eligible to use Drupal 8 for their investor relations websites.

How does Nasdaq's investor relations website platform work?

First, Nasdaq developed a "Drupal 8 distribution" that is optimized for creating investor relations sites. They started with Drupal 8 and extended it with both contributed and custom modules, documentation, and a default Drupal configuration. The result is a version of Drupal that provides Nasdaq's clients with an investor relations website out-of-the-box.

Next, Nasdaq decided to offer this distribution "as-a-service" to all of their publicly listed clients through Acquia Cloud Site Factory. By offering it "as-a-service", Nasdaq's customers don't have to worry about installing, hosting, upgrading or maintaining their investor relations site. Nasdaq's new IR website platform also ensures top performance and scalability, and meets strict security and compliance standards. Having all of these features available out-of-the-box enables Nasdaq's clients to focus on providing their stakeholders with critical news and information.

Offering Drupal as a web service is not a new idea. In fact, I have been talking about hosted service models for distributions since 2007. It's a powerful model, and Nasdaq's Drupal 8 distribution as-a-service is creating a win-win-win-win. It's good for Nasdaq's clients, good for Nasdaq, good for Drupal, and in this case, good for Acquia.

It's good for Nasdaq's customers because it provides them with a platform that incorporates the best of both worlds; it gives them the maintainability, reliability, security and scalability that comes with a cloud offering, while still providing the innovation and freedom that comes from using Open Source.

It is great for Nasdaq because it establishes a business model that leverages Open Source. It's good for Drupal because it encourages Nasdaq to invest back into Drupal and their Drupal distribution. And it's obviously good for Acquia as well, because we get to sell our Acquia Site Factory Platform.

If you don't believe me, take Nasdaq's word for it. In the video below, which features Stacie Swanstrom, executive vice president and head of Nasdaq Corporate Solutions, you can see how Nasdaq pitches the value of this offering to their customers. Swanstrom explains that with Drupal 8, Nasdaq's IR Website Platform brings "clients the advantages of open source technology, including the ability to accelerate product enhancements compared to proprietary platforms".

13 Feb 2017 8:29pm GMT

Luc Verhaegen: The beginning of the end of the RadeonHD driver.

Soon it will be a decade since we started the RadeonHD driver, where we pushed ATI to a point of no return, got a proper C-coded graphics driver out, and got freely accessible documentation. We all know just what happened to this in the end, and i will make a rather complete write-up spanning multiple blog entries over the following months. But while i was digging through backed-up home directories for information, i came across this...

It is a copy of the content of an email i sent to an executive manager at SuSE/Novell, a textfile called bridgmans_games.txt. This was written in July 2008, after the RadeonHD project gained what was called "executive oversight". After the executive had seen several of John Bridgman's games first-hand, he asked me to provide him with a timeline of some of the games he played with us. The below explanation covers only things that i knew this executive was not aware of, and it covers a year of RadeonHD development^Wstruggle, from July 2007 until April 2008. It took me quite a while to compile this, and it was a pretty tough task mentally, as it finally showed, unequivocally, just how we had been played all along. After this executive read the email, he and my department lead took me to lunch and told me to let this die out slowly, and to not make a fuss. Apparently, i should've counted myself lucky to see this sort of game being played this early on in my career.

This is the raw copy, and only the names of some people, who are not in the public eye have been redacted out (you know who you are), some other names were clarified. All changes are marked with [].

These are:
SuSE Executive manager: [EXEC]
SuSE technical project manager: [TPM]
AMD manager (from a different department than ATI): [AMDMAN]
AMD liaison: [SANTA] (as he was the one who handed me and Egbert Eich a 1 cubic meter box full of graphics cards ;))

[EXEC], i made a rather extensive write-up about the goings-on throughout 
the whole project. I hope this will not intrude too much on your time, 
please have a quick read through it, mark what you think is significant and 
where you need actual facts (like emails that were sent around...) Of 
course, this is all very subjective, but for most of this emails can be 
provided to back things up (some things were said on the weekly technical 
call only).

First some things that happened beforehand...

Before we came around with the radeonhd driver, there were two projects 

One was Dave Airlie who claimed he had a working driver for ATI Radeon R500
hardware. This was something he wrote during 2006, from information under 
NDA, and this nda didn't allow publication. He spent a lot of time bashing 
ATI for it, but the code was never seen, even when the NDA was partially 
lifted later on.

The other was a project started in March 2007 by Nokia's Daniel Stone and
Ubuntu's [Canonical's] Matthew Garrett: the avivo driver. This was a
driver separate from the -ati or -radeon drivers, under the GPL, and written
from scratch by tracing the BIOS and tracking register changes [(only the
latter was true)]. Matthew and Daniel were active contributors only for the
first few weeks and quickly lost interest. The bulk of the development was
taken over by Jerome Glisse after that.

Now... Here is where we encounter John [Bridgman]...

Halfway through July, when we first got into contact with Bridgman, he suddenly
started bringing up AtomBIOS. We then also got the first low-level
hardware specs, but when we asked for hardware details, for how the different
blocks fitted together, we never got an answer, as that information was said
not to be available. We also got our hands on some of the AtomBIOS
interpreter code, and a rudimentary piece of documentation explaining how to
handle the AtomBIOS bytecode. So Matthias [Hopf] created an AtomBIOS
disassembler to help us write up all the code for the different bits. And
while we got some further register information when we asked for it,
Bridgman kept pushing AtomBIOS relentlessly.

This whining went on and on until, late August, we decided to write up a
report of the problems we were seeing with AtomBIOS, both from an interface
and from an actual implementation point of view. We never got a single
reply to this report, and we billed AMD 10 or so man-days. [With]
Bridgman apparently berated, he never brought the issue up again for the
next few months... But the story of course didn't end there.

At the same time, we wanted to implement just one function from AtomBIOS:
ASICInit. This function brings the hardware into a state where
modesetting can happen. It replaces a full int10 POST, the
standard, antique way of bringing up VGA on IBM hardware, so ASICInit
seemed like a big step forward. John organised a call with the BIOS/fglrx
people to explain to us what ASICInit offered and how we could use it. This
is the only time that Bridgman put us in touch with any ATI people directly.
The proceedings of the call were kind of amusing... After 10 minutes or so,
one of the ATI people said "for fglrx, we don't bother with ASICInit, we
just call the int10 POST". John then quickly stated that they might have to
end the call because apparently other people wanted to use the
conference room. The call went on for at least half an hour longer, and John
stated this again from time to time. Whether there was truth to it or
not, we don't know; it was a rather amusing coincidence though, and we of
course never gained anything from the call.

Late August and early September, we were working relentlessly towards
getting our driver into a shape that could be exposed to the public.
[AMDMAN] had warned us that there was a big marketing push planned for the
kernel summit in September, but we were never told any of this through our
partner directly. So we were working like maniacs to get the driver into a
state where we could present it at the X Developers Summit in Cambridge.

Early September, we were stuck on several issues because we didn't get any
relevant information from Bridgman. In a mail sent on September 3, [TPM]
made the following statement: "I'm not willing to fall down to Mr. B.'s mode
of working nor let us hold up by him. They might (or not) be trying to stop
the train - they will fail." It was already very clear then that things
were not being played straight.

Bridgman at this point was begging to see the code, so we created a copy of
a (for us) working version and handed it over on September the 3rd as
well. We got an extensive review of our driver on a very limited subset of
the hardware, and we were mainly bashed for producing broken modes (monitor
synced off). This had of course nothing to do with us using direct
programming; it was down to hardware limits [PLL] that we eventually had
to go and find ourselves. Bridgman was never able to provide us with
suitable information on what the issue could be or what the right limits
were, but the fact that the issue wasn't to be found in AtomBIOS versus
direct programming didn't make him very happy.

So on the 5th of September, ATI went ahead with its marketing campaign, the
one we hadn't been told about at all. They never really mentioned SUSE's
role in it, and Dave Airlie even made a blog entry stating how he and Alex
were the main actors on the X side. The SUSE role in all of this was
severely downplayed everywhere, to the point where it became borderline
insulting. It was also one of the first times we really felt that
Bridgman might be working more closely with Dave and Alex than with us.

We then had to rush off to the X Developers Summit in Cambridge. While we
met ATI's Matthew Tippett and Bridgman beforehand, it was clear that our role
in this whole thing was being actively downplayed, and the attitude towards
Egbert and me was predominantly hostile. The talk Bridgman and Tippett gave
at XDS once again fully downplayed our effort with the driver. During our
introduction of the radeonHD driver, John Bridgman handed Dave Airlie a CD
with the register specs that went around the world, further reducing our
relevance there. John stated, quite noisily, in the middle of our
presentation, that he wanted Dave to be the first to receive this
documentation. Plus, it became clear that Bridgman was very close to Alex.

The fact that we didn't attend all the supper/dinner events ourselves worked
against us as well, as we were usually still hacking away feverishly at our
driver. We also had to leave early on the night of the big dinner, as we had
to get back to Nürnberg to catch the last two days of the Labs conference
the day after. This apparently was a mistake... We later found a picture
(http://flickr.com/photos/cysglyd/1371922101/) with the caption "avivo
de-magicifying party". It shows Bridgman, Tippett, Deucher and all the
Red Hat and Intel people in the same room, playing with the (then
competing) avivo driver. This picture was taken while we were on the plane
back to Nürnberg. Another signal that, maybe, Mr Bridgman wasn't very
committed to the radeonhd driver. Remarkably though, the main developer
behind the avivo driver, Jerome Glisse, is not in the picture. He turned out
to be quite fond of the radeonhd driver and immediately moved on to
something else, as his goal was now being achieved by radeonhd.

On Monday the 17th, right before the release, ATI was throwing up hurdles...
We suddenly had to put AMD copyright statements on all our code, a
requirement never made for any AMD64 work, and one which, as far as I
understood things, is also invalid under German copyright law. This was a
tough nut for me to swallow, as a lot of the code in radeonhd modesetting
had started life 5 years earlier in my unichrome driver. Then there was the
fact that the AtomBIOS header didn't come with an acceptable license and
copyright statement, and a similar situation for the AtomBIOS interpreter.
The latter two gave Egbert a few extra hours of work that night.

I contacted the phoronix.com owner (Michael Larabel) to talk to him about an 
imminent driver release, and this is what he stated: "I am well aware of the 
radeonHD driver release today through AMD and already have some articles in 
the work for posting later today." The main open source graphics site had 
already been given a story by some AMD people, and we were apparently not 
supposed to be involved here. We managed to turn this article around there 
and then, so that it painted a more correct picture of the situation. And 
since then, we have been working very closely with Larabel to make sure that 
our message gets out correctly.

The next few months seemed somewhat uneventful again. We worked hard on
making our users happy, on bringing our driver up from an initial
implementation to a full-featured modesetting driver. We painstakingly had
to drag more information out of John, but mostly found things out for
ourselves. At this time the fact that Dave Airlie was still under NDA was
mentioned quite often in calls, and John explained that this NDA situation
was about to change. Bridgman also brought up that he had started the
procedure to hire somebody to help him with documentation work so that the
flow of information could be streamlined. We kind of assumed, from what we
had seen at XDS, that this would be Alex Deucher, but we were never told
anything.

Matthias got in touch with AMD's GPGPU people through [AMDMAN], and we were
eventually (see later) able to get the TCore code (which contains a lot of
very valuable information for us) out of this. Towards the end of October, I
found an article online about AMD's upcoming Rv670, and I pasted the link in
our radeonhd IRC channel. John immediately fired off an email explaining
about the chipset, apologising for not telling us earlier. He repeated some
of his earlier promises about getting hardware sent out to us quicker. When
the hardware release happened, John was the person making statements to the
open source websites (messing up some things in the process), but [SANTA]
was the person who eventually sent us a board.

Suddenly, around the 20th of November, things started moving in the radeon
driver. Apparently Alex Deucher and Dave Airlie had been working together
since November 3rd to add support for r500 and r600 hardware to the radeon
driver. I eventually dug out a statement on fedora-devel where the Red Hat
guys, on the last day of October, were actively refusing to accept our
driver. Dave Airlie stated the following: "Red Hat and Fedora are
contemplating the future wrt to AMD r5xx cards, all options are going to be
looked at, there may even be options that you don't know about yet.."

They used chunks of code from the avivo driver, chunks of code from the
radeonhd driver (including the bits where we had spent ages finding out what
to do), and they implemented some parts of modesetting using AtomBIOS, just
like Bridgman had always wanted. On the same day (the 20th) Alex posted a
blog entry about being hired by ATI as an open source developer. Apparently,
Bridgman had mentioned on the weekly phonecall beforehand that Dave had
found something when doing AtomBIOS. So Bridgman had been in close
communication with Dave for quite a while already.

Our relative silence and safe working ground was shattered. Suddenly it was
completely clear that Bridgman was playing a double game. As diversion
mechanisms, John now suddenly provided us with the TCore code, suddenly
gave us access to the AMD NDA website, and also told us on the phone that
Alex was not doing any radeon work in his worktime and only did this in his
spare time. Surely we could not be against this, as surely we too would like
to see a stable and working radeon driver for r100-r400. He also suddenly
provided those bits of information Dave had been using in the radeon driver,
some of which we had asked for before without ever getting an answer.

We quickly ramped up development ourselves and got a 1.0.0 driver
release out about a week later. John in the meantime was expanding his
marketing horizon: he did an interview with the Beyond3D website (with of
course no mention of us), and he started frequenting some online forums
(phoronix), replying to user posts there and providing his own views on
"certain" situations (he has since massively increased his time on that
front).

One interesting result of the competing project was that suddenly we were
getting answers to questions we had been asking for a long time. An example
of such an issue is the card-internal view of where the card's own memory
resides. We had spent weeks asking around, not getting anywhere; then
suddenly the registers were used in the competing driver, and within hours
we got the relevant information in emails from Mr Bridgman (November 17 and
20, "Odds and Ends"). Bridgman later explained that Dave Airlie knew these
registers from his "previous r500 work", and that Dave had asked Bridgman
for clearance to release them, which he got, after which we were informed
about these registers as well. The relevant commit message in the radeon
driver predates the email we received with the related information by many
hours.

[AMDMAN] had put us in touch with the GPGPU people from AMD, and Matthias
and one of the GPGPU people spent a lot of time emailing back and forth. But
suddenly, around the timeframe when everything else happened (competing
driver, Alex getting hired), John conjured up the code that the GPGPU people
had had all along: TCore. This signalled to us that John had caught on to
our plan to bypass him, and that he now took full control of the situation.
It took about a month before John made big online promises about how this
code could provide the basis for a DRM driver, and that it would become
available to all soon. We managed to confirm, in a direct call with both the
GPGPU people and Mr Bridgman, that the GPGPU people had no idea that
John intended to make this code fully public straight away. The GPGPU people
had assumed that John's questions were aimed entirely at handing us the code
without us getting tainted, not that John intended to hand it out publicly
immediately. To this day, TCore has not surfaced publicly, but we know
that several people inside the Xorg community have this code, and I
seriously doubt that all of those people are under NDA with AMD.

Bridgman also ramped up the marketing campaign. He did an interview with
Beyond3D.com on the 30th in which he broadly played up the fact that a
community member had just been hired to work with the community. John of
course completely overlooked the SUSE involvement in everything. An attempt
to rectify this with an interview of our own never materialised due to
the heavy time constraints we were under.

On the 4th of December, a user came to us asking what he should do with a
certain RS600 chipset he had. We had heard from John that this chip was not
relevant to us, as it was not supposed to be handled by our driver (the way
we remember the situation), but when we reported this to John, he claimed
that he had thought this hardware never shipped and that it was therefore
not relevant. The hardware of course did ship, to three different vendors,
and Egbert had to urgently add support for it in the I2C and memory
controller subsystems when users started complaining.

One very notable story of this time is how the bring-up of new hardware was
handled. I mentioned the Rv670 before; we still hadn't received this
hardware at this point, as [SANTA] was still trying to get his hands on it.
What we did receive from [SANTA], on the 11th of December, was the next
generation of hardware, which needed a lot of bring-up work: rv620/rv635.
This made us hopeful that, for once, we could have at least basic support in
our driver on the same day the hardware got announced. But a month and a
half later, when this hardware was launched, John still hadn't provided any
information. I had quite a revealing email exchange with [SANTA] about this
too, in which he wondered why John was stalling. The first bit of
information that was useful to us was given to us on February the 9th, and
we had to drag a lot of the register-level information out of John
ourselves. Given the massive changes to the hardware, and the induced
complications, it of course took us quite some time to get this work
finished. And this fact was greedily abused by John during the bring-up time
and afterwards. But what he always overlooks is that it took him close to
two months to give us documentation to use even AtomBIOS successfully.

The week of December the 11th is also when Alex was fully assimilated into
ATI. He didn't do much of anything in his first week at AMD, but in
his second week he started working on the radeon driver again. In the weekly
call John then reassured us that Alex was doing this work purely in his
spare time, and that his task was helping us get the answers we needed. In
all fairness, Alex did provide us with register descriptions more directly
than John, but there was no way he was doing all this radeon driver work in
his spare time. It would take John another month to admit it, at which point
he treated Alex working on the radeon driver as an acquired right.

Bridgman now really started to spend a lot of time on phoronix.com. He
posted what I personally see as rather biased comparisons of the different
drivers out there, and he of course beat the drums heavily for AtomBIOS.
This is also the time when he promised the TCore drop.

The games then continued as usual for the next month or so; most of them are
already covered in previous paragraphs.

One interesting sidenote is that both Alex and John were heavy on rebranding
AtomBIOS. They actively used the word "scripts" for the call tables inside
AtomBIOS, and were actively labelling C modesetting code as "legacy". Both
labels quite squarely misrepresent the nature of AtomBIOS versus C code.

Halfway through February, discussions started to happen again about the
RS690 hardware. Users were having problems. After a while, it became clear
that the RS690 had had, all along, a display block called DDIA, capable of
driving a DVI digital signal. RS690 was considered to have been long and
fully supported, and suddenly a major display block sprang to life on us.
Bridgman's excuse was the same as with the RS600: he thought this would
never be used in the first place and that we therefore weren't really
supposed to know about it.

On the 23rd and 24th of February, we did FOSDEM. As usual, we had an X.org
Developers Room there, for which I did most of the running around. We worked
with Michael Larabel from phoronix to get the talks taped and put online.
Bridgman gave us two surprises at FOSDEM... For one, he released 3D
documentation to the public; we had learned about this just a few days
before. We even saw another phoronix statement about TCore being released
there and then, but this never surfaced.

We also learned, on Friday evening, that the radeon driver was about to gain
Xvideo support for R5xx and up, through textured video code that Alex had
been working on. Not only did they succeed in stealing the sunshine away
from the actual hard work (organising FOSDEM, finding out and implementing
bits that were never supposed to be used in the first place, etc), they gave
the users something quick and bling and showy, something we still think
today was provided by others inside AMD or generated by a shader compiler.
And all this went to the competing project.

At FOSDEM itself Bridgman of course was full of stories. One of the most
remarkable ones, which I overheard while on my knees clearing up the
developers room, was Bridgman telling the Phoronix guy and some community
members that people at ATI actually had bets running on how long it would
take Dave Airlie to implement R5xx 3D support in the radeon driver. He said
this loudly, openly and with a lot of panache. But when this support
eventually took more than a month, I took it up with him on the phonecall,
leading of course to his usual nervous laughing and the making up of a bit
of a cover story again.

Right before FOSDEM, Egbert had a one-on-one phonecall with Bridgman, hoping
to clear up some things. This is where we first learned that Bridgman had
sold AtomBIOS big time to Red Hat, but I think you now know this even better
than I do, as I am pretty hazy on the details there. Egbert, if I remember
correctly, had a phonecall with Red Hat's Kevin Martin on the very same day.
But since none of this was put in mail and I was completely swamped with
organising FOSDEM, I have no real recollection of what came out of that.

Alex then continued with his work on radeon, while being our only real
point of contact for any information. There were several instances where our
requests for information resulted in immediate commits to the radeon driver,
where issues were fixed or bits of functionality were added. Alex also ended
up adding RENDER acceleration to the radeon driver, and when Bridgman was in
Nuernberg, we clearly saw how Bridgman had sold this to his management: Alex
was working on bits that were meant to be ported directly to RadeonHD.

In March, we spent our time getting rv620/635 working, dragging information,
almost register by register, out of our friends. We learned about the
release of the RS780 at CeBIT, meaning that we had missed yet another
opportunity for same-day support. John had clearly stated, at the time of
the rv620/635 launch, that "integrated parts are the only place where we are
'officially' trying to have support in place at or very shortly after
launch".

And with that, we pretty much hit the time frame when you, [EXEC], got
involved.

One interesting newer development is Kgrids. Kgrids is code written by some
ATI people that contains useful information for driver development,
especially useful for the DRM. John Bridgman had already told the phoronix
author 2-3 weeks ago that he had this, and that he was planning to release
it immediately. The phoronix author immediately informed me, but the release
of this code has so far never happened. Last Thursday, Bridgman brought up
Kgrids in the weekly technical phonecall... When asked how long he had known
about it, he admitted that they had known for several weeks already, which
makes me wonder why we weren't informed earlier, and why we suddenly were
informed then...

As said, the above, combined with what was already known to this executive, and the games he saw being played first hand from April 2008 through June 2008, marked the beginning of the end of the RadeonHD project.

I too disconnected myself more and more from development on this, with Egbert taking the brunt of the work. I instead spent more time on the next SuSE enterprise release, having fun doing something productive and with a future. Then, after FOSDEM 2009, when 24 people at the SuSE Nuernberg office were laid off, I was almost relieved to be amongst them.

Time flies.

13 Feb 2017 10:08am GMT

12 Feb 2017

feedPlanet Grep

Mattias Geniar: PHP 7.2 to get modern cryptography into its standard library

The post PHP 7.2 to get modern cryptography into its standard library appeared first on ma.ttias.be.

This actually makes PHP the first language, ahead of Erlang and Go, to get a secure crypto library in its core.

Of course, having the ability doesn't necessarily mean it gets used properly by developers, but this is a major step forward.

The vote for the Libsodium RFC has been closed. The final tally is 37 yes, 0 no.

I'll begin working on the implementation with the desired API (sodium_* instead of \Sodium\*).

Thank you to everyone who participated in these discussions over the past year or so and, of course, everyone who voted for better cryptography in PHP 7.2.

Scott Arciszewski


Source: php.internals: [RFC][Vote] Libsodium vote closes; accepted (37-0)

As a reminder, the Libsodium RFC:

Title: PHP RFC: Make Libsodium a Core Extension

Libmcrypt hasn't been touched in eight years (last release was in 2007), leaving openssl as the only viable option for PHP 5.x and 7.0 users.

Meanwhile, libsodium bindings have been available in PECL for a while now, and have reached stability.

Libsodium is a modern cryptography library that offers authenticated encryption, high-speed elliptic curve cryptography, and much more. Unlike other cryptography standards (which are a potluck of cryptography primitives; i.e. WebCrypto), libsodium is comprised of carefully selected algorithms implemented by security experts to avoid side-channel vulnerabilities.
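To make "authenticated encryption" concrete: libsodium's secretbox construction combines a cipher with a MAC, so tampering is detected before anything is decrypted. Below is a deliberately toy Python sketch of that encrypt-then-MAC shape, using only the standard library. It only illustrates the interface; the keystream construction is mine and is in no way a substitute for libsodium itself:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    """Encrypt, then append a MAC over nonce+ciphertext."""
    nonce = secrets.token_bytes(16)
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_box(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the MAC first; only decrypt if the message is authentic."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch: message was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
boxed = seal(ek, mk, b"attack at dawn")
assert open_box(ek, mk, boxed) == b"attack at dawn"
```

In PHP 7.2's sodium extension, a single hardened call (sodium_crypto_secretbox) does all of this for you, with real primitives instead of this toy keystream.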

Source: PHP RFC: Make Libsodium a Core Extension

The post PHP 7.2 to get modern cryptography into its standard library appeared first on ma.ttias.be.

12 Feb 2017 4:39pm GMT

Xavier Mertens: Think Twice before Posting Data on Pastebin!

Pastebin.com is one of my favourite playgrounds. I'm monitoring the content of all pasties posted on the website. My goal is to find juicy data like configurations, database dumps and credential leaks. Sometimes you can also find malicious binary files.

For sure, I knew that I'm not the only one with an interest in the pastebin.com content. Plenty of researchers and organizations like CERTs and SOCs are doing the same, but I was very surprised by the number of hits I got on my latest pastie:

Pastebin Hits

For the purpose of my last ISC diary, I posted some data on pastebin.com and did not communicate the link by any means. Before posting the diary, I had a quick look at my pastie and it already had 105 unique views! It had been posted only a few minutes before.

Conclusion: think twice before posting data to pastebin. Even if you quickly delete your pastie, chances are that it will already have been scraped by many robots (mine included! ;-))
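For what it's worth, the keyword-matching side of such a scraping robot is trivial. Here is a minimal Python sketch; the patterns are my own hypothetical examples, and actually fetching pasties in bulk requires Pastebin's scraping API (a Pro feature), which is left out:

```python
import re

# Hypothetical examples of "juicy" content patterns:
# credentials, private keys, database connection strings...
JUICY_PATTERNS = [
    re.compile(r"(?i)\bpassword\s*[:=]"),
    re.compile(r"BEGIN (?:RSA|DSA|EC|OPENSSH) PRIVATE KEY"),
    re.compile(r"(?i)\bmysql://\S+:\S+@"),
]

def find_juicy(pastie: str) -> list:
    """Return the patterns that match a pastie's content."""
    return [p.pattern for p in JUICY_PATTERNS if p.search(pastie)]

print(find_juicy("db password: hunter2"))
```

In a real robot this function would run over every pastie pulled from the scraping endpoint, with the hits forwarded to whatever alerting you use.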

[The post Think Twice before Posting Data on Pastebin! has been first published on /dev/random]

12 Feb 2017 10:09am GMT

Xavier Mertens: [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript

I published the following diary on isc.sans.org: "Analysis of a Suspicious Piece of JavaScript".

What to do on a cloudy lazy Sunday? You go hunting and review some alerts generated by your robots. Pastebin remains one of my favourite playground and you always find interesting stuff there. In a recent diary, I reported many malicious PE files stored in Base64 but, today, I found a suspicious piece of JavaScript code… [Read more]

[The post [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript has been first published on /dev/random]

12 Feb 2017 9:41am GMT

11 Feb 2017

feedPlanet Grep

Frank Goossens: No REST for the wicked

After the PR beating WordPress took with the massive defacements of non-upgraded WordPress installations, it is time to revisit the core team's point of view that the REST API should be active for all and that no option should be provided to disable it (as per the "decisions, not options" philosophy). I for one installed the "Disable REST API" plugin.

(Embedded YouTube video)

Possibly related twitterless twaddle:

11 Feb 2017 2:47pm GMT

Claudio Ramirez: Vim as a Perl 6 editor

EDITED on 20170211: syntastic-perl6 configuration changes

If you're a Vim user you probably use it for almost everything. Out of the box, Perl 6 support is rather limited. That's why many people use editors like Atom for Perl 6 code.

What if with a few plugins you could configure vim to be a great Perl 6 editor? I made the following notes while configuring Vim on my main machine running Ubuntu 16.04. The instructions should be trivially easy to port to other distributions or operating systems. Skip the applicable steps if you already have a working Vim setup (i.e. do not overwrite your .vimrc file).

I maintain my Vim plugins using pathogen, as it allows me to directly use git clones from GitHub. This is especially important for plugins under rapid development.
(If your .vim directory is a git repository, replace 'git clone' in the commands by 'git submodule add'.)

Basic vim Setup

Install vim with scripting support and pathogen. Create the directory where the plugins will live:
$ sudo apt-get install vim-nox vim-pathogen && mkdir -p ~/.vim/bundle

$ vim-addons install pathogen

Create a minimal .vimrc in your $HOME, with at least this configuration (enabling pathogen). Lines commencing with " are comments:

"Enable extra features (e.g. when run systemwide). Must be before pathogen
set nocompatible

"Enable pathogen
execute pathogen#infect()
"Enable syntax highlighting
syntax on
"Enable indenting
filetype plugin indent on

Additionally I use these settings (the complete .vimrc is linked at the end):

"Set line wrapping
set wrap
set linebreak
set nolist
set formatoptions+=l

"Enable 256 colours
set t_Co=256

"Set auto indenting
set autoindent

"Smart tabbing
set expandtab
set smarttab
set sw=4 " number of spaces for indenting
set ts=4 " show \t as 4 spaces and treat 4 spaces as \t when deleting

"Set title of xterm
set title

" Highlight search terms
set hlsearch

"Strip trailing whitespace for certain type of files
autocmd BufWritePre *.{erb,md,pl,pl6,pm,pm6,pp,rb,t,xml,yaml,go} :%s/\s\+$//e

"Override tab espace for specific languages
autocmd Filetype ruby,puppet setlocal ts=2 sw=2

"Jump to the last position when reopening a file
au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") |
\ exe "normal! g'\"" | endif

"Add a coloured right margin for recent vim releases
if v:version >= 703
set colorcolumn=80
endif

"Ubuntu suggestions
set showcmd " Show (partial) command in status line.
set showmatch " Show matching brackets.
set ignorecase " Do case insensitive matching
set smartcase " Do smart case matching
set incsearch " Incremental search
set autowrite " Automatically save before commands like :next and :make
set hidden " Hide buffers when they are abandoned
set mouse=v " Enable mouse usage (all modes)

Install plugins

vim-perl for syntax highlighting:

$ git clone https://github.com/vim-perl/vim-perl.git ~/.vim/bundle/vim-perl


vim-airline and themes for a status bar:
$ git clone https://github.com/vim-airline/vim-airline.git ~/.vim/bundle/vim-airline
$ git clone https://github.com/vim-airline/vim-airline-themes.git ~/.vim/bundle/vim-airline-themes
In vim type :Helptags

In Ubuntu the 'fonts-powerline' package (sudo apt-get install fonts-powerline) installs fonts that enable nice glyphs in the statusbar (e.g. a line effect instead of '>'; see the screenshot at https://github.com/vim-airline/vim-airline/wiki/Screenshots).

Add this to .vimrc for airline (the complete .vimrc is attached):
"airline statusbar
set laststatus=2
set ttimeoutlen=50
let g:airline#extensions#tabline#enabled = 1
let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
let g:airline_powerline_fonts = 1


Tabular for aligning text (e.g. blocks):
$ git clone https://github.com/godlygeek/tabular.git ~/.vim/bundle/tabular
In vim type :Helptags
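As a quick illustration of what Tabular does (the sample lines are mine; :Tabularize is the plugin's standard command), visually select a few assignments and align them on '=':

```
" Before - select these lines and run :Tabularize /=
let g:foo = 1
let g:foobar = 2

" After:
let g:foo    = 1
let g:foobar = 2
```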

vim-fugitive for Git integration:
$ git clone https://github.com/tpope/vim-fugitive.git ~/.vim/bundle/vim-fugitive
In vim type :Helptags

vim-markdown for markdown syntax support (e.g. the README.md of your module):
$ git clone https://github.com/plasticboy/vim-markdown.git ~/.vim/bundle/vim-markdown
In vim type :Helptags

Add this to .vimrc for markdown if you don't want folding (the complete .vimrc is attached):
"markdown support
let g:vim_markdown_folding_disabled=1

syntastic-perl6 for Perl 6 syntax checking support. I wrote this plugin to add Perl 6 support to syntastic, the leading vim syntax checking plugin. See the 'Call for Testers/Announcement' here. Instructions can be found in the repo, but I'll paste them here for your convenience:

You need to install syntastic to use this plugin.
$ git clone https://github.com/scrooloose/syntastic.git ~/.vim/bundle/syntastic
$ git clone https://github.com/nxadm/syntastic-perl6.git ~/.vim/bundle/syntastic-perl6

Type ":Helptags" in Vim to generate Help Tags.

Syntastic and syntastic-perl6 vimrc configuration (comments start with "):

"airline statusbar integration if installed. De-comment if installed
"set laststatus=2
"set ttimeoutlen=50
"let g:airline#extensions#tabline#enabled = 1
"let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
"let g:airline_powerline_fonts = 1

"syntastic syntax checking
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*

"Perl 6 support
"Optional comma separated list of quoted paths to be included to -I
"let g:syntastic_perl6lib = [ '/home/user/Code/some_project/lib', 'lib' ]
"Optional perl6 binary (defaults to perl6 in your PATH)
"let g:syntastic_perl6_exec = '/opt/rakudo/bin/perl6'
"Register the checker provided by this plugin
let g:syntastic_perl6_checkers = ['perl6']
"Enable the perl6 checker (disabled out of the box because of security reasons:
"'perl6 -c' executes the BEGIN and CHECK block of code it parses. This should
"be fine for your own code. See: https://docs.perl6.org/programs/00-running
let g:syntastic_enable_perl6_checker = 1

Screenshot of syntastic-perl6

YouCompleteMe for fuzzy-search autocompletion:

$ git clone https://github.com/Valloric/YouCompleteMe.git ~/.vim/bundle/YouCompleteMe

Read the YouCompleteMe documentation for your OS's dependencies and for the switches that add semantic (non-fuzzy) support for additional languages like C/C++, Go, and so on. If you just want fuzzy completion for Perl 6, the defaults are fine. (If someone is looking for a nice project, a native Perl 6 completer for YouCompleteMe, instead of the fuzzy one, would be a great addition.) You can install YouCompleteMe like this:
$ cd ~/.vim/bundle/YouCompleteMe && ./install.py


That's it. I hope my notes are useful to someone. The complete .vimrc can be found here.

Filed under: Uncategorized Tagged: Perl, perl6, vim

11 Feb 2017 12:14am GMT

10 Feb 2017


Lionel Dricot: Promis, je reste ! ("I promise, I'll stay!")

- Please, don't leave me! Hold on! Stay!

Crushed by grief, without realizing it I am squeezing the frail fingers resting on the hospital bed. Long, heavy tears stream down my cheek, soaking the sheet.

She turns toward me a tired gaze, worn out by pain.

- I love you, I say in a pleading voice.

With a blink of her eyes, she answers me.

- I love you, murmurs a breath, the hint of a smile.

Suddenly, I feel at peace. My mind has cleared. In a clear, steady voice, I begin to speak.

- With you, I have always been two. My life was built on us. I never imagined that one of us could leave before the other. Love, that abstract notion of poets, has guided my every step, my every breath. It is toward you that I have always walked; it is for you that I have always breathed.

Like a dam suddenly bursting, I break into sobs. My voice falters.

- What will I do without you? How can I go on living? Don't leave me! Stay!
- I… I promise you I'll stay, stammers a faint voice. I'll stay as long as you want me to. I will leave only when you let me go. I promise, I'll stay…
- My love…

For hours, I kiss that now-wasted hand; I cry, I laugh.

- I love you! I love you, my love!

Nothing matters more than the love that binds us.

A firm hand suddenly lands on my shoulder.

- Sir! Sir!

Dazed, I turn around.

- Doctor? What…

- I'm sorry. There is nothing more to be done. We must prepare the body.

- What? But…

Lost, I turn toward my beloved. Her eyes are closed; a faint smile lights up her face.

- She's sleeping! She has simply dozed off!

In my fingers, her hand has turned icy, rigid.

- Come, the doctor says gently, leading me away. Do you have family you can call?


I barely hear the driver start up and turn around behind me. Under my feet, the familiar gravel of the driveway crunches and shifts. Mechanically, I have slipped in my key and opened the door. A dark silence greets me. My mouth is dry; my temples throb from too much crying.

Without turning on the light, I cross the entrance hall and sit down in the kitchen. Opening the tap, I pour myself a glass of water.

A shiver runs down my spine. A door slams. In the living-room cabinet, the crystal glasses begin to sing.

- Who's there?

A window bursts open and a whirlwind of wind sweeps into the room, wrapping me in the cold night air.

At my ear, a voice, near and far at once, whispers:

- I promise, I'll stay…

Photo by Matthew Perkins.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum: blogger, writer, speaker, and futurologist. You can follow me on Facebook, Medium, or contact me.

This text is published under the CC-By BE license.

10 Feb 2017 1:17pm GMT