17 Oct 2018

Planet Grep

Xavier Mertens: Hack.lu 2018 Wrap-Up Day #2

The second day started early with an eye-opener talk: "IPC - the broken dream of inherent security" by Thanh Bui. IPC, or "Inter-Process Communications", is everywhere. You can compare it to a network connection between a client and a server, but inside the operating system. The idea of Thanh's research was to see how to intercept communications occurring via IPC using "Man in the Machine" (MitMa) attacks. There are multiple attack vectors available. Usually, a server binds to a specific identifier (a network socket, a named pipe). Note that there are also secure methods, via socket pairs or unnamed pipes. In the case of a network socket, the server binds to a specific port on 127.0.0.1:<port> and waits for a client connection but, often, there are failover ports. A malicious process could bind to the official port first, forcing the server onto the failover port; it then receives the client's connections on the official port and relays them to the server. Thanh also reviewed named pipes on Windows systems (accessible via \\.\pipe\). The second part of the talk focused on juicy targets that use IPC: password managers. Most of them use IPC for communications between the desktop client and the browser (to fill passwords automatically). Thanh demonstrated how it is possible to intercept juicy information! Another popular device using IPC is the FIDO U2F security key. The attack scenario is the following:
  1. The attacker signs in using the 1st factor and receives a challenge
  2. The attacker keeps sending the challenge to the device at a high rate
  3. The victim signs in to ANY service using the same security key and touches the button
  4. The attacker receives the response with high probability
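The localhost port-takeover trick described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the talk: the port numbers and the message format are made up, and a real attack would relay arbitrary traffic in both directions.

```python
import socket
import threading

# Hypothetical localhost ports, for illustration only.
PRIMARY, FAILOVER = 54321, 54322

# The attacker bound the "official" port first...
evil = socket.socket()
evil.bind(("127.0.0.1", PRIMARY))
evil.listen(1)

# ...so the legitimate server had to fall back to its failover port.
srv = socket.socket()
srv.bind(("127.0.0.1", FAILOVER))
srv.listen(1)

def serve():
    # The real server answers whoever reaches it on the failover port.
    conn, _ = srv.accept()
    conn.sendall(b"secret: " + conn.recv(64))
    conn.close()

def relay():
    conn, _ = evil.accept()
    request = conn.recv(64)                       # attacker reads the client request...
    up = socket.create_connection(("127.0.0.1", FAILOVER))
    up.sendall(request)                           # ...forwards it to the real server...
    conn.sendall(up.recv(64))                     # ...and relays the secret reply back.
    up.close()
    conn.close()

threading.Thread(target=serve).start()
threading.Thread(target=relay).start()

# The client connects to the official port, unaware of the interposition.
cli = socket.create_connection(("127.0.0.1", PRIMARY))
cli.sendall(b"hello")
reply = cli.recv(64)
print(reply)  # b'secret: hello'
cli.close()
```

The client's check "am I talking to 127.0.0.1?" passes, which is exactly why localhost IPC gives no inherent security against another local process.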

A nice talk to start the day! Then, I decided to attend a workshop about Sigma. I had known about the tool for a while but I never had the opportunity to play with it. This workshop was the perfect opportunity to learn more.
Sigma is an open signature format that allows you to write rules in YAML and export them to many log management solutions (Splunk, ES, QRadar, <name your preferred tool here>). Here is a simple example of a Sigma rule to detect Mimikatz binaries launched on a Windows host:

title: Exercise 1
status: experimental
description: Detects the execution of Mimikatz binaries
references:
    - https://website
author: Xavier Mertens
date: 2018/10/17
tags:
    - attack.execution
logsource:
    product: windows
    service: sysmon
detection:
    selection:
        EventID: 1
    filter:
        Hashes:
            - 97f93fe103c5a3618b505e82a1ce2e0e90a481aae102e52814742baddd9fed41
            - 6bfc1ec16f3bd497613f57a278188ff7529e94eb48dcabf81587f7c275b3e86d
            - e46ba4bdd4168a399ee5bc2161a8c918095fa30eb20ac88cac6ab1d6dbea2b4a
    condition: selection and filter
level: high
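To give an idea of what the exporters do with such a rule, here is a minimal, hypothetical converter sketch in Python. It only handles this rule's simple "selection and filter" case and emits a generic search expression; real backends (like the sigmac converters) additionally deal with field mappings, value modifiers and full condition logic.

```python
# Hypothetical mini-converter: turn the rule's detection section into a
# generic search query. The field names and hashes come from the rule above.
rule_detection = {
    "selection": {"EventID": 1},
    "filter": {
        "Hashes": [
            "97f93fe103c5a3618b505e82a1ce2e0e90a481aae102e52814742baddd9fed41",
            "6bfc1ec16f3bd497613f57a278188ff7529e94eb48dcabf81587f7c275b3e86d",
        ]
    },
    "condition": "selection and filter",
}

def block_to_query(fields):
    # AND the fields of a block together; OR the values of a list-valued field.
    clauses = []
    for field, value in fields.items():
        if isinstance(value, list):
            clauses.append("(" + " OR ".join(f'{field}="{v}"' for v in value) + ")")
        else:
            clauses.append(f'{field}="{value}"')
    return " AND ".join(clauses)

# The condition 'selection and filter' simply ANDs the two blocks.
query = " AND ".join(
    block_to_query(rule_detection[name]) for name in ("selection", "filter")
)
print(query)
```

Running this prints a query of the form `EventID="1" AND (Hashes="97f9…" OR Hashes="6bfc…")`, which is roughly the shape of what a SIEM backend would execute.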
After the lunch break, I attended a new set of presentations. It started with Ange Albertini who presented the third episode of his keynote trilogy: "Education & communication". The idea was the same as in the previous presentations: Can we imagine a world where everything is secure? Definitely not. And today, our life is bound to computers. "It's a fact that infosec is a life requirement for everybody", said Ange. We need to share our expertise because we are all in the same boat. But the problem is that not everybody acts in the same way when facing a security issue or something suspicious. It's so easy to blame people who react in a bad way. But, according to Ange, education and communication are part of our daily job as infosec professionals. Keep in mind that "not knowing is not a crime", said Ange. An interesting keynote, but unfortunately not easy to implement on a day-to-day basis.
The next talk was presented by Saumil Shah, a classic speaker at hack.lu. Saumil is an awesome speaker and always has very interesting topics. This year it was: "Make ARM Shellcode Great Again". It was quite hard for me… Saumil's talk was about ARM shellcode. But as usual, the demos were well prepared and just worked!
Then Alicia Hickey and Dror-John Roecher came on stage to present "Finding the best threat intelligence provider for a specific purpose: trials and tribulations". IOCs have been a hot topic for a while and many organizations are looking into implementing security controls based on IOCs to detect suspicious activities. Speaking about IOCs, there are the classic two approaches: free sources and commercial feeds. Which is the best one? It depends on your needs, but Alicia & Dror researched commercial feeds and tried to compare them, which was not so easy! They started with a huge list of potential partners but the study focused on a pool of four (they did not disclose the names). They used MISP to store the received IOCs but ran into many issues, like the lack of a standard format across the tested providers. About the delivery of IOCs, there were also differences in timing, but what do you prefer? Common IOCs quickly released, or a set of valuable IOCs within a fixed context? IMHO, both are required and must be used for different purposes. This was an interesting piece of research.
The next talk was presented by sva: "Pretty Easy Privacy". The idea behind this talk was to present the p≡p ("pretty Easy privacy") project. I had already seen this talk last year at FSec.
My best choice for today was the talk presented by Stephan Gerling: "How to hack a Yacht - swimming IoT". Last year, Stephan presented an awesome talk about smart locks. This year, he came back with something again smart but way bigger: yachts! Modern yachts are big floating IoT devices. They have a complete infrastructure based on a router, WiFi access points, radars, cameras, AIS ("Automatic Identification System") and also entertainment systems. There is also a CAN bus, like in cars, that carries data from multiple sensors to pilot the boat. While it's easy to jam a GPS signal (to make the boat blind), Stephan tried to find ways to pwn the boat through its Internet connection. The router (brand: Locomarine) is, in fact, a MikroTik router that is managed by a crappy management tool that uses FTP to retrieve/push XML configuration files. In those files, the WiFi password can be found in clear text, amongst other useful information. The funniest demo was when Stephan used Shodan to find live boats. The router interface is publicly available and the administrative page can be accessed without authentication thanks to this piece of (pseudo) code:
if (user == "Dealer") { display_admin_page(); }
Awesome and scary talk at the same time!
The last regular talk was presented by Irena Damsky: "Simple analysis using pDNS". Passive DNS is not new but many people do not realize the power of these databases. After a quick recap of how DNS works, Irena performed several live demonstrations, querying a passive DNS instance to collect juicy information. What can you find? A short list: all the IP addresses a domain has resolved to over time, all the domains that pointed to a given IP address, when a record was first and last seen, and how often it was queried.

This is a very useful tool for incident responders but don't forget that there is only ONE condition: the domain must have been queried at least once!
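Passive DNS servers typically answer with newline-delimited JSON records; the field names below follow the Passive DNS Common Output Format draft, but the record itself is a made-up example, not data from the talk. A minimal sketch of reading the first-seen/last-seen information:

```python
import json
from datetime import datetime, timezone

# One passive-DNS record in the Common Output Format (sample values, made up).
record = json.loads(
    '{"rrname": "evil-domain.example", "rrtype": "A",'
    ' "rdata": "198.51.100.7", "time_first": 1515151515,'
    ' "time_last": 1539791000, "count": 42}'
)

# time_first/time_last are Unix timestamps of the first and last observation.
first = datetime.fromtimestamp(record["time_first"], tz=timezone.utc)
last = datetime.fromtimestamp(record["time_last"], tz=timezone.utc)

print(f'{record["rrname"]} -> {record["rdata"]} '
      f'(seen {record["count"]} times, {first:%Y-%m-%d} .. {last:%Y-%m-%d})')
```

For an incident responder, those first-seen/last-seen timestamps are often the most valuable part: they tell you whether a suspicious domain was already resolving before your incident window.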

And the second day ended with a set of lightning talks (always interesting ideas and tools promoted in a few minutes) followed by the famous PowerPoint karaoke! Stay tuned tomorrow for the next wrap-up!

[The post Hack.lu 2018 Wrap-Up Day #2 has been first published on /dev/random]

17 Oct 2018 9:41pm GMT

Mattias Geniar: The convincing Bitcoin scam e-mail extorting you

The post The convincing Bitcoin scam e-mail extorting you appeared first on ma.ttias.be.

A few months ago I received an e-mail that got me worried for a few seconds. It looked like this, and chances are you've seen it too.

From: Kalie Paci 
Subject: mattias - UqtX7m

It seems that, UqtX7m, is your pass word. You do not know me and you are probably thinking
why you are getting this mail, correct?

Well, I actually placed a malware on the adult video clips (porn) web-site and guess what,
you visited this site to have fun (you know what I mean). While you were watching videos,
your browser started operating as a RDP (Remote control Desktop) that has a keylogger which
gave me access to your display and also web camera. Immediately after that, my software
program collected your entire contacts from your Messenger, FB, and email.

What exactly did I do?

I created a double-screen video. First part displays the video you were viewing (you have
a nice taste lol), and second part displays the recording of your web camera.

What should you do?

Well, in my opinion, $1900 is a fair price for our little secret. You'll make the payment
through Bitcoin (if you do not know this, search "how to buy bitcoin" in Google).

BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge
(It is cAsE sensitive, so copy and paste it)

Important:
You now have one day to make the payment. (I've a unique pixel in this message, and right
now I know that you have read this email message). If I don't get the BitCoins, I will
send your video recording to all of your contacts including members of your family,
colleagues, and many others. Having said that, if I do get paid, I will destroy the video
immidiately. If you need evidence, reply with "Yes!" and I definitely will send your video
recording to your 11 friends. This is a non-negotiable offer, and so please don't waste
my personal time and yours by responding to this mail.

If you read it, it looks like spam -- doesn't it?

Well, the thing that got me worried for a few seconds was that the subject line and the body contained an actual password I used a while back: UqtX7m.

Receiving an email with what feels like a personal secret in the subject draws your attention. It's clever in the sense that you feel both violated and ashamed of the consequences. It looks legit.

Let me tell you clearly: it's a scam and you don't need to pay anyone.

I first mentioned it on Twitter, describing what feels like the brilliant part of this scam.

Whoever is running this scam thought about the psychology of this one and found the sweet spot: it gets your attention and it gets you worried.

Well played. But don't fall for it.


17 Oct 2018 12:51pm GMT

16 Oct 2018


Luc Verhaegen: Pleased to flash you, hope you change my name...

Remember that time when ATI employees tried to re-market their atomBIOS bytecode as scripts?

You probably don't, it was a decade ago.

It was in the middle of the RadeonHD versus -ati shitstorm. One was the original, written in actual C poking display registers directly, and depending on ATI's atomBIOS as little as practical. The other was the fork, implementing everything the fglrx way, and they stopped at nothing to smear the real code (including vandalizing the radeonhd repo, _after_ radeonhd had died). It was AMD teamed with SUSE versus ATI teamed with redhat. From a software and community point of view, it was real open source code and the goal of technical excellence, versus spite and grandstanding. From an AMD versus ATI point of view, it was trying to gain control over and clean up a broken company, and limiting the damage to server sales, versus fighting the new "corporate" overlord and people fighting to keep their old (ostensibly wrong) ways of working.

The RadeonHD project started in April 2007, when we started working on a proposal for an open source driver. AMD management loved it, and supported innovations such as fully free docs (the first time since 3dfx went bust in 1999), and we started coding in July 2007. This is also when we were introduced to John Bridgman. At SUSE, we were told that John Bridgman was there to provide us with the information we needed, and to make sure that the required information would go through legal and be documented in public register documents. As an ATI employee, he had previously been tasked by AMD to provide working documentation infrastructure inside ATI (or bring one into existence?). From the very start, John Bridgman was underhandedly working on slowly killing the RadeonHD project. First by endlessly stalling and telling a different lie every week about why he had not managed to get us information this time round either. Later, when the RadeonHD driver did make it out to the public, by playing a very clear double game, specifically by supporting a competing driver project (which did things the ATI way) and publicly deriding or understating the role of the AMD-backed SUSE project.

In November 2007, John Bridgman hired Alex Deucher, a supposed open source developer and x.org community member. While the level of support Bridgman had from his own ATI management is unclear to me, to AMD management he claimed that Alex was only there to help out the AMD sponsored project (again, the one with real code, and public docs), and that Alex was only working on the competing driver in his spare time (yeah right!). Let's just say that John slowly "softened" his claim there over time, as this is how one cooks a frog, and Mr Bridgman is an expert on cooking frogs.

One particularly curious instance occurred in January 2008, when John and Alex started to "communicate" differently about atomBIOS, specifically by consistently referring to it as a set of "scripts". You can see one shameful display here, on Alex's (now defunct) blog. He did this as part of a half-arsed series of write-ups trying to educate everyone about graphics drivers... Starting, of course, with... those "scripts" called atomBIOS...

I of course responded with a blog entry myself. Here is a quote from it: "At no point do AtomBIOS functions come close to fitting the definition of script, at least not as we get them. It might start life as "scripts", but what we get is the bytecode, stuck into the ROM of our graphics cards or our mainboard." (libv, 2008-01-28)

At the same time, Alex and John were busy renaming the C code for r100-r4xx to "legacy", implying both that these old graphics cards were too old to support, and that actual C code is the legacy way of doing things. "The warping of these two words make it impossible to deny: it is something wrong, that somehow has to be made to appear right." (libv, 2008-01-29) Rewriting an obvious truth... Amazing behaviour from someone who was supposed to be part of the open source community.

At no point did ATI provide us with any tools to alter these so-called scripts. They provided only the interpreter for the bytecode, the bare minimum of what it took for the rest of the world to actually use atomBIOS. There never was atomBIOS language documentation. There was no tooling for converting to and from the bytecode. There never was tooling to alter or create PCI BIOS images from said atomBIOS scripts. And no open source tool to flash said BIOSes to the hardware was available. There was an obscure atiflash utility doing the rounds on the internet, and that tool still exists today, but it is not even clear who the author is. It has to be ATI, but it is all very clandestine; I think it is safe to assume that some individuals at some card makers sometimes break their NDAs and release it.

The only tool for looking at atomBIOS is Matthias Hopf's excellent atomdis. He wrote it in the first few weeks of the RadeonHD project. This became a central tool for RadeonHD development, as it gave us the insight into how things fit together. Yes, we did have register documentation, but Mr. Bridgman had given us two 500-page dumps of "this bit does that" (the same ones made public a few months later), in the hope that we would not see the forest for the trees. Atomdis, the register docs, and the (then) most experienced display driver developer on the planet (by quite a margin) made RadeonHD a viable driver in record time, and it gave ATI no way back from an open source driver. When we showed Mr. Bridgman the output of atomdis in September 2007, he was amazed at just how readable it was compared to ATI's internal tools. So much for scripts, eh?

I have to confess though that I convinced Matthias to hold off making atomdis public, as I knew that people like Dave Airlie would use it against us otherwise (as they ended up doing with everything else; they just did not get to use atomdis for this purpose as well). In Q2 2009, after the RadeonHD project was well and truly dead, Matthias brought up the topic again, and I wholeheartedly agreed to throw it out. ATI and the forkers had succeeded anyway, and this way we would give others a tool to potentially help them move on from atomBIOS. Sadly, not much has happened with this code since.

One major advantage of full openness is the ability to support future use cases that were unforeseeable at the time of hardware, code or documentation release. One such use case is the use of GPUs for cryptocurrency "mining", still rather common today. This was not a thing back in 2007-2009 when we were doing RadeonHD. It was not a thing when AMD created a GPGPU department out of a mix of ATI and new employees (this is the department that has been pushing out 3D ISA information ever since). It was also not considered when ATI stopped the flow of (non shader ISA) documentation back in Q1 2009 (coinciding with RadeonHD dying). We had only just gotten to the point of providing a 3D driver then, and never got near considering pushing for open firmware for Radeon GPUs. Fully open power management and fully open 3D engine firmware could mean that both could be optimised for a measurable boost in a very specific use case, which cryptocurrency mining usually is. By retreating into its proprietary shell with the death of the RadeonHD driver, and by working on the blob-based fig-leaf driver to keep the public from hating the fglrx driver as much, ATI has denied us the ability to gain those few tenths of a percent, or even whole percents, that are likely to be gained from optimised support.

Today though, miners are mostly just altering the GPU and memory frequencies and voltages in their atomBIOS-based data tables (not touching the function tables). They tend to use a binary-only GUI tool to edit those values. Miners also extensively use the rather horrible atiflash. That all seems very limited compared to what could have been if AMD had not lost the internal battle.

So much for history; giving ATI the two-fingered salute is actually more of an added bonus for me. I primarily wanted to play around with some newer ideas for tracing registers, and I wanted to see how much more effective working with capstone would make me. Since SPI chips are simple hardware, and SPI engines are just as trivial, ATI SPI engines seemed like an easy target. The fact that there is one version of atiflash (that made it out) that runs under Linux made this an obvious and fun project. So I spent the last month, on and off, instrumenting this binary, flexing my REing muscle.

The information I gleaned is now stuck into flashrom, a tool to which I have been contributing since 2007, from right before I joined SUSE. I even held a coreboot devroom at FOSDEM in 2010, and had a talk on how to RE BIOSes to figure out board-specific flash enables. I then was on a long flashrom hiatus from 2010 until earlier this year, when I was solely focused on ARM. But a request for a quick board enable got me into flashrom again. Artificial barriers are made to be breached, and it is fun and rewarding to breach them, and board enables always were a quick fix.

The current changes are still in review at review.coreboot.org, but I have a GitHub clone for those who want immediate and simple access to the complete tree.

To use the ati_spi programmer in flashrom you need to use:

./flashrom -p ati_spi

and then specify the usual flashrom arguments and commands.

Current support is limited to the hardware I have in my hands today, namely Rx6xx, and extends only to hardware that was obviously directly compatible with the Rx6xx SPI engine and GPIO block. This list of a whopping 182 devices includes:
* rx6xx: the Radeon HD2xxx and HD3xxx series, released April 2007
* rx7xx: or HD4xxx, released June 2008
* evergreen: or HD5xxx, released September 2009
* northern island: the HD6xxx series, released October 2010
* Lombok, part of the Southern Island family, released in January 2012

The other Southern Islanders have some weird IO read hack that I need to go test.

I have just ordered €250 worth of used cards that should extend support across all PCIe devices all the way through to Polaris. Vega is still out of reach, as that is prohibitively expensive for a project that is not going to generate any revenue for yours truly anyway (FYI, I am self-employed these days, and I now need to find a balance between fun, progress and actual revenue, and I can only do this sort of thing during downtime between customer projects). According to pci.ids, Vega 12 and 20 are not out yet, but they too will need specific changes when they come out. Having spent all that time instrumenting atiflash, I do have enough info to quickly cover the entire range.

One good thing about targeting AMD/ATI again is that I have been blackballed there since 2007 anyway, so I am not burning any bridges that were not already burned a decade ago \o/

If anyone wants to support this or related work, either as a donation or as actually invoiced time, drop me an email. If any miners are interested in specifically targeting Vega, soon, or in doing other interesting things with ATI hardware, I will be happy to set up a coinbase wallet. Get in touch.

Oh, and test out flashrom, and report back so we can update the device table to "Tested, OK" status, as there are currently 180 entries ranked as "Not-Tested", a number that is probably going to grow quite a bit in the next few weeks as hardware trickles in :)

16 Oct 2018 10:49pm GMT

Xavier Mertens: Hack.lu 2018 Wrap-Up Day #1

The 14th edition (!) of hack.lu is ongoing in Luxembourg. I arrived yesterday to attend the MISP summit, which was a success. It's great to see that more and more people are using this information sharing platform to fight bad guys! Today, the conference officially started with the regular talks. I spent my day in the main room to follow most of the scheduled talks. Here is my quick recap…

There was no official keynote speaker this year; the first ones to come on stage were Ankit Gangwal and Eireann Leverett with a talk about ransomware: "Come to the dark side! We have radical insurance groups & ransomware". It was a different kind of talk, with an interesting approach. Yesterday, Eireann had already presented the results of his research based on MISP: "Logistical Budget: Can we quantitatively compare APTs with MISP". Today's talk was on the same topic: how to quantify the impact of ransomware attacks. Cyber insurance likes quantifiable risks. How do you put some numbers (read: an amount of money) on this threat? They reviewed the ransomware life cycle as well as some popular malware families and estimated the financial impact when a company gets infected. It was an interesting approach, as was the analysis of the cryptocurrency used to pay the ransoms (how often, when - weekends vs weekdays). Note that they also developed a set of scripts to help extract IOCs from ransomware samples (the code is available here).
Back to the technical side with the next talk, presented by Matthieu Tarral. He presented his solution to debug malware in a virtualized environment. What are the problems with classic debuggers? They are noisy, they alter the environment, they can affect what the analyst sees, or the system view can be incomplete. For Matthieu, a better approach is to put the debugger at level -1 to be stealthier and be able to perform a full analysis of the malware. Another benefit is that the guest is unmodified (no extra process, no serial connection, …). Debuggers working at the hypervisor level are not new; he mentioned HyperDBG (from 2010!), virtdbg and PulseDBG. For the second part, Matthieu presented his own project based on LibVMI, which is a VMI abstraction layer library independent of any hypervisor. At the moment, Xen is fully supported and KVM is coming. He showed a nice demo (video) of a malware analysis. Very nice project!
The next talk was again less technical but quite interesting. It was presented by Fabien Mathey and focused on the problems of performing risk assessments in a company (time constraints, budgets, finding motivated people, and the lack of a proper tool). The second part was a presentation of MONARC, a tool developed in Luxembourg and dedicated to performing risk assessments through a nice web interface.
And the series of non-technical talks continued with the one by Elle Armageddon about threat intelligence. The first part of the talk was a comparison between the medical and infosec environments. If you look at them carefully, you'll quickly spot many common ideas.

The second part was too long and less interesting (IMHO); it focused on the different types of threats, like stalkers, state repression, etc.: what they have in common, what the differences are, and so on. I really liked the comparison of the medical and infosec environments!

After the lunch break (we always get good food at hack.lu!), Dan Demeter came on stage to talk about YARA and particularly the tool he developed: "Let me Yara that for you!". YARA is a well-known tool for security researchers. It helps to spot malicious files, but it may also quickly become very difficult to manage all the rules that you collected here and there or developed by yourself. Klara is the tool developed by Dan to automate this. It can be described as a distributed YARA scanner. It offers a web interface, user groups, email notifications and more useful options. The idea is to be able to manage rules but also to (re)apply them to a set of samples quickly (for retro-search purposes). Performance-wise, Klara is able to scan 10TB in 30 minutes!
I skipped the next talk - "The (not so profitable) path towards automated heap exploitation" - which looked to be a technical one. The next slot was assigned to Emmanuel Nicaise, who presented a talk about "Neuro Hacking", or "The science behind social engineering and an effective security culture". Everybody knows what social engineering is: the goal is to convince the victim to perform actions or to disclose sensitive information. But to achieve efficient social engineering, it is mandatory to understand how the human brain works, and Emmanuel has deep knowledge of this topic. Our brain is very complex and can be compared to a computer with inputs/outputs, a bus, a firewall, etc. Compared to computers, humans do not perform multi-tasking but time-sharing. He explained what neurotransmitters are and how they can be excitatory (ex: glutamate) or inhibitory (GABA). Hormones were also covered, as were the two main functions of our brain.
Emmanuel demonstrated with several examples how our brain puts a picture in one category or another based purely on context. Some people might look good or bad depending on the way the brain sees the pictures. The way we process information is also important (fast vs slow mode). For the social engineer, it's best to keep the brain in fast mode to make it easier to manipulate or predict. He concluded with tips on performing efficient social engineering.
The next talk was about Turla, or Snake: "The Snake keeps reinventing itself" by Jean-Ian Boutin and Matthieu Faou. This group has been very active for a long time and has targeted governments and diplomats. It has a large toolset targeting all major platforms. What was the infection vector? Mosquito was a backdoor distributed via a fake Flash installer through the URL admdownload[.]adobe[.]com, which pointed to an IP in the Akamai CDN used by Adobe. How did it work? MitM? BGP hijack? Based on the location of the infected computers, it was via a compromised ISP. Once installed, Mosquito used many tools to perform lateral movement.

Particular attention was devoted to covering all the tracks (deletion of all files, logs, etc). Then, the speakers presented the Outlook backdoor used to interact with the compromised computer (via the MAPI protocol). All received emails were exfiltrated to a remote address, and commands were received via PDF files attached to emails. They finished the presentation with a nice demo of a compromised Outlook popping up a calc.exe via a malicious email.

The next presentation was (for me) the best one of this first day. Not only was the content technically great, but the way it was presented was funny. The title was "What the fax?" by Eyal Itkin and Yaniv Balmas. Their research started last year at a hotel bar with a joke: is it possible to compromise a company just by sending a fax? Challenge accepted! Fax machines are old devices (1966) and the ITU standard was defined in 1980 but, still today, many companies accept faxes as a communication channel. Today, standalone fax machines have mostly disappeared, replaced by MFPs ("Multi-Function Printers"). And such devices are usually connected to the corporate network. They focused their research on an HP printer due to the popularity of the brand. They explained how to grab the firmware and how they discovered the compression protocol used. Step by step, they explained how they found vulnerabilities (CVE-2018-5924 & CVE-2018-5925). Guess what? They made a demo: a printer received a fax and, using EternalBlue, they compromised a computer connected to the same LAN. Awesome to see how an old technology can still be (ab)used today!
I skipped the last two talks: one about the SS7 protocol suite and how it can be abused to collect information about mobile users or intercept SMS messages; the other about DDoS attacks based on IoT devices.
There is already a YouTube channel with all the presentations (added as soon as possible) and slides will be uploaded on the archive website. That's all for today folks, stay tuned for another wrap-up tomorrow!

[The post Hack.lu 2018 Wrap-Up Day #1 has been first published on /dev/random]

16 Oct 2018 9:50pm GMT

Lionel Dricot: The day I uninstalled my favorite app!

That's it, it's done! I've uninstalled Pocket from my smartphone. Pocket, which is nevertheless the application I have always considered the most important, the application that justified my switching back to a Kobo in order to read my saved articles. Pocket which is, by definition, a way to keep articles gleaned from the web "for later", a list that tends to fill up as messages come in on social networks.

Since my disconnection, I no longer glean from the web, and it's clear that I am not adding anything new to Pocket (except when you send me an email with a link you think should interest me. I love those recommendations, thank you, keep them coming!). Having the time to read the pile of saved but unread articles (about a hundred) made me realize just how very rarely these articles are interesting (as I mentioned in ProofOfCast 12, the ones about blockchain absolutely all say the same thing). In fact, out of those hundred or so articles, I shared 4 or 5 at the time, I remember a single one that really taught me something (an article summarizing the work of Harold Adams Innis) and zero that changed my perspective in any way.

The Innis article is rather paradoxical, in that his best-known book has been on my reading list for months and I have still never taken the time to read it. Overall, I can say that my Pocket list was 99% useless and, for the remaining 1%, a substitute for a book I want to read. Reading those 100 articles probably took me as much time as it would have taken to quickly read the book in question.

The result is therefore not very impressive…

But there's worse!

Pocket has a feed of "recommended" articles. This feed is extremely badly designed (lots of repetition, reshuffles on every refresh, it even shows articles that are already in your reading list) but it was the only application left on my phone providing a news feed.

You can see where this is going…

My brain very quickly got into the habit of launching Pocket to "empty my reading list" before going to check the "recommended articles" (between us, the quality is truly deplorable).

Today, with only 2 articles left to read, here are two suggestions that appeared back to back in my feed:

The coincidence is absolutely troubling!

First, an article to make us dream, whose title can be translated as "Becoming a billionaire in 2 years at age 20 is possible!", followed by an article "Global warming is the billionaires' fault".

These two articles alone probably sum up the current state of the media we consume. On one side, dreams and envy; on the other, defeatist catastrophism.

Obviously, you want to click on the first link! Maybe we could learn the technique to do the same? Even if it seems stupid, the mere fact that I am scrolling a feed proves that my brain is not trying to think, but to feel good.

The article in question, to be precise, contains only one factual piece of information: the startup Brex, founded by two twenty-somethings from Brazil, has just raised 100 million in a Series C round, putting its valuation at 1 billion. Said like that, it's much less sexy, and of no interest if you're not in fintech. Incidentally, this does not make the founders billionaires, since they now have to work toward a profitable exit (even if, financially, we can assume they're not earning minimum wage and that money is no longer a problem for them).

The second article reminds us that the biggest industries are the ones that pollute the most (what a surprise!) and that they have always lobbied against any attempt to reduce their profits (no kidding!). The connection with billionaires is extremely tenuous (the implication being that billionaires sit on these companies' boards). The article goes as far as absolving the reader of guilt by saying that, given the numbers, the average consumer can do nothing about pollution. Yet the sentence carries its own solution: in "consumer" there is "consume", so consumers can decide to favor the companies that pollute less (which, let's note, has been happening for several years, hence corporate green-washing, followed by a current movement trying to see through the green-washing).

Bref, l'article est inutile, dangereusement stupide, sans rapport avec son titre, mais le titre et l'image donnent envie de cliquer. Pire en fait : le titre et l'image donnent envie de discuter, de réagir. J'ai été témoin de nombreux débats sur Facebook dans les commentaires d'un article, débat traitant… du titre de l'article !

Lorsqu'un commentateur un peu plus avisé que les autres signale que les commentaires sont à côté de la plaque, car la remarque est adressée dans l'article et/ou l'article va plus loin que son titre, il n'est pas rare de voir le commentateur pris en faute dire « qu'il n'a pas le temps de lire l'article ». Les réseaux sociaux sont donc peuplés de gens qui ne lisent pas plus loin que les titres, mais se lancent dans des diatribes (car on a toujours le temps de réagir). Ce qui est tout bénéf pour les réseaux sociaux, car ça fait de l'animation, des interactions, des visites, des publicités affichées. Mais également pour les auteurs d'articles car ça fait des likes et des vues sur leurs articles.

Le fait que personne ne lise le contenu ? Ce n'est pas l'objectif du business. Tout comme ce n'est pas l'objectif d'un fast-food de s'inquiéter que vous ayez une alimentation équilibrée riche en vitamines.

Si vous voulez une alimentation équilibrée, il faut trouver un fournisseur dont le business model n'est pas de vous « remplir » mais de vous rendre « meilleur ». Intellectuellement, cela signifique se fournir directement chez vos petits producteurs locaux, vos blogueurs et écrivains bio qui ne vivent que de vos contributions.

Mais passons cette intrusion publicitaire éhontée pour remarquer que j'ai désinstallé Pocket, mon app la plus indispensable, après seulement 12 jours de déconnexion.

Suis-je en train d'établir des changements durables ou bien suis-je encore dans l'enthousiasme du début, excité par la nouveauté que représente cette déconnexion ? Vais-je tenir le coup sans absolument aucun flux ? (et punaise, c'est bien plus difficile que prévu) Vais-je abandonner la jalousie envieuse de ceux devenus milliardaires (car si ça m'arrivait, je pourrais me consacrer à l'écriture) et me mettre à écrire en réduisant ma consommation (vu que moins exposé aux pubs) ?

L'avenir nous le dira…

Photo by Mantas Hesthaven on Unsplash

Je suis @ploum, conférencier et écrivain électronique déconnecté rémunérés en prix libre sur Tipeee, Patreon, Paypal, Liberapay ou en millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Vos soutiens, même symboliques, font une réelle différence pour moi. Merci !

Ce texte est publié sous la licence CC-By BE.

16 Oct 2018 9:32am GMT

14 Oct 2018

feedPlanet Grep

Dries Buytaert: Better image performance on dri.es

For a few years now I've been planning to add support for responsive images to my site.

The past two weeks, I've had to take multiple trips to the West Coast of the United States; last week I traveled from Boston to San Diego and back, and this week I'm flying from Boston to San Francisco and back. I used some of that airplane time to add responsive image support to my site, and just pushed it to production from 30,000 feet in the air!

When a website supports responsive images, it allows a browser to choose between different versions of an image. The browser will select the optimal image by taking into account not only the device's dimensions (e.g. mobile vs desktop) but also the device's screen resolution (e.g. regular vs retina) and the browser viewport (e.g. full-screen browser or not). In theory, a browser could also factor in the internet connection speed, but I don't think any do.

First of all, with responsive image support, images should always look crisp (I no longer serve an image that is too small for certain devices). Second, my site should also be faster, especially for people using older smartphones on low-bandwidth connections (I no longer serve an image that is too big for an older smartphone).

Serving the right image to the right device can make a big difference in the user experience.

Many articles suggest supporting three image sizes; however, based on my own testing with Chrome's Developer Tools, I didn't feel that three sizes were sufficient. There are so many different screen sizes and screen resolutions today that I decided to offer six versions of each image: 480, 640, 768, 960, 1280 and 1440 pixels wide. And I'm on the fence about adding 1920 as a seventh size.
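In HTML terms, offering those variants comes down to an `srcset` attribute that lists one URL per width, so the browser can pick the best fit. Here is a minimal sketch that generates such an attribute for the six widths above — the `photo-<width>.jpg` naming scheme and the `srcset` helper are my own illustration, not necessarily how dri.es names its files:

```python
# Build an HTML srcset attribute for a set of pre-generated image widths.
# The "photo-480.jpg" naming pattern is a hypothetical convention used
# only for this example.

WIDTHS = [480, 640, 768, 960, 1280, 1440]

def srcset(base: str, ext: str, widths=WIDTHS) -> str:
    """Return a srcset value like 'photo-480.jpg 480w, photo-640.jpg 640w, ...'."""
    return ", ".join(f"{base}-{w}.{ext} {w}w" for w in widths)

# A complete <img> tag: src is the fallback, sizes tells the browser how
# wide the image will be rendered so it can choose among the candidates.
img_tag = (
    f'<img src="photo-960.jpg" '
    f'srcset="{srcset("photo", "jpg")}" '
    f'sizes="(max-width: 1440px) 100vw, 1440px" alt="...">'
)
print(img_tag)
```

Adding a seventh 1920-pixel size would then be a one-line change to the width list.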

Because I believe in being in control of my own data, I host almost 10,000 original images on my site. This means that in addition to the original images, I now also store 60,000 image variants. To further improve the site experience, I'm contemplating adding WebP variants as well - that would bring the total number of stored images to 130,000.

If you notice that my photos are clearer and/or page delivery a bit faster, this is why. Through small changes like these, my goal is to continue to improve the user experience on dri.es.

14 Oct 2018 10:36pm GMT

13 Oct 2018

feedPlanet Grep

Dries Buytaert: A fresh look for dri.es

In 1999, I decided to start dri.es (formerly buytaert.net) as a place to blog, write, and deepen my thinking. While I ran other websites before dri.es, my blog is one of my longest running projects.

Working on my site helps me relax, so it's not unusual for me to spend a few hours now and then making tweaks. This could include updating my photo galleries, working on more POSSE features, fixing broken links, or upgrading to the latest version of Drupal.

The past month, a collection of smaller updates have resulted in a new visual design for my site. If you are reading this post through an RSS aggregator or through my mailing list, consider checking out the new design on dri.es.

2018 dri.es redesign: before (left) and after (right).

The new dri.es may not win design awards, but will hopefully make it easier to consume the content. My design goals were the following:

Improve readability of the content

To improve the readability of the content, I implemented various usability best practices for spacing text and images.

I also adjusted the width of the main content area. For optimal readability, you should have between 45 and 75 characters on each line. No more, no less. The old design had about 95 characters on each line, while the new design is closer to 70.

Both the line width and the spacing changes should improve the readability.

Improve the discoverability of content

I also wanted to improve the discoverability of my content. I cover a lot of different topics on my blog - from Drupal to Open Source, startups, business, investing, travel, photography and the Open Web. To help visitors understand what my site is about, I created a new navigation. The new Archive page shows visitors a list of the main topics I write about. It's a small change, but it should help new visitors figure out what my site is about.

Optimize the performance of my site

Less noticeable is that the underlying HTML and CSS code is now entirely different. I'm still using Drupal, of course, but I decided to rewrite my Drupal theme from scratch.

The new design's CSS code is more than three times smaller: the previous design had almost 52K of theme-specific CSS while the new design has only 16K of theme-specific CSS.

The new design also results in fewer HTTP requests as I replaced all stand-alone icons with inline SVGs. Serving the page you are reading right now only takes 16 HTTP requests compared to 33 HTTP requests with the previous design.

All this results in faster site performance. This is especially important for people visiting my site from a mobile device, and even more important for people visiting my site from areas in the world with slow internet. A lighter theme with fewer HTTP requests makes my site more accessible. It is something I plan to work more on in the future.

Website bloat is a growing problem and impacts the user experience. I wanted to lead by example, and made my site simpler and faster to load.

The new design also uses Flexbox and CSS Grid Layout - both are more modern CSS standards. The new design is fully supported in all main browsers: Chrome, Firefox, Safari and Edge. It is, however, not fully supported on Internet Explorer, which accounts for 3% of all my visitors. Internet Explorer users should still be able to read all content though.

Give me more creative freedom

Last but not least, the new design provides me with a better foundation to build upon in subsequent updates. I wanted more flexibility for how to lay out images in my blog posts, highlight important snippets, and add a table of contents on long posts. You can see all three in action in this post, assuming you're looking at this blog post on a larger screen.

13 Oct 2018 7:07pm GMT

FOSDEM organizers: Accepted developer rooms

We are pleased to announce the developer rooms that will be organised at FOSDEM 2019. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. We will update this table with links to the calls for participation.

Saturday 2 February 2019

Topic | Call for Participation | CfP deadline
.NET and TypeScript | - | TBA
Ada | - | TBA
BSD | announcement | 2018-12-10
Collaborative Information and Content Management Applications | - | TBA
Decentralized…

13 Oct 2018 3:00pm GMT

12 Oct 2018

feedPlanet Grep

Dries Buytaert: A breakout year for Open Source businesses

I was talking to Chetan Puttagunta yesterday, and we both agreed that 2018 has been an incredible year for Open Source businesses so far. (Chetan helped lead NEA's investment in Acquia, but is also an investor in Mulesoft, MongoDB and Elastic.)

Between a series of acquisitions and IPOs, Open Source companies have shown incredible financial returns this year. Just look at this year-to-date list:

Company | Acquirer | Date | Value
CoreOS | Red Hat | January 2018 | $250 million
Mulesoft | Salesforce | May 2018 | $6.5 billion
Magento | Adobe | June 2018 | $1.7 billion
GitHub | Microsoft | June 2018 | $7.5 billion
SUSE | EQT Partners | July 2018 | $2.5 billion
Elastic | IPO | September 2018 | $4.9 billion

For me, the success of Open Source companies is not a surprise. In 2016, I explained how open source crossed the chasm, and predicted that proprietary software giants would soon need to incorporate Open Source into their own offerings to remain competitive:

The FUD-era where proprietary software giants campaigned aggressively against open source and cloud computing by sowing fear, uncertainty and doubt is over. Ironically, those same critics are now scrambling to paint themselves as committed to open source and cloud architectures.

Adobe's acquisition of Magento, Microsoft's acquisition of GitHub, the $5.2 billion merger of Hortonworks and Cloudera, and Salesforce's acquisition of Mulesoft all bear out this prediction. The FUD around Open Source businesses is officially over.

12 Oct 2018 8:05pm GMT

Xavier Mertens: [SANS ISC] More Equation Editor Exploit Waves

I published the following diary on isc.sans.edu: "More Equation Editor Exploit Waves":

This morning, I spotted another wave of malicious documents that (ab)use again CVE-2017-11882 in the Equation Editor (see my yesterday's diary). This time, malicious files are RTF files. One of the samples is SHA256:bc84bb7b07d196339c3f92933c5449e71808aa40a102774729ba6f1c152d5ee2 (VT score: 19/57)… [Read more]

[The post [SANS ISC] More Equation Editor Exploit Waves has been first published on /dev/random]

12 Oct 2018 1:43pm GMT

Frank Goossens: Radar, I am (still) under you

So Autoptimize did not go over 100K downloads in one day. Still flying under the radar ;-)


12 Oct 2018 10:25am GMT

Lionel Dricot: An age limit for voting and being elected?

In my country, you have to be 18 to stand for election (not so long ago, you even had to be 23 to be a senator). I suppose the reason is that most people under 18 are considered intellectually too immature. They are too easily influenced; they do not yet know the realities of life.

Of course, the limit is arbitrary. Some people are far more mature and realistic at 15 than others will ever be in a whole century of living.

We accept this arbitrary limit to guard against the innocence and naivety of youth.

But what about the opposite?

I observe that many adults who were deeply idealistic at 20 suddenly become more preoccupied with their own comfort and paying off the mortgage in their thirties and forties. Beyond that, many of us become conservative, even reactionary. It is certainly not universal, but it is nonetheless a human constant observed since the first philosophers. Pliny (or Seneca? I can no longer find the text) already spoke of it in my Latin classes.

So why not set an age limit on candidacy for election?

Rather than turning elections into a contest of shaking retirees' hands at village fairs, let the young take power. They still have ideals. They are far more concerned about the future of their planet, because they know they will be living on it. They are sharper, more aware of the latest trends. But, unlike their elders, they have neither the time nor the money to invest in elections.

Imagine for a moment a world where no elected official is over 35 (which disqualifies me straight away)! Honestly, I find that rather appealing. The elders would become advisers to the young, instead of the current arrangement, where the young fetch coffee for the old in the hope of climbing the hierarchy and landing an enviable post at 50.

As they approached the fateful limit of 35, elected officials would start courting the younger generation, still full of rebellion. How beautiful it would be to see these thirty-somethings treat the interests of the young as their top priority. How satisfying it would be to no longer endure the smugness of decrepit officials lecturing the youth about laziness between two rounds of golf.

Of course, there are plenty of older people who stay young at heart and in mind. You will recognize them easily: they do not try to occupy the stage, but to put youth, the next generation, forward. Somehow, I have the feeling that those people would tend to agree with this post.

After all, we already do without all the intelligence and energy of the under-18s; I doubt we would much miss the conservatism of the formerly young.

Photo by Cristina Gottardi on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid what you choose on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

12 Oct 2018 8:56am GMT

11 Oct 2018

feedPlanet Grep

Xavier Mertens: [SANS ISC] New Campaign Using Old Equation Editor Vulnerability

I published the following diary on isc.sans.edu: "New Campaign Using Old Equation Editor Vulnerability":

Yesterday, I found a phishing sample that looked interesting:

From: sales@tjzxchem[.]com
To: me
Subject: RE: Re: Proforma Invoice INV 075 2018-19 '08
Reply-To: exports.sonyaceramics@gmail[.]com

[Read more]

[The post [SANS ISC] New Campaign Using Old Equation Editor Vulnerability has been first published on /dev/random]

11 Oct 2018 11:02am GMT

10 Oct 2018

feedPlanet Grep

Mattias Geniar: MySQL 8 removes shorthand for creating user + permissions

The post MySQL 8 removes shorthand for creating user + permissions appeared first on ma.ttias.be.

I used to run a one-liner for creating a new database and adding a new user to it, with a custom password. It looked like this:

mysql> GRANT ALL PRIVILEGES ON ohdear_ci.*
       TO 'ohdear_ci'@'localhost'
       IDENTIFIED BY 'ohdear_secret';

In MySQL 8 however, you'll receive this error message.

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
  corresponds to your MySQL server version for the right syntax to use
  near 'IDENTIFIED BY 'ohdear_secret'' at line 1

The reason appears to be that MySQL dropped support for this shorthand version and now requires the slightly longer version instead.

mysql> CREATE USER 'ohdear_ci'@'localhost' IDENTIFIED BY 'ohdear_secret';
Query OK, 0 rows affected (0.11 sec)

mysql> GRANT ALL ON ohdear_ci.* TO 'ohdear_ci'@'localhost';
Query OK, 0 rows affected (0.15 sec)

If you have scripting in place that uses the short, one-liner version, be aware those might need changing if you move to MySQL 8.
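If you'd rather patch such scripts than hand-edit each statement, the removed one-liner can be expanded mechanically into the two-statement form. A rough sketch — the helper name is mine, and the naive string interpolation is for illustration only (real code should escape identifiers and passwords properly):

```python
# Expand the removed "GRANT ... IDENTIFIED BY ..." shorthand into the
# CREATE USER + GRANT pair that MySQL 8 requires.
# NOTE: naive string interpolation for illustration only -- do not feed
# untrusted input through this without proper quoting/escaping.

def mysql8_statements(db: str, user: str, host: str, password: str) -> list:
    """Return the two statements MySQL 8 expects in place of the shorthand."""
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT ALL ON {db}.* TO '{user}'@'{host}';",
    ]

for stmt in mysql8_statements("ohdear_ci", "ohdear_ci", "localhost", "ohdear_secret"):
    print(stmt)
```

The output matches the two working statements shown above, so the same helper can drive a provisioning script against either version by emitting one statement or two.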

The post MySQL 8 removes shorthand for creating user + permissions appeared first on ma.ttias.be.

10 Oct 2018 7:29pm GMT

Xavier Mertens: [SANS ISC] “OG” Tools Remain Valuable

I published the following diary on isc.sans.edu: "'OG' Tools Remain Valuable":

For vendors, the cybersecurity landscape is a nice place to make a very lucrative business. New solutions and tools are released every day and promise you to easily detect malicious activities on your networks. And it's a recurring story. Once they have been implemented by many customers, vendors come back again with new version flagged as "2.0", "NG" or "Next Generation". Is it really useful or just a hype? I won't start the debate but keep in mind that good old tools and protocols remain still very valuable today… [Read more]

[The post [SANS ISC] "OG" Tools Remain Valuable has been first published on /dev/random]

10 Oct 2018 11:04am GMT

09 Oct 2018

feedPlanet Grep

Frank Goossens: Bye September, miss you already

So I forgot to post my customary love-declaration for the month of September, which is, as I'm sure everyone will agree, the most beautiful month in the world. But looking outside, it still feels very much like September, and maybe I was just waiting for Chilly Gonzales to release this beautiful Septemberish tune, which he did last week:

[Embedded YouTube video]


09 Oct 2018 8:37am GMT