19 Jan 2017

Planet Grep

Philip Van Hoof: Just like that, on TV

De Ideale Wereld (the TV show).

19 Jan 2017 10:45pm GMT

Mattias Geniar: Create a SOCKS proxy on a Linux server with SSH to bypass content filters

Are you on a network with limited access? Is someone filtering your internet traffic, limiting your abilities? Well, if you have SSH access to any server, you can probably set up your own SOCKS5 proxy and tunnel all your traffic over SSH.

From that point on, what you do on your laptop/computer is sent encrypted to the SOCKS5 proxy (your SSH server) and that server sends the traffic to the outside.

It's an SSH tunnel on steroids through which you can easily pass HTTP and HTTPS traffic.

And it isn't even that hard. This guide is for Linux/macOS users who have direct access to a terminal, but the same logic applies to PuTTY on Windows too.

Set up SOCKS5 SSH tunnel

You set up a SOCKS5 tunnel in two essential steps. The first is to build an SSH tunnel to a remote server.

Once that's set up, you can configure your browser to connect to the local TCP port that the SSH client has exposed, which will then transport the data through the remote SSH server.

It boils down to a few key actions:

  1. You open an SSH connection to a remote server. As you open that connection, your SSH client will also open a local TCP port, available only to your computer. In this example, I'll use local TCP port :1337.
  2. You configure your browser (Chrome/Firefox/...) to use that local proxy instead of directly going out on the internet.
  3. The remote SSH server accepts your SSH connection and will act as the outgoing proxy/VPN for that SOCKS5 connection.

To start such a connection, run the following command in your terminal.

$ ssh -D 1337 -q -C -N user@ma.ttias.be

Here's what that command does:

  1. -D 1337: open a SOCKS proxy on local port :1337. If that port is taken, try a different port number. If you want to open multiple SOCKS proxies to multiple endpoints, choose a different port for each one.
  2. -C: compress data in the tunnel to save bandwidth.
  3. -q: quiet mode; don't output anything locally.
  4. -N: do not execute remote commands, useful for just forwarding ports.
  5. user@ma.ttias.be: the remote SSH server you have access to.

Once you run that, ssh will stay in the foreground until you cancel it with Ctrl+C. If you prefer to keep it running in the background, add -f to fork it into the background:

$ ssh -D 1337 -q -C -N -f user@ma.ttias.be

Now you have an SSH tunnel between your computer and the remote host, in this example ma.ttias.be.
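
If you use this tunnel regularly, you can store the settings in your SSH client configuration instead of retyping the flags each time. A minimal sketch of ~/.ssh/config (the host alias socksproxy is made up for this example):

Host socksproxy
    HostName ma.ttias.be
    User user
    DynamicForward 1337
    Compression yes

With that in place, a plain $ ssh -q -N socksproxy opens the same SOCKS proxy on :1337.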

Use SOCKS proxy in Chrome/Firefox

Next up: tell your browser to use that proxy. This has to be done per application, as it isn't a system-wide proxy.

In Chrome, go to the chrome://settings/ screen and click through to Advanced Settings. Find the Proxy Settings.

In Firefox, go to Preferences > Advanced > Network and find the Connection settings. Configure a manual SOCKS v5 proxy there, pointing to host localhost on port 1337.

From now on, your browser will connect to localhost:1337, which is picked up by the SSH tunnel to the remote server, which then connects to your HTTP or HTTPS sites.
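
Alternatively, Chrome accepts a proxy on the command line, which avoids clicking through the settings screens. A quick sketch (the binary is called google-chrome on most Linux distros; the name differs per OS):

$ google-chrome --proxy-server="socks5://localhost:1337"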

Encrypted Traffic

This has some advantages and some caveats. For instance, most of your traffic is now encrypted.

What you send between the browser and the local SOCKS proxy is encrypted if you visit an HTTPS site; it's plain text if you visit an HTTP site.

What your SSH client sends between your computer and the remote server is always encrypted.

What your remote server does to connect to the requested website may be encrypted (if it's an HTTPS site) or may be plain text, in case of plain HTTP.

Some parts of the path through your SOCKS proxy are encrypted; others are not.

Bypassing firewall limitations

If you're somewhere with limited access, you might not be allowed to open an SSH connection to a remote server on its default port. But you only need to get one SSH connection going, and you're good to go.

So as an alternative, run your SSH server on additional ports, like :80, :443 or :53: web and DNS traffic is usually allowed out of networks. Your best bet is :443, as SSH is an encrypted protocol too, so there's less chance of deep packet inspection middleware blocking your connection for not following the expected protocol.

The chances of :53 working are also rather slim, as most DNS is UDP-based and TCP is only used for zone transfers or other rare DNS cases.
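
To make your SSH daemon listen on :443 as well as :22, you can add a second Port directive. A minimal sketch of /etc/ssh/sshd_config (assuming nothing else, like a webserver, is already bound to :443):

Port 22
Port 443

Restart sshd afterwards (for example $ sudo systemctl restart sshd on systemd-based distros) and connect with $ ssh -p 443 -D 1337 -q -C -N user@ma.ttias.be.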

Testing SOCKS5 proxy

Visit any "what is my IP" website and refresh the page before and after your SOCKS proxy configuration.

If all went well, your IP should change to that of your remote SSH server, as that's now the outgoing IP for your web browsing.
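
You can run the same check from the terminal, without a browser. A quick sketch using curl (icanhazip.com is just one example of an IP echo service):

$ curl --socks5-hostname localhost:1337 https://icanhazip.com

The --socks5-hostname variant also resolves DNS through the proxy, so your local resolver isn't consulted.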

If your SSH tunnel is down, crashed or wasn't started yet, your browser will kindly tell you that the SOCKS proxy is not responding.

If that's the case, restart the ssh command, try a different port or check your local firewall settings.

19 Jan 2017 6:30am GMT

18 Jan 2017

Planet Grep

Frank Goossens: How to make Autoptimize (even) faster

Fewer blogposts here lately, mostly because I'm doing custom Autoptimize development for a partner (more on that later) and because I get a lot of support questions on the wordpress.org support forums (with approximately 1500-2000 downloads per weekday, that is to be expected). One of the more interesting questions I got there was about Autoptimize being slow when JS optimization was active, and what the cause of that might be. The reply is of interest to a larger audience and is equally valid for CSS optimization:

Typically, the majority of time spent in Autoptimize goes into the actual minification of code that is not minified yet (determined purely by filename: a file counts as minified if its name ends in .min.js or -min.js).

So generally speaking, the way to avoid this is:
1. have a page cache to avoid requests triggering Autoptimize (as in that case the cached HTML will have links to cached CSS/JS in it)
2. for uncached pages: make sure AO can re-use previously cached CSS/JS (from another page), in which case no minification needs to be done (for that you will almost always want to NOT aggregate inline JS, as this almost always busts the cache)
3. for uncached CSS/JS: make sure any minified file is recognizable as such in the filename (e.g. .min.css or -min.js); this can lighten the minification load considerably (I'll add a filter in the next version of AO so you can tell AO a file is minified even if it does not have that in the name).

So based on this, some tips:
* make sure you're not aggregating inline JS
* for your own code (CSS/JS): make sure it is minified and that the filename confirms this; if you can convince the theme's developer to do so, all the better (esp. the already minified but big wp-content/themes/bridge/js/plugins.js is a waste of precious resources) -- see the sketch below this list for a quick way to spot offenders
* you could try switching to the legacy minifiers (see FAQ) to see if this improves performance
* you can also check if excluding some un-minified files from minification helps performance (e.g. that bridge/js/plugins.js)
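
As a quick way to find JS files whose names don't advertise them as minified, a shell one-liner like this can help (a rough sketch: it only checks filenames, not whether the contents actually are minified):

$ find wp-content/themes -name '*.js' ! -name '*.min.js' ! -name '*-min.js'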

18 Jan 2017 1:37pm GMT

Mattias Geniar: Starting with sponsorships for this blog

For the last few years I've always had some kind of advertising on this blog. At first it was Google AdSense, which I've now replaced with Carbon ads.

Declining revenue

While AdSense had a higher payout (usually 4x to 5x higher than what Carbon can deliver), the ads were always ugly: online dating, flashy commercials, bright colours, ... nothing that actually fit the style of this blog.

Since I care about design and aesthetics, I replaced those ads with Carbon -- an invite-only advertising network. They also care about the looks and can deliver better-looking ads. If you're not using an adblocker, there's an example in the top right corner of this blogpost.

But there's still a fundamental problem with this approach: these kinds of ads use cookies and JavaScript to create an online profile and tailor ads to your persona.

They track you. They know you.

Getting rid of ads

When I rebuilt this blog 6 months ago, I famously said 'fuck ads'.

But as you can tell, there are ads on this blog. While I don't write for the money, having a constant revenue stream of semi-passive income (1) is pretty nice.

Since Google's ads were too obtrusive and ugly, and Carbon's ads don't pay enough, I'm going to test a third option: direct sponsored deals.

If you've got a brand, service or product you want to promote to around 100k visitors each month, you can now purchase advertising space on this blog. As soon as that happens, all ads will be removed and only the sponsorship message at the top will remain.

(1) With the amount of time that goes into writing detailed blogposts, I wouldn't exactly call it passive though.

No sponsored posts

I considered a fourth alternative, one that blogs like HighScalability use: sponsored posts. Someone pays you and writes a post; you publish it on your blog.

But that feels wrong as well. It's my blog. These are my words. Readers are assuming they're reading what I have to say. Not some random company spamming your RSS feed.

So I'm removing that option from the table too and going big on sponsorships with companies or organisations I can actually endorse. Companies that offer something a reader of this blog might actually enjoy.

It's a lot like advertising on the cron.weekly newsletter: whatever it is you're trying to sell, it needs to fit with the audience. Both for your sake (you'll get much better ROI) and for mine (I don't come off as a money-grabbing idiot that'll sell anything for a few $$).

I'm curious to see where this goes. I also haven't decided yet what I'll do when there isn't anyone willing to sponsor this blog. Let's assume, for the time being, that I won't have to think about that case. :-)

Want to sponsor?

Cool!

Have a look at the explanation, the statistics, what you can expect and what kind of content you can promote here: sponsorships.

18 Jan 2017 8:15am GMT

17 Jan 2017

Planet Grep

Mattias Geniar: Despite revoked CAs, StartCom and WoSign continue to sell certificates

As it stands, the HTTPS "encrypted web" is built on trust. We use browsers that trust that Certificate Authorities secure their infrastructure and deliver TLS certificates (1) after validating and verifying the request correctly.

It's all about trust. Browsers trust those CA root certificates and in turn, they accept the certificates that the CA issues.

(1) Let's all agree to never call it SSL certificates ever again.

Revoking trust

Once in a while, Certificate Authorities misbehave. They might have bugs in their validation procedures that have led to TLS certificates being issued for domains the requester had no access to. It's happened for GitHub.com, Gmail, ... you can probably guess the likely targets.

When that happens, an investigation is performed -- in the open -- to ensure the CA has taken adequate measures to prevent it from happening again. But sometimes those CAs don't cooperate, as is the case with StartCom (StartSSL) and WoSign, whose certificates will start to show as invalid in the next Chrome update.

Google has determined that two CAs, WoSign and StartCom, have not maintained the high standards expected of CAs and will no longer be trusted by Google Chrome, in accordance with our Root Certificate Policy.

This view is similar to the recent announcements by the root certificate programs of both Apple and Mozilla.

Distrusting WoSign and StartCom Certificates

So Apple (Safari), Mozilla (Firefox) and Google (Chrome) are about to stop trusting the StartCom & WoSign TLS certificates.

From that point forward, those sites will greet visitors with a full-page certificate warning.

With Mozilla, Chrome & Safari, that's 80% of the browser market share blocking those Certificate Authorities.

Staged removal of CA trust

Chrome is handling the update sensibly: it'll start distrusting the most recent certificates first, and gradually block the entire CA.

Beginning with Chrome 56, certificates issued by WoSign and StartCom after October 21, 2016 00:00:00 UTC will not be trusted. [..]

In subsequent Chrome releases, these exceptions will be reduced and ultimately removed, culminating in the full distrust of these CAs.

Distrusting WoSign and StartCom Certificates

If you purchased a TLS certificate from either of those 2 CAs in the last 2 months, it won't work in Chrome, Firefox or Safari.
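
If you're not sure when your certificate was issued, or by which CA, you can check from the command line. A quick sketch with openssl (substitute your own domain for example.com):

$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -issuer -dates

If the issuer mentions StartCom or WoSign and the notBefore date is after October 21, 2016, your certificate is affected.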

Customer Transparency

Those 3 browsers have essentially just bankrupted those 2 CAs. Surely, if your certificates are not going to be accepted by 80% of the browsers, you're out of business -- right?

Those companies don't see it that way, apparently, as they still sell new certificates online.

This is pure fraud: they're willingly selling certificates that are known to stop working in all major browsers.

Things like that piss me off, because only a handful of IT experts know that those Certificate Authorities are essentially worthless. But they're still willing to accept money from unsuspecting individuals wishing to secure their sites.

I guess they proved once again why they should be distrusted in the first place.

Guilt by Association

Part of the irony is that StartCom, which runs StartSSL, didn't actually do anything wrong. But a few years ago, they were bought by WoSign. In that process, StartCom replaced its own process and staff with those of WoSign, essentially copying the bad practices that WoSign had.

If StartCom hadn't been bought by WoSign, they'd still be in business.

I'm looking forward to the days when we have an easy-to-use, secure, decentralized alternative to Certificate Authorities.

17 Jan 2017 8:30am GMT

16 Jan 2017

Planet Grep

Mattias Geniar: Google Infrastructure Security Design Overview

This is quite a fascinating document highlighting everything (?) Google does to keep its infrastructure safe.

And to think we're still trying to get our users to generate random, unique, passphrases for every service.

Secure Boot Stack and Machine Identity

Google server machines use a variety of technologies to ensure that they are booting the correct software stack. We use cryptographic signatures over low-level components like the BIOS, bootloader, kernel, and base operating system image. These signatures can be validated during each boot or update. The components are all Google-controlled, built, and hardened. With each new generation of hardware we strive to continually improve security: for example, depending on the generation of server design, we root the trust of the boot chain in either a lockable firmware chip, a microcontroller running Google-written security code, or the above mentioned Google-designed security chip.

Each server machine in the data center has its own specific identity that can be tied to the hardware root of trust and the software with which the machine booted. This identity is used to authenticate API calls to and from low-level management services on the machine.

Source: Google Infrastructure Security Design Overview

16 Jan 2017 9:50pm GMT

Dries Buytaert: Acquia retrospective 2016

As my loyal blog readers know, at the beginning of every year I publish a retrospective to look back and take stock of how far Acquia has come over the past 12 months. If you'd like to read my previous annual retrospectives, they can be found here: 2015, 2014, 2013, 2012, 2011, 2010, 2009. When read together, they provide a comprehensive overview of Acquia's trajectory from its inception in 2008 to where it is today, nine years later.

The process of pulling together this annual retrospective is very rewarding for me as it gives me a chance to reflect with some perspective; a rare opportunity among the hustle and bustle of the day-to-day. Trends and cycles only reveal themselves over time, and I continue to learn from this annual period of reflection.

Crossing the chasm

If I were to give Acquia a headline for 2016, it would be the year in which we crossed the proverbial "chasm" from startup to a true leader in our market. Acquia is now entering its ninth full year of operations (we began commercial operations in the fall of 2008). We've raised $186 million in venture capital, opened offices around the world, and now employ over 750 people. However, crossing the "chasm" is more than achieving a revenue target or other benchmarks of size.

The "chasm" describes the difficult transition conceived by Geoffrey Moore in his 1991 classic of technology strategy, Crossing the Chasm. This is the book that talks about making the transition from selling to the early adopters of a product (the technology enthusiasts and visionaries) to the early majority (the pragmatists). If the early majority accepts the technology solutions and products, they can make a company a de facto standard for its category.

I think future retrospectives will endorse my opinion that Acquia crossed the chasm in 2016. I believe that Acquia has crossed the "chasm" because the world has embraced open source and the cloud without any reservations. The FUD-era where proprietary software giants campaigned aggressively against open source and cloud computing by sowing fear, uncertainty and doubt is over. Ironically, those same critics are now scrambling to paint themselves as committed to open source and cloud architectures. Today, I believe that Acquia sets the standard for digital experiences built with open source and delivered in the cloud.

When Tom (my business partner and Acquia CEO) and I spoke together at Acquia's annual customer conference in November, we talked about the two founding pillars that have served Acquia well over its history: open source and cloud. In 2008, we made a commitment to build a company based on open source and the cloud, with its products and services offered through a subscription model rather than a perpetual license. At the time, our industry was skeptical of this forward-thinking combination. It was a bold move, but we have always believed that this combination offers significant advantages over proprietary software because of its faster rate of innovation, higher quality, freedom from vendor lock-in, greater security, and lower total cost of ownership.

Creating digital winners

Acquia has continued its evolution from a content management company to a company that offers a more complete digital experience platform. This transition inspired an internal project to update our vision and mission accordingly.

In 2016, we updated Acquia's vision to "make it possible for dreamers and doers to craft the digital world". To achieve this vision, we want to build "the universal platform for the world's greatest digital experiences".

We increasingly find ourselves at the center of our customer's technology and digital strategies, and they depend on us to provide the open platform to integrate, syndicate, govern and distribute all of their digital business.

The focus on any and every part of their digital business is important and sets us apart from our competitors. Nearly all of our competitors offer single-point solutions for marketers, customer service, online commerce or for portals. An open source model allows customers to integrate systems together through open APIs, which enables our technology to fit into any part of their existing environment. It gives them the freedom to pursue a best-of-breed strategy outside of the confines of a proprietary "marketing cloud".

Business momentum

We continued to grow rapidly in 2016, and it was another record year for revenue at Acquia. We focused on the growth of our recurring revenue, which includes new customers and the renewal and expansion of our work with existing customers. Ever since we started the company, our corporate emphasis on customer success has fueled both components. Successful customers mean renewals and references for new customers. Customer satisfaction remains extremely high at 96 percent, an achievement I'm confident we can maintain as we continue to grow.

In 2016, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the biggest positive move of all vendors in this year's Gartner Magic Quadrant for Web Content Management. There are now three distinct leaders: Acquia, Adobe and Sitecore. Of those leaders, Acquia is the only one that is open source, and the only one with a cloud-first strategy.

Over the course of 2016, Acquia welcomed an impressive roster of new customers, including Nasdaq, Nestle, Vodafone, iHeartMedia, Advanced Auto Parts, Athenahealth, National Grid UK and more. Exiting 2016, Acquia can count 16 of the Fortune 100 among its customers.

Digital transformation is happening everywhere. Only a few years ago, the majority of our customers were in either government, media and entertainment or higher education. In the past two years, we've seen a lot of growth in other verticals and today, our customers span nearly every industry from pharmaceuticals to finance.

To support our growth, we opened a new sales office in Munich (Germany), and we expanded our global support facilities in Brisbane (Queensland, Australia), Portland (Oregon, USA) and Delhi (India). In total, we now have 14 offices around the world. Over the past year we have also seen our remote workforce expand; 33 percent of Acquia's employees are now remote. They can be found in 225 cities worldwide.

Acquia's offices around the world. The world got more flat for Acquia in 2016.

We've also seen an evolution in our partner ecosystem. In addition to working with traditional Drupal businesses, we started partnering with the world's most elite digital agencies and system integrators to deliver massive projects that span dozens of languages and countries. Our partners are taking Acquia and Drupal into some of the world's most impressive brands, new industries and into new parts of the world.

Growing pains and challenges

I enjoy writing these retrospectives because they allow me to chronicle Acquia's incredible journey. But I also write them for you, because you might be able to learn a thing or two from my experiences. To make these retrospectives useful for everyone, I try to document both milestones and difficulties. To grow an organization, you must learn how to overcome your challenges and growing pains.

Rapid growth does not come without cost. In 2016 we made several leadership changes that will help us continue to grow. We added new heads of revenue, European sales, security, IT, talent acquisition and engineering. I'm really proud of the team we built. We exited 2016 in the market for new heads of finance and marketing.

Part of the Acquia leadership team at The Lobster Pool restaurant in Rockport, MA.

We adjusted our business levers to adapt to changes in the financial markets, which in early 2016 shifted from valuing companies almost solely focused on growth to a combination of growth and free cash flow. This is easier said than done, and required a significant organizational mindshift. We changed our operating plan, took a closer look at expanding headcount, and postponed certain investments we had planned. All this was done in the name of "fiscal fitness" to make sure that we don't have to raise more money down the road. Our efforts to cut our burn rate are paying off, and we were able to beat our targets on margin (the difference between our revenue and operating expenses) while continuing to grow our top line.

We now manage 17,000+ AWS instances within Acquia Cloud. What we once were able to do efficiently for hundreds of clients is not necessarily the best way to do it for thousands. Going into 2016, we decided to improve the efficiency of our operations at this scale. While more work remains to be done, our efforts are already paying off. For example, we can now roll out new Acquia Cloud releases about 10 times faster than we could at the end of 2015.

Lastly, 2016 was the first full year of Drupal 8 availability (it was formally released in November 2015). As expected, it took time for developers and the Drupal community to become familiar with its vast array of changes and new capabilities. This wasn't a surprise; in my DrupalCon keynotes I shared that I expected Drupal 8 to really take off in Q4 of 2016. Through the MAP program we committed over $1M in funds and engineering hours to help module creators upgrade their modules to Drupal 8. All told, Acquia invested about $2.5 million in Drupal code contributions in 2016 alone (excluding our contributions in marketing, events, etc). This is the most we have ever invested in Drupal and something I'm personally very proud of.

Product milestones

The components and products that make up the Acquia Platform.

Acquia remains an amazing place for engineers who want to build great products. We achieved some big milestones over the course of the year.

One of the largest milestones was the significant enhancements to our multi-site platform: Acquia Cloud Site Factory. Site Factory allows a team to manage and operate thousands of sites around the world from a single console, ensuring all fixes, upgrades and improvements are delivered responsibly and efficiently. Last year we added support for multiple codebases in Site Factory - which we call Stacks - allowing an organization to manage multiple Site Factories from the same administrative console and distribute the operation around the world over multiple data centers. It's unique in its ability and is being deployed globally by many multinational, multi-brand consumer goods companies. We manage thousands of sites for our biggest customers. Site Factory has elevated Acquia into the realm of very large and ambitious digital experience delivery.

Another exciting product release was the third version of Acquia Lift, our personalization and contextualization tool. With the third version of Acquia Lift, we've taken everything we've learned about personalization over the past several years to build a tool that is more flexible and easier to use. The new Lift also provides content syndication services that allow both content and user profile data to be reused across sites. When taken together with Site Factory, Lift permits true content governance and reuse.

We also released Lightning, Acquia's Drupal 8 distribution aimed at developers who want to accelerate their projects based on the set of tested and vetted modules and configurations we use ourselves in our customer work. Acquia's commitment to improving the developer experience also led to the release of both Acquia BLT and Acquia Pipelines (private beta). Acquia BLT is a development tool for building new Drupal projects using a standard approach, while Pipelines is a continuous delivery and continuous deployment service that can be used to develop, test and deploy websites on Acquia Cloud.

Acquia has also set a precedent of contributing significantly to Drupal. We helped with the release management of Drupal 8.1 and Drupal 8.2, and with the community's adoption of a new innovation model that allows for faster innovation. We also invested a lot in Drupal 8's "API-first initiative", whose goal is to improve Drupal's web services capabilities. As part of those efforts, we introduced Waterwheel, a group of SDKs which make it easier to build JavaScript and native mobile applications on top of Drupal 8's REST-backend. We have also been driving usability improvements in Drupal 8 by prototyping a new UX paradigm called "Outside-in" and by contributing to the media and layout initiatives. I believe we should maintain our focus on release management, API-first and usability throughout 2017.

Our core product, Acquia Cloud, received a major reworking of its user interface. That new UI is a more modern, faster and responsive user interface that simplifies interaction for developers and administrators.

The new Acquia Cloud user interface released in 2016.

Our focus on security reached new levels in 2016. In January we secured certification that we complied with ISO 27001: the international security and compliance standard for enterprise cloud frameworks. In April we were awarded our FedRAMP ATO from the U.S. Department of Treasury after we were judged compliant with the U.S. federal standards for cloud security and risk management practices. Today we have the most secure, reliable and agile cloud platform available.

We ended the year with an exciting partnership with commerce platform Magento that will help us advance our vision of content and commerce. Existing commerce platforms have focused primarily on the transactions (cart systems, payment processing, warehouse/supply chain integration, tax compliance, customer credentials, etc.) and neglected the customer's actual shopping experience. We've demonstrated with numerous customers that a better brand experience can be delivered with Drupal and Acquia Lift alongside these existing commerce platforms.

The wind in our sales (pun intended)

Entering 2017, I believe that Acquia is positioned for long-term success. Here are a few reasons why:

As I explained at the beginning of this retrospective, trends and cycles reveal themselves over time. After reflecting on 2016, I believe that Acquia is in a unique position. As the world has embraced open source and cloud without reservation, our long-term commitment to this disruptive combination has put us at the right place at the right time. Our investments in expanding the breadth of our platform with products like Acquia Lift and Site Factory are also starting to pay off.

However, Acquia's success is not only determined by the technology we back. Our unique innovation model, which is impossible to cultivate with proprietary software, combined with our commitment to customer success has also contributed to our "crossing of the chasm."

Of course, none of these 2016 results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for your support in 2016 - I can't wait to see what the next year will bring!

16 Jan 2017 6:30pm GMT

Serge van Ginderachter: Ansible Inventory 2.0 design rules

This is my third post in the Ansible Inventory series. See the first and the second posts for some background information.

Preliminary note: in this post, I try to give an example of a typical deployment inventory, and what rules we might need the inventory to abide by. Whilst I tried to keep things as simple as possible, I still feel this gets too complex. I'm not sure if it's just a complex matter, if it's me overcomplicating things, or if Ansible simply shouldn't try to handle things in such a convoluted way.

Let's take an example of how inventory could be handled for a LAMP setup. Let's assume we have these 3 sets of applications: the Apache-PHP applications themselves, the MySQL database clusters, and an Apache reverse proxy in front.

We also have 3 environments: development, testing and production.

We have 4 different PHP applications, A, B, C and D. We have 2 MySQL cluster instances, CL1 (for dev and testing) and CL2 (for production). We have a single reverse proxy setup that manages all environments.

The Apache-PHP applications get installed on three nodes, one per environment: node1 (dev), node2 (test) and node3 (prod).

For each role in Ansible (assume 1 role per application here), we have to define a set of variables (a template) that gets applied to the nodes. If we focus on the Apache-PHP apps for this example, the apache-php varset template gets instantiated 4 times, once for each of A, B, C and D. Assume the URL where the application gets published is part of each varset.

Each application gets installed on each node, respectively in one of the three environments. Each Apache-PHP node will need a list of those 4 applications, so it can define the needed virtual host and put each application in its subdirectory. Where each application was just a set of key-values defining a single PHP app, we now need to listify those 4 varsets into a list that can be iterated over at the Apache config level.

Also, each Apache-RP node will need a list of applications, even when those applications are not directly installed on the reverse proxy nodes. The domain part (say contoso.com) is a specific domain for your organisation. Each application gets published beneath a specific context subfolder (contoso.com/appA, ..). For each environment we have a dedicated subdomain. We finally get 12 frontends: {dev,test,prod}.contoso.com/{appA,appB,appC,appD}. These 12 values must become part of a list of 12 values, and be exported to the reverse proxy nodes, together with the endpoint of the respective backend. (1)

Similarly CL1 needs a list of the applications in dev and test, and CL2 needs a list of applications in prod. We need a way to say that a particular variable that applies to a group of nodes, needs to be exported to a group of other nodes.

So, the initial varsets we had at the app level get merged at some point when applied to a node. In this example, merging means: make a list out of the different single applications. It also means overruling: a variable gets overruled by membership of a certain environment group (like for the subdomain part).

Something similar could happen for the PHP version. One app could need PHP 5, whilst another would need PHP 7, which could bring in a constraint that gets the applications deployed on separate nodes within the same environment.

Of course, this can get very complicated, very quickly. The key is to define some basic rules the inventory needs (merge dictionaries, listify varsets, overrule vars, export vars to other hosts) and try to keep things simple.

Allow me to summarize the rules I came up with.

In a nutshell: the inventory needs to be able to merge dictionaries, listify varsets, overrule vars, and export vars to other hosts.

(1) This is probably the point where service discovery becomes a better option

The post Ansible Inventory 2.0 design rules appeared first on Serge van Ginderachter.

16 Jan 2017 2:32pm GMT

Mattias Geniar: WordPress to get secure, cryptographic updates

Exciting work is being done on the WordPress auto-update system that will allow the WordPress team to sign each update.

That signature can be verified by each WordPress installation to guarantee you're installing the actual WordPress update and not something from a compromised server.
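
The proposal itself builds on libsodium signatures (more on that below), but conceptually it's a detached-signature scheme. A minimal sketch of the idea using openssl, with made-up file and key names:

# Release side: sign the update package with the project's private key
$ openssl dgst -sha256 -sign wp-release-private.pem -out update.zip.sig update.zip

# Each WordPress install: verify against the public key shipped in core
$ openssl dgst -sha256 -verify wp-release-public.pem -signature update.zip.sig update.zip

If the package was tampered with, or signed with any other key, verification fails and the update gets rejected.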

Compromising the WordPress Update Infrastructure

This work is being led by security researcher Scott Arciszewski from Paragon Initiative, a long-time voice in the PHP security community. He's been warning about the potential dangers of the WordPress update infrastructure for a very long time.

Scott and I discussed it in the SysCast podcast about Application Security too.

Since WordPress 3.7, support has been added to auto-update WordPress installations in case critical vulnerabilities are discovered. I praised them for that -- I really love that feature. It requires zero effort from the website maintainer.

But that obviously poses a threat, as Scott explains:

Currently, if an attacker can compromise api.wordpress.org, they can issue a fake WordPress update and gain access to every WordPress install on the Internet that has automatic updating enabled. We're two minutes to midnight here (we were one minute to midnight before the Wordfence team found that vulnerability).

Given WordPress's ubiquity, an attacker with control of 27% of websites on the Internet is a grave threat to the security of the rest of the Internet. I don't know how much infrastructure could withstand that level of DDoS.
#39309: Secure WordPress Against Infrastructure Attacks

Scott has already published several security articles, with Guide to Automatic Security Updates For PHP Developers arguably being the most important one for anyone designing and creating a CMS.

Just about every CMS, from Drupal to WordPress to Joomla, uses a weak update mechanism: if an attacker manages to take control of the update server(s), there's no additional proof they need to provide in order to issue new updates. This poses a real threat to the stability of the web.

Securing auto-updates

For WordPress, a federated authentication model is proposed.

It consists of 3 key areas, as Scott explains:

1. Notaries (WordPress blogs or other services that opt in to hosting/verifying the updates) will mirror a Merkle tree which contains (with timestamps and signatures):
--- Any new public keys
--- Any public key revocations
--- Cryptographic hashes of any core/extension updates

2. WordPress blogs will have a pool of notaries they trust explicitly. [...]

3. When an update is received from the server, after checking the signature against the WP core's public key, they will poll at least one trusted Notary [..]. The Notary will verify that the update exists and matches the checksum on file, and respond with a signed message containing:
--- The challenge nonce
--- The response timestamp
--- Whether or not the update was valid

This will be useful in the event that the WP.org's signing key is ever compromised by a sophisticated adversary: If they attempt to issue a silent, targeted update to a machine of interest, they cannot do so reliably [..].
#39309: Secure WordPress Against Infrastructure Attacks

This way, in order to compromise the update system, you need to trick the notaries into accepting the false update too. It's no longer merely dependent on the update system itself, but uses a set of peers to validate each of those updates.

Show Me The Code

The first patches have already been proposed; it's now up to the WordPress security team to evaluate them and address any concerns they might have: patch1 and patch2.

Most of the work comes from a sodium_compat PHP package that implements the features provided by libsodium, a modern and easy-to-use crypto library.


Source: #39309

Because WordPress supports PHP 5.2.4 and higher (this is an entirely different security threat to WordPress, but let's ignore it for now), a pure PHP implementation of libsodium is needed, since the binary PHP extensions aren't supported that far back. The pecl/libsodium extension requires at least PHP 5.4.

Here's hoping the patches get accepted and can be used soon, as I'm pretty sure there are a lot of parties interested in getting access to the WordPress update infrastructure.

16 Jan 2017 7:30am GMT

15 Jan 2017

feedPlanet Grep

FOSDEM organizers: Guided sightseeing tours

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like in previous years, FOSDEM is organising sightseeing tours.

15 Jan 2017 3:00pm GMT

14 Jan 2017

Planet Grep

Frank Goossens: Music from Our Tube; Bill Evans’ Peace Piece

Although I feel time has not been kind to Laurent Garnier's music (it sounds rather dated now, but it could just as well be me getting old), I do love listening to his "It is what it is" radio show on Radio Meuh, which he dutifully also uploads to his SoundCloud account. Garnier's musical taste, which he displays in his radio show, is so broad that every show there's at least one song that I get all excited about. This time it's a solo improvisation by Bill Evans from back in 1958, titled "Peace Piece". So beautiful!

Watch this video on YouTube.

14 Jan 2017 10:30am GMT

Xavier Mertens: [SANS ISC Diary] Backup Files Are Good but Can Be Evil

I published the following diary on isc.sans.org: "Backup Files Are Good but Can Be Evil".

Since we started to work with computers, we always heard the following advice: "Make backups!". Every time you have to change something in a file or an application, first make a backup of the existing resources (code, configuration files, data). But, if not properly managed, backups can be evil and increase the attack surface of your web application… [Read more]

14 Jan 2017 9:41am GMT

13 Jan 2017

Planet Grep

Lionel Dricot: Printeurs, the epic of a serial

One evening in August 2013, I started writing the beginning of a story that had been running through my head. It is always hard to say where ideas come from, but perhaps I had been evolving « Les non-humains », a pair of short stories written in 2009 and 2012, for which I had prepared a third volume without ever finishing it.

The theme of my story was fairly clear, and the title imposed itself immediately: Printeurs.

On the other hand, I had no plan, no structure. My only intention was to work in the sentence that would close the second episode:

"I am poor, but I know how to think like a rich woman. I am going to change the system."

My challenge? Publish one episode a week and see where the story would take me.

One night in September 2013, after five published episodes, I found myself in a hotel in downtown Milan with no inspiration and no desire to continue.

My fingers spontaneously started writing another story that had been stammering in my brain, one directly inspired by the testimonies of survivors of North Korean concentration camps and by the discovery that those prisoners produce toys and cheap goods being sold right now in the rest of the world.

With that very violent episode 6, Printeurs transformed. From a tame cyberpunk pulp serial, it became a dark, obscene dive into the most repulsive sides of our world.

The tone was set: Printeurs would not spare the reader's sensibilities.

Effortlessly, the two stories of Nellio and 689 carried on in parallel while gradually converging.

As the writing went on, the character of Eva changed purpose radically. Everything fell into place without planning, without any conscious effort on my part.

Unfortunately, Printeurs was pushed into the background several times, for months on end. But a few die-hard readers did not hesitate to ask for more, and I would like to thank them: asking for the next episode is the finest reward you could give me.

In the end, it took me three and a half years to finish the 51 episodes of Printeurs and bring the wanderings of Nellio, Eva and 689 to a close. I admit to taking a certain pride in an unintentional particularity: within a single coherent story, Printeurs affords itself the luxury of exploring, almost exhaustively, every possible interaction between body, consciousness and technology.

Finishing Printeurs is also a relief. In November 2013, on top of Printeurs, I took on the NaNoWriMo challenge with the goal of writing the novel « L'Écume du temps ».

The first 50,000 words of that novel did get written, but I never finished the rest, constantly torn between diving back into L'Écume du temps and into Printeurs.

So I have decided to send this first beta, unproofread version of Printeurs to everyone who supported my NaNoWriMo. With a solemn promise: from now on, I will dive back into L'Écume du temps, which is very different from, and much more structured than, Printeurs.

In the course of January, I will also send this first version of Printeurs in EPUB format to those who support me on Tipeee. For everyone else, I invite you to read the series on this blog, in the EPUB containing the first 19 episodes, or on Wattpad. The final episodes will be published progressively on this blog.

I hope you will have as much pleasure reading Printeurs as I had writing it. And if you have a publisher in your address book, know that I would be delighted to hold in my hands a paper version of what Printeurs has become and will become.

Thank you for your support, your corrections, your encouragement and... happy reading!

Photo by Eyesplash.

This text was published thanks to your regular support on Tipeee and on PayPal. I am @ploum: blogger, writer, speaker and futurologist. You can follow me on Facebook or Medium, or contact me.

This text is published under the CC-BY BE license.

13 Jan 2017 1:50pm GMT

Xavier Mertens: [SANS ISC Diary] Who’s Attacking Me?

I published the following diary on isc.sans.org: "Who's Attacking Me?".

I started to play with a nice reconnaissance tool that could be helpful in many cases - offensive as well as defensive. "IVRE" ("DRUNK" in French) is a tool developed by the CEA, the Alternative Energies and Atomic Energy Commission in France. It's a network reconnaissance framework that includes… [Read more]

13 Jan 2017 10:54am GMT

11 Jan 2017

Planet Grep

Mattias Geniar: Staying Safe Online – A short guide for non-technical people

Help, my computer's acting up!

If you know a thing or two about computers, chances are you've got an uncle, aunt, niece or nephew who's asked for your help in fixing a computer problem. If the problem wasn't hardware related, it was most likely malware, a virus, ransomware or drive-by installations of internet toolbars.

Instead of fixing the problems after the infections, let's do some preventive work: this is a guide to staying safe online, for non-technical people. That means it won't cover things like 2FA, because I think the problem is much more basic: how to keep your computer and information clean and safe.

The goal is for this to be a guide you can share with your relatives, resulting in them having a safer computer with fewer problems that need fixing. If you agree with these pointers, go ahead and share the link!

If you are said relative or friend: this won't take more than 30 minutes and you don't need to be a wizard IT guru to implement these fixes. Please, take your time and configure your computer, you'll be much safer because of it.

Enable auto-updates on all your devices

The number 1 problem with broken or infected computers is outdated software. Yes, an update can break your system, but you're much more likely to end up with a compromised computer through a lack of updates than the other way around.

For all your devices, enable auto-updates.

Enabling auto-updates usually isn't very hard. Go to your settings screen and find the Updates or Upgrades section. Mark the checkbox that says "Install updates automatically".

Don't reuse your passwords or PINs, use passphrases

But Mattias, I can't remember more than 5 different passwords!

Good. Neither can I! That's why I have tools that help me with that. Tools like 1Password.

It works really simply: you remember one master password, and the tool remembers all the others for you.

Here's the reasoning: your password right now is probably something like "john1". You added the number 1 a few years ago, because some website insisted that you add a numerical value to your password.

Or you're at john17, because every few months work makes you change your password and you increment the number. But these passwords are insanely easy to guess, hack or brute-force. Don't bother; you might as well not have a password.

There's a famous internet comic that explains this very well, but it's all very nerdy. The summary is: you're better off using passphrases instead of passwords.

After you've installed 1Password, it will ask you for the master password of your password vault. Make it a sentence. Did you know you can use spaces in a password? And commas, exclamation marks and question marks? You could use "I followed Mattias his security advice!" as a password and it would be safe as hell. Spaces and exclamation mark included.

It's got so many characters, even the fastest computers in the world would take ages to guess.
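
To put a rough number on "ages" (my own back-of-the-envelope estimate, not from the original post): a 35-character sentence using only lowercase letters and spaces draws from 27 possible symbols per position, so the number of possible passphrases is

27^35 ≈ 1.3 × 10^50

Even at a trillion guesses per second, trying them all would take on the order of 10^30 years.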

Now, whenever you need to make a new account for a photo printing service, an online birth list, your kids' high school, ... don't reuse your same old password; let 1Password generate a password for you.

Next time you visit that website, you don't have to remember the password: 1Password can tell you.

If you've reused your current password on Facebook, Gmail and all other websites, now would be a very good time to reset them all to something completely random, managed in your 1Password.

Install an adblocker in your browser

First, you should be using either Google Chrome or Firefox. If you're using Internet Explorer, Edge, Safari or something else, you might want to switch. (Note: it's not just personal preference; Internet Explorer is much more likely to be the target of a hack, and you need a browser that supports "extensions" to install an adblocker.)

You'll want to block ads on the web not just because they're ugly, but because they can contain viruses, and you'll probably end up browsing to websites that use shady advertising partners that are up to no good.

Now that you're on Chrome or Firefox, install uBlock Origin, probably the best adblocker you can get today.

Once you have it installed, you should see a red badge icon appear in your browser.

Don't bother with that little counter; as long as the icon is red, the plugin is active and blocking ads and other potential sources of malware for you.

If you have other adblockers, like Adblock Plus, go ahead and remove them. They slow your browser down, and you don't need those if you have uBlock Origin.

Offsite Back-ups

Do you like the data on your computer? Those pictures of your grandchildren, the Excel sheets with your expenses and the biography you've been working on for years? Then you'll definitely care about keeping your data safe in case of a disaster.

Disasters can come in many forms: hardware failure, ransomware, a virus, theft, your house burning down, ...

If you take copies to a USB disk once in a while: good for you. But it's not enough. You want to store your back-ups outside your home.

And it isn't as complex as it sounds. The only catch: it costs a bit of money. But trust me, your data is worth it if you ever need to restore a back-up.

My recommendation for an easy-to-use tool is Backblaze: it's $5 per month, fixed price, for unlimited storage.

You install the tool (again: click next, next & next), and it'll back up all the files on your computer and send them to the cloud, encrypted. If you ever need to restore them, log in to your Backblaze account and download the files.

Remember, your back-up account contains all your valuable data, so choose a strong passphrase for the account and save it in your 1Password!

Odd emails from strangers or relatives

These are what we call phishing emails: someone sends you an email that looks very legitimate and tries to fool you into browsing to a website that isn't real.

The most likely targets are: your banking website, electricity or phone bill websites, your Facebook/Gmail/... account page, ...

If you receive an e-mail and you're not sure what to do: reply and ask for clarification. If it's a spammer or someone trying to trick you, chances are they won't reply.

If they replied and you're still not convinced: forward it to someone you know in IT; they'll happily say "yes" or "no" to determine the safety and validity of the email. In most cases, we can tell really quickly, because we see them all the time.

About those porn sites ...

Yes, the P word. I have to go there.

If my history of fixing other people's computers has taught me one thing, it's this: if your PC is infected with a virus, it's probably because you visited a dirty website.

So let's make a deal and agree never to discuss this in public.

And if a link looks shady: don't click on it. Please.

Want to go beyond these steps and optimally secure your computer? Check out DecentSecurity.com and follow their guides.


If you found these tips useful, please help me spread awareness by sharing this post on your social network (Facebook, Twitter, ...) using the buttons below this post.

Good luck!

11 Jan 2017 10:12pm GMT

Mattias Geniar: A collection of Drupal Drush one liners and commands

Some quick one-liners that can come in handy when using drush, Drupal's command line interface.

Place all websites in Drupal maintenance mode

$ drush @sites vset site_offline 1

Place a specific multi-site website in maintenance mode

$ drush -l site_alias_name vset site_offline 1

Take all sites out of maintenance mode

$ drush @sites vset site_offline 0
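
To verify the current value of that variable on all sites (handy before and after flipping it), drush's vget counterpart should do the trick; a quick check, not from the original list:

$ drush @sites vget site_offline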

Set the cache lifetime to 600 seconds and the page cache maximum age to 1800 seconds for all sites

$ drush @sites vset cache_lifetime 600
$ drush @sites vset page_cache_maximum_age 1800

List all sites in a multi-site drupal setup

$ drush @sites status
You are about to execute 'status' non-interactively (--yes forced) on all of the following targets:
  /var/www/html/site#multisite_A
  /var/www/html/site#multisite_B
Continue?  (y/n):
...

Flush all caches (varnish, memcached, ...)

$ drush @sites cc all

Disable Drupal's cron

$ drush @sites vset cron_safe_threshold 0

If you know any more life-saving drush commands (you know the kind, the ones you need when all hell breaks loose and everything's offline), let me know!

11 Jan 2017 8:46pm GMT