22 Dec 2014

Planet Grep

Mattias Geniar: Replacing Software Stacks Is Never The Solution

Blindly replacing ntpd with an alternative, that you have no experience with, for such a crucial service, does not seem like a good plan.

- ma.ttias.be (@mattiasgeniar) December 22, 2014

This tweet referred to blindly replacing the ntpd daemon with alternatives such as tlsdate and OpenNTPD, as a result of the vulnerabilities found in ntpd.

While I am in no way downplaying the security risks and the impact of those ntpd vulnerabilities, especially combined with the recent CVE-2014-9322 that allows local user privilege escalation in recent RHEL/CentOS kernels, it is not worth completely abandoning a service overnight and blindly running to an alternative.

For instance, I saw a number of tweets with "suggestions" to fix these vulnerabilities with the following one-liner.

apt-get remove ntp && apt-get install tlsdate

This indeed removes ntpd. And it indeed installs tlsdate, which does not have CVE-2014-9295. Short-term, yes it's a fix.

You may no longer realise it, as most of this is automated or abstracted away behind a config management system of sorts, but ntpd is a crucial part of your server. It's as important as DNS resolving.

Should you really just replace it with a piece of software you don't know? Are you monitoring tlsdate? Did you configure tlsdate properly? Do you know how to troubleshoot tlsdate? Did you fine-tune the tlsdate configs to your needs? Do you have years of experience with tlsdate, as you do with ntpd?
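
For illustration, these are the kind of quick ntpd sanity checks most teams already have scripted or memorised (not an exhaustive monitoring setup, just an example):

$ ntpq -p     # list peers, offsets and jitter for the local ntpd
$ ntpstat     # exit code reflects sync status, handy for monitoring checks

Whatever the tlsdate equivalents are, that knowledge and tooling would have to be rebuilt from scratch.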

This doesn't only apply to ntpd, but to the recent OpenSSL-to-LibreSSL fork as well. Why is it that as soon as a security vulnerability is found, everybody jumps ship to an alternative, without investing the resources to fix the problems in the first place? Do you really think the alternatives don't have security loopholes?

Besides the shortsighted tweets and remarks, there are valid, well-supported arguments for migrating away from ntpd. You know, thoughts that don't just occur overnight.

But forking projects and replacing crucial services without rational thinking only creates a greatly fragmented landscape in the open source community that nobody benefits from. And I'm aware that some projects are flawed by design, especially since they were designed over a decade ago. But even those projects can receive patches, bugfixes and refactored code to improve the quality.

The only time you should abandon a software project is after you have carefully considered the alternatives, gained experience with them in a test environment and know how to monitor, secure and debug the new software. Not the day after a vulnerability release, as "a fix" to the problem. Abandoning a software stack is (almost) never the solution.

The post Replacing Software Stacks Is Never The Solution appeared first on ma.ttias.be.

Related posts:

  1. Groundrules for when compiling applications from source Today I got thinking about a few rules that I...
  2. Automating the Unknown While Config Management isn't new as a concept, it is...
  3. My "Real Security in a Virtual Environment" Presentation A few days ago, I gave a talk at InfoSecurity,...

22 Dec 2014 5:10pm GMT

Dries Buytaert: Attitude beats experience

The older I get, the quicker the years seem to fly by. As I begin to reflect on a great 2014, one thing is crystal clear again. People are the most important thing to any organization. Having a great team is more important than having a great idea. A good team will figure out how to make something great happen; they'll pivot, evolve and claw their way to success. I see it every day at Acquia, the Drupal Association or the Drupal community. I'm fortunate to be surrounded by so many great people.

By extension, recruiting is serious business. How do you figure out if someone is a great fit for your organization? Books have been written about finding and attracting the right people, but for me the following quote from Dee Hock, the founder of Visa, sums it up perfectly.

"Hire and promote first on the basis of integrity; second, motivation; third, capacity; fourth, understanding; fifth, knowledge; and last and least, experience. Without integrity, motivation is dangerous; without motivation, capacity is impotent; without capacity, understanding is limited; without understanding, knowledge is meaningless; without knowledge, experience is blind." - Dee Hock, founder of Visa.

Most hiring managers get it wrong and focus primarily on experience. While experience can be important, attitude is much more important. Attitude, not experience, is what creates a strong positive culture and what turns users and customers into raving fans.

22 Dec 2014 4:23pm GMT

21 Dec 2014

Planet Grep

Mattias Geniar: Explicitly Approving (Whitelisting) Cookies in Varnish With Libvmod-Cookie

In all my previous Varnish 3.x configs, I've always used blacklisting as the way of handling cookies: you explicitly tell which cookies you want to remove in vcl_recv, and all others remain. But just as with security measures, whitelisting is better than blacklisting.

Even if you fully manage your site and all code, you may not have full control over 3rd party (client-side) advertisers that use tracking cookies. And those cookies may, even if you don't approve of the method, be placed under your domain. So the next request to your site suddenly includes (random) tracking cookies, unique for each visitor, and it destroys the caching in vcl_hash.

Please note this guide is focused on Varnish 3.x. Varnish 4.x will have the Cookie VMOD available by default, no custom compiles required!

Blacklisting / Removing cookies in Varnish

This is the common method of removing cookies in vcl_recv.

set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");

And you would repeat that line 1, 10 or 100 times, depending on what cookies you want to remove.
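
For example, a typical blacklist quickly grows into something like this (the cookie names are purely illustrative):

set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "__gads=[^;]+(; )?", "");

Every new tracking cookie that shows up means another line to add, which is exactly what the whitelisting approach below avoids.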

Implementing a whitelist of allowed cookies

In order to use a whitelisting approach, you can use the libvmod-cookie VMOD for Varnish 3.x. It allows more fine-grained control over what cookies are preserved and which ones get removed.

In order to use the VMOD, you need to compile it from source. And to compile the VMOD from source, you also need the Varnish source files somewhere on your system. You can still keep the RPM packages from Varnish installed, but the source is needed to compile the VMOD against it.

Preparing the Varnish source

In this guide, I'll use Varnish 3.0.6 as the base version to compile against. Download the source and run make to build the binary files, but do not run make install, as you want to keep the packages from the upstream Varnish repo intact.

$ cd /usr/local/src
$ wget "https://repo.varnish-cache.org/source/varnish-3.0.6.tar.gz"
$ tar xzvf varnish-3.0.6.tar.gz
$ cd varnish-3.0.6
$ ./configure
$ make

Now you have the Varnish source and a built binary available in /usr/local/src/varnish-3.0.6. We'll use this to compile the VMOD against.

Download and install the libvmod-cookie varnish module

Next, download and compile the libvmod-cookie module.

$ cd /usr/local/src
$ wget "https://github.com/lkarsten/libvmod-cookie/tarball/3.0"
$ tar xzvf 3.0
$ cd lkarsten-libvmod-cookie-fe38614
$ ./configure VARNISHSRC=/usr/local/src/varnish-3.0.6 VMODDIR=/usr/lib64/varnish/vmods/
$ make && make install

The result is a vmod module installed in /usr/lib64/varnish/vmods/.

$ ls -alh /usr/lib64/varnish/vmods/
-rwxr-xr-x 1 root root  955 Dec 21 21:28 libvmod_cookie.la
-rwxr-xr-x 1 root root  42K Dec 21 21:28 libvmod_cookie.so
-rwxr-xr-x 1 root root  16K Oct 16 16:30 libvmod_std.so

libvmod_std is the standard library included with Varnish. libvmod_cookie is the newly built module, and you can now include the VMOD in your VCL code.

import cookie;

sub vcl_recv {
  ...
}

Whitelisting cookies using filter_except() in libvmod-cookie

Now that the VMOD is installed and ready for use, you can use the powerful filter_except() function to pass a comma-separated list of cookies to allow; all others will be removed.

sub vcl_recv {
  # Let the module parse the "Cookie:" header from the client
  cookie.parse(req.http.cookie);

  # Filter all except these cookies from it
  cookie.filter_except("cookie1,cookie2");

  # Set the "Cookie:" header to the parsed/filtered value, removing all unnecessary cookies
  set req.http.cookie = cookie.get_string();
}

Any other cookie besides cookie1 and cookie2 will be removed from the Cookie: header now.

To debug this and to test what cookies are removed and which ones remain, look at my post about seeing which cookies get stripped in the VCL.

Next up: figuring out how to pass regexes along. ;-)

Caveats

A few things to keep in mind:

  1. VMODs are compiled, so it's better to make packages out of them
  2. Since VMODs are compiled against a specific version, they need to match the Varnish version (so Varnish 3.0.5 from the RPM/Yum repos and VMODs compiled against the 3.0.6 source can mean trouble)
  3. For automating this at scale, you need the VMODs in your own repository
  4. The filter_except() call accepts plain cookie names, not regexes -- to match against regexes, you would need to loop over all values (a plain-VCL workaround is sketched below)
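
For completeness: a regex-capable whitelist is also possible in plain VCL, without the VMOD, by marking the cookies you want to keep and stripping everything else. A rough sketch (PHPSESSID and wordpress_* are just example names):

sub vcl_recv {
  if (req.http.Cookie) {
    # Prefix every cookie with ";" so they all look identical
    set req.http.Cookie = ";" + req.http.Cookie;
    set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
    # Mark the cookies to keep by giving them a "; " prefix
    set req.http.Cookie = regsuball(req.http.Cookie, ";(PHPSESSID|wordpress_[^=]*)=", "; \1=");
    # Remove every cookie that was not marked
    set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
    # Clean up leftover separators
    set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

    if (req.http.Cookie == "") {
      unset req.http.Cookie;
    }
  }
}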

To be continued!

The post Explicitly Approving (Whitelisting) Cookies in Varnish With Libvmod-Cookie appeared first on ma.ttias.be.

Related posts:

  1. Varnish tip: see which cookies are being stripped in your VCL Most Varnish configs contain a lot of logic to strip...
  2. Combine Apache's HTTP authentication with X-Forwarded-For IP whitelisting in Varnish Such a long title for a post. If you want...
  3. Useful Varnish 3.0 Commands: one-liners with varnishtop and varnishlog Here are some useful commands if you're toying around with...

21 Dec 2014 8:48pm GMT

Mattias Geniar: List The Files In A Yum/RPM Package

It's not possible with a default yum installation, but you can install the yum-utils package, which provides tools to list the contents of a package.

$ yum -y install yum-utils

Now the repoquery tool is available, which allows you to look into (installed and not-yet-installed) packages.

$ repoquery --list varnish-libs-devel
/usr/include/varnish
/usr/include/varnish/shmlog.h
/usr/include/varnish/shmlog_tags.h
/usr/include/varnish/stat_field.h
/usr/include/varnish/stats.h
/usr/include/varnish/varnishapi.h
...

Very useful to combine with yum whatprovides */something searches to find exactly the package you need!
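
For example, to track down and then inspect a package before installing it (using the varnishapi.h header from the listing above as the illustration):

# Find out which package ships the header you're looking for
$ yum whatprovides "*/varnishapi.h"

# Then list that package's contents before deciding to install it
$ repoquery --list varnish-libs-devel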

The post List The Files In A Yum/RPM Package appeared first on ma.ttias.be.

Related posts:

  1. Search yum for the content of a package Yum can do more things than just a yum search...
  2. Update A Specific Package With apt-get Probably default knowledge to most, but I didn't find it...
  3. Silly Yum Tricks: whatprovides, groups & repolist These are a few I've only learned recently. yum (what)provides...

21 Dec 2014 7:40pm GMT

Mattias Geniar: Setting HTTPS $_SERVER variables in PHP-FPM with Nginx

A typical Nginx setup uses fastcgi_pass directives to pass the request to the PHP-FPM daemon. If you were running an Apache setup, Apache would automatically set the HTTPS server variable, which PHP code can check via $_SERVER['HTTPS'] to determine whether the request came in over HTTP or HTTPS.

In fact, that's how most CMSes (WordPress, Drupal, ...) determine the server environment. They'll also use it for redirects from HTTP to HTTPS or vice versa, depending on the config. So the existence of the $_SERVER['HTTPS'] variable is pretty crucial.
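
As a rough illustration of the kind of check a CMS performs under the hood (a minimal sketch, not lifted from any particular project):

<?php
// Rough equivalent of the HTTPS detection most CMSes do
function request_is_https()
{
    return !empty($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) !== 'off';
}

if (request_is_https()) {
    // build https:// URLs, mark cookies as secure, skip the HTTP-to-HTTPS redirect, ...
}

If the variable is never set, such a check returns false and the CMS assumes plain HTTP, which typically ends in endless HTTP-to-HTTPS redirects.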

Nginx doesn't pass the variable by default to the PHP-FPM daemon when you use fastcgi_pass, but it is easily added.

A basic example in Nginx looks like this.

include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;

# Check if the PHP source file exists (prevents the cgi.fix_pathinfo=1 exploits)
if (-f $request_filename) {
    fastcgi_pass   backend_php; # This backend is defined elsewhere in your Nginx configs
}

The example above is a classic one that just passes everything to PHP. In order to make PHP-FPM aware of your HTTPS setup, you need to add a fastcgi_param environment variable to the config.

include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;

# Make PHP-FPM aware that this vhost is HTTPs enabled
fastcgi_param  HTTPS 'on';

# Check if the PHP source file exists (prevents the cgi.fix_pathinfo=1 exploits)
if (-f $request_filename) {
    fastcgi_pass   backend_php; # This backend is defined elsewhere in your Nginx configs
}

The solution is in the fastcgi_param HTTPS 'on'; line, which passes the HTTPS variable to the PHP-FPM daemon.
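
If the same server block handles both HTTP and HTTPS traffic, hard-coding 'on' obviously won't do. In that case, one option is to pass Nginx's built-in $https variable instead; it contains 'on' for SSL connections and an empty string otherwise (a sketch, assuming Nginx 1.1.11 or newer for the if_not_empty flag):

# Only send the HTTPS variable when the connection actually uses SSL
fastcgi_param  HTTPS $https if_not_empty;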

The post Setting HTTPS $_SERVER variables in PHP-FPM with Nginx appeared first on ma.ttias.be.

Related posts:

  1. Nginx returning blank white page on PHP parsed with FastCGI or PHP-FPM I've just spent a while debugging this, was tricky'er than...
  2. Guide: running NginX 1.0 with PHP-FPM 5.3 on CentOS 5.x This is a very easy/simple NginX + PHP-FPM guide. PHP-FPM...
  3. Porting standard Apache's mod_rewrite rules to Nginx Most webframeworks will provide you with a .htaccess file that...

21 Dec 2014 6:40pm GMT

FOSDEM organizers: First FOSDEM 2015 speaker interviews

Just like previous editions, we have performed some interviews with our main track speakers. To get up to speed with the various topics discussed in the main track talks, you can start reading the following interviews:

  1. Dana Jacobsen: Design and Implementation of a Perl Number Theory Module
  2. George Neville-Neil: Computers, Clocks and Network Time: Everything you never wanted to know about time
  3. James Pallister: Superoptimization: How fast can your code go?
  4. Simon Cozens: Introducing SILE: A New Typesetting System
  5. Stefan Marr: Building High-Performance Language Implementations With Low Effort

21 Dec 2014 3:00pm GMT

Mark Van den Borre: The Spirit of the 1914 Christmas Truce

The Spirit of the 1914 Christmas Truce

21 Dec 2014 8:04am GMT

19 Dec 2014

Planet Grep

Wouter Verhelst: joytest UI improvements

After yesterday's late night accomplishments, today I fixed up the UI of joytest a bit. It's still not quite what I think it should look like, but at least it's actually usable with a 27-axis, 19-button "joystick" (read: a PS3 controller). Things may disappear off the edge of the window, but you can scroll towards them. Also, I removed the names of the buttons and axes from the window, and installed them as tooltips instead. Few people will be interested in the factoid that "button 1" is a "BaseBtn4", anyway.

The result now looks like this:

If you plug in a new joystick, or remove one from the system, then as soon as udev finishes up creating the necessary device node, joytest will show the joystick (by name) in the treeview to the left. Clicking on a joystick will show that joystick's data to the right. When one pushes a button, the relevant checkbox will be selected; and when one moves an axis, the numbers will start changing.

I really should have some widget to actually show the axis position, rather than some boring numbers. Not sure how to do that.

19 Dec 2014 10:21pm GMT

Xavier Mertens: The Marketing of Vulnerabilities

There is a black market for vulnerabilities, nothing new about that! A brand new 0-day can be sold for huge amounts of money. The goal of this blog post is not to cover this market of vulnerabilities but the way some of them are disclosed today. It's just a reflection I had when reading some news about the RomPager vulnerability:

[Embedded tweet: 2015 predictions]

2014 is almost behind us and we faced some critical vulnerabilities in the last few months! While some of them affected very critical and widespread software components, some were also publicly released with all the classic components of a commercial marketing campaign.

Previously, vulnerabilities were disclosed on specific communication channels like mailing lists (full-disclosure being one of the best known). Then came social networks like Twitter (which remains a key player for broadcasting information) but, over the last few months, we saw more and more vulnerabilities disclosed with a full marketing package: a catchy name, a logo and supporting marketing material.

Vulnerabilities are referenced via assigned IDs. The most widely used reference system is called "CVE", or "Common Vulnerabilities and Exposures". Security professionals are always speaking about such identifiers. To give you an example, the RomPager vulnerability is referenced as CVE-2014-9222. But some vulnerabilities receive a name and all the marketing material associated with it. Here are some examples: Heartbleed, Poodle, Sandworm or, the latest, the Misfortune Cookie.

[Logos: Heartbleed, Misfortune Cookie, Sandworm, Poodle]

Such vulnerabilities are critical and affect millions of devices and, thanks to their marketing presence, they were also relayed by regular mass media to the general public, sometimes in a good way but sometimes with very bad coverage. Often, behind this marketing material, there are big players in the infosec landscape fighting to be the first to release the vulnerability.

While speaking about major vulnerabilities to a broader audience is of course a good initiative, it must be done in the right way. I'm afraid that more and more vulnerabilities will become known to the general public, but keep in mind that they are only the tip of the iceberg. New vulnerabilities are found every day and some of them are also very nasty! The graphic below gives you an idea of the number of CVE identifiers assigned per month in 2014 (7739 as of today!). As you can see, that's far more than the four vulnerabilities mentioned above.

[Graph: CVE identifiers assigned per month in 2014]

(Source: cvedetails.com)

To sum up: it's not only the "general public" vulnerabilities that must be addressed. All of them are important and could lead to a complete compromise of your infrastructure (remember: the weakest link). I hate marketing, in information security too! ;-)

19 Dec 2014 4:01pm GMT

Mattias Geniar: Sony vs. North Korea: 0 – 0?

The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don't speak traditional "Korean" in North Korea, they speak their own dialect and traditional Korean is forbidden.
Marc Rogers

And there are many more arguments for why North Korea would not be behind these recent Sony hacks.

The post Sony vs. North Korea: 0 - 0? appeared first on ma.ttias.be.

Related posts:

  1. Snakes On A Keyboard Now this is a very cool hardware mod. You have...
  2. Automating the Unknown While Config Management isn't new as a concept, it is...
  3. Code Quality & Code Requirements I'm a programmer by heart. If I don't do it...

19 Dec 2014 12:22pm GMT

18 Dec 2014

Planet Grep

Wouter Verhelst: Introducing libjoy

I've owned a Logitech Wingman Gamepad Extreme since pretty much forever, and although it's been battered over the years, it's still mostly functional. As a gamepad, it has 10 buttons. What's special about it, though, is that the device also has a mode in which a gravity sensor kicks in and produces two extra axes, allowing me to pretend I'm really talking to a joystick. It looks a bit weird though, since you end up playing your games by wobbling the gamepad around a bit.

About 10 years ago, I first learned how to write GObjects by writing a GObject-based joystick API. Unfortunately, I lost the code at some point due to an overzealous rm -rf call. I had planned to rewrite it, but that never really happened.

About a year back, I needed to write a user interface for a customer where a joystick would be a major part of the interaction. The code there was written in Qt, so I wrote an event-based joystick API in Qt. As it happened, I also noticed that jstest would output names for the actual buttons and axes; I had never noticed this because, due to my 10 buttons and 4 axes, which produce a lot of output by default, the jstest program would just scroll the names off my screen whenever I plugged it in. But the names are there, and it's not too difficult.

Refreshing my memory on the joystick API made me remember how much fun it is, and I wrote the beginnings of what I (at the time) called "libgjs", for "Gobject JoyStick". I didn't really finish it though, until today. I did notice in the mean time that someone else released GObject bindings for javascript and also called that gjs, so in the interest of avoiding confusion I decided to rename my library to libjoy. Not only will this allow me all kinds of interesting puns like "today I am releasing more joy", it also makes for a more compact API (compare joy_stick_open() against gjs_joystick_open()).

The library also comes with a libjoy-gtk that creates a GtkListStore* which is automatically updated as joysticks are added to and removed from the system; and a joytest program, a graphical joystick test program which also serves as an example of how to use the API.

still TODO:

What's there is functional, though.

Update: if you're going to talk about code, it's usually a good idea to link to said code. Thanks, Emanuele, for pointing that out ;-)

18 Dec 2014 11:29pm GMT

Mattias Geniar: Interviewing Systems Administrators

... it probably makes more sense for you to ask open ended questions about things you care about. "So, we have a lot of web servers here. What's your experience with managing them?"

and

We all have our favorite pet technologies, but most of us are able to put personal preferences aside in favor of the prevailing consensus. A subset of technologists are unable to do this. "You use Redis? Why?! It's a steaming pile of crap!"

There's a lot of truth in SysAdvent's blogpost about hiring systems administrators.

The post Interviewing Systems Administrators appeared first on ma.ttias.be.

Related posts:

  1. Your Daily Read in December: Sysadvent It's that time of the year again. One article for...
  2. Automating the Unknown While Config Management isn't new as a concept, it is...
  3. Why Programming Is Less Rewarding Than Designing Reactions towards designers: "Oooh that's shiny & pretty, I love...

18 Dec 2014 7:08pm GMT

FOSDEM organizers: Guided sightseeing tours

If you intend to bring your non-geek partner and/or kids to FOSDEM, they may be interested in exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

18 Dec 2014 3:00pm GMT

Joram Barrez: Activiti + Spring Boot docs and example

With the Activiti 5.17.0 release going out any minute now, one of the things we did was write down documentation on how to use this release together with Spring Boot. If you missed it, my Spring friend Josh Long and I did a webinar a while ago about this. You can find the new docs already […]

18 Dec 2014 9:18am GMT

Frederic Hornain: Red Hat 2015 Customer Priorities Survey


Customers reporting interest in cloud, containers, Linux, and OpenStack for 2015

More information at http://www.redhat.com/en/about/blog/red-hat-2015-customer-priorities-survey

Kind Regards

Frederic


18 Dec 2014 8:57am GMT

17 Dec 2014

Planet Grep

Mattias Geniar: Azure Cloud Outage Root Cause Analysis

I don't particularly enjoy outages, but I do like reading their root cause analyses afterwards. They're a valuable place to learn from the mistakes made and often share a lot of insight into (the technology behind) an organization that you normally wouldn't get to know.

And last November's Azure outage is no different: a very detailed write-up with enough internals to keep things interesting. The outage occurred as a result of planned maintenance to deploy an improvement to the storage infrastructure that would result in faster Storage Tables.

During this deployment, there were two operational errors:

1. The standard flighting deployment policy of incrementally deploying changes across small slices was not followed.

2. Although validation in test and pre-production had been done against Azure Table storage Front-Ends, the configuration switch was incorrectly enabled for Azure Blob storage Front-Ends.

As with most problems, they're human-induced. Technology doesn't often fail, except when engineers make mistakes or implement the technology in a bad way. In this case, a combination of several human errors was the cause.

In summary, Microsoft Azure had clear operating guidelines but there was a gap in the deployment tooling that relied on human decisions and protocol. With the tooling updates the policy is now enforced by the deployment platform itself.

Not everything can be solved with procedures. Even with every step clearly outlined, it still relies on engineers following every step to the letter, and not making mistakes. But we make mistakes. We all do.

It's just hoping those mistakes don't occur during critical times.

The post Azure Cloud Outage Root Cause Analysis appeared first on ma.ttias.be.

17 Dec 2014 9:28pm GMT