31 Jan 2016

Planet Gentoo

Kristian Fiskerstrand: Gentoo at FOSDEM 2016

Gentoo Linux was present at this year's Free and Open Source Software Developers' European Meeting (FOSDEM). For those not familiar with it, FOSDEM is a conference that gathers more than 5,000 developers for more than 600 presentations over a two-day span at the premises of the Université libre de Bruxelles. The presentations are both streamed live and recorded, so the archive can be browsed once published.

Hanno Böck, a name mostly heard in relation to the fuzzing project, was the only Gentoo Developer presenting a talk this year on the very important subject of security and how Gentoo can be used as a framework for running Address Sanitizer to detect security bugs: "Can we run C code and be safe?: A Linux system protected with Address Sanitizer".

For the first time in many years Gentoo had a stand, where we handed out buttons and stickers in addition to a LiveDVD.

Gentoo Boot

The Gentoo Ten team created a hybrid amd64/x86 "FOSDEM 2016 Special Edition" release for our users' benefit (thanks likewhoa!); 200 DVDs were printed, of which 155 had been distributed to potential users by the end of day one. A poster on the stand succinctly listed all the packages included on the LiveDVD, with highlights of packages familiar to many users. This also showcases one of the benefits of rolling release distributions: the versions are up to date with upstream releases.

Gentoo DVD package list

If the LiveDVD image is written to a USB stick instead of the handed-out DVDs, it also offers the option of using persistence to store changes on the stick. It uses aufs to layer a read-write file system on top of a read-only squashfs-compressed file system. This is great, because it allows you to make changes to the LiveDVD and have those changes persist across reboots.

As mentioned in a blog post by dilfridge the stand also attracted attention due to a comment involving Gentoo Linux by Lennart Poettering in his keynote speech as a distribution that doesn't use systemd by default. This fit nicely with one of our banners at the stand; "Gentoo Linux | Works even without systemd | choice included".


There was a lot of positive feedback from various users, and the stand functioned very nicely as a meeting place for all kinds of people; the atmosphere was good throughout the conference.

FOSDEM booth

As has become tradition, there was also a Gentoo dinner again this year with developers and users (thanks xaviermiller), a nice way to meet up and discuss everything in a relaxed setting.

31 Jan 2016 1:00pm GMT

30 Jan 2016


Andreas K. Hüttel: Gentoo at FOSDEM: Posters (systemd, arches)

Especially after Lennart Poettering made some publicity for Gentoo Linux in his keynote talk (unfortunately I missed it due to other commitments :), we've had a lot of visitors at our FOSDEM booth. So, because of popular demand, here are again the files for our posters. They are based on the great "Gentoo Abducted" design by Matteo Pescarin. Released under CC BY-SA 2.5 as the original. Enjoy!



30 Jan 2016 3:24pm GMT

29 Jan 2016


Matthew Thode: Stage4 tarballs, minimal and cloud

Where are they

The tarballs can be found in the normal place.


This is meant to be just what you need to boot, the disk won't expand itself, it won't even get networking info or set any passwords for you (no default password).

This tarball is supposed to be the base you generate more complex images from; it is what is going to be used by OpenStack's diskimage-builder.

The primary things it gives you are a kernel, a bootloader and sshd.

stage4-minimal spec


This was primarily targeted at use with OpenStack, but it should work with Amazon as well; both use cloud-init.

Network interfaces are expected to use DHCP. A couple of other useful things are installed as well: syslog, logrotate, etc.

By default cloud-init will take data (keys mainly) and set them up for the 'gentoo' user.
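As a rough illustration (the file path and values here are assumptions on my part, not taken from the stage4 itself), the user cloud-init acts on is controlled by its default_user setting:

```yaml
# /etc/cloud/cloud.cfg (illustrative fragment)
users:
  - default

system_info:
  default_user:
    name: gentoo        # SSH keys from the metadata service land here
    lock_passwd: true   # no password login, keys only
    shell: /bin/bash
```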

stage4-cloud spec


I'll be posting about the work being done to take these stages and build bootable images. At the moment I do have images available here.

openstack images

29 Jan 2016 5:03am GMT

26 Jan 2016


Hanno Böck: Safer use of C code - running Gentoo with Address Sanitizer

Address Sanitizer is a remarkable feature that is part of the gcc and clang compilers. It can be used to find many typical C bugs - invalid memory reads and writes, use after free errors etc. - while running applications. It has found countless bugs in many software packages. I'm often surprised that many people in the free software community seem to be unaware of this powerful tool.

Address Sanitizer is mainly intended to be a debugging tool. It is usually used to test single applications, often in combination with fuzzing. But as Address Sanitizer can prevent many typical C security bugs - why not use it in production? It doesn't come for free. Address Sanitizer takes significantly more memory and slows down applications by 50 - 100 %. But for some security sensitive applications this may be a reasonable trade-off. The Tor project is already experimenting with this with its Hardened Tor Browser.

One project I've been working on in the past months is to allow a Gentoo system to be compiled with Address Sanitizer. Today I'm publishing this and want to allow others to test it. I have created a page in the Gentoo Wiki that should become the central documentation hub for this project. I published an overlay with several fixes and quirks on Github.
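As a sketch of what this boils down to for a single package (the file names and package atom below are illustrative; the wiki page and overlay are the authoritative source), Portage's package.env mechanism can inject the needed compiler flags:

```shell
# /etc/portage/env/asan.conf (illustrative)
CFLAGS="${CFLAGS} -fsanitize=address -fno-omit-frame-pointer"
CXXFLAGS="${CXXFLAGS} -fsanitize=address -fno-omit-frame-pointer"
LDFLAGS="${LDFLAGS} -fsanitize=address"

# Then, in /etc/portage/package.env, assign it to packages:
# app-misc/example asan.conf
```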

I see this work as part of my Fuzzing Project. (I'm posting it here because the Gentoo category of my personal blog gets indexed by Planet Gentoo.)

I am not sure if using Gentoo with Address Sanitizer is reasonable for a production system. One thing that makes me uneasy about suggesting it for high security requirements is that it's currently incompatible with Grsecurity. But just creating this project has already caused me to find a whole number of bugs in several applications. Some notable examples include Coreutils/shred, Bash ([2], [3]), man-db, Pidgin-OTR, Courier, Syslog-NG, Screen, Claws-Mail ([2], [3]), ProFTPD ([2], [3]), ICU, TCL ([2]), Dovecot. I think it was worth the effort.

I will present this work in a talk at FOSDEM in Brussels this Saturday, 14:00, in the Security Devroom.

26 Jan 2016 12:40am GMT

25 Jan 2016


Michał Górny: Mangling shell options in ebuilds

A long time ago eutils.eclass was gifted with a set of terribly ugly functions to push/pop various variables and shell options. Those functions were written very badly, and committed without any review. As a result, a number of eclasses and ebuilds are now using that code without even understanding how bad it is.

In this post, I would like to briefly summarize how to properly and reliably save and restore the states of shell options. While the resulting code is a little longer than calls to the e*_push and e*_pop functions, it is much more readable, does not abuse eval, does not abuse global variables, and is more reliable.

Preferable solution: subshell scope

Of course, the preferable way of altering shell options is to do so in a subshell. This is the only way that reliably isolates the alterations from the parent ebuild environment. However, subshells are rarely desired - so this is something you'd rather reuse if it's already there than introduce just for the sake of shell option mangling.
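For illustration, a minimal sketch of the subshell approach - the option change evaporates with the subshell, so nothing needs to be restored by hand (set -f / noglob is used here since it is portable; the same holds for shopt options):

```shell
#!/bin/sh
# Option changes made inside a subshell are discarded when it exits,
# so the parent (ebuild) environment is never altered.
my_function() {
        (
                set -f   # disable globbing inside the subshell only
                # ... code relying on noglob goes here ...
        )
}

my_function
# back in the parent shell, globbing is still enabled
case $- in
        *f*) echo "noglob leaked" ;;
        *)   echo "parent unchanged" ;;
esac
```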

Mangling shopt options

Most of the 'new' bash options are mangled using shopt builtin. In this case, the -s and -u switches are used to change the option state, while the -p option can be used to get the current value. The current value is output in the form of shopt command syntax that can be called directly to restore the previous value.

my_function() {
        local prev_shopt=$(shopt -p nullglob)
        # prev_shopt='shopt -u nullglob' now
        shopt -s nullglob
        # ...
        # restore the previous state by executing the saved command
        ${prev_shopt}
}

Mangling set options

The options set using the set builtin can be manipulated in a similar way. While the builtin supports both short and long options, I strongly recommend using long options for readability. In fact, the long option names can be used through shopt with the additional -o parameter.

my_function() {
        local prev_shopt=$(shopt -p -o noglob)
        # prev_shopt='set +o noglob' now
        set -o noglob  # or shopt -s -o noglob
        # ...
        # restore the previous state by executing the saved command
        ${prev_shopt}
}

Mangling umask

The umask builtin returns the current octal umask when called with no parameters. Furthermore, the -p parameter can be used to get the full restoring command, alike the shopt -p output.

my_function() {
        local prev_umask=$(umask)
        # prev_umask=0022 now
        umask 077
        # ...
        umask "${prev_umask}"
}

alternative_function() {
        local prev_umask=$(umask -p)
        # prev_umask='umask 0022' now
        umask 077
        # ...
        # restore the previous state by executing the saved command
        ${prev_umask}
}

Mangling environment variables

The eutils hackery went as far as to reinvent local variables using… global stacks. Not that it makes any sense. Whenever you want to change a variable's value or attributes, or just unset it temporarily, just use local variables. If the change needs to apply only to part of a function, create a sub-function and put the local variable inside it.

While at it, please remember that bash does not support local functions. Therefore, you need to namespace your functions to avoid collisions and unset them after use.

my_function() {
        # unset FOO in local scope (this also prevents it from being exported)
        local FOO
        # 'localize' bar for modifications, preserving value
        local bar="${bar}"

        my_sub_func() {
                # export LC_ALL=POSIX in function scope
                local -x LC_ALL=POSIX
                # ...
        }
        my_sub_func
        # unset the function after use
        unset -f my_sub_func
}

Update: mangling shell options without a single subshell

(added on 2016-01-28)

izabera has brought it to my attention that the shopt builtin supports -q option to suppress output and uses exit statuses to return the original flag state. This makes it possible to set and unset the flags without using a single subshell or executing returned commands.

Since I do not expect most shell script writers to use such a long replacement, I present it merely as a curiosity.

my_setting_function() {
        shopt -q nullglob
        local prev_shopt=${?}
        shopt -s nullglob

        # ...

        [[ ${prev_shopt} -eq 0 ]] || shopt -u nullglob
}

my_unsetting_function() {
        shopt -q extquote
        local prev_shopt=${?}
        shopt -u extquote

        # ...

        [[ ${prev_shopt} -eq 0 ]] && shopt -s extquote
}

25 Jan 2016 10:46am GMT

24 Jan 2016


Jan Kundrát: Trojita 0.6 is released

Hi all,
we are pleased to announce version 0.6 of Trojitá, a fast Qt IMAP e-mail client. This release brings several new features as well as the usual share of bugfixes:

This release has been tagged in git as "v0.6". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

This release is named after the Aegean island Λέσβος (Lesvos). Jan was there for the past five weeks, and he insisted on mentioning this challenging experience.

The Trojitá developers

24 Jan 2016 11:14am GMT

22 Jan 2016


Andreas K. Hüttel: Please test www-apache/mod_perl-2.0.10_pre201601

We're trying to get both Perl 5.22 and Apache 2.4 stable on Gentoo these days. One thing that would be really useful is to have a www-apache/mod_perl that works with all current Perl and Apache versions... and there's a candidate for that: a snapshot of what is hopefully going to be mod_perl-2.0.10 pretty soon. So...

Please keyword (if necessary) and test www-apache/mod_perl-2.0.10_pre201601!!! Feedback for all Perl and Apache versions is very much appreciated. Gentoo developers can directly edit our compatibility table with the results, everyone else please comment on this blog post or file bugs in case of problems!

Please always include exact www-servers/apache, dev-lang/perl, and www-apache/mod_perl versions!

22 Jan 2016 6:39pm GMT

18 Jan 2016


Matthew Thode: Of OpenStack and SSL

SSL in vanilla OpenStack

The nature of OpenStack projects is largely like that of projects in Gentoo. Even though they are all under the OpenStack umbrella, that doesn't mean they all have to work the same way, or even work together.

For instance, nova has the ability to do ssl itself: you can define a CA and a public/private keypair. Glance (last time I checked) doesn't do ssl itself, so you must offload it. Other services might do ssl themselves, but not in the same way nova does.

This means that the most 'standard' setup would be to not run ssl at all, but that isn't exactly desirable. So we run an ssl reverse proxy instead.

Basic Setup

Configs and Tuning

General Config for All Services/Sites

This is the basic setup for each of the OpenStack services; the only difference between them will be what goes in the location subsection.

server {
    listen [LOCAL_PUBLIC_IPV6]:PORT;
    server_name name.subdomain.example.com;
    access_log /var/log/nginx/keystone/access.log;
    error_log /var/log/nginx/keystone/error.log;

    ssl on;
    ssl_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    ssl_certificate_key /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header Public-Key-Pins 'pin-sha256="PUB_KEY_PIN_SHA"; max-age=2592000; includeSubDomains';
    ssl_dhparam /etc/nginx/params.4096;
    resolver TRUSTED_DNS_SERVER;
    resolver_timeout 5s;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "default-src 'self' https: wss:;";
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        # this changes depending on the service
    }
}

Keystone and Uwsgi

It turns out keystone has switched to uwsgi for its service backend. This is good because it means we can have the web server connect to that; no more trying to do it all by itself. I'll leave the setting up of uwsgi itself as an exercise to the reader :P

This config has a few extra things, but it is currently what I know to be 'secure' (a similar config on this blog gets an A+ on all those ssl test things). It's the last location piece that changes the most between services.

location / {
    uwsgi_pass unix:///run/uwsgi/keystone_admin.socket;
    include /etc/nginx/uwsgi_params;
    uwsgi_param SCRIPT_NAME admin;
}


Glance just needs one thing on top of the general proxying: client_max_body_size 0; in the main server stanza, so that you can upload images without being cut off at some low size.

client_max_body_size 0;
location / {
    # plus the usual proxy_pass for the glance API
}


The services for nova just need the basic proxy_pass line. The only exception is novnc; it needs some proxy headers passed.

location / {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    # websocket proxying also needs the matching Connection header
    proxy_set_header Connection "upgrade";
}


Rabbit is fairly simple: you just need to enable ssl and disable the plaintext port (setting up the rest of your config of course).

    {ssl, [{versions, ['tlsv1.2', 'tlsv1.1']}]},
    {rabbit, [
        {tcp_listeners, []},
        {ssl_listeners, [5671]},
        {ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/CA_CERT.pem"},
                       {certfile,  "/etc/rabbitmq/ssl/PUB_KEY.pem"},
                       {keyfile,   "/etc/rabbitmq/ssl/PRIV_KEY.key"},
                       {versions, ['tlsv1.2', 'tlsv1.1']}]}
    ]}

Openstack Configs

The OpenStack configs can differ slightly, but they are all mostly the same now that they are using the same libraries (the oslo stuff).

General Config

auth_uri = https://name.subdomain.example.com:5000
auth_url = https://name.subdomain.example.com:35357

rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True


osapi_compute_listen =
metadata_listen =
novncproxy_host =
enabled_apis = osapi_compute, metadata
novncproxy_base_url = https://name.subdomain.example.com:6080/vnc_auto.html
# the following only on the 'master' host
vncserver_proxyclient_address =
vncserver_listen =

host = name.subdomain.example.com
protocol = https
api_servers = https://name.subdomain.example.com:9292

url = https://name.subdomain.example.com:9696
auth_url = https://name.subdomain.example.com:35357


# api-servers get this
osapi_volume_listen =

# volume-servers and api-servers get this




# api
bind_host =
registry_host = name.subdomain.example.com
registry_port = 9191
registry_client_protocol = https


# cache
registry_host = name.subdomain.example.com
registry_port = 9191


# registry
bind_host =
rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True


# scrubber
registry_host = name.subdomain.example.com
registry_port = 9191


# neutron.conf
bind_host =
nova_url = https://name.subdomain.example.com:8774/v2

auth_url = https://name.subdomain.example.com:35357


# metadata_agent.ini
nova_metadata_ip = name.subdomain.example.com
nova_metadata_protocol = https

18 Jan 2016 6:00am GMT

16 Jan 2016


Diego E. Pettenò: TEXTRELs (Text Relocations) and their impact on hardening techniques

You might have seen the word TEXTREL thrown around security or hardening circles, or used in Gentoo Linux installation warnings, but the documentation around this term is not very useful for understanding why text relocations are a problem. So I've been asked to write something about it.

Let's start with taking apart the terminology. TEXTREL is jargon for "text relocation", which is once again more jargon, as "text" in this case means "code portion of an executable file." Indeed, in ELF files, the .text section is the one that contains all the actual machine code.

As for "relocation", the term is related to dynamic loaders. It is the process of modifying the data loaded from the loaded file to suit its placement within memory. This might also require some explanation.

When you build code into executables, any named reference is translated into an address instead. This includes, among others, variables, functions, constants and labels - and also some unnamed references such as branch destinations on statements such as if and for.

These references fall into two main categories: relative and absolute references. This is the easiest part to explain: a relative reference takes some address as "base" and then adds or subtracts from it. Indeed, many architectures have a "base register" which is used for relative references. In the case of executable code, particularly with references to labels and branch destinations, relative references translate into relative jumps, which are relative to the current instruction pointer. An absolute reference is instead a fully qualified pointer to memory, well at least to the address space of the running process.

While absolute addresses are kinda obvious as a concept, they are not very practical for a compiler to emit in many cases. For instance, when building shared objects, there is no way for the compiler to know which addresses to use, particularly because a single process can load multiple objects, and they need to all be loaded at different addresses. So instead of writing to the file the actual final (unknown) address, what gets written by the compiler first - and by the link editor afterwards - is a placeholder. It might sound ironic, but an absolute reference is then emitted as a relative reference based upon the loading address of the object itself.

When the loader takes an object and loads it to memory, it'll be mapped at a given "start" address. After that, the absolute references are inspected, and the relative placeholder resolved to the final absolute address. This is the process of relocation. Different types of relocation (or displacements) exist, but they are not the topic of this post.

Relocations as described up until now can apply to both data and code, but we single out code relocations as TEXTRELs. The reason for this is to be found in mitigation (or hardening) techniques. In particular, what is called W^X, NX or PaX. The basic idea of this technique is to disallow modification to executable areas of memory, by forcing the mapped pages to either be writable or executable, but not both (W^X reads "writable xor executable".) This has a number of drawbacks, which are most clearly visible with JIT (Just-in-Time) compilation processes, including most JavaScript engines.

But besides the JIT problem, there is also the problem of relocations happening in the code section of an executable. Since the relocations need to be written to, it is not feasible (or at least not easy) to give those pages exclusively writable or executable access. Well, there are theoretical ways to produce that result, but it complicates memory management significantly, so the short version is that, generally speaking, TEXTRELs and W^X techniques don't go well together.
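Whether a given object actually contains text relocations can be checked by looking for the TEXTREL tag in its dynamic section - a minimal sketch using binutils' readelf (the helper name is mine; Gentoo's pax-utils offers scanelf -qT for the same job):

```shell
#!/bin/sh
# Report whether an ELF object carries the TEXTREL dynamic tag.
# (has_textrel is a hypothetical helper name.)
has_textrel() {
    if readelf -d "$1" 2>/dev/null | grep -q TEXTREL; then
        echo "TEXTREL"
    else
        echo "clean"
    fi
}

has_textrel /bin/ls
```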

This is further complicated by another mitigation strategy: ASLR, Address Space Layout Randomization. In particular, ASLR fully defeats prelinking as a strategy for dealing with TEXTRELs - theoretically on a system that allows TEXTREL but has the address space to map every single shared object at a fixed address, it would not be necessary to relocate at runtime. For stronger ASLR you also want to make sure that the executables themselves are mapped at different addresses, so you use PIE, Position Independent Executable, to make sure they don't depend on a single stable loading address.

Usage of PIE was for a long while limited to a few select hardened distributions, such as Gentoo Hardened, but it's getting more common, as ASLR is a fairly effective mitigation strategy even for binary distributions where otherwise function offsets would be known to an attacker.

At the same time, SELinux also implements protection against text relocation, so you no longer need to have a patched hardened kernel to provide this protection.

Similarly, Android 6 is now disallowing the generation of shared objects with text relocations, although I have no idea if binaries built to target this new SDK version gain any more protection at runtime, since it's not really my area of expertise.

16 Jan 2016 5:40pm GMT

Michał Górny: GLEP67, or how packages are going to be maintained

The way packages are maintained in Gentoo has been evolving for quite some time already. So far all of that has been happening on top of old file formats which slowly started to diverge from the needs of Gentoo developers and become partially broken. The concept of herds has become blurry, with different developers defining it differently and some assuming it to be deprecated. Maintenance of herds by projects has been broken by the move of projects to the Wiki. Some projects have stopped using herds, others have been declaring them in metadata.xml in different ways.

The problem has finally reached the Gentoo Council and was discussed at the 2015-10-25 meeting (note: still no summary…). The Council attempted to address the different problems by votes, and to create a new solution by combining the results of those votes. However, it finally decided that it is not possible to create a good specification this way. Instead, the meeting brought two major points. Firstly, herds are definitely deprecated. Secondly, someone needs to provide a complete, consistent replacement in GLEP form.

This is how GLEP 67 came to be. It was based on the results of previous discussions, Council votes and a thorough analysis of different problems. It provides a complete, consistent system for maintaining packages and expressing the maintenance information. It has been approved by the Council on 2016-01-10, with a two-week deadline for preparing the switch.

Therefore, on 2016-01-24 Gentoo is going to switch to the new maintenance structure described in GLEP 67 completely. The announcement with transition details has been sent already. Instead, I'd like to focus on describing how things are going to work starting from the day GLEP 67 becomes implemented.

Who is going to maintain packages?

Before getting into technical details, GLEP 67 starts by limiting possible package maintainer entries. Until now, metadata files were allowed to list practically any e-mail addresses for package maintainers. From now on, only real people (either developers or proxied maintainers) and projects (meeting the requirements of GLEP 39, in particular having a Wiki page) are allowed to be maintainers. All maintainers are identified by e-mail addresses which are required to be unique (i.e. sharing the same address between multiple projects is forbidden) and registered on bugs.g.o.

This supports the two major goals behind maintainer specifications: bug assignment and responsibility assignment. The former is rather obvious - Bugzilla is the most reliable communication platform for both Gentoo developers and users. Therefore, it is important that the bugs can be assigned to appropriate entities without any issues. The latter aims to address the problem of unclear 'ownership' of some packages, and packages maintained by 'dead' e-mail aliases.

In other words, from now on, for every maintained package in Gentoo, it must be possible to obtain a complete, clear list of the people maintaining it (directly and via projects). We no longer accept 'dumb' e-mail aliases that make it impossible to distinguish real maintainers from people who are simply following bugs. This gives three important advantages:

  1. we can actually ping the correct people on IRC without having to go through hoops,
  2. we can determine whether a package is actually maintained by someone, rather than assigned to an alias from which nobody reads bug mail anymore,
  3. we can clearly determine who is responsible for a package and who is the appropriate person to acknowledge changes.

Changes for maintainer-needed packages

The new requirements brought a new issue: maintainer-needed@g.o. The specific use of this e-mail alias did not really seem to suit a project. Creating a 'maintainer-needed project' would either imply creating a dead entity or assigning actual maintainers to supposedly-unmaintained packages. On the other hand, I did not really want to introduce special cases in the specification.

Instead, I have decided that the best way forward is to remove it. In other words, unmaintained packages now explicitly list no maintainers. The assignment to maintainer-needed@g.o will be done implicitly, and is a rule specific to the Gentoo repository. Other repositories may use different policies for packages with no explicit maintainers (like assigning to the repository owner).

The metadata.xml and projects.xml files

The changes to metadata.xml are really minimal, and backwards compatible. The <herd/> element is no longer used, and will be prohibited. Instead, <maintainer/> elements are going to be used for all kinds of maintainers. There are no incompatible changes in those elements, therefore existing tools will not be broken.

The <maintainer/> element gets a new obligatory type attribute that needs to either be person or project. The latter value may cause tools to look the project up (by e-mail) in projects.xml.
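As a sketch of the resulting file (the e-mail addresses and names here are made up for illustration), a metadata.xml with both kinds of maintainers would look roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
  <maintainer type="person">
    <email>larry@gentoo.org</email>
    <name>Larry the Cow</name>
  </maintainer>
  <maintainer type="project">
    <email>example-project@gentoo.org</email>
    <name>Example project</name>
  </maintainer>
</pkgmetadata>
```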

The projects.xml file (unlike herds.xml in the past) is clearly defined in the repository scope. In particular, tools must not assume it always comes from the Gentoo repository. Other repositories are allowed to define their own projects (though overriding projects is not allowed), and project lookup needs to respect the masters= setting of repositories.

For the Gentoo repository, projects.xml is generated automatically from the Wiki project pages, and distributed both via api.gentoo.org and via the usual repository distribution means (the rsync and git mirrors).
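For orientation, a projects.xml entry has roughly the following shape (again, names and addresses here are illustrative, not real entries):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<projects>
  <project>
    <email>example-project@gentoo.org</email>
    <name>Example project</name>
    <url>https://wiki.gentoo.org/wiki/Project:Example</url>
    <description>An illustrative project entry.</description>
    <member>
      <email>larry@gentoo.org</email>
      <name>Larry the Cow</name>
    </member>
  </project>
</projects>
```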


A quick summary for Gentoo developers of how things look with GLEP 67:

  1. Only people and projects can maintain packages. If you want to maintain packages in an organized group, you have to create a project - with Wiki page, unique e-mail address and bugs.g.o account.
  2. Only the explicitly listed project members (and subproject members whenever member inheritance is enabled) are considered maintainers of the package. Other people subscribed to the e-mail alias are not counted.
  3. Packages with no maintainers have no <maintainer/> elements. The bugs are still implicitly assigned to maintainer-needed@g.o but this e-mail alias is no longer used in metadata.xml.
  4. <herd/> elements are no longer used, <maintainer/>s are used instead.
  5. <maintainer/> requires a new type attribute that takes either person or project value.

16 Jan 2016 10:31am GMT

05 Jan 2016


Sebastian Pipping: Uses Gentoo: PS4 Linux demo by fail0verflow at 32c3

Demo start:

Where "OpenRC 0.19.1 is starting up Gentoo Linux (x86_64)" scrolls into display:

05 Jan 2016 1:09am GMT

03 Jan 2016


Gentoo News: January Events: Gentoo at SCALE14x and FOSDEM 2016

The new year kicks off with two large events with Gentoo participation: The Southern California Linux Expo SCALE14x and FOSDEM 2016, both featuring a Gentoo booth.


SCALE14x logo

First we have the Southern California Linux Expo SCALE in its 14th edition. The Pasadena Convention Center will host the event this year from January 21 to January 24.

Gentoo will be present as an exhibitor, as in previous years.

Thanks to the organizers, we can share a special promotional code for attending SCALE with our community, valid for full access passes. Using the code GNTOO on the registration page will get you a 50% discount.


FOSDEM 2016 logo

Then, on the last weekend of January, we'll be on the other side of the pond in Brussels, Belgium where FOSDEM 2016 will take place on January 30 and 31.

Located at the Université libre de Bruxelles, it doesn't just offer interesting talks, but also the finest Belgian beers when the sun sets. :)

This year, Gentoo will also be manning a stand with gadgets, swag, and LiveDVDs.

Booth locations

We'll update this news item with more detailed information on how to find our booths at both conferences once we have information from the organizers.

03 Jan 2016 12:00am GMT

02 Jan 2016


Alexys Jacob: Gentoo Linux on DELL XPS 13 9350

As I found little help about this online I figured I'd write a summary piece about my recent experience in installing Gentoo Linux on a DELL XPS 13 9350.


This machine ships with an NVME SSD, so don't think twice: UEFI is the only sane way to go.

BIOS configuration

I advise using the pre-installed Windows 10 to update the XPS to the latest BIOS (1.1.7 at the time of writing). Then you need to change some settings to boot and get the NVME SSD discovered by the live CD.

Live CD

Go for the latest SystemRescueCD (it's Gentoo based, you won't be lost) as it's much more up to date and supports booting on UEFI. Make it a live USB, for example using unetbootin and the ISO on a vfat-formatted USB stick.

NVME SSD disk partitioning

We'll be using GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used:

The corresponding gdisk commands:

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +16G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +60G ↵
Hex Code: ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: w ↵
Do you want to proceed? (Y/N): Y ↵
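Before going further it is worth double-checking the result (a sketch of the verification commands):

```shell
# Print the resulting GPT layout; the four partitions created
# above should show up with the expected sizes and type codes
gdisk -l /dev/nvme0n1

# lsblk gives a quick tree view of the same information
lsblk -o NAME,SIZE,TYPE /dev/nvme0n1
```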

No WiFi on Live CD? No panic

If your live CD is old (pre-4.4 kernel), the integrated Broadcom 4350 WiFi card won't be available!

My trick was to use my Android phone, connected to my local WiFi, as a USB modem; it was detected directly by the live CD.

Running ip addr will show the network card as enp0s20f0u2 (for me at least); if no IP address is set on the card, just ask for one:

# dhcpcd enp0s20f0u2

Et voilà, you now have access to the internet.

Proceed with installation

The only thing to worry about is to format the UEFI boot partition as FAT32.

# mkfs.vfat -F 32 /dev/nvme0n1p1
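The remaining partitions can be initialized along the same lines (a sketch, assuming the partition layout above; mount points follow the handbook):

```shell
mkswap /dev/nvme0n1p2             # 16G swap partition
swapon /dev/nvme0n1p2
mkfs.ext4 /dev/nvme0n1p3          # root filesystem
mount /dev/nvme0n1p3 /mnt/gentoo  # handbook mount point
# /home (nvme0n1p4) is formatted later, in the encrypted-home section
```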

Then follow the Gentoo handbook as usual for the next steps of the installation process until you arrive at the kernel and bootloader / GRUB part.

From this moment I can already say that NO, we won't be using GRUB at all, so don't bother installing it. Why? Because at the time of writing, GRUB's efi-64 support was not working at all: it failed to discover the NVMe SSD disk on boot.

Kernel sources and consideration

The trick here is that we'll set up the boot ourselves directly from the BIOS later, so we only need to build a standalone kernel (meaning one able to boot without an initramfs).

EDIT: as of Jan. 10 2016, kernel 4.4 is available in portage so you don't need the patching below any more!

Make sure you install and use at least a 4.3.x kernel (4.3.3 at the time of writing). Add sys-kernel/gentoo-sources to your /etc/portage/package.keywords file if needed. If a 4.4 kernel is available to you, you can skip the patching below.
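The keywording line could look like this (a config fragment; the ~amd64 keyword is an assumption matching this machine's architecture):

```
# /etc/portage/package.keywords
=sys-kernel/gentoo-sources-4.3.3 ~amd64
```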

Patching 4.3.x kernels for Broadcom 4350 WiFi support

To get the Broadcom 4350 WiFi card working on 4.3.x, we need to patch the kernel sources. This is very easy to do thanks to Gentoo's user patches support. Do this before installing gentoo-sources (or reinstall it afterwards).

This example is for gentoo-sources-4.3.3; adjust to your version accordingly:

(chroot) # mkdir -p /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # cd /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/0001-bcm4350.patch

When emerging the gentoo-sources package, you should see the patch being applied. Check that it worked by issuing:

(chroot) # grep BRCM_CC_4350 /usr/src/linux/drivers/net/wireless/brcm80211/brcmfmac/chip.c
case BRCM_CC_4350_CHIP_ID:

The resulting kernel module will be called brcmfmac; make sure to load it on boot by adding it to your /etc/conf.d/modules:

modules="brcmfmac"

EDIT: as of Jan. 7 2016, version 20151207 of linux-firmware ships with the needed files so you don't need to download those any more!

Then we need to download the WiFi card's firmware files, which are not part of the linux-firmware package at the time of writing (20150012).

(chroot) # emerge '>=sys-kernel/linux-firmware-20151207'

# DO THIS ONLY IF YOU DON'T HAVE >=sys-kernel/linux-firmware-20151207 available!
(chroot) # cd /lib/firmware/brcm/
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/BCM-0a5c-6412.hcd
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/brcmfmac4350-pcie.bin

Kernel config & build

I used genkernel to build my kernel. I made only a very few adjustments to the config provided below.

Get the kernel config and compile it:

(chroot) # mkdir -p /etc/kernels
(chroot) # cd /etc/kernels
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/kernel-config-x86_64-4.3.3-gentoo
(chroot) # genkernel kernel

The proposed kernel config here is for gentoo-sources-4.3.3, so make sure to rename the file for your current version.

This kernel is far from perfect but it works very well so far: sound, webcam, and suspend all work smoothly!

make.conf settings for intel graphics

I can recommend using the following in your /etc/portage/make.conf:

INPUT_DEVICES="evdev synaptics"
VIDEO_CARDS="intel i965"
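After changing these variables, the packages that honor them (the X.org input and video drivers, mesa, etc.) can be rebuilt with a standard world update (a sketch):

```shell
# Rebuild packages whose USE / VIDEO_CARDS / INPUT_DEVICES selection changed
emerge --ask --changed-use --deep @world
```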

fstab for SSD

Don't forget to make sure the noatime option is used in your fstab for / and /home!

/dev/nvme0n1p1    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p2    none     swap    sw                0 0
/dev/nvme0n1p3    /        ext4    noatime           0 1
/dev/nvme0n1p4    /home    ext4    noatime           0 1

As pointed out by stefantalpalaru in the comments, it is recommended to schedule an SSD TRIM in your crontab once in a while; see the Gentoo wiki page on SSD for more details.
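Such a periodic TRIM could be sketched as a weekly cron script (an assumption on my part — fstrim ships with sys-apps/util-linux; adjust the filesystem list to your layout):

```shell
# Install a weekly TRIM job for the SSD-backed filesystems
mkdir -p /etc/cron.weekly
cat > /etc/cron.weekly/fstrim << 'EOF'
#!/bin/sh
# Discard unused blocks so the SSD can reclaim them
fstrim -v /
fstrim -v /home
EOF
chmod +x /etc/cron.weekly/fstrim
```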

encrypted /home auto-mounted at login

I advise adding cryptsetup to your USE variable in /etc/portage/make.conf and then updating your @world with emerge -NDuq @world.

I assume you haven't created your user yet, so your unmounted /home is empty. Make sure of the following:

AFAIK, the LUKS password you'll set on the first slot when issuing luksFormat below should be the same as your user's password, since pam_mount will use your login password to unlock the volume!

(chroot) # cryptsetup luksFormat -s 512 /dev/nvme0n1p4
(chroot) # cryptsetup luksOpen /dev/nvme0n1p4 crypt_home
(chroot) # mkfs.ext4 /dev/mapper/crypt_home
(chroot) # mount /dev/mapper/crypt_home /home
(chroot) # useradd -m -G wheel,audio,video,plugdev,portage,users USERNAME
(chroot) # passwd USERNAME
(chroot) # umount /home
(chroot) # cryptsetup luksClose crypt_home
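Before moving on, it does not hurt to sanity-check the LUKS header (a sketch; --test-passphrase just verifies the passphrase without mapping the device):

```shell
# Show the LUKS header: cipher, key size and used key slots
cryptsetup luksDump /dev/nvme0n1p4

# Verify that the passphrase actually opens the volume
cryptsetup luksOpen --test-passphrase /dev/nvme0n1p4
```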

We'll use sys-auth/pam_mount to manage the mounting of our /home partition when a user logs in successfully, so make sure you emerge pam_mount first, then configure the following files:

/etc/security/pam_mount.conf.xml:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<!-- See pam_mount.conf(5) for a description. -->
<pam_mount>

                <!-- debug should come before everything else,
                since this file is still processed in a single pass
                from top-to-bottom -->

<debug enable="0" />

                <!-- Volume definitions -->

<volume user="USERNAME" fstype="auto" path="/dev/nvme0n1p4" mountpoint="/home" options="fsck,noatime" />

                <!-- pam_mount parameters: General tunables -->

<luserconf name=".pam_mount.conf.xml" />

<!-- Note that commenting out mntoptions will give you the defaults.
     You will need to explicitly initialize it with the empty string
     to reset the defaults to nothing. -->
<mntoptions allow="nosuid,nodev,loop,encryption,fsck,nonempty,allow_root,allow_other" />
<!--
<mntoptions deny="suid,dev" />
<mntoptions allow="*" />
<mntoptions deny="*" />
-->
<mntoptions require="nosuid,nodev" />

<!-- requires ofl from hxtools to be present -->
<logout wait="0" hup="no" term="no" kill="no" />

                <!-- pam_mount parameters: Volume-related -->

<mkmountpoint enable="1" remove="true" />

</pam_mount>

These lines go in your PAM login stack (e.g. /etc/pam.d/system-login on Gentoo):

auth            required        pam_env.so
auth            required        pam_unix.so try_first_pass likeauth nullok
auth            optional        pam_mount.so
auth            optional        pam_permit.so

account         required        pam_unix.so
account         optional        pam_permit.so

password        optional        pam_mount.so
password        required        pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password        required        pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password        optional        pam_permit.so

session         optional        pam_mount.so
session         required        pam_limits.so
session         required        pam_env.so
session         required        pam_unix.so
session         optional        pam_permit.so

That's it, easy heh?! When you log in as your user, pam_mount will decrypt your home partition using your user's password and mount it on /home!

UEFI booting your Gentoo Linux

The best (and weirdest?) way I found for booting the installed Gentoo Linux and its kernel is to configure the UEFI boot directly from the XPS BIOS.

The idea is that the BIOS can read the files from the EFI boot partition since it is formatted as FAT32. All we have to do is create a new boot option from the BIOS and configure it to use the kernel file stored in the EFI boot partition.
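If you would rather create that boot entry from the chroot instead of the BIOS menus, efibootmgr can do the same thing. This is a sketch under my assumptions: the kernel was built with CONFIG_EFI_STUB, and the genkernel file name below is an example — adjust it to whatever actually landed on your EFI partition.

```shell
# Create a UEFI boot entry that loads the kernel's EFI stub directly
# from the first partition of the NVMe disk (the FAT32 EFI partition).
# Kernel file name and root= parameters are examples -- adjust to yours.
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "Gentoo" \
    --loader '\kernel-genkernel-x86_64-4.3.3-gentoo' \
    --unicode 'root=/dev/nvme0n1p3 rootfstype=ext4'
```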



Your Gentoo kernel and OpenRC will be booting now!

Suggestions, corrections, enhancements?

As I said, I wrote all this quickly to spare some time for whoever it could help. I'm sure there are still a lot of improvements to be made, so I'll surely update this article later on.

02 Jan 2016 10:43am GMT

31 Dec 2015


Alexys Jacob: uWSGI v2.0.12

It's been a long time since I made a blog post about a uWSGI release, but this one is special to me because it contains some features I asked a colleague of mine for.

For his first contributions to a big Open Source project, our fellow @shir0kamii added two features (spooler_get_task and -if-hostname-match) which were backported in this release and which we had needed at work for quite a long time: congratulations again :)


Of course, all of this is already available on Gentoo Linux !

Full changelog here as usual.

31 Dec 2015 2:18pm GMT

30 Dec 2015


Kristian Fiskerstrand: 32C3

This year I participated in the Chaos Computer Club's annual congress for the first time, despite it being the 32nd such event, hence its name 32C3. This year's event carries the subtitle "Gated Communities" and, like last year's, took place in Hamburg after having been in Berlin for a while. By this point I expect many have written the event off as a nerd gathering of hackers, which, well, in many ways it is, but that needs some qualification. The number of visitors exceeds 12,000, so this is a large event, lasting four days from the 27th to the 30th of December each year. If you look deeper, it is actually a family event for many, with dedicated events for teaching children technology and a childspace that includes games using technology to represent position or sound in order to control ping-pong games. Picture taking is of course prohibited throughout the conference unless you get explicit permission from all involved parties (as it should be in the rest of society).

Presentations this year were organized in four main tracks, starting at 11:30 and going as late as 2am. It is a somewhat interesting experience to attend a lecture on "A gentle introduction to post-quantum cryptography" by Dan Bernstein and Tanja Lange from 23:00 to 00:00 and have a full lecture hall. I wonder how many universities would get the same result.

Don't worry though: if you miss a lecture, the video streaming is among the best you can encounter, separated into multiple offerings: (i) a live stream, (ii) a Re-Live, an unmodified version of the stream that can be watched later, and (iii) a released video of the talk that is properly mastered and of better quality. So if you want to watch the aforementioned talk on PQC, you can do so at any time.

As a disproportionate number of my acquaintances focus on the legal field rather than technology itself, let's continue with a good talk by Max Schrems on suing Facebook over Safe Harbor and data protection all the way to the European Court of Justice. Or maybe you want to learn more about the legal ambiguities surrounding Sealand, the processes involved in creating your own country, and the operational failures of data havens?

If you want to mix in the more technological part, how about a wrap-up of Crypto Wars part II with comparisons to the 1990s? For those who haven't spent much time looking into the first one, a particularly bad idea was the Clipper chip for key escrow, and it is curious to see the same arguments being used then as now. The FBI, NSA, and other governmental agencies want unfettered access to encrypted communication and blame cryptography for their failures, even though those involved in the recent events in Paris and San Bernardino actually used unencrypted communication and the security services never picked up anything. As such they, along with politicians, use Fear, Uncertainty, and Doubt (FUD) to make their case. It is typical of politicians to think that the problem is the rhetoric or the name rather than the underlying substance, and as a result we see discussions of a "secure golden key" or a "front door" instead of a "back door" to cryptography. The governments' attempts from the first crypto wars of course influence us even today, in particular through the imposed export restrictions, compatibility for which survived until recently in various libraries, allowing for downgrade attacks. A good talk by J. Alex Halderman and Nadia Heninger on Logjam underlines why undermining encryption is a bad thing even decades later.
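As an aside, you can still probe a server for the export-grade cipher support that Logjam abuses, provided your local OpenSSL build retains those ciphers (a sketch; example.com is a placeholder):

```shell
# Offer only export-grade ciphers; a completed handshake means the
# server still accepts them and is open to Logjam-style downgrades.
openssl s_client -connect example.com:443 -cipher "EXPORT" < /dev/null
```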

What people seem to forget is that encryption is required for the e-commerce we use every day. Who would ever connect to an internet banking application if their neighbour could monitor all account information and traffic? And freedom of expression is even established under the Universal Declaration of Human Rights, whose article 19 states: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers".

The United Kingdom (UK) comes off particularly badly in this debate with Cameron's Snooper's Charter, in particular §189(4)(c): "Operators may be obliged to remove "electronic protection" if they provide ..." seems worrying. This is followed by Australia, where simply explaining an algorithm to someone can result in penalties. But none of these beats India, which requires a copy of the plain text to be retained for a minimum of 90 days when sending an encrypted message.

This level of tyranny from various oppressive regimes nicely sets the stage for the presentation of North Korea's Red Star operating system and the various ways this operating system, styled to mimic Apple's Mac OS, is used to spy on people and keep them down. Of particular interest are the watermarking technology and the censoring application that forms part of the "anti-virus" (well, its red star icon could be a hint).

All in all, this is just a minimal representation of some of the interesting aspects of this conference. Not surprisingly, the most used operating systems among the visitors (at least those connected to the network) were GNU/Linux (24.1%) and Android (17.6%), and if you want to see the talk about Windows 10 acting as a botnet, that video is available as well.

30 Dec 2015 9:18pm GMT

28 Dec 2015


Denis Dupeyron: SCALE 14x

I don't know about you but I'm going to SCALE 14x.

I'm going to SCALE 14x!

SCALE is always a lot of fun and last year I particularly liked their new embedded track. This year's schedule looks equally exciting. Among the speakers you may recognize former Gentoo developers as well as a few OSS celebrities.

We'll have a booth again this year. Feel free to come talk to us or help us out at the booth. One of the new things this year is that we're getting a Gentoo-specific promotional code for a 50% rebate on full-access passes for the entire event. The code is GNTOO. It's not only for Gentoo developers, but also for all Gentoo users, potential users, new-year-resolution future users, friends, etc… Use it! Spread it! I'm looking forward to seeing you all there.

28 Dec 2015 8:30pm GMT