31 Jul 2015
The other day I installed Ubuntu 15.04 on one of my boxes. I just needed something where I could throw in a DVD, hit install and be done. I didn't care about customization or choice, I just needed a working Linux system from which I could do chroot work. Thousands of people around the world install Ubuntu this way and when they're done, they have a stock system like any other Ubuntu installation, all identical like frames in an Andy Warhol lithograph. Replication as a form of art.
In contrast, when I install a Gentoo system, I enjoy the anxiety of choice. Should I use syslog-ng, metalog, or skip a system logger altogether? If I choose syslog-ng, then I have a choice of 14 USE flags, for 2^14 possible configurations for just that package. And that's just one of some 850+ packages that are going to make up my desktop. In contrast to Ubuntu, where every installation is identical (whatever "idem" means in this context), the sheer space of possibilities makes no two Gentoo systems the same unless there is some concerted effort to make them so. In fact, Gentoo doesn't even have a notion of a "stock" system unless you count the stage3s, which are really bare bones. There is no "stock" Gentoo desktop.
With the work I am doing with uClibc and musl, I needed a release tool that would build identical desktops repeatedly and predictably, where all the choices of packages and USE flags were laid out a priori in some specifications. I considered catalyst stage4, but catalyst didn't provide the flexibility I wanted. I initially wrote some bash scripts to build an XFCE4 desktop from uClibc stage3 tarballs (what I dubbed "Lilblue Linux"), but this was very much ad hoc code and I needed something that could be generalized so I could do the same for a musl-based desktop, or indeed any Gentoo system I could dream up.
This led me to formulate the notion of what I call a "Gentoo Reference System" or GRS for short - maybe we could make stock Gentoo systems available. The idea here is that one should be able to define some specs for a particular Gentoo system that will unambiguously define all the choices that go into building that system. Then all instances built according to those particular GRS specs would be identical in much the same way that all Ubuntu systems are the same. In a Warholian turn, the artistic choices in designing the system would be pushed back into the specs and become part of the automation. You draw one frame of the lithograph and you magically have a million.
The idea of these systems being "references" was also important for my work because, with uClibc or musl, there are a lot of package breakages - remember, you're pushing up against actual implementations of C functions, and nearly everything in your system is written in C. So, in the space of all possible Gentoo systems, I needed some reference points that worked. I needed those magical combinations of flags and packages that would build and yield useful systems. It was also important that these references be easily kept working over time, since Gentoo systems evolve as the main tree, or overlays, are modified. If something breaks on a successive build, I need to quickly identify the delta and address it. The metaphor that came to mind from my physics background is that of phase space. In the swirling mass of evolving dynamical systems, I pictured these "Gentoo Reference Systems" as markers etching out a well defined path over time.
Enough with the metaphors, how does GRS work? There are two main utilities, grsrun and grsup. The first is run on a build machine and generates the GRS release as well as any extra packages and updates. These are delivered as binpkgs. In contrast, grsup is run on an installed GRS instance and it's used for package management. Since we're working in a world of identical systems, grsup prefers working with binpkgs that are downloaded from some build machine, but it can revert to building locally as well.
The GRS specs for some system are found on a branch of a git repository. Currently the repo at https://gitweb.gentoo.org/proj/grs.git/ has four branches, each for one of the four GRS specs housed there. grsrun is then directed to sync the remote repo locally, check out the branch of the GRS system we want to build and begin reading a script file called build which directs grsrun on what steps to take. The scripting language is very simple and contains only a handful of different directives. After a stage tarball is unpacked, build can direct grsrun to do any of the following:
mount and umount - Do a bind mount of /dev/, /dev/pts/ and other directories that are required to get a chroot ready.
populate - Selectively copy files from the local repo to the chroot. Any files can be copied in, so, for example, you can prepare a pristine home directory for some user with a pre-configured desktop. Or you can add customized configuration files to /etc for services you plan to run.
runscript - This will run some bash or python script in the chroots. The scripts are copied from the local repo to /tmp of the chroot and executed there. These scripts can be like the ones that catalyst runs during stage1/2/3, but can also be scripts to add users and groups, to add services to runlevels, etc. Think of anything you would do when growing a stage3 into the system you want, script it up and GRS will automate it for you.
kernel - This looks for a kernel config file in the local repo, parses it for the version, builds the kernel, and both bundles it as a package called linux-image-<version>.tar.xz for later distribution and installs it into the chroot. grsup knows how to work with these linux-image-<version>.tar.xz files and can treat them like binpkgs.
tarit and hashit - These directives create a release tarball of the entire chroot and generate the digests.
pivot - If you built a chroot within a chroot, like catalyst does during stage1, then this pivots the inner chroot out so that further building can make use of it.
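Putting the directives together, a build script reads as a simple sequence of steps. The sketch below is only illustrative - the script names are invented here, and the exact directive syntax should be taken from the branches in the GRS repo:

```
mount
populate
runscript stage4.sh
kernel
runscript cleanup.sh
tarit
hashit
umount
```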
From an implementation point of view, the GRS suite is written in python and each of the above directives is backed by a simple python class. It's easy, for instance, to implement more directives this way. E.g. if you want to build a bootable CD image, you can include a directive called isoit, write a python class for what's required to construct the iso image and glue this new class into the grs module.
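As a sketch of what such a directive class could look like, here is a minimal hypothetical isoit class. The class and method names are my own illustration, not the actual grs module API, and grub-mkrescue is just one possible tool for the job:

```python
# Hypothetical sketch of a new GRS directive class; the names and the
# base-class contract are illustrative, not the real grs module API.
import subprocess


class IsoIt:
    """Build a bootable ISO image from a finished chroot."""

    def __init__(self, chroot_dir, iso_path):
        self.chroot_dir = chroot_dir
        self.iso_path = iso_path

    def command(self):
        # grub-mkrescue is one common way to produce a bootable image
        return ["grub-mkrescue", "-o", self.iso_path, self.chroot_dir]

    def execute(self):
        subprocess.check_call(self.command())
```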
If you're familiar with catalyst, at this point you might be wondering what's the difference? Can't you do all of this with catalyst? There is a lot of overlap, but the emphasis is different. For example, I wanted to be able to drop in a pre-configured desktop for a user. How would I do that with catalyst? I guess I could create an overlay with packages for some pre-built home directory, but that's a perversion of what ebuilds are for - we should never be installing into /home. Rather, with grsrun I can just populate the chroot with whatever files I like anywhere in the filesystem. More importantly, I want to be able to control what USE flags are set and, in general, manage all of /etc/portage/. catalyst does provide portage_configdir, which populates /etc/portage when building stages, but it's pretty static. Instead, grsup and two other utilities, install-worldconf and clean-worldconf, can dynamically manage files under /etc/portage/ according to a configuration file called world.conf.
Lapsing back into metaphor, I see catalyst as rigid and frozen whereas grsrun is loose and fluid. You can use grsrun to build stage1/2/3 tarballs which are identical to those built with catalyst, and in fact I've done so for hardened amd64 multilib stages so I could compare. But with grsrun you have a lot of freedom in writing the scripts and files that go into the GRS specs, and chances are you'll get something wrong, whereas with catalyst the build is pretty regimented and you're guaranteed to get uniformity across arches and profiles. So while you can do the same things with each tool, it's not recommended that you use grsrun to do catalyst stage builds - there's too much freedom. Whereas when building desktops or servers, you might welcome that freedom.
Finally, let me close with how grsup works. As mentioned above, the GRS specs for some system include a file called world.conf. It's in configparser format and it specifies files and their contents in the /etc/portage/ directory. An example section in the file looks like:
[app-crypt/gpgme:1]
package.use : app-crypt/gpgme:1 -common-lisp static-libs
package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
env : LDFLAGS=-largp
This says, for package app-crypt/gpgme:1, drop a file called app-crypt_gpgme_1 in /etc/portage/package.use/ that contains the line "app-crypt/gpgme:1 -common-lisp static-libs", drop another file by the same name in /etc/portage/package.env/ with line "app-crypt/gpgme:1 app-crypt_gpgme_1", and finally drop a third file by the same name in /etc/portage/env/ with line "LDFLAGS=-largp". grsup is basically a wrapper to emerge which first populates /etc/portage/ according to the world.conf file, then emerges the requested pkg(s) preferring the use of binpkgs over building locally as stated above, and finally does a clean up on /etc/portage/. install-worldconf and clean-worldconf isolate the populate and clean up steps so they can be used in scripts run by grsrun when building the release. To be clear, you don't have to use grsup to maintain a GRS system. You can maintain it just like any other Gentoo system, but if you manage your own /etc/portage/, then you are no longer tracking the GRS specs. grsup is meant to make sure you update, install or remove packages in a manner that keeps the local installation in compliance with the GRS specs for that system.
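To make the mapping concrete, here is a small Python sketch of how such a section translates into files under /etc/portage/. This is not the actual install-worldconf code - the helper name and the dry-run approach are mine - but the file-naming convention follows the example above:

```python
# Illustrative sketch, NOT the real install-worldconf: compute which
# files a world.conf section would drop under /etc/portage/.
import configparser
import os

WORLD_CONF = """
[app-crypt/gpgme:1]
package.use : app-crypt/gpgme:1 -common-lisp static-libs
package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
env : LDFLAGS=-largp
"""


def planned_files(conf_text, portage_root="/etc/portage"):
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string(conf_text)
    plan = []
    for section in parser.sections():
        # Naming convention from the post: app-crypt/gpgme:1 -> app-crypt_gpgme_1
        fname = section.replace("/", "_").replace(":", "_")
        for subdir, contents in parser[section].items():
            plan.append((os.path.join(portage_root, subdir, fname), contents))
    return plan


for path, line in planned_files(WORLD_CONF):
    print(path, "->", line)
```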
All this is pretty alpha stuff, so I'd appreciate comments on design and implementation before things begin to solidify. I am using GRS to build three desktop systems which I'll blog about next. I've dubbed these systems Lilblue, which is a hardened amd64 XFCE4 desktop with uClibc as its standard libc, Bluedragon, which uses musl, and finally Bluemoon, which uses good old glibc. (Lilblue is actually a few years old, but the latest release is the first built using GRS.) All three desktops are identical with respect to the choice of packages and USE flags, and differ only in their libc's, so one can compare the three. Lilblue and Bluedragon are on the mirrors, or you can get all three from my dev space at http://dev.gentoo.org/~blueness/theblues/. I didn't push out Bluemoon on the mirrors because a glibc based desktop is nothing special. But since building with GRS is as simple as cloning a git branch and tweaking, and since the comparison is useful, why not?
The GRS home page is at https://wiki.gentoo.org/wiki/Project:RelEng_GRS.
31 Jul 2015 2:21pm GMT
23 Jul 2015
First of all it's nothing to eat. So what is it then? This is the introduction by upstream:
Calamares is an installer framework. By design it is very customizable, in order to satisfy a wide variety of needs and use cases. Calamares aims to be easy, usable, beautiful, pragmatic, inclusive and distribution-agnostic. Calamares includes an advanced partitioning feature, with support for both manual and automated partitioning operations. It is the first installer with an automated "Replace Partition" option, which makes it easy to reuse a partition over and over for distribution testing. Got a Linux distribution but no system installer? Grab Calamares, mix and match any number of Calamares modules (or write your own in Python or C++), throw together some branding, package it up and you are ready to ship!
I have just added the newest release version (1.1.2) to the tree, and there is a live version (9999) in my dev overlay. The underlying technology stack is mainly Qt5, KDE Frameworks, Python3, YAML and systemd. It has been picked up, and is of course still being evaluated, by several Linux distributions.
You may be asking why I have added it to Gentoo, where we have OpenRC as the default init system. You are right: at the moment it is not very useful for Gentoo. But for example Sabayon, as a downstream of us, will (maybe) use it for its next releases, so in the first place it is just a service for our downstreams.
The second reason: there is a discussion on the gentoo-dev mailing list at the moment about rebooting the Gentoo installer. Instead of creating yet another installer implementation, we have two potential ways to pick it up, which are not mutually exclusive:
1. Write modules to make it work with sysvinit aka OpenRC
2. Solve Bug #482702 - Provide alternative stage3 tarballs using sys-apps/systemd
23 Jul 2015 8:36pm GMT
22 Jul 2015
I'm starting a new blog!
Actually, it is almost the same blog, but powered by a new "blogging engine", and I don't want to spend time migrating the old posts, which are mostly outdated by now.
The old content is archived here, if you need it due to some crazy reason: http://old.rafaelmartins.eng.br/.
For Gentoo planet readers, everything should be working just fine. I created rewrite rules to keep the old atom feeds working.
I'll publish another blog post soon, talking about the "blogging engine" and my next plans.
22 Jul 2015 6:45am GMT
20 Jul 2015
Since updating to VMware Workstation 11 (from the Gentoo vmware overlay), I've experienced a lot of hangs of my KDE environment whenever a virtual machine was running. Basically my system became unusable, which is bad if your workflow depends on accessing both Linux and (gasp!) Windows 7 (as guest). I first suspected a dbus timeout (doing the "stopwatch test" for 25s waits), but it seems according to some reports that this might be caused by buggy behavior in kwin (4.11.21). Sadly I haven't been able to pinpoint a specific bug report.
Now, I'm not sure if the problem is really 100% fixed, but at least the lags are much smaller now. Here's how to do it (kudos to matthewls and vrenn):
- Add to /etc/xorg.conf in the Device section
Option "TripleBuffer" "True"
- Create a file in /etc/profile.d with content
export __GL_YIELD="USLEEP"
- Log out, stop your display manager, restart it.
I'll leave it as an exercise to the reader to figure out what these settings do. (Feel free to explain it in a comment. :) No guarantees of any kind. If this kills kittens you have been warned. Cheers.
20 Jul 2015 8:28pm GMT
18 Jul 2015
I just added sys-process/systemd-cron to the Gentoo repository. Until now I've been running it from my overlay and getting it into the tree was overdue. I've found it to be an incredibly useful tool.
All it does is install a set of unit files and a crontab generator. The unit files (best used by starting/enabling cron.target) will run jobs from /etc/cron.* at the appropriate times. The generator can parse /etc/crontab and create timer units for every line dynamically.
Note that the default Gentoo install runs the /etc/cron.* jobs from /etc/crontab, so if you aren't careful you might end up running them twice. The simplest solutions to this are to either remove those lines from /etc/crontab, or install systemd-cron using USE=etc-crontab-systemd, which will have the generator ignore /etc/crontab and instead look for /etc/crontab-systemd, where you can install jobs you'd like to run using systemd.
The generator works like you'd expect it to - if you edit the crontab file the units will automatically be created/destroyed dynamically.
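Conceptually, a hypothetical /etc/crontab line such as `0 3 * * * root /usr/local/bin/backup.sh` corresponds to a timer along these lines (illustrative only; the generator's actual unit names and contents differ):

```ini
# Illustrative timer unit for a nightly 03:00 cron job
[Unit]
Description=Hypothetical cron job translated to a timer

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
```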
One warning about timer units compared to cron jobs is that the jobs are run as services, which means that when the main process dies all its children will be killed. If you have anything in /etc/cron.* which forks you'll need to have the main script wait at the end.
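For example, a script along these lines (a hypothetical /etc/cron.daily entry, not something shipped by the package) keeps its forked child alive by waiting on it:

```shell
#!/bin/sh
# Hypothetical cron.daily-style script: the worker is forked, so the
# main process must wait, or systemd will kill the child when the
# service's main process exits.
worker() {
    sleep 1
    echo "worker finished"
}

worker &    # runs in the background
wait        # keep the main process alive until all children exit
```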
On the topic of race conditions, each cron.* directory and each /etc/crontab line will create a separate unit. Those units will all run in parallel (to the extent that one is still running when the next starts), but within a cron.* directory the scripts will run in series. That may be a bit different from some cron implementations which may limit the number of simultaneous jobs globally.
All the usual timer unit logic applies. stdout goes to the journal, systemctl list-timers shows what is scheduled, etc.
18 Jul 2015 4:00pm GMT
16 Jul 2015
Libav is an open source set of tools for audio and video processing.
After talking with Luca Barbato, who is both a Gentoo and Libav developer, I spent a bit of my time fuzzing libav, and in particular I fuzzed libavcodec through avplay.
I hit a crash and after I reported it to upstream, they confirmed the issue as a divide-by-zero.
The complete gdb output:
ago@willoughby $ gdb --args /usr/bin/avplay avplay.crash
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: .
Find the GDB manual and other documentation resources online at: .
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/avplay...Reading symbols from /usr/lib64/debug//usr/bin/avplay.debug...done.
done.
(gdb) run
Starting program: /usr/bin/avplay avplay.crash
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
avplay version 11.3, Copyright (c) 2003-2014 the Libav developers
  built on Jun 19 2015 09:50:59 with gcc 4.8.4 (Gentoo 4.8.4 p1.6, pie-0.6.1)
[New Thread 0x7fffec4c7700 (LWP 7016)]
[New Thread 0x7fffeb166700 (LWP 7017)]
INFO: AddressSanitizer ignores mlock/mlockall/munlock/munlockall
[New Thread 0x7fffe9e28700 (LWP 7018)]
[h263 @ 0x60480000f680] Format detected only with low score of 25, misdetection possible!
[h263 @ 0x60440001f980] Syntax-based Arithmetic Coding (SAC) not supported
[h263 @ 0x60440001f980] Reference Picture Selection not supported
[h263 @ 0x60440001f980] Independent Segment Decoding not supported
[h263 @ 0x60440001f980] header damaged

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 0x7fffe9e28700 (LWP 7018)]
0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
142     /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c: No such file or directory.
(gdb) bt
#0  0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
#1  0x00007ffff21f3c2d in ff_h263_decode_picture_header (s=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:1112
#2  0x00007ffff1ae16ed in ff_h263_decode_frame (avctx=0x60440001f980, data=0x60380002f480, got_frame=0x7fffe9e272f0, avpkt=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/h263dec.c:444
#3  0x00007ffff2cd963e in avcodec_decode_video2 (avctx=0x60440001f980, picture=0x60380002f480, got_picture_ptr=got_picture_ptr@entry=0x7fffe9e272f0, avpkt=avpkt@entry=0x7fffe9e273b0) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/utils.c:1600
#4  0x00007ffff44d4fb4 in try_decode_frame (st=st@entry=0x60340002fb00, avpkt=avpkt@entry=0x601c00037b00, options=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:1910
#5  0x00007ffff44ebd89 in avformat_find_stream_info (ic=0x60480000f680, options=0x600a00009e80) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2276
#6  0x0000000000431834 in decode_thread (arg=0x7ffff7e0b800) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/avplay.c:2268
#7  0x00007ffff0284b08 in ?? () from /usr/lib64/libSDL-1.2.so.0
#8  0x00007ffff02b4be9 in ?? () from /usr/lib64/libSDL-1.2.so.0
#9  0x00007ffff4e65aa8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.4/libasan.so.0
#10 0x00007ffff0062204 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fffefda957d in clone () from /lib64/libc.so.6
(gdb)
Affected version: 11.3 (and maybe past versions)
Fixed version: 11.5 and 12.0
This bug was discovered by Agostino Sarubbo of Gentoo.
2015-06-21: bug discovered
2015-06-22: bug reported privately to upstream
2015-06-30: upstream commit the fix
2015-07-14: CVE assigned
2015-07-16: advisory release
16 Jul 2015 9:57am GMT
14 Jul 2015
Siege is an http load testing and benchmarking utility.
During the test of a webserver, I hit a segmentation fault. I recompiled siege with ASan and it clearly showed an off-by-one in load_conf(). The issue is reproducible without passing any arguments to the binary.
The complete output:
ago@willoughby ~ # siege
=================================================================
==488==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000d7f1 at pc 0x00000051ab64 bp 0x7ffcc3d19a70 sp 0x7ffcc3d19a68
READ of size 1 at 0x60200000d7f1 thread T0
    #0 0x51ab63 in load_conf /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263:12
    #1 0x515486 in init_config /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:96:7
    #2 0x5217b9 in main /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/main.c:324:7
    #3 0x7fb2b1b93aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #4 0x439426 in _start (/usr/bin/siege+0x439426)

0x60200000d7f1 is located 0 bytes to the right of 1-byte region [0x60200000d7f0,0x60200000d7f1)
allocated by thread T0 here:
    #0 0x4c03e2 in __interceptor_malloc /var/tmp/portage/sys-devel/llvm-3.6.1/work/llvm-3.6.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:40:3
    #1 0x7fb2b1bf31e9 in __strdup /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/string/strdup.c:42

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263 load_conf
Shadow bytes around the buggy address:
  0x0c047fff9aa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9ab0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9ac0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9ad0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9ae0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9af0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
  0x0c047fff9b00: fa fa 03 fa fa fa fd fd fa fa fd fa fa fa fd fd
  0x0c047fff9b10: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
  0x0c047fff9b20: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
  0x0c047fff9b30: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
  0x0c047fff9b40: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==488==ABORTING
Affected version: 3.1.0 (and maybe past versions).
This bug was discovered by Agostino Sarubbo of Gentoo.
Not really CVE-qualifiable; it is more a programming bug.
2015-06-09: bug discovered
2015-06-10: bug reported privately to upstream
2015-07-13: no upstream response
2015-07-14: advisory release
14 Jul 2015 7:04pm GMT
10 Jul 2015
Thanks to Mike Pagano, who enabled kdbus support in Gentoo kernel sources almost 2 weeks ago, which gives us the choice to test it. As described in Mike's blog post, you will need to enable the USE flags kdbus and experimental on sys-kernel/gentoo-sources, and kdbus on sys-apps/systemd.
root # echo "sys-kernel/gentoo-sources kdbus experimental" >> /etc/portage/package.use/kdbus
If you are running >=sys-apps/systemd-221, kdbus is already enabled by default; otherwise you have to enable it.
root # echo "sys-apps/systemd kdbus" >> /etc/portage/package.use/kdbus
Any packages affected by the change need to be rebuilt.
root # emerge -avuND @world
Enable kdbus option in kernel.
General setup --->
<*> kdbus interprocess communication
Build the kernel, install it and reboot. Now we can check if kdbus is enabled properly. systemd should automatically mask dbus.service and start systemd-bus-proxyd.service instead (Thanks to eliasp for the info).
root # systemctl status dbus
Loaded: masked (/dev/null)
Active: inactive (dead)
root # systemctl status systemd-bus-proxyd
● systemd-bus-proxyd.service - Legacy D-Bus Protocol Compatibility Daemon
Loaded: loaded (/usr/lib64/systemd/system/systemd-bus-proxyd.service; static; vendor preset: enabled)
Active: active (running) since Fr 2015-07-10 22:42:16 CEST; 16min ago
Main PID: 317 (systemd-bus-pro)
└─317 /usr/lib/systemd/systemd-bus-proxyd --address=kernel:path=/sys/fs/kdbus/0-system/bus
Plasma 5 starts fine here using sddm as login manager. On Plasma 4 you may be interested in Bug #553460.
Looking forward to when Plasma 5 gets user session support.
10 Jul 2015 10:03pm GMT
Important! My tech articles, especially Linux ones, are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!
For quite some time, I have tried to get links in Thunderbird to open automatically in Chrome or Chromium instead of defaulting to Firefox. Moreover, I have Chromium start in incognito mode by default, and I would like those links to do the same. This has been a problem for me since I don't use a full desktop environment like KDE, GNOME, or even XFCE. As I'm really a minimalist, I only have my window manager (which is Openbox), and the applications that I use on a regular basis.
One thing I found, though, is that by using PCManFM as my file manager, I do have a few other related applications and utilities that help me customise my workspace and workflows. One such application is libfm-pref-apps, which allows for setting preferred applications. I found that I could do just what I wanted to do without mucking around with manually setting MIME types, writing custom hooks for Thunderbird, or any of that other mess.
Here's how it was done:
- Run /usr/bin/libfm-pref-apps from your terminal emulator of choice
- Under "Web Browser," select "Customise" from the drop-down menu
- Select the "Custom Command Line" tab
- In the "Command line to execute" box, type
/usr/bin/chromium --incognito --start-maximized %U
- In the "Application name" box, type "Chromium incognito" (or however else you would like to identify the application)
Voilà! After restarting Thunderbird, my links opened just like I wanted them to. The only modification that you might need to make is the "Command line to execute" portion. If you use the binary of Chrome instead of building the open-source Chromium browser, you would need to change it to the appropriate executable (and the path may be different for you, depending on your system and distribution). Also, in the command line that I have above, here are some notes about the switches used:
- --incognito starts Chromium in incognito mode by default (that one should be obvious)
- --start-maximized makes the browser window open in the full size of your screen
- %U allows Chromium to accept a URL or list of URLs, and thus, opens the link that you clicked in Thunderbird
Under the hood, it seems like libfm-pref-apps is adding some associations in the ~/.config/mimeapps.list file. The relevant lines that I found were:
x-scheme-handler/http=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
x-scheme-handler/https=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
Hope this information helps you get your links to open in your browser of choice (and with the command-line arguments that you want)!
10 Jul 2015 5:15pm GMT
09 Jul 2015
In our previous attempt to upgrade our production cluster to 3.0, we had to roll back from the WiredTiger engine on primary servers.
Since then, we switched our whole cluster back to 3.0 MMAPv1, which has brought us better performance than 2.6 with no instability.
We decided to use this increase in performance to allow us some time to fulfil the entire production checklist from MongoDB, especially the migration to XFS. We're slowly upgrading our servers kernels and resynchronising our data set after migrating from ext4 to XFS.
Ironically, the strong recommendation of XFS in the production checklist appeared 3 days after our failed attempt at WiredTiger… This is frustrating but gives some kind of hope.
I'll keep on posting on our next steps and results.
Our hero WiredTiger Replica Set
While we were battling with our production cluster, we got a spontaneous major increase in the daily volumes from another platform which was running on a single Replica Set. This application is write intensive and very disk I/O bound. We were killing the disk I/O with almost a continuous 100% usage on the disk write queue.
Despite our frustration with WiredTiger so far, we decided to give it a chance, considering that this time we were talking about a single Replica Set. We were very happy to see WiredTiger live up to its promises with an almost shocking serenity.
Disk I/O went down dramatically, almost as if nothing was happening any more. Compression did magic on our disk usage and our application went Roarrr !
09 Jul 2015 7:44am GMT
06 Jul 2015
It is possible to optimize docker containers such that multiple containers are based off of a single copy of a common base image. If containers are constructed from tarballs, then it can be useful to create a delta tarball which contains the differences between a base image and a derived image. The delta tarball can then be layered on top of the base image using a Dockerfile like the following:
ADD delta.tar.xz /
Many different types of containers can thus be derived from a common base image, while sharing a single copy of the base image. This saves disk space, and can also reduce memory consumption since it avoids having duplicate copies of base image data in the kernel's buffer cache.
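As a sketch, GNU tar's incremental mode is one possible way (not necessarily the only one) to produce such a delta tarball; the paths and file names below are examples:

```shell
#!/bin/sh
# Sketch: build a delta tarball with GNU tar's incremental snapshots.
# Paths and names are illustrative examples.
set -e
mkdir -p rootfs
echo "base file" > rootfs/base.txt

# Archive the base image, recording its state in base.snar
tar --listed-incremental=base.snar -cf base.tar -C rootfs .
sleep 1   # ensure derived files get strictly newer timestamps

# Derive: add files on top of the base
echo "derived file" > rootfs/extra.txt

# Reusing a copy of the snapshot makes tar archive only the changes
cp base.snar delta.snar
tar --listed-incremental=delta.snar -cf delta.tar -C rootfs .

# delta.tar records the directory entries plus the new extra.txt
tar -tf delta.tar
```

The resulting delta can be compressed with xz and layered on the base image with an ADD line like the one above.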
06 Jul 2015 8:31am GMT
03 Jul 2015
As you know, Gentoo is all about flexibility. You can run bleeding edge code (portage, our package manager, even provides you with installation of KF5 and friends from git master) or you can focus on stability and trusted code. This is why, for the last years, we've been offering our users KDEPIM 4.4 (the version where KMail e-mail storage was not yet integrated with Akonadi, also known as KMail1) as a drop-in replacement for the newer versions.
Recently the Nepomuk search framework has been replaced by Baloo, and after some discussion we decided that it's now time for the Nepomuk-related packages to go. Problem is, the old KDEPIM packages still depend on it via their Akonadi version. This is why - for those of our users who prefer to run KDEPIM 4.4 / KMail1 - we've decided to switch to Pali Rohár's kdepim-noakonadi fork (see also his 2013 blog post and the code). The packages are right now in the KDE overlay, but will move to the main tree after a few days of testing and be treated as an update of KDEPIM 4.4.
The fork is essentially KDEPIM 4.4 including some additional bugfixes from the KDE/4.4 git branch, with KAddressbook patched back to KDEPIM 4.3 state and references to Akonadi removed elsewhere. This is in some ways a functionality regression, since the integration of e.g. different calendar types is lost; however, in that version it never really worked perfectly anyway.
For now, you will still need the akonadi-server package, since kdepimlibs (outside kdepim and now at version 4.14.9) requires it to build, but you'll never need to start the Akonadi server. As a consequence, Nepomuk support can be disabled everywhere, and the Nepomuk core and client and Akonadi client packages can be removed by the package manager (--depclean, make sure to first globally disable the nepomuk useflag and rebuild accordingly).
You might ask "Why are you still doing this?"... well. I've been told Akonadi and Baloo are working very nicely, and again I've considered upgrading all my installations... but then on my work desktop, where I am using the newest and greatest KDE4PIM, bug 338658 pops up regularly and stops syncing of important folders. I just don't have the time to pointlessly dig deep into the Akonadi database every few days. So KMail1 it is, and I'll rather spend some time occasionally picking and backporting bugfixes.
03 Jul 2015 2:16pm GMT
26 Jun 2015
Keeping with the theme of "Gentoo is about choice", I've added the ability for users to include kdbus in their gentoo-sources kernel. I wanted an easy way for Gentoo users to test the patchset while keeping the default installation of not having it at all.
In order to include the patchset on your gentoo-sources you'll need the following:
1. A kernel version >= 4.1.0-r1
2. the 'experimental' use flag
3. the 'kdbus' use flag
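For example (the package.use file name here is illustrative; any file under /etc/portage/package.use works):

root # echo "sys-kernel/gentoo-sources experimental kdbus" >> /etc/portage/package.use/kernel
root # emerge --ask sys-kernel/gentoo-sources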
I am not a systemd user, but from the ebuild it looks like systemd will use kdbus if you build it with the 'kdbus' use flag.
Please send all kdbus bugs upstream by emailing the developers and including firstname.lastname@example.org in the CC .
Read as much as you can about kdbus before you decide to build it into your kernel. There have been security concerns mentioned (warranted or not), so following the upstream patch review at lkml.org would probably be prudent.
When a new version is released, wait a week before opening a bug. Unless I am on vacation, I will most likely have it included before the week is out. Thanks!
NOTE: This is not some kind of Gentoo endorsement of kdbus. Nor is it a Mike Pagano endorsement of kdbus. This is no different than some of the other optional and experimental patches we carry. I do all the genpatches work, which includes the patches, the ebuilds, and the bugs; since I don't mind the extra work of keeping this up to date, I can't see any reason not to include it as an option.
26 Jun 2015 11:35pm GMT
25 Jun 2015
After several months of packaging in the kde overlay and almost a month in tree, we have lifted the mask for KDE Plasma 5.3.1 today. If you want to test it out, here is some info on how to get it.
For easy transition we provide two new profiles, one for OpenRC and the other for systemd.
root # eselect profile list
Following example activates the Plasma systemd profile:
root # eselect profile set 9
On stable systems you need to unmask the qt5 use flag:
root # echo "-qt5" >> /etc/portage/profile/use.stable.mask
Any packages affected by the profile change need to be rebuilt:
root # emerge -avuND @world
For stable users, you also need to keyword the required packages. You can let portage handle it with the autounmask feature, or just grep the keyword files for KDE Frameworks 5.11 and KDE Plasma 5.3.1 from the kde overlay.
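For instance, portage can propose the needed keyword entries for you (review the proposed changes before merging them; the package atom is just the meta package from below):

root # emerge --ask --autounmask-write kde-plasma/plasma-meta
root # etc-update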
Now just install it (this is full Plasma 5, the basic desktop would be kde-plasma/plasma-desktop):
root # emerge -av kde-plasma/plasma-meta
the Gentoo KDE Team
25 Jun 2015 11:15pm GMT
23 Jun 2015
tl;dr Most servers running a multi-user webhosting setup with Apache HTTPD probably have a security problem. Unless you're using Grsecurity there is no easy fix.
I am part of a small webhosting business that I have been running as a side project for quite a while. We offer customers user accounts on our servers running Gentoo Linux and webspace with the typical Apache/PHP/MySQL combination. We recently became aware of a security problem regarding symlinks. I wanted to share this, because I was appalled by the fact that there was no obvious solution.
Apache has an option FollowSymLinks which basically does what it says. If a symlink in a webroot is accessed the webserver will follow it. In a multi-user setup this is a security problem. Here's why: If I know that another user on the same system is running a typical web application - let's say Wordpress - I can create a symlink to his config file (for Wordpress that's wp-config.php). I can't see this file with my own user account. But the webserver can see it, so I can access it with the browser over my own webpage. As I'm usually allowed to disable PHP I'm able to prevent the server from interpreting the file, so I can read the other user's database credentials. The webserver needs to be able to see all files, therefore this works. While PHP and CGI scripts usually run with user's rights (at least if the server is properly configured) the files are still read by the webserver. For this to work I need to guess the path and name of the file I want to read, but that's often trivial. In our case we have default paths in the form /home/[username]/websites/[hostname]/htdocs where webpages are located.
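The mechanics can be illustrated in a few lines of Python. This is only a sketch of the symlink-following part: in the real attack, filesystem permissions stop the attacker's own account from reading the file, and it is the webserver (which can read every webroot) that follows the link on the attacker's behalf. Here both "users" are the same uid, and the file names are made up:

```python
import os
import tempfile

base = tempfile.mkdtemp()
victim = os.path.join(base, "victim")
attacker = os.path.join(base, "attacker")
os.mkdir(victim)
os.mkdir(attacker)

# The victim's config file, e.g. a wp-config.php with database credentials.
secret = os.path.join(victim, "wp-config.php")
with open(secret, "w") as f:
    f.write("DB_PASSWORD=hunter2")

# The attacker plants a symlink to it inside their own webroot.
link = os.path.join(attacker, "read-me.txt")
os.symlink(secret, link)

# A reader that follows symlinks (like Apache with FollowSymLinks)
# happily serves the victim's secret through the attacker's webroot.
with open(link) as f:
    leaked = f.read()

# A reader that refuses to follow symlinks (O_NOFOLLOW) blocks the trick.
try:
    os.open(link, os.O_RDONLY | os.O_NOFOLLOW)
    nofollow_blocked = False
except OSError:
    nofollow_blocked = True

print(leaked, nofollow_blocked)
```

The point of the O_NOFOLLOW branch is that the problem is entirely about who resolves the link, not about file permissions on the target.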
So the obvious solution one might think about is to disable the FollowSymLinks option and forbid users to set it themselves. However symlinks in web applications are pretty common and many will break if you do that. It's not feasible for a common webhosting server.
Apache supports another option called SymLinksIfOwnerMatch. It's also pretty self-explanatory: it will only follow symlinks if they belong to the same user. That sounds like it solves our problem. However there are two catches: first of all, the Apache documentation itself says that "this option should not be considered a security restriction". It is still vulnerable to race conditions.
But even leaving the race condition aside, it doesn't really work. Web applications using symlinks will usually try to set FollowSymLinks in their .htaccess file. An example is Drupal, which by default comes with such an .htaccess file. If you forbid users to set FollowSymLinks, the option won't just be ignored; the whole webpage won't run and will just return an Error 500. What you could do is change the FollowSymLinks option in the .htaccess manually to SymlinksIfOwnerMatch. While this may be feasible in some cases, if you consider that you have a lot of users, you don't want to explain to all of them that in case they want to install some common web application they have to manually edit some file they don't understand. (There's a bug report for Drupal asking to change FollowSymLinks to SymlinksIfOwnerMatch, but it has been ignored for several years.)
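For reference, the manual edit amounts to a one-line change in the application's .htaccess (Drupal's stock file sets the first form):

# before: Options +FollowSymLinks
Options +SymLinksIfOwnerMatch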
So using SymLinksIfOwnerMatch is neither secure nor really feasible. The documentation for Cpanel discusses several possible solutions. The recommended solutions require proprietary modules. None of the proposed fixes work with a plain Apache setup, which I think is a pretty dismal situation. The most common web server has a severe security weakness in a very common situation and no usable solution for it.
The one solution that we chose is a feature of Grsecurity. Grsecurity is a Linux kernel patch that greatly enhances security and we've been very happy with it in the past. There are a lot of reasons to use this patch, I'm often impressed that local root exploits very often don't work on a Grsecurity system.
Grsecurity has an option like SymlinksIfOwnerMatch (CONFIG_GRKERNSEC_SYMLINKOWN) that operates on the kernel level. You can define a certain user group (which in our case is the "apache" group) for which this option will be enabled. For us this was the best solution, as it required very little change.
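In kernel config terms this amounts to something like the following (the GID value is whatever your "apache" group happens to be and will differ between systems):

CONFIG_GRKERNSEC=y
CONFIG_GRKERNSEC_SYMLINKOWN=y
CONFIG_GRKERNSEC_SYMLINKOWN_GID=81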
I haven't checked this, but I'm pretty sure that we are not alone with this problem. I'd guess that a lot of shared webhosting companies are vulnerable to it.
Here's the German blog post on our webpage and here's the original blogpost from an administrator at Uberspace (also German) which made us aware of this issue.
23 Jun 2015 9:41pm GMT
21 Jun 2015
Gentoo users rejoice: for a few days already we have had Perl 5.22.0 packaged in the main tree. Since we don't know yet how much stuff will break because of the update, it is masked for now. Which means, we need daring testers (preferably running ~arch systems; stable is also fine, but may need more work on your part to get things running) who unmask the new Perl, upgrade, and file bugs if needed!
Here's what you need in /etc/portage/package.unmask (and possibly package.accept_keywords) to get started (download); please always use the full block, since partial unmasking will lead to chaos. We're looking forward to your feedback!
# Perl 5.22.0 mask / unmask block
# end of the Perl 5.22.0 mask / unmask block
After the update, first run
emerge --depclean --ask
perl-cleaner should not need to do anything, ideally. If you have depcleaned first and it still wants to rebuild something, that's a bug. Please file a bug report for the package that is getting rebuilt (but check our wiki page on known Perl 5.22 issues first to avoid duplicates).
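Put together, a typical post-unmask run could look like this (perl-cleaner comes from app-admin/perl-cleaner; as noted above, the last step should ideally find nothing to do):

emerge --ask --update --deep --newuse @world
emerge --depclean --ask
perl-cleaner --all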
21 Jun 2015 8:14am GMT