03 Jun 2018

Planet Gentoo

Sebastian Pipping: Upstream release notification for package maintainers

Repology monitors package repositories across Linux distributions. The Atom feeds of per-maintainer outdated packages that I had been waiting for have now been implemented.

So I subscribed to my own Gentoo feed using net-mail/rss2email, and now Repology notifies me via e-mail of new upstream releases that other Linux distros have packaged and that I still need to bump in Gentoo. In my case, it brought an update of dev-vcs/svn2git to my attention that I would otherwise have missed (or heard about later).
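For reference, a minimal command sketch of such a subscription with rss2email (the commands follow the r2e 3.x interface; the feed URL and addresses are illustrative - look up your actual per-maintainer feed on Repology):

```
$ r2e new you@example.com
$ r2e add repology-gentoo "https://repology.org/maintainer/you@example.com/feed-for-repo/gentoo/atom"
$ r2e run
```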

Based on this comment, Repology may soon also do upstream release detection, similar to what euscan does.

03 Jun 2018 1:22pm GMT

01 Jun 2018


Domen Kožar: Announcing Cachix - Binary Cache as a Service

Over the last six years of working with Nix - mostly full-time in the last two - I've noticed a few patterns.

These are mostly a direct or indirect result of not having a "good enough" infrastructure to support how much Nix has grown (1600+ contributors, 1500 pull requests per month).

Without further ado, I am announcing https://cachix.org - Binary Cache as a Service that is ready to be used after two months of work.

What problem(s) does cachix solve?

The main motivation is to save you time and compute resources waiting for your packages to build. By using a shared cache of already built packages, you'll only have to build your project once.

This should also speed up CI builds, as Nix can make use of granular caching of each package, rather than caching the whole build.
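The intended workflow is roughly the following (the cache name is hypothetical; this is a sketch based on the cachix client, not a full setup guide):

```
$ cachix use mycache                 # trust and enable the binary cache
$ nix-build | cachix push mycache    # build, then push the resulting store paths
```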

Another benefit (which I personally consider even more important) is the decentralization of work produced by Nix developers. Up until today, most devs pushed their software updates into the nixpkgs repository, which is backed by the global binary cache at https://cache.nixos.org.

But as the community grew, fitting different ideologies into one global namespace became impossible. I consider the nixpkgs community to be mature, but clashes between rationally backed ideologies still occur. Some want packages to be featureful by default, some prefer them minimalist. Some might prefer lots of configuration knobs (for example, cross-compilation support or musl/glibc swapping), while some might prefer the build system to do just one thing, as that is easier to maintain.

These are not right or wrong opinions, but rather a specific view of use cases that software might or might not cover.

There are also many projects that don't fit into nixpkgs: their releases are too frequent, they are not available under a permissive license, they are simpler to manage with complete control, or their maintainers simply disagree with the requirements that nixpkgs developers impose on contributors.

And that's fine. What we've learned in the past is not to fight these ideas, but allow them to co-exist in different domains.

If you're interested:

Domen (domen@enlambda.com)

01 Jun 2018 10:00am GMT

25 May 2018


Michał Górny: The story of Gentoo management

I have recently made a tabular summary of (probably) all Council members and Trustees in the history of Gentoo. I think that this table provides a very succinct way of expressing the changes within management of Gentoo. While it can't express the complete history of Gentoo, it can serve as a useful tool of reference.

What questions can it answer? For example, it provides an easy way to see how many terms individuals have served, or how long Trustee terms were. You can clearly see who served both on the Council and on the Board and when those two bodies had common members. Most notably, it collects a fair amount of hard-to-find data in a single table.

Can you trust it? I've put effort into making the developer lists correct, but given the poor quality of the data (see below), I can't guarantee complete correctness. The Trustee term dates are approximate at best, and oriented around elections rather than actual terms (which are hard to establish). Finally, I've merged a few short-lived changes, such as empty seats between a resignation and the appointment of a replacement, as expressing them one by one made little sense and would cause the tables to grow even longer.

This article aims to be the text counterpart to the table. I would like to tell the history of the presented management bodies, explain the sources that I've used to get the data and the problems that I've found while working on it.

As you might suspect, the further back I had to go, the harder it was to find good data. The problems included the limited scope of our archives and some apparent secrecy of decision-making processes in the early days (judging by some cross-posts, the traffic on the -core mailing list was significant, and it was not archived before late 2004). Both due to lack of data, and due to a specific interest in developer self-government, this article starts in mid-2003.

Continue reading

25 May 2018 3:43pm GMT

20 May 2018


Michał Górny: Empty directories, *into, dodir, keepdir and tmpfiles.d

There seems to be some serious confusion around the way directories are installed in Gentoo. In this post, I would like to shortly explain the differences between different methods of creating directories in ebuilds, and instruct how to handle the issues related to installing empty directories and volatile locations.

Empty directories are not guaranteed to be installed

First things first. The standards are pretty clear here:

Behaviour upon encountering an empty directory is undefined. Ebuilds must not attempt to install an empty directory.

PMS 13.2.2 Empty directories (EAPI 7 version)

What does that mean in practice? It means that if an empty directory is found in the installation image, it may or may not be installed. Or it may be installed, and incidentally removed later (that's the historical Portage behavior!). In any case, you can't rely on either behavior. If you really need a directory to exist once the package is installed, you need to make it non-empty (see: keepdir below). If you really need a directory not to exist, you need to rmdir it from the image.

That said, this behavior does make sense. It guarantees that a Gentoo installation is secured against empty-directory pruning tools.
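To make the hazard concrete, here is a plain-shell sketch (made-up paths, no ebuild helpers involved) of what an empty-directory pruning pass does to an image tree, and how a keep-file protects a directory:

```shell
cd "$(mktemp -d)"
# a fake installation image: one empty directory, one with a keep-file
mkdir -p image/usr/share/foo image/var/lib/bar
touch image/var/lib/bar/.keep   # roughly what keepdir does
# prune empty directories, as some tools (and historical Portage) would
find image -depth -type d -empty -exec rmdir {} +
ls -d image/var/lib/bar         # survives thanks to the keep-file
```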

*into

The *into family of functions is used to control install destination for other ebuild helpers. By design, either they or the respective helpers create the install directories as necessary. In other words, you do not need to call dodir when using *into.

dodir

dodir is not really special in any way. It is just a convenient wrapper for install -d that prepends ${ED} to the path. It creates an empty directory the same way the upstream build system would have created it, and if the directory is left empty, it is not guaranteed to be preserved.

So when do you use it? You use it when you need to create a directory that will not be created otherwise and that will become non-empty at the end of the build process. Example use cases are working around broken build systems (that fail due to non-existing directories but do not create them), and creating directories when you want to manually write to a file there.

src_install() {
    # build system is broken and fails
    # if ${D}/usr/bin does not exist
    dodir /usr/bin
    default

    dodir /etc/foo
    sed -e "s:@libdir@:$(get_libdir):" \
        "${FILESDIR}"/foo.conf.in \
        > "${ED}"/etc/foo/foo.conf || die
}

keepdir

keepdir is the function specifically meant for installing empty directories. It creates the directory, and a keep-file inside it. The directory becomes non-empty and is therefore guaranteed to be installed and preserved. When using keepdir, you do not need to call dodir as well.

Note that actually preserving the empty directories is not always necessary. Sometimes packages are perfectly capable of recreating the directories themselves. However, make sure to verify that the permissions are correct afterwards.

src_install() {
    default

    # install empty directory
    keepdir /var/lib/foo
}

Volatile locations

The keepdir method works fine for persistent locations. However, it will not work correctly in directories that are volatile, such as /run, or that may be wiped by the user, such as /var/cache. On Gentoo, this also includes /var/run (which the OpenRC maintainers unilaterally decided to turn into a /run symlink) and /var/lock.

Since the package manager does not handle recreating those directories e.g. after a reboot, something else needs to. There are three common approaches to it, most preferred first:

  1. The application creates all necessary directories at startup.
  2. A tmpfiles.d file is installed to create the directories at boot.
  3. The init script creates the directories before starting the service (checkpath).

The preferred approach is for applications to create those directories themselves. However, not all applications do that, and not all actually can. For example, applications that are running unprivileged generally can't create those directories.

The second approach is to install a tmpfiles.d file to create (and maintain) the directory. Those files work out of the box both for systemd and for OpenRC users (via opentmpfiles). The directories are (re-)created at boot, and optionally cleaned up periodically. The ebuild should also use tmpfiles.eclass to trigger directory creation after installing the package.
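For illustration, a minimal tmpfiles.d fragment for a hypothetical service foo that needs a runtime directory (the path, mode and ownership are made up):

```
# /usr/lib/tmpfiles.d/foo.conf
# type  path      mode  user  group  age
d       /run/foo  0755  foo   foo    -
```

The ebuild would then install this file (tmpfiles.eclass provides helpers such as dotmpfiles) and call tmpfiles_process foo.conf in pkg_postinst, so the directory also appears right after installation rather than only after the next boot.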

The third approach is to make the init script create the directory. This was the traditional way but nowadays it is generally discouraged as it causes duplication between different init systems, and the directories are not created when the application is started directly by the user.

Summary

To summarize:

  1. when you install files via *into, installation directories are automatically created for you;
  2. when you need to create a directory into which files are installed in some way other than via ebuild helpers, use dodir;
  3. when you need to install an empty directory in a non-volatile location (and the application can't just create it on start), use keepdir;
  4. when you need to install a directory in a volatile location (and the application can't just create it on start), use tmpfiles.d.

20 May 2018 8:03am GMT

13 May 2018


Michał Górny: A short history of Gentoo copyright

As part of the recent effort to form a new copyright policy for Gentoo, research into its historical status has been conducted. We've tried to establish all the key events regarding the topic, as well as the reasoning behind the existing policy. I would like to briefly recount that history, based on the evidence discovered by Robin H. Johnson, Ulrich Müller and myself.

Continue reading

13 May 2018 7:04pm GMT

12 May 2018


Michał Górny: On OpenPGP (GnuPG) key management

Over time, a number of developers have had problems following the Gentoo OpenPGP key policy (GLEP 63). In particular, the key expiration requirements have resulted in many developers wanting to replace their keys unnecessarily. I've been asked to write some instructions on managing your OpenPGP key, and I've decided to go for a full blog post with some less-known tips. I won't be getting into detailed explanations of how to use GnuPG though - you may still need to read the documentation after all.

Primary key and subkeys

An OpenPGP key actually consists of one or more pairs of public and private keys - the primary key (or root key, in GLEP 63 naming), and zero or more subkeys. Ideally, the primary key is only used to create subkeys, UIDs, manipulate them and sign other people's keys. All 'non-key' cryptographic operations are done using subkeys. This reduces the wear of the primary key, and the risk of it being compromised.

If you don't use a smartcard, a good idea is to move the private part of the primary key off-site, since you don't need it for normal operation. However, before doing that, please remember to always have a revocation certificate around: you will need it to revoke the primary key if you lose it. With GnuPG 2.1, removing private keys is trivial. First, list all keys with their keygrips:

$ gpg --list-secret --with-keygrip
/home/you/.gnupg/pubring.kbx
-------------------------------
sec   rsa2048/0xBBC7E6E002FE74E8 2018-05-12 [SC] [expires: 2020-05-11]
      55642983197252C35550375FBBC7E6E002FE74E8
      Keygrip = B51708C7209017A162BDA515A9803D3089B993F0
uid                   [ultimate] Example key <example@example.com>
ssb   rsa2048/0xB7BA421CDCD4AF16 2018-05-12 [E] [expires: 2020-05-11]
      Keygrip = 92230550DA684B506FC277B005CD3296CB70463C

Note that the output may differ depending on your settings. The sec entry indicates a primary key. Once you find the correct key, just look for a file named after its Keygrip in ~/.gnupg/private-keys-v1.d (e.g. B51708C7209017A162BDA515A9803D3089B993F0.key here). Move that file off-site and voilà!

In fact, you can go even further and use a dedicated off-line system to create and manage keys, and only transfer the appropriate private keys (and public keyring updates) to your online hosts. You can transfer and remove any other private key the same way, and use --export to transfer the public keys.

How many subkeys to use?

Create at least one signing subkey and exactly one encryption subkey.

Signing keys are used to sign data, i.e. to prove its integrity and authenticity. Using multiple signing subkeys is rather trivial - you can explicitly specify the key to use while creating a signature (note that you need to append ! to the key id to force use of a non-default subkey), and GnuPG will automatically use the correct subkey when verifying the signature. To reduce the wear of your main signing subkey, you can create a separate signing subkey for Gentoo commits. Or you can go even further and have a separate signing subkey for each machine you use (and keep only the appropriate key on each machine).
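As a command sketch (the key id below is made up), forcing a particular signing subkey looks like this; without the trailing !, GnuPG would pick its default subkey:

```
$ gpg --local-user 0x0123456789ABCDEF! --armor --detach-sign file.txt
```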

Encryption keys are used to encrypt messages. While technically it is possible to have multiple encryption subkeys, GnuPG does not make that meaningful: when someone tries to encrypt a message to you, it will insist on using the newest key even if multiple keys are valid. Therefore, use only one encryption key to avoid confusion.

There is also a third key class: authentication keys that can be used in place of SSH keys. If you intend to use them, I suggest the same rule as for SSH keys, that is one key for each host holding the keyring. More on using GnuPG for SSH below.

To summarize: use one encryption subkey, and as many signing and authentication subkeys as you need. Using more subkeys reduces individual wear of each key, and makes it easier to assess the damage if one of them gets compromised.

When to create a new key?

One of the common misconceptions is that you need to create a new key when the current one expires. This is not really the purpose of key expiration - we use it mostly to automatically rule out dead keys. There are generally three cases when you want to create a new key:

  1. if the key is compromised,
  2. if the primary key is irrecoverably lost,
  3. if the key uses really weak algorithm (e.g. short DSA key).

Most of the time, you will just decide to prolong the primary key and subkeys, i.e. use the --edit-key option to update their expiration dates. Note that GnuPG is not very user-friendly there. To prolong the primary key, use expire command without any subkeys selected. To prolong one or more subkeys, select them using key and then use expire. Normally, you will want to do this periodically, before the expiration date to give people some time to refresh. Add it to your calendar as a periodic event.

$ gpg --edit-key 0xBBC7E6E002FE74E8
Secret key is available.

sec  rsa2048/0xBBC7E6E002FE74E8
     created: 2018-05-12  expires: 2020-05-11  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa2048/0xB7BA421CDCD4AF16
     created: 2018-05-12  expires: 2020-05-11  usage: E   
[ultimate] (1). Example key <example@example.com>

gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Tue May 11 12:32:35 2021 CEST
Is this correct? (y/N) y

sec  rsa2048/0xBBC7E6E002FE74E8
     created: 2018-05-12  expires: 2021-05-11  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa2048/0xB7BA421CDCD4AF16
     created: 2018-05-12  expires: 2020-05-11  usage: E   
[ultimate] (1). Example key <example@example.com>

gpg> key 1

sec  rsa2048/0xBBC7E6E002FE74E8
     created: 2018-05-12  expires: 2021-05-11  usage: SC  
     trust: ultimate      validity: ultimate
ssb* rsa2048/0xB7BA421CDCD4AF16
     created: 2018-05-12  expires: 2020-05-11  usage: E   
[ultimate] (1). Example key <example@example.com>

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Sun May 12 12:32:47 2019 CEST
Is this correct? (y/N) y

sec  rsa2048/0xBBC7E6E002FE74E8
     created: 2018-05-12  expires: 2021-05-11  usage: SC  
     trust: ultimate      validity: ultimate
ssb* rsa2048/0xB7BA421CDCD4AF16
     created: 2018-05-12  expires: 2019-05-12  usage: E   
[ultimate] (1). Example key <example@example.com>

If one of the conditions above applies to one of your subkeys, or you think a subkey has seen particularly heavy use, you will want to replace that subkey. While at it, make sure that the old key is either expired or revoked (but don't revoke the whole key accidentally!). If one of those conditions applies to your primary key, revoke it and start propagating your new key.

Please remember to upload your key to key servers after each change (using --send-keys).

To summarize: prolong your keys periodically, rotate subkeys whenever you consider that beneficial but avoid replacing the primary key unless really necessary.

Using gpg-agent for SSH authentication

If you already have to set up a secure store for OpenPGP keys, why not use it for SSH keys as well? GnuPG provides ssh-agent emulation which lets you use an OpenPGP subkey to authenticate via SSH.

Firstly, you need to create a new key. You need to use the --expert option to access the additional options. Use addkey to create a new key, choose one of the options with custom capabilities and toggle them from the default sign+encrypt to authenticate:

$ gpg --expert --edit-key 0xBBC7E6E002FE74E8
Secret key is available.

sec  rsa2048/0xBBC7E6E002FE74E8
     created: 2018-05-12  expires: 2020-05-11  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa2048/0xB7BA421CDCD4AF16
     created: 2018-05-12  expires: 2020-05-11  usage: E   
[ultimate] (1). Example key <example@example.com>

gpg> addkey
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
  (10) ECC (sign only)
  (11) ECC (set your own capabilities)
  (12) ECC (encrypt only)
  (13) Existing key
Your selection? 8

Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Sign Encrypt 

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? s

Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Encrypt 

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? e

Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: 

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? a

Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Authenticate 

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? q
[...]

Once the key is created, find its keygrip:

$ gpg --list-secret --with-keygrip
/home/mgorny/.gnupg/pubring.kbx
-------------------------------
sec   rsa2048/0xBBC7E6E002FE74E8 2018-05-12 [SC] [expires: 2020-05-11]
      55642983197252C35550375FBBC7E6E002FE74E8
      Keygrip = B51708C7209017A162BDA515A9803D3089B993F0
uid                   [ultimate] Example key <example@example.com>
ssb   rsa2048/0xB7BA421CDCD4AF16 2018-05-12 [E] [expires: 2020-05-11]
      Keygrip = 92230550DA684B506FC277B005CD3296CB70463C
ssb   rsa2048/0x2BE2AF20C43617A0 2018-05-12 [A] [expires: 2018-05-13]
      Keygrip = 569A0C016AB264B0451309775FDCF06A2DE73473

This time we're talking about the keygrip of the [A] key. Append that to ~/.gnupg/sshcontrol:

$ echo 569A0C016AB264B0451309775FDCF06A2DE73473 >> ~/.gnupg/sshcontrol

The final step is to start gpg-agent with --enable-ssh-support. The exact procedure here depends on the environment used. In XFCE, it involves setting a hidden configuration option:

$ xfconf-query -c xfce4-session -p /startup/ssh-agent/type -n -t string -s gpg-agent
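Independently of the desktop environment, a shell profile fragment along these lines (assuming GnuPG 2.1+, where gpgconf can report the agent's SSH socket) points SSH clients at gpg-agent:

```shell
# e.g. in ~/.bashrc: make ssh talk to gpg-agent instead of ssh-agent
unset SSH_AGENT_PID
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```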

Further reading

12 May 2018 6:40am GMT

08 May 2018


Michał Górny: Copyright 101 for Gentoo contributors

While the work on new Gentoo copyright policy is still in progress, I think it would be reasonable to write a short article on copyright in general, for the benefit of Gentoo developers and contributors (proxied maintainers, in particular). There are some common misconceptions regarding copyright, and I would like to specifically focus on correcting them. Hopefully, this will reduce the risk of users submitting ebuilds and other files in violation of copyrights of other parties.

First of all, I'd like to point out that IANAL. The following information is based on what I've gathered from various sources over the years. Some or all of it may be incorrect. I take no responsibility for that. When in doubt, please contact a lawyer.

Secondly, copyright laws vary from country to country. In particular, I have no clue how they work across two countries with incompatible laws. I attempt to provide a baseline that should work both for the US and the EU, i.e. 'stay on the safe side'. However, there is no guarantee that it will work everywhere.

Thirdly, you might argue that a particular case would not stand a chance in court. However, my goal here is to avoid the court in the first place.

The guidelines follow. While I'm referring to 'code' below, the same rules apply to any copyrightable material.

  1. Lack of clear copyright notice does not imply lack of copyright. When there is no license declaration clearly applicable to the file in question, it is implicitly all-rights-reserved. In other words, you can't reuse that code in your project. You need to contact the copyright holder and ask him to give you rights to do so (i.e. add a permissive license).
  2. Copyright still holds even if the author did not list his name, made it anonymously or used a fake name. If it's covered by an open source license, you can use it preserving the original copyright notice. If not, you need to reliably determine who the real copyright holder is.
  3. 'Public domain' dedication is not recognized globally (e.g. in the EU copyright is irrevocable). If you wish to release your work with no restrictions, please use an equivalent globally recognized license, e.g. CC0. If you wish to include a 'public domain' code in your project, please consider contacting its author to use a safer license option instead.
  4. Copyrights and licenses do not merge when combining code. Instead, each code fragment retains its original copyright. When you include code with different copyright, you should include the original copyright notice. If you modify such code fragment, you only hold copyright (and can enforce your own license) to your own changes.
  5. Copyright is only applicable to original work. It is generally agreed that e.g. a typo fix is not copyrightable (i.e. you can't pursue copyright for doing that). However, for anything more complex than that, the distinction is rather blurry.
  6. When a project uses code fragments with multiple different licenses, you need to conform to all of them.
  7. When a project specifies that you can choose between multiple licenses (e.g. BSD/GPL dual-licensing, 'GPL-2 or newer'), you need to conform only to the terms of one of the specified licenses. However, in the context of a single use, you need to conform to all terms of the chosen license. You can't freely combine incompatible terms of multiple licenses.
  8. Not all licenses can be combined within a single project. Before including code using a different license, please research license compatibility. Most of those rules are asymmetrical. For example:
    • you can't include GPL code in BSD-licensed project (since GPL forbids creating derivative work with less restrictive licensing);
    • but you can include BSD-licensed code in GPL project (since BSD does not forbid using more restrictive license in derivative works);
    • also, you can include BSD/GPL dual-licensed code in BSD-licensed project (since dual-licensing allows you to choose either of the licenses).
  9. Relicensing a whole project can happen only if you obtain explicit permission from all people holding copyright to it. Otherwise, you can only relicense those fragments to which you had obtained permission (provided that the new license is compatible with the remaining licenses).
  10. Relicensing a project does not apply retroactively. The previous license still applies to the revisions of the project prior to the license change. However, this applies only to factual license changes. For example, if an MIT-licensed project included an LGPL code snippet that lacked the appropriate copyright notice (and added the necessary notice afterwards), you can't use the snippet under the (mistakenly attributed) MIT license.

08 May 2018 5:59am GMT

03 May 2018


Michał Górny: The ultimate guide to EAPI 7

Back when EAPI 6 was approved and ready for deployment, I wrote a blog post entitled the Ultimate Guide to EAPI 6. Now that EAPI 7 is ready, it is time to publish a similar guide to it.

Of all the EAPIs approved so far, EAPI 7 brings the largest number of changes. It follows the path established by EAPI 6: it focuses on integrating features that are either commonly used or that cannot be properly implemented in eclasses, and on removing those that are either deemed unnecessary or too complex to support. However, the circumstances of its creation are entirely different.

EAPI 6 was more like a minor release. It was formed around a time when Portage development had practically stalled. It aimed to collect some old requests into an EAPI that would be easy to implement by people with little knowledge of the Portage codebase. Therefore, the majority of features oscillated around the bash parts of the package manager.

EAPI 7 is closer to a proper major release. It included some explicit planning ahead of specification, and the specification has been mostly completed even before the implementation work started. We did not initially skip features that were hard to implement, even though the hardest of them were eventually postponed.

I will attempt to explain all the changes in EAPI 7 in this guide, including the rationale and ebuild code examples.

Continue reading

03 May 2018 7:25am GMT

18 Apr 2018


Zack Medico: portage API now provides an asyncio event loop policy

In portage-2.3.30, portage's python API provides an asyncio event loop policy via a DefaultEventLoopPolicy class. For example, here's a little program that uses portage's DefaultEventLoopPolicy to do the same thing as emerge --regen, using an async_iter_completed function to implement the --jobs and --load-average options:

#!/usr/bin/env python

from __future__ import print_function

import argparse
import functools
import multiprocessing
import operator

import portage
from portage.util.futures.iter_completed import (
    async_iter_completed,
)
from portage.util.futures.unix_events import (
    DefaultEventLoopPolicy,
)


def handle_result(cpv, future):
    metadata = dict(zip(portage.auxdbkeys, future.result()))
    print(cpv)
    for k, v in sorted(metadata.items(),
        key=operator.itemgetter(0)):
        if v:
            print('\t{}: {}'.format(k, v))
    print()


def future_generator(repo_location, loop=None):

    portdb = portage.portdb

    for cp in portdb.cp_all(trees=[repo_location]):
        for cpv in portdb.cp_list(cp, mytree=repo_location):
            future = portdb.async_aux_get(
                cpv,
                portage.auxdbkeys,
                mytree=repo_location,
                loop=loop,
            )

            future.add_done_callback(
                functools.partial(handle_result, cpv))

            yield future


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--repo',
        action='store',
        default='gentoo',
    )
    parser.add_argument(
        '--jobs',
        action='store',
        type=int,
        default=multiprocessing.cpu_count(),
    )
    parser.add_argument(
        '--load-average',
        action='store',
        type=float,
        default=multiprocessing.cpu_count(),
    )
    args = parser.parse_args()

    try:
        repo_location = portage.settings.repositories.\
            get_location_for_name(args.repo)
    except KeyError:
        parser.error('unknown repo: {}\navailable repos: {}'.\
            format(args.repo, ' '.join(sorted(
            repo.name for repo in
            portage.settings.repositories))))

    policy = DefaultEventLoopPolicy()
    loop = policy.get_event_loop()

    try:
        for future_done_set in async_iter_completed(
            future_generator(repo_location, loop=loop),
            max_jobs=args.jobs,
            max_load=args.load_average,
            loop=loop):
            loop.run_until_complete(future_done_set)
    finally:
        loop.close()



if __name__ == '__main__':
    main()

18 Apr 2018 6:15am GMT

03 Apr 2018


Alexys Jacob: py3status v3.8

Another long-awaited release has come true thanks to our community!

The changelog is so huge that I had to open an issue and cry for help to make it happen… thanks again @lasers for stepping up once again 🙂

Highlights

Milestone 3.9

The next release will focus on bugs and modules improvements / standardization.

Thanks contributors!

This release is their work, thanks a lot guys!

03 Apr 2018 12:06pm GMT

02 Apr 2018


Thomas Raschbacher: enlightenment 0.22.3 fixes lock screen bug on linux (pam related)

Thanks to the enlightenment devs for fixing this ;) no lock screen sucks :D

https://www.enlightenment.org/news/e0.22.3_release

also it is in my Gentoo dev overlay as of now.

02 Apr 2018 6:20pm GMT

Thomas Raschbacher: Gentoo Dev overlay in layman again - contains efl 1.20.7 and enlightenment 0.22.[12]

So a while ago I cleaned out my dev overlay and added dev-libs/efl-1.20.7 and x11-wm/enlightenment-0.22.1 (and 0.22.2)

Works for me at the moment (except the screen (un-)lock), but I'm not sure if that has to do with my box. Any testers are welcome!

Here's the link: https://gitweb.gentoo.org/dev/lordvan.git/

Oh and I added it to layman's repo list again, so gentoo users can easily just "layman -a lordvan" to test it.

On a side note: 0.22.1 gave me trouble with a 2nd screen plugged in, which seems to be fixed in 0.22.2, but that has (pam-related) problems with the lock screen.

02 Apr 2018 5:53pm GMT

17 Mar 2018


Sebastian Pipping: Holy cow! Larry the cow Gentoo tattoo

Probably not new but was new to me: Just ran into this Larry the Cow tattoo online: http://www.geekytattoos.com/larry-the-gender-challenged-cow/

17 Mar 2018 2:53pm GMT

11 Mar 2018


Greg KH: My affidavit in the Geniatech vs. McHardy case

As many people know, last week there was a court hearing in the Geniatech vs. McHardy case. This was a case brought claiming a license violation of the Linux kernel in Geniatech devices in the German court of OLG Cologne.

Harald Welte has written up a wonderful summary of the hearing, I strongly recommend that everyone go read that first.

In Harald's summary, he refers to an affidavit that I provided to the court. Because the case was withdrawn by McHardy, my affidavit was not entered into the public record. I had always assumed that my affidavit would be made public, and since I have had a number of people ask me about what it contained, I figured it was good to just publish it for everyone to be able to see it.

There are some minor edits from what was exactly submitted to the court such as the side-by-side German translation of the English text, and some reformatting around some footnotes in the text, because I don't know how to do that directly here, and they really were not all that relevant for anyone who reads this blog. Exhibit A is also not reproduced as it's just a huge list of all of the kernel releases in which I felt that were no evidence of any contribution by Patrick McHardy.

AFFIDAVIT

I, the undersigned, Greg Kroah-Hartman,
declare in lieu of an oath and in the
knowledge that a wrong declaration in
lieu of an oath is punishable, to be
submitted before the Court:

I. With regard to me personally:

1. I have been an active contributor to
   the Linux Kernel since 1999.

2. Since February 1, 2012 I have been a
   Linux Foundation Fellow.  I am currently
   one of five Linux Foundation Fellows
   devoted to full time maintenance and
   advancement of Linux. In particular, I am
   the current Linux stable Kernel maintainer
   and manage the stable Kernel releases. I
   am also the maintainer for a variety of
   different subsystems that include USB,
   staging, driver core, tty, and sysfs,
   among others.

3. I have been a member of the Linux
   Technical Advisory Board since 2005.

4. I have authored two books on Linux Kernel
   development including Linux Kernel in a
   Nutshell (2006) and Linux Device Drivers
   (co-authored Third Edition in 2009.)

5. I have been a contributing editor to Linux
   Journal from 2003 - 2006.

6. I am a co-author of every Linux Kernel
   Development Report. The first report was
   based on my Ottawa Linux Symposium keynote
   in 2006, and the report has been published
   every few years since then. I have been
   one of the co-authors on all of them. This
   report includes a periodic in-depth
   analysis of who is currently contributing
   to Linux. Because of this work, I have an
   in-depth knowledge of the various records
   of contributions that have been maintained
   over the course of the Linux Kernel
   project.

   For many years, Linus Torvalds compiled a
   list of contributors to the Linux kernel
   with each release. There are also usenet
   and email records of contributions made
   prior to 2005. In April of 2005, Linus
   Torvalds created a program now known as
   "Git" which is a version control system
   for tracking changes in computer files and
   coordinating work on those files among
   multiple people. Every Git directory on
   every computer contains an accurate
   repository with complete history and full
   version tracking abilities.  Every Git
   directory captures the identity of
   contributors.  Development of the Linux
   kernel has been tracked and managed using
   Git since April of 2005.

   One of the findings in the report is that
   since the 2.6.11 release in 2005, a total
   of 15,637 developers have contributed to
   the Linux Kernel.

7. I have been an advisor on the Cregit
   project and compared its results to other
   methods that have been used to identify
   contributors and contributions to the
   Linux Kernel, such as a tool known as "git
   blame" that is used by developers to
   identify contributions to a git repository
   such as the repositories used by the Linux
   Kernel project.

8. I have been shown documents related to
   court actions by Patrick McHardy to
   enforce copyright claims regarding the
   Linux Kernel. I have heard many people
   familiar with the court actions discuss
   the cases and the threats of injunction
   McHardy leverages to obtain financial
   settlements. I have not otherwise been
   involved in any of the previous court
   actions.

II. With regard to the facts:

1. The Linux Kernel project started in 1991
   with a release of code authored entirely
   by Linus Torvalds (who is also currently a
   Linux Foundation Fellow).  Since that time
   there have been a variety of ways in which
   contributions and contributors to the
   Linux Kernel have been tracked and
   identified. I am familiar with these
   records.

2. The first record of any contribution
   explicitly attributed to Patrick McHardy
   to the Linux kernel is April 23, 2002.
   McHardy's last contribution to the Linux
   Kernel was made on November 24, 2015.

3. The Linux Kernel 2.5.12 was released by
   Linus Torvalds on April 30, 2002.

4. After review of the relevant records, I
   conclude that there is no evidence in the
   records that the Kernel community relies
   upon to identify contributions and
   contributors that Patrick McHardy made any
   code contributions to versions of the
   Linux Kernel earlier than 2.4.18 and
   2.5.12. Attached as Exhibit A is a list of
   Kernel releases which have no evidence in
   the relevant records of any contribution
   by Patrick McHardy.

11 Mar 2018 1:51am GMT

01 Mar 2018


Thomas Raschbacher: Running UCS (Univention Corporate Server) Core on Gentoo with kvm + using an LVM volume

Just a quick post about how to run UCS (Core Edition in my case) with KVM on Gentoo.

First off, I go with the assumption that kvm/qemu is already set up and working on the host.

If that is not yet the case: https://wiki.gentoo.org/wiki/QEMU

First, download the VirtualBox image from https://www.univention.de/download/.

For the kvm instance name I use ucs-dc.

Next we convert the image to qcow2:

qemu-img convert -f vmdk -O qcow2 UCS-DC/UCS-DC-virtualbox-disk1.vmdk  UCS-DC_disk1.qcow2

create your init script link:

cd /etc/init.d; ln -s qemu kvm.ucs-dc

Then in /etc/conf.d copy qemu.conf.example to kvm.ucs-dc

Check / change the following:

  1. change the MACADDR (the example file includes a command line to generate one) -- this comes first because if you forget it, you might spend hours (like me) trying to find out why your network is not working
  2. QEMU_TYPE="x86_64"
  3. NIC_TYPE=br
  4. point DISKIMAGE= to your qcow2 file
  5. ENABLE_KVM=1 (believe me, running without KVM acceleration is noticeably slower)
  6. adjust MEMORY (I set it to 2GB for the DC) and SMP (I set that to 2)
  7. FOREGROUND="vnc=:<port>" - so you can connect to your console using VNC
  8. check whether the remaining settings apply to you (OTHER_ARGS is quite useful, for example to add CD/USB emulation of a rescue disk)
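Pulled together, the resulting /etc/conf.d/kvm.ucs-dc might look roughly like this (all values are illustrative examples, not taken from a real setup - generate your own MACADDR and adjust the paths):

```shell
# illustrative /etc/conf.d/kvm.ucs-dc - adjust everything to your setup
QEMU_TYPE="x86_64"
MACADDR="b2:cd:07:12:34:56"    # generate your own!
NIC_TYPE=br
DISKIMAGE="/var/lib/kvm/UCS-DC_disk1.qcow2"
ENABLE_KVM=1
MEMORY="2048M"
SMP=2
FOREGROUND="vnc=:1"
```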

Run it with:

/etc/init.d/kvm.ucs-dc start

Connect with your favourite VNC client and set up your UCS server.

One thing I did on the fileserver instance (I run 3 UCS kvms at the moment - DC, Backup-DC and File Server):

I created an LVM volume for the file share on the host and mapped it into the KVM guest - here's the config line:

OTHER_ARGS="-drive format=raw,file=/dev/mapper/<your volume device>,if=virtio,aio=native,cache.direct=on"

Works great for me, and I will probably add another volume for other shares later. This way, if I really have any VM problems, my files are just on the LVM device and I can get to them easily (LVM snapshots could also be useful eventually).

01 Mar 2018 10:48am GMT

Thomas Raschbacher: Tryton setup & config

Because I keep forgetting stuff I need to do (or the order), here is a very quick overview:

Install trytond, modules + deps (on Gentoo, add the tryton overlay and just emerge them).

If you don't use SQLite, create a user (and database) for tryton.

Gentoo init scripts use /etc/conf.d/trytond (here's mine):

# Location of the configuration file
CONFIG=/etc/tryton/trytond.conf
# Location of the logging configuration file
LOGCONF=/etc/tryton/logging.conf
# The database names to load (space separated)
DATABASES=tryton

Since it took me a while to find a working logging.conf example, here's my working one:

[formatters]
keys=simple

[handlers]
keys=rotate,console

[loggers]
keys=root

[formatter_simple]
format=[%(asctime)s] %(levelname)s:%(name)s:%(message)s
datefmt=%a %b %d %H:%M:%S %Y

[handler_rotate]
class=handlers.TimedRotatingFileHandler
args=('/var/log/trytond/trytond.log', 'D', 1, 120)
formatter=simple

[handler_console]
class=StreamHandler
formatter=simple
args=(sys.stdout,)

[logger_root]
level=INFO
handlers=rotate,console
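The file above uses the stdlib logging.config file format, so you can sanity-check edits outside of trytond with Python itself. The trimmed-down config in this sketch is for illustration only - it logs to the console and drops the rotating file handler so it runs anywhere:

```python
import logging
import logging.config
import os
import tempfile

# trimmed-down, console-only variant of the config above (illustration only)
CONF = """\
[formatters]
keys=simple

[handlers]
keys=console

[loggers]
keys=root

[formatter_simple]
format=[%(asctime)s] %(levelname)s:%(name)s:%(message)s
datefmt=%a %b %d %H:%M:%S %Y

[handler_console]
class=StreamHandler
formatter=simple
args=(sys.stdout,)

[logger_root]
level=INFO
handlers=console
"""

def load_logging_conf(text):
    # write the config to a temporary file and feed it to the stdlib parser
    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write(text)
        path = f.name
    try:
        logging.config.fileConfig(path, disable_existing_loggers=False)
    finally:
        os.unlink(path)
    return logging.getLogger()

if __name__ == '__main__':
    load_logging_conf(CONF).info('logging config loaded')
```

If fileConfig raises, the real file would make trytond fail on startup too, so this catches typos early.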

(Not going into details here, if you want to know more there are plenty of resources online)

As for the config, I went and got an example online (from openSUSE) and modified it:

# /etc/tryton/trytond.conf - Configuration file for Tryton Server (trytond)
#
# This file contains the most common settings for trytond (Defaults
# are commented).
# For more information read
# /usr/share/doc/trytond-<version>/

[database]
# Database related settings

# The URI to connect to the SQL database (following RFC-3986)
# uri = database://username:password@host:port/
# (Internal default: sqlite:// (i.e. a local SQLite database))
#
# PostgreSQL via Unix domain sockets
# (e.g. PostgreSQL database running on the same machine (localhost))
#uri = postgresql://tryton:tryton@/
#
#Default setting for a local postgres database
#uri = postgresql:///

#
# PostgreSQL via TCP/IP
# (e.g. connecting to a PostgreSQL database running on a remote machine or
# by means of md5 authentication. Needs PostgreSQL to be configured to accept
# those connections (pg_hba.conf).)
#uri = postgresql://tryton:tryton@localhost:5432/
uri = postgresql://tryton:mypassword@localhost:5432/

# The path to the directory where the Tryton Server stores files.
# The server must have write permissions to this directory.
# (Internal default: /var/lib/trytond)
path = /var/lib/tryton

# Shall available databases be listed in the client?
#list = True

# The number of retries of the Tryton Server when there are errors
# in a request to the database
#retry = 5

# The primary language, that is used to store entries in translatable
# fields into the database.
#language = en_US
language = de_AT

[ssl]
# SSL settings
# Activation of SSL for all available protocols.
# Uncomment the following settings for key and certificate
# to enable SSL.

# The path to the private key
#privatekey = /etc/ssl/private/ssl-cert-snakeoil.key

# The path to the certificate
#certificate = /etc/ssl/certs/ssl-cert-snakeoil.pem

[jsonrpc]
# Settings for the JSON-RPC network interface

# The IP/host and port number of the interface
# (Internal default: localhost:8000)
#
# Listen on all interfaces (IPv4)

listen = 0.0.0.0:8000

#
# Listen on all interfaces (IPv4 and IPv6)
#listen = [::]:8000

# The hostname for this interface
#hostname =

# The root path to retrieve data for GET requests
#data = jsondata

[xmlrpc]
# Settings for the XML-RPC network interface

# The IP/host and port number of the interface
#listen = localhost:8069

[webdav]
# Settings for the WebDAV network interface

# The IP/host and port number of the interface
#listen = localhost:8080
listen = 0.0.0.0:8080

[session]
# Session settings

# The time (in seconds) until an inactive session expires
timeout = 3600

# The server administration password used by the client for
# the execution of database management tasks. It is encrypted
# using the Unix crypt(3) routine. A password can be
# generated using the following command line (on one line):
# $ python -c 'import getpass,crypt,random,string; \
# print crypt.crypt(getpass.getpass(), \
# "".join(random.sample(string.ascii_letters + string.digits, 8)))'
# Example password with 'admin'
#super_pwd = jkUbZGvFNeugk
super_pwd = <your pwd>


[email]
# Mail settings

# The URI to connect to the SMTP server.
# Available protocols are:
# - smtp: simple SMTP
# - smtp+tls: SMTP with STARTTLS
# - smtps: SMTP with SSL
#uri = smtp://localhost:25
uri = smtp://localhost:25

# The From address used by the Tryton Server to send emails.
from = tryton@<your-domain.tld>

[report]
# Report settings

# Unoconv parameters for connection to the unoconv service.
#unoconv = pipe,name=trytond;urp;StarOffice.ComponentContext

# Module settings
#
# Some modules are reading configuration parameters from this
# configuration file. These settings only apply when those modules
# are installed.
#
#[ldap_authentication]
# The URI to connect to the LDAP server.
#uri = ldap://host:port/dn?attributes?scope?filter?extensions
# A basic default URL could look like
#uri = ldap://localhost:389/

[web]
# Path for the web-frontend
#root = /usr/lib/node-modules/tryton-sao
listen = 0.0.0.0:8000
root = /usr/share/sao

Set up the database tables, modules, and superuser:

trytond-admin -c /etc/tryton/trytond.conf -d tryton --all

Should you forget to set your superuser password (or need to change it later):

trytond-admin -c /etc/tryton/trytond.conf -d tryton -p

It's now time to connect a client and enable & configure the modules. Make sure to finish the basic configuration (including accounts); otherwise you have to either restart or know exactly what needs to be set up accounting-wise!

During this you can watch trytond.log to see what happens behind the scenes (e.g. the country module takes a while).

How to add languages:

If you install new modules or languages, run trytond-admin ... --all again (see above).

01 Mar 2018 8:10am GMT