13 Oct 2019


Michał Górny: Improving distfile mirror structure

The Gentoo distfile mirror network is essential in distributing sources to our users. It offloads upstream download locations, improves throughput and reliability, and guarantees distfile persistence.

The current structure of distfile mirrors dates back to 2002. It might have worked well back when we mirrored around 2,500 files, but it has proven not to scale. Today, mirrors hold almost 70,000 files, and this number has been causing problems for mirror admins.

The most recent discussion on restructuring mirrors started in January 2015. I started the preliminary research in January 2017, and it resulted in GLEP 75 being created in January 2018. With the actual implementation effort starting in October 2019, I'd like to summarize all the data and update it with fresh statistics.


13 Oct 2019 2:34pm GMT

04 Oct 2019


Thomas Raschbacher: [gentoo] Network Bridge device for Qemu kvm

So I needed to set up qemu+kvm on a new server (after the old one died).

Seems like I forgot to mention how I set up the bridge network in my previous blog post, so here you go:

First, let me mention that I am using the second physical interface on the server for the bridge. Depending on your available hardware or use case you might need or want to change this.

The setup is fairly simple (if one has a properly configured kernel of course - details are in the Gentoo Wiki article on Network Bridges):

First, add this to your /etc/conf.d/net (adjust the interface names as needed):

# QEMU / KVM bridge
bridge_br0="enp96s0f1"
config_br0="dhcp"

Then add an init script symlink and start it:

cd /etc/init.d; ln -s net.lo net.br0
/etc/init.d/net.br0 start # to test it
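
To have the bridge come up automatically at boot as well, you can add it to the default runlevel (standard OpenRC; adjust if your setup differs):

rc-update add net.br0 default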

Then I got this error when trying to start my kvm/qemu instance:

 * creating qtap (auto allocating one) ...
/usr/sbin/qtap-manipulate: line 28: tunctl: command not found
tunctl failed
* failed to create qtap interface

Seems like I was missing sys-apps/usermode-utilities, so I just emerged that, only to get this:

 * creating qtap (auto allocating one) ...
/usr/sbin/qtap-manipulate: line 29: brctl: command not found
brctl failed
* failed to create qtap interface

Yep, I forgot to install that one too ^^ .. so install net-misc/bridge-utils as well, and now the VM starts.

04 Oct 2019 8:53am GMT

30 Sep 2019


Thomas Raschbacher: "Network is unreachable" error - Gentoo, silly netifrc configuration mistake

So on a new install I was just sitting there and wondering what I did wrong and why I kept getting these errors:

# ping lordvan.com
connect: Network is unreachable

Then I realized something when checking the output of route -n:

 # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp96s0f0
192.168.0.254   0.0.0.0         255.255.255.255 UH    2      0        0 enp96s0f0

It should be:

 # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.254   0.0.0.0         UG    2      0        0 enp96s0f0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp96s0f0

Turns out I had forgotten something quite simple, yet important: adding "default via <router IP>" to /etc/conf.d/net. So after changing it from

routes_enp96s0f0="192.168.0.254"

to

routes_enp96s0f0="default via 192.168.0.254"

and restarting the interface, everything works just fine ;)
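
For reference, restarting the interface with netifrc looks like this (assuming the usual net.<interface> symlink exists; adjust the interface name to match yours):

/etc/init.d/net.enp96s0f0 restart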

Silly mistake, easy fix .. though it can be a pain to figure out what went wrong. Maybe someone else will make the same mistake, find this blog post, and hopefully fix it faster than I did ;)

30 Sep 2019 8:36am GMT

22 Sep 2019


Alexys Jacob: py3status v3.20 – EuroPython 2019 edition

Shame on me for posting this so long after it happened… Still, it's a funny story to tell and there are a lot of thank-yous to give, so let's go!

The py3status EuroPython 2019 sprint

I've attended all EuroPython conferences since 2013. It's a great event and I encourage everyone to get there!

The last two days of the conference week are dedicated to collaboration on open source projects: these are called sprints.

I don't know why, but this year I decided to propose a sprint welcoming anyone willing to work on py3status to come and help…

To be honest, I was expecting that nobody would be interested, so when I sat down at an empty table on Saturday I thought it would remain empty… but hey, I would have worked on py3status anyway, so every option was okay!

Then two students came. They ran Windows and Mac OS, had never heard of i3wm or py3status, but were curious, so I showed them. They could read C, so I asked them whether they could understand how i3status was reading its horrible configuration file… and they did!

Then Oliver Bestwalter (main maintainer of tox) came and told me he was a long-time py3status user… followed by Hubert Bryłkowski and Ólafur Bjarni! Wow…

We joined forces to create a py3status module that allows the use of the great PewPew hardware device created by Radomir Dopieralski (which was given to all attendees) to control i3!

And we did it and had a lot of fun!

Oliver's major contribution

The module itself is awesome, okay… but thanks to Oliver's experience with tox, he proposed and contributed one of the most significant features py3status has ever had: the ability to import modules from other PyPI packages!

The idea is that you have your own module or set of modules. Instead of having to contribute them to py3status, you can just publish them to PyPI and py3status will automatically be able to detect and load them!

The use of entry points allows custom and more distributed module creation for our project!

Read more about this amazing feature in the docs.

All of this happened during EuroPython 2019 and I want to extend once again my gratitude to everyone who participated!

Thank you contributors

Version 3.20 is also the work of cool contributors.
See the changelog.

22 Sep 2019 2:27pm GMT

13 Sep 2019


Nathan Zachary: Vim pulling in xorg dependencies in Gentoo

Today I went to update one of my Gentoo servers, and noticed that it wanted to pull in a bunch of xorg dependencies. This is a simple music server without any type of graphical environment, so I don't really want any xorg libraries or other GUI components installed. Looking through the full output, I couldn't see a direct reason that these components were now requirements.

To troubleshoot, I started adding packages to /etc/portage/package.mask, starting with cairo (which was the package directly requesting the 'X' USE flag to be enabled). That didn't get me very far, as it still just indicated that GTK+ needed to be installed. After following the dependency chain for a bit, I noticed that something was pulling in libcanberra, and found that the default USE flags now include 'sound' and that vim now had it enabled. It looks like this USE flag was added between vim-8.1.1486 and vim-8.1.1846.
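
If you ever need to walk a dependency chain like this, gentoolkit's equery can save some time. A quick sketch, using the packages from my case:

# which installed packages depend on libcanberra?
equery depends media-libs/libcanberra
# show vim's dependency tree, with USE flags, without installing anything
emerge -pv --tree app-editors/vim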

For my needs, the most straightforward solution was to just remove the 'sound' USE flag from vim by adding the following to /etc/portage/package.use:

# grep vim /etc/portage/package.use 
app-editors/vim -sound
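
After adding that entry, vim needs to be rebuilt for the USE change to take effect:

emerge --ask --oneshot app-editors/vim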

13 Sep 2019 5:10pm GMT

06 Sep 2019


Nathan Zachary: Adobe Flash and Firefox 68+ in Gentoo Linux

Though many sites have abandoned Adobe Flash in favour of HTML5 these days, there are still some legacy applications (e.g. older versions of VMWare's vSphere web client) that depend on it. Recent versions of Firefox in Linux (68+) started failing to load Flash content for me, and it took some digging to find out why. First off, I noticed that the content wouldn't load even on Adobe's Flash test page. Second off, I found that the plugin wasn't listed in Firefox's about:plugins page.

So, I realised that the problem was due to the Adobe Flash plugin not integrating properly with Firefox. I use Gentoo Linux, so these instructions may not directly apply to other distributions, but I would imagine that the directory structures are at least similar. To start, I made sure that I had installed the www-plugins/adobe-flash ebuild with the 'nsplugin' USE flag enabled:

$ eix adobe-flash
[I] www-plugins/adobe-flash
     Available versions:  (22) 32.0.0.238^ms
       {+nsplugin +ppapi ABI_MIPS="n32 n64 o32" ABI_RISCV="lp64 lp64d" ABI_S390="32 64" ABI_X86="32 64 x32"}
     Installed versions:  32.0.0.238(22)^ms(03:13:05 22/08/19)(nsplugin -ppapi ABI_MIPS="-n32 -n64 -o32" ABI_RISCV="-lp64 -lp64d" ABI_S390="-32 -64" ABI_X86="64 -32 -x32")
     Homepage:            https://www.adobe.com/products/flashplayer.html https://get.adobe.com/flashplayer/ https://helpx.adobe.com/security/products/flash-player.html
     Description:         Adobe Flash Player

That ebuild installs the libflashplayer.so (shared object) in the /usr/lib64/nsbrowser/plugins/ directory by default.

However, through some digging, I found that Firefox 68+ was looking in another directory for the plugin (in my particular situation, that directory was /usr/lib64/mozilla/plugins/, which didn't even exist on my system). Seeing as the target directory didn't exist, I first had to create it, and then I decided to symlink the shared object there so that future updates to the www-plugins/adobe-flash package would work without any further manual intervention:

mkdir -p /usr/lib64/mozilla/plugins/
cd $_
ln -s /usr/lib64/nsbrowser/plugins/libflashplayer.so .

After restarting Firefox, the Adobe Flash test page started working as did other sites that use Flash. So, though your particular Linux distribution, version of Firefox, and version of Adobe Flash may require the use of different directories than the ones I referenced above, I hope that these instructions can help you troubleshoot the problem with Adobe Flash not showing in the Firefox about:plugins page.

06 Sep 2019 7:17pm GMT

11 Aug 2019


Gentoo News: AArch64 (arm64) profiles are now stable!


The ARM64 project is pleased to announce that all ARM64 profiles are now stable.

While our developers and users have contributed significantly to this accomplishment, we must also thank our sponsor Packet for their contribution. Providing the Gentoo developer community with access to bare metal hardware has accelerated progress in achieving the stabilization of the ARM64 profiles.

About Packet.com

This access has been kindly provided to Gentoo by bare metal cloud provider Packet via their Works on Arm project. Learn more about their commitment to supporting open source here.

About Gentoo

Gentoo Linux is a free, source-based, rolling release meta distribution that features a high degree of flexibility and high performance. It empowers you to make your computer work for you, and offers a variety of choices at all levels of system configuration.

As a community, Gentoo consists of approximately two hundred developers and over fifty thousand users globally.

11 Aug 2019 12:00am GMT

09 Jul 2019


Michał Górny: Verifying Gentoo election results via Votrify

Gentoo elections are conducted using custom software called votify. During the voting period, developers place their votes in their respective home directories on one of the Gentoo servers. Afterwards, the election officials collect the votes, count them, compare their results and finally announce them.

The simplified description above suggests two weak points. Firstly, we rely on the honesty of the election officials. If they chose to conspire, they could fake the result. Secondly, we rely on the honesty of all Infrastructure members, as they could use root access to manipulate the votes (or the collection process).

To protect against possible fraud, we make the elections transparent (but pseudonymous). This means that all votes cast are public, so everyone can count them and verify the result. Furthermore, developers can verify whether their personal vote has been included. Ideally, all developers would do that and therefore confirm that no votes were manipulated.

Currently, we are pretty much implicitly relying on developers doing that, and assuming that no protest implies successful verification. However, this is not really reliable, and given the unfriendly nature of our scripts, I have reason to doubt that the majority of developers actually verify the election results. In this post, I would like to briefly explain how Gentoo elections work and how they could be manipulated, and introduce Votrify - a tool to explicitly verify election results.

Gentoo voting process in detail

Once the nomination period is over, an election official sets the voting process up by creating control files for the voting scripts. Those control files include election name, voting period, ballot (containing all vote choices) and list of eligible voters.

There are no explicit events corresponding to the beginning or the end of the voting period. The votify script used by developers reads the election data on each execution, and uses it to determine whether the voting period is open. During the voting period, it permits the developer to edit the vote, and finally to 'submit' it. Both the draft and the submitted vote are stored as appropriate files in the developer's home directory; 'submitted' votes are not collected automatically. This means that the developer can still manually manipulate the vote once the voting period concludes, and before the votes are manually collected.

Votes are collected explicitly by an election official. When run, the countify script collects all vote files from developers' home directories. A unique 'confirmation ID' is generated for each voting developer. All votes along with their confirmation IDs are placed in a so-called 'master ballot', while the mapping from developer names to confirmation IDs is stored separately. The latter is used to send developers their respective confirmation IDs, and can be discarded afterwards.

Each of the election officials uses the master ballot to count the votes. Afterwards, they compare their results and if they match, they announce the election results. The master ballot is attached to the announcement mail, so that everyone can verify the results.

Possible manipulations

The three methods of manipulating the vote that I can think of are:

  1. Announcing fake results. An election result may be presented that does not match the votes cast. This is actively prevented by having multiple election officials, and by making the votes transparent so that everyone can count them.
  2. Manipulating votes cast by developers. The result could be manipulated by modifying the votes cast by individual developers. This is prevented by including pseudonymous vote attribution in the master ballot. Every developer can therefore check whether his/her vote has been reproduced correctly. However, this presumes that the developer is active.
  3. Adding fake votes to the master ballot. The result could be manipulated by adding votes that were not cast by any of the existing developers. This is a major problem, and such manipulation is entirely plausible if the turnout is low enough, and developers who did not vote fail to check whether they have not been added to the casting voter list.

Furthermore, the efficiency of the last method can be improved if the attacker is able to restrict communication between voters and/or reliably deliver different versions of the master ballot to different voters, i.e. convince the voters that their own vote was included correctly while manipulating the remaining votes to achieve the desired result. The former is rather unlikely but the latter is generally feasible.

Finally, the results could be manipulated by tampering with the voting software. This can be counteracted by verifying the implementation against the algorithm specification or, to some degree, by comparing the results with a third-party tool. Robin H. Johnson and I worked on this in the past (more specifically, on verifying whether the Gentoo implementation of the Schulze method is correct) but neither of us was able to finish the work. If you're interested in the topic, you can look at my election-compare repository. For the purpose of this post, I'm going to consider this possibility out of scope.

Verifying election results using Votrify

Votrify uses a two-stage verification model. It consists of individual verification, which is performed by each voter separately and produces signed confirmations, and community verification, which uses the aforementioned files to provide a final verified election result.

The individual verification part involves:

  1. Verifying that the developer's vote has been recorded correctly. This helps detect whether any votes have been manipulated. The positive result of this verification is implied by the fact that a confirmation is produced. Additionally, developers who did not cast a vote also need to produce confirmations, in order to detect any extraneous votes.
  2. Counting the votes and producing the election result. This produces the election results as seen from the developer's perspective, and therefore prevents manipulation via announcing fake results. Furthermore, comparing the results between different developers helps finding implementation bugs.
  3. Hashing the master ballot. The hash of the master ballot file is included in the confirmation, and comparing it between different results confirms that all voters received the same master ballot.

If the verification is positive, a confirmation is produced and signed using the developer's OpenPGP key. I would like to note that no private data is leaked in the process. It does not even indicate whether the developer in question has actually voted - only that he/she participates in the verification process.
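
Conceptually, the confirmation step maps to standard tooling. This is a rough sketch only - the file names are made up here, and Votrify's actual formats may differ:

# hash the master ballot, then sign the resulting confirmation file
sha512sum master-ballot > confirmation
gpg --armor --detach-sign confirmation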

Afterwards, confirmations from different voters are collected. They are used to perform community verification which involves:

  1. Verifying the OpenPGP signature. This is necessary to confirm the authenticity of the signed confirmation. The check also involves verifying that the key owner was an eligible voter and that each voter produced only one confirmation. Therefore, it prevents attempts to fake the verification results.
  2. Comparing the results and master ballot hashes. This confirms that everyone participating received the same master ballot, and produced the same results.

If the verification for all confirmations is positive, the election results are repeated, along with explicit quantification of how trustworthy they are. The number indicates how many confirmations were used, and therefore how many of the votes (or non-votes) in master ballot were confirmed. The difference between the number of eligible voters and the number of confirmations indicates how many votes may have been altered, planted or deleted. Ideally, if all eligible voters produced signed confirmations, the election would be 100% confirmed.

09 Jul 2019 2:15pm GMT

Thomas Raschbacher: Autoclicker for Linux

So I wanted an autoclicker for Linux - for one of my browser-based games that requires a lot of clicking.

I looked around and tried to find something useful, but all I could find were old pages and outdated download links.

In the end I stumbled upon something simple yet immensely more powerful: xdotool (github) - or check out the xdotool website.

As an extra bonus it is in the Gentoo repository so a simple

emerge xdotool

got it installed. It also has minimal dependencies, which is nice.

The good part, but also a bit of a downside, is that there is no UI (maybe I'll write one as a wrapper when I get a chance).

Anyway, to do what I wanted was simply this:

xdotool click --repeat 1000 --delay 100 1

Pretty self-explanatory, but here's a short explanation anyway: this tells xdotool to click mouse button 1 (the left button) 1000 times, with a 100 millisecond delay between clicks.

The only problem is I need to know how many clicks I need beforehand - which can also be a nice feature of course.

There is one way to stop it if you have the terminal you ran the command from visible (which I always have, and set to always-on-top): press and hold your left mouse button. This stops the click events from registering - since it is mouse-down and waits for mouse-up, I guess, though I'm not sure that is the reason. Then move to the terminal and either close it or abort the command with Ctrl+C - or just wait for the program to exit after finishing the requested number of clicks. On a side note, if you don't like that way of stopping it, you can always switch to a console with Ctrl+Alt+F1 (or whichever terminal you want to use), log in there and kill the xdotool process (either find the PID and kill it, or just use killall xdotool - which will of course kill all instances, but I doubt you'll run more than one at once).
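
For the record, the kill-from-a-console variant boils down to this (assuming you aren't running any other xdotool instance you care about):

killall xdotool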

09 Jul 2019 2:02pm GMT

04 Jul 2019


Michał Górny: SKS poisoning, keys.openpgp.org / Hagrid and other non-solutions

The recent key poisoning attack on SKS keyservers shook the world of OpenPGP. While this isn't a new problem, it had not been exploited on this scale before. The attackers have proven how easy it is to poison commonly used keys on the keyservers and effectively render GnuPG unusably slow. As a result, a renewed discussion on improving keyservers has started. The attack also forced Gentoo to employ countermeasures; you can read more about them in the 'Impact of SKS keyserver poisoning on Gentoo' news item.

Coincidentally, the attack happened shortly after the launch of keys.openpgp.org, which advertises itself as a poisoning-resistant and GDPR-friendly keyserver. Naturally, many users see it as the ultimate solution to the issues with SKS. I'm afraid I have to disagree - in my opinion, this keyserver does not solve any problems; it merely cripples OpenPGP in order to avoid being affected by them, and harms its security in the process.

In this article, I'd like to briefly explain what the problem is, which of the different solutions proposed so far (e.g. on the gnupg-users mailing list) make sense, and which make things even worse. Naturally, I will also cover the new Hagrid keyserver as one of the glorified non-solutions.

The attack - key poisoning

OpenPGP uses a distributed design - once the primary key is created, additional packets can be freely appended to it and recombined on different systems. Those packets include subkeys, user identifiers and signatures. Signatures are used to confirm the authenticity of appended packets. The packets are only meaningful if the client can verify the authenticity of their respective signatures.

The attack is carried out through third-party signatures, which are normally used by different people to confirm the authenticity of a key - that is, to state that the signer has verified the identity of the key owner. It relies on three distinct properties of OpenPGP:

  1. A key can contain an unlimited number of signatures. After all, it is natural for very old keys to have a large number of signatures made by different people.
  2. Anyone can append signatures to any OpenPGP key. This is partially keyserver policy, and partially the fact that SKS keyserver nodes are propagating keys one to another.
  3. There is no way to distinguish legitimate signatures from garbage. To put it another way, it is trivial to make garbage signatures look like the real deal.

The attacker abuses those properties by creating a large number of garbage signatures and sending them to keyservers. When users fetch key updates from the keyserver, GnuPG normally appends all those signatures to the local copy. As a result, the key becomes unusually large and causes severe performance issues with GnuPG, preventing its normal usage. The user ends up having to manually remove the key in order to fix the installation.

The obvious non-solutions and potential solutions

Let's start by analyzing the properties I've listed above. After all, removing at least one of the requirements should prevent the attack from being possible. But can we really do that?

Firstly, we could set a hard limit on the number of signatures or on key size. This would obviously prevent the attacker from breaking user systems via huge keys. However, it would make it entirely possible for the attacker to 'brick' a key by appending garbage up to the limit: it would then no longer be possible to append any valid signatures to the key. Users would suffer less, but the key owner would lose the ability to use the key meaningfully. It's a no-go.

Secondly, we could limit key updates to the owner. However, the keyserver update protocol currently does not provide any standard way of verifying who the uploader is, so this would effectively require incompatible changes, at least to the upload protocol. Furthermore, in order to prevent malicious keyservers from propagating fake signatures we'd also need to carry the verification along when propagating key updates. This effectively means an extension of the key format, and it has been proposed e.g. in the 'Abuse-Resistant OpenPGP Keystores' draft. This is probably a worthwhile option, but it will take time before it's implemented.

Thirdly, we could try to validate signatures. However, any validation can easily be worked around. If we started requiring signing keys to be present on the keyserver, the attackers could simply mass-upload the keys used to create garbage signatures. If we went even further and e.g. started requiring verified e-mail addresses for the signing keys, the attackers could simply mass-create e-mail addresses and verify them. It might work as a temporary solution but it would probably cause more harm than good.

There were other non-solutions suggested - most notably, blacklisting poisoned keys. However, this is even worse. It means that every victim of poisoning attack would be excluded from using the keyserver, and in my opinion it will only provoke the attackers to poison even more keys. It may sound like a good interim solution preventing users from being hit but it is rather short-sighted.

keys.openpgp.org / Hagrid - a big non-solution

A common suggestion for OpenPGP users - one that even the Gentoo news item mentions, for lack of alternatives - is to switch to the keys.openpgp.org keyserver, or to switch keyservers to their Hagrid software. It is not vulnerable to the key poisoning attack because it strips away all third-party signatures. However, this and other limitations make it a rather poor replacement and, in my opinion, potentially harmful to the security of OpenPGP.

Firstly, stripping all third-party signatures is not a solution. It simply avoids the problem by killing a very important part of the OpenPGP protocol - the Web of Trust. Without it, keys obtained from the server cannot be authenticated other than by direct interaction between the individuals. For example, Gentoo Authority Keys can't work there. Most of the time, you won't be able to tell whether a key on the keyserver is legitimate or forged.

The e-mail verification makes it even worse, though not intentionally. While I agree that many users do not understand or use WoT, Hagrid is implicitly going to cause users to start relying on e-mail verification as proof of key authenticity. In other words, people are going to assume that if a key on keys.openpgp.org has verified e-mail address, it has to be legitimate. This makes it trivial for an attacker that manages to gain unauthorized access to the e-mail address or the keyserver to publish a forged key and convince others to use it.

Secondly, Hagrid does not support UID revocations. This is an entirely absurd case where GDPR fear won over security. If your e-mail address becomes compromised, you will not be able to revoke it. Sure, the keyserver admins may eventually stop propagating it along with your key, but all users who fetched the key before will continue seeing it as a valid UID. Of course, if users send encrypted mail the attacker won't be able to read it. However, the users can be trivially persuaded to switch to a new, forged key.

Thirdly, Hagrid rejects all UIDs except verified e-mail-based UIDs. This is something we could live with if key owners actively pursued having their identities verified. However, it also means you can't publish a photo identity or use keybase.io. The 'explicit consent' argument used by upstream is rather silly - apparently every UID requires separate consent, while at the same time you can trivially share somebody else's PII as the real name on a verified e-mail address.

Apparently, upstream is willing to resolve the first two of those issues once satisfactory solutions are established. However, this doesn't mean that it's fine to ignore those problems. Until they are resolved, and necessary OpenPGP client updates are sufficiently widely deployed, I don't believe Hagrid or its instance at keys.openpgp.org are good replacements for SKS and other keyservers.

So what are the solutions?

Sadly, I am not aware of any good global solution at the moment. The best workaround for GnuPG users so far is the new self-sigs-only option, which prevents GnuPG from importing third-party signatures. Of course, it shares the first limitation of the Hagrid keyserver. Future versions of GnuPG will supposedly fall back to this option upon meeting excessively large keys.
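
Enabling it persistently looks something like this (a sketch for GnuPG 2.2.17 or newer; check the documentation for your version):

# ~/.gnupg/gpg.conf
keyserver-options self-sigs-only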

For domain-limited use cases such as Gentoo's, running a local keyserver with restricted upload access is an option. However, it requires users to explicitly specify our keyserver, and they effectively end up having to specify a different keyserver for each domain. Furthermore, WKD can be used to distribute keys. Sadly, at the moment GnuPG uses it only to locate new keys and does not support refreshing keys via WKD (gemato employs a cheap hack to make that happen). In both cases, the attack is prevented by isolating the infrastructure and blocking public upload access.

The long-term solution probably lies in the 'First-party-attested Third-party Certifications' section of the 'Abuse-Resistant OpenPGP Keystores' draft. In this proposal, every third-party signature must be explicitly attested by the key owner. Therefore, only the key owner can append additional signatures to the key, and keyservers can reject any signatures that were not attested. However, this is not currently supported by GnuPG, and once it is, deploying it will most likely take significant time.

04 Jul 2019 11:23am GMT

03 Jul 2019


Marek Szuba: Case label for Pocket Science Lab V5

tl;dr: Here (PDF, 67 kB) is a case label for Pocket Science Lab version 5 that is compatible with the design for a laser-cut case published by FOSSAsia.


In case you haven't heard of it, Pocket Science Lab [1] is a really nifty board developed by the FOSSAsia community which combines a multichannel, megahertz-range oscilloscope, a multimeter, a logic probe, several voltage sources and a current source, several wave generators, and UART and I2C interfaces… all of this in the form factor of an Arduino Mega, i.e. only somewhat larger than a credit card. Hook it up over USB to a PC or an Android device running the official (free and open source, of course) app and you are all set.

Well, not quite set yet. What you get for your 50-ish EUR is just the board itself. You will quite definitely need a set of probe cables (sadly, I have yet to find even an unofficial adaptor that would allow one to equip PSLab with industry-standard oscilloscope probes using BNC connectors), and if you expect to lug yours around anywhere you go, you will also want to invest in a case of some sort. While FOSSAsia does not, to my knowledge, sell PSLab cases, they do provide a design for one [2]. It is meant to be laser-cut, but I have successfully managed to 3D-print it as well, and for the more patient among us it shouldn't be too difficult to hand-cut one with a jigsaw either.

Of course, in addition to making sure your Pocket Science Lab is protected against accidental damage, it would also be nice to have all the connectors clearly labelled. The documentation bundled with the PSLab software does show quite a few "how to connect instrument X" diagrams, but unfortunately said diagrams picture version 4 of the board, while the current major version, V5, features a radically different pinout (compare [3] with [4]/[5] and you will immediately see what I mean) - not to mention that having to stare at a screen while wiring your circuit isn't always optimal. Now, all versions of the board feature a complete set of header labels (along with LEDs showing the device is active) on the front side, and at least the more recent ones additionally show more detailed descriptions on the back, clearly suggesting that the optimal way to go is to make your case out of transparent material. But what if looking at the provided labels directly is not an option, for instance because you have gone eco-friendly and made your case out of wood? You would probably stick a label to the front of the case… which brings us back to the problem of the case label from [5] not being compatible with recent versions of the board.

Which brings me to my take on adapting the design from [5] to match the header layout and labels of PSLab V5.1 as well as the laser-cut case design from [2]. It could probably be more accurate, but having tried it out, it is close enough. The Bluetooth and ICSP-programmer connectors near the centre of the board are not included because the current case design does not provide access to them and indeed, they haven't even got headers soldered in. Licence and copyright: same as the original.


[1] https://pslab.io/
[2] https://github.com/fossasia/pslab-case
[3] https://github.com/fossasia/pslab-hardware/raw/master/docs/images/PSLab_v5_top.png
[4] https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslab_version_previews/PSLab_v4.png
[5] https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslabdesign.png

03 Jul 2019 5:28pm GMT

Gentoo News: Impact of SKS keyserver poisoning on Gentoo

The SKS keyserver network has recently been a victim of a certificate poisoning attack. The OpenPGP verification used for repository syncing is protected against the attack. However, our users can be affected when using GnuPG directly. In this post, we would like to briefly summarize what the attack is, what we did to protect Gentoo against it, and what you can do to protect your system.

The certificate poisoning attack abuses three facts: that OpenPGP keys can contain an unlimited number of signatures, that anyone can append signatures to any key, and that there is no way to distinguish a legitimate signature from garbage. The attackers are appending a large number of garbage signatures to keys stored on SKS keyservers, causing them to become very large and cause severe performance issues in GnuPG clients that fetch them.

The attackers have poisoned the keys of a few high ranking OpenPGP people on the SKS keyservers, including one Gentoo developer. Furthermore, the current expectation is that the problem won't be fixed any time soon, so it seems plausible that more keys may be affected in the future. We recommend users not to fetch or refresh keys from SKS keyserver network (this includes aliases such as keys.gnupg.net) for the time being. GnuPG upstream is already working on client-side countermeasures and they can be expected to enter Gentoo as soon as they are released.

The Gentoo key infrastructure has not been affected by the attack. Shortly after it was reported, we disabled fetching developer key updates from SKS, and today we have disabled public key upload access to prevent the keys stored on the server from being poisoned by a malicious third party.

The gemato tool used to verify the Gentoo ebuild repository uses WKD by default. During normal operation it should not be affected by this vulnerability. Gemato has a keyserver fallback that might be vulnerable if WKD fails; however, gemato operates in an isolated environment that will prevent a poisoned key from causing permanent damage to your system. In the worst case, Gentoo repository syncs will be slow or will hang.

The webrsync and delta-webrsync methods also support gemato, although it is not used by default at the moment. In order to use it, you need to remove PORTAGE_GPG_DIR from /etc/portage/make.conf (if it is present) and put the following values into /etc/portage/repos.conf:

[gentoo]
sync-type = webrsync
sync-webrsync-delta = true  # false to use plain webrsync
sync-webrsync-verify-signature = true

Afterwards, calling emerge --sync or emaint sync --repo gentoo will use gemato key management rather than the vulnerable legacy method. The default is going to be changed in a future release of Portage.

When using GnuPG directly, Gentoo developer and service keys can be securely fetched (and refreshed) via:

  1. Web Key Directory, e.g. gpg --locate-key developer@gentoo.org
  2. Gentoo keyserver, e.g. gpg --keyserver hkps://keys.gentoo.org ...
  3. Key bundles, e.g.: active devs, service keys

Please note that the aforementioned services provide only keys specific to Gentoo. Keys belonging to other people will not be found on our keyserver. If you are looking for them, you may try the keys.openpgp.org keyserver, which is not vulnerable to the attack, at the cost of stripping all signatures and unverified UIDs.

03 Jul 2019 12:00am GMT

29 Apr 2019


Yury German: Gentoo Blogs Update

This is just a notification that the blogs and the appropriate plug-ins for release 5.1.1 have been updated.

With this release, we (the Gentoo Blog Team) have also updated the themes that had updates available. If you have a blog on this site with a theme based on one of the following themes, please consider updating, as these themes are no longer maintained and things will break in your blog.

If you are using one of these themes, it is recommended that you switch to one of the other available themes. If you think there is an open source theme that you would like to have available, please contact the Blogs team by opening a Bugzilla bug with the pertinent information.

29 Apr 2019 3:41am GMT

24 Apr 2019


Matthew Thode: Building Gentoo disk images

Disclaimer

I'm not responsible if you ruin your system; this guide functions as documentation for future me. Remember to back up your data.

Why this is useful / needed

It's useful to have a way of building a disk image for shipping, either for testing or for production usage. The image output format can be qcow2, raw or a compressed tarball; it's up to you to make this what you want it to be.

Pre-work

Install diskimage-builder; for Gentoo you just have to emerge the latest version. I personally keep one around in a virtual environment for testing (this also allows me to easily build musl images).

The actual setup

What diskimage-builder actually does is take elements and run them. Each element consists of a set of phases in which the element takes actions. All you are really doing is defining the elements, and they will insert themselves where needed. It also uses environment variables for tunables, or for other various small tweaks.

This is how I build the images at http://distfiles.gentoo.org/experimental/amd64/openstack/

export GENTOO_PORTAGE_CLEANUP=True
export DIB_INSTALLTYPE_pip_and_virtualenv=package
export DIB_INSTALLTYPE_simple_init=repo
export GENTOO_PYTHON_TARGETS="python3_6"
export GENTOO_PYTHON_ACTIVE_VERSION="python3.6"
export ELEMENTS="gentoo simple-init growroot vm openssh-server block-device-mbr"
export COMMAND="disk-image-create -a amd64 -t qcow2 --image-size 3"
export DATE="$(date -u +%Y%m%d)"

GENTOO_PROFILE=default/linux/amd64/17.0/no-multilib/hardened ${COMMAND} -o "gentoo-openstack-amd64-hardened-nomultilib-${DATE}" ${ELEMENTS}
GENTOO_PROFILE=default/linux/amd64/17.0/no-multilib ${COMMAND} -o "gentoo-openstack-amd64-default-nomultilib-${DATE}" ${ELEMENTS}
GENTOO_PROFILE=default/linux/amd64/17.0/hardened ${COMMAND} -o "gentoo-openstack-amd64-hardened-${DATE}" ${ELEMENTS}
GENTOO_PROFILE=default/linux/amd64/17.0/systemd ${COMMAND} -o "gentoo-openstack-amd64-systemd-${DATE}" ${ELEMENTS}
${COMMAND} -o "gentoo-openstack-amd64-default-${DATE}" ${ELEMENTS}

For musl I've had to do some custom work, as I have to build the stage4s locally, but it's largely the same (with the additional need to define a musl overlay):

cd ~/diskimage-builder
cp ~/10-gentoo-image.musl diskimage_builder/elements/gentoo/root.d/10-gentoo-image
pip install -U .
cd ~/

export GENTOO_PORTAGE_CLEANUP=False
export DIB_INSTALLTYPE_pip_and_virtualenv=package
export DIB_INSTALLTYPE_simple_init=repo
export GENTOO_PYTHON_TARGETS="python3_6"
export GENTOO_PYTHON_ACTIVE_VERSION="python3.6"
DATE="$(date +%Y%m%d)"
export GENTOO_OVERLAYS="musl"
export GENTOO_PROFILE=default/linux/amd64/17.0/musl/hardened

disk-image-create -a amd64 -t qcow2 --image-size 3 -o gentoo-openstack-amd64-hardened-musl-"${DATE}" gentoo simple-init growroot vm

cd ~/diskimage-builder
git checkout diskimage_builder/elements/gentoo/root.d/10-gentoo-image
pip install -U .
cd ~/

Generic images

The elements I use are for an OpenStack image, meaning there is no default user/pass; those are set by cloud-init / glean. For a generic image you will want the following elements:

'gentoo growroot devuser vm'

The following environment variables are needed as well (change them to match your needs):

DIB_DEV_USER_PASSWORD=supersecrete DIB_DEV_USER_USERNAME=secrete DIB_DEV_USER_PWDLESS_SUDO=yes DIB_DEV_USER_AUTHORIZED_KEYS=/foo/bar/.ssh/authorized_keys
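
Putting it together, a generic image build would look something like this (a sketch based on the commands above; adjust names, profile and size to taste):

export DIB_DEV_USER_USERNAME=secrete
export DIB_DEV_USER_PASSWORD=supersecrete
export DIB_DEV_USER_PWDLESS_SUDO=yes
disk-image-create -a amd64 -t qcow2 --image-size 3 -o "gentoo-generic-amd64-$(date -u +%Y%m%d)" gentoo growroot devuser vm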

Fin

All this work was done upstream; if you have a question (or feature request), just ask. I'm on IRC (Freenode) as prometheanfire, or reach me at the same nick at gentoo.org for email.

24 Apr 2019 5:00am GMT

16 Apr 2019


Gentoo News: Nitrokey partners with Gentoo Foundation to equip developers with USB keys


The Gentoo Foundation has partnered with Nitrokey to equip all Gentoo developers with free Nitrokey Pro 2 devices. Gentoo developers will use the Nitrokey devices to store cryptographic keys for signing of git commits and software packages, GnuPG keys, and SSH accounts.

Thanks to the Gentoo Foundation and Nitrokey's discount, each Gentoo developer is eligible to receive one free Nitrokey Pro 2. To receive their Nitrokey, developers will need to register with their @gentoo.org email address at the dedicated order form.

A Nitrokey Pro 2 Guide is available on the Gentoo Wiki with FAQ & instructions for integrating Nitrokeys into developer workflow.

ABOUT NITROKEY PRO 2

Nitrokey Pro 2 has strong reliable hardware encryption, thanks to open source. It can help you to: sign Git commits; encrypt emails and files; secure server access; and protect accounts against identity theft via two-factor authentication (one-time passwords).

ABOUT GENTOO

Gentoo Linux is a free, source-based, rolling release meta distribution that features a high degree of flexibility and high performance. It empowers you to make your computer work for you, and offers a variety of choices at all levels of system configuration.

As a community, Gentoo consists of approximately two hundred developers and over fifty thousand users globally.

The Gentoo Foundation supports the development of Gentoo, protects Gentoo's intellectual property, and oversees adherence to Gentoo's Social Contract.

ABOUT NITROKEY

Nitrokey is a German IT security startup committed to open source hardware and software. Nitrokey develops and produces USB keys for data encryption, email encryption (PGP/GPG, S/MIME), and secure account logins (SSH, two-factor authentication via OTP and FIDO).

Nitrokey is proud to support the Gentoo Foundation in further securing the Gentoo infrastructure and contributing to a secure open source Linux ecosystem.

16 Apr 2019 12:00am GMT

29 Mar 2019


Alexys Jacob: Scylla: four ways to optimize your disk space consumption

We recently had to face free disk space outages on some of our Scylla clusters. We learnt some very interesting things along the way and outlined some possible improvements to the ScyllaDB guys.

100% disk space usage?

First of all, I wanted to give a bit of a heads-up about what happened when some of our Scylla nodes reached (almost) 100% disk space usage.

Basically they:

After restarting your scylla server, the first and obvious thing you can try in order to get out of this situation is to run the nodetool clearsnapshot command, which will remove any data snapshots that could be lying around. That's usually a handy command to reclaim space.

Reminder: depending on your compaction strategy, it is usually not advised to allow your data to grow over 50% of disk space...

But that's only a patch, so let's go down the rabbit hole and look at the optimization options we have.


Optimize your schemas

Schema design and the types you choose for your columns have a huge impact on disk space usage! In our case we had indeed overlooked some of the optimizations that we could have done from the start, and that cost us a lot of wasted disk space. Fortunately it was easy and fast to change.

To illustrate this, I'll take a sample of 100,000 rows of a simple and naive schema associating readings of 50 integers with a user ID:

Note: all those operations were done using Scylla 3.0.3 on Gentoo Linux.

CREATE TABLE IF NOT EXISTS test.not_optimized
(
    uid text,
    readings list<int>,
    PRIMARY KEY(uid)
) WITH compression = {};

Once inserted on disk, this takes about 250MB of disk space:

250M    not_optimized-00cf1500520b11e9ae38000000000004

Now, depending on your use case - if those readings are not meant to be updated, for example - you could use a frozen list instead, which allows a huge storage optimization:

CREATE TABLE IF NOT EXISTS test.mid_optimized
(
    uid text,
    readings frozen<list<int>>,
    PRIMARY KEY(uid)
) WITH compression = {};

With this frozen list we now consume 54MB of disk space for the same data!

54M     mid_optimized-011bae60520b11e9ae38000000000004

There's another optimization that we could do since our user ID are UUIDs. Let's switch to the uuid type instead of text:

CREATE TABLE IF NOT EXISTS test.optimized
(
    uid uuid,
    readings frozen<list<int>>,
    PRIMARY KEY(uid)
) WITH compression = {};

By switching to uuid, we now consume 50MB of disk space: that's an 80% reduction in disk space consumption compared to the naive schema, for the same data!

50M     optimized-01f74150520b11e9ae38000000000004

Enable compression

None of the examples above used compression. If your workload latencies allow it, you should probably enable compression on your sstables.

Let's see its impact on our tables:

ALTER TABLE test.not_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};
ALTER TABLE test.mid_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};
ALTER TABLE test.optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};

Then we run a nodetool compact test to force a (re)compaction of all the sstables and we get:

63M     not_optimized-00cf1500520b11e9ae38000000000004
28M     mid_optimized-011bae60520b11e9ae38000000000004
24M     optimized-01f74150520b11e9ae38000000000004

Compression is a really great gain here, allowing another 50% reduction in disk space usage on our optimized table!

Switch to the new "mc" sstable format

Since the Scylla 3.0 release you can use the latest "mc" sstable storage format on your Scylla clusters. It promises greater efficiency and usually a significantly reduced disk space consumption!

It is not enabled by default; you have to add the enable_sstables_mc_format: true parameter to your scylla.yaml for it to be taken into account.
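
That is, a one-line addition to your scylla.yaml (typically /etc/scylla/scylla.yaml - adjust the path to your installation):

enable_sstables_mc_format: true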

Since it's backward compatible, you have nothing else to do: new compactions will start using the "mc" storage format, and the scylla server will seamlessly read from the old sstables as well.

But in our case of an immediate disk space outage, we switched to the new format one node at a time, dropped the data from the node, and ran a nodetool rebuild to reconstruct the whole node using the new sstable format.

Let's demonstrate its impact on our test tables: we add the option to the scylla.yaml file, restart scylla-server, and run nodetool compact test again:

49M     not_optimized-00cf1500520b11e9ae38000000000004
26M     mid_optimized-011bae60520b11e9ae38000000000004
22M     optimized-01f74150520b11e9ae38000000000004

That's a pretty cool disk space gain, even more so for the non-optimized version of our schema!

So if you're in great need of disk space or it is hard for you to change your schemas, switching to the new "mc" sstable format is a simple and efficient way to free up some space without effort.

Consider using secondary indexes

While denormalization is the norm (yep… legitimate pun) in the NoSQL world, this does not mean we have to duplicate everything all the time. A good example lies in the internals of secondary indexes, if your workload can live with their moderate impact on latency.

Secondary indexes in Scylla are built on top of Materialized Views that basically store an up-to-date pointer from your indexed column to your main table's partition key. That means that secondary index MVs do not duplicate all the columns (and thus the data) of your main table, as you would have to when denormalizing a table to query by another column: this saves disk space!

This of course comes with a latency drawback, because if your workload is interested in columns other than the partition key of the main table, the coordinator node will actually issue two queries to get all your data:

  1. query the secondary index MV to get the pointer to the partition key of the main table
  2. query the main table with the partition key to get the rest of the columns you asked for

This has been an effective trick to avoid duplicating a table and save disk space for some of our workloads!

(not a tip) Move the commitlog to another disk / partition?

This should only be considered as a sort of emergency procedure or for cost efficiency (cheap disk tiering) on non critical clusters.

While this is possible even if the disk is not formatted using XFS, it is not advised to separate the commitlog from the data on modern SSD/NVMe disks, but… you technically can do it (as we did) on non-production clusters.

Switching is simple: you just need to change the commitlog_directory parameter in your scylla.yaml file.
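
For example (the path here is just an illustration - point it at whatever disk or partition you dedicated to the commitlog):

commitlog_directory: /mnt/cheap-disk/scylla-commitlog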

29 Mar 2019 11:47am GMT