22 Feb 2019

Thomas Raschbacher: Postgresql major version upgrade (gentoo)

Just did an upgrade from PostgreSQL 10.x to 11.x on a test machine.

The guide on the Gentoo Wiki is pretty good, but a few things I forgot at first:

First off, when initializing the new cluster with "emerge --config =dev-db/postgresql-11.1", make sure the DB init options are the same as for the old cluster. They are stored in /etc/conf.d/postgresql-XX.Y, so just check that PG_INITDB_OPTS, collation, etc. match - if not, delete the new cluster and re-run emerge --config ;)

The second thing was pg_hba.conf: make sure to re-add any extra user/db/connection permissions (in my case I ran diff and then just copied the old config file over, as the only difference was the extra permissions I had added).

The third thing was postgresql.conf: here I forgot to make sure listen_addresses and port are the same as in the old config (I did not copy this one over, as there are a lot more differences here). And of course check the rest of the config file too (diff is your friend ;) ).

Other than that, pg_upgrade worked really well for me and the database is now up and running again.
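
For reference, here is a minimal sketch of that sequence with purely illustrative paths - the slot directories depend on your installed versions, both clusters must be stopped, and pg_upgrade accepts a --check flag for a dry run first:

$ emerge --config =dev-db/postgresql-11.1
$ su - postgres
$ pg_upgrade \
      --old-bindir=/usr/lib64/postgresql-10/bin \
      --new-bindir=/usr/lib64/postgresql-11.1/bin \
      --old-datadir=/var/lib/postgresql/10/data \
      --new-datadir=/var/lib/postgresql/11.1/data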

22 Feb 2019 10:37am GMT

20 Feb 2019

Michał Górny: gen-revoke: extending revocation certificates to subkeys

Traditionally, OpenPGP revocation certificates are used as a last resort. You are expected to generate one for your primary key and keep it in a secure location. If you ever lose the secret portion of the key and are unable to revoke it any other way, you import the revocation certificate and submit the updated key to keyservers. However, there is another interesting use for revocation certificates - revoking shared organization keys.

Let's take Gentoo, for example. We are using a few keys to perform automated signatures on servers. For this reason, those keys are especially exposed to attacks and we want to be able to revoke them quickly if the need arises. Now, we really do not want every single Infra member to hold a copy of the secret primary key. However, we can give Infra members revocation certificates instead. This way, they retain the possibility of revoking the key without unnecessarily increasing its exposure.

The problem with traditional revocation certificates is that they support revoking the primary key only. In our security model, the primary key is well protected, compared to subkeys that are totally exposed. Therefore, it is overkill to revoke the complete key when only a subkey is compromised. To overcome this limitation, the gen-revoke tool was created; it can create exported revocation signatures for both the primary key and subkeys.

Technical background

The OpenPGP key (v4, as defined by RFC 4880) consists of a primary key, one or more UIDs and zero or more subkeys. Each of those keys and UIDs can include zero or more signature packets. Those packets bind information to the specific key or UID, and their authenticity is confirmed by a signature made using the secret portion of a primary key.

Signatures made by the key's owner are called self-signatures. In their most basic form, they serve as a binding between the primary key and its subkeys and UIDs. Since both those classes of objects are created independently of the primary key, self-signatures are necessary to distinguish authentic subkeys and UIDs created by the key owner from potential fakes. Accordingly, GnuPG will only accept subkeys and UIDs that have a valid self-signature.

One specific type of signature is the revocation signature. Such a signature indicates that the relevant key, subkey or UID has been revoked. If a revocation signature is found, it takes precedence over any other kind of signature and prevents the revoked object from being used further.

Key updates are the means of distributing new data associated with the key. What's important is that during an update the key is not replaced by a new one. Instead, GnuPG collects all the new data (subkeys, UIDs, signatures) and adds it to the local copy of the key. The validity of this data is verified against the appropriate signatures. Accordingly, anyone can submit a key update to the keyserver, provided that the new data includes valid signatures. Similarly to a local GnuPG instance, the keyserver is going to update its copy of the key rather than replace it.

Revocation certificates specifically make use of this property. Technically, a revocation certificate is simply an exported form of a revocation signature, signed using the owner's primary key. As long as it's not on the key (i.e. GnuPG does not see it), it does not do anything. When it's imported, GnuPG adds it to the key. Further submissions and exports include it, effectively distributing it to all copies of the key.

gen-revoke builds on this idea. It creates and exports revocation signatures for the primary key and subkeys. Due to implementation limitations (and for better compatibility), rather than exporting the signature alone it exports a minimal copy of the relevant key. This copy can be imported just like any other key export, and it causes the revocation signature to be added to the key. Afterwards, it can be exported and distributed just like a revocation done directly on the key.

Usage

To use the script, you need to have the secret portion of the primary key available, and public encryption keys for all the people who are supposed to obtain a copy of the revocation signatures (recipients).

The script takes at least two parameters: an identifier of the key for which revocation signatures should be created, followed by one or more e-mail addresses of signature recipients. It creates revocation signatures both for the primary key and for all valid subkeys, for all the people specified.
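
For instance, a purely hypothetical invocation (the key ID and addresses are made up) could look like this:

$ gen-revoke 0x1234567890ABCDEF alice@example.com bob@example.com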

The signatures are written into the current directory as key exports and are encrypted to each specified person. They should be distributed afterwards, and kept securely by all the individuals. If a need to revoke either a subkey or the primary key arises, the first person available can decrypt the signature, import it and send the resulting key to keyservers.
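
As a sketch - the file name and key ID below are made up - that last step could look like this:

$ gpg --decrypt revocation.alice@example.com.gpg | gpg --import
$ gpg --send-keys 0x1234567890ABCDEF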

Additionally, each signature includes a comment specifying the person it was created for. This comment will afterwards be displayed by GnuPG if one of the revocation signatures is imported. This provides a clear audit trace as to who revoked the key.

Security considerations

Each of the revocation signatures can be used by an attacker to disable the key in question. The signatures are protected through encryption; therefore, the system is vulnerable if the private key of any single signature holder is compromised.

However, this is considerably safer than the equivalent option of distributing the secret portion of the primary key. In the latter case, the attacker would be able to completely compromise the key and use it for malicious purposes; in the former, he is only capable of revoking the key, and therefore of causing some frustration. Furthermore, the revocation comment helps identify the compromised user.

The tradeoff between reliability and security can be adjusted by changing the number of revocation signature holders.

20 Feb 2019 1:18pm GMT

31 Jan 2019

Michał Górny: Evolution: UID trust extrapolation attack on OpenPGP signatures

This article describes a UI deficiency of the Evolution mail client that extrapolates the trust of one of an OpenPGP key's UIDs onto the key itself, and reports it along with the (potentially untrusted) primary UID. This creates the possibility of tricking the user into trusting a phished mail by adding a forged UID to a key that has a previously trusted UID.

Continue reading

31 Jan 2019 6:00am GMT

29 Jan 2019

Michał Górny: Identity with OpenPGP trust model

Let's say you want to send a confidential message to me, and possibly receive a reply. Through employing asymmetric encryption, you can prevent a third party from reading its contents, even if it can intercept the ciphertext. Through signatures, you can verify the authenticity of the message, and therefore detect any possible tampering. But for all this to work, you need to be able to verify the authenticity of the public keys first. In other words, we need to be able to prevent the aforementioned third party - possibly capable of intercepting your communications and publishing a forged key with my credentials on it - from tricking you into using the wrong key.

This renders key authenticity the fundamental problem of asymmetric cryptography. But before we start discussing how key certification is implemented, we need to cover another fundamental issue - identity. After all, who am I - who is the person you are writing to? Are you writing to a person you've met? Or to a specific Gentoo developer? Author of some project? Before you can distinguish my authentic key from a forged key, you need to be able to clearly distinguish me from an impostor.

Forms of identity

Identity via e-mail address

If your primary goal is to communicate with the owner of the particular e-mail address, it seems obvious to associate the identity with the owner of the e-mail address. However, how in reality would you distinguish a 'rightful owner' of the e-mail address from a cracker who managed to obtain access to it, or to intercept your network communications and inject forged mails?

The truth is, the best you can certify is that the owner of a particular key is able to read and/or send mails from a particular e-mail address, at a particular point in time. Then, if you can certify the same for a long enough period of time, you may reasonably assume the address is continuously used by the same identity (which may qualify as a legitimate owner or a cracker with a lot of patience).

Of course, all this relies on your trust in mail infrastructure not being compromised.

Identity via personal data

A stronger protection against crackers may be provided by associating the identity with personal data, as confirmed by government-issued documents. In the case of OpenPGP, this is just the real name; X.509 certificates also provide fields for street address, phone number, etc.

The use of real names seems to be based on two assumptions: that your real name is reasonably well-known (i.e. it can be established with little risk of being replaced by a third party), and that the attacker does not wish to disclose his own name. Besides that, using real names meets with some additional criticism.

Firstly, requiring one to use his real name may be considered an invasion of privacy. Most notably, some people wish not to disclose or use their real names, and this effectively prevents them from ever being certified.

Secondly, real names are not unique. After all, the naming systems developed from the necessity of distinguishing individuals in comparatively small groups, and they simply don't scale to the size of the Internet. Therefore, name collisions are entirely possible and we are relying on sheer luck that the attacker wouldn't happen to have the same name as you do.

Thirdly and most importantly, verifying identity documents is non-trivial and untrained individuals are likely to fall victim to mediocre-quality fakes. After all, we're talking about people who have hopefully read some article on verifying a particular kind of document but have no experience recognizing forgery, no specialized hardware (I suppose most of you don't carry a magnifying glass and a UV light with you) and who may lack skills in comparing signatures or photographs (not to mention some people have really old photographs in their documents). Some countries don't even issue any official documentation on document verification in English!

Finally, even besides the point of forged documents, this relies on trust in administration.

Identity via photographs

This one I'm mentioning merely for completeness. OpenPGP keys allow adding a photo as one of your UIDs. However, this is rather rarely used (of the keys my GnuPG has fetched so far, less than 10% have photographs). The concerns are similar to those for personal data: it assumes that others reliably know what you look like, and that they are capable of reliably comparing faces.

Online identity

An interesting concept is to use your public online activity to prove your identity - such as websites or social media. This is generally based on cross-referencing multiple resources with cryptographically proven publishing access, and assuming that an attacker would not be able to compromise all of them simultaneously.

A form of this concept is utilized by keybase.io. This service builds trust in user profiles via cryptographically cross-linking your profiles on some external sites and/or your websites. Furthermore, it actively encourages other users to verify those external proofs as well.

This identity model relies entirely on trust in the network infrastructure and the external sites. The likelihood of it being compromised is reduced by (potentially) relying on multiple independent sites.

Web of Trust model

Most of the time, you won't be able to directly verify the identity of everyone you'd like to communicate with. This creates the necessity of obtaining indirect proof of authenticity, and the model normally used for that purpose in OpenPGP is the Web of Trust. I won't be getting into the fine details - you can find them e.g. in the GNU Privacy Handbook. For our purposes, it suffices to say that in the WoT the authenticity of keys you haven't verified may be assessed by people whose keys you trust already, or people they know, with a limited level of recursion.

The more key holders you can trust, the more keys you can have verified indirectly and the more likely it is that your future recipient will be in that group. Or that you will be able to get someone from across the world into your WoT by meeting someone residing much closer to yourself. Therefore, you'd naturally want the WoT to grow fast and include more individuals. You'd want to preach OpenPGP to non-crypto-aware people. However, this comes with an inherent danger: can you really trust that they will properly verify the identity of the keys they sign?

I believe this is the most fundamental issue with the WoT model: for it to work outside of small specialized circles, it has to include more and more individuals across the world. But this growth inevitably makes it easier for a malicious third party to find people who can be tricked into certifying keys with forged identities.

Conclusion

The fundamental problem in OpenPGP usage is finding the correct key and verifying its authenticity. This becomes especially complex given that there is no single clear way of determining one's identity on the Internet. Normally, OpenPGP uses a combination of real name and e-mail address, optionally combined with a photograph. However, all of them have their weaknesses.

Direct identity verification for all recipients is impractical, and therefore indirect certification solutions are required. While the WoT model used by OpenPGP attempts to avoid the centralized trust specific to PKI, it is not clear whether it's practically manageable. On one hand, it requires trusting more people in order to improve coverage; on the other, that makes it more vulnerable to fraud.

Given all the above, the trust-via-online-presence concept may be of some interest. Most importantly, it establishes a closer relationship between the identity you actually need and the identity you verify - e.g. you want to mail the person who is an open source developer and author of some specific projects, rather than an arbitrary person with a common enough name. However, this concept is not broadly established yet.

29 Jan 2019 1:50pm GMT

26 Jan 2019

Michał Górny: Attack on git signature verification via crafting multiple signatures

This article briefly explains a historical git weakness in handling commits with multiple OpenPGP signatures in git older than v2.20. The method of creating such commits is presented, and the results of using them are described and analyzed.

Continue reading

26 Jan 2019 10:24am GMT

20 Jan 2019

Alexys Jacob: py3status v3.16

Two py3status versions in less than a month? That's the holidays effect but not only!

Our community has been busy discussing our way forward to 4.0 (see below) and our organization, so it was time I wrote a bit about that.

Community

A new collaborator

First of all we have the great pleasure and honor to welcome Maxim Baz @maximbaz as a new collaborator on the project!

His engagement, numerous contributions and insightful reviews of py3status have made him a well-known community member, not to mention his IRC support 🙂

Once again, thank you for being there Maxim!

Zen of py3status

As a result of an interesting discussion, we worked on defining better how to contribute to py3status as well as a set of guidelines we agree on to get the project moving on smoothly.

Thus was born the zen of py3status, which extends the philosophy from the user's point of view to the contributor's point of view!

This allowed us to handle the numerous open pull requests and get their number down to 5 at the time of writing this post!

Even our dear @lasers doesn't have any open PRs anymore 🙂

3.15 + 3.16 versions

Our magic @lasers has worked a lot on general module options as well as on support for i3-gaps features such as border coloring and fine tuning.

Also interesting is the work of Thiago Kenji Okada @m45t3r on NixOS packaging of py3status. Thanks a lot for this work and for sharing, Thiago!

I also liked the question from Andreas Lundblad @aioobe, asking whether we could have a feature to display custom graphical output, such as a small PNG, upon clicking on the i3bar; you might be interested in following the i3 issue he opened.

Make sure to read the amazing changelog for details, a lot of modules have been enhanced!

Highlights

New modules

A word on 4.0

Do you wonder what's gonna be in the 4.0 release?
Do you have ideas that you'd like to share?
Do you have dreams that you'd love to become true?

Then make sure to read and participate in the open RFC on the 4.0 version!

Development has not started yet; we really want to hear from you.

Thank you contributors!

There would be no py3status release without our amazing contributors, so thank you guys!

20 Jan 2019 9:10pm GMT

09 Jan 2019

Gentoo News: FOSDEM 2019

It's FOSDEM time again! Join us at Université libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year, FOSDEM 2019 will be held on February 2nd and 3rd.

Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building K. Visit this year's wiki page to see who's coming. So far eight developers have confirmed their attendance, with most likely many more on the way!

09 Jan 2019 12:00am GMT

06 Dec 2018

Alexys Jacob: Scylla Summit 2018 write-up

It's been almost one month since I had the chance to attend and speak at Scylla Summit 2018 so I'm relieved to finally publish a short write-up on the key things I wanted to share about this wonderful event!

Make Scylla boring

This statement by Glauber Costa sums up what looked to me like the main driver of the engineering efforts put into Scylla lately: making it work so consistently well on any kind of workload that it's boring to operate 🙂

I will follow up on this statement to highlight the things I heard and (hopefully) understood during the summit. I hope you'll find it insightful.

Reduced operational efforts

The thread-per-core and queues design still has a lot of untapped potential.

The recent addition of RPC streaming capabilities to seastar allows a drastic reduction in the time it takes the cluster to grow or shrink (data rebalancing / resynchronization).

Incremental compaction is also very promising, as compaction is one of the most expensive background processes in the database's design.

I was happy to hear that scylla-manager will soon be made available and free to use with basic features, while more advanced ones (like backup/restore) are retained for the enterprise version.
I also noticed that the current version did not support storing its configuration on SSL-enabled clusters. So I directly asked Michał for it, and I'm glad that it will be released in version 1.3.1.

Performant multi-tenancy

Why choose between real-time OLTP & analytics OLAP workloads?

The goal here is to be able to run both on the same cluster by giving users the ability to assign "SLA" shares to ROLES. That's basically like pools in Hadoop, but at a much finer grain, since it will create dedicated queues that are weighted by their share.

Having one queue per usage, with full accounting, will make it possible to limit resources efficiently and to give users a say on their latency SLAs.

But Scylla also has a lot to do in the background to run smoothly. So while this design pattern was already applied to tame compactions, a lot of work has also been done on automatic flow control and back pressure.

For instance, Materialized Views are updated asynchronously, which means that while we can interact with and put a lot of pressure on the table they are based on (called the main table), we could overwhelm the background work needed to keep the MVs' view tables in sync. To mitigate this, a smart back-pressure approach was developed that will throttle the clients to make sure that Scylla can manage to do everything at the best performance the hardware allows!

I was happy to hear that work on tiered storage is also planned to better optimize disk space costs for certain workloads.

Last but not least, columnar storage optimized for time series and analytics workloads is also something the developers are looking at.

Latency is expensive

If you care about latency, you might be happy to hear that a new polling API (named IOCB_CMD_POLL) has been contributed by Christoph Hellwig and Avi Kivity to the 4.19 Linux kernel; it avoids context switches on I/O by using a shared ring between kernel and userspace. Scylla will use it by default if the kernel supports it.

The iotune utility has been upgraded since 2.3 to generate an enhanced I/O configuration.

Also, persistent (disk-backed) in-memory tables are getting ready and are very promising for latency-sensitive workloads!

A word on drivers

ScyllaDB has been relying on the Datastax drivers since the start. While that's a good thing for the whole community, it's important to note that the shard-per-CPU approach to data that Scylla uses is neither known to nor leveraged by the current drivers.

Discussions took place, and it seems that Datastax will not allow the protocol to evolve so that drivers could discover whether the connected cluster is shard-aware and then use this information to choose the write/read path more cleverly.

So for now, ScyllaDB has forked and is developing its own shard-aware drivers for Java and Go (no Python yet… I was disappointed).

Kubernetes & containers

The ScyllaDB guys of course couldn't avoid the Kubernetes frenzy, so Moreno Garcia gave a lot of feedback and tips on how to operate Scylla on Docker with minimal performance degradation.

Kubernetes has been designed for stateless applications, not stateful ones, and Docker does some automatic magic that has rather big performance hits on Scylla. You will basically have to play with affinities to dedicate one Scylla instance to run on one server, with a "retain" reclaim policy.

Remember that the official Scylla Docker image runs with dev-mode enabled by default, which turns off all performance checks on start. So start by disabling that, and look at all the tips and literature that Moreno has put online!

Scylla 3.0

A lot has been written on it already, so I will just be short on the things that are important to understand from my point of view.

Random notes

Support for LWT (lightweight transactions) will rely on a future implementation of the Raft consensus algorithm inside Scylla. This work will also benefit Materialized Views consistency. Duarte Nunes will be the one working on this, and I envy him very much!

Support for search workloads is high in the ScyllaDB devs priorities so we should definitely hear about it in the coming months.

Support for "mc" sstables (new generation format) is done and will reduce storage requirements thanks to metadata / data compression. Migration will be transparent because Scylla can read previous formats as well so it will upgrade your sstables as it compacts them.

ScyllaDB developers have not settled on how best to implement CDC. I hope they do so rather soon, because it is crucial to their ability to integrate well with Kafka!

Materialized Views, Secondary Indexes and filtering will benefit from the work on partition key and index intersections to avoid server-side filtering on the coordinator. That's an important optimization to come!

Last but not least, I had the pleasure of talking with Takuya Asada, who is the packager of Scylla for RedHat/CentOS & Debian/Ubuntu. We discussed Gentoo Linux packaging requirements as well as the recent and promising work on a relocatable package. We will collaborate more closely in the future!

06 Dec 2018 10:53pm GMT

25 Nov 2018

Michał Górny: Portability of tar features

The tar format is one of the oldest archive formats in use. It comes as no surprise that it is ugly - built as layers of hacks on the older format versions to overcome their limitations. However, given the POSIX standardization in the late 80s and the popularity of GNU tar, you would expect the interoperability problems to be mostly resolved nowadays.

This article is directly inspired by my proof-of-concept work on a new binary package format for Gentoo. My original proposal used the volume label to provide a user- and file(1)-friendly way of distinguishing our binary packages. While it is a GNU tar extension, it falls within the implementation-defined portion of the POSIX ustar format, and you would expect non-compliant implementations to extract it as a regular file. What I did not anticipate is that some implementations reject the whole archive instead.
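
For illustration, this is how a volume label is set with GNU tar (the label text is made up for this example):

$ tar -cf example-1.0.tar --label='gpkg: example-1.0' image/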

This naturally raised more questions on how portable various tar formats actually are. To verify that, I have decided to analyze the standards for possible incompatibility dangers and build a suite of test inputs that could be used to check how various implementations cope with that. This article describes those points and provides test results for a number of implementations.

Please note that this article is focused merely on read-wise format compatibility. In other words, it establishes how tar files should be written in order to achieve the best probability that they will be read correctly afterwards. It does not investigate what formats the listed tools can write and whether they can correctly create archives using specific features.

Continue reading

25 Nov 2018 2:26pm GMT

10 Nov 2018

Alexys Jacob: py3status v3.14

I'm happy to announce this release as it contains some very interesting developments in the project. This release was focused on core changes.

IMPORTANT notice

There are now two optional dependencies to py3status:

To install them all using pip, simply do:

pip install py3status[all]

Modules can now react/refresh on udev events

When pyudev is available, py3status will allow modules to subscribe and react to udev events!

The xrandr module uses this feature by default, which allows the module to refresh instantly when you plug a secondary monitor in or out. This also makes it possible to stop running the xrandr command in the background, which saves a lot of CPU!
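
This is not py3status's internal API - just the underlying udev machinery - but if you are curious about the events involved, you can watch them while plugging a monitor in and out:

$ udevadm monitor --udev --subsystem-match=drm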

Highlights

Thank you contributors!

10 Nov 2018 9:08pm GMT

04 Oct 2018

Gentoo News: CLIP OS - a hardened, multi-level OS based on Gentoo Hardened

ANSSI, the National Cybersecurity Agency of France, has released the sources of CLIP OS, a project that aims to build a hardened, multi-level operating system based on the Linux kernel and a lot of free and open source software. We are happy to hear that it is based on Gentoo Hardened!

04 Oct 2018 12:00am GMT

28 Sep 2018

Alexys Jacob: py3status v3.13

I am once again lagging behind on the release blog posts, but this one is an important one.

I'm proud to announce that our long time contributor @lasers has become an official collaborator of the py3status project!

Dear @lasers, your amazing energy and overwhelming ideas have served our little community for a while. I'm sure we'll have a great way forward as we learn to work together with @tobes 🙂 Thank you again very much for everything you do!

This release is as much dedicated to you as it is yours 🙂

IMPORTANT notice

After this release, py3status coding style CI will enforce the 'black' formatter style.
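
If you contribute, you can apply the same style locally before pushing; a quick sketch, assuming you run it from a py3status checkout:

$ pip install black
$ black py3status/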

Highlights

Needless to say, the changelog is huge; as usual, here is a very condensed view:

Thank you contributors!

28 Sep 2018 11:56am GMT

27 Sep 2018

Michał Górny: New copyright policy explained

At its 2018-09-15 meeting, the Trustees gave the final stamp of approval to the new Gentoo copyright policy outlined in GLEP 76. This policy is the result of work that has been slowly progressing since 2005, and that gained considerable speed by the end of 2017. It is a major step forward from the status quo that has been used since the forming of the Gentoo Foundation, and that was mostly inherited from the earlier Gentoo Technologies.

The policy aims to cover all copyright-related aspects, bringing Gentoo in line with the practices used in many other large open source projects. Most notably, it introduces a concept of Gentoo Certificate of Origin that requires all contributors to confirm that they are entitled to submit their contributions to Gentoo, and corrects the copyright attribution policy to be viable under more jurisdictions.

This article aims to briefly reiterate the most important points in the new copyright policy, and to provide a detailed guide on following it, in Q&A form.

Continue reading

27 Sep 2018 6:47am GMT

15 Sep 2018

Michał Górny: Overriding misreported screen dimensions with KMS-backed drivers

With Qt5 gaining support for high-DPI displays, and applications starting to exercise that support, it's easy for applications to suddenly become unusable with some screens. For example, my old Samsung TV reported itself as a 7″ screen. While this did not really matter back when websites effectively forced a resolution of 96 DPI, high-DPI applications started scaling themselves to occupy most of my screen, with elements becoming really huge (and ugly, apparently due to some poor scaling).

It turns out that it is really hard to find a solution for this. Most of the guides and tips are focused either on proprietary drivers or on getting custom resolutions. The DisplaySize specification in xorg.conf apparently did not change anything either. Finally, I was able to resolve the issue by overriding the EDID data for my screen. This guide explains how I did it.

Step 1: dump EDID data

Firstly, you need to get the EDID data from your monitor. Supposedly the read-edid tool could be used for this purpose, but it did not work for me. With only a little bit more effort, you can get it e.g. from xrandr:

$ xrandr --verbose
[...]
HDMI-0 connected primary 1920x1080+0+0 (0x57) normal (normal left inverted right x axis y axis) 708mm x 398mm
[...]
  EDID:
    00ffffffffffff004c2dfb0400000000
    2f120103804728780aee91a3544c9926
    0f5054bdef80714f8100814081809500
    950fb300a940023a801871382d40582c
    4500c48e2100001e662150b051001b30
    40703600c48e2100001e000000fd0018
    4b1a5117000a2020202020200000000a
    0053414d53554e470a20202020200143
    020323f14b901f041305140312202122
    2309070783010000e2000f67030c0010
    00b82d011d007251d01e206e285500c4
    8e2100001e011d00bc52d01e20b82855
    40c48e2100001e011d8018711c162058
    2c2500c48e2100009e011d80d0721c16
    20102c2580c48e2100009e0000000000
    00000000000000000000000000000029
[...]

If you have multiple displays connected, make sure to use the EDID for the one you're overriding. Copy the hexdump and convert it to a binary blob. You can do this by passing it through xxd -p -r (a tool installed with vim).
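
For example, assuming you saved the hex lines to a file named edid.hex:

$ xxd -p -r edid.hex edid.bin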

Step 2: fix screen dimensions

Once you have the EDID blob ready, you need to update the screen dimensions inside it. Initially, I did this using a hex editor, which involved finding all the occurrences, updating them (and manually encoding the weird split integers) and correcting the checksums. Then I wrote edid-fixdim so you wouldn't have to repeat that experience.

First, use the --get option to verify that your EDID is supported correctly:

$ edid-fixdim -g edid.bin
EDID structure: 71 cm x 40 cm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
CEA EDID found
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm

So your EDID consists of the basic EDID structure, followed by one extension block. The screen dimensions are stored in 7 different blocks you'd have to update, and referenced in two checksums. The tool will take care of updating them all for you, so just pass the correct dimensions to --set:

$ edid-fixdim -s 1600x900 edid.bin
EDID structure updated to 160 cm x 90 cm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
CEA EDID found
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm

Afterwards, you can use --get again to verify that the changes were made correctly.

Step 3: overriding EDID data

Now it's just a matter of putting the override in motion. First, make sure to enable CONFIG_DRM_LOAD_EDID_FIRMWARE in your kernel:

Device Drivers  --->
  Graphics support  --->
    Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)  --->
      [*] Allow to specify an EDID data set instead of probing for it

Then, determine the correct connector name. You can find it in dmesg output:

$ dmesg | grep -C 1 Connector
[   15.192088] [drm] ib test on ring 5 succeeded
[   15.193461] [drm] Radeon Display Connectors
[   15.193524] [drm] Connector 0:
[   15.193580] [drm]   HDMI-A-1
--
[   15.193800] [drm]     DFP1: INTERNAL_UNIPHY1
[   15.193857] [drm] Connector 1:
[   15.193911] [drm]   DVI-I-1
--
[   15.194210] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
[   15.194267] [drm] Connector 2:
[   15.194322] [drm]   VGA-1

Copy the new EDID blob into a location of your choice inside /lib/firmware:

$ mkdir /lib/firmware/edid
$ cp edid.bin /lib/firmware/edid/samsung.bin

Finally, add the override to your kernel command-line:

drm.edid_firmware=HDMI-A-1:edid/samsung.bin

If everything went fine, xrandr should report the correct screen dimensions after the next reboot, and dmesg should report that the EDID override has been loaded:

$ dmesg | grep EDID
[   15.549063] [drm] Got external EDID base block and 1 extension from "edid/samsung.bin" for connector "HDMI-A-1"

If it didn't, check dmesg for error messages.

15 Sep 2018 9:00am GMT

07 Sep 2018

Gentoo News: Gentoo congratulates our GSoC participants

Gentoo would like to congratulate Gibix and JSteward for finishing and passing Google's Summer of Code for the 2018 calendar year. Gibix contributed by enhancing Rust (programming language) support within Gentoo. JSteward contributed by making a full Gentoo GNU/Linux distribution, managed by Portage, run on devices which use the original Android-customized kernel.

The final reports of their projects can be reviewed on their personal blogs:

07 Sep 2018 12:00am GMT

24 Aug 2018

Michał Górny: Securing google-authenticator-libpam against reading secrets

I have recently worked on enabling 2-step authentication via SSH on the Gentoo developer machine. I selected google-authenticator-libpam amongst the different available implementations, as it seemed the best maintained and had all the necessary features, including a friendly tool for users to configure it. However, its design has a weakness: it stores the secret unprotected in the user's home directory.

This means that if an attacker manages to gain even temporary access to the filesystem with the user's privileges - through a malicious process, a vulnerability or simply because someone left the computer unattended for a minute - he can trivially read the secret and therefore clone the token source without leaving a trace. This would completely defeat the purpose of the second step, and the user may not even notice until the attacker makes real use of the stolen secret.

In order to protect against this, I've created google-authenticator-wrappers (as upstream decided to ignore the problem). This package provides a rather trivial setuid wrapper that manages a write-only, authentication-protected secret store for the PAM module. Additionally, it comes with a test program (so you can test the OTP setup without jumping through hoops or risking losing access) and friendly wrappers for the default setup, as used on Gentoo Infra.

The recommended setup (as utilized by the sys-auth/google-authenticator-wrappers package) is to use a dedicated user for the password store. In this scenario, the users are unable to read their secrets, and all secret operations (including authentication via the PAM module) are done using an unprivileged user. Furthermore, any operation regarding the configuration (either updating it or removing the second step) requires regular PAM authentication (e.g. typing your own password).
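
As an illustration only - the paths and user name below are hypothetical, and the actual Gentoo Infra setup may differ - the upstream PAM module can be pointed at such an external secret store along these lines:

# excerpt from a PAM service file, e.g. /etc/pam.d/sshd
auth required pam_google_authenticator.so user=otp secret=/var/lib/otp/${USER}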

This is consistent with e.g. how shadow operates (users can't read their password hashes, nor update them without authenticating first) and with how most sites using 2-factor authentication operate (again, users can't read their secrets), and it follows the RFC 6238 recommendation (that keys […] SHOULD be protected against unauthorized access and usage). It solves the aforementioned issue by preventing user-privileged processes from reading the secrets and recovery codes. Furthermore, it prevents an attacker with this particular level of access from disabling 2-step authentication, changing the secret or even weakening the configuration.

24 Aug 2018 6:44am GMT