11 Sep 2024
Planet Gentoo
Much improved MIPS and Alpha support in Gentoo Linux
Over the last few years, MIPS and Alpha support in Gentoo had been slowing down, mostly due to a lack of volunteers keeping these architectures alive. Not anymore, however! We're happy to announce that, thanks to renewed volunteer interest, both arches have returned to the forefront of Gentoo Linux development, with a consistent dependency tree checked and enforced by our continuous integration system. Up-to-date stage builds and the accompanying binary packages are available for both; in the case of MIPS for all three ABI variants (o32, n32, and n64) and for both big and little endian, and in the case of Alpha also with a bootable installation CD.
11 Sep 2024 5:00am GMT
31 Aug 2024
Planet Gentoo
KDE Plasma 6 upgrade for stable Gentoo Linux
Exciting news for stable Gentoo users: It's time for the upgrade to the new "megaversion" of the KDE community desktop environment, KDE Plasma 6! Together with KDE Gear 24.05.2, where now most of the applications have been ported, and KDE Frameworks 6.5.0, the underlying library architecture, KDE Plasma 6.1.4 will be stabilized over the next days. The base libraries of Qt 6 are already available.
More technical information on the upgrade, which should be fairly seamless, as well as architecture-specific notes can be found in a repository news item. Enjoy!
31 Aug 2024 5:00am GMT
24 Aug 2024
Planet Gentoo
“your actual contribution to gentoo project is now pure shit!”
Ah, the life of a package maintainer. As far as controversial figures go, we probably rank somewhere under florist and nowhere near politician. Update software, back-port patches, submit patches upstream, stay on top of critical bugs, and all of this in a Linux distribution that has seen a decline in popularity. How much hate could I possibly stir up?
Apparently, for one person, quite a lot. Living a pretty reserved life, I have never before experienced a real or implied threat. Note that I do drive on American roads, so I know people have expressed displeasure with my driving at points in the past, but nothing beyond normal, and nothing that I can recall short of a middle finger or two.
The exchange below is with an individual who apparently has a concerning sense of entitlement about the kind of guarantees he expects from no-cost software maintained by a volunteer who has never received, and still does not receive, remuneration of any kind.
Stay safe everyone.
Note: The only editing I did was to fix the flow or add a comment to make it easier to read since this person likes to top post.
On Friday, July 26, 2024 at 12:43:26 PM GMT+2, Max Dubois
makemehappy@rocketmail.com wrote:
Hello,
According with this bug in bugzilla:
219061 - Memory leaks on vmalloc crash every 32 bit kernel after a
commit in 6.6.24 branch
https://bugzilla.kernel.org/show_bug.cgi?id=219061
Evey kernel.org (pure X86 platform) is serious bugged after 6.6.23,
also Gentoo (my preferred distro) has the bug so you should,
eventually after try the bug yourself, mark 6.6.23 in Green, becouse
all the others listed in the gentoo kernel-source page got the bug
(and obviously also the kernel-bin packages too).
The bug is a memory leak that produce vmalloc errors on machines
using highmem (>1024 MB) and this like explained in the bugzilla
will crash very fast a running machine destroying bowser tabs,
preventing for opening apps, terminals and so on).
To reproduce the bug is very easy:
build, if you don't always have it, an x86 virtual machine and
configure it with 4 GB of ram, Virtualbox or VMware is the same,
Boot it with any kernel (gentoo or kernel.org or every kernel) over
6.6.23 (the last working). The machine will boot fine and it seems
to work as expected. Open a terminal and run a logging program (I
like metalog) and then start to use it to run apps, open a firefox
browser, some other terminals. Open some tabs on browser and look at
the logs. In minutes you will get messages like this and others in
the log, probably some kernel oops too:
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 24576
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:37 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:42 debian1232vm kernel: alloc_vmap_area: 104 callbacks
suppressed
Jul 24 17:04:42 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:42 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:42 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
Jul 24 17:04:42 debian1232vm kernel: vmap allocation for size 20480
failed: use vmalloc= to increase size
The running kernel s a brand new 32 bit 6.10.1 downloaded from
kernel.org and compiled.
Increasing the vmall
On Friday, July 26, 2024 at 12:43:26 PM GMT+2, Max Dubois
makemehappy@rocketmail.com wrote:
https://forums.gentoo.org/viewtopic-t-1169951.html
X86 is no more widely used then if we say we maintain compatibility
with x86 systems this bug has to be fixed and the Gentoo kernel x86
shouldn't have anything called stable after 6.6.23.
Regards from sunny italy to a fellow "paisano" like you probably are
Max
PS: grateful for any feedback for my e-mail to you and for any
eventually fix.
Note from Mike: So this was right before a business trip, and I'm not a great flyer, so I was mentally focused on getting through that trip. Got to fly home in a Tropical Storm. Yay.
Second email from my new fan:
Il giorno 23 ago 2024, alle ore 15:27, Mike Pagano
mpagano@gentoo.org ha scritto:
On 8/22/24 18:41, Max Dubois wrote:
Hello mr. Pagano,
After quietly one month of no reply and no action, I can see you
ignored my previous mail.
That is not a good service for the Gentoo community, you still serve
to Gentoo 32 bit kernel like all after 6.6.23 (gentoo or not gentoo)
without even a comment. And make it even greeen!
There is a bug and that bug not allow any use of the 32 bit system
when > 1 GB memory is installed in the system and this bug is
recognised by the kernel.org developers (if not they probably had
closed the ticket in the kernel.org bug section, don't you think
so?).
Now, I'm aware 32 bit systems are not used anymore, then it is not
serious from Gentoo and in this case from a good paisano like you
are, do nothing to inform people related to the gentoo-sources
branch you maintain.
This is not a good service to the community and I'm sad this is done
by an italian like you!
As a true italian and good friend of many paisani like you, I think
you should act in some way to inform Gentoo user base about this
problem. Obviously do your tests before, you should if you still
didn't them silently.
I know, you are a busy man, then it is simply not serious to act
like this.
BTW, I've been also waiting for your reply to my previous e-mail
then you ignored my concise and very precise mail to you. Many guys
in the kernel.org kernel list were interested and contacted me, but
you, mr. "not interested" Pagago.
Mr. Pagano, people in wonderful Campania, the region where your
blood come from, don't act like this! You are probably from small
Frignano, I visited it and it is a wonderful little village, and
Caserta and his reggia is so fantastic (I hope your visited it,
lotsa real americans visit it, so someone from USA - with roots
there - should come to visit!) and I also had a girlfriend years ago
from there (southern italian girls are the best and the prettiest
all around).
Back in subject, please do something for this problem, don't fool
gentoo users and gentoo tree.
Ciao paisano!
MD
PS: I want escalate the problem if you don't want to take any action
and act silently. Gentoo users don't deserve a maintainer not
pointing out if not solving problems in the package they maintain.
My Reply
On 8/23/24 15:27, Mike Pagano wrote:
We do not hold up stabilization for bugs that impact such a small niche of users.
If we accommodated all of these kinds of requests, no kernel would ever be stable.
Good luck with your issue. In the future, keep your emails to me technical and exclude
references to my nationality, real or imagined.
Mike Pagano
Note from Mike: This has been true for the nearly 17 years I have been maintaining the Kernel in Gentoo. Sometimes people have hardware failures, sneak in a proprietary driver, who knows. But unless it impacts a large subset of people, we don't hold up stabilization. Plus, this particular stabilization was for a root exploit.
On Fri, Aug 23, 2024 at 06:59:45PM +0200, Max Dubois wrote:
You like it technical fellow Michele? Here it is!
I forgot this and this is valid for kernel.org guys too.
I wrote it in the bug notes too when someone asked me to fix this!
First of all I'm not a developer, you can call me an advanced user,
second for someone that always have a developer machine with a local
copy of github kernel.org is a lot simple to bisecting the kernel
compared to me I don't need such a blob on my machines!!!Thankx to me, you guys all knows the bug happen between 6.6.23
(working) and 6.6.24 (not working).You guys patch the kernel all the time so it isn't complicated at all
bisecting the kernel to find the culprit modification bug that
introduced the problem.You, dear Michele, maintain gentoo sources, you should have all the
tools around to do that and serve the community!Inviato da iPhone
Another reply….
Il giorno 23 ago 2024, alle ore 18:39, Max Dubois
makemehappy@rocketmail.com ha scritto:
You should proud to be italian, mr. Pagano!
And I bet you also speak some broccolino and you should proud of that
too…
New York, New Jersey that is the broccolino nation and we, from the
real thing, we love you all… and i'm sorry you guys, your
ancestors, been forcing to left such a fantastic place like Italy for
such a shitty place, horrible weather, no history, poor quality life
New York, New Jersey allways offered, not talking about how this places
are adter covid panthomine
Tou should move in California if you can
And yes this bug just impact a small percentual of users then it is
just becouse just few people are 32 bit now! This doesn't mean that not
all the 32 bit aren't buggy for ALL the 32 bit users and it still seems
incredible to me you ignore that and act if the problem is not there.
It is not a not working driver, it is the WHOLE system, all the 32 bir
linux systems, real or virtual, crashing after boot, in minutes!!!
Ciao fratello Michele, stammi bene!
MD (from a beach in the Pontine Islands named Ponza)
My last Reply
On 8/23/24 19:26, Mike Pagano wrote:
Do not contact me any further
Date: Fri, 23 Aug 2024 20:57:04 +0200
From: Max Dubois makemehappy@rocketmail.com
Lol
You are sooo conceited! I saw a picture of you and you look exactly like some good men fron the area of naples!!! You could be a great pizzaiolo or a great mafia man, choose you if you prefer to be around the shitty jersey or the fantastic costiera amalfitana (pizza in your area even if they call it ITALIANA is pure shit) you look perfect for a new soprano serie and believe me all this is a big compliment to you!!!
Ciao michelino, alla prossima!
PS: your actual contribution to gentoo project is now pure shit! You shouldn't mark green buggy kernel (everything over 6.6.23), you are completeky not honest with the community and even with yourself. And a broccolino like you shouls behave better also professionaly! You should let your gentoo-soirces commitment becouse you fail.
24 Aug 2024 3:46pm GMT
20 Aug 2024
Planet Gentoo
Gentoo: profiles and keywords rather than releases
Different distributions have different approaches to releases. For example, Debian simultaneously maintains multiple releases (branches). The "stable" branch is recommended for production use, "testing" for more recent software versions. Every two years or so, the branches "shift" (i.e. the previous "testing" becomes the new "stable", and so on) and users are asked to upgrade to the next release.
Fedora releases aren't really branched like Debian. Instead, they make a new release (with potentially major changes for an upgrade) every half a year, and maintain old releases for 13 months. You generally start with the newest release, and periodically upgrade.
Arch Linux follows a rolling release model instead. There is just one branch that all Arch users use, and releases are made periodically only for the purpose of installation media. Major upgrades are done in-place (and I have to say, they don't always go well).
Now, Gentoo is something of a hybrid, as it combines the best of both worlds. It is a rolling release distribution with a single shared repository that is available to all users. However, within this repository we use a keywording system to provide a choice between stable and testing packages, to facilitate both production and development systems (with some extra flexibility), and versioned profiles to tackle major lock-step upgrades.
Architectures
Before we go into any details, we need to clarify what an architecture (though I suppose platform might be a better term) is in Gentoo. Architectures provide a coarse (and rather arbitrary) way of classifying the different supported processor families.
For example, the amd64 architecture indicates 64-bit x86 processors (also called x86-64) running a 64-bit userland, while x86 indicates a 32-bit userland for x86 processors (both 32-bit and 64-bit capable). Similarly, 64-bit AArch64 (ARMv8) userland is covered by arm64, while 32-bit userland on all ARM architecture versions is covered by arm. This is best seen in the ARM stage downloads - a single architecture is split into subarchitectures there.
For some architectures, the split is even coarser. For example, mips and riscv (at least for the moment) cover both 32-bit and 64-bit variations of the architecture. ppc64 covers both big-endian and little-endian (PPC64LE) variations - and the default big-endian variation tends to cause more issues with software.
Why does the split matter? Primarily because architectures define keywords, and keywords indicate whether a package works. A coarser split means that a single keyword may be used to cover a wide variety of platforms - not all of which work equally well. But more on that further on.
By the way, I've mentioned "platforms" earlier. Why? Because besides the usual architectures, we are using names such as amd64-linux and x64-macos for Prefix - i.e. running Gentoo inside another operating system (or Linux distribution). Historically, we also had a Gentoo/FreeBSD variation.
Profiles
The simplest way of thinking of profiles would be as different Gentoo configurations. Gentoo provides a number of profiles for every supported architecture. Profiles serve multiple purposes.
The most obvious purpose is providing suitable defaults for different, well, profiles of Gentoo usage. So we have base profiles that are better suited for headless systems, and desktop profiles that are optimized for desktop use. Within desktop profiles, we have subprofiles for GNOME and Plasma desktops. We have base profiles for OpenRC, and subprofiles for systemd; base profiles for the GNU toolchain and subprofiles for the LLVM toolchain. Of course, these merely control defaults - you aren't actually required to use a specific subprofile to use the relevant software; you can adjust your configuration directly instead. However, using a profile that fits your setup well makes things easier, and increases the chances of finding Gentoo binary packages that match it.
But there's more to profiles than that. Profiles also control non-trivial system configuration aspects that cannot be easily changed. We have separate profiles for systems that have undergone the "/usr merge", and for systems that haven't - and you can't switch between the two without actually migrating your system first. On some architectures we have profiles with and without multilib; this is e.g. necessary to run 32-bit executables on amd64. On ARM, separate profiles are provided for different architecture versions. The implication of all that is that profiles also control which packages can actually be installed on a system. You can't install 32-bit software on an amd64 non-multilib system, or packages requiring newer ARM instructions on a system using a profile for older processors.
Finally, profiles are versioned to carry out major changes in Gentoo. This is akin to how Debian or Fedora do releases. When we introduce major changes that require some kind of migration, we do that via a new profile version. Users are provided with upgrade instructions, and are asked to migrate their systems. And we do support both old and new profiles for some time. To list two examples:
- 17.1 amd64 profiles changed the multilib layout from using lib64 + lib32 (+ a compatibility lib symlink) to lib64 + lib.
- 23.0 profiles featured hardening- and optimization-related toolchain changes.
Every available profile has one of three stability levels: stable, dev or exp. As you can guess, "stable" profiles are the ones that are currently considered safe to use on production systems. "Dev" profiles should be good too, but they're not as well tested yet. Then, "exp" profiles come with no guarantees, not even of dependency graph integrity (to be explained further on).
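These stability levels are declared in the repository alongside the profile list, in the profiles.desc file. An illustrative excerpt (paths and statuses are examples and change over time):

```
# profiles/profiles.desc - each line: architecture, profile path, status
amd64    default/linux/amd64/23.0                    stable
amd64    default/linux/amd64/23.0/desktop/plasma     stable
arm64    default/linux/arm64/23.0/llvm               dev
alpha    default/linux/alpha/17.0                    exp
```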
Keywords
While profiles can control to some degree which packages can be installed, keywords are at the core of that. Keywords are specified separately for every package version (inside the ebuild), and may be present or absent for every architecture.
A keyword can effectively have one of four states:
- stable (e.g. amd64), indicating that the package should be good to be used on production;
- testing (often called ~arch, e.g. ~amd64), indicating that the package should work, but we don't give strong guarantees;
- unkeyworded (i.e. no keyword for given architecture is present), usually indicating that the package has not been tested yet;
- disabled (e.g. -amd64), indicating that the package can't work on given architecture. This is rarely used, usually for prebuilt software.
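As an illustration, all four states can appear in a single ebuild's KEYWORDS variable (the package and the exact keyword set here are invented for the example):

```
# hypothetical dev-libs/foo-1.2.3.ebuild excerpt:
# stable on amd64, testing on arm64 and ppc64, known-broken on sparc,
# untested (no keyword) everywhere else
KEYWORDS="amd64 ~arm64 ~ppc64 -sparc"
```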
Now, the key point is that users have control over which keywords their package manager accepts. If you're running a production system, you may want to set it to accept stable keywords only - in which case only stable packages will normally be allowed to be installed, and your packages will only be upgraded once the next version is marked stable. Or you may set your system to accept both stable and testing keywords, and help us test them.
Of course, this is not just a binary global switch. At the cost of increased risk and reduced chances of getting support, you can adjust allowed keywords per package, and run a mix of stable and testing. Or you can install some packages that have no keywords at all, including live packages built straight from a VCS repository. Or you can even set your system to follow keywords for another architecture - the sky is the limit!
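In Portage terms, this control lives in a few plain-text configuration files. A sketch with invented package atoms:

```
# /etc/portage/make.conf - accept testing keywords system-wide:
ACCEPT_KEYWORDS="~amd64"

# /etc/portage/package.accept_keywords - or keep the system on stable
# and allow testing keywords only for selected packages:
app-editors/neovim ~amd64
dev-lang/rust ~amd64

# "**" accepts a package regardless of keywords, e.g. a live ebuild:
sys-apps/my-live-tool **
```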
Note that not all Gentoo architectures use stable keywords. There are so-called "pure ~arch arches" that use testing keywords only; examples of such architectures are alpha, loong, and riscv.
Bad terminology: stable and stable
Time for a short intermezzo: as you may have noticed, we have used the term "stable" twice already: once for profiles, and once for keywords. Combined with the fact that not all architectures actually use stable keywords, this can get really confusing. Unfortunately, it's a historical legacy that we have to live with.
So to clarify. A stable profile is a profile that should be good to use on production systems. A stable package (i.e. a package [version] with stable keywords) is a package version that should be good to use on production systems.
However, the two aren't necessarily linked. You can use a dev or even exp profile, but only accept stable keywords, and the other way around. Furthermore, architectures that don't use stable keywords at all, do have stable profiles.
Visibility and dependency graph integrity
Equipped with all that information, now we can introduce the concept of package visibility. Long story short, a package (version) is visible if it is installable on a given system. The primary reasons why a package couldn't be installed are insufficient keywords, or an explicit mask. Let's consider these cases in detail.
As I've mentioned earlier, a particular system can be configured to accept either stable, or both stable and testing keywords. Therefore, on a system set to accept stable keywords, only packages featuring stable keywords can be visible (the remaining packages are masked by "missing keyword"). On a system set to accept both stable and testing keywords, all packages featuring either stable or testing keywords can be visible.
Additionally, packages can be explicitly masked either globally in the repository, or in profiles. These masks are used for a variety of reasons: when a particular package is incompatible with the configuration of a given profile (say, 32-bit packages on a non-multilib 64-bit profile), when it turns out to be broken or when we believe that it needs more testing before we let users install it (even on testing-keyword systems).
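Masks themselves are plain-text lists of package atoms, conventionally preceded by an explanatory comment. Illustrative entries (the package names, author, and date below are invented):

```
# Repository-side mask, e.g. profiles/package.mask:
# Larry the Cow <larry@gentoo.org> (2024-08-01)
# Regresses on upgrade; masked pending more testing.
=dev-libs/foo-2.0.0

# Users can likewise mask packages locally in /etc/portage/package.mask:
>=www-client/bar-3.0
```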
The considerations of package visibility here are limited to the package itself. However, in order for the package to be installable, all its dependencies need to be installable as well. For packages with stable keywords, this means that all their dependencies (including optional dependencies conditional to USE flags that can be enabled on a stable system) have a matching version with stable keywords as well. Conversely, for packages with testing keywords, this means that all dependencies need to have either stable or testing keywords. Furthermore, said dependency versions must not be masked on any profile, on which the package in question is visible.
This is precisely what dependency graph integrity checks are all about. They are performed for all profiles that are either stable or dev (i.e. exp profiles are excluded, and don't guarantee integrity), for all package versions with stable or testing keywords - and for each of these kinds of keywords separately. And when integrity is not maintained, we get automated reports about it and the deployment pipeline is blocked, so ideally users never experience the problem firsthand.
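The integrity rule can be sketched with a toy model. This is my illustration, not the actual tooling - the real checks are run by Gentoo's CI and also account for USE flags, masks, and per-profile visibility:

```python
# Toy model of keyword-based visibility and dependency graph integrity.
# keywords: package name -> {arch: "stable" | "testing"}
# deps: package name -> list of dependency package names

def visible(keywords, pkg, arch, accept):
    """Is pkg installable on arch for a system accepting `accept` keywords?"""
    kw = keywords.get(pkg, {}).get(arch)
    if accept == "stable":
        return kw == "stable"
    return kw in ("stable", "testing")  # testing systems accept both

def integrity_violations(keywords, deps, arch, accept):
    """Visible packages whose dependencies are not all visible themselves."""
    return sorted(
        pkg
        for pkg in keywords
        if visible(keywords, pkg, arch, accept)
        and not all(visible(keywords, d, arch, accept)
                    for d in deps.get(pkg, []))
    )

keywords = {
    "app/foo":    {"amd64": "stable"},
    "dev/libbar": {"amd64": "testing"},  # foo's dependency is only ~amd64!
    "dev/libbaz": {"amd64": "stable"},
}
deps = {"app/foo": ["dev/libbar", "dev/libbaz"]}

# app/foo is stable but depends on testing-only dev/libbar:
print(integrity_violations(keywords, deps, "amd64", "stable"))   # ['app/foo']
# on a testing-keyword system everything is visible, so no violation:
print(integrity_violations(keywords, deps, "amd64", "testing"))  # []
```

Stabilizing app/foo here would require stabilizing dev/libbar first - which is exactly the ordering the real checks enforce.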
The life of a keyword
Now that we have all the fundamental ideas covered, we can start discussing how packages get their keywords in the first place.
The default state for a keyword is "unspecified". For a package to gain a testing keyword, it needs to be tested on the architecture in question. This can either be done by a developer directly, or via a keywording request filed on Gentoo Bugzilla, that will be processed by an arch tester. Usually, only the newest version of the package is handled, but in special circumstances testing keywords can be added to older versions as well (e.g. when required to satisfy a dependency). Any dependencies that are lacking a matching keyword need to be tested as well.
And what happens if the package does not pass testing? Ideally, we file a bug upstream and get it fixed. But realistically, we can't always manage that. Sometimes the bug remains open for quite some time, waiting for someone to take action or for a new release that might happen to start working. Sometimes we decide that keywording a particular package at the time is not worth the effort - and if it is required as an optional dependency of something else, we instead mask the relevant USE flags in the profiles corresponding to the given architecture. In extreme cases, we may actually add a negative -arch keyword, to indicate that the package can't work on the given architecture. However, this is really rare, and we generally do it only as a hint when people spend their time trying to keyword it over and over again.
Once a package gains a testing keyword, it "sticks". Whenever a new version is added, all the keywords from the previous version are copied into it, and stable keywords are lowered into testing keywords. This is done even though the developer only tested it on one of the architectures. Packages generally lose testing keywords only if we either have a justified suspicion that they have stopped working, or if they gained new dependencies that are lacking the keywords in question. Most of the time, we request readding the testing keywords (rekeywording) immediately afterwards.
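For example, the copying and "lowering" on a version bump looks like this in the ebuilds (hypothetical package):

```
# dev-libs/foo-1.2.3.ebuild (existing version):
KEYWORDS="amd64 ~arm64 ppc64"

# dev-libs/foo-1.3.0.ebuild (new version): keywords are carried over,
# with the stable ones lowered to testing:
KEYWORDS="~amd64 ~arm64 ~ppc64"
```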
Now, stable requests follow a stricter routine. The maintainer must decide that a particular package version is ready to become stable first. A rule of thumb is that it's been in testing for a month, and no major regressions have been reported. However, the exact details differ. For example, some projects make separate "stable branch" and "testing branch" releases, and we mark only the former stable. And when vulnerabilities are found in software, we tend to proceed with adding stable keywords to the fixed versions immediately.
Then, a stabilization request is filed, and the package is tested on every architecture before the respective stable keyword is added. Testing is generally done on a system set to accept only stable keywords, therefore it may provide a slightly different environment than the original testing done when the package was keyworded. Note that there is an exception to that rule - if we believe that particular packages are unlikely to exhibit different behavior across different architectures, we do ALLARCHES stabilization and add all the requested stable keywords after testing on one system.
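As far as I know, maintainers opt a package into this behavior via the <stabilize-allarches/> element in its metadata.xml, roughly like so:

```
<!-- metadata.xml excerpt for a package whose behavior should not
     differ across architectures (e.g. pure data or scripts): -->
<pkgmetadata>
  <stabilize-allarches/>
</pkgmetadata>
```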
Unlike with testing keywords, stable keywords need to be added to every version separately. When a new package version is added, all stable keywords in it are replaced by the corresponding testing keywords.
This process pretty much explains the difference between the guarantees given by testing and stable keywords. The testing keywords indicate that some version of the package has been tested on the given architecture at some point, and that we have good reasons to believe that it still works. The stable keywords indicate that this particular version has been tested on a system running stable keywords, and therefore it is less likely to turn out broken. Unfortunately, whether it actually is free of bugs is largely dependent on the quality of test suites, dependencies and so on. So yeah, it's a mess.
The cost of keywords
I suppose that from the user's perspective it would be best if all packages that work on a given architecture had keywords for it; and ideally, all suitable versions would have stable keywords on all relevant architectures. However, every keyword comes with a cost. And that's not only the cost of the actual testing, but also a long-term maintenance cost.
For the most important architectures, Gentoo developers have access to one or more dedicated machines. These machines are used for various purposes: arch testing (i.e. processing keywording and stabilization requests, usually semi-automated), building stage archives, building binary packages, and last but not least, providing the development environments needed to debug and fix bugs. For other architectures, we are entirely dependent on volunteers doing the testing - a few prominent volunteers worthy of the highest praise, I must add.
The cost incurred by testing keywords is comparatively small, but contrary to what you might think, it's not a one time cost. Once a package gains a testing keyword, we generally want to keep it going forward. This means that if it gains new dependencies, we're going to have to retest it - and its new dependencies. However, that's the easy part.
The hard part is that stuff can actually break over time. The package itself can start exhibiting test failures, or stop working entirely. Its new dependencies may turn out to be broken on the architecture in question. In these cases, it's not just the cost of testing - but actually reporting bugs, and possibly debugging and writing patches when upstream authors don't have access to the relevant hardware (and/or don't care). Sometimes you even learn that the author never intended to support given architecture, and is unwilling to accept well-written patches.
And if it turns out that it really isn't feasible to keep the keyword going forward anymore, sometimes removing it may also turn out to be a lot of effort - especially if multiple packages depending on this one have been keyworded as well.
Of course, the cost for stable keywords is much higher. After all, it's no longer a case of one-time testing; we actually have to test every single version that's going stable. This is somewhat amortized by ALLARCHES packages that need to be tested on a single architecture only (and therefore usually are tested on one of the "fast" architectures), but it's still a lot. On top of that, frequent testing is more likely to reveal problems, and therefore to require immediate fixes. This is actually a good thing, but also a future cost to consider. And removing keywords from packages that used to be stable is likely to have a greater impact than removing them from those that never were.
Struggling architectures
All the costs considered, it shouldn't come as a surprise that we sometimes find ourselves struggling with some of the less popular architectures. We may have limited access to hardware, the hardware itself may not be very performant, the hardware and the operating system may be susceptible to breakage. So if we keyword too much, then the arch teams can no longer keep up, the queue is getting long, and requests aren't handled timely. In the extreme case, we may lose the last machine for a given architecture and become stuck, unable to go forward. These are all things to consider.
For these reasons, we periodically discuss the state of architectures in Gentoo. If we determine that some of them are finding it hard to cope, we look for solutions. Of course, one possibility to weigh is getting more hardware - but that's not always justified, or even possible. Sometimes we need to actually reduce the workload.
For architectures that use stable keywords, the obvious possibility is to reduce the number of packages using them - i.e. destabilize packages. Ordinarily, the best targets for this effort would be packages that are old, particularly problematic or unpopular, as they can reduce our effective maintenance cost while minimizing the potential discomfort to users. However, we might need to go deeper than that. In extreme cases, we can go as far as to reduce the stable package set to core system packages. At some point, this kind of reduction forces users to run a mixed stable-testing keyword system, but that at least permits them to limit risk of regressions in the most important packages.
If even that is insufficient, there are more options at our disposal. We can look into removing keywords entirely from packages, particularly packages that require further rekeywording work. We can decide to remove stable keywords from an architecture entirely. In the worst case, we can decide to mark all profiles exp, effectively abandoning dependency graph integrity (at this point, some dependencies may start missing keywords and packages may not be trivially installable), or we can decide to remove the support for a given architecture entirely.
Summary
Gentoo uses a combined profile and keyword system to facilitate user needs on top of a single ebuild repository. This is in contrast with many other distributions that use multiple repositories, make releases, and sometimes maintain multiple release branches simultaneously. In fact, some distributions actually split into multiple versions to facilitate different user profiles. Gentoo does all that in a single, coherent product with rolling releases and profile upgrade paths.
The system of keywords is aimed at providing a good user experience while keeping the maintenance affordable. On most of the supported architectures, we provide stable keywords to help keep production systems on reasonably tested software. Before packages become stable, we offer them to more adventurous users via testing keywords. Gentoo also offers great flexibility - users can mix stable and testing keywords freely (though at the risk of hitting unexpected issues), or run experimental packages that aren't ready to get testing keywords yet.
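For instance, mixing a single testing package into an otherwise stable system is just a one-line configuration entry (the package and architecture below are only an example):

```
# /etc/portage/package.accept_keywords/example
# Accept the testing (~amd64) keyword for one package only;
# everything else stays on the stable keyword.
dev-lang/rust ~amd64
```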
Unfortunately, there are limits to how much support for various architectures we can provide. We are largely reliant on either having appropriate machines available, or volunteers with the hardware to test stuff for us, not to mention developers having skills and energy to debug and fix architecture-specific problems. Sometimes this turns out to be insufficient to cope with all the work, and we need to give up on some of the architecture support.
Still, I think the system works pretty well here, and it is one of Gentoo's strong suits. Sure, it occasionally needs a push here and there, or a policy change, but it's been one of Gentoo's foundations for years, and it doesn't look as if it's going to be replaced anytime soon.
20 Aug 2024 6:44pm GMT
14 Aug 2024
Planet Gentoo
Gentoo Linux drops IA-64 (Itanium) support
Following the removal of IA-64 (Itanium) support in the Linux kernel and glibc, and subsequent discussions on our mailing list, as well as a vote by the Gentoo Council, Gentoo will discontinue all ia64 profiles and keywords. The primary reason for this decision is the inability of the Gentoo IA-64 team to support this architecture without kernel support, glibc support, and a functional development box (or even a well-established emulator). In addition, there have been only very few users interested in this type of hardware.
As also announced in a news item, in one month, i.e. in the first half of September 2024, all ia64 profiles will be removed, all ia64 keywords will be dropped from all packages, and all IA-64 related Gentoo bugs will be closed.
14 Aug 2024 5:00am GMT
23 Jul 2024
Planet Gentoo
Optimizing distutils-r1.eclass via wheel reuse
Yesterday I enabled a new distutils-r1.eclass optimization: wheel reuse. Without this optimization, the eclass would build a separate wheel for every Python implementation enabled, and then install every one of these wheels. In many cases, this meant repeatedly building the same thing. With the optimization enabled, under some circumstances the eclass will be able to build one (or two) wheels, and install them for all implementations.
This change brings the eclass behavior closer to the behavior of package managers such as pip. While this will cause no change for users who build packages for a single Python version only, it can bring some nice speedup when building for multiple interpreters. Particularly, pure Python packages using setuptools will no longer incur the penalty of having to start setuptools multiple times (which is quite slow), and packages using the stable ABI won't have to build roughly identical extensions multiple times.
In this post, I'm going to shortly go over a few design considerations of the new feature.
Pure Python wheels, and partial C extension compatibility
The obvious candidates for wheel reuse are pure Python wheels, i.e. packages using the *-py3-none-any.whl (or *-py2.py3-none-any.whl) suffix. Therefore, the algorithm would be roughly this: build a wheel; if you get a pure Python wheel, use it for all implementations.
[Well, to be more precise, the eclass works more like this: check if any of the previously built wheels can be used; if one can, use it; otherwise build a new wheel, add it to the list and use that.]
However, there is a problem with that approach: some packages feature extensions that aren't used across all supported implementations. In particular, some packages don't enable extensions for PyPy (often simply because pure Python code with JIT tends to be faster than calling into the C/Rust extension). Since we're building for PyPy3 first, the pure Python wheel created for PyPy would end up being reused across all implementations!
Fortunately, a simple way around the problem was already available - for multiple reasons, we already expect DISTUTILS_EXT to be set for all ebuilds featuring (at least optional) compiled extensions. Therefore, I've modified the logic to reuse pure Python wheels only if we don't expect extensions. If we do, then pure Python wheels are ignored.
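The resulting pure-wheel logic can be sketched in Python (the eclass itself is written in bash; the function name and wheel names here are illustrative, not the eclass's actual identifiers):

```python
def can_reuse_pure(wheel_name: str, distutils_ext: bool) -> bool:
    """Reuse a previously built wheel only if it is pure Python
    and the ebuild declares no (even optional) compiled extensions."""
    pure = wheel_name.endswith(("-py3-none-any.whl", "-py2.py3-none-any.whl"))
    return pure and not distutils_ext

# A pure wheel is reused only when DISTUTILS_EXT is unset:
print(can_reuse_pure("frobnicate-1.0-py3-none-any.whl", distutils_ext=False))  # True
# With DISTUTILS_EXT set, a pure wheel (e.g. built for PyPy) is ignored:
print(can_reuse_pure("frobnicate-1.0-py3-none-any.whl", distutils_ext=True))   # False
```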
Of course, this is not a perfect solution. If a package supports more than one implementation that uses the pure Python version, the wheel won't be reused. In fact, if a package features a native-extensions flag and it's disabled, so that no extensions are built at all, pure Python wheel reuse is also disabled! But that's just a matter of a missed optimization, and it's better to stay on the safe side here.
Still, there are some risks left here. In particular, if a developer misses the CPython-only extension and includes PyPy3 from day one, wheel reuse will prevent the eclass from immediately reporting missing DISTUTILS_EXT. Fortunately, I think we can reasonably expect that someone will build it with PyPy3 target disabled and report the problem. In fact, I'm pretty sure our CI will catch that very fast.
Stable ABI wheels
The second candidates for wheel reuse are stable ABI wheels. Long story short, normally Python extensions are only guaranteed to be compatible with the single version of Python they were built for. However, should one use the so-called limited API, the resulting extensions will be forward-compatible with all CPython versions newer than the specified minimal version. The benefit of reusing stable ABI wheels is much greater than for pure Python wheels - we avoid repeatedly building the same C or Rust code, which can be quite resource-intensive.
Normally, reusing stable ABI wheels requires determining whether a particular ABI/platform tag is compatible with the implementation in question. For example, a stable ABI wheel could be suffixed *-cp38-abi3-linux_x86_64.whl. This means that the particular wheel is compatible with CPython 3.8 and newer, on Linux x86_64 platform. Unfortunately, these tags can get quite complex and packaging features quite extensive code for determining tag compatibility.
The good news is that we don't really need to do that. Since we're building wheels locally, we don't need to be concerned about the platform tag at all. Furthermore, since we are building from the oldest to the newest Python version, we can also ignore the ABI tag (beyond checking for abi3) and assume that a wheel built for an earlier CPython version will be compatible with the newer one. That said, we need to account for the fact that the stable ABI is supported only by CPython and not PyPy.
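The stable-ABI branch can be sketched similarly (again a simplification with made-up names; thanks to building locally and oldest-first, only the abi3 tag and the target implementation need checking):

```python
def can_reuse_abi3(wheel_name: str, impl: str) -> bool:
    """Reuse a stable ABI wheel for a CPython target.

    Locally built wheels let us ignore the platform tag, and building
    from the oldest CPython up lets us ignore the minimum-version part
    of the tag (e.g. cp38) -- only abi3 and CPython-ness matter.
    """
    tags = wheel_name[:-len(".whl")].split("-")
    abi_tag = tags[-2]  # e.g. "abi3" in *-cp38-abi3-linux_x86_64.whl
    return abi_tag == "abi3" and impl.startswith("cpython")

print(can_reuse_abi3("frobnicate-1.0-cp38-abi3-linux_x86_64.whl", "cpython3.12"))  # True
print(can_reuse_abi3("frobnicate-1.0-cp38-abi3-linux_x86_64.whl", "pypy3"))        # False
```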
Multiple wheels per package
One final problem with wheel reuse is that a single Gentoo package may be building multiple wheels. For example, dev-python/sqlglot builds a main Python package and a Rust extension. A "dumb" wheel reuse would mean that the first wheel built would be used for all subsequent calls, even if these were supposed to build completely different packages!
To resolve this issue, I've converted the DISTUTILS_WHEELS variable into an associative array, mapping wheels into directory paths. For every wheel built, we are recording both wheel path and the source directory - and reusing the wheel only if the directory matches.
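In Python terms, the bookkeeping behaves roughly like a dict keyed by wheel path (the real DISTUTILS_WHEELS is a bash associative array in the eclass; paths and names below are hypothetical):

```python
# Hypothetical mirror of DISTUTILS_WHEELS: wheel path -> source directory.
distutils_wheels: dict[str, str] = {}

def record_wheel(wheel: str, srcdir: str) -> None:
    distutils_wheels[wheel] = srcdir

def candidate_wheels(srcdir: str) -> list[str]:
    """Only wheels built from the same source directory may be reused."""
    return [w for w, d in distutils_wheels.items() if d == srcdir]

record_wheel("dist/sqlglot-25.0-py3-none-any.whl", "/src/sqlglot")
record_wheel("dist/sqlglotrs-0.2-cp38-abi3-linux_x86_64.whl", "/src/sqlglot/sqlglotrs")
print(candidate_wheels("/src/sqlglot"))  # ['dist/sqlglot-25.0-py3-none-any.whl']
```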
Summary
The resulting code in distutils-r1.eclass implements all that was mentioned above. I had been using it for two months prior to enabling it by default, and found no issues. During this period, the eclass was additionally verifying that Python packages don't install files with different contents when they declare that they produce universal wheels.
I'm really proud of how simple the logic is. If wheel reuse is enabled, scan the recorded wheel list for wheels matching the current directory. For all matching wheels, check their tags. If we do not expect extensions, and we've got a pure Python wheel, use it. If we are installing for CPython, and we've got a stable ABI wheel, use it. Otherwise (no matching wheel, or reuse disabled), build and install a new wheel (this is actually a call to the old function) and add it to the list.
Hope this helps you save some time and save some energy. I definitely don't need the extra heating in this hell of a summer.
23 Jul 2024 6:06pm GMT
11 Jul 2024
Planet Gentoo
The review-work balance, and other dilemmas
One of the biggest problems of working in a project as large as Gentoo is that there's always a lot of work to be done. Once you get engaged deeply enough, no matter how hard you try, the backlog will just keep growing. There are just so many things that need to be done, and someone has to do them.
Sooner or later, you are going to start facing some dilemmas, such as:
- This befell me, because nobody else was willing to do that. Should I continue overburdening myself with this, or should I leave it and let it rot?
- I have more time than other people on the team. Should I continue doing the bulk of the work, or leave more of it to them?
- What is the right balance between reviewing contributions, and doing the work myself?
In this post, I'd like to discuss these problems from my perspective as a long-time Gentoo developer.
Doing what needs to be done
There are things I've taken up in Gentoo simply because I've found them interesting or enjoyable. However, there are also some things that I've taken up, because they needed to be done and nobody was doing them. And then there are things that fall somewhere in the middle - like in Python, where I enjoy lots of stuff, but this also implies I'm ending up with a lot of thankless work. And I don't believe it's fair to just do the nice part, and ignore the hard part.
The immediate reasons for taking up these jobs vary. Sometimes a particular problem affected me directly, so I stepped up to resolve it - this is basically how people end up joining the Gentoo Infrastructure team. Sometimes I've noticed something early that would be a major hassle for users later on, and I've taken it up. Sometimes I've noticed that many users are already complaining about something, and that something needs to be done.
But then, what next? Let's say I've ended up doing something that's not really a good fit for me. I keep sending calls for help, but receive no offers. Now I'm facing said dilemma: Should I continue overburdening myself with this, or should I leave it and let it rot?
The truth is, sometimes abandoning stuff is the right thing to do. It has major drawbacks: it affects people, and makes the work pile up. However, it also makes people more aware of the problem. Sometimes it's the only way to have another person pick it up. At other times, it makes more users aware of the problem and they can offer to help.
However, it's never an easy choice to make, and should you make it, you are never sure if it will actually work. It may turn out that you will eventually have to pick it up yourself, and have to deal with all the resulting backlog.
Doing work yourself, or letting others do it
There's a proverb: if you want something done right, you have to do it yourself. Perhaps it's not the nicest way of putting it. Let's frame the problem differently. You are the person in the best position to do something. You are flexible; you can do things the same day, while others need two or three days.
So, there are advantages and disadvantages to doing things yourself. On one hand, it means things get done sooner (so users benefit), and people who are more busy with their lives aren't distracted by stuff that you can do. On the other hand, it means that if you are already overburdened with work, you spend time on things that others can do for you, instead of on things that only you can do. Others have less opportunity to practice doing stuff, and in the end may even get discouraged from actively contributing.
The last part is actually the biggest problem; it's a bit of a chicken-or-egg problem. If you're more experienced, you're better equipped to deal with problems. However, this means that others don't get an opportunity to gain the experience and become better. And this in turn means that the actual bus factor is not as high as it could be - you have people interested in doing stuff, but they don't have the training.
This is where the dilemma comes in: Should I continue doing the bulk of the work, or leave more of it to them? Sometimes it's not that big of a deal - in the Python team, the version bumps are mostly a sliding-window kind of work. I do the bulk of the bumps every morning, and others join in at different times. But sometimes it isn't that easy.
And in the end, you never really know whether things will work as expected: if you start doing things less often, giving more time to others, will they actually find time to do them? Or will it just mean you're going to end up doing more the next time, and at the same time lose your perfect track record of response time?
The balance between doing and reviewing
Like any Free Software project, Gentoo has a thriving community. Part of being a developer is accepting contributions from this community. Unfortunately, reviewing them is not always easy.
Sometimes it is, for example, when a pull request is addressing a very specific problem, and you just have to look at the diff, and perhaps test it. At other times, reviewing actually takes more work than doing things yourself. And then you have to strive for balance.
For example, let's consider an average version bump. Doing it yourself goes roughly like this: run a script to copy the ebuild, check the diff between the sources, update the ebuild, test. Sometimes it's trivial, sometimes it's not - but it's all pretty streamlined. Now, if you're reviewing a version bump done by someone else, you need to merge their commits, diff the packages, diff the ebuilds, test. Most of the time, this means you're actually doing more work than if you were doing it yourself - but this is fine so far.
The problem is, sometimes the user doesn't do some extra maintenance tasks you'd do (not blaming them, but it's something you want done anyway). Sometimes there are mistakes to be fixed. All these things multiply the work involved, and delay the actual bump (effectively affecting users negatively). You need to leave review comments, wait for the user to update and try again. Rinse and repeat.
The worst part is that you're never sure if it's worth it. Sometimes you don't even know if the user is really interested in working on this, or just wanted to get the package bumped. You spend your time pointing out issues, the user spends theirs fixing them, and in the end you both would have preferred that you had done it all yourself.
The flip side is that there are actually promising contributors, and if you go through the whole effort, you'll end up having people whom you can actually trust to do things right, and it pays back in the end. Perhaps people who are going to become Gentoo developers. But you have to put in a lot of effort and take a lot of risk for this. And this isn't easy when you are already overburdened.
If you get the balance wrong on one side, you get things done, but you get no new help and the project eventually dies. If you get it wrong on the other side, you waste your time, get no benefit and don't get other things done.
And then LLMs come and promise a new hell for you: people who could be (unintentionally) making pull requests with plagiarized, bad quality code. They submit stuff with the minimum of effort, you spend a lot of effort reviewing them, only to discover that the submitters have no clue what they've sent in the first place. That's one of the reasons Gentoo has banned LLM contributions - and added an explicit checkbox to the pull request template. But will this suffice?
Summary
In this post, I've summarized some of the biggest dilemmas I'm facing as a Gentoo developer. In fact, we're all facing them. We're all forced to make decisions, and see their outcome. Sometimes we see that what we did was right, and it pays off. Sometimes, it turns out that things end up on fire, and again we have to make a choice - should we give up and run with the fire extinguisher, and go back to square one? Or should we just let it burn? Perhaps somebody else will extinguish it then, or perhaps it's actually better if it burns to the ground… Maybe it will turn out to be a phoenix?
11 Jul 2024 2:11pm GMT
23 Jun 2024
Planet Gentoo
Evolving QA tooling
QA support in Gentoo has been a fluid, amorphous goal over the project's history. Throughout the years, developers have invented their own scripts and extensions to work around the limitations of official tooling. More recently, the relaxed standards have been tightened up a fair amount, but there is still plenty of room for further improvement.
Beginning my tenure as an ebuild maintainer between 2005 and 2010, much of the development process revolved around CVS and repoman, both of which felt slow and antiquated even at the outset. Thankfully, CVS was swapped out for git in 2015, but repoman stuck around for years after that. While work was done on repoman over the years that followed, its overall design flaws were never corrected, leading to it being officially retired in 2022 in favor of pkgcheck (and pkgdev).
Comparatively speaking, pkgcheck is much better designed than repoman; however, it still lags in many areas, largely due to relying on pkgcore1 and to using an amalgamation of caching and hacks to obtain a modicum of performance via parallelization. In short, performance can still be drastically improved, but the work required to achieve such results is not easy.
Pkgcraft support
Similar to how pkgcheck builds on top of pkgcore, pkgcraft provides its core set of QA tooling via pkgcruft2, an ebuild linter featuring a small subset of pkgcheck's functionality with several extensions. As the project is quite new, its limited number of checks run the spectrum from bash parsing to dependency scanning.
An API for scanning and reports is also provided, allowing language bindings for pkgcruft or building report (de)serialization into web interfaces and other tools. For example, a client/server combination could be constructed that creates and responds to queries related to reports generated by given packages between certain commit ranges.
Looking towards the future, the current design allows extending its ability past ebuild repos to any viable targets that make sense on Gentoo systems. For example, it could be handy to scan binary repos for outdated packages, flag installed packages removed from the tree, or warn about USE flag settings in config files that aren't relevant anymore. These types of tasks are often handled in a wide variety of places (or left to user implementation) at varying quality and performance levels.
Install
For those running Gentoo, it can be found in the main tree at dev-util/pkgcruft. Alternatively, it can be installed via cargo using the following commands:
Current release: cargo install pkgcruft
From git: cargo install pkgcruft --git https://github.com/pkgcraft/pkgcraft.git
Pre-built binaries are also provided for releases on supported platforms.
Metadata issues
Before going through usage patterns, it should be noted that pkgcraft currently doesn't handle metadata generation in threaded contexts, so pkgcruft will often crash when run against ebuilds with outdated metadata. Fixing this requires redesigning how pkgcraft interacts with its embedded bash interpreter, probably forcing the use of a process-spawning daemon similar to pkgcore's ebd (ebuild daemon), but handled natively instead of in bash.
A simple workaround involves incrementally generating metadata by running pk pkg metadata from any ebuild repo directory3. If that command completes successfully, then pkgcruft can be run from the same directory as well. On failure, the related errors should be fixed and metadata generated before attempting to run pkgcruft. So as a reference, pkgcruft can safely be run on writable repos via a command similar to the following:
pk pkg metadata && pkgcruft scan
It might be easiest to add a shell alias allowing options to be specified for pkgcruft scan until pkgcraft's metadata generation issue with threads is solved.
Usage
Much of pkgcruft's command-line interface mirrors that of pkgcheck, as there are only so many ways to construct a linter, and the similarity aids in mapping existing knowledge to a new tool. See the following commands for example usage:
Scanning
Scan the current directory assuming it's inside an ebuild repo:
pkgcruft scan
Scan an unconfigured, external repo:
pkgcruft scan path/to/repo
Scan the configured gentoo repo:
pkgcruft scan --repo gentoo
pkgcruft scan '*::gentoo'
Scan all dev-python/* ebuilds in the configured gentoo repo:
pkgcruft scan --repo gentoo 'dev-python/*'
pkgcruft scan 'dev-python/*::gentoo'
See the help output for other scan-related options such as reporter support or report selection. Man pages and online documentation will also be provided in the future.
pkgcruft scan --help
Filtering
Native filtering support is included via the -f/--filters option, allowing specific package versions matching various conditions to be targeted. Note that filters can be chained and inverted to further narrow down targets. Finally, only checks that operate on individual package versions can be run when filters are used; all others are automatically disabled.
Restrict to the latest version of all packages:
pkgcruft scan -f latest
Restrict to packages with only stable keywords:
pkgcruft scan -f stable
Restrict to unmasked packages:
pkgcruft scan -f '!masked'
Restrict to the latest, non-live version:
pkgcruft scan -f '!live' -f latest
Beyond statically defined filters, much more powerful package restrictions are supported and can be defined using a declarative query format that allows logical composition. More information relating to valid package restrictions will be available once better documentation is written for them and pkgcraft in general. Until that work has been done, see the following commands for example usage and syntax:
Restrict to non-live versions maintained by the python project:
pkgcruft scan -f '!live' -f "maintainers any email == 'python@gentoo.org'"
Restrict to packages without maintainers:
pkgcruft scan -f "maintainers is none"
Restrict to packages with RDEPEND containing dev-python/* and empty BDEPEND:
pkgcruft scan -f "rdepend any 'dev-python/*' && bdepend is none"
Replay
Similar to pkgcheck, replay support is provided, enabling workflows that cache results and then replay them later, potentially using custom filters. Pkgcruft only supports serializing reports to newline-separated JSON objects at this time, which can be done via the following command:
pkgcruft scan -R json > reports.json
The serialized reports file can then be passed to the replay subcommand to deserialize the reports:
pkgcruft replay reports.json
This functionality can be used to perform custom package filtering, sort the reports, or filter the report variants. See the following commands for some examples:
Replay all dev-python/* related reports, returning the total count:
pkgcruft replay -p 'dev-python/*' reports.json -R simple | wc -l
Replay all report variants generated by the Whitespace check:
pkgcruft replay -c Whitespace reports.json
Replay all python update reports:
pkgcruft replay -r PythonUpdate reports.json
Replay all reports in sorted order:
pkgcruft replay --sort reports.json
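Since the reports are plain newline-delimited JSON, they can also be post-processed outside pkgcruft entirely. A Python sketch (the "kind" field name here is an assumption for illustration, not pkgcruft's documented schema; inspect your own reports.json for the actual fields):

```python
import json
from collections import Counter

# Count report kinds in a newline-delimited JSON report stream,
# as produced by: pkgcruft scan -R json > reports.json
# Field names below are hypothetical sample data.
sample = """\
{"kind": "PythonUpdate", "package": "dev-python/foo"}
{"kind": "Whitespace", "package": "dev-python/bar"}
{"kind": "PythonUpdate", "package": "dev-python/baz"}
"""

counts = Counter(json.loads(line)["kind"] for line in sample.splitlines())
print(counts.most_common())  # [('PythonUpdate', 2), ('Whitespace', 1)]
```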
Benchmarks and performance
Rough benchmarks comparing pkgcruft and pkgcheck, targeting a related check run over a semi-recent gentoo repo checkout on a modest laptop with 8 cores/16 threads (AMD Ryzen 7 5700U) and a midline SSD, are as follows:
- pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 (approximately 5s)
- pkgcruft: pkgcruft scan -c PythonUpdate -j16 (approximately 0.56s)
For comparative parallel efficiency, pkgcruft achieves the following with different numbers of jobs:
- pkgcruft scan -c PythonUpdate -j8 (approximately 0.65s)
- pkgcruft scan -c PythonUpdate -j4 (approximately 1s)
- pkgcruft scan -c PythonUpdate -j2 (approximately 2s)
- pkgcruft scan -c PythonUpdate -j1 (approximately 4s)
Note that these results are approximate averages over multiple runs without flushing memory caches. Initial runs of the same commands will be slower due to additional I/O latency.
While the python update check isn't overly complex it does require querying the repo for package matches which is the most significant portion of its runtime. Little to no work has been done on querying performance for pkgcraft yet, so it may be possible to decrease the runtime before resorting to drastic changes such as a more performant metadata cache format.
While there is still room to improve, pkgcruft already runs faster using a single thread than pkgcheck running on all available cores. Most of this probably comes from the implementation language, which is further exhibited when restricting runs to single category and package targets, where process startup time dominates. See the following results for the same check run in those contexts:
Targeting dev-python/*:
- pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 (approximately 1s)
- pkgcruft: pkgcruft scan -c PythonUpdate -j16 (approximately 0.13s)
Targeting dev-python/jupyter-server:
- pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 (approximately 0.38s)
- pkgcruft: pkgcruft scan -c PythonUpdate -j16 (approximately 0.022s)
Note that in the case of targeting a single package with multiple versions, pkgcruft currently doesn't parallelize per version, and thus could possibly halve its runtime if that work is done.
Finally, in terms of memory usage, pkgcruft usually consumes about an order of magnitude less than pkgcheck, mostly because Rust's ownership model lets it use immutable references where Python has to clone objects. Also, pkgcheck's parallel design uses processes instead of threads due to python's weaker concurrency support, again rooted in historical language design4, leading to more inefficiency. This difference may grow as more intensive checks or query caching are implemented, as pkgcruft should be able to share writable objects between threads via locking or channels more readily than pkgcheck can, in a performant manner, between processes.
But is the duplicated effort worth it?
Even with some benchmarks showing potential, it may be hard to convince others that reworking QA scanning yet again is a worthwhile endeavor. This is a fair assessment, as much work has gone into pkgcheck to bring it to its current state underpinning Gentoo's QA. When weighing this opinion, it helps to revisit why repoman was supplanted and to discuss its relative performance compared to pkgcheck.
Disregarding the work done on enabling more extensive checks, it can be argued that pkgcheck's performance differential allowed it to be more reasonably deployed at scale and is one of the main reasons Gentoo QA has noticeably improved in the last five to ten years. Instead of measuring a full tree scan in hours (or perhaps even days on slower machines) it can run in minutes. This has enabled Gentoo's CI (continuous integration) setup to flag issues within a shorter time period after being pushed to the tree.
Pkgcheck's main performance improvement over repoman came from a design enabling much better internal parallelization, something repoman entirely lacked for the majority of its existence. However, single-thread performance was much closer for similar use cases.
With that in mind, pkgcruft runs significantly faster than pkgcheck in single-threaded comparisons of related checks, before even taking its more efficient parallelization design (threads vs processes) into account. Similar to the jump from repoman to pkgcheck, using pkgcruft could enable even more CI functionality that has never been seriously considered, such as rejecting git pushes server-side due to invalid commits.
Whether this makes the reimplementation effort worthwhile is still debatable, but it's hard to argue against a design that achieves similar results using an order of magnitude less time and space with little work done towards performance thus far. If nothing else, it exhibits a glimpse of potential gains if Gentoo can ever break free of its pythonic shackles.
Future work
As with all replacement projects, there are many features pkgcruft lacks when comparing it to pkgcheck. Besides the obvious check set differential, the following are a few ideas beyond what pkgcheck supports that could come to fruition if more work is completed.
Viable revdeps cache
Verifying reverse dependencies (revdeps) is relevant to many dependency-based checks, most of which are limited in scope or have to run over the entire repo. For example, when removing packages, pkgcheck needs to do a full-tree visibility scan in order to verify package dependencies.
Leveraging a revdeps cache, this could be drastically simplified to checking a much smaller set of packages. The major issues with this feature are defining a cache format supporting relatively quick (de)serialization and restriction matching while also supporting incremental updates in a performant fashion.
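The idea can be illustrated with a toy in-memory structure (package names are made up; a real cache would need a serializable on-disk format and incremental updates, which is the hard part):

```python
from collections import defaultdict

# Toy forward-dependency data: package -> packages it depends on.
deps = {
    "app-misc/foo": ["dev-libs/libx"],
    "app-misc/bar": ["dev-libs/libx", "dev-libs/liby"],
}

# Inverting it once gives reverse dependencies, so "who depends on
# dev-libs/libx?" becomes a single lookup instead of a full-tree scan.
revdeps: defaultdict[str, set[str]] = defaultdict(set)
for pkg, ds in deps.items():
    for dep in ds:
        revdeps[dep].add(pkg)

print(sorted(revdeps["dev-libs/libx"]))  # ['app-misc/bar', 'app-misc/foo']
```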
Git commit hooks
None of the QA tools developed for Gentoo have been fast enough to run server-side per git push, rejecting invalid commits before they hit the tree. In theory, pkgcruft might be able to get there, running in the 50-500ms range depending on the set of checks enabled, amount of target packages, and hardware running them.
Properly supporting this while minding concurrent pushes requires a daemon that the git hook queues tasks on, with some type of filtering to ignore commits that cause too many package metadata updates (as it would take too long to responsively update metadata and scan them on most systems). Further down the road, it could make sense to decouple pushing directly to the main branch and instead provide a merge queue backed by pkgcruft. That would alleviate some of the runtime-sensitive pressure, allowing a move from sub-second goals to sub-minute ones, especially if some sense of progress and status is provided for feedback.
Native git bisect support
Extending pkgcheck's git support provided by pkgcheck scan --commits, it should be possible to natively support bisecting ebuild repo commit ranges to find a bad commit generating certain report variants. Gentoo CI supports this in some form for its notification setup, but implements it in a more scripted fashion, preventing regular users from leveraging it without recreating a similar environment.
Pkgcruft could internally run the procedure using native git library support and expose it via a command such as pkgcruft bisect a..b. While this may be a workflow only used by more experienced devs, it would be handy to support natively instead of forcing users to roll their own scripts.
1. Pkgcore ambles over the low bar set by portage's design but has been showing its age since 2015 or so. It's overly meta, leaning into python's "everything is an object" tenet too much while hacking around the downsides of that approach for performance reasons. ↩︎
2. Aiming to fight the neverending torrent of package cruft in ebuild repos. ↩︎
3. Install pkgcraft-tools in order to use the pk command. ↩︎
4. Python's weaker threading support may improve thanks to the ongoing work to disable the GIL (global interpreter lock) in CPython 3.13; however, it's still difficult to see how a language not designed for threading (outside usage such as asynchronous I/O) adapts while supporting both GIL and non-GIL functionality via separate builds, having already gone through a compatibility fiasco during the py2 -> py3 era. ↩︎
23 Jun 2024 3:09am GMT
30 May 2024
Planet Gentoo
The dead weight of packages in Gentoo
You've probably noticed it already: Gentoo developers are overwhelmed. There are a lot of unresolved bugs. There are a lot of unmaintained packages. There are a lot of open pull requests. This is all true, but it's all part of a larger problem, and a problem that doesn't affect Gentoo alone.
It's a problem that any major project is going to face sooner or later, and especially a project that's almost entirely relying on volunteer work. It's a problem of bitrot, of different focus, of energy deficit. And it is a very hard problem to solve.
A lot of packages - a lot of effort
Packages are at the core of a Linux distribution. After all, what would any of Gentoo's advantages be worth if people couldn't actually use them to realize their goals? Some people even go as far as to say: the more packages, the better. Gentoo needs to have popular packages, because many users will want them. Gentoo needs to have unique packages, because that gives it an edge over other distributions.
However, having a lot of packages is also a curse. All packages require at least some maintenance effort. Some packages require very little of it, others require a lot. When packages aren't being maintained properly, they stop serving users well. They become outdated, they accumulate bugs. Users spend time building dependencies just to discover that the package itself has been failing to build for months. Users try different alternatives just to discover that half of them don't work at all, or are so outdated that they don't actually have the functions upstream advertises, or even have data loss bugs.
Sometimes the maintenance deficit is not that bad, but it usually is. Skipping every few releases of a frequently released package may have no ill effects, and save some work. Or it could mean that instead of dealing with trivial diffs (especially if upstream cared to make the changes atomic), you end up having to untangle a complex backlog. Or bisect bugs introduced a few releases ago. Or deal with an urgent security bump combined with major API changes.
If the demand for maintenance isn't met for a long time, bitrot accumulates. And getting things going straight again becomes harder and harder. On top of that, if we can't handle our current workload, how are we supposed to find energy to deal with all the backlog? Things quickly spiral out of control.
People want to do what they want to do
We all have packages that we find important. Sometimes, these packages require little maintenance, sometimes they are a pain in the ass. Sometimes, they end up being unmaintained, and we really wish someone would take care of them. Sometimes, we may even go as far as to be angry that people are taking care of less important stuff, or that they keep adding new stuff while the existing packages rot.
The thing is, in a project that's almost entirely driven by volunteer work, you can't expect people to do what you want. The best you can achieve with that attitude is alienating them, and actively stopping them from doing anything. I'm not saying that there aren't cases where this is actually preferable, but that's beside the point. If you want something done, you either have to convince people to do it, do it yourself, or pay someone to do it. But even that might not suffice. People may agree with you, but lack the energy, time, or skills to do the work, or to review your work.
On top of that, there will always be an inevitable push towards adding new packages rather than dealing with abandoned ones. Users expect new software too. They don't want to learn that Gentoo can't have a single Matrix client, because we're too busy keeping 20 old IRC clients alive. Or that they can't have Fediverse software, because we're overwhelmed with 30 minor window managers. And while this push is justified, it also means that the pile of unmaintained packages will still be there, and at the same time people will put effort into creating even more packages that may eventually end up on that pile.
The job market really sucks today
Perhaps it's the nostalgia talking, but the situation in the job market is getting worse and worse. As I've mentioned before, the vast majority of Gentoo developers and contributors are volunteers. They are people who generally need to work full-time to keep themselves alive. Perhaps they work overtime. Perhaps they work in toxic workplaces. Perhaps they are sucked dry of their energy by other problems. And they need to find time and energy to do Gentoo on top of that.
There are a handful of developers hired to do Gentoo. However, they are hired by corporations, and this obviously limits what they can do for Gentoo. To the best of my knowledge, there is no longer such a thing as "time to do random stuff in work time". Their work can be beneficial to Gentoo users. Or it may not be. They may maintain important and useful packages, or they may end up adding lots of packages that they aren't allowed to properly maintain afterwards, and that create extra work for others in the end.
Perhaps an option would be for Gentoo to actually pay someone to do stuff. However, this is a huge mess. Even assuming we could afford it, how do we choose what to pay for? And whom to pay? In the end, the necessary proceedings also require a lot of effort and energy, and the inevitable bikeshed is quite likely to drain it from anyone daring enough to try.
Proxy maintenance is not a long-term solution
Let's be honest: proxy maintenance was supposed to make things better, but there's only so much it can do. In the end, someone needs to review stuff, and while it pays back greatly, it is more effort than "just doing it". And there's no guarantee that the contributor will respond promptly, especially if we weren't able to review stuff promptly ourselves. Things can easily drag out over time, or get stalled entirely, and that's just one problem.
We've stopped accepting new packages via proxy-maint a long time ago, because we weren't able to cope with it. I've created GURU to let people work together without being blocked by developers, but that's not a perfect solution either.
And proxy-maint is just one facet of pull requests. Many pull requests affect packages maintained by a variety of developers, and handling them is even harder, as they require getting the developer to review or acknowledge the change.
So what is the long-term solution? Treecleaning?
I'm afraid it's time to come to an unfortunate conclusion: the only real long-term solution is to keep removing packages. There are only so many packages that we can maintain, and we need to make hard decisions. Keeping unmaintained and broken packages is bad for users. Spending effort fixing them ends up biting us back.
The joke is, most of the time it's actually less effort to fix the immediate problem than to last rite and remove a package. Especially when someone already provided a fix. However, fixing the immediate issue doesn't resolve the larger problem of the package being unmaintained. There will be another issue, and then another, and you will keep pouring energy into it.
Of course, things can get worse. You can actually pour all that energy into last rites, just to have someone "rescue" the package last minute. Just to leave it unmaintained afterwards, and then you end up going through the whole effort again. And don't forget that in the end you're the "villain" who wants to take away a precious package from the users, and they were the "hero" who saved it, and now the users have to deal with a back-and-forth. It's a thankless job.
However, there's one advantage to removing packages: they can be moved to GURU afterwards. There, they can have another shot at finding an active maintainer, and they can easily be made available to users without adding to developers' workload. Of course, I'm not saying that GURU should be a dump for packages removed from Gentoo - but it's a good choice if someone actually wants to maintain them afterwards.
So there is hope - but it is also a lot of effort. But perhaps that's a better way to spend our energy than trying to deal with an endless influx of pull requests, and with developers adding tons of new packages that nobody will be able to take over afterwards.
30 May 2024 7:24pm GMT
10 Apr 2024
Planet Gentoo
Gentoo Linux becomes an SPI associated project
As of this March, Gentoo Linux has become an Associated Project of Software in the Public Interest, see also the formal invitation by the Board of Directors of SPI. Software in the Public Interest (SPI) is a non-profit corporation founded to act as a fiscal sponsor for organizations that develop open source software and hardware. It provides services such as accepting donations, holding funds and assets, … SPI qualifies for 501(c)(3) (U.S. non-profit organization) status. This means that all donations made to SPI and its supported projects are tax deductible for donors in the United States. Read on for more details…
Questions & Answers
Why become an SPI Associated Project?
Gentoo Linux, as a collective of software developers, is pretty good at being a Linux distribution. However, becoming a US federal non-profit organization would increase the non-technical workload.
The current Gentoo Foundation has bylaws restricting its behavior to that of a non-profit, and is a recognized non-profit only in New Mexico, while remaining a for-profit entity at the US federal level. A direct conversion to a federally recognized non-profit would be unlikely to succeed without significant effort and cost.
Finding Gentoo Foundation trustees to take care of the non-technical work is an ongoing challenge. Robin Johnson (robbat2), our current Gentoo Foundation treasurer, spent a huge amount of time and effort with getting bookkeeping and taxes in order after the prior treasurers lost interest and retired from Gentoo.
For these reasons, Gentoo is moving the non-technical organizational overhead to Software in the Public Interest (SPI). As noted above, SPI is already recognized at the US federal level as a full-fledged 501(c)(3) non-profit. It also handles several projects of similar type and size (e.g., Arch and Debian) and as such has exactly the experience and background that Gentoo needs.
What are the advantages of becoming an SPI Associated Project in detail?
Financial benefits to donors:
- tax deductions [1]
Financial benefits to Gentoo:
- matching fund programs [2]
- reduced organizational complexity
- reduced administration costs [3]
- reduced taxes [4]
- reduced fees [5]
- increased access to non-profit-only sponsorship [6]
Non-financial benefits to Gentoo:
- reduced organizational complexity, no "double-headed beast" any more
- less non-technical work required
[1] Presently, almost no donations to the Gentoo Foundation provide a tax benefit for donors anywhere in the world. Becoming an SPI Associated Project enables tax benefits for donors located in the USA. Some other countries do recognize donations made to non-profits in other jurisdictions and provide similar tax credits.
[2] This also depends on jurisdictions and local tax laws of the donor, and is often tied to tax deductions.
[3] The Gentoo Foundation currently pays $1500/year in tax preparation costs.
[4] In recent fiscal years, through careful budgetary planning on the part of the Treasurer and advice of tax professionals, the Gentoo Foundation has used depreciation expenses to offset taxes owing; however, this is not a sustainable strategy.
[5] Non-profits are eligible for reduced fees, e.g., of Paypal (savings of 0.9-1.29% per donation) and other services.
[6] Some sponsorship programs are only available to verified 501(c)(3) organizations.
Can I still donate to Gentoo, and how?
Yes, of course, and please do so! To start, you can go to SPI's Gentoo page and scroll down to the Paypal and Click&Pledge donation links. More information and more ways will be set up soon. Keep in mind, donations to Gentoo via SPI are tax-deductible in the US!
In time, Gentoo will contact existing recurring donors, to aid transitions to SPI's donation systems.
What will happen to the Gentoo Foundation?
Our intention is to eventually transfer the existing assets to SPI and dissolve the Gentoo Foundation. The precise steps needed on the way to this objective are still under discussion.
Does this affect in any way the European Gentoo e.V.?
No. Förderverein Gentoo e.V. will continue to exist independently. It is also recognized to serve public-benefit purposes (§ 52 Fiscal Code of Germany), meaning that donations are tax-deductible in the E.U.
10 Apr 2024 5:00am GMT
01 Apr 2024
Planet Gentoo
The interpersonal side of the xz-utils compromise
While everyone is busy analyzing the highly complex technical details of the recently discovered xz-utils compromise that is currently rocking the internet, it is worth looking at the underlying non-technical problems that make such a compromise possible. A very good write-up can be found on the blog of Rob Mensching...
"A Microcosm of the interactions in Open Source projects"
01 Apr 2024 2:54pm GMT
15 Mar 2024
Planet Gentoo
Optimizing parallel extension builds in PEP517 builds
The distutils (and therefore setuptools) build system supports building C extensions in parallel, through the use of -j (--parallel) option, passed either to build_ext or build command. Gentoo distutils-r1.eclass has always passed these options to speed up builds of packages that feature multiple C files.
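The underlying mechanism is plain distutils/setuptools option passing. The sketch below builds a throwaway package (name and layout made up for the example) to show the -j option being accepted by the build command:

```shell
# Create a minimal pure-Python package and build it with two parallel jobs.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir demo_pkg
echo 'ANSWER = 42' > demo_pkg/__init__.py
cat > setup.py <<'EOF'
from setuptools import setup
setup(name="demo", version="1.0", packages=["demo_pkg"])
EOF
# -j/--parallel is accepted by both the build and build_ext commands;
# it only matters when there are C extensions to compile, but is harmless here.
python3 setup.py -q build -j 2
ls build/lib/demo_pkg/__init__.py
```

With C extensions present, the compiler invocations for separate source files are run in parallel, which is exactly what the eclass relied on.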
However, the switch to the PEP517 build backend made this problematic. While the backend uses the respective commands internally, it doesn't provide a way to pass options to them. In this post, I'd like to explore the different ways we attempted to resolve this problem, trying to find an optimal solution that would let us benefit from parallel extension builds while keeping overhead minimal for packages that wouldn't benefit from it (e.g. pure Python packages). I will also include fresh benchmark results to compare these methods.
The history
The legacy build mode utilized two ebuild phases: the compile phase, during which the build command was invoked, and the install phase, during which the install command was invoked. An explicit command invocation made it possible to simply pass the -j option.
When we initially implemented the PEP517 mode, we simply continued calling esetup.py build, prior to calling the PEP517 backend. The former call built all the extensions in parallel, and the latter simply reused the existing build directory.
This was a bit ugly, but it worked most of the time. However, it suffered from a significant overhead from calling the build command. This meant significantly slower builds in the vast majority of packages that did not feature multiple C source files that could benefit from parallel builds.
The next optimization was to replace the build command invocation with more specific build_ext. While the former also involved copying all .py files to the build directory, the latter only built C extensions - and therefore could be pretty much a no-op if there were none. As a side effect, we've started hitting rare bugs when custom setup.py scripts assumed that build_ext is never called directly. For a relatively recent example, there is my pull request to fix build_ext -j… in pyzmq.
I've followed this immediately with another optimization: skipping the call if there were no source files. To be honest, the code started looking messy at this point, but it was an optimization nevertheless. For the no-extension case, the overhead of calling esetup.py build_ext was replaced by the overhead of calling find to scan the source tree. Of course, this still had some risk of false positives and false negatives.
The next optimization was to call build_ext only if there were at least two files to compile. This mostly addressed the added overhead for packages building only one C file - but of course it couldn't resolve all false positives.
One more optimization was to make the call conditional on the DISTUTILS_EXT variable. While the variable was introduced for another reason (to control adding the debug flag), it provided a nice solution to avoid both most of the false positives (even if they were extremely rare) and the overhead of calling find.
The last step wasn't mine. It was Eli Schwartz's patch to pass build options via DIST_EXTRA_CONFIG. This provided the ultimate optimization - instead of trying to hack a build_ext call around, we were finally able to pass the necessary options to the PEP517 backend. Needless to say, it meant not only no false positives and no false negatives, but it effectively almost eliminated the overhead in all cases (except for the cost of writing the configuration file).
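The resulting mechanism can be sketched as follows: the eclass writes a small distutils-style configuration file and points the DIST_EXTRA_CONFIG environment variable at it. The path and job count below are illustrative:

```ini
# Extra configuration file referenced by DIST_EXTRA_CONFIG;
# option names correspond to the build and build_ext commands.
[build]
build_base = /var/tmp/portage/dev-python/example/work/example-1.0/build

[build_ext]
parallel = 12
```

Since this is just an environment variable plus a file read by setuptools itself, no extra setup.py invocation is needed at all.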
The timings
| Method | Django 5.0.3 | Cython 3.0.9 |
|---|---|---|
| Serial PEP517 build | 5.4 s | 46.7 s |
| build + PEP517 | 3.1 s + 5.3 s = 8.4 s total | 20.8 s + 2.7 s = 23.5 s total |
| build_ext + PEP517 | 0.6 s + 5.4 s = 6.0 s total | 20.8 s + 2.7 s = 23.5 s total |
| find + build_ext + PEP517 | 0.06 s + 5.4 s = 5.5 s total | 20.9 s + 2.7 s = 23.6 s total |
| Parallel PEP517 build | 5.4 s | 22.8 s |
For a pure Python package (django here), the table clearly shows how successive iterations have reduced the overhead from parallel build support, from roughly 3 seconds in the earliest approach, resulting in 8.4 s total build time, to the same 5.4 s as the regular PEP517 build.
For Cython, all but the ultimate solution result in roughly 23.5 s total, half of the time needed for a serial build (46.7 s). The ultimate solution saves another 0.8 s on the double invocation overhead, giving the final result of 22.8 s.
Test data and methodology
The methods were tested against two packages:
- Django 5.0.3, representing a moderate size pure Python package, and
- Cython 3.0.9, representing a package with a moderate number of C extensions.
Python 3.12.2_p1 was used for testing. The timings were done using time command from bash. The results were averaged from 5 warm cache test runs. Testing was done on AMD Ryzen 5 3600, with pstates boost disabled.
The PEP517 builds were performed using the following command:
python3.12 -m build -nwx
The remaining commands and conditions were copied from the eclass. The test scripts, along with the results, spreadsheet and plot source can be found in the distutils-build-bench repository.
15 Mar 2024 3:41pm GMT
13 Mar 2024
Planet Gentoo
The story of distutils build directory in Gentoo
The Python distutils build system, as well as setuptools (which it was later merged into), used a two-stage build: first, a build command would prepare a built package version (usually just copy the .py files, sometimes compile Python extensions) into a build directory, then an install command would copy them to the live filesystem, or a staging directory. Curiously enough, distutils was an early adopter of out-of-source builds - when used right (which often enough wasn't the case), no writes would occur in the source directory and all modifications would be done directly in the build directory.
Today, in the PEP517 era, two-stage builds aren't really relevant anymore. Build systems were turned into black boxes that spew wheels. However, setuptools still internally uses the two-stage build and the build directory, and therefore it still remains relevant to Gentoo eclasses. In this post, I'd like to shortly tell how we dealt with it over the years.
Act 1: The first overrides
Normally, distutils would use a build directory of build/lib*, optionally suffixed for platform and Python version. This was reasonably good most of the time, but not good enough for us. On one hand, it didn't properly distinguish CPython and PyPy (and it wouldn't for a long time, until Use cache_tag in default build_platlib dir PR). On the other, the directory name would be hard to get, if ebuilds ever needed to do something about it (and we surely did).
Therefore, the eclass would start overriding build directories quite early on. We would start by passing --build-base to the build command, then add --build-lib to make the lib subdirectory path simpler, then replace it with separate --build-platlib and --build-purelib to work around build systems overriding one of them (wxPython, if I recall correctly).
The eclass would call this mode an "out-of-source build" and use a dedicated BUILD_DIR variable to refer to the dedicated build directory. Confusingly, "in-source build" would actually indicate a distutils-style out-of-source build in the default build subdirectory, with the eclass creating a separate copy of the sources for every Python target (effectively permitting in-source modifications).
The last version of code passing --build* options.
Act 2: .pydistutils.cfg
The big problem with the earlier approach is that you'd have to pass the options every time setup.py is invoked. Given the design of option passing in distutils, this effectively meant that you needed to repeatedly invoke the build commands (otherwise you couldn't pass options to them).
The next step was to replace this logic with the .pydistutils.cfg configuration file. The file, placed in HOME (also overridden by the eclass), would allow us to set option values without actually having to pass specific commands on the command line. The relevant logic, added in September 2013 (commit: Use pydistutils.cfg to set build-dirs instead of passing commands explicitly…), remains in the eclass even today. However, since the PEP517 build mode stopped using this file, it is used only in legacy mode.
The latest version of the code writing .pydistutils.cfg.
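For illustration, the generated file amounted to something like this (the path is a made-up example; the eclass filled in the real per-target build directory):

```ini
# ~/.pydistutils.cfg as written into the overridden HOME;
# option names match the long options of the build command.
[build]
build_base = /var/tmp/portage/dev-python/example/work/example-1.0-python3_12/build
```

Since distutils reads this file on every invocation, the options no longer had to be repeated on each setup.py command line.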
Act 3: Messy PEP517 mode
One of the changes caused by building in PEP517 mode was that .pydistutils.cfg started being ignored. This implied that setuptools was using the default build directory again. It wasn't such a big deal anymore - since we no longer used proper separation between the two build stages, and we no longer needed any awareness of the intermediate build directory, the path didn't matter per se. However, it meant CPython and PyPy started sharing the same build directory again - and since the setuptools install stage picks everything up from that directory, it meant that extensions built for PyPy3.10 would be installed into the CPython3.10 directory!
How did we deal with that? Well, at first I tried calling setup.py clean -a. It was kinda ugly, especially as it meant combining setup.py calls with PEP517 invocations - but then, we were already calling setup.py build to take advantage of parallel build jobs when building extensions, and it worked. For a time.
Unfortunately, it turned out that some packages override the clean command and break our code, or even literally block calling it. So the next step was to stop being fancy and literally call rm -rf build. Well, this was ugly, but - again - it worked.
Act 4: Back to the config files
As I've mentioned before, we continued to call the build command in PEP517 mode, in order to enable building C extensions in parallel via the -j option. Over time, this code grew in complexity - we've replaced the call with more specific build_ext, then started adding heuristics to avoid calling it when unnecessary (a no-op setup.py build_ext call slowed pure Python package builds substantially).
Eventually, Eli Schwartz came up with a great alternative - using DIST_EXTRA_CONFIG to provide a configuration file. This meant that we could replace both setup.py invocations - by using the configuration file both to specify the job count for extension builds, and to use a dedicated build directory.
The change was originally done only for the explicit use of the setuptools build backend. As a result, we've missed a bunch of "indirect" setuptools uses - other setuptools-backed PEP517 backends (jupyter-builder, pbr), backends using setuptools conditionally (pdm-backend), custom wrappers over setuptools and… the dev-python/setuptools package itself ("standalone" backend). We've learned about it the hard way when setuptools stopped implicitly ignoring the build directory as a package name - and effectively a subsequent build collected a copy of the previous build as a build package. Yep, we've ended up with a monster of /usr/lib/python3.12/site-packages/build/lib/build/lib/setuptools.
So we approach the most recent change: enabling the config for all backends. After all, we're just setting an environment variable, so other build backends will simply ignore it.
And so, we've come full circle. We enabled configuration files early on, switched to other hacks when PEP517 builds broke that, and eventually returned to unconditionally using configuration files.
13 Mar 2024 10:18am GMT
23 Feb 2024
Planet Gentoo
Gentoo RISC-V Image for the Allwinner Nezha D1
Motivation
The Allwinner Nezha D1 was one of the first available RISC-V single-board computers (SBCs), crowdfunded and released in 2021. According to the manufacturer, "it is the world's first mass-produced development board that supports 64bit RISC-V instruction set and Linux system."
Installing Gentoo on this system usually involved grabbing one existing image, like the Fedora one, and swapping the userland with a Gentoo stage3.
Bootstrapping via a third-party image is now no longer necessary.
A Gentoo RISC-V Image for the Nezha D1
I have uploaded a (for now experimental) Gentoo RISC-V image for the Nezha D1 at
https://dev.gentoo.org/~flow/gymage/
Simply dd(rescue) the image onto an SD card and plug that card into your board.
Now, you can either connect to the UART or plug in an Ethernet cable to get to a login prompt.
UART
You typically want to connect a USB-to-UART adapter to the board. Unlike other SBCs, the debug UART on the Nezha D1 is clearly labeled with GND, RX, and TX. Using the standard ThunderFly color scheme, this resolves to black for ground (GND), green for RX, and white for TX.
Then fire up your favorite serial terminal and power on the board.
Note: Your mileage may vary. For example, you probably want your user to be a member of the 'dialout' group to access the serial port, and the device name of your USB-to-UART adapter may not be /dev/ttyUSB0.
SSH
The Ethernet port of the board is configured to use DHCP for network configuration. An SSH daemon is listening on port 22.
Login
The image comes with a 'root' user whose password is set to 'root'. Note that you should change this password as soon as possible.
gymage
The image was created using the gymage tool.
I envision gymage becoming an easy-to-use tool that allows users to create up-to-date Gentoo images for single-board computers. The tool is in an early stage with some open questions. However, you are free to try it. The source code of gymage is hosted at https://gitlab.com/flow/gymage, and feedback is, as always, appreciated.
Stay tuned for another blog post about gymage once it matures further.
23 Feb 2024 12:00am GMT
04 Feb 2024
Planet Gentoo
Gentoo x86-64-v3 binary packages available
At the end of December 2023, we made our official announcement of binary Gentoo package hosting. The initial package set for amd64 was and is baseline x86-64, i.e., it should work on any 64-bit Intel or AMD machine. Now, we are happy to announce that a separate package set using the extended x86-64-v3 ISA (i.e., microarchitecture level) is also available for the same software. If your hardware supports it, use it and enjoy the speed-up! Read on for more details…
Questions & Answers
How can I check if my machine supports x86-64-v3?
The easiest way to do this is to use glibc's dynamic linker:
larry@noumea ~ $ ld.so --help
Usage: ld.so [OPTION]... EXECUTABLE-FILE [ARGS-FOR-PROGRAM...]
You have invoked 'ld.so', the program interpreter for dynamically-linked
ELF programs. Usually, the program interpreter is invoked automatically
when a dynamically-linked executable is started.
[...]
[...]
Subdirectories of glibc-hwcaps directories, in priority order:
x86-64-v4
x86-64-v3 (supported, searched)
x86-64-v2 (supported, searched)
larry@noumea ~ $
As you can see, this laptop supports x86-64-v2 and x86-64-v3, but not x86-64-v4.
How do I use the new x86-64-v3 packages?
On your amd64 machine, edit the configuration file in /etc/portage/binrepos.conf/
that defines the URI from where the packages are downloaded, and replace x86-64
with x86-64-v3
. E.g., if you have so far
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64/
then you change the URI to
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64-v3/
That's all.
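For reference, a complete binrepos.conf entry then looks roughly like this (the file and section names below are examples; keep whatever your existing configuration already defines):

```ini
# /etc/portage/binrepos.conf/gentoobinhost.conf
[binhost]
priority = 9999
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64-v3/
```

Portage picks the new URI up on the next emerge with binary package support enabled.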
Why don't you have x86-64-v4 packages?
There's not yet enough hardware and people out there that could use them.
We could start building such packages at any time (our build host is new and shiny), but for now we recommend you build from source and use your own CFLAGS then. After all, if your machine supports x86-64-v4, it's definitely fast…
Why is there recently so much noise about x86-64-v3 support in Linux distros?
Beats us. The ISA is 9 years old (only the x86-64-v3 tag was slapped onto it recently), so you'd think binaries would have been generated by now. With Gentoo you could've done (and probably have done) it all the time.
That said, in some processor lines (e.g., Intel Atom), support for this instruction set was introduced rather late (2021).
04 Feb 2024 6:00am GMT
22 Jan 2024
Planet Gentoo
2023 in retrospect & happy new year 2024!
A Happy New Year 2024 to all of you! We hope you enjoyed the fireworks; we tried to contribute to these too with the binary package news just before new year! That's not the only thing in Gentoo that was new in 2023 though; as in the previous years, let's look back and give it a review.
Gentoo in numbers
The number of commits to the main ::gentoo repository has remained at an overall high level in 2023, decreasing only slightly from 126682 to 121000. The number of commits by external contributors has actually increased from 10492 to 10708, now spread across 404 unique external authors.
GURU, our user-curated repository with a trusted user model, is still attracting a lot of potential developers. We have had 5045 commits in 2023, a slight decrease from 5751 in 2022. The number of contributors to GURU has clearly increased however, from 134 in 2022 to 158 in 2023. Please join us there and help package the latest and greatest software. That's the ideal preparation for becoming a Gentoo developer!
On the Gentoo bugtracker bugs.gentoo.org, we've had 24795 bug reports created in 2023, compared to 26362 in 2022. The number of resolved bugs shows a similar trend, with 22779 in 2023 compared to 24681 in 2022. Many of these bugs are stabilization requests; a possible interpretation is that stable Gentoo is becoming more and more current, catching up with new software releases.
New developers
In 2023 we have gained 3 new Gentoo developers. They are in chronological order:
-
Arsen Arsenović (arsen): Arsen joined up as a developer right at the start of the year in January, from Belgrade, Serbia. He's a computer science student interested in both maths and music, active in many different free software projects, and has already made his mark, e.g., in our emacs and toolchain projects.
-
Paul Fox (ris): After already being very active in our Wiki for some time, Paul joined in March as developer from France. Activity on our wiki and documentation quality will certainly grow much further with his help.
-
Petr Vaněk (arkamar): Petr Vaněk, from Prague, Czech Republic, joined the ranks of our developers in November. Gentoo user since 2009, craft beer enthusiast, and Linux kernel contributor, he has already been active in very diverse corners of Gentoo.
Featured changes and news
Let's now look at the major improvements and news of 2023 in Gentoo.
Distribution-wide Initiatives
-
Binary package hosting: Gentoo now also provides binary packages, for easier and faster installation! For amd64 and arm64, we've got a stunning >20 GByte of packages on our mirrors, from LibreOffice to KDE Plasma and from Gnome to Docker. And would you think 9-year-old x86-64-v3 is still experimental? We already have it on our mirrors! For all other architectures and ABIs, the binary package files used for building the installation stages (including the build toolchain) are available for download.
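For users who want to try this, enabling the official binary host is a matter of Portage configuration. A minimal sketch, assuming an amd64 system; the repository name and mirror URL below are illustrative, and the exact path depends on your architecture and profile:

```ini
# /etc/portage/binrepos.conf/gentoobinhost.conf
# (illustrative; pick the mirror path matching your arch and profile)
[gentoobinhost]
priority = 9999
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86_64/
```

With a binhost configured, `emerge --ask --getbinpkg <package>` prefers a prebuilt package when one is available; adding `FEATURES="getbinpkg"` to make.conf makes this the default behavior.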
-
New 23.0 profiles in preparation: A new profile version, i.e. a collection of presets and configurations, is at the moment undergoing internal preparation and testing for all architectures. It's not ready yet, but will integrate more toolchain hardening by default, as well as fix a lot of internal inconsistencies. Stay tuned for an announcement with more details in the near future.
-
Modern C: Work continues on porting Gentoo, and the Linux userland at large, to Modern C. This is a real marathon effort rather than a sprint (just see our tracker bug for it). Our efforts, together with the parallel project ongoing in Fedora, have already helped many upstreams, which have accepted patches in preparation for GCC 14 (which starts to enforce modern language usage).
-
Event presence: At the Free and Open Source Developers European Meeting (FOSDEM) 2023, the Free and Open Source Software Conference (FrOSCon) 2023, and the Chemnitzer Linux-Tage (CLT) 2023, Gentoo had a booth with mugs, stickers, t-shirts, and of course the famous self-compiled buttons.
-
Google Summer of Code: In 2023 Gentoo had another successful year participating in the Google Summer of Code. We had three contributors completing their projects; you can find out more about them by visiting the Gentoo GSoC blog. We thank our contributors Catcream, LabBrat, and Listout, and also all the developers who took the time to mentor them.
-
Online workshops: Our German support association Gentoo e.V. organized six online workshops this year on building and improving ebuilds. These will continue every two months in the upcoming year.
-
Documentation on wiki.gentoo.org has been making great progress as always. This past year the contributor's guide, article writing guidelines, and help pages were updated to give the best possible start to anyone ready to lend a hand. The Gentoo Handbook got updates and a new changelog. Of course much documentation was fixed, extended, or updated, and quite a few new pages were created. We hope to see even more activity in the new year, and hopefully some new contributors. Editing documentation is a particularly easy way to start contributing to Gentoo, so please give it a try!
Architectures
-
Alpha: Support for the DEC Alpha architecture was revived, with a massive keywording effort going on. While not perfectly complete yet, we are very close to a fully consistent dependency tree and package set for alpha again.
-
musl: Support for the lightweight musl libc has been added to the architectures MIPS (o32) and m68k, with corresponding profiles in the Gentoo repository and corresponding installation stages and binary packages available for download. Enjoy!
Packages
-
.NET: The Gentoo Dotnet project has significantly improved support for building .NET-based software, using the nuget, dotnet-pkg-base, and dotnet-pkg eclasses. Now we're ready for packages depending on the .NET ecosystem and for developers using dotnet-sdk on Gentoo. New software requiring .NET is constantly being added to the main Gentoo tree. Recent additions include PowerShell for Linux, Denaro (a finance application), Pinta (a graphics program), Ryujinx (a Nintendo Switch emulator), and many others aimed straight at developing .NET projects.
-
Java: OpenJDK 21 has been introduced for amd64, arm64, ppc64, and x86!
-
Python: The default Python version in Gentoo has meanwhile reached Python 3.11. Additionally, Python 3.12 is available as stable as well; once again we're fully up to date with upstream.
-
PyPy3 compatibility for scientific Python: While some packages (numexpr, pandas, xarray) are at the moment still undergoing upstream bug fixing, more and more scientific Python packages have been adapted in Gentoo and upstream for the speed-optimized Python variant PyPy. This can provide a nice performance boost for numerical data analysis…
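To experiment with this on Gentoo, Python packages can be built against PyPy via the usual PYTHON_TARGETS mechanism. A minimal sketch, with the caveat that the exact target names depend on the active profile and not every package supports the pypy3 target:

```ini
# /etc/portage/make.conf
# Build Python packages for both CPython 3.11 and PyPy3 where supported
# (illustrative; many scientific packages still lack pypy3 support)
PYTHON_TARGETS="python3_11 pypy3"
```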
-
Signed kernel modules and (unified) kernel images: We now support signing of both in-tree and out-of-tree kernel modules and kernel images. This is useful for those who would like the extra bit of verification offered by Secure Boot, which is now easier than ever to set up on Gentoo systems! Additionally, our kernel install scripts and eclasses are now fully compatible with Unified Kernel Images and our prebuilt gentoo-kernel-bin can now optionally install an experimental pregenerated generic Unified Kernel Image.
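As a hedged sketch of what enabling module signing can look like in practice (the variable names follow the Gentoo wiki's signed kernel module support documentation; the key and certificate paths are illustrative placeholders):

```ini
# /etc/portage/make.conf
USE="${USE} modules-sign secureboot"              # enable signing support in kernel packages
MODULES_SIGN_KEY="/var/lib/kernel-keys/mok.pem"   # illustrative path to the signing key
MODULES_SIGN_CERT="/var/lib/kernel-keys/mok.pem"  # certificate (may be the same PEM file)
MODULES_SIGN_HASH="sha512"                        # hash algorithm used for signatures
```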
-
The GAP System: A new dev-gap package category has arrived with about sixty packages. GAP is a popular system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP consists of a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, and large data libraries of algebraic objects. It has its own package ecosystem, mostly written in the GAP language with a few C components.
Physical and Software Infrastructure
-
Portage improvements: A significant amount of work went into enhancing our package manager, Portage, to better support binary package deployment. Users building their own binary packages and setting up their own infrastructure will certainly benefit from it too.
-
packages.gentoo.org: The development of Gentoo's package database website, packages.gentoo.org, has picked up speed, with new features for maintainer, category, and arch pages, and Repology integration. Many optimizations were made to the backend database queries, and the website should now feel faster to use.
-
pkgdev bugs: A new developer tool, pkgdev bugs, now greatly simplifies the procedure for filing new stabilization request bugs. Given just version lists (which can be generated by other tools), pkgdev bugs computes dependencies, cycles, and merges, and files the bugs for the architecture teams and testers. This allows us to move ahead much faster with package stabilizations.
Finances of the Gentoo Foundation
-
Income: The Gentoo Foundation took in approximately $18,500 in fiscal year 2023; the majority (over 80%) were individual cash donations from the community.
-
Expenses: Our expenses in 2023 split into the usual three categories: $6,000 in operating expenses (for services, fees, …), only minor capital expenses (for purchased assets), and $20,000 in depreciation expenses (value loss of existing assets).
-
Balance: We have about $101,000 in the bank as of July 1, 2023 (which is when our fiscal year 2023 ends for accounting purposes). The draft financial report for 2023 is available on the Gentoo Wiki.
Thank you!
Obviously this is not all the Gentoo development that happened in 2023. From KDE to GNOME, from kernels to scientific software, you can find much more if you look at the details. As every year, we would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. As a volunteer project, Gentoo could not exist without them. And if you are interested and would like to contribute, please join us and help us make Gentoo even better!
22 Jan 2024 6:00am GMT