26 Jul 2025
Planet Gentoo
EPYTEST_PLUGINS and other goodies now in Gentoo
If you are following the gentoo-dev mailing list, you may have noticed that a fair number of patches have been sent for the Python eclasses recently. Most of them have centered on pytest support. Long story short, I've come up with what I believe to be a reasonably good design, and decided it's time to stop manually repeating all the good practices in every ebuild separately.
In this post, I am going to briefly summarize all the recently added options. As always, all of them are also documented in the Gentoo Python Guide.
The unceasing fight against plugin autoloading
The pytest test loader defaults to automatically loading all the plugins installed to the system. While this is usually quite convenient, especially when you're testing in a virtual environment, it can get quite messy when you're testing against system packages and end up with lots of different plugins installed. The results can range from slowing tests down to completely breaking the test suite.
Our initial attempts to contain the situation were based on maintaining a list of known-bad plugins and explicitly disabling their autoloading. The list of disabled plugins has gotten quite long by now. It includes both plugins that were known to frequently break tests, and those that frequently resulted in automagic dependencies.
While the opt-out approach allowed us to resolve the worst issues, it only worked when we knew about a particular problem. So naturally we'd miss the rarer issues, and learn of them only when arch testing workflows failed or users reported breakage. And of course, we were still loading lots of unnecessary plugins, at a cost to performance.
So, we started disabling autoloading entirely, using the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable. At first we only used it when we needed to; over time, however, we've started using it almost everywhere - after all, we don't want test suites to suddenly start failing because of a newly installed pytest plugin.
For a long time, I have been hesitant to disable autoloading by default. My main concern was that it's easy to miss a missing plugin. Say, if you ended up failing to load pytest-asyncio or a similar plugin, all the asynchronous tests would simply be skipped (verbosely, but it's still easy to miss among the flood of warnings). However, eventually we started treating this warning as an error (and then pytest started doing the same upstream), and I have decided that going opt-in is worth the risk. After all, we were already disabling it all over the place anyway.
EPYTEST_PLUGINS
Disabling plugin autoloading is only the first part of the solution. Once you disable autoloading, you need to load the plugins explicitly - it's no longer sufficient to add them as test dependencies; you also need to add a bunch of -p switches. And then you need to keep the dependencies and the pytest switches in sync. So you'd end up with bits like:
BDEPEND="
	test? (
		dev-python/flaky[${PYTHON_USEDEP}]
		dev-python/pytest-asyncio[${PYTHON_USEDEP}]
		dev-python/pytest-timeout[${PYTHON_USEDEP}]
	)
"

distutils_enable_tests pytest

python_test() {
	local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
	epytest -p asyncio -p flaky -p timeout
}
Not very efficient, right? The idea then is to replace all that with a single EPYTEST_PLUGINS variable:
EPYTEST_PLUGINS=( flaky pytest-{asyncio,timeout} )
distutils_enable_tests pytest
And that's it! EPYTEST_PLUGINS takes a bunch of Gentoo package names (without category - almost all of them reside in dev-python/, and we can special-case the few that do not), distutils_enable_tests adds the dependencies and epytest (in the default python_test() implementation) disables autoloading and passes the necessary flags.
Now, what's really cool is that the function will automatically determine the correct argument values! This can be especially important if entry point names change between package versions - and upstreams generally don't consider this an issue, since autoloading isn't affected.
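To illustrate the idea, here is a hedged sketch in plain Python (the eclass itself is bash, and its actual logic differs): conceptually, it needs the "pytest11" entry points of a distribution, since -p takes the entry point name while PYTEST_PLUGINS takes the module path. The helper name is made up for illustration.

```python
# Illustrative sketch only, not the eclass's actual code: extract the
# "pytest11" entry points of a distribution.  ep.name is what epytest
# would pass via -p; ep.value is what PYTEST_PLUGINS would carry.
from importlib.metadata import EntryPoint, distribution
from typing import Iterable


def pytest11_plugins(entry_points: Iterable[EntryPoint]) -> list[tuple[str, str]]:
    """Return (entry point name, module path) pairs for pytest plugins."""
    return [(ep.name, ep.value) for ep in entry_points if ep.group == "pytest11"]


# Usage against an installed package would look like:
#   pytest11_plugins(distribution("pytest-asyncio").entry_points)
```

Because the names are looked up rather than hardcoded, a rename of the entry point between package versions is picked up automatically.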
Going towards no autoloading by default
Okay, that gives us a nice way of specifying which plugins to load. However, weren't we talking of disabling autoloading by default?
Well, yes - and the intent is that it's going to be disabled by default in EAPI 9. However, until then there's a simple solution we encourage everyone to use: set an empty EPYTEST_PLUGINS. So:
EPYTEST_PLUGINS=()
distutils_enable_tests pytest
…and that's it. When EPYTEST_PLUGINS is set to an empty list, autoloading is disabled; when it's unset, autoloading remains enabled for backwards compatibility. And the next pkgcheck release is going to suggest it:
dev-python/a2wsgi
  EPyTestPluginsSuggestion: version 1.10.10: EPYTEST_PLUGINS can be used to control pytest plugins loaded
EPYTEST_PLUGIN* to deal with special cases
While the basic feature is neat, it is not a silver bullet. The approach is insufficient for some packages, most notably pytest plugins that run pytest subprocesses without the appropriate -p options and expect plugins to be autoloaded there. However, after some more fiddling, we arrived at three helpful features:
- EPYTEST_PLUGIN_LOAD_VIA_ENV that switches explicit plugin loading from -p arguments to PYTEST_PLUGINS environment variable. This greatly increases the chance that subprocesses will load the specified plugins as well, though it is more likely to cause issues such as plugins being loaded twice (and therefore is not the default). And as a nicety, the eclass takes care of finding out the correct values, again.
- EPYTEST_PLUGIN_AUTOLOAD to reenable autoloading, effectively making EPYTEST_PLUGINS responsible only for adding dependencies. It's really intended to be used as a last resort, and mostly for future EAPIs when autoloading will be disabled by default.
- Additionally, EPYTEST_PLUGINS can accept the name of the package itself (i.e. ${PN}) - in which case it will not add a dependency, but load the just-built plugin.
How useful is that? Compare:
BDEPEND="
	test? (
		dev-python/pytest-datadir[${PYTHON_USEDEP}]
	)
"

distutils_enable_tests pytest

python_test() {
	local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
	local -x PYTEST_PLUGINS=pytest_datadir.plugin,pytest_regressions.plugin
	epytest
}
…and:
EPYTEST_PLUGINS=( "${PN}" pytest-datadir )
EPYTEST_PLUGIN_LOAD_VIA_ENV=1
distutils_enable_tests pytest
Old and new bits: common plugins
The eclass already had some bits related to enabling common plugins. Given that EPYTEST_PLUGINS only takes care of loading plugins, but not passing specific arguments to them, they are still meaningful. Furthermore, we've added EPYTEST_RERUNS.
The current list is:
- EPYTEST_RERUNS=... that takes a number of reruns and uses pytest-rerunfailures to retry failing tests the specified number of times.
- EPYTEST_TIMEOUT=... that takes a number of seconds and uses pytest-timeout to force a timeout if a single test does not complete within the specified time.
- EPYTEST_XDIST=1 that enables parallel testing using pytest-xdist, if the user allows multiple test jobs. The number of test jobs can be controlled by the user by setting EPYTEST_JOBS, with a fallback to inferring it from MAKEOPTS (setting it to 1 disables the plugin entirely).
The variables automatically add the needed plugin, so they do not need to be repeated in EPYTEST_PLUGINS.
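For illustration, the job-count fallback just described could be sketched as follows; this is plain Python rather than the eclass's bash, and the function name and exact MAKEOPTS parsing are my assumptions:

```python
# Illustrative sketch of the described fallback: EPYTEST_JOBS wins if
# set; otherwise the last -j/--jobs value found in MAKEOPTS is used;
# with no hint at all, fall back to a single job (which disables xdist).
import re


def infer_test_jobs(makeopts: str = "", epytest_jobs: str = "") -> int:
    if epytest_jobs:
        return int(epytest_jobs)
    matches = re.findall(r"(?:-j|--jobs[=\s])\s*(\d+)", makeopts)
    return int(matches[-1]) if matches else 1
```

Note that a bare -j with no number (valid for make, meaning unlimited jobs) is deliberately not handled here; the real eclass logic may treat it differently.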
JUnit XML output and gpy-junit2deselect
As an extra treat, we ask pytest to generate JUnit-style XML output for each test run, which can be used for machine processing of test results. gpyutils now supplies a gpy-junit2deselect tool that parses this XML and outputs a handy EPYTEST_DESELECT for the failing tests:
$ gpy-junit2deselect /tmp/portage/dev-python/aiohttp-3.12.14/temp/pytest-xml/python3.13-QFr.xml
EPYTEST_DESELECT=(
	tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_nonzero_passed
	tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_passed_to_create_connection
	tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_zero_not_passed
)
While it doesn't replace due diligence, it can help you update long lists of deselects. As a bonus, it automatically collapses deselects to test functions, classes and files when all matching tests fail.
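The core of such a tool is small. Here is a hedged Python sketch of the idea (not gpy-junit2deselect's actual code; in particular, mapping a JUnit classname back to a file path is an assumption that holds for typical test layouts, and the collapsing of deselects is omitted):

```python
# Illustrative sketch: collect failing/erroring test cases from pytest's
# JUnit XML and emit them as EPYTEST_DESELECT entries.  The real tool
# additionally collapses deselects to classes/files when all tests fail.
import xml.etree.ElementTree as ET


def junit_deselects(xml_text: str) -> list[str]:
    deselects = []
    for case in ET.fromstring(xml_text).iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            path = case.get("classname", "").replace(".", "/") + ".py"
            deselects.append(f"{path}::{case.get('name')}")
    return deselects
```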
hypothesis-gentoo to deal with health check nightmare
Hypothesis is a popular Python fuzz testing library. Unfortunately, it has one feature that, while useful upstream, is pretty annoying to downstream testers: health checks.
The idea behind health checks is to make sure that fuzz testing remains efficient. For example, Hypothesis is going to fail if the routine used to generate examples is too slow. And as you can guess, "too slow" is more likely to happen on a busy Gentoo system than on dedicated upstream CI. Not to mention some upstreams plain ignore health check failures if they happen rarely.
Given how often this broke for us, we have requested an option to disable Hypothesis health checks long ago. Unfortunately, upstream's answer can be summarized as: "it's up to packages using Hypothesis to provide such an option, and you should not be running fuzz testing downstream anyway". Easy to say.
Well, obviously we are not going to pursue every single package using Hypothesis to add a profile with health checks disabled. We did report health check failures sometimes, and sometimes got no response at all. And skipping these tests is not really an option, given that often there are no other tests for a given function, and even if there are - it's just going to be a maintenance nightmare.
I've finally figured out that we can create a Hypothesis plugin - now hypothesis-gentoo - that provides a dedicated "gentoo" profile with all health checks disabled, and then we can simply use this profile in epytest. And how do we know that Hypothesis is used? Of course we look at EPYTEST_PLUGINS! All pieces fall into place. It's not 100% foolproof, but health check problems aren't that common either.
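Conceptually, what such a plugin registers presumably boils down to the following conftest-style fragment; this is a sketch based on Hypothesis's public settings API, not hypothesis-gentoo's actual code:

```python
# Sketch: register a "gentoo" settings profile that suppresses every
# Hypothesis health check, then select it for the test run.
from hypothesis import HealthCheck, settings

settings.register_profile("gentoo", suppress_health_check=list(HealthCheck))
settings.load_profile("gentoo")
```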
Summary
I have to say that I really like what we achieved here. Over the years, we learned a lot about pytest, and used that knowledge to improve testing in Gentoo. And after repeating the same patterns for years, we have finally replaced them with eclass functions that can largely work out of the box. This is a major step forward.
26 Jul 2025 1:29pm GMT
30 Apr 2025
Planet Gentoo
Urgent - OSU Open Source Lab needs your help
Oregon State University's Open Source Lab (OSL) has been a major supporter of Gentoo Linux and many other software projects for years. It is currently hosting several of our infrastructure servers as well as development machines for exotic architectures, and is critical for Gentoo operation.
Due to drops in sponsor contributions, OSL has been operating at a loss for a while, with the OSU College of Engineering picking up the rest of the bill. Now that university funding has been cut, this is no longer possible, and unless US$250,000 can be raised within the next two weeks, OSL will have to shut down. The details can be found in a blog post by Lance Albertson, the director of OSL.
Please, if you value and use Gentoo Linux or any of the other projects that OSL has been supporting, and if you are in a position to make funds available, if this is true for the company you work for, etc., contact the address in the blog post. Obviously, long-term corporate sponsorships would serve best here; for what it's worth, OSL developers have ended up at almost every big US tech corporation by now. Right now, though, probably everything helps.
30 Apr 2025 5:00am GMT
20 Feb 2025
Planet Gentoo
Bootable Gentoo QCOW2 disk images - ready for the cloud!
We are very happy to announce new official downloads on our website and our mirrors: Gentoo for amd64 (x86-64) and arm64 (aarch64), as immediately bootable disk images in qemu's QCOW2 format! The images, updated weekly, include an EFI boot partition and a fully functional Gentoo installation; either with no network activated but a password-less root login on the console ("no root pw"), or with network activated, all accounts initially locked, but cloud-init running on boot ("cloud-init"). Enjoy, and read on for more!
Questions and answers
How can I quickly test the images?
We recommend using the "no root password" images and qemu system emulation. Both the amd64 and arm64 images have all the necessary drivers ready for that. Boot them up, log in as "root", and you will immediately get a fully functional Gentoo shell. The set of installed packages is similar to that of an administration or rescue system, with a focus more on network environment and less on exotic hardware. Of course, you can emerge whatever you need, and binary package sources are already configured too.
What settings do I need for qemu?
You need qemu with the target architecture (aarch64 or x86_64) enabled in QEMU_SOFTMMU_TARGETS, and the UEFI firmware.
app-emulation/qemu
sys-firmware/edk2-bin
You should disable the USE flag "pin-upstream-blobs" on qemu and update edk2-bin to at least the 2024 version. Also, since you probably want to use KVM hardware acceleration for the virtualization, make sure that your kernel supports it and that your current user is in the kvm group.
For testing the amd64 (x86-64) images, a command line could look like this, configuring 8G RAM and 4 CPU threads with KVM acceleration:
qemu-system-x86_64 \
	-m 8G -smp 4 -cpu host -accel kvm -vga virtio -smbios type=0,uefi=on \
	-drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/OvmfX64/OVMF_CODE_4M.qcow2,format=qcow2 \
	-drive file=di-amd64-console.qcow2 &
For testing the arm64 (aarch64) images, a command line could look like this:
qemu-system-aarch64 \
	-machine virt -cpu neoverse-v1 -m 8G -smp 4 -device virtio-gpu-pci -device usb-ehci -device usb-kbd \
	-drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/ArmVirtQemu-AARCH64/QEMU_EFI.qcow2 \
	-drive file=di-arm64-console.qcow2 &
Please consult the qemu documentation for more details.
Can I install the images onto a real harddisk / SSD?
Sure. Gentoo can do anything. The limitations are:
- you need a disk with sector size 512 bytes (otherwise the partition table of the image file will not work), see the "SSZ" value in the following example:
pinacolada ~ # blockdev --report /dev/sdb
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0   4000787030016   /dev/sdb
- your machine must be able to boot via UEFI (no legacy boot)
- you may have to adapt the configuration yourself to disks, hardware, …
So, this is an expert workflow.
Assuming your disk is /dev/sdb and has a size of at least 20 GByte, you can then use the qemu-img utility to decompress the image onto the raw device. Warning: this obviously overwrites the first 20 GByte of /dev/sdb (and with that the existing boot sector and partition table):
qemu-img convert -O raw di-amd64-console.qcow2 /dev/sdb
Afterwards, you can and should extend the new root partition and grow the file system on it with xfs_growfs, create an additional swap partition behind it, possibly adapt /etc/fstab and the grub configuration, …
If you are familiar with partitioning and handling disk images you can for sure imagine more workflow variants; you might find also the qemu-nbd tool interesting.
So what are the cloud-init images good for?
Well, for the cloud. Or more precisely, for any environment where a configuration data source for cloud-init is available. If this is already provided for you, the image should work out of the box. If not, well, you can provide the configuration data manually, but be warned that this is a non-trivial task.
Are you planning to support further architectures?
Eventually yes, in particular (EFI) riscv64 and loongarch64.
Are you planning to support legacy boot?
No, since the placement of the bootloader outside the file system complicates things.
How about disks with 4096 byte sectors?
Well… let's see how much demand this feature finds. If enough people are interested, we should be able to generate an alternative image with a corresponding partition table.
Why XFS as file system?
It has some features that ext4 is sorely missing (reflinks and copy-on-write), but at the same time is rock-solid and reliable.
20 Feb 2025 6:00am GMT