23 Aug 2025
Steven Deobald: 2025-08-22 Foundation Update
## Bureaucracy
You may have noted that there wasn't a Foundation Update last week. This is because almost everything that has been happening at the Foundation lately falls under the banner of "bureaucracy" and/or "administration" and there isn't much to say about either of those topics publicly.
Also, last Friday was GNOME's 28th birthday and no one wants to hear about paper-shuffling on their birthday.
I'm sorry to say there isn't much to add this week, either. But I hope you all took a moment last week to reflect on the nearly three decades of work that have gone into GNOME.
Happy Belated Birthday, Computer.
23 Aug 2025 3:48am GMT
22 Aug 2025
This Week in GNOME: #213 Fixed Rules
Update on what happened across the GNOME project in the week from August 15 to August 22.
GNOME Core Apps and Libraries
Glycin ↗
Sandboxed and extendable image loading and editing.
Sophie (she/her) announces
Glycin 2.0.beta.3 has been released. Important changes include fixes for thumbnailers not working in certain configurations, dramatically improved loading speed for JPEG XL, fixed sandbox rules that broke image loading on some systems, and fixed editing for some JPEG images saved in progressive mode.
GNOME Circle Apps and Libraries
Déjà Dup Backups ↗
A simple backup tool.
Michael Terry announces
Déjà Dup 49.beta was released! It just fixes a few small bugs and uses the new libadwaita shortcuts dialog.
But if you haven't tried the 49.x branch yet, it has a big UI refactor and adds file-manager-based restore for Restic backups.
Read more details and install instructions in the previous 49.alpha announcement. Thanks for any testing you can do before this reaches the masses!
Third Party Projects
Mir Sobhan announces
We forked the TWIG website and forged it into a "good first issue" tracker. It crawls GNOME-related projects on GitHub and GNOME GitLab and shows issues labeled "good first issue" or "Newcomers." This can help newcomers, myself included, find places to contribute.
Website: https://ggfi.mirsobhan.ir
Repo: https://gitlab.gnome.org/misano/goodfirstissue
Džeremi says
Chronograph gets a BIG new 4.0 update!
What is Chronograph?
Chronograph is an app for syncing lyrics, making them display like karaoke in supported players. It comes with a beautiful GTK4 + LibAdwaita interface and includes a built-in metadata editor, so you can manage your music library along with syncing lyrics. Default LRC files can be published to the large lyrics database LRClib.net, which is widely used by many open-source players to fetch lyrics. Until now, Chronograph supported only line-by-line lyrics, which was enough for most cases since standard LRC is the most common format. But times change…
Word-by-Word support!
Starting August 24th, Chronograph will gain support for Word-by-Word syncing. This feature uses the eLRC format (also known as LRC A2 or Enchanted LRC). In eLRC, each word has its own timestamp, allowing players that support it to animate lyrics word-by-word, giving you a true karaoke experience. And this is just the beginning: future updates will also bring support for TTML (Timed Text Markup Language).
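For illustration, an eLRC line keeps the standard LRC line timestamp and adds a timestamp in angle brackets before each word (the lyrics and timings below are made up):
[00:12.00] <00:12.00> Sing <00:12.45> it <00:12.80> word <00:13.20> by <00:13.55> word
[00:15.10] <00:15.10> Every <00:15.60> syllable <00:16.30> in <00:16.55> time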
Final notes
I hope you'll enjoy using the latest version of Chronograph, and together we can spread awareness of eLRC to the wider community. Sync the lyrics of your favorite songs! ♥️
Nathan Perlman announces
Rewaita - Give Adwaita some flavour
Hi there, a few weeks ago I released Rewaita, a spiritual successor to Gradience. With it, you can recolour GTK4/Adwaita apps to popular colour schemes. That's where the name comes from ~ Re(colour Ad)waita.
As of v1.0.4, released this week, you can create your own custom colour palettes if the ones we provide don't suit you, and you can also change the window controls to be either coloured or macOS-styled.
You can find it on Flathub, but also in the AUR and in nixpkgs (the Nix package is still under review).
Rewaita is also under rapid development, so any help would be appreciated, or just leave us a star :). In particular, GTK3 and Cinnamon support are next on the roadmap.
Miscellaneous
JumpLink says
ts-for-gir - TypeScript bindings for GObject Introspection
This week we've released a major improvement for GObject interface implementation: Virtual Interface Generation.
Instead of having to implement all methods of a GObject interface, developers can now implement only the virtual methods (vfunc_*). This matches the actual GObject-Introspection pattern and makes interface implementation much cleaner.
Before (implement all methods):
class CustomPaintable implements Gdk.Paintable {
    // Implement all methods manually
    get_current_image(): Gdk.Paintable { ... }
    get_flags(): Gdk.PaintableFlags { ... }
    get_intrinsic_width(): number { ... }
    // ... and many more
}
After (only virtual methods):
class CustomPaintable implements Gdk.Paintable.Interface {
    // Declare for TypeScript compatibility
    declare get_current_image: Gdk.Paintable["get_current_image"];
    declare get_flags: Gdk.Paintable["get_flags"];

    // Only implement virtual methods
    vfunc_get_current_image(): Gdk.Paintable { ... }
    vfunc_get_flags(): Gdk.PaintableFlags { ... }
}
We've created a comprehensive example: https://github.com/gjsify/ts-for-gir/tree/main/examples/virtual-interface-test
This shows both Gio.ListModel and Gdk.Paintable implementations using the new pattern.
Release: v4.0.0-beta.35 and v4.0.0-beta.36
Note: Last week we also released v4.0.0-beta.34, which introduced Advanced Variant Types by default, completing the gi.ts integration with enhanced TypeScript support for GLib.Variant.deepUnpack() and better type inference for GObject patterns.
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
22 Aug 2025 12:00am GMT
21 Aug 2025
Sebastian Wick: Testing with Portals
At the Linux App Summit (LAS) in Albania three months ago, I gave a talk about testing in the xdg-desktop-portal project. There is a recording of the presentation, and the slides are available as well.
To give a quick summary of the work I did:
- Revamped the CI
- Reworked and improved the pytest based integration test harness
- Added integration tests for new portals
- Ported over all the existing GLib/C based integration tests
- Added ASAN support for detecting memory leaks in the tests
- Made tests pretend to be either a host, Flatpak or Snap app
My hope was that this would result in:
- Fewer regressions
- Tests for new features and bug fixes
- More confidence in refactoring
- More activity in the project
While it's hard to get definitive data on those points, at least some of it seems to have become reality. I have seen an increase in activity (there are other factors to this, for sure), and a lot of PRs already come with tests without me even having to ask for them. Canonical is involved again, taking care of the Snap side of things. So far it seems we haven't introduced any new regressions, but this usually only shows after a new release. The experience of refactoring portals also became a lot better because there is a baseline level of confidence when the tests pass, as well as the possibility to easily bisect issues. Overall I'm already quite happy with the results.
Two weeks ago, Georges merged the last piece of what I talked about in the LAS presentation, so we're finally testing the code paths that are specific to host, Flatpak and Snap applications! I also continued improving the tests: they can now be run with Valgrind, which is super slow (that's why we're not doing it in the CI) but tends to find memory leaks which ASAN does not. With the existing tests, it found 9 small memory leaks.
If you want to improve the Flatpak story, come and contribute to xdg-desktop-portal. It's now easier than ever!
21 Aug 2025 11:00pm GMT
20 Aug 2025
Michael Meeks: 2025-08-20 Wednesday
- Up extremely early, worked for a few hours, out for a run with J. painful left hip.
- Merger/finance call, sync with Dave, Gokay & Szymon, Lunch.
- Published the next strip: on Fixed Price projects
- Productivity All Hands meeting.
20 Aug 2025 1:22pm GMT
Peter Hutterer: Why is my device a touchpad and a mouse and a keyboard?
If you have spent any time around HID devices under Linux (for example if you are an avid mouse, touchpad or keyboard user) then you may have noticed that your single physical device actually shows up as multiple device nodes (for free! and nothing happens for free these days!). If you haven't noticed this, run libinput record and you may be part of the lucky roughly 50% who get free extra event nodes.
The pattern is always the same. Assuming you have a device named FooBar ExceptionalDog 2000 AI [1], what you will see are multiple devices:
/dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
/dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
/dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control
The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's HID implementation which splits out a device based on the Application Collection. [2]
A HID report descriptor may use collections to group things together. A "Physical Collection" indicates "these things are (on) the same physical thingy". A "Logical Collection" indicates "these things belong together". And you can of course nest these things near-indefinitely so e.g. a logical collection inside a physical collection is a common thing.
An "Application Collection" is a high-level abstractions to group something together so it can be detected by software. The "something" is defined by the HID usage for this collection. For example, you'll never guess what this device might be based on the hid-recorder output:
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x06,                    // Usage (Keyboard)                          2
# 0xa1, 0x01,                    // Collection (Application)                  4
...
# 0xc0,                          // End Collection                           74
Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.
The kernel, ever eager to help, takes top-level application collections (i.e. those not inside another collection) and applies a usage-specific suffix to the device. For the above Generic Desktop/Keyboard usage you get "Keyboard"; the other ones currently supported are "Keypad" and "Mouse", as well as the slightly more niche "System Control", "Consumer Control", "Wireless Radio Control" and "System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen", "Touchscreen" and "Touchpad". Any other Application Collection is currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses "Touch Strip" and other suffixes).
This suffix is necessary because the kernel also splits out the data sent within each collection as a separate evdev event node. Since HID is (mostly) hidden from userspace this makes it much easier for userspace to identify different devices, because you can look at an event node and say "well, it has buttons and x/y, so must be a mouse" (this is exactly what udev does when applying the various ID_INPUT properties, with varying levels of success).
The side effect of this however is that your device may show up as multiple devices and most of those extra devices will never send events. Sometimes that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate a mouse for backwards compatibility but once the kernel toggles it to touchpad mode the mouse feature is mute). Sometimes it's just laziness when vendors re-use the same firmware and leave unused bits in place.
It's largely a cosmetic problem only, e.g. libinput treats every event node as an individual device, and if there is a device that never sends events it won't affect the other event nodes. It can cause user confusion though ("why does my laptop say there's a mouse?") and in some cases it can cause functional degradation. The two I can immediately recall: udev detecting the mouse node of a touchpad as a pointing stick (because i2c mice aren't a thing), so the pointing stick configuration may show up in unexpected places; and fake mouse devices preventing features like "disable touchpad if a mouse is plugged in" from working correctly. At the moment we don't have a good solution for detecting these fake devices - short of shipping giant databases with product-specific entries we cannot easily detect which device is fake. After all, a Keyboard node on a gaming mouse may only send events if the user configured the firmware to send keyboard events, and the same is true for a Mouse node on a gaming keyboard.
So for now, the only solution to those is a per-user udev rule to ignore a device. If we ever figure out a better fix, expect to find a gloating blog post in this very space.
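For reference, such a rule might look roughly like the sketch below. The device name is the made-up example from above, and LIBINPUT_IGNORE_DEVICE is libinput's udev property for skipping a node; the match values for a real device should come from your own udevadm info output:
# /etc/udev/rules.d/99-ignore-fake-mouse.rules (sketch, hypothetical device name)
ACTION=="add|change", KERNEL=="event*", \
  ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", \
  ENV{LIBINPUT_IGNORE_DEVICE}="1"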
[1] input device naming is typically bonkers, so I'm just sticking with precedent here
[2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
[3] or sparkling wine, let's not be regionist here
20 Aug 2025 11:12am GMT
Thibault Martin: Cloud tech makes sense on-prem too
In the previous post, we talked about the importance of having a flexible homelab, and set it up with Proxmox. Long story short: I only have a single physical server, but I like to experiment with new setups regularly. Proxmox is a baremetal hypervisor: a piece of software that lets me spin up Virtual Machines on top of my server, to act as mini servers.
Thanks to this set-up I can have a long-lived VM for my (single node) production k3s cluster, and I can spin up disposable VMs to experiment with, without impacting my production.
But installing Proxmox, spinning up a VM, and installing k3s on it is more complex than just installing Debian and k3s on my baremetal server. We have already automated the Proxmox install process. Let's now automate the VM provisioning and the k3s deployment, to make it simple and easy to re-provision a fully functional Virtual Machine on top of Proxmox!
In this post we will configure opentofu so it can ask Proxmox to spin up a new VM, use cloud-init to do the basic pre-configuration of the VM, and use ansible to deploy k3s on it.
Provisioning and pre-configuring a VM
OpenTofu is software I execute on my laptop. It reads files describing what I want to provision, and performs the actual provisioning. I can use it to say "I want a VM with 4 vCPUs and 8GB of RAM on this Proxmox cluster," or "I want to add this A record to my DNS managed by Cloudflare." I need to write this down in .tf files, and invoke the tofu CLI to read those files and apply the changes.
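In practice the loop is short. These are the standard tofu subcommands, all of which appear later in this post:
$ tofu init    # download the providers declared in the .tf files
$ tofu plan    # preview what would change, without touching anything
$ tofu apply   # perform the changes, after an interactive confirmation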
Opentofu is quite flexible. It can connect to many different providers (e.g. Proxmox, AWS, Scaleway, Hetzner...) to spin up a variety of resources (e.g. a VM on Proxmox, an EC2 or EKS instance on AWS, etc.). Proxmox, Amazon and other providers publish provider plugins for opentofu, available in the OpenTofu Registry (and in the Terraform Registry, since opentofu is backward compatible for now).
Configuring Opentofu for Proxmox
To use Opentofu with Proxmox, you need to pick and configure an Opentofu Provider for Proxmox. There seem to be two active implementations:
- bpg/proxmox is maintained by an individual
- telmate/proxmox is maintained by the Telmate organization
The former seems to have better test coverage, and friends have used it for months without a problem. I am taking a leap of faith and picking it.
The plugin needs to be configured so opentofu on my laptop can talk to Proxmox and spin up new VMs. To do so, I need to create a Proxmox service account that opentofu will use, so opentofu has sufficient privileges to create the VMs I ask it to create.
I will rely on the pveum (Proxmox Virtual Environment User Management) utility to create a role with the right privileges, create a new user/service account, and assign the role to the service account.
Once ssh'd into the Proxmox host, I can create the terraform user that opentofu will use
# pveum user add terraform@pve
[!info] I don't have to add a password
I will issue an API Key for opentofu to authenticate as this user. Not having a password reduces the attack surface by ensuring nobody can use this service account to log into the web UI.
Then let's create the role
# pveum role add Terraform -privs "Datastore.Allocate \
Datastore.AllocateSpace \
Datastore.AllocateTemplate \
Datastore.Audit \
Pool.Allocate \
Sys.Audit \
Sys.Console \
Sys.Modify \
VM.Allocate \
VM.Audit \
VM.Clone \
VM.Config.CDROM \
VM.Config.Cloudinit \
VM.Config.CPU \
VM.Config.Disk \
VM.Config.HWType \
VM.Config.Memory \
VM.Config.Network \
VM.Config.Options \
VM.Console \
VM.Migrate \
VM.PowerMgmt \
SDN.Use"
And now let's assign the role to the user
# pveum aclmod / -user terraform@pve -role Terraform
Finally I can create an API token
# pveum user token add terraform@pve provider --privsep=0
┌──────────────┬────────────────────────┐
│ key          │ value                  │
╞══════════════╪════════════════════════╡
│ full-tokenid │ terraform@pve!provider │
├──────────────┼────────────────────────┤
│ info         │ {"privsep":"0"}        │
├──────────────┼────────────────────────┤
│ value        │ REDACTED               │
└──────────────┴────────────────────────┘
I now have a service account up and ready. Let's create a ~/Projects/infra/tofu folder that will contain my whole infrastructure's opentofu files. In that folder, I will create a providers.tf file to declare and configure the various providers I need. For now, this will only be Proxmox. I can configure my Proxmox provider so it knows where the API endpoint is, and what API key to use.
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "0.80.0"
}
}
}
provider "proxmox" {
endpoint = "https://192.168.1.220:8006/"
api_token = "terraform@pve!provider=REDACTED"
}
For some operations, including VM provisioning, the API is not enough and the Proxmox provider needs to ssh into the Proxmox host to issue commands. I can configure the Proxmox provider to use my ssh agent.
This way, when I call the tofu command on my laptop to provision VMs on the Proxmox host, the provider will use the ssh-agent of my laptop to authenticate against the Proxmox host. This will make opentofu use my ssh keypair to authenticate. Since my ssh key is already trusted by the Proxmox host, opentofu will be able to log in seamlessly.
provider "proxmox" {
endpoint = "https://192.168.1.220:8006/"
api_token = "terraform@pve!provider=REDACTED"
insecure = true
ssh {
agent = true
username = "root"
}
}
[!warning] Insecure but still somewhat secure
We add an insecure line to our configuration. It instructs opentofu to skip the TLS verification of the certificate presented by the Proxmox host. We do this because Proxmox generates a self-signed certificate our computer doesn't trust. We will understand what this means and fix it in a further blog post.
The main risk we're taking by doing so is letting another machine impersonate our Proxmox host. Since we're working on a homelab, in a home network, the chances of that happening are extraordinarily low, and it can be considered temporarily acceptable.
After moving to the tofu directory, running tofu init will install the provider
$ tofu init
Initializing the backend...
Initializing provider plugins...
- Finding bpg/proxmox versions matching "0.80.0"...
- Installing bpg/proxmox v0.80.0...
- Installed bpg/proxmox v0.80.0 (signed, key ID F0582AD6AE97C188)
And a tofu plan shouldn't return an error
$ tofu plan
No changes. Your infrastructure matches the configuration.
OpenTofu has compared your real infrastructure against your configuration and found no
differences, so no changes are needed.
Removing sensitive information
If you made it this far, you probably think I am completely reckless for storing credentials in plain text files, and you would be right. Credentials should never be stored in plain text. Fortunately, opentofu can grab sensitive credentials from environment variables.
I use Bitwarden to store my production credentials and pass them to opentofu when I step into my work directory. You can find all the details on how to do it in this previous blog post. Bear in mind that this works well for a homelab, but I wouldn't recommend it for a production setup.
We need to create a new credential in the Infra folder of our vault, and call it PROXMOX_VE_API_TOKEN. Its content is the following
terraform@pve!provider=yourApiKeyGoesHere
Then we need to sync the vault managed by the bitwarden CLI, to ensure it has the credential we just added.
$ bw sync
Syncing complete.
Let's update our ~/Projects/infra/.direnv to make it retrieve the PROXMOX_VE_API_TOKEN environment variable when we step into our work directory.
bitwarden_password_to_env Infra PROXMOX_VE_API_TOKEN
And let's make direnv allow it
$ direnv allow ~/Projects/infra/
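If you don't use Bitwarden and direnv, a plain export in the shell achieves the same thing, since PROXMOX_VE_API_TOKEN is the environment variable the bpg/proxmox provider reads:
$ export PROXMOX_VE_API_TOKEN='terraform@pve!provider=yourApiKeyGoesHere'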
We can now remove the credential from tofu/providers.tf
provider "proxmox" {
endpoint = "https://192.168.1.220:8006/"
api_token = "terraform@pve!provider=REDACTED"
insecure = true
ssh {
agent = true
username = "root"
}
}
Spinning up a new VM
Now that I have a working Proxmox provider for opentofu, it's time to spin up a first VM! I already use Debian for my Proxmox host: I'm familiar with it, it's very stable, and it has a reactive security team. I want to keep track of as few operating systems (OS) as possible, so whenever possible I will use Debian as the base OS for my VMs.
When I spin up a new VM, I can also pre-configure a few settings with cloud-init. Cloud-init defines standard files that my VM will read on first boot. Those files contain various instructions: I can use them to give a static IP to my VM, create a user, add it to the sudoers without a password, and add a ssh key to let me perform key-based authentication with ssh.
I need to use a "cloud image" of Debian for it to support cloud-init file. I can grab the link on Debian's official Download page. I could upload it manually to Proxmox, but we're here to make things tidy and reproducible! So let's create a tofu/cloud-images.tf
file where we will tell opentofu to ask the Proxmox node to download the file.
resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" {
content_type = "iso"
datastore_id = "local"
node_name = "proximighty"
url = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
file_name = "debian-13-generic-amd64.img"
}
[!info] No include needed!
Opentofu merges all the files in the root of a directory into a single file before processing it. There is no need to include/import our tofu/providers.tf file into tofu/cloud-images.tf!
Let's run a tofu plan to see what opentofu would do.
$ tofu plan
OpenTofu used the selected providers to generate the following execution plan. Resource actions
are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# proxmox_virtual_environment_download_file.debian_13_cloud_image will be created
+ resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" {
+ content_type = "iso"
+ datastore_id = "local"
+ file_name = "debian-13-generic-amd64.img"
+ id = (known after apply)
+ node_name = "proximighty"
+ overwrite = true
+ overwrite_unmanaged = false
+ size = (known after apply)
+ upload_timeout = 600
+ url = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
+ verify = true
}
Plan: 1 to add, 0 to change, 0 to destroy.
Everything looks alright, let's apply it to actually make the Proxmox host download the image!
$ tofu apply
OpenTofu used the selected providers to generate the following execution plan. Resource actions
are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# proxmox_virtual_environment_download_file.debian_13_cloud_image will be created
+ resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" {
+ content_type = "iso"
+ datastore_id = "local"
+ file_name = "debian-13-generic-amd64.img"
+ id = (known after apply)
+ node_name = "proximighty"
+ overwrite = true
+ overwrite_unmanaged = false
+ size = (known after apply)
+ upload_timeout = 600
+ url = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
+ verify = true
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
OpenTofu will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
proxmox_virtual_environment_download_file.debian_13_cloud_image: Creating...
proxmox_virtual_environment_download_file.debian_13_cloud_image: Creation complete after 8s [id=local:iso/debian-13-generic-amd64.img]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Looking at the Proxmox UI, I can see that the image has indeed been downloaded.
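The same check works from the command line, assuming the default layout where the local datastore keeps ISO images under /var/lib/vz/template/iso on the Proxmox host:
$ ls /var/lib/vz/template/iso/
debian-13-generic-amd64.img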
Excellent! Now we can describe the parameters of the virtual machine we want to create, in a tofu/k3s-main.tf file that contains a virtual_environment_vm resource like so
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
description = "Production k3s' main VM"
tags = ["production", "k3s", "debian"]
node_name = "proximighty"
}
This is the metadata of our VM, giving it a name and a Proxmox node to run on. But we need to be more specific. Let's give it 4 CPUs and 16 GB of RAM.
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
description = "Production k3s' main VM"
tags = ["production", "k3s", "debian"]
node_name = "proximighty"
cpu {
cores = 4
type = "x86-64-v4"
}
memory {
dedicated = 16384
floating = 16384 # set equal to dedicated to enable ballooning
}
}
To figure out the CPU type to use for your VM, issue the following command on the Proxmox host
$ /lib64/ld-linux-x86-64.so.2 --help
[...]
Subdirectories of glibc-hwcaps directories, in priority order:
x86-64-v4 (supported, searched)
x86-64-v3 (supported, searched)
x86-64-v2 (supported, searched)
[...]
We can then give it a 50 GB disk with the disk block.
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
[...]
disk {
datastore_id = "local"
interface = "virtio0"
iothread = true
size = 50
file_id = proxmox_virtual_environment_download_file.debian_13_cloud_image.id
}
}
I use the local datastore and not lvm-thin, despite running QEMU, because I don't want to allow over-provisioning. lvm-thin would allow me to allocate a disk of 500 GB to my VM even if I only have 100 GB available, because the VM will only fill the Proxmox drive with the actual content it uses. You can read more about storage on Proxmox's wiki.
I use a virtio device, since a colleague told me: "virtio uses a special communication channel that requires guest drivers, that are well supported out of the box on Linux. You can be way faster when the guest knows it's a VM and doesn't have to emulate something that was intended for actual real hardware. It's the same for your network interface and a bunch of other things. Usually if there is a virtio option you want to use that."
I set the file_id to the Debian cloud image we downloaded earlier.
I can then add a network interface that will use the vmbr0 bridge I created when setting up my Proxmox host. I also need an empty serial_device, or Debian crashes.
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
[...]
network_device {
bridge = "vmbr0"
}
serial_device {}
}
Now, instead of spinning up an un-configured VM and manually retrieving the parameters set during boot, we will use cloud-init to pre-configure it. We will do the following:
- Configure the network to get a static IP
- Configure the hostname
- Configure the timezone to UTC
- Add a user thib that doesn't have a password
- Add thib to sudoers, without a password
- Add the public ssh key from my laptop to the trusted keys of thib on the VM, so I can log in with a ssh key and never have to use a password
The documentation of the Proxmox provider teaches us that Proxmox has native support for cloud-init. This cloud-init configuration is done in the initialization block of the virtual_environment_vm resource.
We will first give it an IP
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
[...]
initialization {
datastore_id = "local"
ip_config {
ipv4 {
address = "192.168.1.221/24"
gateway = "192.168.1.254"
}
}
}
}
I'm only specifying the datastore_id because by default it uses local-lvm, which I have not configured on my Proxmox host.
To create a user and give it ssh keys I could use the user_account block inside initialization. Unfortunately it doesn't support adding the user to sudoers, nor installing extra packages. To circumvent that limitation I will have to create a user config data file and pass it to cloud-init.
Let's start by creating the user config data file resource within tofu/k3s-main.tf
resource "proxmox_virtual_environment_file" "user_data_cloud_config" {
content_type = "snippets"
datastore_id = "local"
node_name = "proximighty"
source_raw {
data = <<-EOF
#cloud-config
hostname: mightykube
timezone: UTC
users:
  - default
  - name: thib
    lock_passwd: true
    groups:
      - sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ${trimspace("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org")}
    sudo: ALL=(ALL) NOPASSWD:ALL
  - name: root
    lock_passwd: true
package_update: true
package_upgrade: true
packages:
  - htop
  - qemu-guest-agent
  - vim
runcmd:
  - systemctl enable qemu-guest-agent
  - systemctl start qemu-guest-agent
  - echo "done" > /tmp/cloud-config.done
EOF
file_name = "user-data-cloud-config.yaml"
}
}
It's a bit inelegant to keep a copy of my ssh key inside this file. Let's ask opentofu to read it from the actual file on my laptop instead, by creating a local_file data source for it in tofu/k3s-main.tf
data "local_file" "ssh_public_key" {
filename = "/Users/thibaultmartin/.ssh/id_ed25519.pub"
}
If I try to plan the change, I get the following error
$ tofu plan
β·
β Error: Inconsistent dependency lock file
β
β The following dependency selections recorded in the lock file are inconsistent with the
β current configuration:
β - provider registry.opentofu.org/hashicorp/local: required by this configuration but no version is selected
β
β To update the locked dependency selections to match a changed configuration, run:
β tofu init -upgrade
Like the error message says, I can fix it with tofu init -upgrade
$ tofu init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding opentofu/cloudflare versions matching "5.7.1"...
- Finding bpg/proxmox versions matching "0.80.0"...
- Finding latest version of hashicorp/local...
- Installing hashicorp/local v2.5.3...
- Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
- Using previously-installed opentofu/cloudflare v5.7.1
- Using previously-installed bpg/proxmox v0.80.0
Providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://opentofu.org/docs/cli/plugins/signing/
OpenTofu has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
OpenTofu has been successfully initialized!
[...]
I can now change the user_data_cloud_config resource to reference ssh_public_key
resource "proxmox_virtual_environment_file" "user_data_cloud_config" {
[...]
ssh_authorized_keys:
- ${trimspace("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org")}
- ${trimspace(data.local_file.ssh_public_key.content)}
[...]
}
Now let's update the initialization block of k3s-main to use that cloud-init file
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
[...]
initialization {
datastore_id = "local"
ip_config {
ipv4 {
address = "192.168.1.221/24"
gateway = "192.168.1.254"
}
}
user_data_file_id = proxmox_virtual_environment_file.user_data_cloud_config.id
}
}
We can check with tofu plan that everything is alright, and then actually apply the plan with tofu apply
$ tofu apply
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]
OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# proxmox_virtual_environment_vm.k3s-main will be created
+ resource "proxmox_virtual_environment_vm" "k3s-main" {
[...]
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
OpenTofu will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
proxmox_virtual_environment_vm.k3s-main: Creating...
proxmox_virtual_environment_vm.k3s-main: Creation complete after 5s [id=100]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Looking at Proxmox's console, I can see that the VM was created, it's healthy, and it even has the mightykube hostname I configured for it.
Now I can try to ssh into the newly created VM from my laptop
$ ssh thib@192.168.1.221
The authenticity of host '192.168.1.221 (192.168.1.221)' can't be established.
ED25519 key fingerprint is SHA256:39Qocnshj+JMyt4ABpD9ZIjDpOHhXqdet94QeSh+uDo.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.221' (ED25519) to the list of known hosts.
Linux mightykube 6.12.41+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.41-1 (2025-08-12) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
thib@mightykube:~$
Let's check that I can perform actions as root without being prompted for a password
thib@mightykube:~$ sudo apt update
Get:1 file:/etc/apt/mirrors/debian.list Mirrorlist [30 B]
Get:2 file:/etc/apt/mirrors/debian-security.list Mirrorlist [39 B]
Hit:3 https://deb.debian.org/debian trixie InRelease
Hit:4 https://deb.debian.org/debian trixie-updates InRelease
Hit:5 https://deb.debian.org/debian trixie-backports InRelease
Hit:6 https://deb.debian.org/debian-security trixie-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Brilliant! Just like that, I have a VM on Proxmox, with a static IP address, a well-known user, ssh key authentication and no password to manage at all!
[!warning] No password means no password!
When creating the VM, cloud-init creates a user but doesn't give it a password. It means we can only rely on SSH to control the VM. If we lose our SSH key or mess up the sshd config and can't ssh into the VM, we're (kind of) locked out!
We have access to the VM console via Proxmox, but without a password we can't log into it. It is possible to rescue it by booting a live system and chrooting into our actual system, but it can be tedious. We'll cover that in a future blog post.
We still have credentials in our .tf files, so we can't commit them yet. We will extract the credentials a bit later, but first let's refactor our files for clarity.
A single place to attribute IPs
Since all my VMs will get a static IP, I want to make sure I keep a tidy list of all the IPs already used. This will help avoid IP clashes. Let's create a new ips.tf file to keep track of everything
locals {
reserved_ips = {
proxmox_host = "192.168.1.220/24"
k3s_main = "192.168.1.221/24"
gateway = "192.168.1.254"
}
}
When spinning up the VM for the main k3s node, I will be able to refer to the local.reserved_ips.k3s_main local variable. So let's update the tofu/k3s-main.tf file accordingly!
resource "proxmox_virtual_environment_vm" "k3s-main" {
name = "k3s-main"
description = "Production k3s' main VM"
tags = ["production", "k3s", "debian"]
node_name = "proximighty"
[...]
initialization {
datastore_id = "local"
ip_config {
ipv4 {
address = "192.168.1.221/24"
gateway = "192.168.1.254"
address = local.reserved_ips.k3s_main
gateway = local.reserved_ips.gateway
}
}
user_data_file_id = proxmox_virtual_environment_file.user_data_cloud_config.id
}
}
We now have a single file to allocate IPs to virtual machines. We can see at a glance whether an IP is already used or not. That should save us some trouble! Let's now have a look at the precautions we need to take to save our files with git.
Keeping a safe copy of our state
What is the tofu state?
We used opentofu to describe what resources we wanted to create. Let's remove the resource "proxmox_virtual_environment_vm" "k3s-main" we have created, and run tofu plan to see how opentofu would react to that.
$ tofu plan
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_vm.k3s-main: Refreshing state... [id=100]
OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
- destroy
OpenTofu will perform the following actions:
# proxmox_virtual_environment_vm.k3s-main will be destroyed
# (because proxmox_virtual_environment_vm.k3s-main is not in configuration)
- resource "proxmox_virtual_environment_vm" "k3s-main" {
[...]
}
Plan: 0 to add, 0 to change, 1 to destroy.
If I remove a resource block, opentofu will try to delete the corresponding resource. But it might not be aware of other VMs I could have deployed. Hang on, that might be dangerous! If I already had 3 VMs running on Proxmox and started using opentofu after that, would it destroy them all, since I didn't describe them in my files?!
Fortunately for us, no. Opentofu needs to know what it is in charge of, and leave the rest alone. When I provision something via opentofu, it adds it to a local inventory of all the things it manages. That inventory is called a state file and looks like the following (prettified via jq)
{
"version": 4,
"terraform_version": "1.10.5",
"serial": 13,
"lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
"outputs": {},
"resources": [
{
"mode": "data",
"type": "local_file",
"name": "ssh_public_key",
"provider": "provider[\"registry.opentofu.org/hashicorp/local\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"content": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org\n",
"content_base64": "c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUlHQysrdmJNVHJTYlFGS0ZndGhqOW9MYVcxejVmQ2tRdGxQQ25HNmVPYkIgdGhpYkBlcmdhc3Rlci5vcmcK",
"content_base64sha256": "YjQvgHA99AXWCaKLep6phGgdlmkZHvXU3OOhRSsQvms=",
"content_base64sha512": "tRp4/iG90wX0R1SghdvXwND8Hg6ADNuMMdPXANUYDa2uIjkRkLRgK5YPK6ACz5cbW+SbqvGPGzYpWNNFLGIFpQ==",
"content_md5": "ed5ee6428ea7c048fe8019bb1a2206b3",
"content_sha1": "930cea05ae5e662573618e0d9f3e03920196cc5f",
"content_sha256": "62342f80703df405d609a28b7a9ea984681d9669191ef5d4dce3a1452b10be6b",
"content_sha512": "b51a78fe21bdd305f44754a085dbd7c0d0fc1e0e800cdb8c31d3d700d5180dadae22391190b4602b960f2ba002cf971b5be49baaf18f1b362958d3452c6205a5",
"filename": "/Users/thibaultmartin/.ssh/id_ed25519.pub",
"id": "930cea05ae5e662573618e0d9f3e03920196cc5f"
},
"sensitive_attributes": []
}
]
},
[...]
],
"check_results": null
}
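For reference, the pretty-printing above is just a plain jq invocation on the state file, assuming jq is installed:
$ jq . terraform.tfstate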
The tofu state is a local representation of what opentofu manages. It's absolutely mandatory for opentofu to work: this is how opentofu knows whether it needs to deploy, update, or tear down resources. So we need to keep it in a safe place, and my laptop is not a safe place at all: it can fail or get stolen. Since the state is a text-based file, I could use git to keep remote copies of it.
But as you can see, the tofu state file contains my public key, which it read from a local file. The state file contains a structured view of what is in the .tf files it manages. So far we have not added any sensitive credentials, but we might do so without realizing they will end up in the state, and thus on a git repo.
Fortunately, opentofu comes with tools that let us encrypt the state, so we can commit it to a remote git repository with more peace of mind.
Encrypting the tofu state
Before encrypting our state, read the important section in the opentofu documentation so you understand what it entails.
We need to migrate our unencrypted state to an encrypted one. Let's bear in mind that there's no way back if we screw up, so let's make a backup first (and delete it when we're done). Note that a properly encrypted state can be migrated back to a decrypted one; a botched encrypted state will likely be irrecoverable. Let's copy it to a different directory
$ cd ~/Projects/infra/tofu
$ mkdir ~/tfbackups
$ cp terraform.tfstate{,.backup} ~/tfbackups/
To encrypt our state, we need to choose an encryption method. As a single-admin homelabber, I'm going for the simplest and sturdiest method: I don't want to depend on extra infrastructure for secrets management, so I'm using PBKDF2, which roughly means "generating an encryption key from a long passphrase."
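PBKDF2 needs a long, random passphrase to be worth anything; one of many ways to generate one, assuming openssl is available:
$ openssl rand -base64 32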
With that in mind, let's follow the documentation to migrate a pre-existing project. Let's open our providers.tf file and add an encryption block within the terraform one.
terraform {
encryption {
method "unencrypted" "migrate" {}
key_provider "pbkdf2" "password_key" {
passphrase = "REDACTED"
}
method "aes_gcm" "password_based" {
keys = key_provider.pbkdf2.password_key
}
state {
method = method.aes_gcm.password_based
fallback {
method = method.unencrypted.migrate
}
}
}
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "0.80.0"
}
}
}
provider "proxmox" {
endpoint = "https://192.168.1.220:8006/"
ssh {
agent = true
username = "root"
}
}
This block instructs opentofu to encrypt the state with a key generated from our passphrase. It also tells it to expect a pre-existing unencrypted state, and that it's okay to read and encrypt it.
Note that I've used the encryption passphrase directly in that block. We will move it to a safer place later, but for now let's keep things simple.
Let's now apply this plan to see if our state gets encrypted correctly, but make sure you do have a cleartext backup first.
$ cd ~/Projects/infra/tofu
$ tofu apply
After the apply, we can have a look at the terraform.tfstate file to check that it has indeed been encrypted.
{
"serial": 13,
"lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
"meta": {
"key_provider.pbkdf2.password_key": "eyJzYWx0[...]"
},
"encrypted_data": "ONXZsJhz[...]",
"encryption_version": "v0"
}
I know that opentofu people probably know what they're doing, but I don't like that password_key field. It starts with eyJ, so that must be a base64-encoded JSON object. Let's decode it
$ echo "eyJzYWx0[...]" | base64 -d
{"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}
All good, it's just the salt, iterations, hash function and key length parameters. Those are pretty much public, so we can commit the file to our repo! But... what about the terraform.tfstate.backup file? Let's examine this one
{
"version": 4,
"terraform_version": "1.10.5",
"serial": 12,
"lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
"outputs": {},
"resources": [
{
"mode": "data",
"type": "local_file",
"name": "ssh_public_key",
"provider": "provider[\"registry.opentofu.org/hashicorp/local\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"content": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org\n",
[...]
}
]
},
[...]
],
"check_results": null
}
Oh dear! That one is not encrypted! I didn't find any utility for it since terraform can't do "rollbacks", and I couldn't find docs for it. I deleted the file and could still run tofu apply without a problem. The next iterations should be encrypted, but I will add it to my .gitignore just in case!
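A minimal tofu/.gitignore for this could look like the following; the .terraform directory, where providers get downloaded, doesn't belong in git either:
terraform.tfstate.backup
.terraform/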
We're not quite ready to commit our files though: we still have a secret in plain text, since the configuration gives away our encryption passphrase. Let's extract it into an environment variable so we don't leak it.
Removing sensitive information
We need to create a new credential in the Infra folder of our vault, and call it TF_ENCRYPTION. Its content is the following
key_provider "pbkdf2" "password_key" { passphrase = "yourPassphraseGoesHere" }
Then we need to sync the vault managed by the bitwarden CLI, to ensure it has the credential we just added.
$ bw sync
Syncing complete.
Let's update our ~/Projects/infra/.direnv to make it retrieve the TF_ENCRYPTION environment variable
bitwarden_password_to_env Infra PROXMOX_VE_API_TOKEN TF_ENCRYPTION
And let's make direnv allow it
$ direnv allow ~/Projects/infra/
Let's remove the block that provided our passphrase from the encryption block in providers.tf
terraform {
encryption {
method "unencrypted" "migrate" {}
method "aes_gcm" "password_based" {
keys = key_provider.pbkdf2.password_key
}
state {
method = method.aes_gcm.password_based
fallback {
method = method.unencrypted.migrate
}
}
}
[...]
}
And let's try a tofu plan to confirm that opentofu can read the passphrase from the environment variable
$ tofu plan
β·
β Warning: Unencrypted method configured
β
β on line 0:
β (source code not available)
β
β Method unencrypted is present in configuration. This is a security risk and
β should only be enabled during migrations.
β΅
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_vm.k3s-main: Refreshing state... [id=100]
No changes. Your infrastructure matches the configuration.
OpenTofu has compared your real infrastructure against your configuration and found
no differences, so no changes are needed.
Brilliant! We can now also remove the migrate method and the fallback block, so opentofu doesn't trust unencrypted content at all, which will prevent malicious actors from tampering with our file.
terraform {
encryption {
method "aes_gcm" "password_based" {
keys = key_provider.pbkdf2.password_key
}
state {
method = method.aes_gcm.password_based
}
}
[...]
}
Finally we can delete our cleartext backup
$ rm -Rf ~/tfbackups
Voilà, we have an encrypted state that we can push to a remote GitHub repository, and our state will be reasonably safe by today's standards!
Fully configuring and managing the VM
As we've seen when setting up the Proxmox host, ansible can be used to put a machine in a desired state. I can write a playbook that installs k3s and copies the kubeconfig file to my admin laptop.
Then there's the question of how to make opentofu (which provisions the VMs) and ansible (which deploys services on the VMs) talk to each other. In an ideal world, I would tell opentofu to provision the VM, and then run an ansible playbook on the hosts it has created.
There's an ansible opentofu provider that's supposed to play this role. I didn't find it intuitive to use, and most people around me told me they found it so cumbersome they didn't use it. There is a more flexible and sturdy solution: ansible dynamic inventories!
Creating a dynamic inventory for k3s VMs
Ansible supports creating inventories by calling plugins that will retrieve information from sources. The Proxmox inventory source plugin lets ansible query Proxmox and retrieve information about VMs, and automatically group them together.
Hang on. Are we really going to create a dynamic inventory for a single VM? I know we're over-engineering things for the sake of learning, but isn't it a bit too much? As always, it's important to consider what problem we're trying to solve. To me, we're solving two different problems:
- We make sure that there is a single canonical source of truth, and it is opentofu. The IP defined in opentofu is the one provisioned on Proxmox, and it's the one the dynamic inventory will use to perform operations on the VM. If the VM needs to change its IP, we only have to update it in opentofu, and ansible will follow along.
- We build a sane foundation for more complex setups. It will be easy to extend when deploying more VMs to run complex clusters, while not adding unnecessary complexity.
So let's start by making sure we have the Proxmox plugin installed. It is part of the community.general collection on ansible-galaxy, so let's install it
$ ansible-galaxy collection install community.general
Then in the ~/Projects/infra/ansible/inventory directory, we can create a proximighty.proxmox.yaml. The file has to end with .proxmox.yaml for the Proxmox plugin to work.
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
Let's break it down:
- plugin tells ansible to use the Proxmox inventory source plugin.
- url is the URL of the Proxmox cluster.
- user is the Proxmox user we authenticate as. Here I'm reusing the service account we created for opentofu.
- token_id is the ID of the token we issued for that user. I'm also reusing the token we created for opentofu.
- token_secret is the password for the API key. Here again I'm reusing the same value as for opentofu. I'm writing it in the plain text file for now; we will clean it up later.
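As an aside, assuming your community.general version supports the documented environment-variable fallbacks for this plugin, the secret can stay out of the file entirely: drop the token_secret line and export the variable instead.
$ export PROXMOX_TOKEN_SECRET='yourApiKeyGoesHere'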
Now we can try to pass that dynamic inventory configuration to ansible for it to build an inventory from Proxmox.
$ ansible-inventory -i proximighty.proxmox.yaml --list
[WARNING]: * Failed to parse
/Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml
with auto plugin: HTTPSConnectionPool(host='192.168.1.200', port=8006): Max retries
exceeded with url: /api2/json/nodes (Caused by SSLError(SSLCertVerificationError(1,
'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local
issuer certificate (_ssl.c:1028)')))
[WARNING]: * Failed to parse
/Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml
with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse
/Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml
with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not
allowed, this character is reserved to provide a port.
[WARNING]: Unable to parse
/Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml as
an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
And it fails! This is unfortunately not a surprise. We asked the plugin to look up Proxmox and gave it an https URL. But when Proxmox runs for the first time, it generates a self-signed certificate. It is a perfectly fine certificate we can use to handle https requests. The only problem is that our laptop doesn't trust the Proxmox host, which signed the certificate for itself.
The good news is that Proxmox can obtain certificates signed by authorities our laptop trusts! The bad news is that we need to understand what we're doing to set that up properly. Like earlier, when we configured the Proxmox provider for opentofu, let's ask the Proxmox plugin to use the certificate even if it doesn't trust the authority that signed it. Since we're in a homelab on a home network, the risk of accidentally reaching a host that impersonates our Proxmox host is still fairly low, so it's acceptable to temporarily take this risk here again.
Let's add the following line to our dynamic inventory configuration to ignore the certificate signature
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
validate_certs: false
And now, running the inventory command again
$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped",
"proxmox_all_lxc",
"proxmox_all_qemu",
"proxmox_all_running",
"proxmox_all_stopped",
"proxmox_nodes",
"proxmox_proximighty_lxc",
"proxmox_proximighty_qemu"
]
},
"proxmox_all_qemu": {
"hosts": [
"k3s-main"
]
},
"proxmox_all_running": {
"hosts": [
"k3s-main"
]
},
"proxmox_nodes": {
"hosts": [
"proximighty"
]
},
"proxmox_proximighty_qemu": {
"hosts": [
"k3s-main"
]
}
}
Great! We can see that our k3s-main VM appears! We didn't learn a lot about it though. Let's ask the Proxmox plugin to give us more information about the VMs with the want_facts parameter
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
want_facts: true
validate_certs: false
Let's run it again and see if we get more interesting results
$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
"_meta": {
"hostvars": {
"k3s-main": {
"proxmox_acpi": 1,
"proxmox_agent": {
"enabled": "0",
"fstrim_cloned_disks": "0",
"type": "virtio"
},
"proxmox_balloon": 16384,
"proxmox_bios": "seabios",
"proxmox_boot": {
"order": "virtio0;net0"
},
"proxmox_cicustom": {
"user": "local:snippets/user-data-cloud-config.yaml"
},
"proxmox_cores": 4,
"proxmox_cpu": {
"cputype": "x86-64-v4"
},
"proxmox_cpuunits": 1024,
"proxmox_description": "Production k3s' main VM",
"proxmox_digest": "b68508152b464627d06cba6505ed195aa3d34f59",
"proxmox_ide2": {
"disk_image": "local:100/vm-100-cloudinit.qcow2",
"media": "cdrom"
},
"proxmox_ipconfig0": {
"gw": "192.168.1.254",
"ip": "192.168.1.221/24"
},
"proxmox_keyboard": "en-us",
"proxmox_memory": "16384",
"proxmox_meta": {
"creation-qemu": "9.2.0",
"ctime": "1753547614"
},
"proxmox_name": "k3s-main",
"proxmox_net0": {
"bridge": "vmbr0",
"firewall": "0",
"virtio": "BC:24:11:A6:96:8B"
},
"proxmox_node": "proximighty",
"proxmox_numa": 0,
"proxmox_onboot": 1,
"proxmox_ostype": "other",
"proxmox_protection": 0,
"proxmox_qmpstatus": "running",
"proxmox_scsihw": {
"disk_image": "virtio-scsi-pci"
},
"proxmox_serial0": "socket",
"proxmox_smbios1": {
"uuid": "0d47f7c8-e0b4-4302-be03-64aa931a4c4e"
},
"proxmox_snapshots": [],
"proxmox_sockets": 1,
"proxmox_status": "running",
"proxmox_tablet": 1,
"proxmox_tags": "debian;k3s;production",
"proxmox_tags_parsed": [
"debian",
"k3s",
"production"
],
"proxmox_template": 0,
"proxmox_virtio0": {
"aio": "io_uring",
"backup": "1",
"cache": "none",
"discard": "ignore",
"disk_image": "local:100/vm-100-disk-0.qcow2",
"iothread": "1",
"replicate": "1",
"size": "500G"
},
"proxmox_vmgenid": "e00a2059-1310-4b0b-87f7-7818e7cdb9ae",
"proxmox_vmid": 100,
"proxmox_vmtype": "qemu"
}
}
},
"all": {
"children": [
"ungrouped",
"proxmox_all_lxc",
"proxmox_all_qemu",
"proxmox_all_running",
"proxmox_all_stopped",
"proxmox_nodes",
"proxmox_proximighty_lxc",
"proxmox_proximighty_qemu"
]
},
"proxmox_all_qemu": {
"hosts": [
"k3s-main"
]
},
"proxmox_all_running": {
"hosts": [
"k3s-main"
]
},
"proxmox_nodes": {
"hosts": [
"proximighty"
]
},
"proxmox_proximighty_qemu": {
"hosts": [
"k3s-main"
]
}
}
That's a tonne of information! Probably more than we need, and we still don't know how to connect to a specific host. Let's add some order to that. First, let's group all the VMs that have k3s in their tags under an ansible group called k3s
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
want_facts: true
groups:
  k3s: "'k3s' in (proxmox_tags_parsed|list)"
validate_certs: false
And now let's tell ansible how to figure out what IP to use for a host. Since we are the ones provisioning the VMs, we know for sure that we have configured them to use a static IP, on the single virtual network interface we gave them.
Let's use the compose parameter to populate an ansible_host variable that contains the IP of the VM.
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACT"
want_facts: true
groups:
k3s: "'k3s' in (proxmox_tags_parsed|list)"
compose:
ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ansible.utils.ipaddr('address')
validate_certs: false
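One practical note: the ansible.utils.ipaddr filter ships in the ansible.utils collection and needs the Python netaddr library (and the inventory plugin itself lives in community.general), so if the compose expression fails with an unknown-filter or missing-dependency error, installing these usually fixes it:
$ ansible-galaxy collection install community.general ansible.utils
$ pip install netaddr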
And finally let's test this again
$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
"_meta": {
"hostvars": {
"k3s-main": {
"ansible_host": "192.168.1.221",
"proxmox_acpi": 1,
"proxmox_agent": {
"enabled": "0",
"fstrim_cloned_disks": "0",
"type": "virtio"
},
[...]
}
}
},
"all": {
"children": [
"ungrouped",
"proxmox_all_lxc",
[...]
"k3s"
]
},
"k3s": {
"hosts": [
"k3s-main"
]
},
[...]
}
Brilliant! We now have a k3s group that contains our single k3s-main VM, and ansible was able to retrieve its IP successfully! Let's create a simple playbook that executes two commands on the VM: one that works and one that doesn't.
Let's create a ~/Projects/infra/ansible/k3s/test.yaml
---
- name: Execute commands on the k3s host
hosts: k3s
remote_user: thib
tasks:
- name: Echo on the remote server
ansible.builtin.command: echo "It worked"
changed_when: false
- name: Get k3s installed version
ansible.builtin.command: k3s --version
register: k3s_version_output
changed_when: false
ignore_errors: true
The only two notable things here are:
- hosts is the name of the group we created in the dynamic inventory
- remote_user is the user I pre-configured via cloud-init when spinning up the VM
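Before running the playbook, we can sanity-check connectivity to the new group with an ad-hoc ping from the ansible directory (standard Ansible module, same inventory):
$ ansible -i inventory/proximighty.proxmox.yaml k3s -m ansible.builtin.ping -u thib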
$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/test.yaml
PLAY [Execute commands on the k3s host] ********************************************
TASK [Gathering Facts] *************************************************************
ok: [k3s-main]
TASK [Echo on the remote server] ***************************************************
ok: [k3s-main]
TASK [Get k3s installed version] ***************************************************
fatal: [k3s-main]: FAILED! => {"changed": false, "cmd": "k3s --version", "msg": "[Errno 2] No such file or directory: b'k3s'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
PLAY RECAP *************************************************************************
k3s-main : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
It works! Now that we know how to build an inventory based on Proxmox tags and have made a simple ansible playbook use it, let's move forward and actually deploy k3s on our VM!
Deploying k3s on the k3s-main VM
The k3s maintainers have created a k3s-ansible playbook that can preconfigure a machine so it is ready to run k3s, and deploy a single- or multi-node cluster. It's a great playbook and it's important not to reinvent the wheel. But I also like to understand what I execute, and keep things minimal to limit the risk of breakage.
Let's take inspiration from this excellent playbook to build one tailored for our (very simple) needs: deploying k3s-server on a single node. When installing k3s server via the playbook on Debian, it executes 2 roles:
- prereq, which performs a series of checks to ensure k3s can be installed and run well
- k3s_server, which downloads, preconfigures and installs k3s
We know that the OS powering our virtual machine is always going to be a Debian cloud image. None of the checks in prereq are useful for a fresh vanilla Debian stable, so let's skip it entirely.
Let's have a closer look at what k3s_server does, and carry the important bits over to our playbook. We want to:
- Check whether k3s is already installed so we don't override an existing installation
- Download the install script
- Execute the install script to download k3s
- Create a systemd service for k3s to start automatically
- Enable the service
- Copy the kubeconfig file generated by k3s to our laptop, and merge it with our kubeconfig under the cluster name and context mightykube
To do things cleanly, we will create a k3s_server role in a k3s directory.
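For orientation, here is the directory layout we are building towards, inferred from the paths used throughout this post:
ansible/
  inventory/
    proximighty.proxmox.yaml
  k3s/
    test.yaml
    deploy.yaml
    roles/
      k3s_server/
        defaults/
          main.yaml
        tasks/
          main.yaml
        templates/
          k3s-single.service.j2
The role's tasks/main.yaml then contains: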
---
- name: Get k3s installed version
ansible.builtin.command: k3s --version
register: k3s_server_version_output
changed_when: false
ignore_errors: true
- name: Set k3s installed version
when: not ansible_check_mode and k3s_server_version_output.rc == 0
ansible.builtin.set_fact:
k3s_server_installed_version: "{{ k3s_server_version_output.stdout_lines[0].split(' ')[2] }}"
- name: Download and execute k3s installer if k3s is not already installed
when: not ansible_check_mode and (k3s_server_version_output.rc != 0 or k3s_server_installed_version is version(k3s_server_version, '<'))
block:
- name: Download K3s install script
ansible.builtin.get_url:
url: https://get.k3s.io/
timeout: 120
dest: /usr/local/bin/k3s-install.sh
owner: root
group: root
mode: "0755"
- name: Install K3s binary
ansible.builtin.command:
cmd: /usr/local/bin/k3s-install.sh
environment:
INSTALL_K3S_SKIP_START: "true"
INSTALL_K3S_VERSION: "{{ k3s_server_version }}"
changed_when: true
- name: Copy K3s service file [Single]
ansible.builtin.template:
src: "k3s-single.service.j2"
dest: "/etc/systemd/system/k3s.service"
owner: root
group: root
mode: "0644"
register: k3s_server_service_file_single
- name: Enable and check K3s service
ansible.builtin.systemd:
name: k3s
daemon_reload: true
state: started
enabled: true
- name: Check whether kubectl is installed on control node
ansible.builtin.command: 'kubectl'
register: k3s_server_kubectl_installed
ignore_errors: true
delegate_to: 127.0.0.1
become: false
changed_when: false
# Copy the k3s config to a second file to detect changes.
# If no changes are found, we can skip copying the kubeconfig to the control node.
- name: Copy k3s.yaml to second file
ansible.builtin.copy:
src: /etc/rancher/k3s/k3s.yaml
dest: /etc/rancher/k3s/k3s-copy.yaml
mode: "0600"
remote_src: true
register: k3s_server_k3s_yaml_file_copy
- name: Apply k3s kubeconfig to control node if the file has changed and the control node has kubectl installed
when:
- k3s_server_kubectl_installed.rc == 0
- k3s_server_k3s_yaml_file_copy.changed
block:
- name: Copy kubeconfig to control node
ansible.builtin.fetch:
src: /etc/rancher/k3s/k3s.yaml
dest: "~/.kube/config.new"
flat: true
- name: Change server address in kubeconfig on control node
ansible.builtin.shell: |
KUBECONFIG=~/.kube/config.new kubectl config set-cluster default --server=https://{{ hostvars[groups['k3s'][0]]['ansible_host'] }}:6443
delegate_to: 127.0.0.1
become: false
register: k3s_server_csa_result
changed_when:
- k3s_server_csa_result.rc == 0
- name: Setup kubeconfig context on control node - mightykube
ansible.builtin.replace:
path: "~/.kube/config.new"
regexp: 'default'
replace: 'mightykube'
delegate_to: 127.0.0.1
become: false
- name: Merge with any existing kubeconfig on control node
ansible.builtin.shell: |
TFILE=$(mktemp)
KUBECONFIG=~/.kube/config.new:~/.kube/config kubectl config set-context mightykube --user=mightykube --cluster=mightykube
KUBECONFIG=~/.kube/config.new:~/.kube/config kubectl config view --flatten > ${TFILE}
mv ${TFILE} ~/.kube/config
delegate_to: 127.0.0.1
become: false
register: k3s_server_mv_result
changed_when:
- k3s_server_mv_result.rc == 0
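Once this role has run, the merged kubeconfig on the laptop exposes the new mightykube context, so switching to the cluster is a standard kubectl operation:
$ kubectl config use-context mightykube
$ kubectl config get-contexts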
Let's also create a template file for the systemd service under ~/Projects/infra/ansible/k3s/roles/k3s_server/templates
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server
Let's create a ~/Projects/infra/ansible/k3s/roles/k3s_server/defaults/main.yaml to set the version of k3s we want to install, which we might change in the future when doing upgrades.
k3s_server_version: "v1.33.3+k3s1"
Finally, let's create a ~/Projects/infra/ansible/k3s/deploy.yaml that calls the role we just created on the k3s servers group.
---
- name: Install k3s
hosts: k3s
remote_user: thib
tasks:
- name: Install k3s server
ansible.builtin.import_role:
name: k3s_server
We can now use everything together by calling the playbook we created (and the role it calls) with the dynamic inventory generated by the Proxmox plugin. Let's try!
$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/deploy.yaml
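Since the role wraps its command tasks in ansible_check_mode guards, a dry run is also possible before applying anything; and because k3s_server_version lives in the role defaults, a one-off override with -e works for testing upgrades:
$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/deploy.yaml --check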
Using kubectl on my laptop, I can confirm that my single node cluster is ready
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
mightykube Ready control-plane,master 4m v1.33.3+k3s1
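If something looks off, the systemd unit we templated can be inspected directly on the VM, using the user and IP from our inventory (reading the journal may require appropriate privileges on the VM):
$ ssh thib@192.168.1.221 systemctl status k3s
$ ssh thib@192.168.1.221 'journalctl -u k3s --no-pager | tail'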
Great! We have used ansible to automate the installation of a single-node k3s cluster, and learned how to control it from our laptop. Thanks to our dynamic inventory, ansible also figured out automatically which VM to install it onto. Look, Mom! We have a Kubernetes at home!
It's time to clean things up and remove sensitive credentials from our ansible scripts.
Removing sensitive information
When writing our ansible playbook, we didn't add new credentials. When setting up opentofu we created an API Key, and stored it in our Bitwarden vault under the name PROXMOX_VE_API_TOKEN. When configuring the dynamic inventory, we reused that same API key but wrote it in plain text in the inventory file.
There is a minor difference though. Opentofu uses the API Key formatted as
terraform@pve!provider=REDACTED
Ansible on the other hand uses the API Key formatted as
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
The information is the same, but formatted differently. Fortunately for us, ansible supports searching strings with regular expressions. The regex to break it down into the three parts we need is rather simple:
([^!]+)!([^=]+)=(.+)
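Mapping the capture groups back onto the opentofu format makes the split explicit:
# terraform@pve!provider=REDACTED
# ([^!]+) -> user: terraform@pve (everything before the '!')
# ([^=]+) -> token_id: provider (between the '!' and the '=')
# (.+)    -> token_secret: REDACTED (everything after the '=')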
Ansible also has a lookup method to read environment variables. Let's put all the pieces together in our dynamic inventory file
plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\1'))[0] }}"
token_id: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\2'))[0] }}"
token_secret: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\3'))[0] }}"
want_facts: true
groups:
k3s: "'k3s' in (proxmox_tags_parsed|list)"
compose:
ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ansible.utils.ipaddr('address')
validate_certs: false
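To confirm nothing broke, export the variable (direnv normally does this for us) and render the inventory one more time:
$ export PROXMOX_VE_API_TOKEN='terraform@pve!provider=REDACTED'
$ ansible-inventory -i proximighty.proxmox.yaml --list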
Voilà! Just like that we have removed the secrets from our files, and we're ready to commit them!
[!info] Why not use the Bitwarden plugin for ansible?
It's a good alternative, but I already rely on direnv to extract the relevant secrets from my vault and store them temporarily in environment variables.
Using the Bitwarden plugin in my playbook would tightly couple the playbook to Bitwarden. By relying on the environment variables, only direnv is coupled to Bitwarden!
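For reference, here is a minimal sketch of what such a direnv setup could look like; the Bitwarden CLI invocation and the item name PROXMOX_VE_API_TOKEN are assumptions to adapt to your own vault:
# .envrc (hypothetical example)
# Pull the Proxmox API token from Bitwarden into the environment;
# requires an unlocked vault and 'direnv allow'.
export PROXMOX_VE_API_TOKEN="$(bw get password PROXMOX_VE_API_TOKEN)"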
Now we can spin up a VM and install k3s on it in a handful of seconds! Our homelab is making steady progress. Next up we will see how to get services running on our cluster with GitOps!
A huge thanks to my friends and colleagues Davide, Quentin, Ark, Half-Shot, and Ben!
20 Aug 2025 10:00am GMT
Cassidy James Blaede: Here's to What's Next
For the past three and a half years, I've done the most rewarding work of my life with Endless Access (formerly Endless OS Foundation). I've helped ship Endless OS on computers to tens of thousands of people in need around the globe-people who might not otherwise have access to a real computer and especially reliable access to all of the offline knowledge and tools that come out of the box. I've visited students and teachers in the rural US who are struggling due to a lack of funding, and watched as their eyes lit up when they got ahold of Endless Key, an app packed full of safe and fun offline materials to support their self-guided learning. And I've helped learners and educators both unlock so much creativity and skill-building with our game-making learning programs and related open source projects.
Unfortunately, all good things must come to an end: due to strategic decisions at the organization, my particular role won't exist in the coming months. That said, I have some time to look for my next challenge and adventure-and I am excited to do so purposefully.
What is next?
At the end of the day, my family has to eat and we have to pay for a roof over our heads; otherwise, retiring into the wilderness to raise my kids, garden, and play with LEGO would sound pretty nice right about now! Given that I need a decent income to support us, I'm currently looking for a role with some combination of the following:
- open source
- connecting people
- empowering people to build things
- tech for good
These are both broad and vague because I don't have a perfect vision of what I want; my ideal job would hit all four of those, but I'm also a realist and know that I might not be able to support my family with some utopian gig working on something like GNOME with my specific skills and experience. (That said, if you'd like to hire me for that, I'm all ears!) I would love to continue working at a nonprofit, but I'm open to a company that is strongly aligned with my values. I'd prefer to work on a core product that is open source, but I could see myself working for an org that at least substantially supports open source. I don't think I want to be a community manager-unless I can be convinced the org knows precisely what they want and that lines up with my approach-but I do have a passion for connecting with people and helping them.
If you know of something that sounds like it would be a good fit, please reach out via email, Signal, Matrix, or Mastodon. You can also check out my rΓ©sumΓ© or connect on LinkedIn.
And don't worry: no matter what I end up taking on next, I plan to keep my volunteer position on the GNOME Foundation Board of Directors and stick around the GNOME and Flathub communities to help as much as I am able.
20 Aug 2025 12:00am GMT
19 Aug 2025
Planet GNOME
Michael Meeks: 2025-08-19 Tuesday
- Up early, breakfast, bid a fond farewell to the American Meeks' - into planning call, lunch, sync with Laser.
- Monthly Management meeting, customer sync, admin catch-up, sync with Lily, dinner.
19 Aug 2025 9:00pm GMT
Sam Thursfield: Status update, 19/08/2025
Hello! I'm working on an interesting project this month, related to open source Linux operating systems. Actually I'm not working on it this week, I'm instead battling with vinyl floor tiles. Don't let anyone tell you they are easy to fit. But I think on the 5th attempt I've got the technique. The wooden mallet is essential.

When I started these "status update" posts, back in 2021, I imagined they'd be to talk about open technology projects I was working on, mixed with a bit of music and art and so on. In fact I write more about politics these days. Let me explain why.
In my book review earlier this year I mentioned economics dude Gary Stevenson. I still didn't read his book but I do watch all his videos now and I'm learning a lot.
I learned a bit about the housing crisis, for example. The housing crisis in Manchester had several major effects on my life. I just read today in The Manchester Mill that the average rent in Salford jumped from Β£640/mo to Β£1,121/mo in a decade.
(Lucky for me, I got into computers early in life, and nobody understands their secrets so they still have to pay a good salary to those of us who do. So far I've weathered the crisis. Many of my friends don't have that luck, and some of them have been struggling for 15 years already. Some even had to become project managers.)
Until about 2020, I assumed the Manchester housing crisis was caused by people moving up from London. Galicia had some of the lowest rent I'd ever seen, when I first moved here, and it's only around 2021, when rents started suddenly doubling just as I'd seen happen in Manchester, that I realised the same crisis was about to play out here as well, and perhaps it wasn't entirely the fault of Gen-X Londoners. I thought, maybe it's a Europe-wide housing crisis?
Let me summarize the video Gary Stevenson did about the housing crisis (this one), to save you 28 minutes. It's not just houses but all types of asset which are rapidly going up in price, and it's happening worldwide. We notice the houses because we need them to live normal lives, unlike other types of asset such as gold or government bonds which most of us can live without.
The most recent video is titled like this: "Is the economy causing a mental health crisis?". I've embedded it below. (It's hosted on a platform controlled by Google, but Gary is good enough to turn off the worst of the YouTube ads, those bastards that pop up during a video in mid-sentence or while you're figuring out a yoga pose.)
My answer to that question, when I saw it, was "Yes, obviously". For example, if rent increases by 75% in your city and you're forced back into living with your parents age 35, it's tough to deal with alright. What do you think?
But the video is about more than that. The reason asset prices are through the roof is because the super rich are taking all the assets. The 1% have more money than ever. Wealth inequality is rapidly rising, and nothing is stopping it. For thousands of years, the aristocracy owned all the land and castles and manor houses, and the rest of us had a few cabbages and, perhaps if you were middle class, a pig.
The second half of the 20th century levelled the playing field and put in place systems which evened things out and meant your grandparents maybe could buy a house. The people in charge of those systems have given up, or have been overpowered by the super rich.
In fact, the video "Is the economy causing a mental health crisis?" is about the effect on your mental health when you realize that all of society as you know it is headed towards complete collapse.
(Lucky for me, I grew up thinking society was headed for collapse due to the climate crisis, so I listened to a lot of punk rock and over-developed my capacity for nihilism. Maybe my mind looks for crises everywhere? Or maybe I was born in a time well-supplied with global crises. I share a birthday with the Chernobyl disaster.)
So how does all this relate back to open technology?
Maybe it doesn't. I went to the GNOME conference last month and had very little overtly "political" conversations. We chatted about GNOME OS, live-streaming, openQA, the GNOME Foundation, the history of the GNOME project, accessibility at conferences, our jobs, and so on. Which was great, I for some reason find all that stuff mega interesting. (Hence why I went there instead of a conference about 21st century world politics).
Or maybe it does. Tech is part of everyone's lives now. Some of the most powerful organizations in the world now are tech companies and they get their power from being omnipresent. Software engineers built all of this. What were we thinking?
I think we just enjoyed getting paid to work on fun problems. I suppose none of today's tech billionaires seemed like particularly evil-minded people in the mid 2000s. Spotify used to talk about reducing MP3 piracy, not gutting the income streams of 95% of professional recording artists. Google used to have a now laughable policy of "Don't be evil".
There is one exception who were clearly evil bastards in the 2000s as well. The US anti-trust case against Microsoft, settled in 2001, is an evidence trail of lies and anti-competitive behaviour under Bill Gates' leadership. Perhaps in an attempt to one-up his predecessor, the Satya Nadella Microsoft is now helping the far-right government of Israel to commit war crimes every day. No Azure for Apartheid. At least they are consistent, I suppose.
In fact, I first got interested in Linux due to Microsoft. Initially for selfish reasons. I was a child with a dialup internet connection, and I just wanted to have 5 browser windows open without the OS crashing. (For younger readers - browser tabs weren't invented until the 21st century).
Something has kept me interested in open operating systems, even in this modern era when you can download an MP3 in 5 seconds instead of 5 consecutive evenings. It's partly the community of fun people around Linux. It's partly that it led me to the job that has seen me through the housing crisis so far. And its partly the sense that we are the alternative to Big Tech.
Open source isn't going to "take over the world". That's what Microsoft, Spotify and Google were always going to do (and have now done). I'm honestly not sure where open source is going. Linux will go wherever hardware manufacturers force it to go, as it always has done.
GNOME may or may not make it mainstream one day. I'm all for it, if it means some funding for localsearch maintainers. If it doesn't, that's also fine, and we don't need to be part of some coherent plan to save the world or to achieve a particular political aim. Nothing goes according to plan anyway. It's fine to work on stuff just cus it's interesting.
What we are doing is leading by example, showing that it's possible to develop high quality software independently of any single corporation. You can create institutions where contributors do what we think is right, instead of doing what lunatics like Sam Altman or Mark Zockerborg think.
At the same time, everything is political.
What would happen if I travelled back to 2008 and asked the PHP developers building Facebook: "Do you think this thing could play a determining role in a genocide in Myanmar?"
I met someone this weekend who had just quit Spotify. She isn't a tech person. She didn't even know Bandcamp exists. Just didn't want to give more money to a company that's clearly evil. This is the future of tech, if there is any. People who pay attention to the world, who are willing to do things the hard way and stop giving money to people who are clearly evil.
19 Aug 2025 1:05pm GMT
18 Aug 2025
Planet GNOME
Christian Hergert: Status Week 33
This week is still largely focused on finalizing API/ABI for the 1.0 of Foundry in a few weeks.
Foundry
- Did a bunch of work on LLM completion and conversation APIs. They are not focused on supporting everything possible but instead making some of the common stuff extremely simple. That goes for both the model side of things and the UI side of things. For example, heavy usage of GListModel everywhere we can.
- Created new abstractions for LlmTool, LlmToolProviders, and the actual call of a tool (aptly, LlmToolCall). One reason this all takes so much time to scaffold is that you want to allow some amount of flexibility when connecting models, but also avoid too much API surface area. I think I've managed to do that here.
- Landed Ollama implementation of the FoundryLlmConversation API. The ollama server appears to be stateless, which means copying the conversation over-and-over as you go. I guess this at least gives you an idea of your context window.
- Setup a couple tool call implementations to test out that infrastructure. For example, it's really easy to tell the model that you build with the build tool and then provide it the results.
- Fixed some licensing issues where I mostly just forgot to update the headers when copying them over. Things should be in a good place now for distributions to adhere to their SPDX rules.
- Language settings now have a very last resort setting which are the "defaults" we ship with the library. That is just sensible stuff like using 4 spaces for tabs/indent in Python. Settings at any layer can override these values.
- Lots of work on project templates. We have both GTK 4 and Adwaita templates again. They support C/Python/Rust/JavaScript like Builder does too. But this time I tried to go a bit further. They should have a bunch of integration bits setup which we didn't get to before.
- Setup an example Flatpak manifest for applications wanting to use libfoundry (see examples/flatpak/) that should help get you started.
- Setup i18n/l10n for libfoundry. I don't think anything is consuming translations for GNOME 49 though, so mostly just gets us up and running for 50.
- Landed some new API for working with the stage/index within FoundryGitVcs. Tested it with a speed-run challenge a bit later on in this report.
Assist
- To test out the LLM APIs and ensure they can actually be used I did a speed-run to implement a "Foundry-based Developer Chat" with a time limit of two hours. The reality is that I'm still _much_ faster writing code with all of my templates and snippets than I thought. The new templates in Foundry are killer though.
- It requires a model which supports tool calls if you want to do anything interesting with it. I'm not sure if there are models which can do both written output _and_ tool-calls, which makes this a bit annoying to wait while it figures out it should call a tool.
- While doing this, I realized a bunch of little things to fix in the LLM APIs. One piece still missing that I'd want to have in the future is the ability for specialized FoundryLlmMessage which not only have text content but typed data as well. For example, a tool call that is essentially an ls should really display the output as an interactive directory list and not text. But since this was a speed run, I did not implement that. Only made sure that the APIs could adapt to it in the future.
Staged
- Started another speed-run app to test out the version control engine we have in Foundry. This one is basically just to replace my very quick use of git-gui to line-stage patches.
- Came up with a neat way to highlight old/new versions of a file and then display them with GtkListView instead of using a source view. No reason to power up the editing infrastructure if you'll never be editing.
Manuals
- Discovered I wasn't getting notifications since the move to the GNOME/ namespace, so flushed out the backlog of MRs there.
GtkSourceView
- Fix click-through on the overview map which broke again during this development cycle. My fault for not reviewing and/or testing better.
- Now that we have GNOME CI doing LSAN/ASAN/UBSAN/coverage/scanbuild I went ahead and fixed a bunch of leaks that are part of the testsuite. Additionally, it helped me find a few that were there in everyday code use, so that is always a lovely thing to fix.
Ptyxis
- Merge some last minute string changes before we can't anymore.
- Still having to unfortunately close issues which come from Debian not sourcing /etc/profile.d/vte.sh by default, thus breaking integration features. The good news I hear is that will be changing before long.
- Other good news is that Ptyxis has landed in the Ubuntu 25.10 builds and will also be landing in Debian unstable in the near future as the default terminal.
- After some back-and-forth I merged support for the kgx palette as the "GNOME" palette in Ptyxis. My very hopeful desire is that this becomes something maintained by the design team. The problem is just that terminal colors are a huge pile of hacks on hacks.
- Nightly builds should be fixed. Apparently something changed in the CI setup and since we're under chergert/ptyxis/ and not GNOME/ it didn't get automatically applied.
- Some styling changed in libadwaita this cycle and I needed to adapt how we propagate our styling to tab close buttons. Really though, this all just needs to be redone (like Text Editor and Builder) to use var() properly in CSS.
Libspelling
- Merged patch improving life-cycle tracking of the piecetable/b+tree regions (branches/leaves).
Sysprof
- More code review and future feature planning so we can land GSoC things after I branch for 49 (very soon I hope).
Other
- Turned 41, saw Stevie Ray Vaughan's Broadcaster guitar, finally had the "weird" pizza at Lovely's fifty/fifty, jammed at MoPOP with my niece.
- Lots of random little things this week to lend a hand/ear here or there as we get closer to release.
18 Aug 2025 11:59pm GMT
15 Aug 2025
Planet GNOME
Sebastian Wick: Display Next Hackfest 2025
A few weeks ago, a bunch of display driver and compositor developers met once again for the third iteration of the Display Next Hackfest. The tradition was started by Red Hat, followed by Igalia (thanks Melissa), and now AMD (thanks Harry). We met in the AMD offices in Markham, Ontario, Canada; and online, to discuss issues, present things we worked on, figure out future steps on a bunch of topics related to displays, GPUs, and compositors.

The Display Next Hackfest in the AMD Markham offices
It was really nice meeting everyone again, and also seeing some new faces! Notably, Charles Poynton who "decided that HD should have 1080 image rows, and square pixels", and Keith Lee who works for AMD and designed their color pipeline, joined us this year. This turned out to be invaluable. It was also great to see AMD not only organizing the event, but also showing genuine interest and support for what we are trying to achieve.
This year's edition is likely going to be the last dedicated Display Next Hackfest, but we're already plotting to somehow fuse it with XDC next year.
If you're looking for a more detailed technical rundown of what we were doing there, you can read Xaver's, or Louis' blog posts, or our notes.
With all that being said, here is an incomplete list of things I found exciting:
- The biggest update of the Atomic KMS API (used to control displays) is about to get merged. The Color Pipeline API is something I came up with three years ago, and thanks to the tireless efforts of AMD, Intel, Igalia, and others, this is about to become reality. Read Melissa's blog post for more details.
- As part of the work enabling displaying HDR content in Wayland compositors, we've been unhappy with the current HDR modes in displays, as they are essentially created for video playback and have lots of unpredictable behavior. To address this, Xaver and I have since last year been lobbying for displays to allow the use of Source Based Tone Mapping (SBTM), and this year, it seems that what we have asked for has made it to the right people. Let's see!
- In a similar vein, on mobile devices we want to dynamically increase or decrease the HDR headroom, depending on what content applications want to show. This requires backlight changes to be somewhat atomic and having a mapping to luminance. The planned KMS backlight API will allow us to expose this, if the platform supports it. I worked a lot on backlight support in mutter this year so we can immediately start using this when it becomes available.
- Charles, Christopher, and I had a discussion about compositing HDR and SDR content, and specifically about how to adjust content that was mastered for a dark viewing environment that is being shown in a bright viewing environment, so that the perception is maintained. I believe that we now have a complete picture of how compositing should work, and I'm working on documenting this in the color-and-hdr repo.
- For Variable Refresh Rates (VRR) we want a new KMS API to set the minimum and maximum refresh cycle, where setting min=max gives us a fixed refresh rate without a mode set. To make use of VRR in more than the single-fullscreen-window case, we also agreed that a Wayland protocol letting clients communicate their preferred refresh rate would be a good idea.
Like always, lots of work ahead of us, but it's great to actually see the progress this year with the entire ecosystem having HDR support now.

Sampling local craft beers
See you all at XDC this year (or at least the one next year)!
15 Aug 2025 5:41pm GMT
This Week in GNOME: #212 Happy Birthday!
Update on what happened across the GNOME project in the week from August 08 to August 15.
Cassidy says
On August 15, 1997, Miguel de Icaza announced the start of GNOME on the GTK mailing list. Twenty-eight years later a lot has changed, but we continue to develop and iterate on "a free and complete set of user friendly applications and desktop tools… based entirely on free software."
To help us continue this work far into the future, we hope you join us in celebrating our birthday by becoming a Friend of GNOME today!
GNOME Core Apps and Libraries
FineFindus announces
We have now merged the next part of the Rust port of GNOME Disks, which ports the disk image restore dialog (or the more common use case: flashing ISO disk images to USB drives) to Rust. This also enables the new Disk Image Mounter to write disk images to drives when clicking on a disk image file without opening GNOME Disks.
Libadwaita β
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) reports
A week ago GTK landed CSS media queries support. As of today, libadwaita supports it too, both in its own styles and in app-provided styles. So, apps can now write CSS like this:
:root {
  --my-custom-color: black;
}

my-widget {
  color: var(--my-custom-color);
}

@media (prefers-color-scheme: dark) {
  :root {
    --my-custom-color: white;
  }
}

@media (prefers-contrast: more) {
  my-widget {
    box-shadow: inset 0 0 0 1px var(--border-color);
  }
}
style-dark.css, style-hc.css and style-hc-dark.css are still supported for this cycle, but they will be deprecated early next cycle and removed in libadwaita 2.0, so apps are encouraged to switch to media queries.
Maps β
Maps gives you quick access to maps all across the world.
mlundblad reports
Maps now shows highway shields in place popovers when clicking on road labels (when custom localized shields are defined). And also the user's avatar is shown in the OpenStreetMap account dialog for setting POI editing (when the user has set an avatar on their account)
Third Party Projects
Jeff reports
In this blog post, Mitchell Hashimoto discusses the recent rewrite of the Ghostty GTK frontend. He focuses on how Zig interfaces with the GObject type system and using Valgrind to ensure that memory leaks are not introduced by application code. https://mitchellh.com/writing/ghostty-gtk-rewrite
andypiper says
Oh, hi. Long time reader, first time poster. I released Fedinspect, a little GNOME app for developers to inspect the configuration of fediverse servers, and also run WebFinger lookup queries for individual ActivityPub actors. It will query nodeinfo and other .well-known URIs for these servers, and you can dig into JSON responses and HTTP headers as needed. Maybe niche, hopefully useful to some folks!
You can find it on Flathub. Also, the icon in particular could do with some help to be a bit more GNOMEish, so feel free to help out if you're so inclined!
Ronnie Nissan reports
Embellish v0.5.1 was released today, featuring a redesign to the header bar and a new Icons page to explore, search and copy Nerd Fonts icons.
The codebase also switched to using Blueprint instead of UI files.
The issue where the list of fonts would jump to the top whenever a font was installed or removed has also been fixed.
Embellish is available only through Flathub, hope you enjoy the new feature.
Alain says
Planify 4.13.2 - Improvements, Fixes & More Control Over Your Tasks. The new 4.13.2 release of Planify is here, focusing on delivering a more stable, smoother, and customizable task management experience.
Here's what's new and improved:
- Better all-day event handling - Events are now correctly detected based on your local time.
- More control with Todoist - If you can't log in via OAuth, you can now manually enter your Todoist token.
- Improved text editing - The description area now has a limited height with scrolling, placeholders behave correctly, and your text won't reset when repositioning the cursor.
- Natural sorting - Lists now correctly order strings with numbers (e.g., item2 before item10).
- Smoother navigation - Improved visual alignment for note-type tasks and the option to display completed tasks directly below pending ones with pagination.
- Stability fixes - Adjustments to project view transitions, keyboard shortcuts, task duplication, and more.
We've also updated translations, added a Discord link, and made several under-the-hood optimizations.
Sepehr Rasouli reports
Sudoku V1.1.2 is here! Sudoku is a new modern app focused on delivering a clean, distraction-free experience. Designed with simplicity and comfort in mind, it features a straightforward interface that helps players stay focused and enjoy the game without unnecessary clutter or complications.
Features:
- Modern GTK4 and libadwaita interface
- Keyboard shortcuts for quick access to all functions
- Save and load games seamlessly to continue your progress anytime
- Highlight active row and cell to improve focus and ease of play
- Conflict highlighting to spot mistakes - perfect for learning
- Fun for all skill levels, from beginners to experts
The project is still in its early stages, so contributions are warmly welcome!
Semen Fomchenkov announces
Introducing Hashsum - a modern checksum utility
This week, the ALT Gnome and ALT Linux Team present Hashsum - a file checksum calculation utility built with GTK4/Libadwaita, inspired by the ideas behind Collision and GTK Hash.
We greatly appreciate the minimalist interface of Collision, but most GTK developers in our community create applications in Vala, so we decided to take the base from Collision and rewrite it from Crystal to make future development and maintenance easier. With Hashsum, we've combined the clean UI of Collision with the broad algorithm support of GTK Hash, adding the conveniences our community has been asking for.
Features
- Modern GTK4/Libadwaita interface inspired by Collision.
- Support for the following algorithms: MD5, SHA-1, SHA-256, SHA-512, BLAKE3, CRC-32, Adler-32, GOST R 34.11-94, Streebog-256/512 (via gcrypt and blake3).
- Flexible selection: enable only the algorithms you actually need.
- Accurate progress display for large file computations.
- Files (Nautilus) plugin: calculate checksums directly from the file manager's context menu.
- Developed in Vala with love.
What's next?
We plan to submit Hashsum to Flathub, but our immediate focus will be on adding features important to the community - ensuring it's not just a direct analog of Collision. Ideas and bug reports are welcome: https://altlinux.space/alt-gnome/Hashsum/issues/new
Best regards to the developers of the Collision project - your enthusiasm and drive for innovation are truly inspiring.
Parabolic β
Download web video and audio.
Nick announces
Parabolic V2025.8.0 is here! This release contains new features, bug fixes, and an updated yt-dlp. Here's the full changelog:
- Added the ability to update yt-dlp from within the app when a newer version is available
- Added padding to single digit numbered titles in playlist downloads
- Replaced None translation language with en_US
- Fixed an issue where validating some media would cause the app to crash
- Fixed an issue where the app would not open on Windows
- Fixed an issue where download rows disappeared on GNOME
- Updated yt-dlp
Fractal β
Matrix messaging app for GNOME written in Rust.
KΓ©vin Commaille says
Knock, knock, knock… on wood rooms, baby! Ooh ooh ooh ooh ooh ooh. That's right, Fractal 12 adds support for knocking, among other things. Read all about the improvements since 11.2:
- Requesting invites to rooms (aka knocking) is now possible, as is enabling such requests for room admins.
- The upcoming room version 12 is supported, with the special power level of room creators.
- A room can be marked as unread via the context menu in the sidebar.
- You can now see if a section in the sidebar has any notifications or activity when it is collapsed.
- Clicking on the name of the sender of a message adds a mention to them in the composer.
- The safety setting to hide media previews in rooms is now synced between Matrix clients and we added another safety setting (which is also synced) to hide avatars in invites.
As usual, this release includes other improvements and fixes thanks to all our contributors, and our upstream projects.
We want to address special thanks to the translators who worked on this version. We know this is a huge undertaking and have a deep appreciation for what you've done. If you want to help with this effort, head over to Damned Lies.
This version is available right now on Flathub.
If you want to join the gang, you can start by fixing one of our newcomers issues. We are always looking for new members!
Internships
Aryan Kaushik reports
The GNOME Foundation is interested in participating in the December-March cohort of Outreachy.
If you are interested in mentoring AND have a project idea in mind, please visit https://gitlab.gnome.org/Teams/Engagement/internship-project-ideas/-/issues and submit your proposal by 10th September 2025.
We are always on the lookout for project ideas that move the GNOME project forward!
If you have any questions, please feel free to post them on our matrix - #internship:gnome.org or e-mail soc-admins@gnome.org.
Looking forward to your proposals!
GNOME Foundation
steven says
New Foundation Update:
https://blogs.gnome.org/steven/2025/08/08/2025-08-08-foundation-update/
- bureaucracy (yay?)
- apology to GIMP
- advisory board room
- early draft budget
- 501(c)(3) structural improvements
- explaining the travel policy freeze
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
15 Aug 2025 12:00am GMT
14 Aug 2025
Planet GNOME
Gedit Technology blog: Mid-August News
Misc news about the gedit text editor, mid-August edition! (Some sections are a bit technical).
Code Comment plugin rewritten
I forgot to talk about it in the mid-July news, but the Code Comment plugin has been rewritten in C (it was previously implemented in Python) and the bulk of it is implemented as re-usable code in libgedit-tepl. The implementation is now shared between Enter TeX and gedit.
File loading and saving: a new GtkSourceEncoding class
I've modified the GtkSourceEncoding class that is part of libgedit-gtksourceview, and adapted gedit accordingly. The new version of GtkSourceEncoding comes from an experiment that I did in libgedit-tepl several years ago.
GtkSourceEncoding represents a character set (or "charset" for short). It is used in combination with iconv to convert text files from one encoding to another (for example from ISO-8859-15 to UTF-8).
The purpose of the experiment that was done in libgedit-tepl (the TeplEncoding class) was to accommodate the needs of a uchardet usage (note that uchardet is not yet used by gedit, but it would be useful). uchardet is a library to automatically detect the encoding of some input text. It returns an iconv-compatible charset, as a string.
It is this string - returned by uchardet - that we want to store and pass to iconv unmodified, to not lose information.
The problem with the old version of GtkSourceEncoding: there was a fixed set of GtkSourceEncoding instances, all const (so without the need to free them). When trying to get an instance for an unknown charset string, NULL was returned. So this was not appropriate for a uchardet usage (or at least, not a clean solution: with the charset string returned by uchardet it was not guaranteed that a corresponding GtkSourceEncoding instance was available).
Since GtkSourceEncoding is used in a lot of places, we don't want to change the code to represent a charset as just a string. And a simple string is anyway too basic, GtkSourceEncoding provides useful features.
So, long story short: the new GtkSourceEncoding class returns new instances that must be freed, and has a constructor that just makes a copy of the charset string (there is the get_charset() method to get back the string, unmodified).
So gedit can keep using the GtkSourceEncoding abstraction, and we are one step closer to being able to use uchardet or something similar!
Know more about the gedit's maintainer
I now have a personal web site, or more accurately a single web page:
wilmet-software.be (SΓ©bastien Wilmet)
gedit is a 27-year-old project; the first lines were written in 1998 (and normally it won't be part of the 27 Club!). I've been a contributor to the project for 14 years, so more than half the project's existence. Time flies!
Robust file loading - some progress
After the rework of GtkSourceEncoding (which is part of the File Loading and Saving subsystem in libgedit-gtksourceview), I've made good progress to make the file loading more robust - although there is more work still to do.
It is a fundamental rule of programming to check all program input. gedit makes this a bit harder to accomplish. To open a document:
- There is first the problem of the character encoding. It is not sufficient for a general-purpose text editor to accept only UTF-8. So text files can be almost anything in binary form.
- Then gedit allows opening documents containing invalid characters in the specified or auto-detected encoding. With this, documents can really be anything in binary form.
- Finally the GtkTextView widget used at the heart of gedit has several limitations: (1) very big files (like log files or database dumps) are not supported, so a limit on the content size must be set (and if reached, the file can still be loaded with truncated content). (2) very long lines cause performance problems and can freeze the application.
So, the good news is that progress has been made on this. (There is only a good news, let's stay positive!).
If you appreciate the work that I do in gedit, I would like to know your feedback, what I should improve or which important feature is missing. You can contact me by email for example, or on a discussion channel. Thank you :-) !
14 Aug 2025 10:00am GMT
13 Aug 2025
Planet GNOME
Aryan Kaushik: GUADEC 2025 Experience
Ciao a tutti!
In this blog, I'm pumped to share my experience attending GUADEC 2025 held in Brescia, Italy.
Let's start :)
During the conference, I presented multiple talks -
My main talk on GNOME Internships, Google Summer of Code, Outreachy
Lightning talk on Need for GUADEC and Regional events
Lightning talk on GNOME Extensions as the gateway ****
BoF on GNOME Foundation Internships - Unrecorded
The main talk was on Day 1 of the conference, which was quite exciting. I had the pleasure of sharing my journey and insights on how to leverage FOSS internships like GSoC and Outreachy to build a successful career in open source.
Attending this conference was far from easy. Due to the tragic Air India flight incident, I had to scrap my original travel plans and book a last-minute alternative to Italy. It was stressful - both emotionally and financially, but I was determined to make it to GUADEC.
Another trouble was with Visa; I had to apply for a Schengen Visa, which was hectic to say the least. Submitting 150+ pages of documents and waking up the GNOME Foundation team at night (their time) just to get some random letters the VFS office (embassy delegates) wanted during the submission process was bliss. So sincere thanks to Steven (ED), Anisa, Kristi, Rosanna, Asmit and others for helping me out with this at the very last minute. You all are heroes!
I usually carry double the required documents just to be on the safe side, but this was just something else.
Anyway, let's proceed with the blog :D
The touchdown
Due to a lack of flights to Brescia, I had to take a flight to Milan and then travel by train to Brescia.
But, since this was my first conference after graduating the same month (yipeee), I fortunately was able to extend my trip for the first time ever.
This let me explore Italy, a dream come true. I visited Milan and the Monza circuit before heading to Brescia.
I have no clue about racing, but visiting the Monza circuit was a surreal experience. The history, the cars, and the atmosphere were just amazing.
The pre-conference party
After some nice struggles with public transport in Brescia (I loved it afterwards though), I decided to take a 20-minute walk to the venue of the pre-conference party.
I don't mind walking, but I initially waited for the bus, got late, and had to walk fast. The worst part? The bus that hadn't let me board kept catching up to me for half the journey, boosting the frustration.
But well... I finally reached! Met some of my friends and had nice conversations with the organisers, attendees, speakers and some really engaging discussions with Mauro from Canonical.
After which, he consented to me kidnapping him for an Indian dinner. The place we got to was closed (I hate Google Maps), but fortunately, we found another Indian restaurant in very close proximity.
We had tons of exchanges about the future of Ubuntu, Canonical and GNOME. It was great to meet in person after GUADEC 2023.
The first day of the conference
The first day was quite good, got to attend the talks which I was looking forward to and was also able to help with the Ubuntu booth setup (well, not much but somewhat?).
After tackling some great wifi chip delights, we were able to get the anonymous forms up and running for the booth.
And then came my talk. Maria Majadas introduced me (she is awesome btw!), and after some setup tackling, I was able to present my talk on "Making a career out of FOSS Internships GSoC/Outreachy".
I had to rush a bit due to the time constraints, but I was able to cover most of the points I wanted to. So yay!
Afterwards, I was able to volunteer for moderating the talk, "The state of GTK" by Matthias, which was quite insightful. It was great to see the progress GTK has made and the future plans.
We then had a great panel discussion, which was quite a nice twist.
Later Aarti and Sri (Who both are awesome), whom I met for the first time in person, invited me for some snacks and drinks. The stories they shared were just so amazing and inspiring. Due to them, for the first time at GUADEC, I was able to have normal conversations and not just very professional ones. This elevated the conference 10x for me.
If you both are reading this, I just want to say you both are amazing, and I hope to see you again soon!
Then Mauro kidnapped me for a nice Italian dinner. We found a nice pizzeria with amazing views and food. I let him order for me, just like he did with me :).
And I have to say, that was the best pizza I ever had.
Also, I learned some new pizza cutting tricks and info on why you should NEVER share Pizza (apart from exchanging slices to try). This will stay with me for life xD.
Oh man, that was a lot for the first day. I was exhausted but happy.
The second day of the conference
On the second day, the highlight talks for me were "Getting Things Done in GNOME", "State of the Shell" and "Have a GTK app with no tests? No Problem!" (Which I had the pleasure to attend and moderate).
I also gave another lightning talk on "Why do we need GUADEC or GNOME events?" which was quite fun. I shared my experiences and insights on the importance of such events in fostering community and collaboration.
Thanks to Rosanna for giving me the idea to do so. It was really great to share my thoughts and experiences with the community.
After the conference, I took a detour to visit the beautiful Brescia Castle. The views were out of this world. Instead of taking the bus to the top or climbing the stairs, I took the gravel path around the castle (it had fences, which I decided to jump over :)). But it was worth it: climbing this way allowed me to see every corner of the city, layer by layer. That you can't beat!
The third day of the conference
As you can guess by now, it was great as well, and I gave another talk - "GNOME Extensions: the gateway drug to GNOME" and also helped in moderating some sessions.
Also, I highly recommend watching the talk on Gnurd - https://www.youtube.com/live/Z7F3fghCQB4?si=H_HgN6IHeRdSVu10&t=27391 It was nice!
And we ended the day with a great dinner celebrating 25 years of GUADEC. The food was amazing, the company was great, and the atmosphere was just perfect.
The BoFs
Being a GNOME GSoC'22 Intern, and now a part of the GNOME Internship Committee, I had my fourth and final talk (kind of), GNOME Internship Committee Meetup, where we discussed the future of the program, the challenges we face, and how we can improve it.
Thanks, Felipe, for organising it and inviting me to be a part of it. It was great to see the progress we have made and the plans we have for the future.
The next day, I attended the "GTK Settings Hackfest" BoF, and it reminded me why physical meetups are so powerful. Discussing my blockers directly with the maintainers and fixing stuff together. It can't get more delightful than that!
We then went to Lake Iseo for a trip. And the picture will give you a glimpse of the beauty of the place.
The Bergamo tour
The tour was a great opportunity to check out Bergamo and interact with people.
Special shoutout to Ignacy for being my partner in crime for clicking pictures. The skyline, the view from the top and the food were just amazing. We had a great time exploring the city and taking pictures.
It was also Federico's birthday, so we celebrated it with a cake and some drinks. Celebrating the founder at the 25th GUADEC was the cherry on top.
Federico also gave great insights about Coffee. I was looking forward to buying a Bialetti Moka pot, but I wasn't sure. But after his advice, I splurged. And I have to say, it was worth it. The coffee is just amazing, and the experience of making it is just delightful. Instant is not the same anymore :(.
So thanks to Federico, I now have a taste of Italy at home. Next stop, getting a grinder!
Meeting people
At last, I met many new people and got to learn a lot. Made new friends, got to meet people I look up to and many more.
I hope I wasn't that introverted, but yeah, slowly getting comfortable around new people, especially thanks to Aarti and Sri for making me feel comfortable and helping me break the ice.
The End
This GUADEC was just awesome. And I was also able to visit 4 cities in Italy, which was a dream come true. Normally, due to college, I couldn't visit any other city than the conference city, but this time I was able to extend my trip and explore Italy a bit.
Thanks to all the people for making the event so great. It was an experience like no other. I would also like to thank GNOME Foundation for sponsoring the trip :) I hope I used it to the fullest and made the most out of it. :D
I also renewed my GNOME Foundation membership just recently, which is awesome.
13 Aug 2025 8:00pm GMT
11 Aug 2025
Planet GNOME
Christian Hergert: Week 32 Status
Foundry
This week was largely around getting the new template engine landed so it can be part of the 1.0 ABI. Basically just racing to get everything landed in time to commit to the API/ABI contract.
-
FoundryTextBuffer
gained some new type prerequisites to make it easier for writing applications against them. Since Foundry is a command line tool as well as a library, we don't just useGtkTextBuffer
since the CLI doesn't even link against GTK. But it is abstracted in such a way that the GTK application would implement theFoundryTextBuffer
interface with a derivedGtkSourceBuffer
. -
FoundryTextSettings
has landed which provides a layered approach to text editor settings similar to (but better than) what we currently have in GNOME Builder. There is a new modeline implementation, editorconfig, and gsettings-backed settings providers which apply in that order (with per-file overrides allowed at the tip). Where the settings-backed implementation surpasses Builder is that it allows for layering there too. You can have user overrides by project, project defaults, as well as Foundry defaults.
I still need to get the default settings per-language that we have already (and are mostly shared with Text Editor too) as reasonable defaults.
-
To allow changing the GSettings-based text settings above, the
foundry settings set ...
command gained support for specific paths using the same:/
suffix that thegsettings
command uses. -
Spent some time on the upcoming chat API for models so I can experiment with what is possible when you control the entire tools stack.
-
Dropped some features so they wouldn't be part of the 1.0. We can implement them later on as time permits. Specifically I don't want to commit to a MCP or DAP implementation yet since I'm not fond of either of them as an API.
- The `FoundryInput` subsystem gained support for license and language inputs. This makes it much simpler to write templates in the new internal template format.
- Allow running `foundry template create ./FILE.template` to create a set of files or a project from a template file. That allows you to iterate on your own templates for your project without having them installed at the right location.
- Wrote new project templates for an empty project, a shared library project, and GTK 4 projects. I still need to finish the GTK 4 project a bit to match feature parity with the version from Builder.

  I'm very happy with how the library project turned out, because this time around it supports GIR, pkg-config, and VAPI generation, gi-docgen, and more. I still need to add potfile support for libraries, though.

  I also wrote new templates for creating GObjects and GTK widgets in C (but we can port them to other languages if necessary). This is a new type of "code template", as opposed to a "project template". It still allows multiple files to be created in the target project.

  What is particularly useful about it is that projects can expose templates specific to that project in the UI. In Foundry, that means you have template access to create new plugins, LSPs, and services quite easily.
- Projects can now specify their default license, so more things just happen automatically for contributors when creating new files.
- Templates can now include the default project license header simply by writing `{{include "license.c"}}`, where the suffix selects the properly commented license block.
- The API for expanding templates has changed to return a `GListModel` of `FoundryTemplateOutput`. The primary motivator here is that I want to be able to have UI in Builder that lets you preview templates before actually saving them to disk.
- A new API landed that we had in Builder for listing build targets. Currently, only the `meson` plugin implements `FoundryBuildTargetProvider`. This is mostly plumbing for upcoming features.
- The new template format is a bit of an amalgamation of a few formats, based on my experience trying to find a maintainable way to keep these templates.

  It starts with a `GKeyFile` block that describes the template and its inputs. Then you have a series of what look like markdown code blocks. You can put conditionals around them, which allows files to be included optionally based on input.

  The filenames for the blocks can also be expanded based on template inputs. The expansions are just `TmplExpr` expressions from template-glib.

  An example can be found at https://gitlab.gnome.org/GNOME/foundry/-/blob/main/plugins/meson-templates/library.project, and a rough sketch follows below.
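To make that last item concrete, here is a rough sketch of what such a template file might look like, pieced together purely from the description above. The group and key names are guesses on my part, while `{{...}}`, `{{if}}`/`{{end}}`, and `{{include}}` are real template-glib constructs; the linked library.project is the authoritative reference.

````
# Hypothetical FILE.template; group and key names are invented for
# illustration. A GKeyFile block describes the template and its inputs,
# followed by markdown-style code blocks, one per generated file.

[Template]
Title=Shared Library
Description=A minimal shared library project

[Input.name]
Type=text
Default=example

```{{name}}/meson.build
project('{{name}}', 'c', version: '0.1.0')
library('{{name}}', '{{name}}.c')
```

{{if name != ""}}
```{{name}}/{{name}}.c
/* This block is only generated when a name was provided. */
```
{{end}}
````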
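And for the `:/` path suffix mentioned in the settings item above, a minimal sketch. The first command is standard `gsettings` usage with a real relocatable schema from gnome-settings-daemon; the `foundry` line is purely illustrative, as I haven't confirmed Foundry's actual schema IDs, paths, or keys.

```sh
# gsettings addresses relocatable schemas with a SCHEMA:/PATH/ suffix:
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ name 'Terminal'

# Hypothetical Foundry equivalent (schema ID, path, and key are made up):
foundry settings set app.devsuite.foundry.text:/org/example/project/ tab-width 8
```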
Template-GLib
- Found some oopsies in how `TmplExpr` evaluated branches, so I fixed those up. Last year I wrote most of a C compiler, and taking a look at this code really makes me want to rewrite it all. The intermixing of Yacc and GObject Introspection is ripe for improvement.
- Added support for `==` and `!=` for `GStrv` expressions.
Other
- Played CI whack-a-mole for ICU changes in the nightly SDKs.
- Propagated Foundry changes to projects depending on it so that we have useful Flatpak manifests with minimal feature flags enabled.
- Took a look at some performance issues in GNOME OS and passed along some debugging techniques. Especially useful for when all you've got is an array of registers and you need to know something.
- Libpeas release for GNOME 49 beta.
11 Aug 2025 8:15pm GMT
Peter Hutterer: xkeyboard-config 2.45 has a new install location
This is a heads-up that if you install xkeyboard-config 2.45 (the package that provides the XKB data files), some manual interaction may be needed. Version 2.45 has changed the install location after over 20 years to be a) more correct and b) more flexible.
When you select a keyboard layout like "fr" or "de" (or any other one, really), what typically happens in the background is that an XKB parser (xkbcomp if you're on X, libxkbcommon if you're on Wayland) goes off and parses the data files provided by xkeyboard-config to populate the layouts. For historical reasons these data files have resided in /usr/share/X11/xkb, and that directory is hardcoded in more places than it should be (i.e. more than zero). As of xkeyboard-config 2.45, however, the data files are installed in the much more sensible directory /usr/share/xkeyboard-config-2, with a matching xkeyboard-config-2.pc for anyone who relies on the data files. The old location is symlinked to the new location so everything keeps working, people are happy, no hatemail needs to be written, etc. Good times.
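If you consume the data files programmatically, the new .pc file is the way to find them. A small sketch, assuming the variable keeps the `xkb_base` name used by the old `xkeyboard-config.pc`; verify with `--print-variables` before relying on it.

```sh
# List the variables the new .pc file actually exports:
pkg-config --print-variables xkeyboard-config-2

# Resolve the data root instead of hardcoding /usr/share/X11/xkb
# (assumes the variable is still named xkb_base, as in the old .pc):
pkg-config --variable=xkb_base xkeyboard-config-2
```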
The reason for this change is two-fold: moving it to a package-specific directory opens up the (admittedly mostly theoretical) use case of some other package providing XKB data files. But even more so, it finally allows us to start versioning the data files and introduce new formats that may be backwards-incompatible for current parsers. This is not yet the case, however; the current format in the new location is guaranteed to be the same as the format we've always had. It's really just a location change in preparation for future changes.
Now, from an upstream perspective this is not just hunky, it's also dory. Distributions, however, struggle a bit more with this change because of packaging-format restrictions. RPM, for example, is quite unhappy with a directory being replaced by a symlink, which means that Fedora and openSUSE have to resort to the .rpmmoved hack. If you have ever used the custom layout and/or added other files to the XKB data files, you will need to manually move those files from /usr/share/X11/xkb.rpmmoved/ to the new equivalent location. If you have never used that layout and/or modified files locally, you can just delete /usr/share/X11/xkb.rpmmoved. Of course, if you're on Wayland you shouldn't need to modify system directories anyway, since you can do it in your $HOME.
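In concrete terms, the cleanup on Fedora or openSUSE might look like the following sketch; the symbols/custom path is only an example, so substitute whatever files you actually added.

```sh
# Move locally added files to the equivalent spot in the new tree:
sudo mv /usr/share/X11/xkb.rpmmoved/symbols/custom \
        /usr/share/xkeyboard-config-2/symbols/custom

# Once nothing of yours remains, the leftover directory can go:
sudo rm -r /usr/share/X11/xkb.rpmmoved
```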
There are corresponding issues describing what to do on Arch and Gentoo. I'm not immediately aware of other distributions' issues, but if you search for them in your bugtracker you'll find them.
11 Aug 2025 11:44am GMT