06 Mar 2025
Planet Debian
Antoine Beaupré: Nix Notes
Meta
In case you haven't noticed, I'm trying to post more, and one of the things that entails is to just dump a bunch of draft notes over the fence. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager.
In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those.
So here's the long version of my experiment with Nix.
Nix
A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome.
That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right?
So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, a mind-boggling 100,000 at the time of writing (with 88% of packages up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too.
What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages).
I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.
Hot take
Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting.
It didn't help that I tried to run nixpkgs on Debian, which is technically possible, but you can tell that I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.
Nix on Debian primer
So here's how I got started. First I installed the nix binary package:
apt install nix-bin
Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:
adduser anarcat nix-users
That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package.
Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
And I likely installed the Harper package with:
nix-env --install harper
At this point, harper was installed in a ... profile? Not sure. I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment for this to actually work.
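For reference, the line I ended up with in my shell configuration looked something like this (a sketch; the exact profile path is whatever symlink Nix created for your user):

```shell
# Put the Nix per-user profile first in PATH so Nix-installed
# binaries (like harper) are found by the shell
export PATH="$HOME/.nix-profile/bin:$PATH"
```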
Side notes on documentation
Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup, and even the README.Debian didn't spell it out clearly.
The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should set up an environment declaratively and tell it what you want to do.
But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:
curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage
... which, when you follow through, leaves you with almost precisely nothing installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that does test Nix, you're not much better off than when you started.
Rolling back everything
Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again!
But still, I wanted to clear things out, so I did this, as root:
deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*
I think this cleared things out, but I'm not actually sure.
Side note on Nix drama
This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well.
I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way.
As a leader, if you have but one responsibility, it's to behave properly with the people around you. It's actually really, really hard to do that, because yes, it means you need to act differently than others, and no, you just don't get to be upset at others like you would normally do with friends, because you're in a position of authority.
It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers or, if I did, I would be sure as hell to take the nick (or nix?) on the chin when people got upset, and try to make amends.
So long live the Nix people, I hope the community recovers from that dark moment; so far it seems like it will.
And thanks for helping me test Harper!
06 Mar 2025 8:44pm GMT
Dirk Eddelbuettel: RcppDate 0.0.5: Address Minor Compiler Nag
RcppDate wraps the featureful date library written by Howard Hinnant for use with R. This header-only modern C++ library has been in pretty wide-spread use for a while now, and adds to C++11/C++14/C++17 what is (with minor modifications) the 'date' library in C++20. RcppDate adds no extra R or C++ code and can therefore be a zero-cost dependency for any other project; yet a number of other projects decided to re-vendor it, resulting in less-efficient duplication. Oh well. C'est la vie.
This release syncs with the (already mostly included) upstream release 3.0.3, and also addresses a fresh (and mildly esoteric) nag from clang++-20. One upstream PR already addressed this in the files tickled by some CRAN packages; I followed this up with another upstream PR addressing this in a few more occurrences.
Changes in version 0.0.5 (2025-03-06)
- Updated to upstream version 3.0.3
- Updated 'whitespace in literal' issue upsetting clang++-20; this is also fixed upstream via two PRs
Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
06 Mar 2025 12:50pm GMT
Russell Coker: 8k Video Cards
I previously blogged about getting an 8K TV [1]. Now I'm working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2], with a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.
The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
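As a sanity check on those figures, the raw bandwidth arithmetic works out. This is my own back-of-the-envelope calculation (figures taken from the DisplayPort Wikipedia page, not from NVidia's docs), and it ignores blanking intervals, which is why the numbers are so tight in practice:

```python
# Does 8K fit in DisplayPort 1.4's HBR3 link without compression?
# HBR3 is 32.4 Gbit/s raw over four lanes; after 8b/10b line coding
# the usable payload is 25.92 Gbit/s.
hbr3_payload_gbps = 25.92

def video_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed pixel data rate in Gbit/s (blanking ignored)."""
    return width * height * fps * bits_per_pixel / 1e9

print(round(video_gbps(7680, 4320, 30, 24), 2))  # 23.89: 8K@30Hz, 8-bit color, fits
print(round(video_gbps(7680, 4320, 24, 30), 2))  # 23.89: 8K@24Hz, 10-bit HDR, fits
print(round(video_gbps(7680, 4320, 30, 30), 2))  # 29.86: 8K@30Hz HDR needs DSC
```

Which matches the Wikipedia claim: 30Hz uncompressed in standard color, or 24Hz uncompressed with high dynamic range.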
My theories as to why it doesn't work are:
- NVidia specs lie
- My 8K cable isn't really an 8K cable
- Something weird happens converting DisplayPort to HDMI
- The video card can only handle refresh rates for 8K that don't match supported input for the TV
To get some more input on this issue I posted on Lemmy; here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts. I haven't tried any others and can't compare, but it seems to work well enough and it's free. It's described as "A community of privacy and FOSS enthusiasts, run by Lemmy's developers" which is positive; I recommend that everyone who's into FOSS create an account there or on some other Lemmy server.
My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K. It also does DisplayPort 1.4, so it might have the same issues, and apparently FOSS drivers don't support 8K on HDMI because the people who manage HDMI specs are jerks. It's a $200 card at MSY and a bit less on eBay, so it's an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI, and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output, and has the additional benefit of not needing DisplayPort to HDMI conversion.
The best option apparently is the Intel cards, which do DisplayPort internally and convert to HDMI in hardware, which avoids the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6]: HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and it is faster than low-end cards like the RX 6400. But the local computer store price is $470 and the eBay price is a bit over $400. If it turns out to not do what I need, it still will be a long way from the worst way I've wasted money on computer gear. But I'm still hesitating about this.
Any suggestions?
- [1] https://etbe.coker.com.au/2024/12/15/hisense-65u80g-8k-tv/
- [2] https://tinyurl.com/2cr32835
- [3] https://en.wikipedia.org/wiki/DisplayPort
- [4] https://en.wikipedia.org/wiki/Display_Stream_Compression
- [5] https://lemmy.ml/post/26711723
- [6] https://www.techpowerup.com/gpu-specs/arc-b580.c4244
06 Mar 2025 10:53am GMT
05 Mar 2025
Planet Debian
Dima Kogan: Shop scheduling with PuLP
I recently used the PuLP modeler to solve a work scheduling problem to assign workers to shifts. Here are notes about doing that. This is a common use case, but isn't explicitly covered in the case studies in the PuLP documentation.
Here's the problem:
- We are trying to put together a schedule for one week
- Each day has some set of work shifts that need to be staffed
- Each shift must be staffed with exactly one worker
- The shift schedule is known beforehand, and the workers each declare their preferences beforehand: they mark each shift in the week as one of:
- PREFERRED (if they want to be scheduled on that shift)
- NEUTRAL
- DISFAVORED (if they don't love that shift)
- REFUSED (if they absolutely cannot work that shift)
The tool is supposed to allocate workers to the shifts to try to cover all the shifts, give everybody work, and try to match their preferences. I implemented the tool:
#!/usr/bin/python3

import sys
import os
import re

def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"
        print(f"{w}{annotation}")

        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue
            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f" ---- {s[m.end():]}{annotation}")
            for h in humans:
                if vars[h][s].value():
                    print(f"     {h} ({shifts[s][h]})")

def report_solution_summary_to_console(vars):
    print("\nSUMMARY")
    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")
        counts = dict()
        for a in availabilities:
            counts[a] = 0
        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1
        for a in availabilities:
            print(f"     {counts[a]} {a}")

human_annotate = None

days_of_week = ('SUNDAY', 'MONDAY', 'TUESDAY', 'WEDNESDAY',
                'THURSDAY', 'FRIDAY', 'SATURDAY')

humans = ['ALICE', 'BOB', 'CAROL', 'DAVID', 'EVE',
          'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']

shifts = {
    'SUNDAY - SANDING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL'},
    'WEDNESDAY - SAWING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED'},
    'THURSDAY - SANDING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED'},
    'SATURDAY - SAWING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED'},
    'SUNDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'},
    'MONDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'},
    'TUESDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'},
    'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'THURSDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'},
    'FRIDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED'},
    'SATURDAY - PAINTING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'},
    'SUNDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'MONDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'TUESDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'WEDNESDAY - SANDING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL', 'EVE': 'REFUSED'},
    'THURSDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'FRIDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'SATURDAY - SANDING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED'},
    'SUNDAY - PAINTING 11:00 AM - 6:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'PREFERRED', 'IVAN': 'NEUTRAL', 'JUDY': 'NEUTRAL', 'DAVID': 'REFUSED'},
    'MONDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'PREFERRED', 'IVAN': 'NEUTRAL', 'JUDY': 'NEUTRAL', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'TUESDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'THURSDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'FRIDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'SATURDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'FRANK': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'},
    'SUNDAY - SAWING 12:00 PM - 7:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'MONDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'TUESDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'WEDNESDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'THURSDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'},
    'FRIDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'SATURDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'},
    'SUNDAY - PAINTING 12:15 PM - 7:15 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'NEUTRAL', 'DAVID': 'REFUSED'},
    'MONDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'DAVID': 'REFUSED'},
    'TUESDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'DAVID': 'REFUSED'},
    'THURSDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'DAVID': 'REFUSED'},
    'FRIDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'},
    'SATURDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'}}

availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']

import pulp

prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)

vars = pulp.LpVariable.dicts("Assignments",
                             (humans, shifts.keys()),
                             None, None, # bounds; unused, since these are binary variables
                             pulp.LpBinary)

# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f"{h} works at least {Nshifts_min} shifts",
    )

# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f"{h} works at most {Nshifts_max} shifts",
    )

# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f"{s} is staffed",
    )

# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )

#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f"{h} can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)

# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f"{h} is not available for {s}"
            )

# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED'] = -1000

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

prob += (
    benefit_total,
    "happiness",
)

prob.solve()

if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)
The set of workers is in the humans
variable, and the shift schedule and the workers' preferences are encoded in the shifts
dict. The problem is defined by a vars
dict of dicts, each a boolean variable indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints to these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])
So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem.
The exact trade-off between the different preferences is set in the benefit_availability
dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have DISFAVORED shifts. If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED threshold.
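That trade-off can be checked with a stand-alone sketch (plain Python, no solver involved; only the 3/2/1 weights come from the program above, everything else here is a made-up toy assignment):

```python
# The same benefit weights as in the scheduling program
benefit_availability = {'PREFERRED': 3, 'NEUTRAL': 2, 'DISFAVORED': 1}

def benefit(assigned_shifts):
    # assigned_shifts: the worker's preference rating for each shift
    # they ended up working; a day off contributes nothing
    return sum(benefit_availability[a] for a in assigned_shifts)

# One NEUTRAL shift plus a day off ties with two DISFAVORED shifts:
print(benefit(['NEUTRAL']))                   # 2
print(benefit(['DISFAVORED', 'DISFAVORED']))  # 2
```

Raising the DISFAVORED weight above 2 would break that tie in favor of working more.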
I run this program and I get:
....
Result - Optimal solution found

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.02

SUNDAY
 ---- SANDING 9:00 AM - 4:00 PM
     EVE (PREFERRED)
 ---- SAWING 9:00 AM - 4:00 PM
     IVAN (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM
     FRANK (PREFERRED)
 ---- PAINTING 11:00 AM - 6:00 PM
     HEIDI (PREFERRED)
 ---- SAWING 12:00 PM - 7:00 PM
     ALICE (PREFERRED)
 ---- PAINTING 12:15 PM - 7:15 PM
     CAROL (PREFERRED)
MONDAY
 ---- SAWING 9:00 AM - 4:00 PM
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM
     IVAN (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     GRACE (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM
     ALICE (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     HEIDI (NEUTRAL)
TUESDAY
 ---- SAWING 9:00 AM - 4:00 PM
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM
     EVE (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     FRANK (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     HEIDI (NEUTRAL)
WEDNESDAY
 ---- SAWING 7:30 AM - 2:30 PM
     DAVID (PREFERRED)
 ---- PAINTING 7:30 AM - 2:30 PM
     IVAN (PREFERRED)
 ---- SANDING 9:45 AM - 4:45 PM
     FRANK (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     JUDY (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     ALICE (NEUTRAL)
THURSDAY
 ---- SANDING 9:00 AM - 4:00 PM
     GRACE (PREFERRED)
 ---- SAWING 9:00 AM - 4:00 PM
     CAROL (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM
     EVE (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     JUDY (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     ALICE (NEUTRAL)
FRIDAY
 ---- SAWING 9:00 AM - 4:00 PM
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM
     FRANK (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     GRACE (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     HEIDI (NEUTRAL)
SATURDAY
 ---- SAWING 7:30 AM - 2:30 PM
     CAROL (PREFERRED)
 ---- PAINTING 7:30 AM - 2:30 PM
     IVAN (PREFERRED)
 ---- SANDING 9:45 AM - 4:45 PM
     DAVID (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM
     FRANK (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM
     ALICE (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM
     BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
     3 PREFERRED
     2 NEUTRAL
     0 DISFAVORED
-- BOB
   benefit: 14.000
     4 PREFERRED
     1 NEUTRAL
     0 DISFAVORED
-- CAROL
   benefit: 9.000
     3 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- DAVID
   benefit: 15.000
     5 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- EVE
   benefit: 9.000
     3 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- FRANK
   benefit: 13.000
     3 PREFERRED
     2 NEUTRAL
     0 DISFAVORED
-- GRACE
   benefit: 8.000
     2 PREFERRED
     1 NEUTRAL
     0 DISFAVORED
-- HEIDI
   benefit: 9.000
     1 PREFERRED
     3 NEUTRAL
     0 DISFAVORED
-- IVAN
   benefit: 12.000
     4 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- JUDY
   benefit: 6.000
     2 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more: David works 5 for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:
Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02

SUNDAY (JUDY OFF)
 ---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
     EVE (PREFERRED)
 ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
     IVAN (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
     FRANK (PREFERRED)
 ---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
     HEIDI (PREFERRED)
 ---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
     ALICE (PREFERRED)
 ---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
     CAROL (PREFERRED)
MONDAY (JUDY OFF)
 ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
     IVAN (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
     GRACE (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     ALICE (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
 ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
     EVE (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
     FRANK (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
     HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
 ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
     DAVID (PREFERRED)
 ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
     IVAN (PREFERRED)
 ---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
     FRANK (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
     JUDY (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
 ---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
     GRACE (PREFERRED)
 ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
     CAROL (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
     EVE (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
     JUDY (PREFERRED)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
 ---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
     DAVID (PREFERRED)
 ---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
     FRANK (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
     GRACE (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
     BOB (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
     HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
 ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
     CAROL (PREFERRED)
 ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
     IVAN (PREFERRED)
 ---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
     DAVID (PREFERRED)
 ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
     FRANK (NEUTRAL)
 ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     ALICE (PREFERRED)
 ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
     BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
     3 PREFERRED
     2 NEUTRAL
     0 DISFAVORED
-- BOB
   benefit: 14.000
     4 PREFERRED
     1 NEUTRAL
     0 DISFAVORED
-- CAROL
   benefit: 9.000
     3 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- DAVID
   benefit: 15.000
     5 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- EVE
   benefit: 9.000
     3 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- FRANK
   benefit: 13.000
     3 PREFERRED
     2 NEUTRAL
     0 DISFAVORED
-- GRACE
   benefit: 8.000
     2 PREFERRED
     1 NEUTRAL
     0 DISFAVORED
-- HEIDI
   benefit: 9.000
     1 PREFERRED
     3 NEUTRAL
     0 DISFAVORED
-- IVAN
   benefit: 12.000
     4 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
-- JUDY
   benefit: 6.000
     2 PREFERRED
     0 NEUTRAL
     0 DISFAVORED
This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108.
How would we favor a more even distribution? We need some sort of tie-break. I want to add a nonlinearity to strongly disfavor people getting a low number of shifts. But PuLP is very explicitly a linear programming solver, and cannot solve nonlinear problems. Here we can get around this by enumerating each specific case, and assigning it a nonlinear benefit function. The most obvious approach is to define another set of boolean variables: vars_Nshifts[human][N]. And then using them to add extra benefit terms, with values nonlinearly related to Nshifts. Something like this:
benefit_boost_Nshifts = \
    {2: -0.8,
     3: -0.5,
     4: -0.3,
     5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])
So in the previous example we considered giving David's 5th shift to Judy, for her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher benefit function.
Great. In order for this to work, we need the vars_Nshifts[human][N] variables to function as intended: they need to be binary indicators of whether a specific person has that many shifts or not. That would need to be implemented with constraints. Let's plot it like this:
#!/usr/bin/python3

import numpy as np
import gnuplotlib as gp

Nshifts_eq  = 4
Nshifts_max = 10
Nshifts     = np.arange(Nshifts_max+1)

i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]

gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         unset = ('grid'),
         _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
         _xrange = (-0.1, 1.1),
         ylabel = "Nshifts",
         title = "Nshifts equality variable: not linearly separable",
         hardcopy = "/tmp/scheduling-Nshifts-eq.svg")
So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this plot) would need to be defined by a set of linear AND constraints to linearly separate the true (red) values of this variable from the false (black) values. As can be seen in this plot, this isn't possible. So this representation does not work.
How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables:

vars_Nshifts[human][N] = vars_Nshifts_leq[human][N] - vars_Nshifts_leq[human][N-1]
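As a quick sanity check of that identity, here is a small standalone sketch (not part of the scheduling program; the helper names are made up for illustration) that computes the inequality indicators for a given shift count and recovers the equality indicator by differencing:

```python
def leq_indicators(nshifts, nmax=10):
    # 1 iff nshifts <= n, for each candidate bound n (what the
    # vars_Nshifts_leq[human][n] variables are meant to encode)
    return {n: int(nshifts <= n) for n in range(nmax + 1)}

def eq_indicator(nshifts, n):
    # vars_Nshifts[h][N] = vars_Nshifts_leq[h][N] - vars_Nshifts_leq[h][N-1]
    leq = leq_indicators(nshifts)
    return leq[n] - leq[n - 1]

# For somebody working exactly 4 shifts, only the N==4 indicator fires
assert [eq_indicator(4, n) for n in range(1, 8)] == [0, 0, 0, 1, 0, 0, 0]
```

The difference is 1 exactly at the first N where the <= indicator flips from 0 to 1, which is precisely the person's shift count.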
Can these vars_Nshifts_leq variables be defined by a set of linear AND constraints? Yes:
#!/usr/bin/python3

import numpy as np
import numpysane as nps
import gnuplotlib as gp

Nshifts_leq = 4
Nshifts_max = 10
Nshifts     = np.arange(Nshifts_max+1)

i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]

def linear_slope_yintercept(xy0, xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))

x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01),
                    np.ones((2,1)),
                    axis = -1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)

gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         ( x01,
           y_lowerbound,
           y_upperbound,
           dict( _with = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01,
           nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with = 'lines lc "green" lw 2',
                 tuplesize = 2) ),
         unset = ('grid'),
         _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                 'style fill transparent pattern 1'),
         _xrange = (-0.1, 1.1),
         ylabel = "Nshifts",
         title = "Nshifts inequality variable: linearly separable",
         hardcopy = "/tmp/scheduling-Nshifts-leq.svg")
So we can use two linear constraints to make each of these variables work properly. To use these in the benefit function we can use the equality constraint expression from above, or we can use these directly:
# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    {2: .2,
     3: .3,
     4: .4,
     5: .5}

# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at least {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at least {b} shifts: upper bound")

benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])
In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case by case basis, but this works.
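The arithmetic in that paragraph can be verified directly from the same benefit_boost_leq_bound table (a throwaway sketch, separate from the PuLP program):

```python
benefit_boost_leq_bound = {2: .2, 3: .3, 4: .4, 5: .5}

def boost(nshifts):
    # Total boost collected by somebody with this many shifts:
    # each vars_Nshifts_leq[h][b] contributes its value iff nshifts <= b
    return sum(v for b, v in benefit_boost_leq_bound.items() if nshifts <= b)

david = boost(4) - boost(5)   # David drops from 5 shifts to 4
judy  = boost(3) - boost(2)   # Judy rises from 2 shifts to 3
assert abs(david - 0.4) < 1e-9   # David gains a 0.4 boost
assert abs(judy + 0.2) < 1e-9    # Judy loses a 0.2 boost
assert david + judy > 0          # net +0.2: the solver prefers the swap
```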
The full program, with this and other extra features is available here.
05 Mar 2025 8:02pm GMT
Reproducible Builds: Reproducible Builds in February 2025
Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
- Reproducible Builds at FOSDEM 2025
- Reproducible Builds at PyCascades 2025
- Does Functional Package Management Enable Reproducible Builds at Scale?
- reproduce.debian.net updates
- Upstream patches
- Distribution work
- diffoscope & strip-nondeterminism
- Website updates
- Reproducibility testing framework
Reproducible Builds at FOSDEM 2025
Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian), discuss this goal - which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc
files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: "It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different." The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: "We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache." Unfortunately, no video of the talk is available, but there is a blog and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk "describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research." The slides for the talk are available, as is the full video (23m17s).
Reproducible Builds at PyCascades 2025
Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.
More info is available on the talk's page.
"Does Functional Package Management Enable Reproducible Builds at Scale?"
On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF).
This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog back in March 2024.
reproduce.debian.net updates
The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.
Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, Holger Levsen:
- Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
- Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.
- Discovered an issue in the Debian build service where some new 'incoming' build-dependencies do not end up historically archived.
- Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script, specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
- Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.
Jochen Sprickerhof also updated the sbuild package to:

- Obey requests from the user/developer for a different temporary directory.
- Use the root/superuser for some values of Rules-Requires-Root.
- Don't pass --root-owner-group to old versions of dpkg.
… and additionally requested that many Debian packages are rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. […][…][…]
Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial 'copr' repository, and will be in the official repositories after all the dependencies are packaged.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Andrea Manzini:
  - rust-i8n (random HashMap order)
  - starship/shadow
- Andreas Stieger:
- Bernhard M. Wiedemann:
- Chris Lamb:
  - #1095209 filed against python-assertpy.
  - #1096188 filed against terminaltables3.
  - #1098249 filed against acme.sh.
  - #1098251 filed against node-svgdotjs-svg.js.
  - #1098253 filed against onevpl-intel-gpu.
  - #1098350 filed against rocdbgapi.
  - #1098895 filed against siege.
  - #1098945 filed against pkg-rocm-tools.
- Christian Goll:
  - warewulf4 (embeds CPU core count)
- Jay Addison:
- Jochen Sprickerhof:
- kpcyrd:
- Leonidas Spyropoulos:
- Robin Candau (Antiz):
  - highlight (timestamp)
  - arch-wiki-lite (timestamp)
  - f3d (timestamp)
  - jacktrip (timestamp)
  - prometheus (timestamp)
- Wolfgang Frisch:
- Hongxu Jia:
  - go (clear GOROOT for func ldShared when -trimpath is used)
Distribution work
There has been the usual work in various distributions this month, such as:
In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project's work on unprivileged and reproducible builds continued this month. Notable fixes include:
- pkg (hash ordering)
- makefs (source filesystem inode number leakage)
- FreeBSD base system packages (timestamp)
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.
Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.
diffoscope & strip-nondeterminism
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:

- Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. […]
- Catch a CalledProcessError when calling html2text. […]
- Update the minimal Black version. […]
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 […][…] and 288 […][…] as well as submitted a patch to update to 289 […]. Vagrant also fixed an issue that was breaking reprotest on Guix […][…].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.
Website updates
There were a large number of improvements made to our website this month, including:
- Bernhard M. Wiedemann fixed an issue on the Commandments of reproducible builds, fixing a link to the readdir component of Bernhard's own Unreproducible Package. […]
- Holger Levsen clarified the name of a link to our old Wiki pages on the History page […] and added a number of new links to the Talks & Resources page […][…].
- James Addison updated the website's own README file to document a couple of additional dependencies […][…], and did more work on a future Getting Started guide page […][…].
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
-
reproduce.debian.net-related:
- Add a helper script to manually schedule packages. […][…][…][…][…]
- Fix a link in the website footer. […]
- Strip the "💠🍥♻" emojis from package names on the manual rebuilder in order to ease copy-and-paste. […]
- On the various statistics pages, provide the number of affected source packages […][…] as well as provide various totals […][…].
- Fix graph labels for the various architectures […][…] and make them clickable too […][…][…].
- Break the displayed HTML in blocks of 256 packages in order to address rendering issues. […][…]
- Add monitoring jobs for riscv64 architecture nodes and integrate them elsewhere in our infrastructure. […][…]
- Add riscv64 architecture nodes. […][…][…][…][…]
- Update much of the documentation. […][…][…]
- Make a number of improvements to the layout and style. […][…][…][…][…][…][…]
- Remove direct links to JSON and database backups. […]
- Drop a Blues Brothers reference from the frontpage. […]
- Debian-related:
- FreeBSD-related:
  - Switch to running the latest branch of FreeBSD. […]
- Misc:
In addition:
- kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. […]
- James Addison updated reproduce.debian.net to display the so-called 'bad' reasons hyperlink inline […] and merged the "Categorized issues" links into the "Reproduced builds" column […].
- Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package […] as well as updating some documentation […].
- Roland Clobus continued their work on reproducible 'live' images for Debian, making changes related to new clustering of jobs in openQA. […]
And finally, both Holger Levsen […][…][…] and Vagrant Cascadian performed significant node maintenance. […][…][…][…][…]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net.
- Mastodon: @reproducible_builds@fosstodon.org
- Mailing list: rb-general@lists.reproducible-builds.org
- Twitter/X: @ReproBuilds
05 Mar 2025 1:31pm GMT
Dirk Eddelbuettel: #46: Adding arm64 to r2u
Welcome to post 46 in the $R^4 series!
r2u, introduced less than three years ago in post #37, has become a runaway success. When I last tabulated downloads in early January, we were already at 33 million downloads of binary CRAN packages across the three Ubuntu LTS releases we support. These were exclusively for the 'amd64' platform of standard (Intel or AMD made) x86_64 cpus. Now we are happy to announce that arm64 support has been added and is available!
Why arm64?
The arm64 platform is already popular on (cloud) servers and is being pushed quite actively by the cloud vendors. AWS calls their cpu 'graviton', GCS calls it 'axion'. General servers call the cpu 'ampere'; on laptop / desktops it is branded 'snapdragon' or 'cortex' or something else. Apple calls their variant M1, M2, … up to M4 by now (and Linux support exists for the brave, it is less straightforward). What these have in common is a generally more favourable 'power consumed to ops provided' ratio. That makes these cheaper to run or rent on cloud providers. And in laptops they tend to last longer on a single charge too.
Distributions such as Debian, Ubuntu, and Fedora have offered arm64 for many years. In fact, the CRAN binaries of R, being made as builds at launchpad.net, have long provided arm64 in Michael's repo, and we now also mirror these to CRAN. Similarly, Docker has long supported arm64 containers. And last but not least, two issue tickets (#40, #55) had asked a while back.
So Why Now?
Good question. I still do not own any hardware with it, and I have not (yet?) bothered with the qemu-based emulation layer. The real difference maker was the recent availability of GitHub Actions instances of 'ubuntu-24.04-arm' (and now apparently also for 22.04).
So I started some simple experiments … which made it clear this was viable.
What Does It Mean for a CRAN Repo?
Great question. As is commonly known, of the (currently) 22.1k CRAN packages, a little under 5k are 'compiled'. Why does this matter? Because the Linux distributions know what they are doing. The 17k (give or take) packages that do not contain compiled code can be used as is (!!) on another platform. Debian and Ubuntu call these builds 'binary: all' as they work all platforms 'as is'. The others go by 'binary: any' and will work on 'any' platform for which they have been built. So we are looking at roughly 5k new binaries.
So How Many Are There?
As I write this in early March, roughly 4.5k of the 5k. Plus the 17.1k 'binary: all' and we are looking at near complete coverage!
So What Is The Current State?
Pretty complete. Compared to the amd64 side of things, we do not (yet?) have BioConductor support; this may be added. A handful of packages do not compile because their builds seem to assume 'Linux so must be amd64' and fail over cpu instructions. Similarly, a few packages want to download binary build blobs (my own Rblpapi among them) but none exist for arm64. Such is life. We will try to fix builds as time permits and report build issues to the respective upstream repos. Help in that endeavour would be most welcome.
But all the big and slow compiles one may care about (hello duckdb, hello arrow, …) are there. Which is pretty exciting!
How Does One Get Started?
In GitHub Actions, just pick ubuntu-24.04-arm as the platform, and use the r-ci or r2u-setup actions. A first test yaml exists and worked (though this last version had the arm64 runner commented out again). (And no, arm64 was not faster than amd64. More tests needed.)
For simple tests, Docker. The rocker/r-ubuntu:24.04 container exists for arm64 (see here), and one can add r2u support as is done in this Dockerfile, which is used by the builds and available as eddelbuettel/r2u_build:noble. I will add the standard rocker/r2u:24.04 container (or equally rocker/r2u:noble) in a day or two; I had not realised I wasn't making them for arm64.
On a real machine, such as a cloud instance or a proper installation, just use the standard r2u script for noble aka 24.04 available here. The key lines are the two lines
echo "deb [arch=amd64,arm64] https://r2u.stat.illinois.edu/ubuntu noble main" \
> /etc/apt/sources.list.d/cranapt.list
# ...
echo "deb [arch=amd64,arm64] https://cloud.r-project.org/bin/linux/ubuntu noble-cran40/" \
> /etc/apt/sources.list.d/cran_r.list
creating the apt entries, which are now arm64-aware. After that, apt works as usual, and of course r2u works as usual thanks also to bspm, so you can just install CRAN packages as you normally would and enjoy the binaries rolling in. So give it a whirl if you have access to such hardware. We look forward to feedback, suggestions, feature requests or bug reports. Let us know how it goes!
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
05 Mar 2025 2:29am GMT
Otto Kekäläinen: Will decentralized social media soon go mainstream?
In today's digital landscape, social media is more than just a communication tool - it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms - X/Twitter, Facebook and YouTube - raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse.
In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption.
This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).
Comparison of architecture and mainstream adoption potential
| Protocol | Identity System | Example | Storage model | Cost for end users | Potential |
|---|---|---|---|---|---|
| Mastodon | Tied to server domain | @ottok@mastodon.social | Federated instances | Free (some instances charge) | High |
| Bluesky | Portable (DID) | ottoke.bsky.social | Federated instances | Free | Moderate |
| Farcaster | ENS (Ethereum) | @ottok | Blockchain + off-chain | Small gas fees | Moderate |
| Lens | NFT-based (Polygon) | @ottok | Blockchain + off-chain | Small gas fees | Niche |
| Nostr | Cryptographic Keys | npub16lc6uhqpg6dnqajylkhwuh3j7ynhcnje508tt4v6703w9kjlv9vqzz4z7f | Federated instances | Free (some instances charge) | Niche |
1. Mastodon (ActivityPub)
Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by the W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network.
Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively also be called instances, relays or nodes, depending on what vocabulary a protocol standardized on.
- Identity: User identity is tied to the instance where they registered, represented as @username@instance.tld.
- Storage: Data is stored on individual instances, which federate messages to other instances based on their configurations.
- Cost: Free to use, but relies on instance operators willing to run the servers.
The protocol defines multiple activities such as:
- Creating a post
- Liking
- Sharing
- Following
- Commenting
Example Message in ActivityPub (JSON-LD Format)
{
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Create",
"actor": "https://mastodon.social/users/ottok",
"object": {
"type": "Note",
"content": "Hello from #Mastodon!",
"published": "2025-03-03T12:00:00Z",
"to": ["https://www.w3.org/ns/activitystreams#Public"]
}
}
Servers communicate across different platforms by publishing activities to their followers or forwarding activities between servers. Standard HTTPS is used between servers for communication, and the messages use JSON-LD for data representation. The WebFinger protocol is used for user discovery. There is, however, no neat way for home server discovery yet. This means that if you are browsing e.g. Fosstodon and want to follow a user and press Follow, a dialog will pop up asking you to enter your own home server (e.g. mastodon.social) to redirect you there for actually executing the Follow action with your account.
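To make the WebFinger discovery step concrete, here is a minimal sketch (the request is only constructed, not sent, and the handle is just an example) of how a handle maps to the standard /.well-known/webfinger lookup URL:

```python
from urllib.parse import urlencode

def webfinger_url(handle):
    # Turn an @user@instance.tld handle into its WebFinger discovery URL
    user, _, instance = handle.lstrip("@").partition("@")
    query = urlencode({"resource": f"acct:{user}@{instance}"})
    return f"https://{instance}/.well-known/webfinger?{query}"

# An HTTP GET on this URL returns a JSON Resource Descriptor whose "links"
# entries point at the user's ActivityPub actor document.
print(webfinger_url("@ottok@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct%3Aottok%40mastodon.social
```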
Mastodon is open source under the AGPL at github.com/mastodon/mastodon. Anyone can operate their own instance. It just requires running your own server, some skills to maintain a Ruby on Rails app with a PostgreSQL database backend, and a basic understanding of the protocol to configure federation with other ActivityPub instances.
Popularity: Already established, but will it grow more?
Mastodon has seen steady growth, especially after Twitter's acquisition in 2022, with some estimates stating it peaked at 10 million users across thousands of instances. However, its fragmented user experience and the complexity of choosing instances have hindered mainstream adoption. Still, it remains the most established decentralized alternative to Twitter.
Note that Donald Trump's Truth Social is based on the Mastodon software but does not federate with the ActivityPub network.
The ActivityPub protocol is the most widely used of its kind. Among the other popular services is Lemmy, a link-sharing service similar to Reddit. The larger ecosystem around ActivityPub is called the Fediverse, and estimates put the total active user count at around 6 million.
2. Bluesky (AT Protocol)
Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023.
Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.
- Identity: Usernames are domain-based (e.g., @user.bsky.social).
- Storage: Content is theoretically federated among various servers.
- Cost: Free to use, but relies on instance operators willing to run the servers.
Example Message in AT Protocol (JSON Format)
{
"repo": "did:plc:ottoke.bsky.social",
"collection": "app.bsky.feed.post",
"record": {
"$type": "app.bsky.feed.post",
"text": "Hello from Bluesky!",
"createdAt": "2025-03-03T12:00:00Z",
"langs": ["en"]
}
}
Popularity: Hybrid approach may have business benefits?
Bluesky reported over 3 million users by 2024, probably getting traction due to its Twitter-like interface and Jack Dorsey's involvement. Its hybrid approach - decentralized identity with centralized components - could make it a strong candidate for mainstream adoption, assuming it can scale effectively.
3. Warpcast (Farcaster Network)
Farcaster was launched in 2021 by Dan Romero and Varun Srinivasan, both former executives of the crypto exchange Coinbase, to create a decentralized but user-friendly social network. Built on the Ethereum blockchain, it could potentially offer a very attack-resistant communication medium.
However, in my own testing Farcaster does not seem to fully leverage what Ethereum could offer. First of all, there is no diversity in programs implementing the protocol, as at the moment there is only Warpcast. In Warpcast, signup requires an initial 5 USD fee that is not payable in ETH, and users need to create a new wallet address on Base, an Ethereum layer 2 network, instead of simply reusing their existing Ethereum wallet address or ENS name.
Despite this, I can understand why Farcaster may have decided to start out like this. Having a single client program may be the best strategy initially. Matthew Hodgson, one of the founders of the decentralized chat protocol Matrix, shared in his FOSDEM 2025 talk that he slightly regrets focusing too much on developing the protocol instead of making sure the app using it is attractive to end users. So it may be sensible to ensure Warpcast gets popular first, before attempting to make the Farcaster protocol widely used.
As a protocol, Farcaster's hybrid approach makes it more scalable than fully on-chain networks, giving it a higher chance of mainstream adoption if it integrates seamlessly with broader Web3 ecosystems.
- Identity: ENS (Ethereum Name Service) domains are used as usernames.
- Storage: Messages are stored in off-chain hubs, while identity is on-chain.
- Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Example Message in Farcaster (JSON Format)
{
"fid": 766579,
"username": "ottok",
"custodyAddress": "0x127853e48be3870172baa4215d63b6d815d18f21",
"connectedWallet": "0x3ebe43aa3ae5b891ca1577d9c49563c0cee8da88",
"text": "Hello from Farcaster!",
"publishedAt": 1709424000,
"replyTo": null,
"embeds": []
}
Popularity: Decentralized social media + decentralized payments a winning combo?
Ethereum founder Vitalik Buterin (warpcast.com/vbuterin) and many core developers are active on the platform. Warpcast, the main client for Farcaster, has seen increasing adoption, especially among Ethereum developers and Web3 enthusiasts. I too have a profile at warpcast.com/ottok. However, the numbers are still very low and far from the network effects needed to really take off.
Blockchain-based social media networks, particularly those built on Ethereum, are compelling because they leverage existing user wallets and persistent identities while enabling native payment functionality. When combined with decentralized content funding through micropayments, these blockchain-backed social networks could offer unique advantages that centralized platforms may find difficult to replicate, being decentralized both as a technical network and in a funding mechanism.
4. Hey.xyz (Lens Network)
The Lens Protocol was developed by decentralized finance (DeFi) team Aave and launched in May 2022 to provide a user-owned social media network. While initially built on Polygon, it has since launched its own Layer 2 network called the Lens Network in February 2024. Lens is currently the main competitor to Farcaster.
Lens stores profile ownership and references on-chain, while content is stored on IPFS/Arweave, enabling composability with DeFi and NFTs.
- Identity: Profile ownership is tied to NFTs on the Polygon blockchain.
- Storage: Content references are stored on-chain, with the data on IPFS/Arweave (like NFTs).
- Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Example Message in Lens (JSON Format)
{
"profileId": "@ottok",
"contentURI": "ar://QmExampleHash",
"collectModule": "0x23b9467334bEb345aAa6fd1545538F3d54436e96",
"referenceModule": "0x0000000000000000000000000000000000000000",
"timestamp": 1709558400
}
Popularity: Probably not as a social media site, but maybe as a protocol?
The social media side of Lens is mainly the Hey.xyz website, which seems to have fewer users than Warpcast and is even further from reaching the critical mass needed for network effects. The Lens protocol, however, has a lot of advanced features and may gain adoption as a building block for many Web3 apps.
5. Primal.net (Nostr Network)
Nostr (Notes and Other Stuff Transmitted by Relays) was conceptualized in 2020 by an anonymous developer known as fiatjaf. One of its primary design tenets is censorship resistance, and it is popular among Bitcoin enthusiasts, with Jack Dorsey being one of its public supporters. Unlike the Farcaster and Lens protocols, Nostr is not blockchain-based but simply a network of relay servers for message distribution. It does, however, use public key cryptography for identities, similar to how wallets work in crypto.
- Identity: Public-private key pairs define identity (with prefix npub...).
- Storage: Content is federated among multiple servers, which in Nostr vocabulary are called relays.
- Cost: No gas fees, but relies on relay operators willing to run the servers.
Example Message in Nostr (JSON Format)
{
"id": "note1xyz...",
"pubkey": "npub1...",
"kind": 1,
"content": "Hello from Nostr!",
"created_at": 1709558400,
"tags": [],
"sig": "sig1..."
}
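The id field in the example above is not arbitrary: per the NIP-01 specification, it is the SHA-256 hash of a canonical JSON serialization of the event fields. A minimal sketch, with a placeholder pubkey rather than a real signed event:

```python
# Sketch: deriving a Nostr event id per NIP-01. The id is the SHA-256
# of the compact JSON serialization [0, pubkey, created_at, kind, tags,
# content]. The pubkey below is a placeholder, not a real key.
import hashlib
import json

def nostr_event_id(pubkey_hex, created_at, kind, tags, content):
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

event_id = nostr_event_id("a" * 64, 1709558400, 1, [], "Hello from Nostr!")
print(event_id)  # a 64-hex-character event id
```

Because the id is content-derived, any relay or client can verify that a note has not been tampered with, independent of which relay delivered it.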
Popularity: If Jack Dorsey and Bitcoiners promote it enough?
Primal.net as a web app is pretty solid, but it does not stand out much. While Jack Dorsey has shown support by donating $1.5 million to the protocol development in December 2021, its success likely depends on broader adoption by the Bitcoin community.
Will any of these replace X/Twitter?
As decentralized social media evolves, the balance between usability, cost, and decentralization will determine which protocol achieves mainstream success. Mastodon and Bluesky have already reached millions of users, while Lens and Farcaster are growing within crypto communities. The future of social media lies in whether these decentralized alternatives can provide seamless experiences to rival traditional platforms while maintaining user freedom and autonomy.
The idea of decentralized social media is not new. One early pioneer, identi.ca, launched in 2008, only two years after Twitter, using the OStatus protocol to promote decentralization. A few years later it evolved into pump.io with the ActivityPump protocol, and also forked into GNU Social, which continued with OStatus. I remember when these happened; in 2010, Diaspora also launched with fairly large publicity. Surprisingly, both still operate (I can still post on both identi.ca and diasp.org), but the activity fizzled out years ago. The protocol, however, partially survived and evolved into ActivityPub, which is now the backbone of the Fediverse.
Who knows, given the right circumstances maybe X.com leadership will decide to change the operating model and start federating content to break out of the walled garden model.
The evolution of decentralized social media over the next decade will likely parallel developments in democracy, freedom of speech and public discourse. While the early 2010s emphasized maximum independence and freedom, the late 2010s saw growing support for content moderation to combat misinformation. The AI era introduces new challenges, potentially requiring proof-of-humanity verification for content authenticity.
This is clearly an area of development worth monitoring closely.
05 Mar 2025 12:00am GMT
04 Mar 2025
Planet Debian
Paul Tagliamonte: Reverse Engineering (another) Restaurant Pager system 🍽️
Some of you may remember that I recently felt a bit underwhelmed by the last pager I reverse engineered - the Retekess TD-158, mostly due to how intuitive their design decisions were. It was pretty easy to jump to conclusions because they had made some pretty good decisions on how to do things.
I figured I'd spin the wheel again and try a new pager system - this time I went for a SU-68G-10 pager, since I recognized the form factor as another fairly common unit I've seen around town. Off to Amazon I went, bought a set, and got to work trying to track down the FCC filings on this model. I eventually found what seemed to be the right make/model, and it, once again, indicated that this system should be operating in the 433 MHz ISM band, likely using OOK modulation. So I figured I'd start with the center of the band (again) at 433.92 MHz, take a capture, test my luck, and was greeted with a now very familiar sight.
Same as the last go-arounds, except the preamble here is a 0 symbol followed by 6-ish symbol durations of no data, followed by 25 bits of a packet. Careful readers will observe 26 symbols above after the preamble - I did too! The last 0 in the screenshot above is not actually part of the packet - rather, it's part of the next packet's preamble. Each packet is packed in pretty tight.
By Hand Demodulation
Going off the same premise as last time, I figured I'd give it a manual demod and see what shakes out (again). This is now the third time I've run this play, so check out either of my prior two posts for a better-written description of what's going on here - I'll skip the details since I'd just be copy-pasting from those posts. Long story short, I demodulated a call for pager 1, a call for pager 10, and a power-off command.
What    | Bits
Call 1  | 1101111111100100100000000
Call 10 | 1101111111100100010100000
Off     | 1101111111100111101101110
A few things jump out at me here - the first 14 bits are fixed (in my case, 11011111111001), which means some mix of preamble, system id, or other system-wide constant. Additionally, the last 9 bits also look like they are our pager id - the 1 and 10 pager numbers (LSB bit order) jump right out (100000000 and 010100000, respectively). That just leaves the two remaining bits, which look to be the "action" - 00 for a "Call", and 11 for a "Power off". I don't super love this, since the command has two bits rather than one, the base station ID seems really long, and a 9-bit pager ID is just weird. Also, what is up with that power-off pager id? Weird. So, let's go and see what we can do to narrow down and confirm things by hand.
Testing bit flips
Rather than call it a day at that, I figure it's worth a bit of diligence to make sure it's all correct - so I figured we should try sending packets to my pagers and see how they react to different messages after flipping bits in parts of the packet.
I implemented a simple base station for the pagers using my Ettus B210mini, and threw together a simple OOK modulator and transmitter program which allows me to send specifically crafted test packets on frequency. Implementing the base station is pretty straightforward: because of the modulation of the signal (OOK), it's mostly a matter of setting a buffer to 1 and 0 for where the carrier signal is on or off, timed to the sample rate, and sending that off to the radio. If you're interested in a more detailed writeup of the steps involved, there's a bit more in my christmas tree post.
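The buffer-building idea can be sketched roughly like this; the sample rate is an assumption for illustration, and this only produces the amplitude buffer, not the actual radio plumbing:

```python
# Rough sketch of an OOK amplitude buffer: each bit becomes a run of
# carrier-on (1.0) or carrier-off (0.0) samples at the symbol duration.
# The sample rate is an assumption for this sketch, not from the post.
SAMPLE_RATE = 1_000_000  # 1 Msps, assumed for illustration
SYMBOL_US = 1300         # symbol duration observed in the captures

def ook_samples(bits: str) -> list:
    samples_per_symbol = SAMPLE_RATE * SYMBOL_US // 1_000_000
    out = []
    for bit in bits:
        amplitude = 1.0 if bit == "1" else 0.0
        out.extend([amplitude] * samples_per_symbol)
    return out

buf = ook_samples("1101")
print(len(buf))  # 4 symbols * 1300 samples each = 5200
```

A real transmitter would prepend the preamble timing and hand this buffer to the SDR at the chosen sample rate.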
First off, I'd like to check the base id. I want to know if all the bits in what I'm calling the "base id" are truly part of the base station ID, or perhaps they have some other purpose (version, preamble?). I wound up following a three-step process for every base station id:
- Starting with an unmodified call packet for the pager under test:
- Flip the Nth bit, and transmit the call. See if the pager reacts.
- Hold "SET", and pair the pager with the new packet.
- Transmit the call. See if the pager reacts.
- After re-setting the ID, transmit the call with the physical base station, see if the pager reacts.
- Starting with an unmodified off packet for the pager system
- Flip the Nth bit, transmit the off, see if the pager reacts.
What wound up happening is that changing any bit in the first 14 bits meant that the packet no longer worked with any pager until it was re-paired, at which point it began to work again. This likely means the first 14 bits are part of the base station ID - not static between base stations, nor some constant like a version number. All bits appear to be used.
I repeated the same process with the "command" bits, and found that only 11 and 00 caused the pagers to react for the pager ids I've tried.
I repeated this process one last time with the "pager id" bits, and found that the last bit in the packet isn't part of the pager ID - it can be either a 1 or a 0 and still causes the pager to react as if it were a 0. This means the last bit has no impact on either a power off or a call, and all messages sent by my base station always have it set to 0. It's not clear if this is used by anything - likely not, since setting the bit doesn't result in any change of behavior I can see yet.
Final Packet Structure
After playing around with flipping bits and testing, here's the final structure I was able to come up with, based on the behavior I observed from transmitting hand-crafted packets and watching pagers buzz:
Commands
The command section comes in two flavors - either a "call" or an "off" command.
Type | Id (2 bits) | Description
Call | 00          | Call the pager identified by the id in pager id
Off  | 11          | Request pagers power off, pager id is always 10110111
As for the actual RF PHY characteristics, here's my best guesses at what's going on with them:
What             | Description
Center Frequency | 433.92 MHz
Modulation       | OOK
Symbol Duration  | 1300us
Bits             | 25
Preamble         | 325us of carrier, followed by 8800us of no carrier
I'm not 100% on the timings, but they appear to be close enough to work reliably. Same with the center frequency: it's roughly right, but there may be a slight difference I'm missing.
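From the timings above, it's easy to sanity-check how tightly packets are packed on the air:

```python
# Back-of-the-envelope airtime from the timing table: preamble
# (carrier burst plus gap) followed by 25 symbols of 1300us each.
preamble_us = 325 + 8800
packet_us = preamble_us + 25 * 1300
print(packet_us)  # 41625, i.e. roughly 41.6 ms per packet
```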
Lingering Questions
This was all generally pretty understandable - another system with some good decisions that wasn't too bad to reverse engineer. It was a bit more fun than the last one, since there was a bit more ambiguity here that needed some follow-up to confirm things, but still not crazy.
I am left with a few questions, though - which I'm kinda interested in understanding, but I'll likely need a lot more data and/or original source:
Why is the "command" two bits here? This was a bit tough to understand given the number of bits at their disposal - with the one last bit at the end of the packet that doesn't seem to do anything, there's no reason this couldn't have been a 16-bit base station id and an 8-bit pager id along with a single-bit command (call or off).
When sending an "off" - why is power off that bit pattern? Other pager IDs don't seem to work with "off", so it has some meaning, but I'm not sure what that is. You press and hold 9 on the physical base station, but the code winds up coming out to 0xED, 237, or maybe -19 if it's signed. I can't quite figure out why it's this value. Are there other codes?
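The arithmetic behind that off-code observation is easy to reproduce: reading the fixed 8-bit off pager id LSB-first gives 0xED, which is 237 unsigned or -19 as a signed byte.

```python
# Reproduce the off-code arithmetic: the fixed off pager id bits,
# read LSB-first, as hex, unsigned, and signed 8-bit values.
off_bits = "10110111"
value = int(off_bits[::-1], 2)  # reverse for LSB-first interpretation
signed = value - 256 if value >= 128 else value
print(hex(value), value, signed)  # 0xed 237 -19
```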
Finally - what's up with the last bit? Why is it 25 bits and not 24? It must take more work to process something that isn't 8 bit aligned - and all for something that's not being used!
04 Mar 2025 3:00pm GMT
03 Mar 2025
Planet Debian
Bits from Debian: Bits from the DPL
Dear Debian community,
This is bits from the DPL for February.
Ftpmaster team is seeking new members
In December, Scott Kitterman announced his retirement from the project. I personally regret this, as I vividly remember his invaluable support during the Debian Med sprint at the start of the COVID-19 pandemic. He even took time off to ensure new packages cleared the queue in under 24 hours. I want to take this opportunity to personally thank Scott for his contributions during that sprint and for all his work in Debian.
With one fewer FTP assistant, I am concerned about the increased workload on the remaining team. I encourage anyone in the Debian community who is interested to consider reaching out to the FTP masters about joining their team.
If you're wondering about the role of the FTP masters, I'd like to share a fellow developer's perspective:
"My read on the FTP masters is:
- In truth, they are the heart of the project.
- They know it.
- They do a fantastic job."
I fully agree and see it as part of my role as DPL to ensure this remains true for Debian's future.
If you're looking for a way to support Debian in a critical role where many developers will deeply appreciate your work, consider reaching out to the team. It's a great opportunity for any Debian Developer to contribute to a key part of the project.
Project Status: Six Months of Bug of the Day
In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks effort, which I intended to start with a Bug of the Day project. Another idea was an Autopkgtest of the Day, but this has been postponed due to limited time resources; I cannot run both projects in parallel.
The original goal was to provide small, time-bound examples for newcomers. To put it bluntly: in terms of attracting new contributors, it has been a failure so far. My offer to explain individual bug-fixing commits in detail, if needed, received no response, and despite my efforts to encourage questions, none were asked.
However, the project has several positive aspects: experienced developers actively exchange ideas, collaborate on fixing bugs, assess whether packages are worth fixing or should be removed, and work together to find technical solutions for non-trivial problems.
So far, the project has been engaging and rewarding every day, bringing new discoveries and challenges - not just technical, but also social. Fortunately, in the vast majority of cases, I receive positive responses and appreciation from maintainers. Even in the few instances where help was declined, it was encouraging to see that in two cases, maintainers used the ping as motivation to work on their packages themselves. This reflects the dedication and high standards of maintainers, whose work is essential to the project's success.
I once used the metaphor that this project is like wandering through a dark basement with a lone flashlight - exploring aimlessly and discovering a wide variety of things that have accumulated over the years. Among them are true marvels with popcon >10,000, ingenious tools, and delightful games that I only recently learned about. There are also some packages whose time may have come to an end - but each of them reflects the dedication and effort of those who maintained them, and that deserves the utmost respect.
Leaving aside the challenge of attracting newcomers, what have we achieved since August 1st last year?
- Fixed more than one package per day, typically addressing multiple bugs.
- Added and corrected numerous Homepage fields and watch files.
- The most frequently patched issue was "Fails To Cross-Build From Source" (all including patches).
- Migrated several packages from cdbs/debhelper to dh.
- Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
- Integrated all affected packages into Salsa and enabled Salsa CI.
- Approximately half of the packages were moved to appropriate teams, while the rest are maintained within the Debian or Salvage teams.
- Regularly performed team uploads, ITS, NMUs, or QA uploads.
- Filed several RoQA bugs to propose package removals where appropriate.
- Reported multiple maintainers to the MIA team when necessary.
With some goodwill, you can see a slight impact on the trends.debian.net graphs (thank you Lucas for the graphs), but I would never claim that this project alone is responsible for the progress. What I have also observed is the steady stream of daily uploads to the delayed queue, demonstrating the continuous efforts of many contributors. This ongoing work often remains unseen by most-including myself, if not for my regular check-ins on this list. I would like to extend my sincere thanks to everyone pushing fixes there, contributing to the overall quality and progress of Debian's QA efforts.
If you examine the graphs for "Version Control System" and "VCS Hosting" with the goodwill mentioned above, you might notice a positive trend since mid-last year. The "Package Smells" category has also seen reductions in several areas: "no git", "no DEP5 copyright", "compat <9", and "not salsa". I'd also like to acknowledge the NMUers who have been working hard to address the "format != 3.0" issue. Thanks to all their efforts, this specific issue never surfaced in the Bug of the Day effort, but their contributions deserve recognition here.
The experience I gathered in this project taught me a lot and inspired some follow-up ideas that we should discuss at a sprint at DebCamp this year.
Finally, if any newcomer finds this information interesting, I'd be happy to slow down and patiently explain individual steps as needed. All it takes is asking questions on the Matrix channel to turn this into a "teaching by example" session.
By the way, for newcomers who are interested, I used quite a few abbreviations-all of which are explained in the Debian Glossary.
Sneak Peek at Upcoming Conferences
I will join two conferences in March - feel free to talk to me if you spot me there.
-
FOSSASIA Summit 2025 (March 13-15, Bangkok, Thailand) Schedule: https://eventyay.com/e/4c0e0c27/schedule
-
Chemnitzer Linux-Tage (March 22-23, Chemnitz, Germany) Schedule: https://chemnitzer.linux-tage.de/2025/de/programm/vortraege
Both events will have a Debian booth - come say hi!
Kind regards, Andreas.
03 Mar 2025 11:00pm GMT
Lisandro Damián Nicanor Pérez Meyer: Going to Embedded World 2025 in Nuremberg
This year I'll be participating in Embedded World 2025 in Nuremberg, representing the company I work for, ICS. You will be able to find me at the Automotive Grade Linux booth in hall 4, 4-209.
If you are around be sure to come and say hi, and why not, exchange PGP/GPG keys!
03 Mar 2025 1:44pm GMT
02 Mar 2025
Planet Debian
Lisandro Damián Nicanor Pérez Meyer: PGP/GPG transition from 0x6286A7D0 to 0xB48C1072
I am currently transitioning my PGP/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.
Let's put this in plain text, signed with both keys:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
I am currently transitioning my GPG/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.
This file is first signed with the new key and then with the old one.
- -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEqUyfv6Sap81PQLufXpAwzLSMEHIFAmfE6RwACgkQXpAwzLSM
EHJpUBAAwMAbOwGcRiuX/aBjqDMA9HerRgimNWE9xA35Asg3F+A5/AFrBo+BDng3
jviCGxR6YdicSLZptaScLuRnqG1i/OcochGDxvHYVQ9I/G9SuHB7ylqD7zDnO5pw
Lldwx9jovkszgXMC+vs1E9tQ4vpuWNQ1I7q90rdikywhvNdNs8XUSCUNCLol5fzm
u64hcKex3pwt7wYs6TxtgO5DLpp//5Z6NoZ5f/esC0837zqy5Py6+7scN3tgRmXj
SyALlhfOCsy4+v22K5xk0VNelEWUg+VKqgMjPYbEfGQ3e4LXId6gGlKF+OuXCJX5
Eqi2leO/O3c+1MZ8LMh3YQft1/TmYktASMTdwV7Y87qMgVkXsJqIvw8d9VNlZvET
B3MMsuPK9VNKCokbSiHwB2ZQR235Hq6LPrBfMPnoVb5QzUgIk8Kz92wM3NWVAjzE
oj/660SZ7SfbBi6qmQyMjYKSKN+kSZazQfoUZo0fK1Y1mywN/XkeeV+gq/ZiYPhI
QLbjEfoeHEVcufgQCU0PvUuKr/+ud8BAwdH/9YWxYnObAzXFxgOJ9AvDqKxbD+rw
MVXCU4xMtNHHDqgZ+pSdB0br/bYtIqh1YsFfHw16lUgj9lcmfnujhl+h700pob6d
oArO0Bjb0bM9PTRRAn3CMiz2UeerBzY6gvaSnO3oBQc/UAx3RgA=
=r9Sr
- -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEEt36hKwjsrvwSzE8q2RfQGKGp9AFAmfE6U8ACgkQq2RfQGKG
p9DEWA/+N1AtaPwVGRi3OTcC+mzjjVd3oB4H4E80559FCbWQLvbnlazCTgdVHxp5
Pjlm4I/hKYSaWNirUvE7Dq7LNWYYhZRBunXc/VrrX2fkxj99D+F9co5fXYO3fsQn
vlz1UZzq8OrvWJo5Cv65CkblQReB31SNY//gBk5SjaeL4bnH3qOLCn6gGrqIgkyj
qb8vQzk9ssb0b2P2hNJlkYQA20LUshyShyfnaAJuEtmDYp3F3fWfuyTPEznJZ0AJ
efxfkYqQIznY36Om8dW0ec5LI3Xb+Obj4ccfNhWBfVG4RKruKHEhQCDtZbMSGPDn
ns4yOl5cqbN/2Gqa/Ww+LafWPsa73NYQNDOIM2XhVFLf2wikGMnb2bew3iZrEBo5
BORucyd1sBFsdD2tXAZEaXBpuCU+7mI9bJz9Co2+NWf1+IDaKyvJSgl7cQxuUtd4
tp7mDB7Czf4yDK+QHqeWY46DtU0dlDpyOt2IijkJzhH6nL9cfo+W4JUFJrhd42Tr
fRqjt7WeGrauX+d8wfvVV/KFrCkuw51ojLAtztvH7iwDP85wAOu95AlT1kT4ZwlE
uEmdgtYE3GGwQKP2osndJZwic/tZuKrm7p5xFYJr8N95nsRNlk1ia4EkyvQbe49m
2+JHO8Q0EjUGfV2+bSw4Eupi6qEgWp2s4sIGpHEGzWYfNqmozWE=
=A5kI
-----END PGP SIGNATURE-----
The above can be found as a file here.
02 Mar 2025 11:16pm GMT
Jonathan McDowell: RIP: Steve Langasek
[I'd like to stop writing posts like this. I've been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I've had to write, and I bottled out and wrote something that was mostly just factual, because it wasn't the place), and I've decided I just have to accept this won't be the post I want it to be, but posted is better than languishing in drafts.]
Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn't entirely unexpected, but that doesn't make it any easier.
I've struggled to work out what to say about Steve. I've seen many touching comments from others in Debian about their work with him, but what that's mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).
My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We'd decided to walk to a local mall to meet up with some other folk (I can't recall how they were getting there, but it wasn't walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don't know how that managed to cement a friendship (neither of us saw it as the near death experience others feared we'd had), but it did.
Unlike others I never texted Steve much; we'd occasionally chat on IRC, but nothing major. That didn't seem to matter when we actually saw each other in person though, we just picked up like we'd seen each other the previous week. DebConf became a recurring theme of when we'd see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn't the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did road trip from Dublin to Cork with him. He took me to a volcano.
Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.
The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he's gone.
02 Mar 2025 4:56pm GMT
Colin Watson: Free software activity in February 2025
Most of my Debian contributions this month were sponsored by Freexian.
You can also support my work directly via Liberapay.
OpenSSH
OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn't affected by either vulnerability.
Although I'm not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.
I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.
PuTTY
I upgraded to the new upstream version 0.83.
GCC 15 build failures
I fixed build failures with GCC 15 in a few packages:
Python team
A lot of my Python team work is driven by its maintainer dashboard. Now that we've finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the "uscan error" section. uscan is typically scraping upstream web sites to figure out whether new versions are available, and so it's easy for its configuration to become outdated or broken. Most of this work is pretty boring, but it can often reveal situations where we didn't even realize that a Debian package was out of date. I fixed these packages:
- cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
- django-assets
- django-celery-email
- django-sass
- django-yarnpkg
- json-tricks
- mercurial-extension-utils
- pydbus
- pydispatcher
- pylint-celery
- pyspread
- pytest-pretty
- python-apptools
- python-django-libsass (contributed a packaging fix upstream in passing)
- python-django-postgres-extra
- python-django-waffle
- python-ephemeral-port-reserve
- python-ifaddr
- python-log-symbols
- python-msrest
- python-msrestazure
- python-netdisco
- python-pathtools
- python-user-agents
- sinntp
- wchartype
I upgraded these packages to new upstream versions:
- cssutils (contributed a packaging tweak upstream)
- django-iconify
- django-sass
- domdf-python-tools
- extra-data (fixing a numpy 2.0 failure)
- flufl.i18n
- json-tricks
- jsonpickle
- mercurial-extension-utils
- mod-wsgi
- nbconvert
- orderly-set
- pydispatcher (contributed a Python 3.12 fix upstream)
- pylint
- pytest-rerunfailures
- python-asyncssh
- python-box (contributed a packaging fix upstream)
- python-charset-normalizer
- python-django-constance
- python-django-guid
- python-django-pgtrigger
- python-django-waffle
- python-djangorestframework-simplejwt
- python-formencode
- python-holidays (contributed a test fix upstream)
- python-legacy-cgi
- python-marshmallow-polyfield (fixing a test failure)
- python-model-bakery
- python-mrcz (fixing a numpy 2.0 failure)
- python-netdisco
- python-npe2
- python-persistent
- python-pkginfo (fixing a test failure)
- python-proto-plus
- python-requests-ntlm
- python-roman
- python-semantic-release
- python-setproctitle
- python-stdlib-list
- python-trustme
- python-typeguard (fixing a test failure)
- python-tzlocal
- pyzmq
- setuptools-scm
- sqlfluff
- stravalib
- tomopy
- trove-classifiers
- xhtml2pdf (fixing CVE-2024-25885)
- xonsh
- zodbpickle
- zope.deprecation
- zope.testrunner
In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.
I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.
I fixed or helped to fix various other build/test failures:
- cython
- dask
- deepdish
- hickle (contributed upstream)
- mdp (contributed upstream)
- mypy
- pillow
- pynput
- python-fonticon-fontawesome6
- python-persistent (contributed upstream)
- python-srsly
I dropped support for the old setup.py ftest command from zope.testrunner upstream.
I fixed various odds and ends of bugs:
- django-memoize: autopkgtest must be marked superficial
- extra-data: extra-data: please add autopkgtests (to add coverage for python3-numpy)
- fpylll: missing dependency on numpy abi
- python-box: autopkgtest must be marked superficial
- python-hdmedians: missing dependency on numpy abi
- python-legacy-cgi: missing requirement: openstack-pkg-tools
- python-tzlocal: doesn't run any tests during the build or as autopkgtest
- requests: will FTBFS during trixie support period (contributed supporting fix upstream)
- setuptools-scm: project was renamed from setuptools_scm to setuptools-scm
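The "superficial" marking mentioned above is also a Restrictions field in debian/tests/control: it tells the migration machinery that a passing test (such as a bare import check) provides no significant coverage and should not count as a positive signal. A minimal sketch (the test name is hypothetical):

```
Tests: import-check
Depends: python3-django-memoize
Restrictions: superficial
```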
Installer team
Following up on last month, I merged and uploaded Helmut's /usr-move fix.
02 Mar 2025 1:49pm GMT
01 Mar 2025
Planet Debian
Junichi Uekawa: Network is unreliable.
Network is unreliable. It seems my router is trying to reconnect every 20 seconds after something triggers it.
01 Mar 2025 10:01pm GMT
Debian Brasil: MiniDebConf Belo Horizonte 2024 - a brief report
by Paulo Henrique de Lima Santana (phls), originally published 06 Jun 2024
From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in the city of Belo Horizonte.
This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018) and in Brasília in 2023. Other MiniDebConf editions were held within Free Software events such as FISL and Latinoware, as well as online. See our event history.
In parallel with MiniDebConf, on the 27th (Saturday), FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America promoting Free Software, and it has been held since 2005 simultaneously in several cities.
MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present throughout the event.
2024 edition numbers
During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of:
- 6 rooms in parallel on Saturday;
- 2 auditoriums in parallel on Monday and Tuesday;
- 30 talks/BoFs for all levels;
- 5 workshops for hands-on activities;
- 9 lightning talks on general topics;
- 1 Live Electronics performance with Free Software;
- Install fest to install Debian on attendees' laptops;
- BSP (Bug Squashing Party);
- Uploads of new or updated packages.
The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants.
- Total people registered: 399
- Total attendees in the event: 224
Of the 224 participants, 15 were official Brazilian contributors: 10 DDs (Debian Developers) and 5 DMs (Debian Maintainers), in addition to several unofficial contributors.
The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department, who made the event at UFMG possible, and 37 volunteers who helped during the event.
As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees.
See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024.
The difference between the number of people registered and the number of attendees is probably explained by the fact that there is no registration fee, so people who decide not to attend suffer no financial loss.
The 2024 edition of MiniDebconf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.
Activities
The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities.
On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças, and souvenirs, as well as to taste some local foods.
After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant.
With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha.
We went back to the hotel, and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizza.
Crowdfunding
For the third time we ran a crowdfunding campaign, and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier, R$ 3,000.00. When we reached this goal, we set a new one, equivalent to a gold tier plus a silver tier (R$ 5,000.00). And again we achieved it. So we proposed as a final goal the value of gold + silver + bronze tiers, equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people!
Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated.
Food, accommodation and/or travel grants for participants
Each edition of MiniDebConf brought some innovation or a new benefit for attendees. In this year's edition in Belo Horizonte, as at DebConfs, we offered bursaries for food, accommodation and/or travel to help those who would like to come to the event but needed some kind of assistance.
In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request.
Number of people benefited:
- Food: 69
- Accommodation: 20
- Travel: 18
The food bursary provided lunch and dinner every day. Lunches included attendees who live in Belo Horizonte and the surrounding region; dinners were covered for attendees who also received accommodation and/or travel grants. Accommodation was provided at the BH Jaraguá Hotel, and travel grants covered airplane or bus tickets, or fuel (for those who came by car or motorbike).
Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian Project Leader, Jonathan Carter, who promptly approved it.
In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him.
The experience of offering the bursaries was really good because it allowed several people to come from other cities.
Photos and videos
You can watch recordings of the talks at the links below:
- YouTube
- PeerTube
- video.debian.net
And see the photos taken by several collaborators in the link below:
- Nextcloud
Thanks
We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.
Sponsors
Gold:
Silver:
Bronze:
- BRDSoft - Tecnologias para TI e Telecomunicações
- EITA - Cooperativa de Trabalho Educação, Informação e Tecnologia para Autogestão
Supporters
- ICTL - Instituto para Conservação de Tecnologias Livres
- Nazinha Alimentos
- DACOMPSI
Organizers
- Projeto Debian
- Comunidade Debian Brasil
- Comunidade Debian MG
- DCC/UFMG - Departamento de Ciência da Computação da Universidade Federal de Minas Gerais
01 Mar 2025 5:40pm GMT
Debian Brasil: Debian Day 2024 in Santa Maria - Brazil
by Andrew Gonçalves, originally published 20 Aug 2024
Debian Day in Santa Maria - RS 2024 was held after a five-year hiatus since the previous edition of the event. It took place on the morning of August 16, in the Blue Hall of the Franciscan University (UFN), with support from the Debian community and the Computing Practices Laboratory of UFN.
The event was attended by students from all semesters of the Computer Science, Digital Games, and Information Systems courses, and we had the opportunity to talk to the participants.
Around 60 students attended a lecture introducing them to Free and Open Source Software and Linux, and were introduced to the Debian project: the philosophy behind it, how it works in practice, and the opportunities that being part of Debian has opened up for participants.
After the talk, a packaging demonstration was given by local DD Francisco Vilmar, who demonstrated in practice how software packaging works in Debian.
I would like to thank all the people who helped us:
- Debian Project
- Professor Ana Paula Canal (UFN)
- Professor Sylvio André Garcia (UFN)
- Laboratory of Computing Practices
- Francisco Vilmar (local DD)
And thanks to all the participants who attended this event asking intriguing questions and taking an interest in the world of Free Software.
Photos:
01 Mar 2025 5:39pm GMT