20 Jan 2026
Planet Debian
Sahil Dhiman: Conferences, why?
Back in December, I was working to help organize multiple different conferences. One has already happened; the rest are still works in progress. That's when the thought struck me: why so many conferences, and why do I work for them?
I have been fairly active in the scene since 2020. For most conferences, I usually arrive in the city late on the previous day and leave on the day the conference closes. Conferences for me are the place to meet friends and new folks and hear about their work, new developments, and what's happening in their areas of interest. Talking to folks makes me naturally happy, and those folks in turn inspire me to work. Nothing can replace a passionate technical and social discussion, which stretches way into dinner parties and later.
For most conference discussions now, I just show up wherever needed without a set role (DebConf is probably an exception to it). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the actual thing.
Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet distant friends, talk, and work on something.
No doubt, one reason for all these conferences is evangelism for, let's say, Free Software, OpenStreetMap, Debian, etc., which is good and needed for the pipeline. But for me, the primary reason would always be meeting folks.
20 Jan 2026 2:27am GMT
19 Jan 2026
Planet Debian
Dirk Eddelbuettel: RApiDatetime 0.0.11 on CRAN: Micro-Maintenance

A new (micro) maintenance release of our RApiDatetime package is now on CRAN, coming only a good week after the 0.0.10 release which itself had a two year gap to its predecessor release.
RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages, which this package aims to change.
This release adds a single PROTECT (and matching UNPROTECT) around one variable, as the rchk container and service run by Tomas now flagged this. Which is … somewhat peculiar, as this is old code also 'borrowed' from R itself, but there is no point arguing so I just added it.
Details of the release follow based on the NEWS file.
Changes in RApiDatetime version 0.0.11 (2026-01-19)
- Add PROTECT (and UNPROTECT) to appease rchk
Courtesy of my CRANberries, there is also a diffstat report for this release.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
19 Jan 2026 11:21pm GMT
Isoken Ibizugbe: Mid-Point Project Progress
Halfway There
Hurray!
I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast, and it feels short because there is still so much exciting work to do.
I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I've transitioned into full automation and taken a liking to reading openQA upstream documentation when I have issues or for reference.
In all of this, I've committed over 30 hours a week to the project. This dedicated time has allowed me to look in-depth into the Debian ecosystem and automated quality assurance.
The Original Roadmap vs. Reality
Reviewing my 12-week goal, which included extending automated tests for "live image testing," "installer testing," and "documentation," I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.
Accomplishments
I have successfully extended the apps_startstop tests for two Desktop Environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.
- Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
- LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.
Solving for "Synergy"
One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.
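The symlink trick can be sketched in shell. The directory layout and file name below are hypothetical, just to illustrate the idea, and are not the actual openQA test repository structure:

```shell
# Hypothetical layout: one canonical Perl test module, shared by
# several desktop environments through symbolic links.
mkdir -p tests/gnome tests/cinnamon tests/lxqt

# The canonical test lives in one DE's directory...
cat > tests/gnome/firefox.pm << 'EOF'
# shared Perl test module (placeholder)
EOF

# ...and the other DEs just link to it.
ln -s ../gnome/firefox.pm tests/cinnamon/firefox.pm
ln -s ../gnome/firefox.pm tests/lxqt/firefox.pm

# Any fix to tests/gnome/firefox.pm now applies to all three DEs.
ls -l tests/cinnamon/firefox.pm
```

Needles can be shared the same way when the application renders identically across desktops, so one edit propagates everywhere.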

The Contributor Guide
During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.
To help, I created a contributor guide [MR !97]. This guide addresses the project's writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.
Looking Forward
For the second half of the internship, I plan to:
- Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
- Explore new coverage: Move beyond start-stop tests into deeper functional testing.
This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.
I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.
Here's to the next 6 weeks!
19 Jan 2026 9:15pm GMT
Hellen Chemtai: Internship Highlights at Outreachy: My Journey with Debian OpenQA
Highlights
Hello world!
I am an intern here at Outreachy, working with the Debian OpenQA image testing team. The work consists of testing images with OpenQA. The internship has reached its midpoint, and here are some of the highlights I have had so far.
- The mentors: Roland Clobus, Tassia Camoes, and Philip Hands are very good mentors. I like the constant communication and the help I get while working on the project. I enjoy working with this team.
- The community: The contributors, mentors, and the greater SUSE OpenQA community are constantly in communication. I learn a lot from these meetings.
- The women network: The women of Debian meet and network. The meetings are interactive, and we are encouraged to participate.
- The project: We are making progress one step at a time. Isoken Ibizugbe is my fellow intern, working on start-stop tests; I am working on live installer tests.
Communication
I have learned a lot during my internship. I have always been on the silent path of life with little communication; I once told myself that being a developer would hide me behind a computer and let me avoid socializing. Being in open source, and especially this internship, has helped me with communication and networking. The teamwork in the project has helped me a lot:
- My mentors encourage communication: giving project updates and saying when we get stuck.
- My mentors have scheduled weekly meetings to discuss the project.
- We are constantly invited to the SUSE meetings by mentors or by Sam Thursfield, who is part of the team.
- Female contributors are encouraged to join the Debian Women monthly meetings for networking.
Lessons so far
I have had challenges, solved problems, and learned new skills along the way:
- I have learned Perl, OpenQA configuration, and needle editing, and improved my Linux and Git skills.
- I have seen how various images are installed, booted, and run, through live viewing of tests.
- I have solved many test errors and learned to work with applications that are needed for OS installations, e.g. Rufus.
- I have learned how virtual machines work and how to solve errors related to them.
So far so good. I am grateful to be a contributor towards the project and hope to continue learning.
19 Jan 2026 3:28pm GMT
Jonathan Dowland: FOSDEM 2026

I'm going to FOSDEM 2026!
I'm presenting in the Containers dev room. My talk is Java Memory Management in Containers and it's scheduled as the first talk on the first day. I'm the warm-up act!
The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other forms), but sadly there's no Java devroom this year. There's a story about that, but it's not mine to tell.
Please recommend to me any interesting talks! Here's a few that caught my eye:
Debian/related:
- Package managers à la carte: A Formal Model of Dependency Resolution
- 32 years of Debian: how a do-ocracy keeps evolving
- Free as in Burned Out: Who Really Pays for Open Source?
Containers:
Research:
- Data science from the command line: a look back at 2 years of using xan
- Research software engineering: a movement and its instantiation at the University of Illinois Urbana-Champaign
Other:
- Window Managers after Xorg (Mir)
- Charming Gray Buttons of the XX century: how widget toolkits evolved with computer architectures
- PAW, a programmable DAW
19 Jan 2026 2:12pm GMT
Francesco Paolo Lovergine: A Terramaster NAS with Debian, take two.
After experimenting at home, the very first professional-grade NAS from Terramaster arrived at work too, with 12 HDD bays and possibly a pair of M.2 NVMe cards. In this case I again installed a plain Debian distribution, but HDD monitoring required some configuration adjustments to run smartd properly.
A decent approach to data safety is to run regularly scheduled short and long SMART tests on all disks to detect potential damage. Running such tests on all disks at once isn't ideal, so I set up a script that creates a staggered configuration, testing groups of disks at different times. Note that the device list must be re-read at each boot, because device names and their order can change.
Of course, the same principle (short/long tests at regular intervals through the week) should also be applied to simpler configurations, as in the case of my home NAS with a pair of RAID1 devices.
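For such a simple box the same staggering can be written by hand. A minimal smartd.conf sketch for two disks (/dev/sda and /dev/sdb are assumed device names; the -s regexes follow the T/MM/DD/d/HH scheme from the smartd.conf man page):

```
# Long self-test on Saturdays, staggered by two hours per disk;
# short self-test every day, twelve hours after the long-test slot
/dev/sda -a -o on -S on -s (L/../../6/01) -s (S/../.././13) -m root
/dev/sdb -a -o on -S on -s (L/../../6/03) -s (S/../.././15) -m root
```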
What follows is a simple script to create a staggered smartd.conf at boot time:
#!/bin/bash
#
# Save this as /usr/local/bin/create-smartd-conf.sh
#
# Dynamically generate smartd.conf with staggered SMART test scheduling
# at boot time based on discovered ATA devices
# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
# -d TYPE Set the device type: ata, scsi[+TYPE], nvme[,NSID],
# sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N],
# usbprolific, usbsunplus, sntasmedia, sntjmicron[,NSID], sntrealtek,
# ... (platform specific)
# -T TYPE Set the tolerance to one of: normal, permissive
# -o VAL Enable/disable automatic offline tests (on/off)
# -S VAL Enable/disable attribute autosave (on/off)
# -n MODE No check if: never, sleep[,N][,q], standby[,N][,q], idle[,N][,q]
# -H Monitor SMART Health Status, report if failed
# -s REG Do Self-Test at time(s) given by regular expression REG
# -l TYPE Monitor SMART log or self-test status:
# error, selftest, xerror, offlinests[,ns], selfteststs[,ns]
# -l scterc,R,W Set SCT Error Recovery Control
# -e Change device setting: aam,[N|off], apm,[N|off], dsn,[on|off],
# lookahead,[on|off], security-freeze, standby,[N|off], wcache,[on|off]
# -f Monitor 'Usage' Attributes, report failures
# -m ADD Send email warning to address ADD
# -M TYPE Modify email warning behavior (see man page)
# -p Report changes in 'Prefailure' Attributes
# -u Report changes in 'Usage' Attributes
# -t Equivalent to -p and -u Directives
# -r ID Also report Raw values of Attribute ID with -p, -u or -t
# -R ID Track changes in Attribute ID Raw value with -p, -u or -t
# -i ID Ignore Attribute ID for -f Directive
# -I ID Ignore Attribute ID for -p, -u or -t Directive
# -C ID[+] Monitor [increases of] Current Pending Sectors in Attribute ID
# -U ID[+] Monitor [increases of] Offline Uncorrectable Sectors in Attribute ID
# -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
# -v N,ST Modifies labeling of Attribute N (see man page)
# -P TYPE Drive-specific presets: use, ignore, show, showall
# -a Default: -H -f -t -l error -l selftest -l selfteststs -C 197 -U 198
# -F TYPE Use firmware bug workaround:
# none, nologdir, samsung, samsung2, samsung3, xerrorlba
# -c i=N Set interval between disk checks to N seconds
# # Comment: text after a hash sign is ignored
# \ Line continuation character
# Attribute ID is a decimal integer 1 <= ID <= 255
# except for -C and -U, where ID = 0 turns them off.
set -euo pipefail
# Test schedule configuration
BASE_SCHEDULE="L/../../6" # Long test on Saturdays
TEST_HOURS=(01 03 05 07) # 4 time slots: 1am, 3am, 5am, 7am
DEVICES_PER_GROUP=3
main() {
    # Get array of device names (e.g., sda, sdb, sdc)
    mapfile -t devices < <(ls -l /dev/disk/by-id/ | grep ata | awk '{print $11}' | grep sd | cut -d/ -f3 | sort -u)

    if [[ ${#devices[@]} -eq 0 ]]; then
        exit 1
    fi

    # Start building config file
    cat << EOF
# smartd.conf - Auto-generated at boot
# Generated: $(date '+%Y-%m-%d %H:%M:%S')
#
# Staggered SMART test scheduling to avoid concurrent disk load
# Long tests run on Saturdays at different times per group
#
EOF

    # Process devices into groups
    local group=0
    local count_in_group=0

    for i in "${!devices[@]}"; do
        local dev="${devices[$i]}"
        local hour="${TEST_HOURS[$group]}"

        # Add group header at start of each group
        if [[ $count_in_group -eq 0 ]]; then
            echo ""
            echo "# Group $((group + 1)) - Tests at ${hour}:00 on Saturdays"
        fi

        # Add device entry: long test on Saturdays, short test daily 12 hours later
        echo "/dev/${dev} -a -o on -S on -s (${BASE_SCHEDULE}/${hour}) -s (S/../.././$(((hour + 12) % 24))) -m root"

        # Move to next group when current group is full
        count_in_group=$((count_in_group + 1))
        if [[ $count_in_group -ge $DEVICES_PER_GROUP ]]; then
            count_in_group=0
            group=$(((group + 1) % ${#TEST_HOURS[@]}))
        fi
    done
}

main "$@"
To run the script at boot, add a unit file to the systemd configuration:
sudo systemctl edit --force --full regenerate-smartd-conf.service
sudo systemctl enable regenerate-smartd-conf.service
where the unit file contains the following:
[Unit]
Description=Generate smartd.conf with staggered SMART test scheduling
# Wait for all local filesystems and udev device detection
After=local-fs.target systemd-udev-settle.service
Before=smartd.service
Wants=systemd-udev-settle.service
DefaultDependencies=no
[Service]
Type=oneshot
# Only generate the config file, don't touch smartd here
ExecStart=/bin/bash -c '/usr/local/bin/create-smartd-conf.sh > /etc/smartd.conf'
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

19 Jan 2026 1:00pm GMT
Russell Coker: Furilabs FLX1s
The Aim
I have just got a Furilabs FLX1s [1], a phone running a modified version of Debian. I want a phone where all the apps are ones that I control and can observe and debug. Android is very good for what it does, and there are security-focused forks of Android with a lot of potential, but for my use a Debian phone is what I want.
The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria.
I am not anywhere near an average user, and I don't think an average user would ever even see one of these phones unless a more technical relative showed it to them. So while this phone could be used by an average user, I am not evaluating it on that basis. But of course the GUI features that make a phone usable for an average user will also allow a developer to rapidly get past the beginning stages and into more complex stuff.
Features
The Furilabs FLX1s [1] is designed to run FuriOS, a slightly modified version of Debian; the point is to run Debian instead of Android on a phone. It has hardware switches to disable the camera, phone communication, and microphone, although the switch to disable phone communication doesn't turn off Wifi. The only other phone I know of with such switches is the Purism Librem 5.
It has a 720*1600 display, which is only slightly better than the 720*1440 displays in the Librem 5 and PinePhone Pro. This doesn't compare well to the OnePlus 6 from early 2018 with 2280*1080, or the Note9 from late 2018 with 2960*1440, both phones that I've run Debian on. The current price is $US499, which isn't that good compared to the latest Google Pixel series: a Pixel 10 costs $US649, has a 2424*1080 display, and has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are. Round corners that cut off the content seem to be standard practice nowadays; in my collection of phones, the latest one with hard right angles on the display is a Huawei Mate 10 Pro, released in 2017. The FLX1s corners are rounder than the Note 9's, which annoys me because the screen is not high resolution by today's standards, so losing the corners matters.
The default installation is Phosh (the GNOME shell for phones), and it is very well configured. Based on my experience with older phone users, I think I could give a phone with this configuration to a relative in the 70+ age range with minimal computer knowledge and they would be happy with it. Additionally, I could set it up to allow ssh login, and instead of going through the phone-support routine of describing every GUI setting to click on (based on a web page describing the menus for whichever version of Android they run), I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job with the default configuration. While Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install and appear to have made some noteworthy changes to some of them.
Droidian
The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android as well as a 110G Ext4 filesystem for the main OS.
The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn't work that well. There are 3 D state processes (uninterruptible sleep, which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can't remember whether the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it's a symptom of things not working as well as you would hope.
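This kind of check is easy to reproduce with standard tools; a quick sketch (the exact state strings, e.g. "D" vs "D+", vary between ps implementations, hence the prefix match):

```shell
# List PIDs and names of processes in uninterruptible sleep (state D).
# On a healthy idle system this usually prints nothing.
ps -eo state,pid,comm | awk '$1 ~ /^D/ { print $2, $3 }'
```

Kernel threads show up here with names like kworker/…, while ordinary stuck processes show their command names.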
The design of Droidian is inherently fragile: you use a kernel and user space code from Android, and then use Debian for the rest. You can't do everything the Android way (with full OS updates etc.) and you also can't do everything the Debian way. The TOW-Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. A full Android setup with ADB has some OK options for recovery, but part-Android, part-Debian has fewer options. While it is probably technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way.
Applications
GUI
The system uses Phosh and Phoc, the GNOME system for handheld devices. It's a very different UI from Android; I prefer Android, but it is usable with Phosh.
IM
Chatty works well for Jabber (XMPP) in my tests. It supports Matrix, which I didn't test, both because I don't want the same program doing Matrix and Jabber and because Matrix is a heavy protocol that establishes new security keys for each login, so I don't want to keep logging in on new applications.
Chatty also does SMS but I couldn't test that without the SIM caddy.
I use Nheko for Matrix which has worked very well for me on desktops and laptops running Debian.
I am currently using Geary for email. It works reasonably well but is lacking proper management of folders, so I can't just subscribe to the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME gitlab issue about this - see the Debian Wiki page about Mobile apps [4]).
Music
Music playing isn't a noteworthy thing on a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect, with support for all the encoding formats including FLAC; a major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has controls for pause/play and for skipping forward and backward one track on the lock screen.
Maps
The installed map program is gnome-maps which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.
Delivery and Unboxing
I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct, but I hadn't received the email with the confirmation link, so the order had been put on hold. After I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector as an apology. I don't usually use screen protectors, but in this case one might be useful, as the edges of the case don't even extend 0.5mm above the screen, so if the phone falls face down the case won't help much.
When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be, so I couldn't immediately test VoLTE functionality. The contact form on their web site wasn't working when I tried to report this, and the email for support was bouncing.
Bluetooth
As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones. My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones that worked with previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.
Wifi
Currently 5GHz wifi doesn't work; this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven't tested running a hotspot because I can't get 4G working, as they haven't yet shipped me the SIM caddy.
Docking
This phone doesn't support DP alt-mode or Thunderbolt docking, so it can't drive an external monitor. This is disappointing; Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices, and that type of feature is much more useful on a device running Debian than on an Android device.
Camera
The camera works reasonably well on the FLX1s. Until recently the camera on the Librem 5 didn't work, and the camera on my PinePhone Pro currently doesn't work. Here are samples from the regular camera and the selfie camera on the FLX1s and the Note 9. I think this shows that the camera is pretty decent. The selfie looks better, but the front camera is worse for the relatively close photo of a laptop screen; taking photos of computer screens is an important part of my work, but I can probably work around that.
I wasn't assessing this camera to find out if it's great, just to find out whether I'd have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs, including good cameras; even though the Note 9 is old, comparing well to it is a respectable performance. The lighting was poor for all photos.
FLX1s
Note 9
Power Use
In 93 minutes with the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. Their battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on the FLX1s having a 16.7Wh battery, 93 minutes at 0.39W should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That matches just putting my fingers on the phones and feeling the temperature: the FLX1s felt cool and the others felt hot.
The PinePhone Pro and the Librem 5 have an optional "Caffeine mode" which I enabled for this test; without it enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with caffeine mode disabled, but they also couldn't respond quickly to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as the same both with and without it.
Charging
One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even with straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds: when I connect it, it stays at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so before starting to charge. After the 22 second delay the phone makes the tick sound indicating that it's charging, and the power meter measures that it's drawing some current.
I added a row for the FLX1s to the table from my previous post about phone charging speed [6]. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, looping as it tried to negotiate; after a 22 second negotiation delay the port would turn off. The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a "High-power device", which is what that port is designed to provide. In a discussion on the Purism forum about Librem 5 charging speed, one of their engineers told me that the reason their phone would draw 2A from that port was that the cable identified itself as a USB-C port rather than a "High-power device" port. But for some reason, out of the 7 phones I tested, the FLX1s and the OnePlus 6 are the only ones to limit themselves to what the port is apparently supposed to do. Also, the OnePlus 6 charges slowly on every power supply, so I don't know if it is obeying the spec or just sucking.
On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation.
The case that I received has a hole for the USB-C connector that isn't wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock's). Also, making a connection requires a fairly deep insertion (deeper than on the OnePlus 6 or the Note 9), so without adjustment I have to take the case off to charge it. It's no big deal to adjust the hole (I have done it with other cases), but it's an annoyance.
| Phone | Top z640 | Bottom Z640 | Monitor | Ali Charger | Dell Dock | Battery | Best | Worst |
|---|---|---|---|---|---|---|---|---|
| FLX1s | FAIL | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W | 5.9V 1.8A 11W | 4.8V 2.1A 10W | 5.8V 2.1A 12W | 5.8V 2.1A 12W | 5.0V 0.49A 2.4W |
| Note9 | 4.8V 1.0A 5.2W | 4.8V 1.6A 7.5W | 4.9V 2.0A 9.5W | 5.1V 1.9A 9.7W | 4.8V 2.1A 10W | 5.1V 2.1A 10W | 5.1V 2.1A 10W | 4.8V 1.0A 5.2W |
| Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W | 9.1V 1.3A 12W | 9.1V 1.2A 11W | 4.9V 1.8A 8.7W | 9.0V 1.3A 12W | 9.1V 1.3A 12W | 4.9V 0.80A 4.2W |
| Pixel 8 | 4.7V 1.2A 5.4W | 4.7V 1.5A 7.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | 9.1V 2.7A 24W | 4.7V 1.2A 5.4W |
| PPP | 4.7V 1.2A 6.0W | 4.8V 1.3A 6.8W | 4.9V 1.4A 6.6W | 5.0V 1.2A 5.8W | 4.9V 1.4A 5.9W | 5.1V 1.2A 6.3W | 4.8V 1.3A 6.8W | 5.0V 1.2A 5.8W |
| Librem 5 | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W |
| OnePlus6 | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W |
| Best | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W |
Conclusion
The Furilabs support people are friendly and enthusiastic but my customer experience wasn't ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven't received but believe is in the mail) but it would be better if such things just didn't happen.
The phone is quite user friendly and could be used by a novice.
I paid $US577 for the FLX1s which is $AU863 by today's exchange rates. For comparison I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031 which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that's only one generation behind the latest high end phones and is refurbished will cost less. For a brand new phone a Pixel 8 Pro which has security updates until 2030 costs $874 and a Pixel 9A which has security updates until 2032 costs $861.
Doing what the Furilabs people have done is not a small project. It's a significant amount of work, and the prices of their products need to cover that. I'm not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money, while the newest Pixel models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149, which I consider a ridiculous price that I would not pay given the market for used phones etc. If my only choice was paying $1,149 or using a "feature phone", I'd pay $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents, I'd get them the FLX1s.
For a FOSS developer a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order. It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work, I haven't tested them yet.
The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that the differences between UIs matter more when on a desktop environment that could be used for more complex tasks than on a phone which limits what can be done by the size of the screen.
I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5, as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non-free software of the three, and the people who want the $2000 Librem 5 that's entirely made in the US won't want the FLX1s.
This isn't the destination for Debian based phones, but it's a good step on the way to it and I don't think I'll regret this purchase.
- [1] https://furilabs.com/shop/flx1s/
- [2] https://droidian.org/
- [3] https://wiki.debian.org/InstallingDebianOn/PINE64/PinePhonePro
- [4] https://wiki.debian.org/MobileApps
- [5] https://www.graphhopper.com/
- [6] https://etbe.coker.com.au/2026/01/05/phone-charging-speeds/
19 Jan 2026 6:43am GMT
Vincent Bernat: RAID 5 with mixed-capacity disks on Linux
Standard RAID solutions waste space when disks have different sizes. Linux software RAID with LVM uses the full capacity of each disk and lets you grow storage by replacing one or two disks at a time.
We start with four disks of equal size:
$ lsblk -Mo NAME,TYPE,SIZE
NAME TYPE SIZE
vda  disk 101M
vdb  disk 101M
vdc  disk 101M
vdd  disk 101M
We create one partition on each of them:
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdb
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdc
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdd
$ lsblk -Mo NAME,TYPE,SIZE
NAME     TYPE SIZE
vda      disk 101M
└─vda1   part 100M
vdb      disk 101M
└─vdb1   part 100M
vdc      disk 101M
└─vdc1   part 100M
vdd      disk 101M
└─vdd1   part 100M
We set up a RAID 5 device by assembling the four partitions:1
$ mdadm --create /dev/md0 --level=raid5 --bitmap=internal --raid-devices=4 \
>   /dev/vda1 /dev/vdb1 /dev/vdc1 /dev/vdd1
$ lsblk -Mo NAME,TYPE,SIZE
NAME       TYPE  SIZE
    vda    disk  101M
┌┈▶ └─vda1 part  100M
┆   vdb    disk  101M
├┈▶ └─vdb1 part  100M
┆   vdc    disk  101M
├┈▶ └─vdc1 part  100M
┆   vdd    disk  101M
└┬▶ └─vdd1 part  100M
 └┈┈md0    raid5 292.5M
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vdc1[2] vdb1[1] vda1[0]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
We use LVM to create logical volumes on top of the RAID 5 device.
$ pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.
$ vgcreate data /dev/md0
  Volume group "data" successfully created
$ lvcreate -L 100m -n bits data
  Logical volume "bits" created.
$ lvcreate -L 100m -n pieces data
  Logical volume "pieces" created.
$ mkfs.ext4 -q /dev/data/bits
$ mkfs.ext4 -q /dev/data/pieces
$ lsblk -Mo NAME,TYPE,SIZE
NAME              TYPE  SIZE
    vda           disk  101M
┌┈▶ └─vda1        part  100M
┆   vdb           disk  101M
├┈▶ └─vdb1        part  100M
┆   vdc           disk  101M
├┈▶ └─vdc1        part  100M
┆   vdd           disk  101M
└┬▶ └─vdd1        part  100M
 └┈┈md0           raid5 292.5M
    ├─data-bits   lvm   100M
    └─data-pieces lvm   100M
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   1   2   0 wz--n- 288.00m 88.00m
This gives us the following setup:
We replace /dev/vda with a bigger disk. We add it back to the RAID 5 array after copying the partition table from /dev/vdb:
$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vdb1[1] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vda /dev/vdb
$ sgdisk --randomize-guids /dev/vda
$ mdadm --manage /dev/md0 --add /dev/vda1
$ cat /proc/mdstat
md0 : active raid5 vda1[5] vdb1[1] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
We cannot use the additional capacity yet: anything stored on it would not survive the loss of /dev/vda because there is no redundancy for that space. We need a second disk replacement, like /dev/vdb:
$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdb /dev/vdc
$ sgdisk --randomize-guids /dev/vdb
$ mdadm --manage /dev/md0 --add /dev/vdb1
$ cat /proc/mdstat
md0 : active raid5 vdb1[6] vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
We create a new RAID 1 array by using the free space on /dev/vda and /dev/vdb:
$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vdb
$ mdadm --create /dev/md1 --level=raid1 --bitmap=internal --raid-devices=2 \
>   /dev/vda2 /dev/vdb2
$ cat /proc/mdstat
md1 : active raid1 vdb2[1] vda2[0]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active raid5 vdb1[6] vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
We add /dev/md1 to the volume group:
$ pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created.
$ vgextend data /dev/md1
  Volume group "data" successfully extended
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 384.00m 184.00m
$ lsblk -Mo NAME,TYPE,SIZE
NAME                 TYPE  SIZE
       vda           disk  201M
   ┌┈▶ ├─vda1        part  100M
┌┈▶┆   └─vda2        part  100M
┆  ┆   vdb           disk  201M
┆  ├┈▶ ├─vdb1        part  100M
└┬▶┆   └─vdb2        part  100M
 └┈┆┈┈┈md1           raid1 98.9M
   ┆   vdc           disk  101M
   ├┈▶ └─vdc1        part  100M
   ┆   vdd           disk  101M
   └┬▶ └─vdd1        part  100M
    └┈┈md0           raid5 292.5M
       ├─data-bits   lvm   100M
       └─data-pieces lvm   100M
This gives us the following setup:2
We extend our capacity further by replacing /dev/vdc:
$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active (auto-read-only) raid5 vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdc /dev/vdb
$ sgdisk --randomize-guids /dev/vdc
$ mdadm --manage /dev/md0 --add /dev/vdc1
$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
Then, we convert /dev/md1 from RAID 1 to RAID 5:
$ mdadm --grow /dev/md1 --level=5 --raid-devices=3 --add /dev/vdc2
mdadm: level of /dev/md1 changed to raid5
mdadm: added /dev/vdc2
$ cat /proc/mdstat
md1 : active raid5 vdc2[2] vda2[0] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 482.00m 282.00m
This gives us the following layout:
We further extend our capacity by replacing /dev/vdd:
$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdd /dev/vdc
$ sgdisk --randomize-guids /dev/vdd
$ mdadm --manage /dev/md0 --add /dev/vdd1
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
We grow the second RAID 5 array:
$ mdadm --grow /dev/md1 --raid-devices=4 --add /dev/vdd2
mdadm: added /dev/vdd2
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md1 : active raid5 vdd2[3] vda2[0] vdc2[2] vdb2[1]
      303936 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 580.00m 380.00m
$ lsblk -Mo NAME,TYPE,SIZE
NAME                  TYPE  SIZE
        vda           disk  201M
    ┌┈▶ ├─vda1        part  100M
┌┈▶ ┆   └─vda2        part  100M
┆   ┆   vdb           disk  201M
┆   ├┈▶ ├─vdb1        part  100M
├┈▶ ┆   └─vdb2        part  100M
┆   ┆   vdc           disk  201M
┆   ├┈▶ ├─vdc1        part  100M
├┈▶ ┆   └─vdc2        part  100M
┆   ┆   vdd           disk  301M
┆   └┬▶ ├─vdd1        part  100M
└┬▶  ┆  └─vdd2        part  100M
 ┆   └┈┈md0           raid5 292.5M
 ┆      ├─data-bits   lvm   100M
 ┆      └─data-pieces lvm   100M
 └┈┈┈┈┈md1            raid5 296.8M
You can continue by replacing each disk one by one using the same steps. ♾️
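Each iteration of that loop repeats the same three commands. As a sketch only (my own helper, not part of the article; it assumes the partition table is copied from a surviving disk and that partition 1 belongs to the given md array), the recurring step could be wrapped like this, with DRY_RUN=1 printing the commands instead of running them:

```shell
# Print commands instead of executing them when DRY_RUN=1, so the
# sketch can be exercised without real disks.
run() {
  if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}

replace_disk() {
  new=$1      # freshly installed disk, e.g. /dev/vda
  template=$2 # surviving disk to copy the partition table from, e.g. /dev/vdb
  array=$3    # md array to re-add the first partition to, e.g. /dev/md0
  run sgdisk --replicate="$new" "$template"
  run sgdisk --randomize-guids "$new"
  run mdadm --manage "$array" --add "${new}1"
}

# Dry-run example, mirroring the first replacement in the article:
DRY_RUN=1 replace_disk /dev/vda /dev/vdb /dev/md0
```

This only covers the md side; growing partitions, arrays, and the volume group still follows the transcripts above.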
-
Write-intent bitmaps speed up recovery of the RAID array after a power failure by marking unsynchronized regions as dirty. They have an impact on performance, but I did not measure it myself. ↩︎
-
In the lsblk output, /dev/md1 appears unused because the logical volumes do not use any space from it yet. Once you create more logical volumes or extend them, lsblk will reflect the usage. ↩︎
19 Jan 2026 5:49am GMT
Dima Kogan: mrcal 2.5 released!
mrcal 2.5 is out: the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0.
One cool thing is that these tools have now matured enough to no longer be considered experimental. They have been used with great success in lots of contexts across many different projects and organizations. Some highlights:
- I've calibrated extremely wide lenses
- and extremely narrow lenses
- and joint systems containing many different kinds of lenses
- with lots of cameras at the same time. The biggest single joint calibration I've done to date had 10 cameras, but I'll almost certainly encounter bigger systems in the future
- mrcal has been used to process both visible and thermal cameras
- The new triangulated-feature capability has been used in a structure-from-motion context to compute the world geometry on-line.
- mrcal has been used with weird experimental setups employing custom calibration objects and single-view solves
- mrcal has calibrated joint camera-LIDAR systems
- and joint camera-IMU systems
- Lots of students use mrcal as part of PhotonVision, the toolkit used by teams in the FIRST Robotics Competition
Some of the above is new, and not yet fully polished and documented and tested, but it works.
In mrcal 2.5, most of the implementation of some big new features is written and committed, but it's still incomplete. The new stuff is there, but is only lightly tested and documented. This will be completed in mrcal 3.0:
- Cross-reprojection uncertainty, to be able to perform full calibrations with a splined model and without a chessboard.
mrcal-show-projection-uncertainty --method cross-reprojection-rrp-Jfp is available today, and works in the usual moving-chessboard-stationary-camera case. Fully boardless coming later.
- More general view of uncertainty and diffs. I want to support extrinsics-only and/or intrinsics-only computations in lots of scenarios. Uncertainty in point solves is already available in some conditions, for instance if the points are fixed. The new mrcal-show-stereo-pair-diff tool reports an extrinsics+intrinsics diff between two calibrations of a stereo pair; the experimental analyses/extrinsics-stability.py tool reports an extrinsics-only diff. These are in contrast to the intrinsics-only uncertainty and diffs in the existing mrcal-show-projection-diff and mrcal-show-projection-uncertainty tools. Some documentation is in the uncertainty and differencing pages.
- Implicit point solves, using the triangulation routines in the optimization cost function. This should produce much more efficient structure-from-motion solves. This is all the "triangulated-features" stuff. The cost function is primarily built around _mrcal_triangulated_error(). This is demoed in test/test-sfm-triangulated-points.py, and I've been using _mrcal_triangulated_error() in structure-from-motion implementations within other optimization routines.
mrcal is quite good already, and will be even better in the future. Try it today!
19 Jan 2026 12:00am GMT
17 Jan 2026
Planet Debian
Simon Josefsson: Backup of S3 Objects Using rsnapshot
I've been using rsnapshot to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot uses rsync over SSH and maintains a temporal hard-link file pool. Once rsnapshot is configured and running on the backup server, you get a hardlink farm with directories like this for the remote server:
/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo
I can browse and rescue files easily, going back in time when needed.
The rsnapshot project README explains more, there is a long rsnapshot HOWTO although I usually find the rsnapshot man page the easiest to digest.
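The layout above comes from rsnapshot's retain (formerly "interval") settings. For illustration only (these values are mine, not a recommendation, and rsnapshot requires literal tab characters between fields), the relevant rsnapshot.conf lines would look roughly like:

```
# Illustrative retain levels producing daily.0..6, weekly.*, monthly.*, yearly.*
snapshot_root	/backup/serverA.domain/
sync_first	1
retain	daily	7
retain	weekly	4
retain	monthly	12
retain	yearly	1
```

With sync_first enabled, the .sync directory holds the most recent transfer before it is rotated into daily.0.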
I have stored multi-TB Git-LFS data on GitLab.com for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10,000/year). I have reworked my work-flow and finally migrated debdistget to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the €25/month warning threshold.
But how do you backup stuff stored in S3?
For some time, my S3 backup solution has been to run the minio-client mirror command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMe drives are relatively cheap, I've felt for quite some time that this disk and network churn on my laptop is unsatisfactory.
What is a better approach?
I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data. Or someone else has, after stealing your plaintext-equivalent cookie. Thus, I haven't really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too.
The rsnapshot approach that allows going back in time and having data on self-managed servers feels superior to me.
What if we could use rsnapshot with a S3 client instead of rsync?
Someone else asked about this several years ago, and the suggestion was to use the fuse-based s3fs which sounded unreliable to me. After some experimentation, working around some hard-coded assumption in the rsnapshot implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired.
Here is my configuration snippet:
cmd_rsync /backup/s3/s3rsync
rsync_short_args -Q
rsync_long_args --json --remove
lockfile /backup/s3/rsnapshot.pid
snapshot_root /backup/s3
backup s3:://hetzner/debdistget-gnuinos ./debdistget-gnuinos
backup s3:://hetzner/debdistget-tacos ./debdistget-tacos
backup s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup s3:://hetzner/debdistget-kali ./debdistget-kali
backup s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup s3:://hetzner/debdistget-trisquel ./debdistget-trisquel
backup s3:://hetzner/debdistget-debian ./debdistget-debian
The idea is to save a backup of a couple of S3 buckets under /backup/s3/.
I have some scripts that take a complete rsnapshot.conf file and append my per-directory configuration so that this becomes a complete configuration. If you are curious how I roll this, backup-all invokes backup-one appending my rsnapshot.conf template with the snippet above.
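In general, such a template-plus-snippet scheme boils down to concatenating files. A hypothetical sketch (the build_conf helper and file names are my illustration, not Simon's actual backup-one script):

```shell
# Hypothetical illustration of generating a complete rsnapshot
# configuration from a shared template plus a per-target snippet.
build_conf() {
  template=$1 # shared rsnapshot.conf template
  snippet=$2  # per-target configuration snippet
  out=$3      # complete configuration handed to rsnapshot
  cat "$template" "$snippet" > "$out"
}

# Usage would then look something like:
# build_conf rsnapshot.conf.template s3-snippet.conf /backup/s3/rsnapshot.conf
# rsnapshot -c /backup/s3/rsnapshot.conf sync
```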
The s3rsync wrapper script is the essential hack to convert rsnapshot's rsync parameters into something that talks S3 and the script is as follows:
#!/bin/sh
set -eu
S3ARG=
for ARG in "$@"; do
case $ARG in
s3:://*) S3ARG="$S3ARG "$(echo $ARG | sed -e 's,s3:://,,');;
-Q*) ;;
*) S3ARG="$S3ARG $ARG";;
esac
done
echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG
It uses the minio-client tool. I first tried s3cmd, but its sync command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The mc mirror command is blazingly fast since it only compares mtimes, just like rsync or git.
First you need to store credentials for your S3 bucket. These are stored in plaintext in ~/.mc/config.json, which I find to be sloppy security practice, but I don't know of any better way to do this. Replace AKEY and SKEY with the access token and secret token from your S3 provider:
/backup/s3/mc alias set hetzner AKEY SKEY
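Since the credentials sit in plaintext, it is worth at least making sure only your user can read the file. A small mitigation, not a fix (the helper function is my own sketch; the path is minio-client's default config location):

```shell
# Mitigation only: restrict the plaintext credential store to the owner.
tighten_mc_config() {
  f=$1
  if [ -f "$f" ]; then
    chmod 600 "$f"
  fi
}

tighten_mc_config "$HOME/.mc/config.json"
```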
If I invoke a sync job for a fully synced up directory the output looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
/backup/s3/.sync//debdistget-gnuinos
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
/backup/s3/.sync//debdistget-tacos
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
/backup/s3/.sync//debdistget-diffos
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
/backup/s3/.sync//debdistget-pureos
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
/backup/s3/.sync//debdistget-kali
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
/backup/s3/.sync//debdistget-devuan
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
/backup/s3/.sync//debdistget-trisquel
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
/backup/s3/.sync//debdistget-debian
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
touch /backup/s3/.sync/
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
/run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
-V sync: completed successfully
root@hamster /backup#
You can tell from the paths that this machine runs Guix. This was the first production use of the Guix System for me, and the machine has been running since 2015 (with the occasional new hard drive). Before that, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet-exposed machine. Unfortunately, mc is not packaged in Guix, so you will have to install it manually from the MinIO Client GitHub page.
Running the daily rotation looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid
mv /backup/s3/daily.5/ /backup/s3/daily.6/
mv /backup/s3/daily.4/ /backup/s3/daily.5/
mv /backup/s3/daily.3/ /backup/s3/daily.4/
mv /backup/s3/daily.2/ /backup/s3/daily.3/
mv /backup/s3/daily.1/ /backup/s3/daily.2/
mv /backup/s3/daily.0/ /backup/s3/daily.1/
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
/run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
-V daily: completed successfully
root@hamster /backup#
Hopefully you will feel inspired to take backups of your S3 buckets now!
17 Jan 2026 10:04pm GMT
Jonathan Dowland: Honest Jon's lightly-used Starships

No Man's Sky (or as it's known in our house, "spaceship game") is a space exploration/sandbox game that was originally released 10 years ago. Back then I tried it on my brother's PS4 but I couldn't get into it. In 2022 it launched for the Nintendo Switch1 and the game finally clicked for me.
I play it very casually. I mostly don't play at all, except sometimes when there are time-limited "expeditions" running, which I find refreshing, and usually have some exclusives as a reward for play.
One of the many things you can do in the game is collect star ships. I started keeping a list of notable ones I've found, and I've decided to occasionally blog about them.
The Horizon Vector NX is a small sporty ship that players on Nintendo Switch could claim within the first month or so after it launched. The colour scheme resembles the original "neon" switch controllers. Although the ship type occurs naturally in the game in other configurations, I think differently-painted wings are unique to this ship.
For most of the last 4 years, my copy of this ship was confined to the Switch, until November 2024, when they added cross-save capability to the game. I was then able to access the ship when playing on Linux (or Mac).
- The game runs very well natively on Mac, flawlessly on Steam for Linux, but struggles on the original Switch. It's a marvel it runs there at all.↩
17 Jan 2026 8:02pm GMT
Ravi Dwivedi: My experiences in Brunei
This post covers my friend Badri's and my experiences in Brunei. Brunei - officially Brunei Darussalam - is a country in Southeast Asia, located on the island of Borneo. It is one of the few remaining absolute monarchies on Earth.
On the morning of the 10th of December 2024, Badri and I reached Brunei International Airport by taking a flight from Kuala Lumpur. Upon arrival, we had to go through immigration, of course. However, I had forgotten to fill in my arrival card, which I did while in the immigration queue.
The immigration officer asked me how much cash I was carrying of each currency. After completing the formalities, the immigration officer stamped my passport and let me in. Take a look at Brunei's entry stamp in my passport.
Brunei entry stamp on my passport. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
We exchanged Singapore dollars for some Brunei dollars at the airport. The Brunei dollar is pegged 1:1 to the Singapore dollar, meaning 1 Singapore dollar equals 1 Brunei dollar, and the exchange rate we received at the airport was exactly that.
Our (pre-booked) accommodation was located near Gadong mall. So, we went to the information center at the airport to ask how to get there by public transport. However, the person at the information center told us that they didn't know the public transport routes and suggested we take a taxi instead.
We came out of the airport and came across an Indian with a mini bus. He offered to drop us at our accommodation for 10 Brunei dollars (₹630). As we were tired after a sleepless night, we didn't negotiate and took the offer. It felt a bit weird using the minibus as our private taxi.
In around half an hour, we reached our accommodation. The place was more like a guest house than a hotel. In addition to the rooms, it had a common space consisting of a hall, a kitchen and a balcony.
Our room in Brunei. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
Upon reaching the place, we paid for our room in cash, which was 66.70 Singapore dollars (4200 Indian rupees) for two nights. We reached before the check-in time, so we had to wait for our room to get ready before we entered.
The room had a double bed and also a place to hang clothes. We slept for a few hours before going out at night. We went into Gadong mall and had coffee at a café named The Coffee Bean & Tea Leaf. The regular caffe latte I had there cost 5.20 Brunei dollars. On another note, the snacks we had brought from Kuala Lumpur covered us for dinner.
The next day, the 11th of December 2024, we went to a nearby restaurant named Nadj for lunch. The owner was from Kerala. Here we ordered:
- 1 paneer pepper masala for 5 Brunei dollars (320 rupees)
- 1 Nasi goreng pattaya biasa for 4.50 Brunei dollars (290 rupees)
- 1 plain naan for 1.50 Brunei dollars (100 rupees)
- 1 butter naan for 1.80 Brunei dollars (115 rupees)
So, our lunch cost a total of 12.80 Brunei dollars (825 rupees). The naan was unusually thick, and I didn't like the taste.
After the lunch, we planned to visit Brunei's famous Omar Ali Saifuddien Mosque. However, a minibus driver outside of Gadong Mall told us that the mosque would be closed in half-an-hour and suggested we visit the nearby Jame' Asr Hassanil Bolkiah Mosque instead.
Jame' Asr Hassanil Bolkiah Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
He dropped us there for 1 Brunei dollar per person. The driver hailed from Uttar Pradesh and told us about bus routes in Hindi. Bus routes in Brunei were confusing, so the information he gave us was valuable.
It was evening, and we had the impression that the mosque and its premises were closed. However, soon enough, we stumbled across an open gate and entered the mosque complex. We walked inside for some time, took pictures and exited. Walking in Bandar Seri Begawan wasn't pleasant, though. The pedestrian infrastructure wasn't good.
Then we walked back to our place and bought some souvenirs. For dinner and breakfast, we bought bread, fruits and eggs from local shops as we had a kitchen to cook for ourselves.
The guest house also had a washing machine (free of charge) which we wanted to use. However, they didn't have detergent. Therefore, we went outside to get some. It was 8 o'clock, and most of the shops were already closed. Others only had detergent in large sizes, the kind you would buy if you lived there. We ended up getting a small packet at a supermarket.
The next day, the 12th of December, we had a flight to Ho Chi Minh City in Vietnam with a long layover in Kuala Lumpur. We had breakfast in the morning and took a bus to Omar Ali Saifuddien Mosque. The mosque was in a prayer session, so it was closed to visitors. Therefore, we just took pictures from the outside and took a bus to the airport.
Omar Ali Saifuddien Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
When the bus reached near the airport, it went straight rather than taking the left turn for the airport. Initially, I thought the bus would just take a turn and come back. However, it kept going further away from the airport. Confused by this, I asked other passengers if the bus was going to the airport. The driver stopped the bus at Muara Town terminal, 20 km from the airport. At this point, everyone alighted except us. The driver went to a nearby restaurant to have lunch.
I felt very uncomfortable stranded in a town 20 km from the airport. We had a lot of time, but I was still worried about missing our flight, as I didn't want to get stuck in Brunei. After waiting for 15 minutes, I went inside the restaurant and reminded the driver that we had a flight in a couple of hours and needed to go to the airport. He said he would leave soon.
When he was done with his lunch, he drove us to the airport. It was incredibly frustrating. On a positive note, we saw countryside of Brunei that we would not have seen otherwise. The bus ride cost us 1 Brunei dollar each.
A shot of Brunei's countryside. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
That's it for this one. Meet you in the next one. Stay tuned for the Vietnam post!
17 Jan 2026 5:15pm GMT
16 Jan 2026
Planet Debian
Dirk Eddelbuettel: RcppSpdlog 0.0.26 on CRAN: Another Microfix

Version 0.0.26 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
Brian Ripley noticed an infelicity when building under C++20 which he is testing hard and fast. Sadly, this came a day late for yesterday's upload of release 0.0.25 with another trivial fix so another incremental release was called for. We already accommodated C++20 and its use of std::format (in lieu of the included fmt::format) but had not turned it on unconditionally. We do so now, but offer an opt-out for those who prefer the previous build type.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.26 (2026-01-16)
- Under C++20 or later, switch to using std::format to avoid a compiler nag that CRAN now complains about
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
16 Jan 2026 2:26pm GMT
Kentaro Hayashi: Budgie Desktop 10.10 is out, but not for me yet :(

Introduction
I've been a Budgie Desktop user since 2020 (Budgie Desktop 10.5 or so).
Recently Budgie Desktop 10.10 became available from Debian experimental.
I've tried it and realized that it's not for me yet.
What are my requirements for a desktop environment?
- Screen sharing with specifying the specific window/application for web meeting
- Sharing keyboard input smoothly with deskflow
- Switch focus with mouse movement
- and ...
Is Budgie Desktop 10.10 suitable?
Short answer: No, not yet.
It seems that Budgie Desktop 10.10 (Wayland) lost the ability to share a specific window/application for web meetings.
Of course, you can still share the whole screen.
It also can't share keyboard input smoothly with deskflow.
Both of these seem to be supported in GNOME and KDE.
In contrast to Budgie Desktop 10.9 (X11), Budgie Desktop 10.10 seems to be missing effective Wayland + xdg-desktop-portal support for them yet.
It might be supported in the future release, but it might be 11.x.
Alternatives?
Budgie Desktop 10.x will come into maintenance mode, so they will not be fixed in 10.x releases (guess).
Switching DE might be an option - GNOME or KDE - but I don't have much energy to make the transition for now.
I decided to take the conservative option and go back to 10.9.
Note that if you upgrade to Budgie Desktop 10.10, it is hard to downgrade to Budgie Desktop 10.9 because the python3-gi and python3-gi-cairo dependencies block budgie-desktop.
See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1120138 for details.
As a dirty hack, you can modify the python3-gi dependency and install it at your own risk for downgrading.
Conclusion
It was still too early for me to adopt Budgie Desktop (Wayland). I'll stay with Budgie Desktop 10.9 for a while.
That said, it's unclear how long Budgie Desktop 10.9 will remain unstable and usable. There might be a case that sticking to Budgie Desktop 10.9 might be problematic when other packages are updated.
When that happens, I'd like to reconsider this issue again.
16 Jan 2026 12:36pm GMT
Jonathan Dowland: Ye Gods

Via (I think) @mcc on the Fediverse, I learned of GetMusic: a sort-of "clearing house" for free Bandcamp codes. I think the way it works is, some artists release a limited set of download codes for their albums in order to promote them, and GetMusic helps them keep track of that, and helps listeners discover them.
GetMusic mail me occasionally, and once they highlighted an album The Arcane & Paranormal Earth which they described as "Post-Industrial in the vein of Coil and Nurse With Wound with shades of Aphex Twin, Autechre and assorted film music."
Well, that description hooked me immediately, but I missed out on the code. However, I sampled the album on Bandcamp directly a few times, as well as a few of his others (Ye Gods is a side-project of Antoni Maiovvi, which is itself a pen-name), and liked them very much. I picked up the full collection of Ye Gods albums in one go for 30% off.
Here's a stand-out track:
So I guess this service works! Although I didn't actually get a free code in this instance, it promoted the artist, introduced me to something I really liked and drove a sale.
16 Jan 2026 10:14am GMT
15 Jan 2026
Planet Debian
Dirk Eddelbuettel: RcppSpdlog 0.0.25 on CRAN: Microfix

Version 0.0.25 of RcppSpdlog arrived on CRAN right now, and will be uploaded to Debian and built for r2u shortly along with a minimal refresh of the documentation site. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
This release fixes a minuscule cosmetic issue from the previous release a week ago. We rely on two #defines that R sets to signal to spdlog that we are building in the R context (which matters for the R-specific logging sink, and picks up something Gabi added upon my suggestion at the very start of this package). But I use the same #defines to now check in Rcpp whether we are building with R and, in this case, wrongly conclude that R headers have already been included, so Rcpp (incorrectly) nags about that. The solution is to add two #undef directives and proceed as normal (with Rcpp controlling and taking care of R header inclusion too), and that is what we do here. All good now, no nags from a false positive.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.25 (2026-01-15)
- Ensure #define signaling R build (needed with spdlog) is unset before including R headers to not falsely trigger a message from Rcpp
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
15 Jan 2026 1:20pm GMT