21 Mar 2026
Planet Grep
Paul Cobbaut: lastb on Debian
So there is this post which says:
"Yes, the people who are likely to care are admins with cobwebby
homebrew cronjobs that regularly generate painstakingly formatted
security reports and send them to the fax machine, or whatever."
...I feel extremely personally attacked by this :)
(Replace fax with mail, or whatever.)
I want my Raspberry Pi, freshly upgraded to Debian Trixie, to keep sending this cobwebby report... even if only for a couple of years to come... thus:
How to get lastb back on Debian
install dependencies:
apt install build-essential gettext autoconf flex bison libtool autopoint
get the source:
git clone https://github.com/util-linux/util-linux.git
rtfm:
cd util-linux
more Documentation/howto-compilation.txt
compile:
./autogen.sh && ./configure --disable-all-programs --enable-last
make last
install:
cp ./last /usr/bin/last
ln -s /usr/bin/last /usr/bin/lastb
happiness:
# lastb
btmp begins Thu Dec 4 15:50:47 2025
21 Mar 2026 12:12pm GMT
Lionel Dricot: The Social Smolnet

It might have been an email thread. Or a lobste.rs comment. It was a discussion about yet another attempt at a new decentralized social protocol. And we reached the conclusion that with blogs and email, we already had a decentralized social network. We only needed to use it.
This was the last push I needed to implement in Offpunk the social features I had imagined years ago. Share and Reply. Available since Offpunk 3.0.
Share
Are you reading something interesting in Offpunk and want to share it? Well, simply write it:
share
or
share myfriend@example.com
A new mail containing the URL to share will be opened in your email client of choice (as determined by xdg-open). The title will be the title of the page. You only need to add some text to explain why you want to share that page.
Reply
Ever read a blog post and wanted to send feedback or a simple thank you to the author? Simply write:
reply
Reply will try to find a mailto link by exploring the page, root pages and, since 3.1, potential "contact" pages. It sometimes works really well. Often, the mail address is obscured or hidden. That's not a problem. You only need to find it once because Offpunk allows you to save it for the page or the whole online space.
Give an email address as an argument to reply and it will be saved in Offpunk for the page or the whole online space.
If you come across an email address that may be of use in the future but don't want to react now, use "save":
reply save author@example.com
or, if you want to use autodetection:
reply save
Yes, it is enough
It looks like nothing. It looks trivial. But for me, this really transformed Gemini/Gopher and the Small Web into a social network. As I use neomutt+neovim as my mail client, I don't leave my terminal. I simply write "reply", neovim opens, I write "Thank you for this nice post", :wq, and voilà. The mail will be sent during my next synchronization.
Almost as easy as clicking a "like" button but way more personal. Even easier if, like me, you dislike touching a mouse or opening a browser!
Replying to my own post in Neovim
This is the Social SmolNet
In less than two months, I have already used this feature to react to 40 different online spaces, and I've used it multiple times with some people.
40 saved reply addresses (41 but the first line is wrongly counted)
I even started using Offpunk as an address book for my blogger friends. Instead of laboriously autocompleting their email addresses, I go to their blog/gemini capsule/gopher hole and write "reply".
The biggest lesson I take is that "social networks" are not about protocols but about how we use the existing infrastructure. Microsoft and Google are working hard to make sure you hate email and hate building a website. But we don't have to obey. We can enjoy writing lightweight HTML and sending quick emails to each other. We have the right to read, write, and have social fun without Javascript and centralized platforms. We have the duty to keep this torch lit.
In the meantime, if you receive from me very short emails reacting to some of your posts, now you know why.
But, of course, feel free not to reply!
About the author
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
21 Mar 2026 12:12pm GMT
Dries Buytaert: Elo 1800
I finally crossed 1800 on Chess.com. It took 17 months to gain 100 points. It felt endless.
A few times, I was one game away from reaching 1800. Each time, I collapsed into a losing streak and dropped back to the low 1700s. Few things I do for fun frustrate me as much as chess.
Growth never happens in a straight line. Improvement often looks like regression. Even when I'm going backward, I'm still improving. That lesson carries into work and life.
When working out on the Peloton, I often watch 2000-rated players on YouTube. They're only 200 Elo points higher, but the gap feels massive. Skill doesn't scale linearly.
My game has certainly improved. I see weaknesses faster now, which helps me form better middle game plans. For better or worse, I still rely on a few familiar openings, but they usually get me to a playable position.
Still, 17 months for 100 points feels slow. Should I aim for 2000? Part of me wants the challenge. Another part questions the tradeoff, or whether I'll get there at all. I've not decided yet. For now, I am proud of reaching 1800.
21 Mar 2026 12:12pm GMT
Planet Debian
C.J. Collier: The WWW::Mechanize::Chrome Saga: A Comprehensive Narrative of PR #104

This document synthesizes the extensive work performed from March
13th to March 20th, 2026, to harden, stabilize, and refactor the
WWW::Mechanize::Chrome library and its test suite. This
effort involved deep dives into asynchronous programming,
platform-specific bug hunting, and strategic architectural
decisions.
Part I: The Quest for Cross-Platform Stability (March 13 - 16)
The initial phase of work focused on achieving a "green" test suite
across a variety of Linux distributions and preparing for a new release.
This involved significant hardening of the library to account for
different browser versions, OS-level security restrictions, and
filesystem differences.
Key Milestones & Engineering Decisions:
- Fedora & RHEL-family Success: A major effort was undertaken to achieve a 100% pass rate on modern Fedora 43 and CentOS Stream 10. This required several key engineering decisions to handle modern browser behavior:
  - Decision: Implement Asynchronous DOM Serialization Fallback. Synchronous fallbacks in an async context are dangerous. To prevent "Resource was not cached" errors during saveResources, we implemented a fully asynchronous fallback in _saveResourceTree. By chaining _cached_document with DOM.getOuterHTML messages, we can reconstruct document content without blocking the event loop, even if Chromium has evicted the resource from its cache. This also proved resilient against Fedora's security policies, which often block file:// access.
  - Decision: Truncate Filenames for Cross-Platform Safety. To avoid "File name too long" errors, especially on Windows where the MAX_PATH limit is 260 characters, filenameFromUrl was hardened. The filename truncation limit was reduced to a more conservative 150 characters, leaving ample headroom for deeply nested CI temporary directories. Logic was also added to preserve file extensions during truncation and to sanitize backslashes from URI paths.
  - Decision: Expand Browser Discovery Paths. To support RHEL-based systems out-of-the-box, default_executable_names was expanded to include headless_shell, and search paths were updated to include /usr/lib64/chromium-browser/.
  - Decision: Mitigate Race Conditions with Stabilization Waits and Resilient Fetching. On fast systems, DOM.documentUpdated events could invalidate nodeIds immediately after navigation, causing XPath queries to fail with "Could not find node with given id". A small stabilization sleep (0.25s) was added after page loads to ensure the DOM is settled. Furthermore, the asynchronous DOM fetching loop was hardened to gracefully handle these errors by catching protocol errors and returning an empty string for any node invalidated during serialization, ensuring the overall process could complete.
- Windows Hardening:
  - Decision: Adopt Platform-Aware Watchdogs. The test suite's reliance on ualarm was a blocker for Windows, where it is not implemented. The t::helper::set_watchdog function was refactored to use standard alarm() (seconds) on Windows and ualarm (microseconds) on Unix-like systems, enabling consistent test-level timeout enforcement.
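The alarm/ualarm split is essentially about timer resolution. As a rough, hypothetical Python analogue (not the Perl helper itself, and this sketch is itself Unix-only), signal.setitimer provides the sub-second timer that ualarm gives Perl, with whole-second alarm() as the coarse fallback:

```python
import signal

def set_watchdog(seconds):
    """Abort the current test with an exception if it runs longer than `seconds`."""
    def on_timeout(signum, frame):
        raise TimeoutError(f"watchdog fired after {seconds}s")
    signal.signal(signal.SIGALRM, on_timeout)
    if hasattr(signal, "setitimer"):
        # Sub-second resolution, comparable to ualarm(microseconds)
        signal.setitimer(signal.ITIMER_REAL, seconds)
    else:
        # Coarse whole-second fallback, comparable to alarm(seconds)
        signal.alarm(max(1, int(seconds)))

set_watchdog(0.05)
try:
    while True:          # simulate a hung test
        pass
except TimeoutError as e:
    print("caught:", e)
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)   # disarm the watchdog
```

The point of centralizing this in one helper is that every test gets the same timeout behavior regardless of platform.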
- Version 0.77 Release:
  - Decision: Adopt SOP for Version Synchronization. The project maintains duplicate version strings across 24+ files. A Standard Operating Procedure was adopted: use a batch-replacement tool to update all sub-modules in lib/, and always run make clean and perl Makefile.PL so that META.json and META.yml reflect the new version. After achieving stability on Linux, the project version was bumped to 0.77.
- Infrastructure & Strategic Work:
  - The ad2 Windows Server 2025 instance was restored and optimized, with Active Directory demoted and disk I/O performance improved.
  - A strategic proposal for the Heterogeneous Directory Replication Protocol (HDRP) was drafted and published.
Part II: The Great Async Refactor (March 17 - 18)
Despite success on Linux, tests on the slow ad2 Windows
host were still plagued by intermittent, indefinite hangs. This
triggered a fundamental architectural shift to move the library's core
from a mix of synchronous and asynchronous code to a fully non-blocking
internal API.
Key Milestones & Engineering Decisions:
- Decision: Expose a _future API. Instead of hardcoding timeouts in the library, the core strategy was to refactor all blocking methods (xpath, field, get, etc.) into thin wrappers around new non-blocking ..._future counterparts. This moved timeout management to the test harness, allowing for flexible and explicit handling of stalls.
- Decision: Centralize Test Hardening in a Helper. A dedicated test library, t/lib/t/helper.pm, was created to contain all stabilization logic. "Safe" wrappers (safe_get, safe_xpath) were implemented there, using Future->wait_any to race asynchronous operations against a timeout, preventing tests from hanging.

      # Example test helper implementation
      sub safe_xpath {
          my ($mech, $query, %options) = @_;
          my $timeout   = delete $options{timeout} || 5;
          my $call_f    = $mech->xpath_future($query, %options);
          my $timeout_f = $mech->sleep_future($timeout)
                               ->then(sub { Future->fail("Timeout") });
          return Future->wait_any($call_f, $timeout_f)->get;
      }
- Decision: Refactor Node Attribute Cache. Investigations into flaky checkbox tests (t/50-tick.t) revealed that WWW::Mechanize::Chrome::Node was storing attributes as a flat list ([key, val, key, val]), which was inefficient for lookups and individual updates. The cache was refactored to definitively use a HashRef, providing O(1) lookups and enabling atomic dual-updates where both the browser property (via JS) and the internal library attribute are synchronized simultaneously.
- Decision: Implement Self-Cancelling Socket Watchdog. On Windows, traditional watchdog processes often failed to detect parent termination, leading to 60-second hangs after successful tests. We implemented a new socket-based watchdog in t::helper that listens on an ephemeral port; the background process terminates immediately when the parent socket closes, eliminating these cumulative delays.
- Decision: Deep Recursive Refactoring & Form Selection. To make the API truly non-blocking, the entire internal call stack had to be refactored. For example, making get_set_value_future non-blocking required first making its dependency, _field_by_name, asynchronous. This culminated in refactoring the entire form selection API (form_name, form_id, etc.) to use the new asynchronous _future lookups, which was a key step in mitigating the Windows deadlocks.
- Decision: Fix Critical Regressions & Memory Cycles.
  - Evaluation Normalization: Implemented a _process_eval_result helper to centralize the parsing of results from Runtime.evaluate. This ensures consistent handling of return values and exceptions between synchronous (eval_in_page) and asynchronous (eval_future) calls.
  - Memory Cycle Mitigation: A significant memory leak was discovered where closures attached to CDP event futures (such as those for asynchronous body retrieval) would capture strong references to $self and the $response object, creating a circular reference. The established rule is to always use Scalar::Util::weaken on both $self and any other relevant objects before they are used inside a ->then block that is stored on an object.
  - Context Propagation (wantarray): A major regression was discovered where Perl's wantarray context, which distinguishes between scalar and list context, was lost inside asynchronous Future->then blocks. This caused methods like xpath to return incorrect results (e.g., a count instead of a list of nodes). The solution was to adopt the "Async Context Pattern": capture wantarray in the synchronous wrapper, pass it as an option to the _future method, and then use that captured value inside the future's final resolution block.

        # Synchronous Wrapper
        sub xpath($self, $query, %options) {
            $options{ wantarray } = wantarray;                    # 1. Capture
            return $self->xpath_future($query, %options)->get;    # 2. Pass
        }

        # Asynchronous Implementation
        sub xpath_future($self, $query, %options) {
            my $wantarray = delete $options{ wantarray };         # 3. Retrieve
            # ... async logic ...
            return $doc->then(sub {
                if ($wantarray) {                                 # 4. Respect
                    return Future->done(@results);
                } else {
                    return Future->done($results[0]);
                }
            });
        }

  - Asynchronous Body Retrieval & Robust Content Fallbacks: Fixed a bug where decoded_content() would return empty strings by ensuring it awaited a __body_future. This was implemented by storing the retrieval future directly on the response object ($response->{__body_future}). To make this more robust, a tiered strategy was implemented: first try to get the content from the network response, but if that fails (e.g., for about:blank or due to cache eviction), fall back to a JavaScript XMLSerializer to get the live DOM content.
  - Signature Hardening: Fixed "Too few arguments" errors when using modern Perl signatures with Future->then. Callbacks were updated to use optional parameters (sub($result = undef) { ... }) to gracefully handle futures that resolve with no value.
  - XHTML "Split-Brain" Bug: Resolved a long-standing Chromium bug (40130141) where content provided via setDocumentContent is parsed differently than content loaded from a URL. A workaround was implemented: for XHTML documents, WMC now uses a JavaScript-based XPath evaluation (document.evaluate) against the live DOM, bypassing the broken CDP search mechanism.
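The memory-cycle rule above (weaken any captured objects before storing a callback on an object) is language-agnostic. As an illustrative sketch only, in Python rather than the module's actual Perl, the same bug and fix look like this, where weakref.ref plays the role of Scalar::Util::weaken:

```python
import weakref

class Response:
    """Stand-in for a response object that stores a completion callback."""
    def __init__(self):
        self.callbacks = []

class Mech:
    def __init__(self):
        self.response = Response()
        # Capturing `self` strongly in the callback would create a cycle:
        # self -> response -> callback -> self. Weaken the reference instead.
        weak_self = weakref.ref(self)
        def on_body(body):
            mech = weak_self()          # may be None if already collected
            if mech is not None:
                mech.last_body = body
        self.response.callbacks.append(on_body)

m = Mech()
probe = weakref.ref(m)
del m
print(probe() is None)  # True: no strong cycle keeps the object alive
```

With a strong capture of `self`, `probe()` would still return the object after `del m`, which is exactly the leak the saga describes.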
Derived Architectural Rules & SOPs:
- Rule: Always provide _future variants. Every library method that interacts with the browser via CDP must have a non-blocking asynchronous counterpart.
- Rule: Centralize stabilization in the test layer. All timeout and retry logic should reside in the test harness (t/lib/t/helper.pm), not in the core library.
- Rule: Explicitly propagate wantarray context. Synchronous wrappers must capture the caller's context and pass it down the Future chain to ensure correct scalar/list behavior.
- Rule: The entire call chain must be asynchronous. To enable non-blocking timeouts, even a single "hidden" blocking call in an otherwise asynchronous method will cause a stall.
- SOP: Reduce Library Noise. Diagnostic messages (warn, note, diag) should be removed from library code before commits. All such messages should be converted to use the internal $self->log('debug', ...) mechanism, ensuring clean TAP output for CI systems.
Part III: The MutationObserver Saga (March 19)
With most of the library refactored to be asynchronous, one stubborn
test, t/65-is_visible.t, continued to fail with timeouts.
This led to an ambitious, but ultimately unsuccessful, attempt to
replace the wait_until_visible polling logic with a more
"modern" MutationObserver.
Key Milestones & Challenges:
- The Theory: The goal was to replace an inefficient repeat { sleep } loop with an event-driven MutationObserver in JavaScript that would notify Perl immediately when an element's visibility changed.
- Implementation & Cascade Failure: The implementation proved incredibly difficult and introduced a series of new, hard-to-diagnose bugs:
  - An incorrect function signature for callFunctionOn_future.
  - A critical unit mismatch, passing seconds from Perl to JavaScript's setTimeout, which expected milliseconds.
  - A fundamental hang where the MutationObserver's JavaScript Promise would never resolve, even after the underlying DOM element changed.
- Debugging Maze: Multiple attempts to fix the checkVisibility JavaScript logic inside the observer callback, including making it more robust by adding DOM tree traversal and extensive console.log tracing, failed to resolve the hang. This highlighted the opacity and difficulty of debugging complex, cross-language asynchronous interactions, especially when dealing with low-level browser APIs.
Procedural Learning: Granular Edits
The effort was plagued by procedural missteps in using automated
file-editing tools. Initial attempts to replace large code blocks in a
single operation led to accidental code loss and match failures.
- Decision: Adopt "Delete, then Add" Workflow. Following forceful user correction, a new SOP was established for all future modifications:
  - Isolate: Break the file into small, manageable chunks (e.g., 250 lines).
  - Delete: Perform a "delete" operation by replacing the old code block with an empty string.
  - Add: Perform an "add" operation by inserting the new code into the empty space.
  - Verify: Verify each atomic step before proceeding.

  This granular process, while slower, ensured surgical precision and regained technical control over the large Chrome.pm module.
The consistent failure of the MutationObserver approach
eventually led to the decision to abandon it in favor of stabilizing the
original, more transparent implementation.
Part IV: Reversion and Final Stabilization (March 20)
After exhausting all reasonable attempts to fix the
MutationObserver, a strategic decision was made to revert
to the simpler, more transparent polling implementation and fix it
correctly. This proved to be the correct path to a stable solution.
Key Milestones & Engineering Decisions:
- Decision: Perform Strategic Reversion. The MutationObserver implementation, when integrated via callFunctionOn_future with awaitPromise, proved fundamentally unstable. Its JavaScript promise would consistently fail to resolve, causing indefinite hangs. A decision was made to revert all MutationObserver code from WWW::Mechanize::Chrome.pm and restore the original repeat { sleep } polling mechanism. A stable, understandable solution was prioritized over an elegant but broken one.
- Decision: Correct Timeout Delegation in the Harness. The root cause of the original timeout failure was identified as a race condition in the t/lib/t/helper.pm test harness. The safe_wait_until_* wrappers were implementing their own timeout (via wait_any and sleep_future) that raced against the underlying polling function's internal timeout. This led to intermittent failures on slow machines. The helpers were refactored to delegate all timeout management to the library's polling functions, ensuring a single, authoritative timer controlled the operation.
- Decision: Optimize Polling Performance. At the user's request, the polling interval was reduced from 300ms to 150ms. This modest performance improvement reduced the test suite's wallclock execution time by over a second while maintaining stability.
- Decision: Tune Test Watchdogs. The global watchdog timeout was adjusted to 12 seconds, specifically calculated as 1.5x the observed real execution time of the optimized test. This provides a data-driven safety margin for CI.
Part V: The Last Bug - A Platform-Specific Memory Leak (March 20)
With all other tests passing, a single memory leak failure in
t/78-memleak.t persisted, but only on the Windows
ad2 environment. This required a different approach than
the timeout fixes.
Key Milestones:
- The Bug: A strong reference cycle involving the on_dialog event listener was not being broken on Windows, despite multiple attempts to fix it. Fixes that worked on Linux (such as calling on_dialog(undef) in DESTROY) were not sufficient on the Windows host.
- The Diagnosis: The issue was determined to be a deep, platform-specific interaction between Perl's garbage collector, the IO::Async event loop implementation on Windows, and the Test::Memory::Cycle module. The cycle report was identical on both platforms, but the cleanup behavior was different.
- Failed Attempts: A series of increasingly aggressive fixes were attempted to break the cycle, including:
  - Moving the on_dialog(undef) call from close() to DESTROY().
  - Explicitly deleting the listener and callback properties from the object hash in DESTROY.
  - Swapping between $self->remove_listener and $self->target->unlisten in a mistaken attempt to find the correct un-registration method.
- Pragmatic Solution: After exhausting all reasonable code-level fixes without a resolution on Windows, the user opted to mark the failing test as a known issue for that specific platform.
- Final Fix: The single failing test in t/78-memleak.t was wrapped in a conditional TODO block that only executes on Windows (if ($^O =~ /MSWin32/i)), formally acknowledging the bug without blocking the build. This allows the test suite to pass in CI environments while flagging the issue for future, deeper investigation.
Part VI: CI Hardening (March 20)
A final failure in the GitHub Actions CI environment revealed one
last configuration flaw.
Key Milestones:
- The Bug: The CI was running prove --nocount --jobs 3 -I local/ -bl xt t directly. This command was missing the crucial -It/lib include path, which is necessary for test files to locate the t::helper module. This resulted in nearly all tests failing with "Can't locate t/helper.pm in @INC".
- The Investigation: An analysis of Makefile.PL revealed a custom MY::test block specifically designed to inject the -It/lib flag into the make test command. This confirmed that make test is the correct, canonical way to run the test suite for this project.
- The Fix: The .github/workflows/linux.yml file was modified to replace the direct prove call with make test in the "Run Tests" step. This ensures the CI environment runs the tests in the exact same way as a local developer, with all necessary include paths correctly configured by the project's build system.
Final Outcome
After this long and arduous journey, the
WWW::Mechanize::Chrome test suite is now stable and
passing on all targeted platforms, with known
platform-specific issues clearly documented in the code. The project is
in a vastly more robust and reliable state.
21 Mar 2026 1:52am GMT
20 Mar 2026
Planet Debian
Dirk Eddelbuettel: RcppSpdlog 0.0.28 on CRAN: Micro-Maintenance

Version 0.0.28 of RcppSpdlog arrived on CRAN today, has been uploaded to Debian, and has been built for r2u. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
This release contains a rebuilt RcppExports.cpp to aid Rcpp in the transition towards Rcpp::stop() and away from Rf_error() in its user packages. No other changes were made.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.28 (2026-03-19)
- Regenerate RcppExports.cpp to switch to (Rf_error), aiding in the Rcpp transition to Rcpp::stop()
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
20 Mar 2026 9:47pm GMT
Michael Ablassmeier: virtnbdbackup 2.46 - bitlocker recovery keys
I've released virtnbdbackup 2.46, which now attempts to extract the BitLocker recovery keys during backup. Windows domains need a working QEMU guest agent installed for this to work.
Using the agent, it also extracts the available guest info (network config, OS version, etc.) from the domain and stores it alongside the backup.
20 Mar 2026 12:00am GMT
12 Mar 2026
Planet Lisp
Christoph Breitkopf: Functional Valhalla?
Pointer-rich data layouts lead to suboptimal performance on modern hardware. For an excellent introduction to this, see the article The Road to Valhalla. While it is specifically about Java, many parts of the article also apply to other languages. To summarize some of the key points of the article:
- In 1990, a main memory fetch was about as expensive as an arithmetic operation. Now, it might be a hundred times slower.
- A pointer-rich data layout involving indirections between data at different locations is not ideal for today's hardware.
- A language should make flat (cache-efficient) and dense (memory-efficient) memory layouts possible without compromising abstraction or type safety.
Consider a vector of records (or tuples, structures, product types - I'll stay with "record" in this article). A pointer-rich layout has each record allocated separately on the heap, with a vector containing pointers to the records - for example, a "Point" record of two numbers.
The flat and dense layout has the records stored directly in the array, one after the other.
(Note that there is another flat layout, namely, using one vector per field of the record. This is better suited to instruction-level parallelism or specialized hardware (e.g., GPUs), especially when the record fields have different sizes. But it is less suited for general-purpose computing, as reading a single vector element requires one memory access per field, whereas the "vector of records" layout above requires only one access per record. Such a layout can be easily implemented in any language that has arrays of native types, whether in the language itself or in a library (e.g., OCaml's Owl library). Thus, in this article, I will only consider the "array of records" layout above.)
Functional language considerations
Things should be much easier in functional languages than in Java: we have purity, referential transparency, and everything is a value. So it should be simple enough to store these values in memory in their native representation. But there are reasons why that is often not the case in practice:
- Laziness: a value can be a computation that produces a result only when needed.
- Layout polymorphism: unless we replicate the code for every type (as, for example, Rust does), we need to be able to store every possible value in the same kind of slot.
- Dynamically typed languages require type information at runtime.
- Functional languages often have automatic memory management, which may require runtime type information.
- Many of our languages are not purely functional, but contain impure features.
- Pure languages often lack traditional vectors or arrays, since making them perform well in immutable code is not easy.
- Historical reasons: Graph reduction was a common implementation technique for lazy languages, and graphs involve pointers.
- Implementation restrictions: not being mainstream, fewer resources are devoted to implementation and optimization.
Many implementations cannot even lay out native types flat in records, so a Point record of IEEE 754 double-precision numbers may actually be stored as a pair of pointers to separately heap-allocated boxed doubles.
The (very short) List
So, given a record type, which functional languages allow a collection of values of that type to have a flat, linear memory layout? The number of programming languages that claim to be "functional" is huge, so the ones listed here are just a selection based on my preferences - mainly languages that allow that layout, and some I have some experience with and can speculate on how easy or hard it would be to add that as a library or extension.
Since the Point record can be misleading in its simplicity when it comes to the question of whether the functionality could be implemented as a library, I'll point out that there are records where the layout is a bit more interesting:
- Records containing different types with different storage sizes, for example, one 64-bit float and one 32-bit integer. On most architectures, this will require 4 bytes of padding between elements.
- Records containing native values along with something that has to be represented as a pointer, for example, a reference-type or a lazy value. In a flat layout, this means that every nth element will be a pointer, requiring special support from the memory management system, either by providing layout information or by using a conservative GC that treats everything as a potential pointer.
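For concreteness, here is a small, hypothetical Python sketch (using ctypes as the "byte vector with layout information") of the first case: a record of one 64-bit float and one 32-bit integer gets trailing padding so that array elements stay aligned, while the array itself remains one flat allocation:

```python
import ctypes

# Hypothetical flat "vector of records"; the field mix mirrors the example
# above: one 64-bit float and one 32-bit integer.
class Rec(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("n", ctypes.c_int32)]

# On a typical 64-bit ABI this is 16 bytes: 8 + 4, plus 4 bytes of trailing
# padding so that array elements stay 8-byte aligned.
print(ctypes.sizeof(Rec))

vec = (Rec * 1000)()      # one contiguous allocation, no per-element pointers
vec[0] = Rec(1.5, 42)
print(ctypes.sizeof(vec) == 1000 * ctypes.sizeof(Rec))  # True: flat and dense
```

The same layout rules apply in any language targeting the native ABI; what differs is how much of this a library can hide behind the language's ordinary record syntax.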
Pure languages:
Clean
Yes: Clean has unboxed arrays of records in the base language.
Caveat: it has no integer types of specific sizes and only one floating-point type, making it harder to reduce memory usage by using the smallest type just large enough to support the required value range. It seems possible to implement such types in a library (the mTask system does that).
Futhark
No. Futhark does not intend to be a general-purpose language, so this is not surprising.
I mention it here because it does have arrays of records, but, since it targets GPUs and related hardware, it uses the "record of arrays" layout mentioned above.
Haskell
Yes. Not in the base language, but there is library support via Data.Vector.Unboxed. Types that implement the Unbox type class can be used in these vectors. Many basic types and tuples have an Unbox instance. However, when you care about efficiency, you probably do not want to use a tuple such as (Double, Double), but rather a data type with strict fields, such as data Point = Point !Double !Double.
Writing an Unbox instance for such a type is not trivial. The vector-th-unbox library makes it easier, but requires Template Haskell. Unboxed vectors are implemented by marshalling the values to byte arrays, so records with pointer fields are not supported.
Impure Languages
F#
Yes, even records with pointer fields. Records have structural equality, and you can use structs or the [<Struct>] attribute to get a flat layout.
And that's all I could find. Unless I follow Wikipedia's list of functional programming languages, which contains languages such as C++, C#, Rust, or Swift, that allow the flat layout, but don't really fit my idea of a functional language. But SML, OCaml, Erlang (Elixir, Gleam), Scala? Not that I could see (but please correct me if I'm wrong).
Rolling your own
Since there is a library implementation for Haskell, maybe that's a possibility for other languages?
You should be able to implement flat layouts in any language that supports byte vectors. More interesting is how well such a library fits into the language, and whether a user of the library has to write code or annotations for user-defined record types, or whether the library can handle part or all of that automagically.
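As a minimal sketch of that byte-vector approach in Python (hypothetical helper names): records are marshalled into a flat bytearray, which is why identity is lost on retrieval - only the field values survive the round trip:

```python
import struct

# Hypothetical flat "vector of Point records" backed by a plain bytearray.
FMT = "dd"                       # two IEEE 754 doubles per record
SIZE = struct.calcsize(FMT)      # bytes per record

def make_typed_vector(n):
    return bytearray(n * SIZE)   # flat, dense, no per-element pointers

def vector_set(vec, i, pt):
    struct.pack_into(FMT, vec, i * SIZE, *pt)

def vector_ref(vec, i):
    return struct.unpack_from(FMT, vec, i * SIZE)

vec = make_typed_vector(1000)
pt = (1.0, 2.0)
vector_set(vec, 0, pt)
out = vector_ref(vec, 0)
print(out == pt)   # True: equal by value
print(out is pt)   # False: identity is lost in the marshalling round trip
```

This is the same identity-loss behavior the Scheme example below demonstrates with eq?.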
I'll only mention my beloved Lisp/Scheme. Lisp's uniform syntax and macro system are a bonus here, but the lack of static typing makes things harder.
In Scheme, R6RS (and R7RS with the help of some SRFIs) has byte-vectors and marshalling to/from them in the standard library. But Scheme does not have type annotations, so you either need to offer a macro to define records with typed fields or to define how to marshal the fields of a regular (sealed) record. Since you can shadow standard procedures in a library, you can write code that looks like regular Scheme code, but, perhaps surprisingly, loses identity when storing/retrieving values from records:
(let ((vec (make-typed-vector 'point 1000))
      (pt (make-point x y)))
  (vector-set! vec 0 pt)
  (eq? (vector-ref vec 0) pt))
⇒ #f
(But then, you probably shouldn't be using eq? when doing functional programming in Scheme.)
The same approach is possible in Common Lisp. In contrast to Scheme, it does have optional type annotations, and, together with a helper library for accessing the innards of floats and either the meta-object protocol to get type information or (probably better) a macro to define typed records, an implementation should be reasonably straightforward. Making it play nice with inheritance and the dynamic nature of Common Lisp (e.g., adding slots to classes or even changing an object's class at runtime) would be a much harder undertaking.
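As a baseline for what such a library would have to generate, here is a minimal hand-rolled sketch in portable Common Lisp. All the names (make-point-vector, point-x, and so on) are hypothetical illustrations, not a real library: two double-float fields per point are stored inline in one specialized array, so reading a field never chases a pointer.

```lisp
;; Flat storage for 2-D points: one specialized double-float array,
;; with the fields of point I at indices (* 2 I) and (+ (* 2 I) 1).
;; All names here are hypothetical, for illustration only.

(defun make-point-vector (n)
  "Storage for N points, each with two double-float fields, laid out flat."
  (make-array (* 2 n) :element-type 'double-float
                      :initial-element 0d0))

(defun point-x (pv i) (aref pv (* 2 i)))
(defun point-y (pv i) (aref pv (+ (* 2 i) 1)))

(defun (setf point-x) (v pv i) (setf (aref pv (* 2 i)) v))
(defun (setf point-y) (v pv i) (setf (aref pv (+ (* 2 i) 1)) v))

;; Usage: values are stored and read by content; there is no per-point
;; heap object, so pointer identity is lost by construction.
(let ((pv (make-point-vector 1000)))
  (setf (point-x pv 0) 1d0
        (point-y pv 0) 2d0)
  (list (point-x pv 0) (point-y pv 0)))
```

Generating such accessors from a typed field list is exactly the kind of thing a defmacro-based record definer could automate, with declare/the type annotations helping the compiler produce unboxed accesses.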
Conclusion
Of the functional languages I looked at, only F# fully supports flat and dense memory layouts. Among the pure languages, Haskell and Clean come close.
The question is how important this really is. There's a good argument to be made for turning to more specialized languages like Futhark if you mainly care about performance. On the other hand, having a uniform codebase in one language also has advantages.
Then, the performance story has changed, too. While the points Project Valhalla raises remain true in principle, processor designers are aware of this as well. They are doing their best to hide memory latency with techniques such as out-of-order execution or humongous caches. Thus, on a modern CPU, the effects of a pointer-rich layout are often only observable with large working set sizes.
Still, given the plethora of imperative languages that can get you to Valhalla, support for this in the functional landscape seems lacking. In the future, I hope to see more languages or libraries that will make this possible.
12 Mar 2026 11:17am GMT
07 Mar 2026
Planet Lisp
Scott L. Burson: FSet v2.3.0: Transients!
FSet v2.3.0 added transients! These make it faster to populate new collections with data, especially as the collections get large. I shamelessly stole the idea from Clojure.
They are currently implemented only for the CHAMP types ch-set, ch-map, ch-2-relation, ch-replay-set, and ch-replay-map.
The term "transient" contrasts with "persistent". I'm using the term "persistent" in its functional-data-structure sense, as Clojure does: a data structure is persistent if multiple states of it can coexist in memory efficiently. (The probably more familiar use of the term is in the database sense, where it refers to nonvolatile storage of data.) FSet collections have, up to now, all been persistent in this sense; a point modification to one, such as by with or less, takes only O(log n) space and time to return a new state of the collection, without disturbing the previous state.
A transient encapsulates the internal tree of a collection so as to guarantee that it holds the only pointer to the tree; this allows modifications to tree nodes to be made in-place, so long as the node has sufficient allocated space. Once the collection is built, the tree is in the same format that existing FSet code expects, and can be accessed and functionally updated as usual.
Some quick micro-benchmarking suggests that speedups, for constructing a set from scratch, range from 1.6x at size 64 to as much as 2.4x at size 4096.
You don't necessarily even have to use transients explicitly in order to benefit from them. Some FSet builtins such as filter and image use them now. The GMap result types ch-set etc. also use them.
For details, see the GitLab MR.
07 Mar 2026 8:04am GMT
28 Feb 2026
Planet Lisp
Neil Munro: Ningle Tutorial 15: Pagination, Part 2
Contents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuration Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
- Part 12 (Clean Up & Bug Fix)
- Part 13 (Adding Comments)
- Part 14 (Pagination, Part 1)
- Part 15 (Pagination, Part 2)
Introduction
Welcome back! We will be revisiting the pagination from last time, but this time we are going to make things easier on ourselves. I built a package for pagination, mito-pager; much of what we looked at in the last lesson was boilerplate and repetitive, so we should look at removing it.
I will say that mito-pager can do a little more than what I show here. It has two modes: you can use paginate-dao (named this way so that it is familiar to mito users) to paginate over simple models; if you need to perform complex queries, there is a macro, with-pager, that you can use instead. It is this second form we will use in this tutorial.
There is one thing to bear in mind: when using mito-pager, you must implement your data retrieval functions so that they return multiple values (via values), as mito-pager relies on this to work.
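Concretely, the contract is just Common Lisp multiple values: the fetch function must return the page of items as the first value and the total count as the second. A minimal sketch against a plain in-memory list (no mito involved; *items* and get-items are hypothetical names for illustration):

```lisp
;; A stand-in data source: 120 integers instead of database rows.
(defparameter *items* (loop for i from 1 to 120 collect i))

(defun get-items (limit offset)
  "Return (values page-of-items total-count), the shape mito-pager expects."
  (let* ((total (length *items*))
         (start (min offset total))
         (end   (min (+ start limit) total)))
    (values (subseq *items* start end) total)))

;; Second page of 50: items 51-100, with the total count alongside.
(multiple-value-bind (page total) (get-items 50 50)
  (list (first page) (length page) total))  ; => (51 50 120)
```

In the real project, the posts methods below play the role of get-items, with mito supplying both the rows and the count.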
I encourage you to try the library out in other use cases and, of course, if you have ideas, please let me know.
Changes
Most of our changes are quite limited in scope, really it's just our controllers and models that need most of the edits.
ningle-tutorial-project.asd
We need to add the mito-pager system to our project's asd file.
- :ningle-auth)
+ :ningle-auth
+ :mito-pager)
src/controllers.lisp
Here is the real payoff! I almost dreaded the sheer volume of the change, but then realised it's actually simple: we only need to change our index function, and it may be easiest to delete it entirely and write the new, simplified version.
(defun index (params)
  (let* ((user (gethash :user ningle:*session*))
         (req-page (or (parse-integer (or (ingle:get-param "page" params) "1") :junk-allowed t) 1))
         (req-limit (or (parse-integer (or (ingle:get-param "limit" params) "50") :junk-allowed t) 50)))
    (flet ((get-posts (limit offset)
             (ningle-tutorial-project/models:posts user :offset offset :limit limit)))
      (mito-pager:with-pager ((posts pager #'get-posts :page req-page :limit req-limit))
        (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :pager pager)))))
This is much nicer, and in my opinion, the controller should be this simple.
src/main.lisp
We need to ensure we include the templates from mito-pager; this is a simple one-line change.
(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
(djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
+ (djula:add-template-directory (asdf:system-relative-pathname :mito-pager "src/templates/"))
src/models.lisp
As mentioned at the top of this tutorial, we have to implement our data retrieval functions in a certain way. While there are some changes here, we ultimately end up with less code.
We can start by removing the count parameter; we won't be needing it in this implementation, and since the count parameter is gone, the :around method can go too!
- (defgeneric posts (user &key offset limit count)
+ (defgeneric posts (user &key offset limit)
-
- (defmethod posts :around (user &key (offset 0) (limit 50) &allow-other-keys)
- (let ((count (mito:count-dao 'post))
- (offset (max 0 offset))
- (limit (max 1 limit)))
- (if (and (> count 0) (>= offset count))
- (let* ((page-count (max 1 (ceiling count limit)))
- (corrected-offset (* (1- page-count) limit)))
- (posts user :offset corrected-offset :limit limit))
- (call-next-method user :offset offset :limit limit :count count))))
There are two methods to look at. The first is the one specialised on user:
-
- (defmethod posts ((user user) &key offset limit count)
+ (defmethod posts ((user user) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql :binds params)
- count
- offset)))
+ (mito:retrieve-by-sql sql :binds params)
+ (mito:count-dao 'post))))
The second is when the type of user is null:
-
- (defmethod posts ((user null) &key offset limit count)
+ (defmethod posts ((user null) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql)
- count
- offset)))
+ (mito:retrieve-by-sql sql)
+ (mito:count-dao 'post))))
As you can see, all we are really doing is relying on mito to do the lion's share of the work, right down to the count.
src/templates/main/index.html
The change here is quite simple: all we need to do is change the include path to point to the partial provided by mito-pager.
- {% include "partials/pager.html" with url="/" title="Posts" %}
+ {% include "mito-pager/partials/pager.html" with url="/" title="Posts" %}
src/templates/partials/pagination.html
This one is easy: we can delete it! mito-pager provides its own template, and while you can override it if you wish, in this tutorial we no longer need it.
Conclusion
I hope you will agree that, this time, using a prebuilt package takes a lot of the pain out of pagination. I don't like to dictate what developers should or shouldn't use; that's why last time you were given the same information I had. If you wish to build your own library, you can; if you want to focus on getting things done, you are more than welcome to use mine. And of course, if you find issues, please do let me know!
Learning Outcomes
| Level | Learning Outcome |
|---|---|
| Understand | Understand how third-party pagination libraries like mito-pager abstract boilerplate pagination logic, and how with-pager expects a fetch function returning (values items count) to handle page clamping, offset calculation, and boundary correction automatically. |
| Apply | Apply flet to define a local adapter function that bridges the project's posts generic function with mito-pager's expected (lambda (limit offset) ...) interface, and use with-pager to reduce controller complexity to its essential logic. |
| Analyse | Analyse what responsibilities were transferred from the manual pagination implementation to mito-pager - count caching, boundary checking, offset calculation, page correction, and range generation - contrasting the complexity of both approaches. |
| Create | Refactor a manual pagination implementation to use mito-pager by simplifying model methods to return (values items count), replacing complex multi-step controller calculations with with-pager, and delegating the pagination template partial to the library. |
Github
- The link for the custom pagination part of the tutorial's code is available here.
Common Lisp HyperSpec
| Symbol | Type | Why it appears in this lesson | CLHS |
|---|---|---|---|
| defpackage | Macro | Define project packages like ningle-tutorial-project/models, /forms, /controllers. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpac.htm |
| in-package | Macro | Enter each package before defining models, controllers, and functions. | http://www.lispworks.com/documentation/HyperSpec/Body/m_in_pkg.htm |
| defgeneric | Macro | Define the simplified generic posts function signature with keyword parameters offset and limit (the count parameter is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defgen.htm |
| defmethod | Macro | Implement the simplified posts methods for user and null types (the :around validation method is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defmet.htm |
| flet | Special Operator | Define the local get-posts adapter function that wraps posts to match mito-pager's expected (lambda (limit offset) ...) interface. | http://www.lispworks.com/documentation/HyperSpec/Body/s_flet_.htm |
| let* | Special Operator | Sequentially bind user, req-page, and req-limit in the controller where each value is used in subsequent bindings. | http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm |
| or | Macro | Provide fallback values when parsing page and limit parameters, defaulting to 1 and 50 respectively. | http://www.lispworks.com/documentation/HyperSpec/Body/m_or.htm |
| multiple-value-bind | Macro | Capture the SQL string and bind parameters returned by sxql:yield in the model methods. | http://www.lispworks.com/documentation/HyperSpec/Body/m_multip.htm |
| values | Function | Return two values from posts methods - the list of results and the total count - as required by mito-pager:with-pager. | http://www.lispworks.com/documentation/HyperSpec/Body/a_values.htm |
| parse-integer | Function | Convert string query parameters ("1", "50") to integers, with :junk-allowed t for safe parsing. | http://www.lispworks.com/documentation/HyperSpec/Body/f_parse_.htm |
28 Feb 2026 8:00am GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Guided sightseeing tours
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
26 Jan 2026 11:00pm GMT
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT