28 Mar 2026

Planet Grep

Frederic Descamps: MariaDB observability – results from the poll: the community has clearly chosen its default stack

Before I share my takeaway from this MariaDB observability poll, I would like to thank all participants and highlight that these recent polls are very popular - your participation makes us happy. With that said, we recently asked the MariaDB community the following question: Which observability tools do you use for MariaDB? I like polls like […]

28 Mar 2026 5:34am GMT

Frederic Descamps: MariaDB Keeps Climbing: Community, Adoption, and Momentum

If you've been around the MariaDB community for a while, you can probably feel it already: things are moving in the right direction. And no, I'm not talking about one vanity metric, one lucky spike, or one noisy social post. I'm talking about a broader trend. The latest Adoption Index data shows something I really […]

28 Mar 2026 5:34am GMT

Dries Buytaert: State of Drupal presentation (March 2026)

This year, Drupal turned 25. DrupalCon Chicago felt like the right place to mark that milestone. My keynote was part celebration and part wake-up call. I talked about Drupal's foundations, how AI is putting pressure on them, and why I believe we can rebuild them stronger than before.

If you missed the keynote, you can watch the video below or download my slides (32.6 MB).

It will be interesting to rewatch this keynote in 10 years, when AI is fully mainstream and has reshaped how we work, including our agencies, our craft, and how we collaborate in Open Source. It feels like a snapshot of an industry in transition.

Site templates and the marketplace

About a year ago at DrupalCon Atlanta, I introduced the idea of site templates and a marketplace to go with them. By DrupalCon Vienna, we had one site template, but no marketplace.

In Chicago, I showed eleven site templates available in a basic marketplace at marketplace.drupal.org. All eleven can be installed directly from the Drupal CMS installer.

AI for site building

For more than 20 years, Drupal's ecosystem has rested on a stable triangle: the platform itself, digital agencies who bring Drupal into the real world, and the community that builds and maintains it. That triangle has proven remarkably resilient through many waves of new technologies.

But what happens when AI disrupts all three sides at the same time? In my keynote, I showed how Drupal is responding.

I started by showing a demo of a workflow I believe will become common for Drupal agencies. You quickly prototype a website with AI, then turn it into a Drupal site with the help of AI and a skilled developer, all within hours.

AI gets you to a prototype fast. Drupal gives it the foundations that last.

I believe Drupal has a unique advantage in this new world. Organizations will always need real workflows, permissions, security, scalability, integrations, compliance, and governance. Drupal is very well suited for AI-driven workflows.

The demo worked because Drupal CMS ships with Drupal Canvas, which includes both CLI tools and AI skills. But the real strength comes from Drupal's foundations: its APIs, reusable building blocks, and mature architecture, refined over 25 years. This is the accidental AI advantage I have written about before. This is what makes Drupal one of the best platforms for AI-driven development.

[Image: front view of a car with a transparent hood revealing a Drupal engine. Labels point to features like governance, security, permissions, customizations, scalability, integrations, authoring, and compliance.]

AI for content management

At DrupalCon Vienna, I introduced the Context Control Center as a rough prototype. Since then, we have added many features. It is now nearly production-ready.

The idea is straightforward: AI agents need good context to help manage tasks in Drupal. With the Context Control Center, teams define their brand voice, target audiences, key messages, product details, and editorial guidelines in one place. Then every AI agent on the site draws from this single source of truth. The result is that you create knowledge once, and scale it to all the pages and content on your website.

In my keynote, I showed two demos of the Context Control Center in action. First, Drupal's AI agents turned a simple marketing brief into a complete, on-brand page using Drupal Canvas, consulting the Context Control Center along the way. They followed brand rules, asked clarifying questions, generated structured data for search, and added cross-links.

Second, I showed a proof of concept for dynamic contexts, where the Context Control Center pulls in real-time data from Google Analytics to help improve content performance after publication.

Saying no to AI slop

AI is lowering the barrier to contribute to Open Source projects like Drupal. On paper, that sounds great. More contributors, more patches, more momentum.

But it can also be a real challenge. The volume of contributions is going up while the quality is going down. More patches are landing on a small group of maintainers, and reviewing low-quality code wastes their time. This creates asymmetric pressure on Open Source.

If you're using AI to contribute, you are responsible for what you submit: don't submit code you don't understand. Our quality standards matter, and we will uphold them.

Our craft always evolves

[Image: slide with the text "Our craft always evolves".]

In my keynote, I also told the stories of two community members who embraced AI in a meaningful way.

Aidan Foster, who has been running Foster Interactive for 17 years, chose to go all in on the Drupal AI Initiative instead of staying on the sidelines. Together with his team, he is rebuilding the foundations of his agency to leverage AI and prepare for what is next.

And Jürgen Haas, a longtime contributor and creator of the ECA module, used AI to move at the speed of a team and make Drupal's ECA module much easier to use. In both cases, AI amplifies expertise. It does not replace it.

The world is being flooded with AI-generated average. Average is cheap now, but expertise remains hard-earned and valuable. This community has spent 25 years building it, and that is not something AI can replicate.

[Image: a human in a space suit and a large cyborg stand side by side before a vast blue wave or cloud, stirred up by a mysterious technological behemoth on the horizon, with the text: "AI is the storm, and the way through it."]

AI is the storm, and AI is the way through the storm. I said that first in Vienna. Six months later, I believe it more than ever. Not as a slogan, but as something I have watched happen. We need more people like Aidan and Jürgen. If you want to get involved, join us on Drupal Slack or attend DrupalCon Rotterdam this fall.

I want to extend my gratitude to everyone who contributed to making my presentation and demos a success. A special thank you to Adam G-H, Aidan Foster, ASH Sullivan, Christoph Breidert, Cristina Chumillas, Emma Horrell, Gábor Hojtsy, Gurwinder Antal, James Abrahams, Jurgen Haas, Kristen Pol, Lauri Timmanee, Marcus Johansson, Martin Anderson-Clutz, Pamela Barone, Scott Falconer, Tim Lehnen. Many others contributed indirectly to make this possible. If I've inadvertently omitted anyone, please reach out.

28 Mar 2026 5:34am GMT

27 Mar 2026

Planet Debian

Jonathan Dowland: Digital gardening

I was reading a post on Alex Chan's website [1] that referenced the concept of digital gardens, a concept/analogy for organising information that dates back to the 90s. This old concept is getting new traction today by being contrasted with the "endless stream" as used and abused by social media, but also with how blogs are typically presented.

This site, my homepage, has a blog, and that's the bit that most people who interact with the site will experience - partly because it's the bit that gets syndicated out: via feeds; on Planet Debian and downstream from it; once upon a time on Twitter; nowadays on the Fediverse.

However, there's more to my homepage than that. The rest of it may be of little interest to anyone besides me, but it's useful to me, at least. So I may shift focus a little from mainly writing blog posts, and tend to the rest of the garden a bit more.

Some recent seeding and pruning: my guest status at Newcastle University recently came up for renewal, so I wrote down my goals in the Historic Computing Committee for the next year or so, and put them here: nuhcc. I've also been pondering what I'm up to in Debian at the moment, so I took some time to add my current projects to that page.


[1] I'm reminded that I should really publish a "blog roll" of cool blogs I'm following at the moment, of which Alex Chan's is one.

27 Mar 2026 10:05pm GMT

Bits from Debian: New Debian Developers and Maintainers (January and February 2026)

The following contributors got their Debian Developer accounts in the last two months:

The following contributors were added as Debian Maintainers in the last two months:

Congratulations!

27 Mar 2026 10:00pm GMT

Paul Tagliamonte: librtlsdr.so for fun and profit

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.

It's well known and universally agreed that radios are cool. Among the contested field of coolest radios, Software Defined Radios (SDRs) are definitely the most interesting to me. Of all the (entirely too many) SDRs I own, the rtlsdr is still my #1. It's just good. It's a great price, extremely capable, reliable, well-supported, and compact. Why bother with anything else? Sure, it can't transmit, uses a (fairly weird) 8-bit unsigned integer IQ representation, and has a limited sampling rate and frequency range - but even with all that, it's still the radio I will pack first. Don't get me wrong, I love my Ettus radios, PlutoSDRs, HackRFs, and my AirspyHF+ - they're great! I just always find myself falling back to an rtl-sdr, every time.

Perhaps the best reason to use an rtlsdr is the absolutely mind-boggling amount of cool stuff people have written for it. The rtlsdr API is super easy to use and widely supported if you're building on top of existing radio processing frameworks - it's still a shock to me when something omits rtlsdr support.

sparky

Over the last 7 years, I've been learning about radios - I got my ham radio license (de K3XEC), hacked on some cool stuff where I've learned how radios work by "doing", and was even lucky enough to give my first rf-centric talk at districtcon. Embarrassingly, I still haven't gotten around to learning how the fancy stuff like GNU Radio works. I'm sure I'm going to love it when I do.

As part of this, I've also cooked up some very unprofessional formats and protocols I use for convenience. Locally, all my on-disk captures are stored in rfcap or more recently arf (post on this coming soon), while direct SDR access at my house is almost entirely a mix of the widely used rtl-tcp protocol, and my "riq" protocol (post on this coming soon). Both rtl-tcp and riq operate over the network, so I don't have to bother with plugging things into USB ports, and I can share my radios with my friends.
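For anyone who hasn't met rtl-tcp before: its wire protocol is tiny, which is a big part of why it is so widely supported. After a 12-byte greeting from the server, the client just sends fixed five-byte commands - a one-byte command ID followed by a big-endian u32 parameter. A sketch of the client-side encoding (the function name is mine, not anything from sparky or librtlsdr):

```rust
/// Encode an rtl-tcp command: one command-ID byte followed by a
/// four-byte big-endian parameter (five bytes total on the wire).
fn rtl_tcp_cmd(cmd: u8, param: u32) -> [u8; 5] {
    let mut buf = [0u8; 5];
    buf[0] = cmd;
    buf[1..5].copy_from_slice(&param.to_be_bytes());
    buf
}

fn main() {
    // Command 0x01 tunes the dongle; the parameter is the center
    // frequency in Hz (here, 90.9 MHz).
    let tune = rtl_tcp_cmd(0x01, 90_900_000);
    println!("{:02x?}", tune); // [01, 05, 6b, 06, 20]
}
```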

All of that work sits in my current generation of radio processing code, "sparky" (a reference to spark-gap transmitters), which is a heap of Rust supporting everything from no_std for embedded experiments to conditional support for interfacing with all the radios I own, with tokio-based async support in addition to blocking I/O for highly concurrent daemons. This quickly advanced beyond my old Go-based code (hz.tools/go-sdr), which I archived so I can focus on learning. I still think Go is a great language to write RF code in - but I can't focus on that tech tree anymore.

Of course, this now poses a new problem - no one supports my format(s) or radio protocol(s), since, well, I'm the only one using them. I've committed a fair amount of my hardware to this setup, and yanking it from the rack to try something out is a bit of a pickle. This isn't a huge deal for learning, but it does make it tedious to try out something from the internets.

librtlsdr.so

Thankfully, Rust has robust support for wrap[ping itself] in a grotesque simulacra of C's skin and mak[ing its] flesh undulate, which is an attractive nuisance if I've ever seen one. Naturally, my ability to restrain myself from engaging in ill-advised RF adventures is basically zero, so it's time to do the thing any similarly situated person would do - reimplement the API and ABI of librtlsdr.so, backed by sparky instead.

Since enumeration of devices is going to be annoying (specifically, they're over the network), I decided early-on to rely on an explicit list of devices via a configuration file. I'd rather only load that once so programs don't get confused, so I opted to use a CTOR to run a stub when the ELF is linked at runtime.

// lightly edited for clarity

#[used]
#[expect(unused)]
#[unsafe(link_section = ".init_array")]
pub static INITIALIZE: extern "C" fn() = sparky_rtlsdr_ctor;

#[unsafe(no_mangle)]
pub extern "C" fn sparky_rtlsdr_ctor() {
    // Load the device list once, when the shared object is loaded.
    let config: Config = {
        if let Ok(config_bytes) = std::fs::read("/etc/sparky-rtlsdr.toml") {
            toml::from_slice(&config_bytes).unwrap()
        } else {
            Config { device: vec![] }
        }
    };
    CONFIG.set(config);
}

Next, it's time to start with the basics. Opening and closing a handle using rtlsdr_open and rtlsdr_close. Given we don't control the runtime, and the rtl-sdr device handle is opaque (for good reason!), I opted to smuggle a rust Box<Device> non-FFI safe heap-allocated struct through the device handle pointer, and let C take ownership of the Box. No one should be looking in there anyway.

// lightly edited for clarity

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_open(dev: *mut *mut Handle, index: u32) -> c_int {
    let config = &CONFIG.device[index as usize];
    let sdr = match config.load() {
        Ok(v) => v,
        Err(_err) => return -1,
    };
    // Hand ownership of the heap-allocated Handle over to the C caller.
    let handle = Box::new(Handle { config, sdr });
    unsafe { *dev = Box::into_raw(handle) };
    0
}

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_close(dev: *mut Handle) -> c_int {
    // Reclaim the Box we leaked in rtlsdr_open; dropping it closes the device.
    let dev = unsafe { Box::from_raw(dev) };
    drop(dev);
    0
}

With that in place, we can chip away at the API surface, translating calls as best as we can. I won't bother listing it all, since it's not very interesting - but here's an example implementation of rtlsdr_set_sample_rate and rtlsdr_get_sample_rate. These calls translate between an rtl-sdr sample rate (a u32 containing the value in Hz) and a sparky Frequency type, and invoke get_sample_rate or set_sample_rate on the device's Rust handle. Since each device implements the sparky Sdr trait, the actual underlying device doesn't matter much here.

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_set_sample_rate(dev: *mut Handle, rate: u32) -> c_int {
    let dev = unsafe { &mut *dev };
    // rtl-sdr expresses rates as a plain u32 in Hz; convert to sparky's type.
    let rate = Frequency::from_hz(rate as i64);
    if let Err(_err) = dev.sdr.set_sample_rate(dev.channel, rate) {
        return -1;
    }
    0
}

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_get_sample_rate(dev: *mut Handle) -> u32 {
    let dev = unsafe { &mut *dev };
    let freq = match dev.sdr.get_sample_rate(dev.channel) {
        Ok(freq) => freq,
        Err(_err) => return 0,
    };
    freq.as_hz() as u32
}

After repeating this process for the rest of the stubs I could (and otherwise setting error conditions when the functionality is not supported), I was ready to try it out. Within sparky, I patched my "MockSDR" (basically a Mock type implementing the Sdr trait) to implement the same testmode IQ protocol that the RTL-SDR has, and decided to see whether rtl_test from apt, without any changes, could be fooled.
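(A quick aside on what "testmode" means here: in test mode, the RTL-SDR returns an incrementing 8-bit counter instead of real samples, so a reader can spot dropped bytes by looking for jumps in the sequence. A toy version of that check - not rtl_test's or sparky's actual code - might look like:)

```rust
/// Count bytes lost in a test-mode stream, where each byte should be
/// exactly one greater (mod 256) than the previous one.
fn count_lost(buf: &[u8]) -> u32 {
    let mut lost = 0u32;
    for pair in buf.windows(2) {
        // wrapping_sub handles the 255 -> 0 rollover.
        let step = pair[1].wrapping_sub(pair[0]);
        lost += (step as u32).saturating_sub(1);
    }
    lost
}

fn main() {
    // A clean counter stream, rollover included: nothing lost.
    let clean: Vec<u8> = (250..=255u8).chain(0..=5).collect();
    assert_eq!(count_lost(&clean), 0);

    // Drop three bytes in the middle and the gap shows up.
    assert_eq!(count_lost(&[0u8, 1, 2, 6, 7]), 3);
}
```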

$ rtl_test
No supported devices found.

Great, cool. No devices plugged in. Looks great. Let's try it with my librtlsdr.so LD_PRELOAD-ed into the binary first:

$ LD_PRELOAD=target/release/librtlsdr.so rtl_test
Found 1 device(s):
 0: hz.tools, mock sdr, SN: totally legit no tricks

Using device 0: sparky mock sdr
Supported gain values (0):
Sampling at 2048000 S/s.

Info: This tool will continuously read from the device, and report if
samples get lost. If you observe no further output, everything is fine.

Reading samples in async mode...
^CSignal caught, exiting!

User cancel, exiting...
Samples per million lost (minimum): 0
$

Outstanding. Even more outstandingly, if I change my testmode implementation to skip samples, rtl_test correctly reports the errors - I think it's showing promise! On to try the real endgame here - let's have our new librtlsdr.so connect to an rtl-tcp endpoint and see if rtl_fm works:

LD_PRELOAD=target/release/librtlsdr.so \
 rtl_fm -d 1 -s 120k -E deemp -M fm -f 90.9M | \
 ffplay -f s16le -ar 120k -i -
Found 2 device(s):
 0: hz.tools, mock sdr, SN: totally legit no tricks
 1: hz.tools, rtl-tcp, SN: node2.rf.lan:1202

Using device 1: sparky rtltcp node2
Tuner gain set to automatic.
Tuned to 91170000 Hz.
Oversampling input by: 9x.
Oversampling output by: 1x.
Buffer size: 7.59ms
Sampling at 1080000 S/s.
Output at 120000 Hz.

And there it was! Not the best audio quality (mostly due to my inability to correctly read the rtl_fm manpage and tune the filter and downsample/oversampling rates for audio), but it's definitely passable. I figured I'd try something a bit more interesting next - gqrx, since it's super handy, I use it a ton, and it would definitely amuse me to no end. To my surprise and delight, LD_PRELOAD=target/release/librtlsdr.so gqrx wound up running, and I saw my devices pop right up in the settings menu:

Huge. Huge. Amazing. It did crash as soon as I tried to actually use the radio, but after fixing a few dangling bugs in the API surface (and some assumptions I think some underlying gnuradio driver may be making that I need to double check in the code), I was able to get a super solid stream of broadcast fm radio, with gqrx being none the wiser. It thought it was "just" talking to the device it knows as rtl=1.

Nice. I can't wait to try this with the rest of the rtl-sdr based tools I like having around using my riq protocol next. I don't think that'll be worth a post, but hopefully I'll get around to publishing details on that stack next.

epilogue

Well. That's it. End of story. A bit anti-climactic, sure. While this new shim will provide me endless minutes of mild amusement, I could see using this to expose my sparky testing utilities via librtlsdr.so - my "mock sdr" driver allows for replaying captures off disk, which could be interesting for making sure that signals are still properly decoded after changes, or for instrumenting performance changes (via SNR, BER, packets observed, etc.) on reference samples I have on my NAS. Maybe that'll come in handy one day!

Truth be told, I'm not sure I actually want to encourage anyone to do this for real (although I think I'll definitely be using it on my LAN to see what happens). I also don't have a repo to share - I don't particularly feel like dealing with the secondary effects of publishing sparky (and sparky-rtlsdr) yet, since I'm still getting my feet under me on the radio aspect of all this.

I'll be sure to post updates if anything changes with this here (tagged sparky) and at @paul@soylent.green. I can't wait to post more about some of the odd sidequests (like this one!) I've completed over the last few years - I've been waiting to feel confident that my work has matured and has withstood the new problems I've thrown at it, and it largely has.

It's my hope that these projects (and this project in particular) have provided a glimpse into the world of software defined radio for my systems friends, and a bit about systems for my radio friends. It's not all magic, and I hope someone out there feels inclined to have some fun with radios themselves!

27 Mar 2026 5:30pm GMT

12 Mar 2026

Planet Lisp

Christoph Breitkopf: Functional Valhalla?

Pointer-rich data layouts lead to suboptimal performance on modern hardware. For an excellent introduction to this, see the article The Road to Valhalla. While it is specifically about Java, many parts of the article also apply to other languages. To summarize some of the key points of the article:

Consider a vector of records (or tuples, structures, product types - I'll stay with "record" in this article). A pointer-rich layout has each record allocated separately in the heap, with a vector containing pointers to the records. For example, given a "Point" record of two numbers:

pikchr diagram

The flat and dense layout has the records directly in the array:

pikchr diagram

(Note that there is another flat layout, namely, using one vector per field of the record. This is better suited to instruction-level parallelism or specialized hardware (e.g., GPUs), especially when the record fields have different sizes. But it is less suited for general-purpose computing, as reading a single vector element requires one memory access per field, whereas the "vector of records" layout above requires only one access per record. Such a layout can be easily implemented in any language that has arrays of native types, whether in the language itself or in a library (e.g., OCaml's Owl library). Thus, in this article, I will only consider the "array of records" layout above.)
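The contrast between the two layouts is easy to make concrete in a language that supports both; here is a sketch in Rust (one of the imperative languages that allows the flat layout, though it is not on this article's list of functional languages; the sizes assume a typical 64-bit target):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    // Pointer-rich: the vector stores 8-byte pointers, and every Point
    // is a separate heap allocation.
    let boxed: Vec<Box<Point>> =
        (0..4).map(|i| Box::new(Point { x: i as f64, y: 0.0 })).collect();

    // Flat and dense: the Points themselves sit contiguously in a
    // single allocation, size_of::<Point>() bytes apart.
    let flat: Vec<Point> =
        (0..4).map(|i| Point { x: i as f64, y: 0.0 }).collect();

    assert_eq!(std::mem::size_of::<Point>(), 16);
    let base = &flat[0] as *const Point as usize;
    let next = &flat[1] as *const Point as usize;
    assert_eq!(next - base, 16); // consecutive records are adjacent
    let _ = boxed;
}
```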

Functional language considerations

Things should be much easier in functional languages than in Java: we have purity, referential transparency, and everything is a value. So it should be simple enough to store these values in memory in their native representation. But there are reasons why that is often not the case in practice:

Many implementations cannot even lay out native types flat in records, so a Point record of IEEE 754 double-precision numbers may actually look like this in memory:

pikchr diagram

The (very short) List

So, given a record type, which functional languages allow a collection of values of that type to have a flat, linear memory layout? The number of programming languages that claim to be "functional" is huge, so the ones listed here are just a selection based on my preferences - mainly languages that allow that layout, plus some I have experience with, where I can speculate on how easy or hard it would be to add that layout as a library or extension.

Since the Point record can be misleading in its simplicity when it comes to the question of whether the functionality could be implemented as a library, I'll point out that there are records where the layout is a bit more interesting:

Pure languages:

Clean

Yes: Clean has unboxed arrays of records in the base language.

Caveat: it does not have integer types of specific sizes and has only one floating-point type, making it harder to reduce memory usage by using the smallest type just large enough to support the required value range. It seems possible to implement such types in a library (the mTask system does that).

Futhark

No. Futhark does not intend to be a general-purpose language, so this is not surprising.

I mention it here because it does have arrays of records, but, since it targets GPUs and related hardware, it uses the "record of arrays" layout mentioned above.

Haskell

Yes. Not in the base language, but there is library support via Data.Vector.Unboxed. Types that implement the Unbox type class can be used in these vectors. Many basic types and tuples have an Unbox instance. However, when you care about efficiency, you probably do not want to use tuples but rather a data type with strict fields, i.e., not:

type Point = (Double, Double)

but:

data Point = Point !Double !Double

Writing an Unbox instance for such a type is not trivial. The vector-th-unbox library makes it easier, but requires Template Haskell. Unboxed vectors are implemented by marshalling the values to byte arrays, so records with pointer fields are not supported.

Impure Languages

F#

Yes, even records with pointer fields. Records have structural equality, and you can use structs or the [<Struct>] attribute to get a flat layout.


And that's all I could find - unless I follow Wikipedia's list of functional programming languages, which contains languages such as C++, C#, Rust, or Swift that allow the flat layout but don't really fit my idea of a functional language. But SML, OCaml, Erlang (Elixir, Gleam), Scala? Not that I could see (but please correct me if I'm wrong).

Rolling your own

Since there is a library implementation for Haskell, maybe that's a possibility for other languages?

You should be able to implement flat layouts in any language that supports byte vectors. More interesting is how well such a library fits into the language, and whether a user of the library has to write code or annotations for user-defined record types, or whether the library can handle part or all of that automagically.
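Here is what the byte-vector approach might look like, sketched in Rust purely for brevity (the type and its stride are my own invention; a Scheme or Lisp version would marshal into a bytevector or specialized array the same way):

```rust
/// A flat "vector of points" backed by a byte buffer - the shape a
/// library might take in a language without native flat records.
struct PointVec {
    bytes: Vec<u8>,
}

impl PointVec {
    const STRIDE: usize = 16; // two f64 fields, 8 bytes each

    fn new(len: usize) -> Self {
        PointVec { bytes: vec![0; len * Self::STRIDE] }
    }

    /// Marshal a point into slot `i` (a copy-in, which is why stored
    /// values lose identity).
    fn set(&mut self, i: usize, x: f64, y: f64) {
        let off = i * Self::STRIDE;
        self.bytes[off..off + 8].copy_from_slice(&x.to_le_bytes());
        self.bytes[off + 8..off + 16].copy_from_slice(&y.to_le_bytes());
    }

    /// Unmarshal a fresh copy back out of slot `i`.
    fn get(&self, i: usize) -> (f64, f64) {
        let off = i * Self::STRIDE;
        let x = f64::from_le_bytes(self.bytes[off..off + 8].try_into().unwrap());
        let y = f64::from_le_bytes(self.bytes[off + 8..off + 16].try_into().unwrap());
        (x, y)
    }
}

fn main() {
    let mut v = PointVec::new(1000);
    v.set(0, 1.5, -2.0);
    assert_eq!(v.get(0), (1.5, -2.0));
}
```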

I'll only mention my beloved Lisp/Scheme here. Lisp's uniform syntax and macro system are a bonus here, but the lack of static typing makes things harder.

In Scheme, R6RS (and R7RS with the help of some SRFIs) has byte-vectors and marshalling to/from them in the standard library. But Scheme does not have type annotations, so you either need to offer a macro to define records with typed fields or to define how to marshal the fields of a regular (sealed) record. Since you can shadow standard procedures in a library, you can write code that looks like regular Scheme code, but, perhaps surprisingly, loses identity when storing/retrieving values from records:

(let ((vec (make-typed-vector 'point 1000))
      (pt (make-point x y)))
  (vector-set! vec 0 pt)
  (eq? (vector-ref vec 0) pt))
;; => #f

(But then, you probably shouldn't be using eq? when doing functional programming in Scheme).

The same approach is possible in Common Lisp. In contrast to Scheme, it does have optional type annotations, and, together with a helper library for accessing the innards of floats and either the meta-object protocol to get type information or (probably better) a macro to define typed records, an implementation should be reasonably straightforward. Making it play nice with inheritance and the dynamic nature of Common Lisp (e.g., adding slots to classes or even changing an object's class at runtime) would be a much harder undertaking.

Conclusion

Of the functional languages I looked at, only F# fully supports flat and dense memory layouts. Among the pure languages, Haskell and Clean come close.

The question is how important this really is. There's a good argument to be made for turning to more specialized languages like Futhark if you mainly care about performance. On the other hand, having a uniform codebase in one language also has advantages.

Then, the performance story has changed, too. While the points Project Valhalla raises remain true in principle, processor designers are aware of this as well. They are doing their best to hide memory latency with techniques such as out-of-order execution or humongous caches. Thus, on a modern CPU, the effects of a pointer-rich layout are often only observable with large working set sizes.

Still, given the plethora of imperative languages that can get you to Valhalla, support for this in the functional landscape seems lacking. In the future, I hope to see more languages or libraries that make this possible.

12 Mar 2026 11:17am GMT

07 Mar 2026

Planet Lisp

Scott L. Burson: FSet v2.3.0: Transients!

FSet v2.3.0 added transients! These make it faster to populate new collections with data, especially as the collections get large. I shamelessly stole the idea from Clojure.

They are currently implemented only for the CHAMP types ch-set, ch-map, ch-2-relation, ch-replay-set, and ch-replay-map.

The term "transient" contrasts with "persistent". I'm using the term "persistent" in its functional-data-structure sense, as Clojure does: a data structure is persistent if multiple states of it can coexist in memory efficiently. (Probably the more familiar use of the term is the database sense, where it refers to nonvolatile storage of data.) FSet collections have, up to now, all been persistent in this sense; a point modification to one, such as by with or less, takes only O(log n) space and time to return a new state of the collection, without disturbing the previous state.

A transient encapsulates the internal tree of a collection so as to guarantee that it holds the only pointer to the tree; this allows modifications to tree nodes to be made in-place, so long as the node has sufficient allocated space. Once the collection is built, the tree is in the same format that existing FSet code expects, and can be accessed and functionally updated as usual.
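This is not FSet's code, but the core trick - mutate a node in place only while holding its sole reference, and copy otherwise - has a compact analogue in Rust's Rc::make_mut, sketched here purely to illustrate the idea:

```rust
use std::rc::Rc;

fn main() {
    let mut a: Rc<Vec<i32>> = Rc::new(vec![1, 2, 3]);

    // Uniquely owned: make_mut mutates in place, no copying.
    let before = Rc::as_ptr(&a);
    Rc::make_mut(&mut a).push(4);
    assert_eq!(Rc::as_ptr(&a), before);

    // Shared: make_mut must copy, so the old state survives for `b` -
    // exactly the persistence guarantee described above.
    let b = Rc::clone(&a);
    Rc::make_mut(&mut a).push(5);
    assert_eq!(*b, vec![1, 2, 3, 4]);
    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
}
```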

Some quick micro-benchmarking suggests that speedups, for constructing a set from scratch, range from 1.6x at size 64 to as much as 2.4x at size 4096.

You don't necessarily even have to use transients explicitly in order to benefit from them. Some FSet builtins such as filter and image use them now. The GMap result types ch-set etc. also use them.

For details, see the GitLab MR.


07 Mar 2026 8:04am GMT

28 Feb 2026

Planet Lisp

Neil Munro: Ningle Tutorial 15: Pagination, Part 2

Contents

Introduction

Welcome back! We will be revisiting the pagination from last time; however, we are going to try to make this easier on ourselves. I built a package for pagination, mito-pager - the idea is that much of what we looked at in the last lesson was very boilerplate and repetitive, so we should look at removing it.

I will say, mito-pager can do a little more than just what I show here; it has two modes. You can use paginate-dao (named this way so that it is familiar to mito users) to paginate over simple models; however, if you need to perform complex queries, there is a macro, with-pager, that you can use to paginate. It is this second form we will use in this tutorial.

There is one thing to bear in mind: when using mito-pager, you must implement your data retrieval functions in such a way that they return multiple values (via values), as mito-pager relies on this to work.

I encourage you to try the library out in other use cases and, of course, if you have ideas, please let me know.

Changes

Most of our changes are quite limited in scope, really it's just our controllers and models that need most of the edits.

ningle-tutorial-project.asd

We need to add the mito-pager package to our project asd file.

- :ningle-auth)
+ :ningle-auth
+ :mito-pager)

src/controllers.lisp

Here is the real payoff! I almost dreaded the sheer volume of the change, but then realised it's so simple: we only need to change our index function, and it may be better to delete it all and write our new simplified version.

(defun index (params)
  (let* ((user (gethash :user ningle:*session*))
         (req-page (or (parse-integer (or (ingle:get-param "page" params) "1") :junk-allowed t) 1))
         (req-limit (or (parse-integer (or (ingle:get-param "limit" params) "50") :junk-allowed t) 50)))
    (flet ((get-posts (limit offset) (ningle-tutorial-project/models:posts user :offset offset :limit limit)))
      (mito-pager:with-pager ((posts pager #'get-posts :page req-page :limit req-limit))
        (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :pager pager)))))

This is much nicer, and in my opinion, the controller should be this simple.

src/main.lisp

We need to ensure we include the templates from mito-pager, this is a simple one line change.

 (defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
    (djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
+   (djula:add-template-directory (asdf:system-relative-pathname :mito-pager "src/templates/"))

src/models.lisp

As mentioned at the top of this tutorial, we have to implement our data retrieval functions in a certain way. While there are some changes here, we ultimately end up with less code.

We can start by removing the count parameter - we won't be needing it in this implementation - and since we don't need the count parameter anymore, the :around method can go too!

- (defgeneric posts (user &key offset limit count)
+ (defgeneric posts (user &key offset limit)
-
- (defmethod posts :around (user &key (offset 0) (limit 50) &allow-other-keys)
-   (let ((count (mito:count-dao 'post))
-         (offset (max 0 offset))
-         (limit (max 1 limit)))
-     (if (and (> count 0) (>= offset count))
-       (let* ((page-count (max 1 (ceiling count limit)))
-              (corrected-offset (* (1- page-count) limit)))
-         (posts user :offset corrected-offset :limit limit))
-       (call-next-method user :offset offset :limit limit :count count))))
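For clarity, here is the simplified generic function written out in full (a sketch; the docstring is my own addition, not from the original code):

```lisp
;; The simplified generic function: no COUNT keyword and no :AROUND
;; method. Each method is now expected to return two values, the
;; posts and the total count, which is the contract WITH-PAGER uses.
(defgeneric posts (user &key offset limit)
  (:documentation "Return posts visible to USER as (values posts total-count)."))
```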

There are two methods to look at. The first is the one specialised on user being of type user:

-
- (defmethod posts ((user user) &key offset limit count)
+ (defmethod posts ((user user) &key offset limit)
...
      (values
-         (mito:retrieve-by-sql sql :binds params)
-         count
-         offset)))
+         (mito:retrieve-by-sql sql :binds params)
+         (mito:count-dao 'post))))

The second is when the type of user is null:

-
- (defmethod posts ((user null) &key offset limit count)
+ (defmethod posts ((user null) &key offset limit)
...
    (values
-       (mito:retrieve-by-sql sql)
-       count
-       offset)))
+       (mito:retrieve-by-sql sql)
+       (mito:count-dao 'post))))
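Putting the pieces together, the resulting method for the null case looks something like the following. This is a hypothetical assembled sketch: the actual SELECT in the tutorial is elided above, so the query shape (the :post table, ordering by :created_at) is an assumption based on earlier instalments.

```lisp
;; Hypothetical assembled version of the NULL method after the edits.
;; The query shape is assumed; MITO:COUNT-DAO supplies the second
;; return value that MITO-PAGER:WITH-PAGER needs.
(defmethod posts ((user null) &key offset limit)
  (let ((sql (sxql:yield
              (sxql:select :*
                (sxql:from :post)
                (sxql:order-by (:desc :created_at))
                (sxql:limit limit)
                (sxql:offset offset)))))
    (values
     (mito:retrieve-by-sql sql)
     (mito:count-dao 'post))))
```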

As you can see, all we are really doing is relying on mito to do the lion's share of the work, right down to the count.
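As a quick sanity check (a hypothetical REPL session; the actual output depends on your data), the simplified methods now return the rows and the count as two values:

```lisp
;; Hypothetical REPL check: POSTS returns two values, the rows and
;; the total count, which is exactly the contract WITH-PAGER consumes.
(multiple-value-bind (rows total)
    (ningle-tutorial-project/models:posts nil :offset 0 :limit 10)
  (format t "fetched ~a of ~a posts~%" (length rows) total))
```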

src/templates/main/index.html

The change here is quite simple: all we need to do is change the path of the included partial so that it points to the one provided by mito-pager.


- {% include "partials/pager.html" with url="/" title="Posts" %}
+ {% include "mito-pager/partials/pager.html" with url="/" title="Posts" %}

src/templates/partials/pagination.html

This one is easy: we can delete it! mito-pager provides its own template, and while you can override it if you wish, in this tutorial we no longer need our own.

Conclusion

I hope you will agree that this time, using a prebuilt package takes a lot of the pain out of pagination. I don't like to dictate what developers should or shouldn't use, which is why last time you were given the same information I had: if you wish to build your own library, you can, or if you want to focus on getting things done, you are more than welcome to use mine. And of course, if you find issues, please do let me know!

Learning Outcomes

- Understand: how third-party pagination libraries like mito-pager abstract boilerplate pagination logic, and how with-pager expects a fetch function returning (values items count) to handle page clamping, offset calculation, and boundary correction automatically.
- Apply: flet to define a local adapter function that bridges the project's posts generic function with mito-pager's expected (lambda (limit offset) ...) interface, and with-pager to reduce controller complexity to its essential logic.
- Analyse: which responsibilities were transferred from the manual pagination implementation to mito-pager (count caching, boundary checking, offset calculation, page correction, and range generation), contrasting the complexity of the two approaches.
- Create: a refactored pagination implementation that uses mito-pager, by simplifying model methods to return (values items count), replacing the multi-step controller calculations with with-pager, and delegating the pagination template partial to the library.

GitHub

Common Lisp HyperSpec

- defpackage (macro): define project packages like ningle-tutorial-project/models, /forms, /controllers. http://www.lispworks.com/documentation/HyperSpec/Body/m_defpac.htm
- in-package (macro): enter each package before defining models, controllers, and functions. http://www.lispworks.com/documentation/HyperSpec/Body/m_in_pkg.htm
- defgeneric (macro): define the simplified generic posts function signature with keyword parameters offset and limit (the count parameter is removed). http://www.lispworks.com/documentation/HyperSpec/Body/m_defgen.htm
- defmethod (macro): implement the simplified posts methods for user and null types (the :around validation method is removed). http://www.lispworks.com/documentation/HyperSpec/Body/m_defmet.htm
- flet (special operator): define the local get-posts adapter function that wraps posts to match mito-pager's expected (lambda (limit offset) ...) interface. http://www.lispworks.com/documentation/HyperSpec/Body/s_flet_.htm
- let* (special operator): sequentially bind user, req-page, and req-limit in the controller, where each value is used in subsequent bindings. http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm
- or (macro): provide fallback values when parsing page and limit parameters, defaulting to 1 and 50 respectively. http://www.lispworks.com/documentation/HyperSpec/Body/m_or.htm
- multiple-value-bind (macro): capture the SQL string and bind parameters returned by sxql:yield in the model methods. http://www.lispworks.com/documentation/HyperSpec/Body/m_multip.htm
- values (function): return two values from posts methods, the list of results and the total count, as required by mito-pager:with-pager. http://www.lispworks.com/documentation/HyperSpec/Body/a_values.htm
- parse-integer (function): convert string query parameters ("1", "50") to integers, with :junk-allowed t for safe parsing. http://www.lispworks.com/documentation/HyperSpec/Body/f_parse_.htm

28 Feb 2026 8:00am GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

26 Jan 2026

feedFOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…

26 Jan 2026 11:00pm GMT