20 Mar 2026
Planet Grep
Paul Cobbaut: lastb on Debian
So there is this post which says:
"Yes, the people who are likely to care are admins with cobwebby
homebrew cronjobs that regularly generate painstakingly formatted
security reports and send them to the fax machine, or whatever."
...I feel extremely personally attacked by this :)
(Replace fax with mail, or whatever.)
I want my Raspberry Pi, freshly upgraded to Debian Trixie, to keep sending this cobwebby report... even if it is only for a couple of years to come... thus:
How to get lastb back on Debian
install dependencies:
apt install build-essential gettext autoconf flex bison libtool autopoint
get the source:
git clone https://github.com/util-linux/util-linux.git
rtfm:
cd util-linux
more Documentation/howto-compilation.txt
compile:
./autogen.sh && ./configure --disable-all-programs --enable-last
make last
install:
cp ./last /usr/bin/last
ln -s /usr/bin/last /usr/bin/lastb
happiness:
# lastb
btmp begins Thu Dec 4 15:50:47 2025
20 Mar 2026 3:32pm GMT
Lionel Dricot: The Social Smolnet

The Social Smolnet
It might have been an email thread. Or a lobste.rs comment. It was a discussion about yet another attempt at a new decentralized social protocol. And we reached the conclusion that with blogs and email, we already had a decentralized social network. We only needed to use it.
This was the last push I needed to implement in Offpunk the social features I had imagined years ago. Share and Reply. Available since Offpunk 3.0.
Share
Are you reading something interesting in Offpunk and want to share it? Well, simply write it:
share
or
share myfriend@example.com
A new mail containing the URL to share will be opened in your email client of choice (as determined by xdg-open). The title will be the title of the page. You only need to add some text to explain why you want to share that page.
Reply
Ever read a blog post and wanted to send feedback or a simple thank you to the author? Simply write:
reply
Reply will try to find a mailto link by exploring the page, root pages and, since 3.1, potential "contact" pages. It sometimes works really well. Often, the mail address is obscured or hidden. That's not a problem. You only need to find it once because Offpunk allows you to save it for the page or the whole online space.
Give an email address as an argument to reply and it will be saved in Offpunk for the page or the whole online space.
If you come across an email address that may be of use in the future but don't want to react now, use "save":
reply save author@example.com
or, if you want to use autodetection:
reply save
Yes, it is enough
It looks like nothing. It looks trivial. But for me, this really transformed Gemini/Gopher and the Small Web into a social network. As I use neomutt+neovim as my mail client, I don't leave my terminal. I simply write "reply", neovim opens, I write "Thank you for this nice post", :wq, and voilà. The mail will be sent during my next synchronization.
Almost as easy as clicking a "like" button but way more personal. Even easier if, like me, you dislike touching a mouse or opening a browser!
Replying to my own post in Neovim
This is the Social SmolNet
In less than two months, I have already used this feature to react to 40 different online spaces, not counting the multiple exchanges I've had with some people.
40 saved reply addresses (41 but the first line is wrongly counted)
I even started using Offpunk as an address book for my blogger friends. Instead of laboriously autocompleting their email addresses, I go to their blog/gemini capsule/gopher hole and write "reply".
The biggest lesson I take is that "social networks" are not about protocols but about how we use the existing infrastructure. Microsoft and Google are working hard to make sure you hate email and hate building a website. But we don't have to obey. We can enjoy writing lightweight HTML and sending quick emails to each other. We have the right to read, write, and have social fun without Javascript and centralized platforms. We have the duty to keep this torch lit.
In the meantime, if you receive from me very short emails reacting to some of your posts, now you know why.
But, of course, feel free not to reply!
About the author
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
20 Mar 2026 3:32pm GMT
Dries Buytaert: Never submit code you don't understand

Years ago, in the early Drupal days, you would see a mantra everywhere: "Don't hack core".
It showed up in issue queues, conference talks, support channels, stickers, and even on T-shirts. It was short and memorable, and it solved a real problem: too many people were modifying Drupal Core instead of extending it properly.
Over time the mantra worked. The ecosystem matured. Not just the software itself, but also the habits and expectations around it. Today you rarely hear people say "Don't hack core".
With AI changing how code gets written, we may need a new mantra.
In Open Source, all code needs to be understood and reviewed before it can be merged. That responsibility belongs to both contributors and maintainers. AI is changing how code gets written, but it does not change that responsibility. In fact, it may make it easier to forget.
Code you don't understand becomes someone else's problem. In Open Source, that someone is often the maintainer reviewing your patch.
Offloading bad code onto maintainers wastes people's time, which is inconsiderate, and slows down reviews for everyone. You also miss the chance to learn from the code and grow as a developer.
It shouldn't matter what tools you use. But if you submit code, you should be able to explain what it does, why it works, and how it interacts with the rest of the code.
Everyone starts somewhere. Even today's top contributors submitted imperfect patches early on. You are welcome here, with or without AI tools. Perfection isn't required, but understanding your code is. Own your code and respect people's time.
Maybe it's time for some new stickers and T-shirts.
Never submit code you don't understand.
Thanks to Natalie Cainaru, Jeremy Andrews and Gábor Hojtsy for reviewing my draft.
20 Mar 2026 3:32pm GMT
Planet Debian
Michael Ablassmeier: virtnbdbackup 2.46 - bitlocker recovery keys
I've released virtnbdbackup 2.46, which now attempts to extract the BitLocker recovery keys during backup. Windows domains need a working QEMU guest agent installed for this to work.
Using the agent, it also extracts the available guestinfo (network config, OS version, etc.) from the domain and stores it alongside the backup.
20 Mar 2026 12:00am GMT
19 Mar 2026
Planet Debian
Otto Kekäläinen: Automated security validation: How 7,000+ tests shaped MariaDB's new AppArmor profile

Linux kernel security modules provide a good additional layer of security around individual programs by restricting what they are allowed to do, and at best block and detect zero-day security vulnerabilities as soon as anyone tries to exploit them, long before they are widely known and reported. However, the challenge is how to create these security profiles without accidentally also blocking legitimate actions. For MariaDB in Debian and Ubuntu, a new AppArmor profile was recently created by leveraging the extensive test suite with 7000+ tests, giving good confidence that AppArmor is unlikely to yield false positive alerts with it.
AppArmor is a Mandatory Access Control (MAC) system, meaning that each process controlled by AppArmor has a sort of an "allowlist" called profile that defines all capabilities and file paths a program can access. If a program tries to do something not covered by the rules in its AppArmor profile, the action will be denied on the Linux kernel level and a warning logged in the system journal. This additional security layer is valuable because even if a malicious user found a security vulnerability some day in the future, the AppArmor profile severely restricts the ability to exploit it and gain access to the operating system.
AppArmor was originally developed by Novell for use in SUSE Linux, but nowadays the main driver is Canonical, and AppArmor is extensively used in Ubuntu and Debian, many of their derivatives (e.g. Linux Mint, Pop!_OS, Zorin OS), and in Arch. AppArmor's benefit compared to the main alternative, SELinux (used mainly in the Red Hat/Fedora ecosystem), is that AppArmor is easier to manage. AppArmor continues to be actively developed, with a new major version, 5.0, expected to arrive soon.
I also have some personal history with AppArmor: I contributed some notification handler scripts in Python, and I created the website that still runs at AppArmor.net.
Regular review of denials in the system log required
Any system administrator using Debian/Ubuntu needs to know how to check for AppArmor denials. The point of using AppArmor is kind of moot if nobody is checking the denials. When AppArmor blocks an action, it logs the event to the system audit or kernel logs. Understanding these logs is crucial for troubleshooting custom configurations or identifying potential security incidents.
To view recent denials, check /var/log/audit/audit.log or run journalctl -ke --grep=apparmor.
A typical denial entry for MariaDB will look like this (split across multiple lines for legibility):
msg=audit(1700000000.123:456): apparmor="DENIED" operation="open"
profile="/usr/sbin/mariadbd" name="/custom/data/path/test.ibd" pid=1234
comm="mariadbd" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0

How to interpret this output:
- msg=audit(…): The audit timestamp and event serial number.
- apparmor="DENIED": Indicates AppArmor blocked the action.
- operation: The action being attempted (e.g., open, mknod, file_mmap, file_perm).
- profile: The specific AppArmor profile that triggered the denial (in this case the /usr/sbin/mariadbd profile).
- name: The file path or resource that was blocked. In the example above, a custom data path was denied access because it wasn't defined in the profile's allowed abstractions.
- comm: The command name that triggered the denial (here mariadbd).
- requested_mask / denied_mask: The permissions requested and denied (e.g., r for read, w for write).
- pid: The process ID.
- fsuid: The user ID of the process attempting the action.
- ouid: The owner user ID of the target file.
If an action seems legitimate and should not be denied, the sysadmin needs to update the existing rules at /etc/apparmor.d/ or drop a local customization file into /etc/apparmor.d/local/. If the denied action looks malicious, the sysadmin should start a security investigation and, if needed, report a suspected zero-day vulnerability to the upstream software vendor (e.g. Ubuntu customers to Canonical, or MariaDB customers to MariaDB).
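As an illustration, a local customization for the denial example above might look like this. This is a sketch only: it assumes the Debian profile file is named mariadbd (the local file must carry the same name and the main profile must include it, as Debian profiles typically do), and it reuses the hypothetical /custom/data/path from the log sample:

```
# /etc/apparmor.d/local/mariadbd
# Allow mariadbd to read and write a custom data directory
# (hypothetical path taken from the denial example above):
/custom/data/path/ r,
/custom/data/path/** rwk,
```

After saving the file, reload the profile with apparmor_parser -r /etc/apparmor.d/mariadbd for the new rules to take effect.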
AppArmor in MariaDB - not a novel thing, and not easy to implement well
Based on old bug reports, there was an AppArmor profile already back in 2011, but it was removed in MariaDB 5.1.56 due to backlash from users running into various issues. A new profile was created in 2015, but kept opt-in only due to the risk of side effects. It likely had very few users and saw minimal maintenance, getting only a handful of updates in the past 10 years.
The primary challenge in using mandatory access control systems with MariaDB lies in the sheer breadth of MariaDB's operational footprint, with diverse storage engines and plugins. Also, the MariaDB code base assumes that system calls to Linux always succeed - which they do under normal circumstances - and does not handle errors well if AppArmor suddenly denies a system call. MariaDB is also a large and complex piece of software to run and operate, and it can be very challenging for system administrators to root-cause a misbehavior in their system to AppArmor blocking a single syscall.
Ironically, the very same reasons make AppArmor especially beneficial for MariaDB. The larger and more complex a piece of software is, the larger the odds of a security vulnerability arising between the various components. An AppArmor profile helps reduce this complexity down to a single access list.
Over the years there have been users requesting to get the AppArmor profile back, such as in Debian Bug#875890 from 2017. The need was raised again recently by the Ubuntu security team during the MariaDB Ubuntu 'main' inclusion review in 2025, which prompted a renewed effort by Debian/Ubuntu developers, mainly myself and Aquila Macedo, with upstream MariaDB assistance from Daniel Black.
A fresh approach: leverage the MariaDB test suite for automated testing and the open source community for reviews
The key to creating a robust AppArmor profile is knowing in detail what the expected and normal behavior of the system is. One could in theory read all of the source code in MariaDB, but with over two million lines, that is of course not feasible in practice. However, MariaDB does have a very extensive test suite with 7000+ tests, and running it should trigger most code paths in MariaDB. Utilizing the test suite was key in creating the new AppArmor profile for MariaDB: we installed MariaDB on an Ubuntu system, enabled AppArmor in complain mode, and iterated on the allowlist by running the full mariadb-test-run with all MariaDB plugins and features enabled until we had a comprehensive yet clean list of rules.
To be extra diligent, we also reworked the autopkgtest for MariaDB in Debian and Ubuntu CI systems to run with the AppArmor profile enabled and to print all AppArmor notices at the end of the run, making it easy to detect now and in the future if the MariaDB test suite triggers any AppArmor denials. If any test fails, the release would not get promoted further, protecting users from regressions.
While developing and triggering manual test runs, we used the maximal achievable test suite with 7177 tests. The suite is, however, so extensive that it takes over two hours to run, and it also has some brittle tests, so the standard test run in Debian and Ubuntu autopkgtest is limited to just MariaDB's main suite with about 1000 tests. Having some tests fail while testing the AppArmor profile was not a problem, because we didn't need all the tests to pass - we merely needed them to exercise as many code paths as possible, to see if they issue any system calls not accounted for in the AppArmor profile.
Note that extending the profile was not just mechanical copying of log messages to the profile. For example, even though a couple of tests involve running the dash shell, we decided to not allow it, as it opens too much of a path for a potential exploit to access the operating system.
The result of this effort is a modernized, robust profile that is now production-ready. Those interested in the exact technical details can read the Debian Bug#1130272 and the Merge Request discussions at salsa.debian.org, which hosts the Debian packaging source code.
Now available in Debian unstable, soon Ubuntu - feedback welcome!
Even though the file is just 200 lines long, the work to craft it spanned several weeks. To minimize risk, we also did a gradual rollout by releasing the first new profile version in complain mode, so AppArmor only logged would-be denials without blocking anything. The AppArmor profile was switched to enforce mode only in the very latest MariaDB revision 1:11.8.6-4 in Debian, and a NEWS item was issued to help increase user awareness of this change. It is also slated for the upcoming Ubuntu 26.04 "Resolute Raccoon" release next month, providing out-of-the-box hardening for the wider ecosystem.
While automated testing is extensive, it cannot simulate everything. Most notably various complicated replication topologies and all Galera setups are likely not covered. Thus, I am calling on the community to deploy this profile and monitor for any audit denials in the kernel logs. If you encounter unexpected behavior or legitimate denials, please submit a bug report via the Debian Bug Tracking System.
To ensure you are running the latest MariaDB version, run apt install --update --yes mariadb-server. To view the latest profile rules, run cat /etc/apparmor.d/mariadbd and to see if it is enforced review the output of aa-status. To quickly check if there were any AppArmor denials, simply run journalctl -k | grep -i apparmor | grep -i mariadb.
Systemd hardening also adopted as security features keep evolving
For those interested in MariaDB security hardening, note that new systemd hardening options were also rolled out in Debian/Ubuntu recently. Debian and Ubuntu are mainly volunteer-driven open source developer communities, and if you find this topic interesting and think you have the necessary skills, feel free to submit your improvement ideas as Merge Requests at salsa.debian.org/mariadb-team. If your improvement suggestions are not Debian/Ubuntu specific, please submit them directly to upstream at GitHub.com/MariaDB.
19 Mar 2026 12:00am GMT
18 Mar 2026
Planet Debian
Dirk Eddelbuettel: tidyCpp 0.0.11 on CRAN: Microfix

And yet another maintenance release of the tidyCpp package arrived on CRAN this morning, just days after the previous release, which itself came a mere week and a half after its predecessor. It has been built for r2u as well. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R, which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.
This release restores the small CSS file used by the vignette which we, in a last-second decision, omitted from the previous release. Oddly, it only failed under 'oldrel', i.e. the R release from nearly two years ago. But it was still an unforced error, and this upload corrects it.
Changes are summarized in the NEWS entry that follows.
Changes in tidyCpp version 0.0.11 (2026-03-17)
- Keep a CSS file in the package to allow vignette build on r-oldrel too
Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
18 Mar 2026 8:49pm GMT
12 Mar 2026
Planet Lisp
Christoph Breitkopf: Functional Valhalla?
Pointer-rich data layouts lead to suboptimal performance on modern hardware. For an excellent introduction to this, see the article The Road to Valhalla. While it is specifically about Java, many parts of the article also apply to other languages. To summarize some of the key points of the article:
- In 1990, a main memory fetch was about as expensive as an arithmetic operation. Now, it might be a hundred times slower.
- A pointer-rich data layout involving indirections between data at different locations is not ideal for today's hardware.
- A language should make flat (cache-efficient) and dense (memory-efficient) memory layouts possible without compromising abstraction or type safety.
Consider a vector of records (or tuples, structures, product types - I'll stay with "record" in this article). A pointer-rich layout has each record allocated separately in the heap, with a vector containing pointers to the records - for example, a vector of pointers to separately heap-allocated "Point" records of two numbers each. The flat and dense layout instead stores the records directly in the array, one after another.
(Note that there is another flat layout, namely, using one vector per field of the record. This is better suited to instruction-level parallelism or specialized hardware (e.g., GPUs), especially when the record fields have different sizes. But it is less suited for general-purpose computing, as reading a single vector element requires one memory access per field, whereas the "vector of records" layout above requires only one access per record. Such a layout can be easily implemented in any language that has arrays of native types, whether in the language itself or in a library (e.g., OCaml's Owl library). Thus, in this article, I will only consider the "array of records" layout above.)
Functional language considerations
Things should be much easier in functional languages than in Java: we have purity, referential transparency, and everything is a value. So it should be simple enough to store these values in memory in their native representation. But there are reasons that this is often not the case in practice:
- Laziness: a value can be a computation that produces a value only when needed.
- Layout polymorphism: unless we replicate the code for every type (as, for example, Rust does), we need to be able to store every possible value in the same kind of slot.
- Dynamically typed languages require type information at runtime.
- Functional languages often have automatic memory management, which may require runtime type information.
- Many of our languages are not purely functional, but contain impure features.
- Pure languages often lack traditional vectors or arrays, since making them perform well in immutable code is not easy.
- Historical reasons: Graph reduction was a common implementation technique for lazy languages, and graphs involve pointers.
- Implementation restrictions: since these languages are not mainstream, fewer resources are devoted to implementation and optimization.
Many implementations cannot even lay out native types flat in records, so a Point record of IEEE 754 double-precision numbers may actually be stored as two pointers to separately heap-allocated (boxed) doubles.
The (very short) List
So, given a record type, which functional languages allow a collection of values of that type to have a flat, linear memory layout? The number of programming languages that claim to be "functional" is huge, so the ones listed here are just a selection based on my preferences - mainly languages that allow that layout, plus some I have experience with and can speculate on how easy or hard it would be to add such a layout as a library or extension.
Since the Point record can be misleading in its simplicity when it comes to the question of whether the functionality could be implemented as a library, I'll point out that there are records where the layout is a bit more interesting:
- Records containing different types with different storage sizes, for example, one 64-bit float and one 32-bit integer. On most architectures, this will require 4 bytes of padding between elements.
- Records containing native values along with something that has to be represented as a pointer, for example, a reference-type or a lazy value. In a flat layout, this means that every nth element will be a pointer, requiring special support from the memory management system, either by providing layout information or by using a conservative GC that treats everything as a potential pointer.
Pure languages:
Clean
Yes: Clean has unboxed arrays of records in the base language.
Caveat: it does not have integer types of specific sizes and only one floating-point type, making it harder to reduce memory usage by using the smallest type just large enough to support the required value range. It seems possible to implement such types in a library (the mTask system does that).
Futhark
No. Futhark does not intend to be a general-purpose language, so this is not surprising.
I mention it here because it does have arrays of records, but, since it targets GPUs and related hardware, it uses the "record of arrays" layout mentioned above.
Haskell
Yes. Not in the base language, but there is library support via Data.Vector.Unboxed. Types that implement the Unbox type class can be used in these vectors. Many basic types and tuples have an Unbox instance. However, when you care about efficiency, you probably do not want to use tuples but rather a data type with strict fields, i.e., not:

type Point = (Double, Double)

but:

data Point = Point !Double !Double
Writing an Unbox instance for such a type is not trivial. The vector-th-unbox library makes it easier, but requires Template Haskell. Unboxed vectors are implemented by marshalling the values to byte arrays, so records with pointer fields are not supported.
Impure Languages
F#
Yes, even records with pointer fields. Records have structural equality, and you can use structs or the [<Struct>] attribute to get a flat layout.
And that's all I could find - unless I follow Wikipedia's list of functional programming languages, which contains languages such as C++, C#, Rust, or Swift that allow the flat layout but don't really fit my idea of a functional language. But SML, OCaml, Erlang (Elixir, Gleam), Scala? Not that I could see (but please correct me if I'm wrong).
Rolling your own
Since there is a library implementation for Haskell, maybe that's a possibility for other languages?
You should be able to implement flat layouts in any language that supports byte vectors. More interesting is how well such a library fits into the language, and whether a user of the library has to write code or annotations for user-defined record types, or whether the library can handle part or all of that automagically.
I'll only mention my beloved Lisp/Scheme here. Lisp's uniform syntax and macro system are a bonus here, but the lack of static typing makes things harder.
In Scheme, R6RS (and R7RS with the help of some SRFIs) has byte-vectors and marshalling to/from them in the standard library. But Scheme does not have type annotations, so you either need to offer a macro to define records with typed fields or to define how to marshal the fields of a regular (sealed) record. Since you can shadow standard procedures in a library, you can write code that looks like regular Scheme code, but, perhaps surprisingly, loses identity when storing/retrieving values from records:
(let ((vec (make-typed-vector 'point 1000))
      (pt (make-point x y)))
  (vector-set! vec 0 pt)
  (eq? (vector-ref vec 0) pt))
⇒ #f

(But then, you probably shouldn't be using eq? when doing functional programming in Scheme.)
The same approach is possible in Common Lisp. In contrast to Scheme, it does have optional type annotations, and, together with a helper library for accessing the innards of floats and either the meta-object protocol to get type information or (probably better) a macro to define typed records, an implementation should be reasonably straightforward. Making it play nice with inheritance and the dynamic nature of Common Lisp (e.g., adding slots to classes or even changing an object's class at runtime) would be a much harder undertaking.
Conclusion
Of the functional languages I looked at, only F# fully supports flat and dense memory layouts. Among the pure languages, Haskell and Clean come close.
The question is how important this really is. There's a good argument to be made for turning to more specialized languages like Futhark if you mainly care about performance. On the other hand, having a uniform codebase in one language also has advantages.
Then, the performance story has changed, too. While the points Project Valhalla raises remain true in principle, processor designers are aware of this as well. They are doing their best to hide memory latency with techniques such as out-of-order execution or humongous caches. Thus, on a modern CPU, the effects of a pointer-rich layout are often only observable with large working set sizes.
Still, given the plethora of imperative languages that can get you to Valhalla, support for this in the functional landscape seems lacking. In the future, I hope to see more languages or libraries that make this possible.
12 Mar 2026 11:17am GMT
07 Mar 2026
Planet Lisp
Scott L. Burson: FSet v2.3.0: Transients!
FSet v2.3.0 added transients! These make it faster to populate new collections with data, especially as the collections get large. I shamelessly stole the idea from Clojure.
They are currently implemented only for the CHAMP types ch-set, ch-map, ch-2-relation, ch-replay-set, and ch-replay-map.
The term "transient" contrasts with "persistent". I'm using the term "persistent" in its functional-data-structure sense, as Clojure does: a data structure is persistent if multiple states of it can coexist in memory efficiently. (The probably more familiar use of the term is in the database sense, where it refers to nonvolatile storage of data.) FSet collections have, up to now, all been persistent in this sense; a point modification to one, such as by with or less, takes only O(log n) space and time to return a new state of the collection, without disturbing the previous state.
A transient encapsulates the internal tree of a collection so as to guarantee that it holds the only pointer to the tree; this allows modifications to tree nodes to be made in-place, so long as the node has sufficient allocated space. Once the collection is built, the tree is in the same format that existing FSet code expects, and can be accessed and functionally updated as usual.
Some quick micro-benchmarking suggests that speedups, for constructing a set from scratch, range from 1.6x at size 64 to as much as 2.4x at size 4096.
You don't necessarily even have to use transients explicitly in order to benefit from them. Some FSet builtins such as filter and image use them now. The GMap result types ch-set etc. also use them.
For details, see the GitLab MR.
07 Mar 2026 8:04am GMT
28 Feb 2026
Planet Lisp
Neil Munro: Ningle Tutorial 15: Pagination, Part 2
Contents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuation Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
- Part 12 (Clean Up & Bug Fix)
- Part 13 (Adding Comments)
- Part 14 (Pagination, Part 1)
- Part 15 (Pagination, Part 2)
Introduction
Welcome back! We will be revisiting the pagination from last time; however, we are going to try to make this easier on ourselves. I built a package for pagination, mito-pager; the idea is that much of what we looked at in the last lesson was boilerplate and repetitive, so we should look at removing it.
I will say, mito-pager can do a little more than just what I show here; it has two modes. You can use paginate-dao (named this way so that it is familiar to mito users) to paginate over simple models; however, if you need to perform complex queries, there is a macro, with-pager, that you can use to paginate. It is this second form we will use in this tutorial.
There is one thing to bear in mind: when using mito-pager, you must implement your data retrieval functions in such a way that they return multiple values, as mito-pager relies on this to work.
I encourage you to try the library out in other use cases and, of course, if you have ideas, please let me know.
Changes
Most of our changes are quite limited in scope, really it's just our controllers and models that need most of the edits.
ningle-tutorial-project.asd
We need to add the mito-pager package to our project asd file.
- :ningle-auth)
+ :ningle-auth
+ :mito-pager)
src/controllers.lisp
Here is the real payoff! I almost dreaded writing up the sheer volume of the change, but then realised it's quite simple: we only need to change our index function, and it may be easiest to delete it all and write our new simplified version.
(defun index (params)
  (let* ((user (gethash :user ningle:*session*))
         (req-page (or (parse-integer (or (ingle:get-param "page" params) "1") :junk-allowed t) 1))
         (req-limit (or (parse-integer (or (ingle:get-param "limit" params) "50") :junk-allowed t) 50)))
    (flet ((get-posts (limit offset) (ningle-tutorial-project/models:posts user :offset offset :limit limit)))
      (mito-pager:with-pager ((posts pager #'get-posts :page req-page :limit req-limit))
        (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :pager pager)))))
This is much nicer, and in my opinion, the controller should be this simple.
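Two small patterns in this controller are worth isolating. First, `parse-integer` with `:junk-allowed t` returns `nil` on garbage input rather than signalling an error, so wrapping it in `or` gives us safe defaults for the page and limit query parameters. A standalone sketch of that idea, using a hypothetical helper of my own (not part of mito-pager or the tutorial code):

```lisp
;; Safe query-parameter parsing: junk or missing input falls back to a
;; default instead of signalling an error.
;; `parse-page-param` is a hypothetical helper for illustration.
(defun parse-page-param (value default)
  "Parse VALUE as an integer, returning DEFAULT if VALUE is NIL or junk."
  (or (and value (parse-integer value :junk-allowed t))
      default))
```

Second, the `flet` simply builds a two-argument `(limit offset)` adapter around our richer `posts` generic function, which is the calling shape `with-pager` wants for its fetch function.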
src/main.lisp
We need to ensure we include the templates from mito-pager; this is a simple one-line change.
(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
(djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
+ (djula:add-template-directory (asdf:system-relative-pathname :mito-pager "src/templates/"))
src/models.lisp
As mentioned at the top of this tutorial, we have to implement our data retrieval functions in a certain way. While there are some changes here, we ultimately end up with less code.
We can start by removing the count parameter, as we won't be needing it in this implementation; and since count is gone, the :around method can go too!
- (defgeneric posts (user &key offset limit count)
+ (defgeneric posts (user &key offset limit)
-
- (defmethod posts :around (user &key (offset 0) (limit 50) &allow-other-keys)
- (let ((count (mito:count-dao 'post))
- (offset (max 0 offset))
- (limit (max 1 limit)))
- (if (and (> count 0) (>= offset count))
- (let* ((page-count (max 1 (ceiling count limit)))
- (corrected-offset (* (1- page-count) limit)))
- (posts user :offset corrected-offset :limit limit))
- (call-next-method user :offset offset :limit limit :count count))))
There are two methods to look at; the first is for when the type of user is user:
-
- (defmethod posts ((user user) &key offset limit count)
+ (defmethod posts ((user user) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql :binds params)
- count
- offset)))
+ (mito:retrieve-by-sql sql :binds params)
+ (mito:count-dao 'post))))
The second is when the type of user is null:
-
- (defmethod posts ((user null) &key offset limit count)
+ (defmethod posts ((user null) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql)
- count
- offset)))
+ (mito:retrieve-by-sql sql)
+ (mito:count-dao 'post))))
As you can see, all we are really doing is relying on mito to do the lion's share of the work, right down to the count.
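If you want to sanity-check that a simplified method honours the two-values contract, `multiple-value-bind` makes the test direct. A hedged sketch, using a stub in place of the real database-backed method (`posts-stub` and its canned data are mine; the tutorial's real methods go through mito):

```lisp
;; Checking the (values items count) contract at the REPL.
;; `posts-stub` is a stand-in returning canned data, mirroring the shape
;; of the tutorial's simplified methods; the real ones query via mito.
(defun posts-stub (&key (offset 0) (limit 50))
  (declare (ignore offset limit))
  (values '(:post-1 :post-2) 2))

(multiple-value-bind (items total) (posts-stub :offset 0 :limit 50)
  (assert (listp items))     ; first value: the page of results
  (assert (integerp total))) ; second value: the total row count
```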
src/templates/main/index.html
The change here is quite simple: all we need to do is change the include path so that it points to the partial provided by mito-pager.
- {% include "partials/pager.html" with url="/" title="Posts" %}
+ {% include "mito-pager/partials/pager.html" with url="/" title="Posts" %}
src/templates/partials/pagination.html
This one is easy: we can delete it! mito-pager provides its own template, and while you can override it (if you so wish), in this tutorial we do not need it anymore.
Conclusion
I hope you will agree that, this time, using a prebuilt package takes a lot of the pain out of pagination. I don't like to dictate what developers should or shouldn't use; that's why, last time, you were given the same information I had. If you wish to build your own library, you can; if you want to focus on getting things done, you are more than welcome to use mine. And of course, if you find issues, please do let me know!
Learning Outcomes
| Level | Learning Outcome |
|---|---|
| Understand | Understand how third-party pagination libraries like mito-pager abstract boilerplate pagination logic, and how with-pager expects a fetch function returning (values items count) to handle page clamping, offset calculation, and boundary correction automatically. |
| Apply | Apply flet to define a local adapter function that bridges the project's posts generic function with mito-pager's expected (lambda (limit offset) ...) interface, and use with-pager to reduce controller complexity to its essential logic. |
| Analyse | Analyse what responsibilities were transferred from the manual pagination implementation to mito-pager - count caching, boundary checking, offset calculation, page correction, and range generation - contrasting the complexity of both approaches. |
| Create | Refactor a manual pagination implementation to use mito-pager by simplifying model methods to return (values items count), replacing complex multi-step controller calculations with with-pager, and delegating the pagination template partial to the library. |
GitHub
- The link for the custom pagination part of the tutorial's code is available here.
Common Lisp HyperSpec
| Symbol | Type | Why it appears in this lesson | CLHS |
|---|---|---|---|
| defpackage | Macro | Define project packages like ningle-tutorial-project/models, /forms, /controllers. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpac.htm |
| in-package | Macro | Enter each package before defining models, controllers, and functions. | http://www.lispworks.com/documentation/HyperSpec/Body/m_in_pkg.htm |
| defgeneric | Macro | Define the simplified generic posts function signature with keyword parameters offset and limit (the count parameter is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defgen.htm |
| defmethod | Macro | Implement the simplified posts methods for user and null types (the :around validation method is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defmet.htm |
| flet | Special Operator | Define the local get-posts adapter function that wraps posts to match mito-pager's expected (lambda (limit offset) ...) interface. | http://www.lispworks.com/documentation/HyperSpec/Body/s_flet_.htm |
| let* | Special Operator | Sequentially bind user, req-page, and req-limit in the controller where each value is used in subsequent bindings. | http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm |
| or | Macro | Provide fallback values when parsing page and limit parameters, defaulting to 1 and 50 respectively. | http://www.lispworks.com/documentation/HyperSpec/Body/m_or.htm |
| multiple-value-bind | Macro | Capture the SQL string and bind parameters returned by sxql:yield in the model methods. | http://www.lispworks.com/documentation/HyperSpec/Body/m_multip.htm |
| values | Function | Return two values from posts methods (the list of results and the total count) as required by mito-pager:with-pager. | http://www.lispworks.com/documentation/HyperSpec/Body/a_values.htm |
| parse-integer | Function | Convert string query parameters ("1", "50") to integers, with :junk-allowed t for safe parsing. | http://www.lispworks.com/documentation/HyperSpec/Body/f_parse_.htm |
28 Feb 2026 8:00am GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Guided sightseeing tours
If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.
26 Jan 2026 11:00pm GMT
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT