18 Mar 2026
Planet Grep
Frederic Descamps: Where does the Community run most MariaDB in production – results from the poll
We recently asked the MariaDB community a simple question: Where do you run MariaDB most in production? The responses give a useful snapshot of how MariaDB is deployed today across our community. The big takeaway: MariaDB remains strongly infrastructure-aware. The clearest signal from this poll is that MariaDB is still most commonly run in environments […]
18 Mar 2026 1:09am GMT
Frederic Descamps: Improving MariaDB Observability with OpenSearch and Grafana
When dealing with queries in MariaDB, there are several approaches, such as the general query log, the slow query log, and the performance_schema. The general query log is not recommended as it doesn't contain much valuable information and can use a lot of resources when writing to the file on busy systems. The slow query […]
18 Mar 2026 1:09am GMT
Dries Buytaert: Never submit code you don't understand

Years ago, in the early Drupal days, you would see a mantra everywhere: "Don't hack core".
It showed up in issue queues, conference talks, support channels, stickers, and even on T-shirts. It was short and memorable, and it solved a real problem: too many people were modifying Drupal Core instead of extending it properly.
Over time the mantra worked. The ecosystem matured. Not just the software itself, but also the habits and expectations around it. Today you rarely hear people say "Don't hack core".
With AI changing how code gets written, we may need a new mantra.
In Open Source, all code needs to be understood and reviewed before it can be merged. That responsibility belongs to both contributors and maintainers. AI is changing how code gets written, but it does not change that responsibility. In fact, it may make it easier to forget.
Code you don't understand becomes someone else's problem. In Open Source, that someone is often the maintainer reviewing your patch.
Offloading bad code onto maintainers slows down reviews for everyone. Plus, you miss the chance to learn from the code and grow as a developer.
It shouldn't matter what tools you use. But if you submit code, you should be able to explain what it does, why it works, and how it interacts with the rest of the code.
Everyone starts somewhere. Even today's top contributors submitted imperfect patches early on. You are welcome here, with or without AI tools. Perfection isn't required, but understanding your code is. Own your code.
Maybe it's time for some new stickers and T-shirts.
Never submit code you don't understand.
Thanks to Natalie Cainaru, Jeremy Andrews and Gábor Hojtsy for reviewing my draft.
18 Mar 2026 1:09am GMT
17 Mar 2026
Planet Debian
Dirk Eddelbuettel: RcppArmadillo 15.2.4-1 on CRAN: Upstream Update


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1235 other packages on CRAN, downloaded 44.9 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 672 times according to Google Scholar.
This version updates to the 15.2.4 upstream Armadillo release from yesterday. The package has already been updated for Debian, and for r2u. This release, which we as usual checked against the reverse-dependencies, brings minor changes over the RcppArmadillo release 15.2.3 made in December (and described here) by addressing some corner-case ASAN/UBSAN reports (which Conrad, true to his style, of course labels as 'false positives', much as he initially responded that he would 'never' add a fix based on such a report; as always, it is best to just watch what he does, as he is rather good at it and, written comments notwithstanding, quite responsive) as well as speed-ups for empty sparse matrices. I made one more follow-up refinement to the OpenMP setup, which should now 'just work' on all suitable platforms.
The detailed changes since the last release follow.
Changes in RcppArmadillo version 15.2.4-1 (2026-03-17)
Upgraded to Armadillo release 15.2.4 (Medium Roast Deluxe)
Workarounds for bugs in GCC and Clang sanitisers (ASAN false positives)
Faster handling of blank sparse matrices
Refined OpenMP setup (Dirk in #500)
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
17 Mar 2026 7:26pm GMT
16 Mar 2026
Planet Debian
Dirk Eddelbuettel: RcppClassicExamples 0.1.4 on CRAN: Maintenance

Another minor maintenance release, version 0.1.4, of package RcppClassicExamples arrived earlier today on CRAN, and has been built for r2u. This package illustrates usage of the old and otherwise deprecated initial Rcpp API, which no new projects should use, as the normal and current Rcpp API is so much better.
This release, the first in two and a half years, mostly aids Rcpp in moving from Rf_error() to Rcpp::stop() for better behaviour under error conditions or exceptions. A few other things were updated in the interim, such as a standard upgrade to continuous integration, use of Authors@R, and a switch to static linking and an improved build to support multiple macOS architectures.
No new code or features. Full details below. And as a reminder, don't use the old RcppClassic - use Rcpp instead.
Changes in version 0.1.4 (2026-03-16)
Continuous integration has been updated several times
DESCRIPTION now uses Authors@R
Static linking is enforced, RcppClassic (>= 0.9.14) required
Calls to Rf_error() have been replaced with Rcpp::stop()
Updated versioned dependencies
Thanks to CRANberries, you can also look at a diff to the previous release.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
16 Mar 2026 9:20pm GMT
Jonathan Dowland: My Prusa Mini+ is broken

Oh dear! I've been suffering print reliability issues on my Prusa Mini+ for quite a while, roughly since they introduced Input Shaping (although that might not be the culprit). Whilst trying different things to resolve it, I managed to shear off the brass nozzle within the heatblock. I now have half the nozzle stuck in the ratchet spanner, and half in the heatblock.
What to do next?
I can try and get the nozzle out of the heatblock, by screwing something into it or using an extraction screw. I've been warned this could be messy and dangerous. Less risky might be to change out the whole heatblock. They don't seem to be expensive.
Back at FOSDEM I asked the Prusa folks what cool projects I could do with the Mini+… they looked a little blank (I think the Mini+ is now a somewhat forgotten product) but they did say somebody had managed to port over the "Nextruder" from the more recent Prusa XL/MK4. I could take a look at that.
Another thing I've always wanted to explore (although I had intended it to be temporary/reversible) was converting it into a plotter, for plotter art.
Somehow this is my first 3D-printing blog post in over a year. The printables.com feed I linked to is still going, I'm happy to report (as is the one I wrote but didn't publish, slightly more surprisingly).
16 Mar 2026 8:45pm GMT
12 Mar 2026
Planet Lisp
Christoph Breitkopf: Functional Valhalla?
Pointer-rich data layouts lead to suboptimal performance on modern hardware. For an excellent introduction to this, see the article The Road to Valhalla. While it is specifically about Java, many parts of the article also apply to other languages. To summarize some of the key points of the article:
- In 1990, a main memory fetch was about as expensive as an arithmetic operation. Now, it might be a hundred times slower.
- A pointer-rich data layout involving indirections between data at different locations is not ideal for today's hardware.
- A language should make flat (cache-efficient) and dense (memory-efficient) memory layouts possible without compromising abstraction or type safety.
Consider a vector of records (or tuples, structures, product types - I'll stay with "record" in this article). A pointer-rich layout has each record allocated separately in the heap, with a vector containing pointers to the records: for a "Point" record of two numbers, the vector holds one pointer per element, each pointing to a separately allocated pair of fields.
The flat and dense layout instead stores the records directly in the array, one after another.
(Note that there is another flat layout, namely, using one vector per field of the record. This is better suited to instruction-level parallelism or specialized hardware (e.g., GPUs), especially when the record fields have different sizes. But it is less suited for general-purpose computing, as reading a single vector element requires one memory access per field, whereas the "vector of records" layout above requires only one access per record. Such a layout can be easily implemented in any language that has arrays of native types, whether in the language itself or in a library (e.g., OCaml's Owl library). Thus, in this article, I will only consider the "array of records" layout above.)
Functional language considerations
Things should be much easier in functional languages than in Java: we have purity, referential transparency, and everything is a value. So it should be simple enough to store these values in memory in their native representation. But there are reasons why this is often not the case in practice:
- Laziness: a value can be a computation that produces a value only when needed.
- Layout polymorphism: unless we replicate the code for every type (as, for example, Rust does), we need to be able to store every possible value in the same kind of slot.
- Dynamically typed languages require type information at runtime.
- Functional languages often have automatic memory management, which may require runtime type information.
- Many of our languages are not purely functional, but contain impure features.
- Pure languages often lack traditional vectors or arrays, since making them perform well in immutable code is not easy.
- Historical reasons: Graph reduction was a common implementation technique for lazy languages, and graphs involve pointers.
- Implementation restrictions: since these languages are not mainstream, fewer resources are devoted to implementation and optimization.
Many implementations cannot even lay out native types flat in records, so a Point record of IEEE 754 double-precision numbers may actually be stored as two pointers, each referring to a separately heap-allocated, boxed double.
The (very short) List
So, given a record type, which functional languages allow a collection of values of that type to have a flat, linear memory layout? The number of programming languages that claim to be "functional" is huge, so the ones listed here are just a selection based on my preferences - mainly languages that allow that layout, and some I have some experience with and can speculate on how easy or hard it would be to add that as a library or extension.
Since the Point record can be misleading in its simplicity when it comes to the question of whether the functionality could be implemented as a library, I'll point out that there are records where the layout is a bit more interesting:
- Records containing different types with different storage sizes, for example, one 64-bit float and one 32-bit integer. On most architectures, this will require 4 bytes of padding between elements.
- Records containing native values along with something that has to be represented as a pointer, for example, a reference-type or a lazy value. In a flat layout, this means that every nth element will be a pointer, requiring special support from the memory management system, either by providing layout information or by using a conservative GC that treats everything as a potential pointer.
Pure languages:
Clean
Yes: Clean has unboxed arrays of records in the base language.
Caveat: it does not have integer types of specific sizes and only one floating-point type, making it harder to reduce memory usage by using the smallest type just large enough to support the required value range. It seems possible to implement such types in a library (the mTask system does that).
Futhark
No. Futhark does not intend to be a general-purpose language, so this is not surprising.
I mention it here because it does have arrays of records, but, since it targets GPUs and related hardware, it uses the "record of arrays" layout mentioned above.
Haskell
Yes. Not in the base language, but there is library support via Data.Vector.Unboxed. Types that implement the Unbox type class can be used in these vectors. Many basic types and tuples have an Unbox instance. However, when you care about efficiency, you probably do not want to use tuples but rather a data type with strict fields, i.e., not:

type Point = (Double, Double)

but:

data Point = Point !Double !Double
Writing an Unbox instance for such a type is not trivial. The vector-th-unbox library makes it easier, but requires Template Haskell. Unboxed vectors are implemented by marshalling the values to byte arrays, so records with pointer fields are not supported.
Impure Languages
F#
Yes, even records with pointer fields. Records have structural equality, and you can use structs or the [<Struct>] attribute to get a flat layout.
And that's all I could find - unless I follow Wikipedia's list of functional programming languages, which contains languages such as C++, C#, Rust, or Swift that allow the flat layout but don't really fit my idea of a functional language. But SML, OCaml, Erlang (Elixir, Gleam), Scala? Not that I could see (but please correct me if I'm wrong).
Rolling your own
Since there is a library implementation for Haskell, maybe that's a possibility for other languages?
You should be able to implement flat layouts in any language that supports byte vectors. More interesting is how well such a library fits into the language, and whether a user of the library has to write code or annotations for user-defined record types, or whether the library can handle part or all of that automagically.
I'll only mention my beloved Lisp/Scheme here. Lisp's uniform syntax and macro system are a bonus here, but the lack of static typing makes things harder.
In Scheme, R6RS (and R7RS with the help of some SRFIs) has byte-vectors and marshalling to/from them in the standard library. But Scheme does not have type annotations, so you either need to offer a macro to define records with typed fields or to define how to marshal the fields of a regular (sealed) record. Since you can shadow standard procedures in a library, you can write code that looks like regular Scheme code, but, perhaps surprisingly, loses identity when storing/retrieving values from records:
(let ((vec (make-typed-vector 'point 1000))
      (pt (make-point x y)))
  (vector-set! vec 0 pt)
  (eq? (vector-ref vec 0) pt))
⇒ #f
(But then, you probably shouldn't be using eq? when doing functional programming in Scheme.)
The same approach is possible in Common Lisp. In contrast to Scheme, it does have optional type annotations, and, together with a helper library for accessing the innards of floats and either the meta-object protocol to get type information or (probably better) a macro to define typed records, an implementation should be reasonably straightforward. Making it play nice with inheritance and the dynamic nature of Common Lisp (e.g., adding slots to classes or even changing an object's class at runtime) would be a much harder undertaking.
Conclusion
Of the functional languages I looked at, only F# fully supports flat and dense memory layouts. Among the pure languages, Haskell and Clean come close.
The question is how important this really is. There's a good argument to be made for turning to more specialized languages like Futhark if you mainly care about performance. On the other hand, having a uniform codebase in one language also has advantages.
Then, the performance story has changed, too. While the points Project Valhalla raises remain true in principle, processor designers are aware of this as well. They are doing their best to hide memory latency with techniques such as out-of-order execution or humongous caches. Thus, on a modern CPU, the effects of a pointer-rich layout are often only observable with large working set sizes.
Still, given the plethora of imperative languages that can get you to Valhalla, support for this in the functional landscape seems lacking. In the future, I hope to see more languages or libraries that will make this possible.
12 Mar 2026 11:17am GMT
07 Mar 2026
Planet Lisp
Scott L. Burson: FSet v2.3.0: Transients!
FSet v2.3.0 added transients! These make it faster to populate new collections with data, especially as the collections get large. I shamelessly stole the idea from Clojure.
They are currently implemented only for the CHAMP types ch-set, ch-map, ch-2-relation, ch-replay-set, and ch-replay-map.
The term "transient" contrasts with "persistent". I'm using the term "persistent" in its functional-data-structure sense, as Clojure does: a data structure is persistent if multiple states of it can coexist in memory efficiently. (The probably more familiar use of the term is in the database sense, where it refers to nonvolatile storage of data.) FSet collections have, up to now, all been persistent in this sense; a point modification to one, such as by with or less, takes only O(log n) space and time to return a new state of the collection, without disturbing the previous state.
A transient encapsulates the internal tree of a collection so as to guarantee that it holds the only pointer to the tree; this allows modifications to tree nodes to be made in-place, so long as the node has sufficient allocated space. Once the collection is built, the tree is in the same format that existing FSet code expects, and can be accessed and functionally updated as usual.
Some quick micro-benchmarking suggests that speedups, for constructing a set from scratch, range from 1.6x at size 64 to as much as 2.4x at size 4096.
You don't necessarily even have to use transients explicitly in order to benefit from them. Some FSet builtins such as filter and image use them now. The GMap result types ch-set etc. also use them.
For details, see the GitLab MR.
07 Mar 2026 8:04am GMT
28 Feb 2026
Planet Lisp
Neil Munro: Ningle Tutorial 15: Pagination, Part 2
Contents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuation Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
- Part 12 (Clean Up & Bug Fix)
- Part 13 (Adding Comments)
- Part 14 (Pagination, Part 1)
- Part 15 (Pagination, Part 2)
Introduction
Welcome back! We will be revisiting the pagination from last time; however, we are going to try to make this easier on ourselves. I built a package for pagination, mito-pager; much of what we looked at in the last lesson was boilerplate and repetitive, so we should look at removing it.
I will say, mito-pager can do a little more than what I show here; it has two modes. You can use paginate-dao (named this way so that it is familiar to mito users) to paginate over simple models; however, if you need to perform complex queries, there is a macro, with-pager, that you can use to paginate. It is this second form we will use in this tutorial.
There is one thing to bear in mind: when using mito-pager, you must implement your data retrieval functions so that they return multiple values (via values), as mito-pager relies on this to work.
I encourage you to try the library out in other use cases and, of course, if you have ideas, please let me know.
Changes
Most of our changes are quite limited in scope; really, it's just our controllers and models that need most of the edits.
ningle-tutorial-project.asd
We need to add the mito-pager package to our project asd file.
- :ningle-auth)
+ :ningle-auth
+ :mito-pager)
src/controllers.lisp
Here is the real payoff! I almost dreaded writing up the sheer volume of the change, but then realised it's actually simple: we only need to change our index function, and it may be easiest to delete it entirely and write our new simplified version.
(defun index (params)
(let* ((user (gethash :user ningle:*session*))
(req-page (or (parse-integer (or (ingle:get-param "page" params) "1") :junk-allowed t) 1))
(req-limit (or (parse-integer (or (ingle:get-param "limit" params) "50") :junk-allowed t) 50)))
(flet ((get-posts (limit offset) (ningle-tutorial-project/models:posts user :offset offset :limit limit)))
(mito-pager:with-pager ((posts pager #'get-posts :page req-page :limit req-limit))
(djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :pager pager)))))
This is much nicer, and in my opinion, the controller should be this simple.
src/main.lisp
We need to ensure we include the templates from mito-pager, this is a simple one line change.
(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
(djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
+ (djula:add-template-directory (asdf:system-relative-pathname :mito-pager "src/templates/"))
src/models.lisp
As mentioned at the top of this tutorial, we have to implement our data retrieval functions in a certain way. While there are some changes here, we ultimately end up with less code.
We can start by removing the count parameter; we won't be needing it in this implementation. And since the count parameter is gone, the :around method can go too!
- (defgeneric posts (user &key offset limit count)
+ (defgeneric posts (user &key offset limit)
-
- (defmethod posts :around (user &key (offset 0) (limit 50) &allow-other-keys)
- (let ((count (mito:count-dao 'post))
- (offset (max 0 offset))
- (limit (max 1 limit)))
- (if (and (> count 0) (>= offset count))
- (let* ((page-count (max 1 (ceiling count limit)))
- (corrected-offset (* (1- page-count) limit)))
- (posts user :offset corrected-offset :limit limit))
- (call-next-method user :offset offset :limit limit :count count))))
There are two methods to look at; the first is when the type of user is user:
-
- (defmethod posts ((user user) &key offset limit count)
+ (defmethod posts ((user user) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql :binds params)
- count
- offset)))
+ (mito:retrieve-by-sql sql :binds params)
+ (mito:count-dao 'post))))
The second is when the type of user is null:
-
- (defmethod posts ((user null) &key offset limit count)
+ (defmethod posts ((user null) &key offset limit)
...
(values
- (mito:retrieve-by-sql sql)
- count
- offset)))
+ (mito:retrieve-by-sql sql)
+ (mito:count-dao 'post))))
As you can see, all we are really doing is relying on mito to do the lion's share of the work, right down to the count.
src/templates/main/index.html
The change here is quite simple: all we need to do is change the path of the included partial so that it points to the one provided by mito-pager.
- {% include "partials/pager.html" with url="/" title="Posts" %}
+ {% include "mito-pager/partials/pager.html" with url="/" title="Posts" %}
src/templates/partials/pagination.html
This one is easy: we can delete it! mito-pager provides its own template, and while you can override it (if you so wish), in this tutorial we no longer need it.
Conclusion
I hope you will agree that this time, using a prebuilt package takes a lot of the pain out of pagination. I don't like to dictate what developers should or shouldn't use; that's why last time you were given the same information I had. If you wish to build your own library, you can; if you want to focus on getting things done, you are more than welcome to use mine. And of course, if you find issues, please do let me know!
Learning Outcomes
| Level | Learning Outcome |
|---|---|
| Understand | Understand how third-party pagination libraries like mito-pager abstract boilerplate pagination logic, and how with-pager expects a fetch function returning (values items count) to handle page clamping, offset calculation, and boundary correction automatically. |
| Apply | Apply flet to define a local adapter function that bridges the project's posts generic function with mito-pager's expected (lambda (limit offset) ...) interface, and use with-pager to reduce controller complexity to its essential logic. |
| Analyse | Analyse what responsibilities were transferred from the manual pagination implementation to mito-pager - count caching, boundary checking, offset calculation, page correction, and range generation - contrasting the complexity of both approaches. |
| Create | Refactor a manual pagination implementation to use mito-pager by simplifying model methods to return (values items count), replacing complex multi-step controller calculations with with-pager, and delegating the pagination template partial to the library. |
Github
- The link for the custom pagination part of the tutorial's code is available here.
Common Lisp HyperSpec
| Symbol | Type | Why it appears in this lesson | CLHS |
|---|---|---|---|
| `defpackage` | Macro | Define project packages like `ningle-tutorial-project/models`, `/forms`, `/controllers`. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpac.htm |
| `in-package` | Macro | Enter each package before defining models, controllers, and functions. | http://www.lispworks.com/documentation/HyperSpec/Body/m_in_pkg.htm |
| `defgeneric` | Macro | Define the simplified generic `posts` function signature with keyword parameters `offset` and `limit` (the `count` parameter is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defgen.htm |
| `defmethod` | Macro | Implement the simplified `posts` methods for `user` and `null` types (the `:around` validation method is removed). | http://www.lispworks.com/documentation/HyperSpec/Body/m_defmet.htm |
| `flet` | Special Operator | Define the local `get-posts` adapter function that wraps `posts` to match mito-pager's expected `(lambda (limit offset) ...)` interface. | http://www.lispworks.com/documentation/HyperSpec/Body/s_flet_.htm |
| `let*` | Special Operator | Sequentially bind `user`, `req-page`, and `req-limit` in the controller where each value is used in subsequent bindings. | http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm |
| `or` | Macro | Provide fallback values when parsing page and limit parameters, defaulting to 1 and 50 respectively. | http://www.lispworks.com/documentation/HyperSpec/Body/m_or.htm |
| `multiple-value-bind` | Macro | Capture the SQL string and bind parameters returned by `sxql:yield` in the model methods. | http://www.lispworks.com/documentation/HyperSpec/Body/m_multip.htm |
| `values` | Function | Return two values from `posts` methods - the list of results and the total count - as required by `mito-pager:with-pager`. | http://www.lispworks.com/documentation/HyperSpec/Body/a_values.htm |
| `parse-integer` | Function | Convert string query parameters ("1", "50") to integers, with `:junk-allowed t` for safe parsing. | http://www.lispworks.com/documentation/HyperSpec/Body/f_parse_.htm |
28 Feb 2026 8:00am GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Guided sightseeing tours
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
26 Jan 2026 11:00pm GMT
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT

