25 Feb 2026
Planet Debian
Sahil Dhiman: Publicly Available NKN Data Traffic Graphs
National Knowledge Network (NKN) is one of India's two National Research and Education Networks (NRENs); the other is the less prevalent Education and Research Network (ERNET).
This post grew out of this Mastodon thread, where I kept adding public traffic graphs from global research and education entities that peer or connect with NKN, to get a sense of the traffic flowing between them and NKN.
CERN
CERN is the birthplace of the World Wide Web (WWW) and home of the Large Hadron Collider (LHC).
- Graph: https://monit-grafana-open.cern.ch/d/XoND3VQ4k/nkn?orgId=16&from=now-30d&to=now&timezone=browser&var-source=raw&var-bin=5m
- Connection seems to be 2x10 G through CERNLight.
India participates in the LHCONE project, which carries LHC data over these links for scientific research purposes. This presentation from Vikas Singhal from Variable Energy Cyclotron Centre (VECC), Kolkata, at the 8th Asian Tier Center Forum in 2024 gives some details.
GÉANT
GÉANT is the pan-European collaboration of NRENs.
- Graph: https://public-brian.geant.org/d/ZyVgYFgVk/nkn?orgId=5&from=now-30d&to=now
- NKN connects at Amsterdam POP.
- According to a 2016 press release from GÉANT, NKN has 2x10 G capacity towards GÉANT, though things might have changed since then.
LEARN
Lanka Education and Research Network (LEARN) is Sri Lanka's NREN.
- Graph: https://kirigal.learn.ac.lk/traffic/details.php?desc=slt-b1-new&data_id=1788&data_id_v6=&sub_bw=1000&name=National%20Knowledge%20Network%20of%20India%20(1G)&swap_inout=
- Connection seems to be 1 G.
NORDUnet
NORDUnet is the NREN for the Nordic countries.
I couldn't find any public live data transfer graphs from NKN's side. If you know of any other graphs, do let me know.
25 Feb 2026 1:38pm GMT
Joachim Breitner: Vibe-coding a debugger for a DSL

Earlier this week a colleague of mine, Emilio Jesús Gallego Arias, shared a demo of something he built as an experiment, and I felt the desire to share this and add a bit of reflection. (Not keen on watching a 5 min video? Read on below.)
What was that?
So what did you just see (or skip watching)? You could see Emilio's screen, running VSCode and editing a Lean file. He designed a small programming language that he embedded into Lean, including an evaluator. So far, so standard, but a few things stick out already:
- Using Lean's very extensible syntax this embedding is rather elegant and pretty.
- Furthermore, he can run this DSL code right there, in the source code, using commands like `#eval`. This is a bit like the interpreter found in Haskell or Python, but without needing a separate process, or like using a Jupyter notebook, but without the stateful cell management.
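To make that concrete, here is a minimal sketch of the general technique — a toy arithmetic language of my own, not Emilio's code: an AST, an evaluator, custom surface syntax declared with Lean's extensible parser, and `#eval` running DSL code directly in the editor.

```lean
-- Toy arithmetic DSL embedded in Lean 4 (illustrative sketch only).
inductive Expr where
  | num : Nat → Expr
  | add : Expr → Expr → Expr
  | mul : Expr → Expr → Expr

-- A straightforward evaluator for the DSL.
def Expr.eval : Expr → Nat
  | .num n   => n
  | .add a b => a.eval + b.eval
  | .mul a b => a.eval * b.eval

-- Custom surface syntax via Lean's extensible parser.
declare_syntax_cat arith
syntax num : arith
syntax arith " + " arith : arith
syntax arith " * " arith : arith
syntax "(" arith ")" : arith
syntax "[arith| " arith "]" : term

-- Translate the surface syntax into the AST.
macro_rules
  | `([arith| $n:num])  => `(Expr.num $n)
  | `([arith| $a + $b]) => `(Expr.add [arith| $a] [arith| $b])
  | `([arith| $a * $b]) => `(Expr.mul [arith| $a] [arith| $b])
  | `([arith| ($a)])    => `([arith| $a])

-- Run DSL code right there in the source file:
#eval [arith| (1 + 2) * 3].eval  -- 9
```

A real embedding like Emilio's would of course carry source positions and richer state, which is what later makes debugger integration possible.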
This is already a nice demonstration of Lean's abilities and strength, as we know them. But what blew my mind the first time was what happened next: he had a visual debugger that allowed him to debug his DSL program. It appeared on the right, in Lean's "Info View", where various Lean tools can hook in, show information, and let the user interact.
But it did not stop there, and my mind was blown a second time: Emilio opened VSCode's "Debugger" pane on the left, and was able to properly use VSCode's full-fledged debugger frontend for his own little embedded programming language! Complete with highlighting the executed line, with the ability to set breakpoints there, and showing the state of local variables in the debugger.
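As an aside of mine, not something shown in the demo: VSCode's debugger frontend talks to any language backend through the Debug Adapter Protocol (DAP), exchanging JSON messages. When the instrumented evaluator hits a breakpoint, the adapter notifies the frontend with a "stopped" event roughly like:

```json
{
  "seq": 42,
  "type": "event",
  "event": "stopped",
  "body": {
    "reason": "breakpoint",
    "threadId": 1
  }
}
```

The frontend then issues "stackTrace" and "variables" requests, which is how the highlighted line and the local-variable view get populated.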
Having a good debugger is not to be taken for granted even for serious, practical programming languages. Having it for a small embedded language that you just built yourself? I wouldn't have even considered that.
Did it take long?
If I were Emilio's manager I would applaud the demo and then would have to ask how many weeks he spent on that. Coming up with the language, getting the syntax extension right, writing the evaluator, and especially learning how the debugger integration into VSCode (via the Debug Adapter Protocol, DAP) works, and then instrumenting his evaluator to speak that protocol - that is a sizeable project!
It turns out the answer isn't measured in weeks: it took just one day of coding together with GPT-Codex 5.3. My mind was blown a third time.
Why does Lean make a difference?
I am sure this post is just one of many stories you have read in recent weeks about how new models like Claude Opus 4.6 and GPT-Codex 5.3 built impressive things in hours that would have taken days or more before. But have you seen something like this? Agentic coding is powerful, but limited by what the underlying platform exposes. I claim that Lean is a particularly well-suited platform to unleash the agents' versatility.
Here we are using Lean as a programming language, not as a theorem prover (which brings other immediate benefits when using agents, e.g. the produced code can be verified rather than merely plausible, but that's a story to be told elsewhere).
But arguably because Lean is also a theorem prover, and because of the requirements that stem from that, its architecture is different from that of a conventional programming language implementation:
- As a theorem prover, it needs extensible syntax to allow formalizing mathematics in an ergonomic way, but the same machinery can be used to embed the syntax of other languages.
- As a theorem prover, it needs the ability to run "tactics" written by the user, hence the ability to evaluate the code right there in the editor.
- As a theorem prover, it needs to give access to information such as tactic state, and such introspection abilities unlock many other features - such as a debugger for an embedded language.
- As a theorem prover, it has to allow tools to present information like the tactic state, so it has the concept of interactive "Widgets".
So Lean's design has always made such a feat possible. But it was no easy feat. The Lean API is large, and documentation never ceases to be improvable. In the past, it would take an expert (or someone willing to become one) to pull off that stunt. These days, coding assistants have no issue digesting, understanding and using the API, as Emilio's demo shows.
The combination of Lean's extensibility and the ability of coding agents to make use of that is a game changer to how we can develop software, with rich, deep, flexible and bespoke ways to interact with our code, created on demand.
Where does that lead us?
Emilio actually shared more such demos (GitHub repository): a visual explorer for the compiler output (have a look at the screenshot), and a browser-devtools-like inspection tool for Lean's "InfoTree". Each of these provides a significant productivity boost. Each of these would have been a sizeable project half a year ago. Now it's just a few hours of chatting with the agent.
So allow me to try and extrapolate into a future where coding agents have continued to advance at the current pace, and are used ubiquitously. Is there then even a point in polishing these tools, shipping them to our users, documenting them? Why build a compiler explorer for our users, if our users can just ask their agent to build one for them, right when they need it, tailored to precisely the use case they have, with no unnecessary or confusing features? The code would be single use, as the next time the user needs something like that the agent can just re-create it, maybe slightly different, because every use case is different.
If that comes to pass then Lean may no longer get praise for its nice out-of-the-box user experience, but instead because it is such a powerful framework for ad-hoc UX improvements.
And Emilio wouldn't post demos about his debugger. He'd just use it.
25 Feb 2026 10:53am GMT
24 Feb 2026
Louis-Philippe Véronneau: Montreal's Debian & Stuff - February 2026

Our Debian User Group met on February 22nd for our first meeting of the year!
Here's what we did:
pollo:
- reviewed and merged Lintian contributions
- released lintian version 2.130.0
- upstreamed a patch for python-wilderness, fixed a few things and released version 0.1.10-3
- updated python-clevercsv to version 0.8.4
- updated python-mediafile to version 0.14.0
lelutin:
- opened an RFH for co-maintenance of smokeping and added Marc Haber, who responded really quickly to the call
- with mjeanson's help: prepped and uploaded a new smokeping version to release pending work
- opened an NM request to become a DM
viashimo:
- fixed freshrss timer
- updated freshrss
- installed new navidrome container
- configured backups for new host (beelink mini s12)
tvaz:
- did NM work
- learned more about debusine and tested it
- uploaded antimony to debusine
- (co-)convinced lelutin to apply for DM (yay!)
lavamind:
- worked on autopkgtests for a new version of jruby
Pictures
This time around, we held our meeting at cégep du Vieux Montréal, the college where I currently work. Here is the view we had:

We also ordered some delicious pizzas from Pizzeria dei Compari, a nice pizzeria on Saint-Denis street that's been there forever.

Some of us ended up grabbing a drink after the event at l'Amère à boire, a pub right next to the venue, but I didn't take any pictures.
24 Feb 2026 9:45pm GMT