After my last post on superimposed codes, I discovered that OEIS already had a sequence for it (I had just missed it due to a slightly different convention), namely A286874 (and its sister sequence A303977, which lists the number of distinct maximal solutions). However, very few terms of this sequence were known; in particular, it was known that a(12) >= 20 (easily proved by simply demonstrating a set of twenty 12-bit numbers with the desired property), but it wasn't known if the value could be higher (i.e., whether there existed a 12-bit set with 21 elements or more). The SAT solver wasn't really working well for this anymore, so I thought: can I just brute-force it? I.e., can I enumerate all 12-bit 20-element sets and then see if any of them have room for a 21st element?
Now, obviously you cannot run a completely dumb brute force. The raw state space is 12*20 = 240 bits, and going through 2^240 different options is not going to happen. But it's a good place to start, and then we can start employing tricks from there. (I'm sure there are fancier ways somehow, but this one was what I chose. I'm no genius with mathematics, but I can write code.)
So I started with a 20-level deep for loop, with each element counting from 0 to 4095 (inclusive). Now, there are some speedups that are obvious; for instance, once you have two elements, you can check that neither is a subset of the other (which is, except in some edge cases with small sets that we don't need to worry about here, a looser condition than what we're trying to test for), and then skip the remaining 18 levels. Similarly, once we have the first three elements, we can start testing whether one is a subset of the OR of the two others, and abort similarly.
Furthermore, we can start considering symmetries. We only care about solutions that are qualitatively distinct, in that the ordering of the elements doesn't matter and the ordering of the bits doesn't matter either. So we can simply consider only sequences where the elements are in strictly increasing order, which is extremely simple, very cheap, and nets us a speedup of 20! ~= 2.4 * 10^18. We have to be a bit careful, though, because this symmetry can conflict with other symmetries that we'd like to use for speedup. For instance, it would be nice to impose the condition that the elements must be in order of increasing population count (number of set bits), but if we do this at the same time as the "strictly increasing" condition, we'll start missing valid solutions. (I did use a very weak variant of it, though: no element can have a smaller popcount than the first one. If one did, you could just swap it with the first element and shuffle columns around to get an equivalent solution, so that restriction isn't in conflict.)
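To make the structure concrete, here is a minimal sketch (not my actual search code, ignoring all the later symmetry breaking, with illustrative names only) of the recursive equivalent of that 20-level loop, with the strictly increasing ordering, the popcount floor and the subset-of-the-OR pruning:

#include <bit>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kBits = 12;
constexpr uint32_t kMaxValue = 1u << kBits;  // 4096
constexpr size_t kElements = 20;

// Does adding x to the already-chosen elements violate the property we want?
bool violates(const std::vector<uint32_t>& chosen, uint32_t x) {
  for (uint32_t c : chosen) {
    if ((x & ~c) == 0 || (c & ~x) == 0) return true;  // one is a subset of the other
  }
  for (size_t i = 0; i < chosen.size(); ++i) {
    for (size_t j = i + 1; j < chosen.size(); ++j) {
      if ((x & ~(chosen[i] | chosen[j])) == 0) return true;  // x is a subset of a|b
    }
    for (size_t j = 0; j < chosen.size(); ++j) {
      if (j != i && (chosen[j] & ~(chosen[i] | x)) == 0) return true;  // c is a subset of a|x
    }
  }
  return false;
}

void search(std::vector<uint32_t>& chosen) {
  if (chosen.size() == kElements) {
    for (uint32_t v : chosen) printf("%03x ", v);  // found a full 20-element set
    printf("\n");
    return;
  }
  uint32_t start = chosen.empty() ? 0 : chosen.back() + 1;           // strictly increasing
  int min_pop = chosen.empty() ? 0 : std::popcount(chosen.front());  // popcount floor
  for (uint32_t x = start; x < kMaxValue; ++x) {
    if (std::popcount(x) < min_pop) continue;
    if (violates(chosen, x)) continue;  // prune: don't descend into this subtree
    chosen.push_back(x);
    search(chosen);
    chosen.pop_back();
  }
}

As written this would of course never finish; the point of the rest of the post is everything that has to be layered on top of it.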
However, there is more that we can do which isn't in conflict. In particular, let's consider (writing only 5-bit elements for brevity) that we are considering candidates for the first element:
00011
00101
00110
10010
These are all, obviously, the same (except that the latter ones will be more restrictive); we could just shuffle bits around and get the same thing. So we impose a new symmetry: Whenever we introduce new bits (bits that have never been set before), they need to start from the right. So now this start of a sequence is valid:
00011
00101
but this is not:
00011
01001
The reason is, again, that we could get the first sequence from the second by swapping the second and third bits (counting from the left). This is cheap and easy to test for, and is not in conflict with our "increasing" criterion as long as we make this specific choice.
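One cheap way to do this test (a minimal sketch, assuming bit 0 is the rightmost bit and that all earlier elements already obeyed the rule) is to note that the set of columns used so far must always form a contiguous block at the low end, i.e. OR-ing the candidate into the mask of previously seen bits must give a value of the form 2^k - 1:

#include <cstdint>

// seen = OR of all previously chosen elements.
// Accept x only if the used columns still form a contiguous block at the low end.
bool new_bits_from_the_right(uint32_t seen, uint32_t x) {
  uint32_t used = seen | x;
  return (used & (used + 1)) == 0;  // true iff used == 2^k - 1 for some k
}

// With the 5-bit examples above: after 00011 (seen = 00011),
// 00101 passes (seen|x = 00111), while 01001 fails (seen|x = 01011).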
But we can extend this even further. Look at these two alternatives:
00111
01001
and
00111
01010
They are also obviously equivalent as prefixes (just swap the fourth and fifth bits), so we don't want to keep both. We impose a very similar restriction to the one before: if a group of columns has been identical in all previous elements, then the new element needs to fill those columns from the right. (If they're not identical, we cannot impose a restriction.) This is also fairly easy to do with some bit fiddling, although my implementation only considers consecutive bits. (It's not in conflict with the strictly-increasing criterion, again because it only makes values lower, not higher. It is, in a sense, a non-decreasing criterion on the columns.)
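The bit fiddling looks roughly like this (a simplified sketch, handling only adjacent columns as mentioned above, with bit 0 as the rightmost column; names are illustrative):

#include <cstdint>

// eq has bit (j-1) set iff columns j and j-1 have been identical in every
// element chosen so far. Reject x if it sets the left column of such a pair
// while leaving the right column clear; swapping the two columns would give
// a lower value.
bool fills_equal_columns_from_the_right(uint32_t eq, uint32_t x) {
  return ((x >> 1) & ~x & eq) == 0;
}

// After accepting an element e, pairs where e differs stop being identical.
uint32_t update_eq(uint32_t eq, uint32_t e) {
  return eq & ~((e >> 1) ^ e);
}

// Initially (no elements chosen) every adjacent pair is identical:
//   eq = (1u << (kBits - 1)) - 1;   // kBits = number of columns
// With the 5-bit example: start with eq = 01111; after 00111 it becomes 01011,
// so 01001 passes and 01010 fails (it sets bit 1 while bit 0 is clear).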
And finally, consider these two sequences (with some other elements in-between):
00111
01001
.....
10011
and
00111
01011
.....
10001
They are also equivalent; if you exchange the first and second bits (counting from the left) in both elements and then swap the order of the two elements, you end up with the same thing. So this brings us to the last symmetry: If you introduce a new bit (or more generally N new bits), then you are not allowed to later introduce a value where that bit is shifted further to the left and the remaining bits are lower. So the second sequence would be outlawed.
Now, how do we do all of these tests efficiently? (In particular, the last symmetry, while it helped a lot in reducing the number of duplicate solutions, wasn't a speed win at first.) My first choice was to just generate code that did all the tests, and did them as fast as possible. This was actually quite efficient, although it took GCC several minutes to compile (and Clang even more, although the resulting code wasn't much faster). Amusingly, this code ended up with an IPC above 6 on my Zen 3 (5950X); no need for hyperthreading here! I don't think I've ever seen real-life code this taxing on the execution units, even though this code is naturally extremely branch-heavy. Modern CPUs are amazing beasts.
It's a bit wasteful that we have 64-bit ALUs (and 256-bit SIMD ALUs) and use them to do AND/OR on 12 bits at a time. So I tried various tricks with packing the values to do more tests at a time, but unfortunately, it only led to slowdowns. So eventually, I settled on a very different solution: Bitsets. At any given time, we have a 4096-bit set of valid future values for the inner for loops. Whenever we decide on a value, we look up in a set of pregenerated tables and just AND them into our set. For instance, if we just picked the value 3 (00011), we look up into the "3" table and it will instantly tell us that values like 7 (00111), 11 (01011), and many others are going to be invalid for all inner iterations and we can just avoid considering them altogether. (Iterating over only the set bits in a bitset is pretty fast in general, using only standard tricks.) This saves us from testing any further value against these illegal ones, so it's super-fast. The resulting tables are large (~4 GB), since we need to look up pairs of values in them, so this essentially transforms our high-ALU problem into a memory-bound problem, but it's still easily worth it (I think it gave a speedup of something like 80x). The actual ANDing is easily done with AVX2, 256 bits at a time.
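The core of it looks roughly like this (a simplified sketch with made-up names, not the actual code): the validity set and the precomputed masks are 4096-bit arrays, the mask for each (earlier value, new value) pair gets ANDed in 256 bits at a time, and the surviving candidates are enumerated by peeling off the lowest set bit of each word. The real version pregenerates all the masks (that is the ~4 GB table) instead of computing them on the fly as done here.

#include <immintrin.h>
#include <cstdint>
#include <vector>

constexpr uint32_t kNumValues = 4096;          // 12-bit values
constexpr uint32_t kWords = kNumValues / 64;   // 64 x uint64_t = 4096 bits

struct alignas(32) Bitset {
  uint64_t w[kWords];
};

// Mask with a 0 for every value x that becomes invalid once both a and b are
// in the set (every x that is a subset of a|b), and 1 everywhere else.
Bitset make_forbidden_mask(uint32_t a, uint32_t b) {
  Bitset m;
  for (uint32_t i = 0; i < kWords; ++i) m.w[i] = ~0ULL;
  uint32_t o = a | b;
  for (uint32_t x = 0; x < kNumValues; ++x) {
    if ((x & ~o) == 0) m.w[x / 64] &= ~(1ULL << (x % 64));
  }
  return m;
}

// AND a mask into the current validity set, 256 bits at a time (needs -mavx2).
void and_into(Bitset& valid, const Bitset& mask) {
  for (uint32_t i = 0; i < kWords; i += 4) {
    __m256i v = _mm256_load_si256(reinterpret_cast<const __m256i*>(&valid.w[i]));
    __m256i m = _mm256_load_si256(reinterpret_cast<const __m256i*>(&mask.w[i]));
    _mm256_store_si256(reinterpret_cast<__m256i*>(&valid.w[i]),
                       _mm256_and_si256(v, m));
  }
}

// After picking value b: one AND per earlier element, i.e. O(n) bitset
// intersections per level instead of O(n^2) individual tests.
void pick(Bitset& valid, const std::vector<uint32_t>& chosen, uint32_t b) {
  for (uint32_t a : chosen) and_into(valid, make_forbidden_mask(a, b));
}

// Iterate over the values that are still allowed.
template <typename Fn>
void for_each_valid(const Bitset& valid, Fn&& fn) {
  for (uint32_t i = 0; i < kWords; ++i) {
    for (uint64_t bits = valid.w[i]; bits != 0; bits &= bits - 1) {
      fn(i * 64 + __builtin_ctzll(bits));
    }
  }
}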
This optimization not only made the last symmetry-breaking feasible, but also sped up the entire process enough (you essentially get O(n) bitset intersections instead of O(n²) new tests per level) that it went from a "multiple machines, multiple months" project to running comfortably within a day on my 5950X (~6 core-days). I guess that's maybe a bit anticlimactic; I had to move the database I used for work distribution locally to the machine or else the latency would have killed me. It found the five different solutions very quickly and then a couple of thousand duplicates of them (filtering those out efficiently is a kind of tricky problem in itself!), and then confirmed there were no others. I submitted it to OEIS, and it should hopefully go through the editing process fairly fast.
The obvious next question is: Can we calculate a(13) in the same way? Unfortunately, it seems the answer is no. Recompiling the same code with 13-bit parameters (taking the LUTs up to ~31 GB, still within the amount of RAM I've got) and making a 25-level instead of 20-level deep for loop, and then running for a while, it seems that we're looking at roughly 4,000-5,000 core-years. Which is infeasible unless you've got a lot of money to burn (with spot VMs on GCE, you're talking about roughly half a million dollars, give or take) on something that isn't a very important problem in computer science.
In theory, there's still hope, though: The fact that we're still finding the same solution ~1000x (down from ~100000x before the last symmetries were added!) indicates that there's some more symmetry that we could in theory exploit and break (and that factor 1000 is likely to be much larger for 25 elements than for 20). So if someone more creative than me could invent code for identifying them (or some other way of rejecting elements early), we could perhaps identify a(13). But I don't think that's happening anytime soon. Brute force found its sweet spot and I'm happy about that, but it doesn't scale forever. :-)
Krita - 5.2.11 - Excellent graphic art platform (compares to Photoshop)
kgraphviewer - Graphviz .dot file viewer
I am happy to report my arm is mostly functional! Unfortunately, maintaining all these snaps is an enormous amount of work, with time I don't have! Please consider a donation for the time I should be spending job hunting / getting a website business off the ground. Thank you for your consideration!
From the PCB I can confirm J16 and pins numbered left (sysctl) to right.
attach "dtech" prolific PL2303 based serial to usb cable per serial console section of PR manual
lsusb shows ID 067b:23a3 Prolific Technology, Inc. ATEN Serial Bridge
install tio
add my user to group dialout
newgrp dialout
tio /dev/ttyUSB0 -b 1500000
A closer look at the PCB in KiCad makes me realize the pin labels in the manual are wrong. 4 = GND, 5 = UART1_RX, 6 = UART1_TX. With that change I have U-Boot output on boot.
Serial console software
With some help from minute on ircs://irc.libera.chat:6697/#mnt-reform, I got the kernel boot arguments right to have not just U-Boot output but Linux kernel output on the serial console. In consfigurator notation
Okay, here's the deal: I pushed my first post on Reimagined Doodle - Alias Command, five years ago on July 8th, 2020. I don't think I ever mentioned that the post started out as a GitHub Gist which I later transferred here, seeking a more long-term home on an independent platform.
Writing about writings, motivations, and the blog itself has been a recurring theme here over the years.
I'm unsure how I sustained expressing myself and writing here for this long. Now and then, I go months without any thought of writing, and then all of a sudden I start in bursts with sequential posts one after another. There isn't a pattern per se in topics other than whatever burning question I have at the moment.
Typesetters did not like the laser printer. Wedding photographers still hate the iPhone. And some musicians are outraged that AI is now making mediocre pop music.
In the article, Seth connected AI's boost to productivity with the observation that anything which improves productivity always wins.
Nowadays, large language models (LLMs) have become synonymous with AI, while AI is a broader field. AI has brought a shift in how things are done. Use cases might vary, but it's helping in ways like quickly summarizing huge knowledge bases to answer questions or, in my case, helping understand the contextual meaning of complex word (or sentence) usage in language and literature in both English and Hindi, which was sometimes not easy to comprehend with simple web search results.
Even if you or I don't really like "AI in everything", we can't deny the fact that AI is here to stay. This doesn't take away from the fact that AI needs to become ethical, regulated, and environmentally sustainable.
This was my hundred-thirty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4221-1] libblockdev security update to fix one embargoed CVE related to obtaining full root privileges.
[hardening udisks2] uploaded new version of udisks2 with a hardening patch related to DLA 4221-1
[DLA 4235-1] sudo security update to fix one embargoed CVE related to a local privilege escalation.
[#1106867] got permission to upload kmail-account-wizard; the package was marked as accepted in July.
This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the eighty-third ELTS month. During my allocated time I uploaded or worked on:
[ELA-1465-1] libblockdev security update to fix one embargoed CVE in Buster, related to obtaining full root privileges.
[ELA-1475-1] gst-plugins-good1.0 security update to fix 16 CVEs in Stretch. This also included cherry-picking other commits to make these fixes possible.
[ELA-1476-1] sudo security update to fix one embargoed CVE in Buster, Stretch and Jessie. The fix prevents a local privilege escalation.
This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.
… ta-lib to close at least one RFP by uploading a real package
Unfortunately I stumbled over a discussion about RFPs. One part of those involved wanted to automatically close older RFPs, the other part just wanted to keep them. But nobody suggested to really take care of those RFPs. Why is it easier to spend time on talking about something instead of solving the real problem? Anyway, I had a look at those open RFPs. Some of them can be just closed because they weren't closed when the corresponding package was uploaded. For some others the corresponding software has not seen any upstream activity for several years and depends on older software no longer in Debian (like Python 2). Such bugs can be just closed. Some requested software only works together with long-gone technology (for example the open Twitter API). Such bugs can be just closed. Last but not least, even the old RFPs contain nice software that is still maintained upstream and useful. One example is ta-lib that I uploaded in June. So, please, let's put our money where our mouths are. My diary of closed RFP bugs is on people.d.o. If only ten people follow suit, all bugs can be closed within a year.
FTP master
It is still this time of the year when just a few packages arrive in NEW: it is Hard Freeze. So please don't hold it against me that I enjoy the sun more than processing packages in NEW. This month I accepted 104 and rejected 13 packages. The overall number of packages that got accepted was 105.
For some time now I had been looking for a device to replace my Thinkpad. It's a 14" device, but that's too big for my taste. I am a big fan of small notebooks, so when frame.work announced their 12" laptop, I took the chance and ordered one right away.
I was in one of the very early batches and got my package a couple of days ago. When ordering, I chose the DIY edition, but in the end there was not that much of DIY to do: I had to plug in the storage and the memory, put the keyboard in and tighten some screws. There are very detailed instructions with a lot of photos that tell you which part to put where, which is nice.
My first impressions of the device are good - it is heavier than I anticipated, but very well made. It is very easy to assemble and disassemble and it feels like it can take a hit.
When I started it the first time it took some minutes to boot because of the new memory module, but then it told me right away that it could not detect an operating system. As usual when I want to install a new system, I created a GRML live USB system and tried to boot from this USB device. But the Framework BIOS did not want to let me boot GRML, telling me it is blocked by the current security policy. So I started to look in the BIOS for the SecureBoot configuration, but there was no such setting anywhere. I then resorted to a Debian Live image, which was allowed to boot.
I only learned later that the SecureBoot setting is in a separate section that is not part of the main BIOS configuration dialog. There is an "Administer Secure Boot" icon which you can choose when starting the device, but apparently only before you try to load an image that is not allowed.
I always use my personal minimal install script to install my Debian systems, so it did not make that much of a difference to use Debian Live instead of GRML. I only had to apt install debootstrap before running the script.
I updated the install script to default to trixie and to also install shim-signed, and after successful installation booted into Debian 13 on the Framework 12. Everything seems to work fine so far. WiFi works. For sway to start I had to install firmware-intel-graphics. The touchscreen works without me having to configure anything (though I don't have a frame.work stylus, as they are not yet available), and changing the brightness of the screen also worked right away. The keyboard feels very nice, likewise the touchpad, which I configured to allow tap-to-click using the tap enabled option of sway-input.
One small downside of the keyboard is that it does not have a backlight, which was a surprise. But given that this is a frame.work laptop, there are chances that a future generation of the keyboard will have backlight support.
The screen of the laptop can be turned all the way around to the back of the laptop's body, so it can be used as a tablet. In this mode the keyboard gets disabled to prevent accidentally pushing keys when using the device in tablet mode.
For online meetings I still prefer using headphones with cables over Bluetooth ones, so I'm glad that the laptop has a headphone jack on the side.
Above the screen there are a camera and a microphone, which both have separate physical switches to disable them.
I ordered a couple of expansion cards, in the current setup I use two USB-C, one HDMI and one USB-A. I also ordered a 1TB expansion card and only used this to transfer my /home, but I soon realized that the card got rather hot, so I probably won't use it as a permanent expansion.
I can not yet say a lot about how long the battery lasts, but I will bring the laptop to DebConf 25, I guess there I'll find out. There I might also have a chance to test if the screen is bright enough to be usable outdoors ;)
In June there was an extended discussion about the ongoing challenges around mentoring newcomers in Debian. As many of you know, this is a topic I've cared about deeply--long before becoming DPL. In my view, the issue isn't just a matter of lacking tools or needing to "try harder" to attract contributors. Anyone who followed the discussion will likely agree that it's more complex than that.
I sometimes wonder whether Debian's success contributes to the problem. From the outside, things may appear to "just work", which can lead to the impression: "Debian is doing fine without me--they clearly have everything under control." But that overlooks how much volunteer effort it takes to keep the project running smoothly.
We should make it clearer that help is always needed--not only in packaging, but also in writing technical documentation, designing web pages, reaching out to upstreams about license issues, finding sponsors, or organising events. (Speaking from experience, I would have appreciated help in patiently explaining Free Software benefits to upstream authors.) Sometimes we think too narrowly about what newcomers can do, and also about which tasks could be offloaded from overcommitted contributors.
In fact, one of the most valuable things a newcomer can contribute is better documentation. Those of us who've been around for years may be too used to how things work--or make assumptions about what others already know. A person who just joined the project is often in the best position to document what's confusing, what's missing, and what they wish they had known sooner.
In that sense, the recent "random new contributor's experience" posts might be a useful starting point for further reflection. I think we can learn a lot from positive user stories, like this recent experience of a newcomer adopting the courier package. I'm absolutely convinced that those who just found their way into Debian have valuable perspectives--and that we stand to learn the most from listening to them.
Lucas Nussbaum has volunteered to handle the paperwork and submit a request on Debian's behalf to LLM providers, aiming to secure project-wide access for Debian Developers. If successful, every DD will be free to use this access--or not--according to their own preferences.
A long time ago, I became aware of UDD (Ultimate Debian Database), which gathers various Debian data into a single SQL database.
At that time, we were trying to do something simple: list the contributions (package uploads) of our local community, Debian Brasília. We ended up with a script that counted uploads to unstable and experimental.
I was never satisfied with the final result because some uploads were always missing. Here is an example:
I made changes in debci 3.0, but the upload was done by someone else. This kind of contribution cannot be tracked by that script.
Then, a few years ago, I learned about Minechangelogs, which allows us to search through the changelogs of all Debian packages currently published.
Today, I decided to explore how this was done, since I couldn't find anything useful for that kind of query in UDD's tables.
That's when I came across ProjectB. It was my first time hearing about it. ProjectB is a database that stores all the metadata about the packages in the Debian archive, including the changelogs of those packages.
Now that I'm a Debian Developer, I have access to this database. If you also have access and want to try some queries, you can do this:
Rumour has it that I might be a bit of a train nerd. At least I want to collect various nerdy data about my travels. Historically that data has lived in manual form in several places,1 but over the past year and a half I've been working on a toy project to collect most of that information into a custom tool.
That toy project2 uses various sources to get information about trains to fill up its database: for example, in Finland Fintraffic, the organization responsible for railway traffic management, publishes very comprehensive open data about almost everything that's moving on the Finnish railway network. Unfortunately, I cannot be on all of the trains.3 Thus I need to tell the system details about my journeys.
The obvious solution is to make a form that lets me save that data. Which I did, but I got very quickly bored of filling out that form, and as regular readers of this blog know, there is no reason to settle for a simple but boring solution when the alternative is to make something that is ridiculously overengineered.
Parsing data out of my train tickets
Finnish long-distance trains generally require train-specific seat reservations, which means VR (the train company) knows which trains I am on. We just need to find a way to extract that information in some machine-readable format. So my plan for the ridiculously overengineered solution was to parse the booking emails to get the details I need.
Now, VR ticket emails include the data I want in a couple of different formats: they're included as text in the HTML email body, they're in the embedded calendar invite, as text in the included PDF ticket, and encoded in the Aztec Code in the included PDF ticket. I chose to parse the last option with the hopes of building something that could be ported to parse other operators' tickets with relative ease.
I then wrote a parser in Go for the binary data embedded in these codes. So far it works, although I wouldn't be surprised if there are some edge cases that it doesn't handle. In particular, the spec specifies a custom lookup table for converting between text and binary data, and that only has support for characters 0-9 and A-Z. But Finnish railway station codes can also use Ä and Ö.. maybe I need to buy a ticket to a station with one of those.
Extracting barcodes out of emails
A parser just for the binary format isn't enough here if the intended source input is the emails that VR sends upon making a booking. Time to write a single-purpose email server! In short, the logic in the server, again written in Go and with the help of go-smtp and go-message, is:
For all decoded barcodes, try to parse them with my new ticket parsing library I mentioned earlier
If any tickets are found, send the data from them and any metadata to the main backend, which will save them to a database
The custom mail server exposes an LMTP interface over TCP for my internet-facing mail servers to forward to. I chose LMTP for this because it seemed like a better fit in theory than normal (E)SMTP. I've since discovered that curl doesn't support LMTP which makes development much harder, and in practice there's no benefit of LMTP here as all mails are being sent to the backend in a single request regardless of the number of recipients, so maybe I'll migrate it to regular SMTP at some point.
Side quest time
The last missing part is automatically forwarding the ticket mails to the new service. I've routed a dedicated subdomain to the new service, and the backend is configured to allocate addresses like i2v44g2pygkcth64stjgyuqz@somedomain.example for each user. That's great if we wanted to manually forward mails to the service, but we can go one step above that. I created a dedicated email alias in my mail server config that routes both to my regular mailbox and the service address. That way I can update my VR account to use the alias and have mails automatically processed while still receiving backup copies of the tickets (and any other important mail that VR might send me).
Unfortunately that last part turns out to be something that's easier said than done. Logging in on the website, I'm greeted by this text stating I need to contact customer service by phone to change the address associated with my account.5 After a bit of digging, I noticed that the mobile app suggests filling out a feedback form in order to change the address. So I filled that out, and after a day or two I got a "confirm you want to change your email" mail. Success!
Including (but not limited to): a page of this website, the notes app on my phone, and an uMap map. ↩︎
Which I'm not directly naming here because I still think it needs a lot more work before being presentable, but if you're really interested it's not that hard to find out. ↩︎
Someone should invent human cloning so that we can fix this. ↩︎
People who know much more about railway ticketing than I do were surprised when I told them this format is still in use somewhere. So, uh, sorry if you were expecting a nice universal worldwide standard! ↩︎
In case you have not guessed yet, I do not like making phone calls. ↩︎
For at least 12 years laptops have been defaulting to not having the traditional PC 101-key keyboard function key behaviour; instead the keys have had other functions like controlling the volume, with a key labelled Fn to toggle between the two. It's been a BIOS option to control whether traditional function keys or controls for volume etc are the default, and for at least 12 years I've configured all my laptops to have the traditional function keys as the default.
Recently I've been working in corporate IT and having exposure to many laptops with the default BIOS settings for those keys to change volume etc and no reasonable option for addressing it. This has made me reconsider the options for configuring these things.
The F1 key launches help, which doesn't seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help which probably won't involve F1.
F2 is for renaming files but doesn't get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
F3 is for launching a search (which is CTRL-F in most programs).
ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
F5 is for reloading a page which is used a lot in web browsers.
F6 moves the input focus to the URL field of a web browser.
F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
F11 is for full-screen mode in browsers which is sometimes handy.
The keys F1, F3, F4, F7, F9, F10, and F12 don't get much use for me and for the people I observe. The F2 and F8 keys aren't useful in most programs, F6 is only really used in web browsers - but the web browser counts as "most programs" nowadays.
Here's the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don't. Dell doesn't document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.
I have used the KDE settings on my Thinkpad to map the F1 to F3 function keys to their Fn equivalents (F1 to mute-audio, F2 for vol-down, and F3 for vol-up) so I can use them without holding down the Fn key, while other function keys such as F5 and F6 keep their usual GUI functionality. Now I will have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.
The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that's not something I use much.
It's annoying that the laptop manufacturers forced me into this. Having an Fn key to get extra functions and not need 101+ keys on a laptop-sized device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads the touchpad is something that could obviously be removed to gain some extra space, as the Trackpoint does all that's needed in that regard.
In the past few months, I have moved the authoritative name servers (NS) of two of my domains (sahilister.net and sahil.rocks) in house using PowerDNS. Subdomains of sahilister.net see roughly 320,000 hits/day across my IN and DE mirror nodes, so adding secondary name servers with good availability (in addition to my own) was one of my first priorities.
I explored the following options for my secondary NS, which also didn't cost me anything:
One has to delegate NS towards one or more of ns[1-5].he.net to verify ownership. It does lead to a minor lame server period between NS addition and first zone transfer.
ns-global.kjsl.com uses Afraid.org, Puck and their NS for their own zone.
Asking friends
Two of my friends and fellow mirror hosts have their own authoritative name server setup, Shrirang (i.e. albony) and Luke. Shrirang gave me another POP in IN and through Luke (who does have an insane amount of in-house NS, see dig ns jing.rocks +short), I added a JP POP.
If we know each other, I would be glad to host a secondary NS for you in (IN and/or DE locations).
Some notes
Adding a third-party secondary is putting trust that the third party would serve your zone right.
Hurricane Electric and 1984 Hosting provide multiple NS. One can use some or all of them. Ideally, you can get away with just your own NS plus the full set from either of these two. Play around with adding and removing secondaries to see which combination gives you the best results. Using everyone is overkill anyway, unless you have specific reasons for it.
Moving NS in-house isn't that hard. Though, be prepared to get it wrong a few times (and some more). I have already faced partial outages because:
Recursive resolvers (RR) in the wild behave in weird ways and cache the wrong NS response for longer than the TTL.
NS expiry took more time than expected. 2 out of 3 of Netim's (my domain registrar) NS had stopped serving my domain, while RRs in the wild hadn't picked up my new in-house NS yet. I couldn't really do anything about it, though.
The dot at the end is pretty important.
With HE.net, I forgot to delegate my domain on their panel and just added it to my NS set, thinking I'd already done so (which I had, but for another domain), leading to a lame server situation.
In terms of serving traffic, there's no distinction between primary and secondary NS. RRs don't really care which server they send the query to. So one can have a hidden primary too.
I initially thought of adding periodic RIPE Atlas measurements from the global set but decided against it, as I already host a termux mirror, which brings in thousands of queries from around the world, leading to a diverse set of RRs querying my domain already.
In most cases, query resolution time would increase with out-of-zone NS servers (which external secondaries most likely are): 1 query vs. 2 queries. Pay close attention to the ADDITIONAL SECTION in Shrirang's case, followed by mine:
$ dig ns albony.in
; <<>> DiG 9.18.36 <<>> ns albony.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60525
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 9
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;albony.in. IN NS
;; ANSWER SECTION:
albony.in. 1049 IN NS ns3.albony.in.
albony.in. 1049 IN NS ns4.albony.in.
albony.in. 1049 IN NS ns2.albony.in.
albony.in. 1049 IN NS ns1.albony.in.
;; ADDITIONAL SECTION:
ns3.albony.in. 1049 IN AAAA 2a14:3f87:f002:7::a
ns1.albony.in. 1049 IN A 82.180.145.196
ns2.albony.in. 1049 IN AAAA 2403:44c0:1:4::2
ns4.albony.in. 1049 IN A 45.64.190.62
ns2.albony.in. 1049 IN A 103.77.111.150
ns1.albony.in. 1049 IN AAAA 2400:d321:2191:8363::1
ns3.albony.in. 1049 IN A 45.90.187.14
ns4.albony.in. 1049 IN AAAA 2402:c4c0:1:10::2
;; Query time: 29 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:01 IST 2025
;; MSG SIZE rcvd: 286
vs mine
$ dig ns sahil.rocks
; <<>> DiG 9.18.36 <<>> ns sahil.rocks
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64497
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;sahil.rocks. IN NS
;; ANSWER SECTION:
sahil.rocks. 6385 IN NS ns5.he.net.
sahil.rocks. 6385 IN NS puck.nether.net.
sahil.rocks. 6385 IN NS colin.sahilister.net.
sahil.rocks. 6385 IN NS marvin.sahilister.net.
sahil.rocks. 6385 IN NS ns2.afraid.org.
sahil.rocks. 6385 IN NS ns4.he.net.
sahil.rocks. 6385 IN NS ns2.albony.in.
sahil.rocks. 6385 IN NS ns3.jing.rocks.
sahil.rocks. 6385 IN NS ns0.1984.is.
sahil.rocks. 6385 IN NS ns1.1984.is.
sahil.rocks. 6385 IN NS ns-global.kjsl.com.
;; Query time: 24 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:20 IST 2025
;; MSG SIZE rcvd: 313
Theoretically speaking, a small increase/decrease in resolution time would occur based on the chosen TLD and the popularity of the TLD in the query originator's area (already cached vs. fresh recursion).
One can get away with having only 3 NS (or be like Google and have 4 anycast NS or like Amazon and have 8 or like Verisign and make it 13 :P).
Nowhere is it written that your NS needs to be called dns* or ns1, ns2, etc. Get creative with naming NS; be deceptive with the naming :D.
A good understanding of RR behavior can help engineer a good authoritative NS system.
And this is the time when one realizes that she only has one white camisole left. And it's summer, so I'm wearing a lot of white shirts, and I always wear a white camisole under a white shirt (unless I'm wearing a full chemise).
Not a problem, I have a good pattern for a well fitting camisole that I've done multiple times, I don't even need to take my measurements and draft things, I can get some white jersey from the stash and quickly make a few.
From the stash. Where I have a roll of white jersey and one of off-white jersey. It's in the inventory. With the "position" field set to a place that no longer exists. uooops.
But I have some leftover lightweight (woven) linen fabric. Surely if I cut the pattern as is with 2 cm of allowance and then sew it with just 1 cm of allowance it will work even in a woven fabric, right?
Wrong.
I mean, it would have probably fit, but it was too tight to squeeze into, and would require adding maybe a button closure to the front. Feasible, but not something I wanted.
But that's nothing that can't be solved with the Power of Insertion Lace, right?
One dig through the Lace Stash1 and some frantic zig-zag sewing later, I had a tube wide enough for me to squiggle in, with lace on the sides not because it was the easiest place for me to put it, but because it was the right place for it to preserve my modesty, of course.
Encouraged by this, I added a bit of lace to the front, for the look of it, and used some more insertion lace for the straps, instead of making them out of fabric.
And, it looks like it can work. I plan to wear it tonight, so that I can find out whether there is something that chafes or anything, but from a quick test it feels reasonable.
At bust level it's now a bit too wide, and it gapes a bit under the arms, but I don't think that it's going to cause significant problems, and (other than everybody on the internet) nobody is going to see it, so it's not a big deal.
I still have some linen, but I don't think I'm going to make another one with the same pattern: maybe I'll try to do something with a front opening, but I'll see later on, also after I've been looking for the missing jersey in a few more potential places.
As for now, the number of white camisoles I have has doubled, and this is progress enough for today.
with many thanks to my mother's friend who gave me quite a bit of vintage cotton lace.↩︎
Since some time now debputy is available in the archive. It is a declarative build system for Debian packages, but also includes a Language Server (LS) part. An LS is a binary that can hook into any client (editor) supporting the LSP (Language Server Protocol) and deliver syntax highlighting, completions, warnings and …
There are many negative articles about "AI" (which is not about actual Artificial Intelligence, also known as "AGI"), which I think are mostly overblown and often ridiculous.
Resource Usage
Complaints about resource usage are common; training Llama 3.1 could apparently produce as much pollution as "10,000 round trips by car between Los Angeles and New York City". That's not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route, it doesn't seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution: make it more expensive to do undesirable things and let the market sort it out?
ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.
The result of that (the dot-com bubble bursting) was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then buying up their assets and making profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn't get to witness what happened with the other one). As far as I'm aware random Dutch citizens and residents didn't suffer from this and employees just got jobs elsewhere.
There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.
NVidia isn't ever going to have the future sales that would justify a market capitalisation of almost 4 Trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology in a similar way to the high stock price of Google supported buying YouTube, DoubleClick, and Motorola Mobility which are the keys to Google's profits now.
The Real Upsides of ML
Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that's a huge business expense).
There are many applications of ML in medical research such as recognising cancer cells in tissue samples.
There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers - technology that was apparently repurposed for recognising cancer cells.
The ability to recognise objects in photos is useful. It can be used by people who want to learn about random objects they see and could be used for helping young children learn about their environment. It also has some potential for assisting visually impaired people; it wouldn't be good for safety critical systems (don't cross a road because a ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI pin had some real potential to do good things but there wasn't a suitable business model [2]; I think that someone will develop similar technology in a useful way eventually.
Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.
ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won't necessarily allow them to solve problems that they couldn't solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.
I don't think it's reasonable to expect ML systems to make as much impact on society as the industrial revolution, and the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn't mean everything will be fine but it is something that can seem OK after the changes have happened. I'm not saying "apart from the death and destruction everything will be good", the death and destruction are optional. Improvements in manufacturing and farming didn't have to involve poverty and death for many people, improvements to agriculture didn't have to involve overcrowding and death from disease. This was an issue of political decisions that were made.
The Real Problems of ML
Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven't been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren't going to have revolutions.
There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints about how people may use it. It's interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe ChatGPT systems could be used to alleviate mental health problems.
The cases of LLM systems being used for cheating on assignments etc aren't a real issue. People have been cheating on assignments since organised education was invented.
There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn't going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for this: for example a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and see if it changes the answer. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then there could be weights put in to oppose that.
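The "change one factor at a time" test is easy to automate; something along these lines (a toy sketch, with a made-up applicant record and a dummy decision function standing in for the real ML system being audited):

#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Toy applicant record; a real system would have many more fields.
struct Applicant {
  std::string name;
  std::string gender;
  int age;
  std::string address;
  double income;
  double debt;
};

// Stand-in for the mortgage-approval ML system being audited.
// A fair model should only depend on the financial fields.
bool approve(const Applicant& a) {
  return a.income > 3.0 * a.debt;
}

// Change one protected attribute at a time and see whether the decision flips.
void probe(const Applicant& base) {
  bool baseline = approve(base);
  std::vector<std::pair<std::string, Applicant>> variants;

  Applicant v = base; v.name = "Some Other Name";   variants.push_back({"name", v});
  v = base;           v.gender = "other";           variants.push_back({"gender", v});
  v = base;           v.age = base.age + 30;        variants.push_back({"age", v});
  v = base;           v.address = "another suburb"; variants.push_back({"address", v});

  for (const auto& [field, applicant] : variants) {
    if (approve(applicant) != baseline) {
      printf("decision changed when only '%s' changed: possible bias\n", field.c_str());
    }
  }
}

int main() {
  probe({"Jane Doe", "female", 35, "some suburb", 90000.0, 20000.0});
}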
For a long time there has been excessive trust in computers. Computers aren't magic; they just do maths really fast and implement choices based on the work of programmers - who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in a ML system where no-one really knows why it makes the decisions it makes.
Self-driving cars kill people; this is the truth that Tesla stockholders don't want people to know.
Companies that try to automate everything with "AI" are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job would require a large part of an actual intelligent computer, which, if it is achieved, will raise an entirely different set of problems.
I've previously blogged about ML Security [5]. I don't think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.
How Will It Go?
Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won't fare as well. But their assets can be used by new companies when sold at less than 10% of the purchase price.
Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into "AI" then that could be a win for humanity.
Companies that bet their entire business on AI even when it's not necessarily their core business (as Tesla has done with self-driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self-driving cars.