19 Dec 2025

feedPlanet Grep

Lionel Dricot: Prepare for That Stupid World

Prepare for That Stupid World

You have probably heard about the Wall Street Journal story where they had a snack vending machine run by a chatbot created by Anthropic.

At first glance, it is funny and it looks like journalists doing their job criticising the AI industry. If you are curious, the video is there (requires JS).

But what appears to be journalism is, in fact, pure advertising, for both the WSJ and Anthropic. Look at how the WSJ journalists are presented as "world class", how unsubtle the Anthropic guy is when telling them they are the best, and how the journalists blush at it. If you take the story at face value, you are falling for the trap, which is simple: "AI is not really good but funny, we must improve it."

The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why would you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must still be stocked by humans. Customers must order and take their snacks by themselves. The AI adds no value at all.

Automated snack vending machines have been a solved problem for nearly a century. Why would you want to make your vending machine more expensive, more error-prone, more fragile and less efficient for your customers?

What this video is really doing is normalising the fact that "even if it is completely stupid, AI will be everywhere, get used to it!"

The Anthropic guy himself doesn't seem to believe his own lies, to the point of making me uncomfortable. Toward the end, he even tries to warn us: "Claude AI could run your business but you don't want to come in one day and see you have been locked out." To which the journalist adds, "Or has ordered 100 PlayStations."

And then he gives up:

"Well, the best you can do is probably prepare for that world."

Still from the video where Anthropic's employee says "probably prepare for that world"

None of the world-class journalists seemed to care. They are probably too badly paid for that. I was astonished to see how proud they were, having spent literally hours chatting with a bot just to get a free coke, even queuing for the privilege of having a free coke. A coke that costs a few minutes of minimum-wage work.

So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will queue at length just to get a free soda.

And the best advice about it is that you should probably prepare for that world.

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

19 Dec 2025 3:42pm GMT

Frederic Descamps: Deploying on OCI with the starter kit – part 6 (GenAI)

In the previous articles [1], [2], [3], [4], [5], we saw how to easily and quickly deploy an application server and a database to OCI. We also noticed that we have multiple programming languages to choose from. In this article, we will see how to use OCI GenAI Service (some are also available with the […]

19 Dec 2025 3:42pm GMT

Dries Buytaert: Adaptable Drupal modules: code meant to be adapted, not installed

Over the years, I've built dozens of small, site-specific Drupal modules. None of them live on Drupal.org.

It makes me wonder: how many modules like that exist across the Drupal ecosystem? I'm guessing a lot.

For example, I recently open-sourced the content of this blog by exporting my posts as Markdown files and publishing them on GitHub. To do that, I built two custom Drupal modules with Claude Code: one that converts HTML to Markdown, and another that exports content as YAML with Markdown.

Both modules embed architectural choices and algorithms I explicitly described to Claude Code. Both have unit tests and have been used in production. But both only work for my site.

They're built around my specific content model and field names. For example, my export module expects fields like field_summary and field_image to exist. I'd love to contribute them to Drupal.org, but turning site-specific code into something reusable can be a lot of work.

On Drupal.org, contributed modules are expected to work for everyone. That means abstracting away my content model, adding configuration options I'll never use, handling edge cases I'll never hit, and documenting setups I haven't tested.

There is a "generalization tax": the cost of making code flexible enough for every possible site. Drupal has always had a strong culture of contribution, but this tax has kept a lot of useful code private. My blog alone has ten custom modules that will probably never make it to Drupal.org under the current model.

Generalization work is extremely valuable, and the maintainers who do it deserve a lot of credit. But it can be a high bar, and a lot of useful code never clears it.

That made me wonder: what if we had a different category of contributed code on Drupal.org?

Let's call them "adaptable modules", though the name matters less than the idea.

The concept is simple: tested, working code that solves a real problem for a real site, shared explicitly as a starting point. You don't install these modules. You certainly don't expect them to work out of the box. Instead, an AI adapts the code for you by reading it and understanding the design decisions embedded in it. Or a human can do the same.

In practice, that might mean pointing Claude Code at my Markdown export module and prompting: "I need something like this, but my site uses Paragraphs instead of a regular Body field". Or: "I store images in a media field instead of an image field". The AI reads the code, understands the approach, and generates a version tailored to your setup.

This workflow made less sense when humans had to do all the adaptation. But AI changes the economics. AI is good at reading code, understanding what it does, and reshaping it for a new context. The mechanical work of adaptation is becoming both cheap and reliable.

What matters are the design decisions embedded in the code: the architecture, the algorithms, the trade-offs. Those came from me, a human. They are worth sharing, even if AI handles the mechanical adaptation.

This aligns with where engineering is heading. As developers, we'll spend less time on syntax and boilerplate, and more time on understanding problems, making architectural choices, and weighing trade-offs. Our craft is shifting from writing code to shaping code, and orchestrating the AI agents that write it. Adaptable modules fit that future.

Modules that work for everyone are still important. Drupal's success will always depend on them. But maybe they're not the only kind worth sharing. The traditional contribution model, generalizing everything for everyone, makes less sense for smaller utility modules when AI can generate context-specific code on demand.

Opinionated, site-specific modules have always lived in private repositories. What is new is that AI makes them worth sharing. Code that only works for my site becomes a useful starting point when AI can adapt it to yours.

I created an issue on Drupal.org to explore this further. I'd love to hear what you think.

(Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy and Wim Leers for reviewing my draft.)

19 Dec 2025 3:42pm GMT

feedPlanet Debian

Kartik Mistry: KDE Needs You!

* KDE Randa Meetings and make a donation!

I know that my contributions to KDE are minimal at this stage, but hey, I'm doing my part this time for sure!

19 Dec 2025 1:44pm GMT

Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

Featured image of post Backtesting trailing stop-loss strategies with Python and market data

In January 2024 I wrote about the insanity of the magnificent seven dominating the MSCI World Index, and I wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is getting closer. As a software professional, I decided to analyze whether using stop-loss orders could be a reliable way to automatically avoid deep drawdowns.

As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy either, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years.
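
The arithmetic above is easy to check with a couple of lines of Python (the 7% rate and the 20-euro dinner are the article's assumptions):

```python
# Compound a price by a constant yearly inflation rate
def inflate(price: float, rate: float, years: int) -> float:
    return price * (1 + rate) ** years

dinner_in_3y = inflate(20.0, 0.07, 3)
print(round(dinner_in_3y, 2))    # 24.5 euros per dinner in three years
print(int(1000 / 20.0))          # 50 dinners for 1000 euros today
print(int(1000 / dinner_in_3y))  # 40 dinners for the same 1000 euros later
```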

Hence, if you intend to retain the value of your dear savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.

What is a trailing stop-loss order?

What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin.

For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, that is at 99 euros. Thus, no matter what happens, you lose only 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss automatically readjusts to 150 euros minus 10%, which is 135 euros (150-15=135). If the price then dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price falls to as a result of a large crash.
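
The bookkeeping behind a trailing stop can be expressed in a few lines of Python. This is a simplified sketch of the mechanics described above: it checks one closing price per step and assumes an idealized fill exactly at the stop price, ignoring intraday moves and slippage:

```python
def trailing_stop_exit(prices, margin=0.10):
    """Return (exit_price, peak) if the trailing stop triggers, else None."""
    peak = prices[0]
    for price in prices:
        peak = max(peak, price)   # the stop trails the highest price seen so far
        stop = peak * (1 - margin)
        if price <= stop:         # price fell to the stop: the position is sold
            return stop, peak
    return None

# Buy at 100, price peaks at 110, then drops: sold at 99 (110 minus 10%)
print(trailing_stop_exit([100, 105, 110, 99]))
# Price instead runs to 150 first: the stop has trailed up to 135
print(trailing_stop_exit([100, 110, 150, 134]))
```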

In the simple case above it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don't want a margin that is too small, otherwise you exit too early. Conversely, too large a margin may result in a large drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stock at the stop-loss price, and the shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms out? But how will you know when the bottom has been reached?

Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts

I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock.

First you need to have data. YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the datapoints feel effortless.

The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):

python
CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
 dataframe = pandas.read_parquet(cache_file)
 print(f"Loaded price data from cache: {cache_file}")
else:
 dataframe = yfinance.download(
 TICKER,
 start=START_DATE,
 end=end_date,
 progress=False,
 auto_adjust=False
 )

 dataframe.to_parquet(cache_file)
 print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")

The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:

python
print("First 5 rows of the raw data:")
print(dataframe.head())
print("Last 5 rows of the raw data:")
print(dataframe.tail())

Example output:

First 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
Last 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818

Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the relative strength index (RSI). To add a new column "RSI" with a value for every row based on the price from that row, only one line of code is needed, with no custom loops:

python
dataframe["RSI"] = compute_rsi(dataframe["price"], period=14)
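
The compute_rsi function itself is not shown in the post. For completeness, a minimal sketch of one common variant (Wilder's smoothing) in pure Pandas could look like the following; the author's exact formula is not known, so treat this as an illustration:

```python
import pandas

def compute_rsi(prices: pandas.Series, period: int = 14) -> pandas.Series:
    """Relative Strength Index using Wilder's exponential smoothing."""
    delta = prices.diff()
    gains = delta.clip(lower=0)    # upward moves only
    losses = -delta.clip(upper=0)  # downward moves, as positive numbers
    # Wilder's smoothing is an exponential moving average with alpha = 1/period
    avg_gain = gains.ewm(alpha=1 / period, min_periods=period).mean()
    avg_loss = losses.ewm(alpha=1 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

A steadily rising series pushes the RSI toward 100 and a steadily falling one toward 0, which is a quick way to sanity-check any implementation.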

After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:

python
baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]

baseline_json = json.dumps(baseline_series)

# Note: jinja2.Template() takes the template source as a string, not a filename
with open("template.html", encoding="utf-8") as f:
    template = jinja2.Template(f.read())

rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    ...
    baseline_json=baseline_json,
    ...
)

with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")

In the HTML template the marker {{ variable }} in Jinja syntax gets replaced with the actual JSON:

html
<!DOCTYPE html>
<html lang="en">
<head>
 <meta charset="UTF-8">
 <title>{{ title }}</title>
 ...
</head>
<body>
 <h1>{{ heading }}</h1>
 <div id="chart"></div>
 <script>
  // Ensure the DOM is ready before we initialise the chart
  document.addEventListener('DOMContentLoaded', () => {
    // Parse the JSON data passed from Python
    const baselineData = {{ baseline_json | safe }}
    const strategyData = {{ strategy_json | safe }}
    const markersData = {{ markers_json | safe }}

    // Create the chart - use a unique variable name to avoid any clash with the DOM element ID
    const chart = LightweightCharts.createChart(document.getElementById('chart'), {
      width: document.getElementById('chart').clientWidth,
      height: 500,
      layout: {
        background: { color: "#222" },
        textColor: "#ccc"
      },
      grid: {
        vertLines: { color: "#555" },
        horzLines: { color: "#555" }
      }
    })

    // Add the baseline series
    const baselineSeries = chart.addLineSeries({
      title: '{{ baseline_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      priceLineWidth: 1
    })
    baselineSeries.setData(baselineData)

    baselineSeries.priceScale().applyOptions({
      entireTextOnly: true
    })

    // Add the strategy series
    const strategySeries = chart.addLineSeries({
      title: '{{ strategy_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      color: '#FF6D00'
    })
    strategySeries.setData(strategyData)

    // Add buy/sell markers to the strategy series
    strategySeries.setMarkers(markersData)

    // Fit the chart to show the full data range (full zoom out)
    chart.timeScale().fitContent()
  })
 </script>
</body>
</html>

There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have more features and complexity than I needed for this simple test.

The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline "Buy and hold" scenario is shown as the blue line, which fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way.

Backtest run example

Results

I experimented with multiple strategies and tested them with various parameters, but I don't think I found a strategy that was consistently and clearly better than just buy-and-hold.

It basically boils down to the fact that I was not able to find any way to calculate, based on historical data, when the crash has bottomed. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started, but bought back after it recovered, at a slightly higher price.

In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy kept repeating the pattern of selling 100 shares at a 6% discount, being able to buy back only 94 shares the next day, then selling those 94 shares at a 6% discount and being able to buy back maybe 90, and so forth, never catching up to buy-and-hold.
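
The compounding effect described above is easy to verify with a toy calculation. This sketch assumes each dip is exactly 6% and the price fully recovers to the old peak (normalized to 1.0 per share) before the next dip:

```python
shares = 100.0
for dip in range(3):        # three shallow 6% dips in a row
    cash = shares * 0.94    # stopped out 6% below the peak price of 1.0
    shares = cash / 1.0     # buy back once the price has recovered to the peak
    print(f"after dip {dip + 1}: {shares:.1f} shares")
# Buy-and-hold still owns 100 shares; the whipsawed position holds about 83
```

Each round trip multiplies the position by 0.94, so the shortfall versus buy-and-hold grows geometrically with every false alarm.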

The strategy worked better in large market crashes, as they tended to last longer and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash, selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks, so the strategy bought back the stock while the price was still low and had not yet started to climb significantly. But that was just a lucky incident, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stock at a slightly higher price.

Also, note that the simulation assumes that the trade itself is too small to affect the price formation. We should keep in mind that in reality, if a lot of people have stop-loss orders in place, a large price drop would trigger all of them, and create a flood of sales orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don't need to fear that too many people would be using them.

Conclusion

Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of "insurance policy" that considerably decreases the chance of losing big while increasing the chance of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I'd rather skip both an initial 50% loss and the overall 3% gain on recovery than have to sit through weeks or months with a 50% loss before the price recovers to prior levels.

Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses compound into big losses overall.

Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently.

Do you have a favorite open source investment tool or are you aware of any strategy that actually works? Comment below!

19 Dec 2025 12:00am GMT

18 Dec 2025

feedPlanet Debian

Dirk Eddelbuettel: dang 0.0.17: New Features, Plus Maintenance

dang image

A new release of dang, my mixed collection of things package, arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home: for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about as well, and the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.

This release retires two functions: the social media site nobody ever visits anymore shut down its API, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability of Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and a little helper to re-order microbenchmark results by a summary column (defaulting to the median). Beyond that, there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update.

The detailed NEWS entry follows.

Changes in version 0.0.17 (2025-12-18)

  • Added new function reorderMicrobenchmarkResults with alias rmr

  • Use tolower on email argument to checkCRANStatus

  • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik

  • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available

  • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor

  • Removed googleFinanceData as the (unofficial) API access point no longer works

  • Removed muteTweeters because the API was turned off

Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

18 Dec 2025 9:14pm GMT

feedPlanet Lisp

Eugene Zaikonnikov: Lisp job opening in Bergen, Norway

As a heads-up, my employer now has an opening for a Lisp programmer in the Bergen area. Due to the hands-on nature of developing the distributed hardware product, the position is 100% on-prem.

18 Dec 2025 12:00am GMT

11 Dec 2025

feedPlanet Lisp

Scott L. Burson: FSet v2.1.0 released: Seq improvements

I have just released FSet v2.1.0 (also on GitHub).

This release is mostly to add some performance and functionality improvements for seqs. Briefly:

See the above links for the full release notes.

UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.

11 Dec 2025 4:01am GMT

09 Dec 2025

feedFOSDEM 2026

/dev/random and lightning talks

The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15 minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5 minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…

09 Dec 2025 11:00pm GMT

04 Dec 2025

feedPlanet Lisp

Tim Bradshaw: Literals and constants in Common Lisp

Or, constantp is not enough.

Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker1. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.

One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.

(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.

In order to do this it needs to know two things: that the :element-type argument is a compile-time constant, and what its value is.

You might say, well, that's what constantp is for2. It's not: constantp tells you only the first of these, and you need both.

Consider this code, in a file to be compiled:

(defconstant et 'fixnum)

(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)

Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.

constantp is not enough3! instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does4: it conservatively answers the question, and tells you the value if so. In particular, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.

That is enough in practice.


  1. Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls.

  2. And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife"

  3. Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables, defconstant is required to tell constantp that a variable is a constant at compile-time, but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be.

  4. Part of the org.tfeb.star/utilities package.

04 Dec 2025 4:23pm GMT

15 Nov 2025

feedFOSDEM 2026

FOSDEM 2026 Accepted Stands

With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibarr ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…

15 Nov 2025 11:00pm GMT

13 Nov 2025

feedFOSDEM 2026

FOSDEM 2026 Main Track Deadline Reminder

Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submissions, look at the original call for participation.

13 Nov 2025 11:00pm GMT