21 Jan 2026
Drupal.org aggregator
UI Suite Initiative website: Announcement - Display Builder beta 1 has been released
On Monday, January 12, 2026, the UI Suite team proudly released the first beta of the Display Builder module, a display building tool for ambitious site builders.
A single display building tool deeply integrated with Drupal
Its powerful unified UI can be used instead of Layout Builder for entity view displays, instead of Block Layout for page displays, and as a replacement for Views' display building feature. No more struggling with overcomplicated, inconsistent UIs!
21 Jan 2026 12:49pm GMT
20 Jan 2026
Drupal.org aggregator
Dries Buytaert: Software as clay on the wheel

A few weeks ago, Simon Willison started a coding agent, went to decorate a Christmas tree with his family, watched a movie, and came back to a working HTML5 parser.
That sounds like a party trick. It isn't.
It worked because the result was easy to check. The parser tests either pass or they don't. The type checker either accepts the code or it doesn't. In that kind of environment, the work can keep moving without much supervision.
Geoffrey Huntley's Ralph Wiggum loop is probably the cleanest expression of this idea I've seen, and it's quickly gaining popularity. In his demonstration video, he describes creating specifications through conversation with an AI agent, then letting the loop run. Each iteration starts fresh: the agent reads the specification, picks the most important remaining task, implements it, and runs the tests. If they pass, it commits and exits. The next iteration begins with empty context, reads the current state from disk, and picks up where the previous run left off.
If you think about it, that's what human prompting already looks like: prompt, wait, review, prompt again. You're shaping the code or text the way a potter shapes clay: push a little, spin the wheel, look, push again. The Ralph loop just automates the spinning, which makes much more ambitious tasks practical.
The difference is how state is handled. When you work this way by hand, the whole conversation comes along for the ride. In the Ralph loop, it doesn't. Each iteration starts clean.
Why? Because carrying everything with you all the time is a great way to stop getting anywhere. If you're going to work on a problem for hundreds of iterations, things start to pile up. As tokens accumulate, the signal can get lost in noise. By flushing context between iterations and storing state in files, each run can start clean.
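The whole pattern fits in a few lines. Here is a minimal sketch in Python; the run_agent command and the DONE sentinel are hypothetical stand-ins for whatever agent CLI and completion signal you use, not Huntley's actual tooling:

```python
from pathlib import Path
import subprocess

SPEC = Path("SPEC.md")  # the specification, written up front and kept on disk
DONE = Path("DONE")     # sentinel file the agent creates when the spec is complete

PROMPT = (
    f"Read {SPEC}. Pick the most important remaining task, implement it, "
    f"and run the tests. If they pass, commit and create a file named {DONE}. "
    "Then exit."
)

while not DONE.exists():
    # Fresh process every iteration: empty context, all state read from disk
    # (the spec, the code, and git history).
    subprocess.run(["run_agent", PROMPT])  # 'run_agent' is a hypothetical agent CLI
```

The loop itself carries no memory; the repository is the memory.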
Simon Willison's port of an HTML5 parsing library from Python to JavaScript showed the principle at larger scale. Using GPT-5.2 through Codex CLI with the --yolo flag for uninterrupted execution, he gave a handful of directional prompts: API design, milestones, CI setup. Then he let it run while he decorated a Christmas tree with his family and watched a movie.
Four and a half hours later, the agent had produced a working HTML5 parser. It passed over 9,200 tests from the html5lib-tests suite. HTML5 parsing is notoriously complex. The specification precisely defines how even malformed markup should be handled, with thousands of edge cases accumulated over years. But the agent had constant grounding: each test run pulled it back to reality before errors could compound.
As Willison put it: "If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed". Ralph loops and Willison's approach differ in structure, but both depend on tests as the source of truth.
Cursor's research on scaling agents confirms this is starting to work at enterprise scale. Their team explored what happens when hundreds of agents work concurrently on a single codebase for weeks. In one experiment, they built a web browser from scratch. Over a million lines of code across a thousand files, generated in a week. And the browser worked.
That doesn't mean it's secure, fast, or something you'd ship. It means it met the criteria they gave it. If you add security or performance to the criteria, it will work toward those as well. But the pattern is the same: clear tests, constant verification, agents that know when they're done.
From solo loops to hundreds of agents running in parallel, the same pattern keeps emerging. It feels like something fundamental is crystallizing: autonomous AI is starting to work well when you can accurately define success upfront.
Willison's success criteria were "simple": all 9,200 tests pass. That is a lot of tests, but the agent got there. Clear criteria made autonomy possible.
As I argued in AI flattens interfaces and deepens foundations, this changes where humans add value:
Humans are moving to where they set direction at the start and refine results at the end. AI handles everything in between.
The title of this post comes from Geoffrey Huntley. He describes software as clay on the pottery wheel, and once you've worked this way, it's hard to think about it any other way. As Huntley wrote: "If something isn't right, you throw it back on the wheel and keep going". That's exactly how it feels. Throw it back, refine it, spin again until it's right.
Of course, the Ralph Wiggum loop has limits. It works well when verification is unambiguous. A unit test returns pass or fail. But not all problems come with clear tests. And writing tests can be a lot of work.
For example, I've been thinking about how such loops could work for Drupal, where non-technical users build pages. "Make this page more on-brand" isn't a test you can run.
Or maybe it is? An AI agent could evaluate a page against brand guidelines and return pass or fail. It could check reading level and even do some basic accessibility tests. The verifier doesn't have to be a traditional test suite. It just has to provide clear feedback.
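To make that concrete, here is a minimal sketch of what such a verifier could look like. This is an illustrative assumption, not an existing Drupal API; any "prompt in, text out" LLM client would do:

```python
from typing import Callable

def brand_verifier(
    page_html: str,
    brand_guidelines: str,
    ask_llm: Callable[[str], str],  # any "prompt in, text out" LLM client
) -> bool:
    """Pass/fail brand check: grade a page against written guidelines,
    the way a unit test grades code."""
    prompt = (
        "You are a strict brand reviewer.\n\n"
        f"Guidelines:\n{brand_guidelines}\n\n"
        f"Page:\n{page_html}\n\n"
        "Does the page follow every guideline? "
        "Answer PASS or FAIL on the first line, then list any violations."
    )
    verdict = ask_llm(prompt)
    return verdict.strip().upper().startswith("PASS")
```

A build loop could then regenerate the page until the verifier returns True, with reading-level and accessibility checks layered on as additional pass/fail gates.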
All of this just exposes something we already intuitively understand: defining success is hard. Really hard. When people build pages manually, they often iterate until it "feels right". They know what they want when they see it, but can't always articulate it upfront. Or they hire experts who carry that judgment from years of experience. This is the part of the work that's hardest to automate. The craft is moving upstream, from implementation to specification and validation.
The question for any task is becoming: can you tell, reliably, whether the result is getting better or worse? Where you can, the loop takes over. Where you can't, your judgment still matters.
The boundary keeps moving fast. A year ago, I was wrestling with local LLMs to generate good alt-text for images. Today, AI agents build working HTML5 parsers while you watch a movie. It's hard not to find that a little absurd. And hard not to be excited.
20 Jan 2026 7:39pm GMT
Drupal.org aggregator
Drupal AI Initiative: The Future of AI-Powered Web Creation Is People-First, Not Prompt-First
Aidan Foster - Strategy Lead, Foster Interactive
The Big Shift in Web Creation
For years, the hardest part of building a website was technical execution. Slow development cycles, code barriers, and long timelines created bottlenecks.
AI has changed this.
Execution is no longer the limiting factor. Understanding is. The new challenge is knowing your audience, clarifying your message, and structuring the story your website needs to tell.
The future is not prompt-first. It is people-first: strategy, insight, empathy, and structure.
This was the core message of my talk AI Page Building with Drupal Canvas. It is also why Foster Interactive joined the Drupal AI Makers initiative.
But none of this works unless the human layer comes first.
Why People-First AI Matters and Why AI Slop Happens

AI is a powerful assistant, but it cannot replace human judgment.
Large language models can synthesize patterns, but they cannot invent your strategy.
When teams skip the foundational work such as audience research, messaging clarity, and brand systems, AI produces generic output that feels shallow and off-brand.
This is what we call AI slop. The issue is not the model. The issue is unclear inputs.
AI can only accelerate the parts you already understand. The human layer must come first. Audience insight. Value propositions. Tone and language rules. Page-level content strategy.
Without this structure, every output becomes guesswork.
The New Tools: Canvas and the Context Control Center
Drupal's new AI features are powerful because they finally support how marketers work.
Canvas: The visual editor built for marketers
Canvas allows anyone to build pages using drag and drop.
It offers instant previews, mobile and desktop views, simple undo and redo, and AI built directly into the editor.
You can ask Canvas to assemble a campaign landing page and it uses your brand components, design system, content rules, and tone to create useful starting points.
This is the most marketer-friendly Drupal experience ever made.
The Context Control Center: The AI knowledge base
This is where strategy becomes usable by AI. It allows teams to load audience personas, value propositions, tone guides, brand rules, page templates, messaging frameworks, and content strategy documents.
With this context available, the AI produces work that is aligned, accurate, and consistent.
Instead of guessing, it draws from your organization's strategic foundation.
For the first time, brand and audience knowledge can be reused across the entire website.
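Conceptually, the pattern is straightforward: strategy documents become context that travels with every AI request. The sketch below is purely illustrative (the file names and helper are assumptions, not the module's actual API):

```python
from pathlib import Path

# Illustrative file names: the strategy documents a team might load
# into the knowledge base.
CONTEXT_DOCS = [
    Path("personas.md"),
    Path("value_propositions.md"),
    Path("tone_guide.md"),
    Path("brand_rules.md"),
]

def build_grounded_prompt(task: str) -> str:
    """Prepend the organization's strategic foundation to a page-building
    task, so the model draws on it instead of guessing."""
    context = "\n\n".join(doc.read_text() for doc in CONTEXT_DOCS)
    return f"{context}\n\nTask: {task}"
```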
The Demo: How We Built the FinDrop Landing Page

To demonstrate what is possible, we built a fictional SaaS company called FinDrop.
We created product stories, value props, audience personas, PPC ads, content strategy, and a visual system that matched the Mercury design system.
We generated all of this using strategy first, then AI. We crafted brand rules, used Nano Banana for consistent imagery, built campaign assets, and generated full landing pages for three stages of a funnel.
AI gave us speed, but only because the human structure was already in place. Without strategy the output collapsed. With structure it accelerated.
The Real Lesson: AI Only Works When the Inputs Are Strong

The FinDrop demo made something clear. AI did not save time because it is smart. It saved time because the rules were defined. Your success depends on the strength of your foundations.
Clear value propositions. Real audience insight. A defined tone. Predictable page patterns. Brand rules the AI can follow. Without this, AI slows teams down.
At Foster Interactive we are testing the best models for Drupal workflows, refining content strategy structures for the Context Control Center, creating systems to make AI-ready brands easier to build, and bringing the marketer's perspective into the AI Makers roadmap.
Our goal is simple. Make AI genuinely useful for small marketing teams without sacrificing accuracy or authenticity.
What Is Coming Next for Drupal AI and Why It Matters
Drupal CMS 2 is coming in early 2026. It will include deeper Canvas integration, more intuitive site templates, a lighter AI suite, reusable design systems, expanded knowledge base support, and better tools for auditing and maintaining content.
But the biggest change is this: installing the tools will be easy, and it will be obvious who has done the strategic work. Teams relying solely on AI will blend into the noise.
Teams grounded in human insight will stand out.
Now Is the Time to Unlearn What Is Possible
A few months ago, I did not believe a CMS could generate usable landing pages in minutes or create consistent AI imagery. Then we built FinDrop.
The tools have changed. The pace has changed.
Human insight cannot be outsourced to AI.
We want our AI tools to take care of boring, repetitive jobs to free up our time for creative and strategic work.
The role of marketers is shifting away from production bottlenecks and toward clarity, empathy, positioning, narrative, and audience understanding.
AI can accelerate execution and remove repetitive tasks. But it cannot replace the strategy behind them.
If we get the human foundations right, we create a future where imagination becomes the bottleneck, not time.
That's the future I want to live in.
Ready to put people-first AI into practice?
Start with your foundations. Sit down with your team and audit your brand guidelines. Talk to front-line support and sales - the people closest to your customers. Update your tone, messaging, and audience details. This is the work that makes AI useful.
Then try Canvas. Once your foundations are solid, test what's possible with the upcoming Drupal CMS 2.0 demo at drupalforge.org. (Or, if you're a little more technical, test the Driesnote Demo, which is available right now.)
20 Jan 2026 6:05pm GMT