17 Dec 2025

Drupal.org aggregator

Freelock Blog: What Went Wrong? Error Identification and Helpful Suggestions

Day 17 - Error Identification and Suggestions


You're checking out on an e-commerce site. You click Submit, and the page reloads with an error message at the top: "There were errors in your submission." That's it. No indication of which fields have problems. No explanation of what's wrong. You start hunting through the form, checking each field, trying to figure out what went wrong.

This frustrating experience is unfortunately common, especially on e-commerce sites, membership portals, and complex forms. But it's also completely avoidable - and fixing it makes your site more accessible and more usable for everyone.


17 Dec 2025 4:00pm GMT

drunomics: Lupus Decoupled 1.4: Component Previews, Canvas-Ready, and a Better JSON API Enabling Native Vue Slots


Lupus Decoupled 1.4 introduces Component Previews, Canvas-ready features, and an improved JSON API with native Vue slot support - enhancing developer flexibility and front-end integration.

17 Dec 2025 12:35pm GMT

Drupal blog: Drupal 11.3.0 is now available

The third feature release of Drupal 11 is here with the biggest performance boost in a decade. Serve 26-33% more requests with the same database load. New native HTMX support enables rich UX with up to 71% less JavaScript. Plus, enjoy the new stable Navigation module, improved CKEditor content editing, native content export, and cleaner OOP hooks for themes.

New in Drupal 11.3

Biggest performance boost in a decade

Database query and cache operations on both cold and warm caches have been significantly reduced. Our automated tests show a reduction of about one third on cold cache requests and up to one fourth on partially-warm cache requests. Independent testing shows even bigger improvements on complex sites.

The render and caching layers now combine database and cache operations, notably in path alias and entity loading. BigPipe also now uses HTMX on the frontend, leading to a significant reduction in JavaScript weight.

Read more about performance improvements in Drupal 11.3.0.

Native HTMX: Rich UX with up to 71% less JavaScript

Drupal 11.3.0 now natively integrates HTMX, a powerful, dependency-free JavaScript library. HTMX dramatically enhances how developers build fast, interactive user interfaces. It enables modern browser features directly in HTML attributes, significantly reducing the need for extensive custom JavaScript.

Read more about HTMX support in Drupal 11.3.0.

Navigation module is now stable

The Navigation module is now stable, offering a superior and more modern experience than the old Toolbar. It is worth installing on all sites, but it is most useful for sites with complex administration structures. While it is not yet the default, we strongly encourage users to switch and benefit from its improvements.

Improved content editing

CKEditor now natively supports linking to content on the site by selecting it from an autocomplete or dropdown (using entity references). CKEditor also has new, user-friendly options for formatting list bullets and numbering. Finally, a dedicated "Administer node published status" permission is introduced to manage the publication status of content (this no longer requires the broader "Administer nodes" permission).

Object-oriented hooks in themes

Themes can now use the same #[Hook()] attribute system as modules, with theme namespaces registered in the container for easier integration. This change allows themers to write cleaner, more structured code. Themes' OOP hook implementations are placed in the src/Hook/ directory, similarly to modules'. Themes support a defined subset of both normal and alter hooks.
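As a minimal sketch, an OOP hook implementation in a hypothetical theme named mytheme could look like this (the hook and class name are illustrative, not from the release notes):

<?php

declare(strict_types=1);

namespace Drupal\mytheme\Hook;

use Drupal\Core\Hook\Attribute\Hook;

/**
 * Hook implementations for the mytheme theme, placed in src/Hook/.
 */
final class MythemeHooks {

  /**
   * Implements hook_preprocess_HOOK() for node templates.
   */
  #[Hook('preprocess_node')]
  public function preprocessNode(array &$variables): void {
    // Add a theme-specific class that node templates can render.
    $variables['attributes']['class'][] = 'mytheme-node';
  }

}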

Native support for content export

Drupal core now includes a command-line tool to export content in the format previously introduced by the contributed Default Content module. Drupal exports a single entity at a time, but it is also possible to automatically export the entity's dependencies (for example, images or taxonomy terms it references). To use the export tool, run the following from the Drupal site's root:

php core/scripts/drupal content:export ENTITY_TYPE_ID ENTITY_ID
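For example, exporting a node whose ID is 12 (the ID here is hypothetical) would look like:

php core/scripts/drupal content:export node 12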

New experimental database driver for MySQL/MariaDB for parallel queries

A new, experimental MySQLi database driver has been added for MySQL and MariaDB. It is not yet fully supported and is hidden from the user interface.

While the current default drivers use PDO to connect to MySQL or MariaDB, this new database driver instead uses the mysqli PHP extension. MySQLi is more modern and allows database queries to be run in parallel instead of sequentially as with PDO. We plan to add asynchronous database query support in a future Drupal release.

Core maintainer team updates

Since Drupal 11.2, we reached out to all subsystem and topic maintainers to confirm whether they wished to continue in their roles. Several long-term contributors stepped back and opened up roles for new contributors. We would like to thank them for their contributions.

Additionally, Roy Scholten stepped back from his Usability maintainership and Drupal core product manager role. He has been inactive for a while, but his impact on Drupal since 2007 has been profound. We thank him for his involvement!

Mohit Aghera joined as a maintainer for the File subsystem. Shawn Duncan is a new maintainer for the Ajax subsystem. David Cameron was added as a maintainer of the Link Field module. Pierre Dureau and Florent Torregrosa are now the maintainers for the Asset Library API. Finally, codebymikey is the new maintainer for Basic Auth.

Going forward, we plan to review core maintainer appointments annually. We hope this will reduce the burden on maintainers when transitioning between roles or stepping down, and also provide more opportunities for new contributors.

Want to get involved?

If you are looking to make the leap from Drupal user to Drupal contributor, or you want to share resources with your team as part of their professional development, there are many opportunities to deepen your Drupal skill set and give back to the community. Check out the Drupal contributor guide.

You would be more than welcome to join us at DrupalCon Chicago in March 2026 to attend sessions, network, and enjoy mentorship for your first contributions.

The Core Leadership Team is always looking for new contributors to help steward the project, and various new opportunities have recently opened up. If you are looking to deepen your Drupal skill set, we encourage you to read more about the open subsystem and topic maintainer roles and consider stepping up to contribute your expertise.

Drupal 10.6 is also available

The next maintenance minor release of Drupal 10 has also been released, and will be supported until December 9, 2026, after the release of Drupal 12. Long-term support for Drupal 10 gives more flexibility for sites to move to Drupal 11 when they are ready while staying up-to-date with Drupal's dependencies.

This release schedule also allows sites to move from one long-term support version to the next if that is the best strategy for their needs. For more information on maintenance minors, read the previous post on the new major release schedule.

17 Dec 2025 12:04pm GMT

Drupal Core News: Drupal 11.3.0: Biggest performance boost in a decade

Drupal 11.3 includes a number of significant performance improvements, altogether making it the most significant step forward for Drupal performance in the last 10 years (since the Drupal 8.0.0 release).

These improvements have been driven by enhancements to Drupal's render and caching layers in 11.2.x, notably taking advantage of Fibers, a PHP feature added in PHP 8.1. By rendering more parts of the page in placeholders, we have enabled similar database and cache operations, which used to occur individually, to be combined, with particular improvements in path alias and entity loading. We have also learned from Drupal's automated performance testing framework, which allowed us to identify and execute several optimizations in Drupal's hook and field discovery processes, significantly reducing database and cache I/O and memory usage on cold caches.

On the front end we have converted Drupal's BigPipe implementation to use HTMX, reducing JavaScript weight significantly. We also intercept placeholders with warm render caches prior to BigPipe replacement, so that BigPipe's JavaScript is not loaded at all on requests that will be served quickly without it, allowing BigPipe to be used more widely for the requests that do need it. These changes may also allow us to enable BigPipe for anonymous site visitors in a future release.

Combined, these changes reduce database query and cache operations on cold cache requests by around one third, with smaller but still significant improvements when caches become warmer, up to and including dynamic and internal page cache hits.

Drupal's automated performance tests show many of these improvements, and will ensure that we continue to build on and maintain these gains over time.

Drupal Umami demo's anonymous cold cache front page request

Let's look at an example. These are the changes in Drupal core's included Umami demo's anonymous cold cache front page request performance test between 11.2.x and 11.3.x.

                                           11.2.0   11.3.0   Reduction
SQL Query Count                               381      263       31%
Cache Get Count                               471      316       33%
Cache Set Count                               467      315       33%
CacheTag Lookup Query Count                    49       27       45%
Estimated ms (assuming 1ms per operation)    1368      921       33%

Particularly notable is the benefit for requests to pages with partially warmed caches, where site-wide caches are full, but page-specific caches are invalid or empty. In core's performance test for this scenario, we saw an almost 50% reduction in database queries. Requests like this make up a large percentage of the slowest responses from real Drupal sites, and core now provides a much lower baseline to work against. Medium to large sites often hit constraints with the database first, because it is harder to scale than the web tier, where you can simply add more web servers, and these improvements reduce database load when it is at its most constrained.

Drupal Umami demo's anonymous node page with partially-warmed cache

                                           11.2.0   11.3.0   Reduction
SQL Query Count                               171       91       47%
Cache Get Count                               202      168       17%
Cache Set Count                                41       42       -2%
CacheTag Lookup Query Count                    22       22        0%
Estimated ms (assuming 1ms per operation)     436      323       26%

While different sites, and even different pages on the same site, will show different results, we would expect all Drupal sites to see a significant reduction in database and cache I/O per request once they've updated to Drupal 11.3.

Independent testing and further improvements with Paragraphs

Independent testing by MD Systems on their internal Primer Drupal distribution shows even better improvements with Drupal 11.3, especially for complex pages. This is also thanks to further improvements enabled and inspired by Drupal 11.3 in the Entity Reference Revisions module which resulted in considerable performance improvements for Paragraphs. Their results show a dramatic reduction in database and cache operations across different cache states. Their cold cache total query count dropped by 62% (from 1097 to 420), and total cache lookups decreased by 47% (from 991 to 522) compared to Drupal 11.2. At the same time their partially-warm cache total query count dropped by 61% (from 696 to 274) and total cache lookups decreased by 34% (from 562 to 373).

Even more technical details

For further details on how these improvements happened, check out some of the core issues that introduced them, or watch Nathaniel Catchpole's DrupalCon Vienna presentation.

#1237636: Lazy load multiple entities at a time using fibers
#2620980: Add static and persistent caching to ContentEntityStorageBase::loadRevision()
#3496369: Multiple load path aliases without the preload cache
#3537863: Optimize field module's hook_entity_bundle_info() implementation
#3538006: Optimize EntityFieldManager::buildBundleFieldDefinitions()
#3526080: Reduce write contention to the fast and consistent backend in ChainedFastBackend
#3493911: Add a CachedPlaceholderStrategy to optimize render cache hits and reduce layout shift from big pipe
#3526267: Remove core/drupal.ajax dependency from big_pipe/big_pipe
#3506930: Separate hooks from events
#3505248: Ability to preload frequently used cache tags (11.2.x)

There is more to do for Drupal 11.4!

If you'd like to help make 11.4 even faster, check out core issues tagged with 'performance' and help us get them done. We have multiple issues in progress that didn't quite make it into 11.3.0 but could form the basis of another significant set of improvements in 11.4.0.

17 Dec 2025 11:57am GMT

Drupal Core News: Native HTMX in Drupal 11.3.0: Rich UX with up to 71% less JavaScript

Drupal developers have long faced a dilemma between building classic multi-page applications and headless solutions with a modern JavaScript stack, especially when they need to build UIs that feel fast and are highly reactive. While there were some Drupal-specific solutions for parts of this need (Form State API, AJAX API and BigPipe), these were dated, solved only very specific use cases, and were comparatively heavy in implementation.

HTMX is a tiny, dependency-free, and extensible JavaScript library that allows you to access modern browser features directly from HTML, rather than using extensive amounts of JavaScript. It essentially enables you to use HTML to make AJAX requests, CSS transitions, WebSockets, and Server-Sent Events (SSE) directly.

As a replacement for Drupal's mostly home-grown solutions, this reduced the loaded JavaScript size by up to 71% for browser-server interactions, including HTML streaming with BigPipe, while enabling a whole set of new functionality at the same time.

HTMX was originally created by Carson Gross. The motivation was to provide a modern, yet simple, way to build dynamic user interfaces by leveraging the existing capabilities of HTML and the server-side architecture, effectively offering an alternative to the complexity of heavy, client-side JavaScript frameworks. By sending less data (HTML fragments instead of large JSON payloads and complex client-side rendering logic), HTMX often results in faster perceived performance and less bandwidth consumption. It is being adopted by developers across diverse ecosystems.

Principles of HTMX

HTMX operates on a few core, simple principles, all expressed via HTML attributes. While pure HTMX does not require the data- prefix, Drupal uses it to achieve valid HTML. That is how you'll see it used in Drupal, so we'll use that notation in this post.

  1. Any Element Can Make a Request: Unlike standard HTML forms and anchors, HTMX allows any element (a <div>, a <span>, a <button>) to trigger an HTTP request. All five HTTP verbs are available.
    • Attributes: data-hx-get, data-hx-post, data-hx-put, data-hx-delete, data-hx-patch.
  2. Any Event Can Trigger a Request: You are not limited to click (for anchors/buttons) or submit (for forms). Requests can be triggered by any JavaScript event, such as mouseover, keyup, or a custom event.
    • Attribute: data-hx-trigger.
  3. Any Target Can Be Updated: By default, HTMX replaces the inner HTML of the element that triggered the request. However, you can use a CSS selector to specify any element on the page to be updated with the response HTML.
    • Attribute: data-hx-target.
  4. Any Transition Can Be Used: HTMX allows you to define how the new content is swapped into the target element (e.g., replace, prepend, append, outerHTML) and works with the new View Transition API.
    • Attribute: data-hx-swap.

Short Code Example

This Drupal-independent example demonstrates how to fetch and swap new content into a div when a button is clicked, without writing any custom JavaScript.

<!-- The button that triggers the request -->
<button data-hx-get="/clicked" 
        data-hx-target="#content-area" 
        data-hx-swap="outerHTML">
  Load New Content
</button>

<!-- The area that will be updated -->
<div id="content-area">
  This content will be replaced.
</div>

In this code example, when the button is clicked:

  1. A GET request is made to the server at the URL: /clicked
  2. The server responds with a fragment of HTML (e.g., <div>New Content Loaded!</div>)
  3. The #content-area element is replaced by the response (an outerHTML swap).

Introducing HTMX in Drupal 11.3.0

HTMX was added as a dependency to Drupal core in 11.2, and is now fully featured in 11.3. A new factory class is provided for developers building or extending render arrays. The Htmx class provides methods that build every HTMX attribute or response header, thereby documenting and exposing the HTMX API to Drupal.

Drupal 11.3 also extends the FormBuilder class to support dynamic forms built with HTMX. When a form is rebuilt from an HTMX request, all the form values will be available to the form class for dynamically restructuring the form. Here's an example of both features:

public function buildForm(array $form, FormStateInterface $form_state) {
  $make = ['Audi', 'Toyota', 'BMW'];
  $models = [
    ['A1', 'A4', 'A6'],
    ['Landcruiser', 'Tacoma', 'Yaris'],
    ['325i', '325ix', 'X5'],
  ];
  $form['make'] = [
    '#title' => 'Make',
    '#type' => 'select',
    '#options' => $make,
  ];
  $form['model'] = [
    '#title' => 'Models',
    '#type' => 'select',
    '#options' => $models[$form_state->getValue('make', 0)] ?? [],
    // We'll need that later.
    '#wrapper_attributes' => ['id' => 'models-wrapper'],
  ];
  return $form;
}

(new Htmx())
  // An empty method call uses the current URL.
  ->post()
  // We select the wrapper around the select.
  ->target('#models-wrapper')
  // And replace the whole wrapper
  // not simply updating the options in place,
  // so that any errors also display.
  ->select('#models-wrapper')
  // We replace the whole element for this form.
  ->swap('outerHTML')
  ->applyTo($form['make']);

In this form, whenever the make selector is changed, the models selector will be updated.

Drupal 11.3 also adds a dedicated renderer and associated wrapper format that can be used to keep the response to an HTMX request as small as possible. This renderer only returns the main content and its CSS/JavaScript assets. There are two ways to take advantage of this renderer.

One is to add an attribute to the HTMX enhanced element, which will cause the wrapper format to be used:

(new Htmx())
  ->post()
  ->onlyMainContent()
  ->target('#models-wrapper')
  ->select('#models-wrapper')
  ->swap('outerHTML')
  ->applyTo($form['make']);

There is also a new route option that can be used when creating a route specifically to service HTMX requests. This route option will also be useful with the dynamic routes in Views as we refactor to use HTMX.

demo.route_option:
  path: '/htmx-demo/route-option'
  defaults:
    _title: 'Using _htmx_route option'
    _controller: '\Drupal\module_name\Controller\DemoController::replace'
  requirements:
    _permission: 'access content'
  options:
    _htmx_route: TRUE
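For illustration, a minimal sketch of the controller referenced by that route (the class and method names come from the route definition above; the returned markup is hypothetical and mirrors the earlier button example):

<?php

declare(strict_types=1);

namespace Drupal\module_name\Controller;

use Drupal\Core\Controller\ControllerBase;

/**
 * Serves HTML fragments for HTMX-driven requests.
 */
final class DemoController extends ControllerBase {

  /**
   * Returns replacement markup for the HTMX swap target.
   */
  public function replace(): array {
    // Because the route sets _htmx_route: TRUE, this render array is
    // delivered as a bare fragment: just the main content and its assets.
    return [
      '#type' => 'html_tag',
      '#tag' => 'div',
      '#attributes' => ['id' => 'content-area'],
      '#value' => $this->t('New content loaded!'),
    ];
  }

}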

Drupal is still committed to supporting decoupled architectures

HTMX is an excellent solution for progressive enhancement and dynamic front-ends. It is a powerful tool in the core toolkit, not a replacement for the flexibility offered by a fully decoupled backend. Drupal remains committed to supporting decoupled and headless architectures, especially where they are necessary: mobile applications, client-side state management, deep offline capabilities, and so on.

17 Dec 2025 11:50am GMT

16 Dec 2025

Drupal.org aggregator

Freelock Blog: Can Everyone Hear You? Captions and Sign Language

16 Dec 2025 4:00pm GMT

LakeDrops Drupal Consulting, Development and Hosting: ECA Use Case: Authentication

ECA Use Case: Authentication

Security Logo with a pointer

Jürgen Haas

This article explores how ECA (Event-Condition-Action) can handle common authentication workflows in Drupal, including access denied redirects, user registration forms, and post-login actions. It demonstrates how ECA models can replace multiple contributed modules while offering greater flexibility - such as role-based redirects after login, hiding unnecessary password fields during account creation, and automatically assigning roles based on email domains. The key benefits include fewer dependencies, easier customization, simpler upgrades, and self-documenting configuration. However, ECA still needs improvement in discoverability and usability to become accessible to all Drupal site builders.

16 Dec 2025 1:59pm GMT

joshics.in: Beyond the Request: Best Practices for Third-Party API Integration in Drupal


As businesses continue to innovate in the digital space, no website is an island. Whether it's pulling payment data from Stripe, syncing leads with Salesforce, or fetching live race results from a sports data provider, your Drupal site almost certainly needs to communicate with external services.

Drupal 10 and 11 are a powerful platform for these integrations, but simply connecting to an API isn't enough. Poorly built integrations can result in a fragile system, where a third-party service outage brings your entire site down.

At Joshi Consultancy Services, we've seen the difference between "it works" and "it scales." Here's how we ensure our API integrations are robust, reliable, and future-proof.

01

Embrace the Guzzle Client

Don't resort to raw cURL. Since Drupal 8, the Guzzle HTTP client has been bundled in the core. It's a robust, standards-compliant client that simplifies API interactions and offers better extensibility.

Why it matters: Guzzle allows us to standardize outgoing requests across your site. We can easily add middleware for tasks like logging, authentication, and debugging without redoing the connection logic for every API call. This leads to cleaner, maintainable code.
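As a minimal sketch, here is that pattern using Drupal's http_client service (the endpoint URL and module name are hypothetical, and static \Drupal:: calls are used for brevity where injected services would normally be preferred):

<?php

use GuzzleHttp\Exception\GuzzleException;

// Drupal's http_client service is a preconfigured Guzzle client.
$client = \Drupal::httpClient();

try {
  $response = $client->request('GET', 'https://api.example.com/v1/races', [
    'timeout' => 5,
    'headers' => ['Accept' => 'application/json'],
  ]);
  $data = json_decode((string) $response->getBody(), TRUE, 512, JSON_THROW_ON_ERROR);
}
catch (GuzzleException | \JsonException $e) {
  // Log the failure and fall back gracefully (see "Fail Gracefully" below).
  \Drupal::logger('my_module')->error('API request failed: @message', [
    '@message' => $e->getMessage(),
  ]);
  $data = NULL;
}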

02

Never Hardcode Credentials

It's tempting to paste your API keys directly into the code or configuration settings to get things up and running quickly. But this creates a serious security risk, exposing sensitive credentials in code repositories or database backups.

The Solution: We use the Key module to securely store API credentials outside the web root. The module references API keys from environment variables or secure locations, ensuring they remain hidden from unauthorized access.

03

Caching is Non-Negotiable

External APIs can be slow, and relying on them for every page load will degrade your site's performance. Moreover, many APIs impose rate limits (e.g., "1000 requests per hour"), making it crucial to minimize the number of calls.

Best Practice: Decouple the view from the request.

  • When we fetch data, we store it in Drupal's Cache API.
  • Subsequent page loads fetch the cached data, resulting in faster load times.
  • We set a "Time to Live" (TTL) for the cached data based on business needs.

Result: Your site stays fast, and you don't exceed API rate limits.
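A minimal sketch of that pattern with the Cache API (the cache ID, TTL, and fetch helper are hypothetical):

// Try the cache first; only call the external API on a miss.
$cid = 'my_module:race_results';

if ($cache = \Drupal::cache()->get($cid)) {
  $results = $cache->data;
}
else {
  // Hypothetical wrapper around the Guzzle request shown earlier.
  $results = my_module_fetch_race_results();
  // Keep the data for 15 minutes (the business-defined TTL).
  \Drupal::cache()->set($cid, $results, \Drupal::time()->getRequestTime() + 900);
}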

04

Fail Gracefully

What happens if the third-party API goes down? Does your site crash with a "500 Error" or a blank screen?

Defensive Coding: We wrap all API requests in try/catch blocks. If an external service times out or returns a 404, we handle it gracefully. The user might see old cached data or a friendly message like "Live data is temporarily unavailable" instead of a crash.

05

Use the Queue API

Certain tasks should not block the user experience. If an action takes longer than a couple of seconds, it shouldn't be performed while the user waits for the page to load.

Example: If a user submits a form and the data needs to be sent to multiple third-party services (CRM, ERP, marketing platform), don't make them wait for each one.

The Solution: We use Drupal's Queue API to handle time-consuming tasks in the background. The user's submission is saved immediately, while a background process (using Cron) picks up the task and sends the data to the external APIs without blocking the user's experience.
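A minimal sketch of that flow with the Queue API (the queue name, item shape, and worker are illustrative):

// On submission: enqueue the work instead of calling the APIs inline.
$submission_id = 123; // ID of the already-saved submission (hypothetical).
$queue = \Drupal::queue('my_module_crm_sync');
$queue->createItem([
  'submission_id' => $submission_id,
  'destinations' => ['crm', 'erp', 'marketing'],
]);

/**
 * Processes queued items on cron, outside the user's request.
 *
 * In a real module this class lives in src/Plugin/QueueWorker/.
 *
 * @QueueWorker(
 *   id = "my_module_crm_sync",
 *   title = @Translation("CRM sync"),
 *   cron = {"time" = 30}
 * )
 */
class CrmSync extends \Drupal\Core\Queue\QueueWorkerBase {

  public function processItem($data) {
    // Send $data to each external service here, one destination at a time.
  }

}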

Final Thoughts

API integration is straightforward, but resilient integration requires careful planning. By treating external APIs as unreliable services that need to be managed, cached, and secured, we ensure your Drupal site remains robust, even when things go wrong on the other side of the connection.

Are you struggling with slow API integrations or need to connect your Drupal site to complex third-party services? Let's discuss how to architect a solution that scales, ensuring both performance and reliability.


16 Dec 2025 12:50pm GMT

LostCarPark Drupal Blog: Advent Calendar day 16 – Drupal CMS now and beyond


Door 16 containing a Dreamchaser shuttle launching with the launch complex in the distance

Yesterday's door was looking back at the launch of Drupal CMS 1.0, and today we look forward to CMS 2.0, which currently has an Alpha release available.

It is expected to launch early in 2026, though I suspect it won't make it for Drupal's 25th birthday.

At DrupalCon Nara, Cristina Chumillas and Pamela Barone talked about developments since the first release, and how community and company-backed contributions have increased significantly, strengthening the ecosystem. CMS 2.0 builds on this momentum, prioritising usability, and enabling non-technical users to build sites in hours rather than…

16 Dec 2025 9:00am GMT

Specbee: Drupal Paragraphs or Layout Builder? When to use what

Should we use Paragraphs or Layout Builder in Drupal? This guide breaks down the strengths of each approach and helps you choose the right module for flexible, editor-friendly page building.

16 Dec 2025 6:31am GMT

15 Dec 2025

Drupal.org aggregator

Drupal.org blog: GitLab CI: Drupal's strategy to empower a whole ecosystem

In this post, we share our CI strategy for all Drupal contributed modules. We believe that other large open-source projects may want to adopt or learn from the way we implemented a solution to centrally-manage CI while still allowing per-project customization.

How Drupal contributed modules do CI today?

Let's give some more details about how did we get here.

The past

In summer 2023, only two and a half years ago, we enabled the usage of GitLab CI for the Drupal ecosystem, which includes all contrib modules and Drupal core. We announced and promoted it at DrupalCon Lille 2023.

This new system entirely replaced DrupalCI, the custom testing solution that Drupal core and contributed projects had used for nearly 10 years prior to enabling GitLab CI.

Core tests went from taking nearly 1h to taking 10 minutes. Adoption for contrib modules was as easy as adding a six-line file to their project.

The present

Core continued to evolve at its own pace, and the CI runs are now down to 5 minutes. They've been able to leverage concurrency, caching, and many other features available on GitLab CI.

Contrib modules also saw significant changes to improve their quality. Adoption was continuously growing, and the standard templates really took off, adding many new features.

As of today, we have more than 2000 contrib projects using GitLab CI.

Jobs

We offer, without writing a single line of code, the same type of tests and checks that core does.

These are: Composer lint, PHPCS, CSpell, PHPStan, ESLint, Stylelint, Nightwatch, PHPUnit, Test-only.


In addition to those, we also have: Upgrade status, Drupal CMS, GitLab Pages.

You can see that having things like "Upgrade status" or "Drupal CMS" compatibility checks is key for our contrib modules, and it's all available out of the box.

Also, the GitLab Pages job allows for modules to publish a full documentation site based on their markdown files. If the files are there, the documentation site will be published. An example of this is our own documentation site for the shared CI templates: https://project.pages.drupalcode.org/gitlab_templates.

Most of these jobs will offer artifacts that can be downloaded by maintainers to fix the issues reported.

Customizations

Most of the above jobs can be disabled, if they are not wanted, with only a few lines of code (setting a variable to 0).

We can also test multiple versions of Drupal, like the next or previous minors or majors, again with a few lines of code (setting a variable to 1).

We achieved this by extending base jobs that can be configured via variables, like this:

composer:
  extends: .composer-base
  variables:
    DRUPAL_CORE: $CORE_STABLE
    IGNORE_PROJECT_DRUPAL_CORE_VERSION: 1

composer (max PHP version):
  extends: .composer-base
  rules:
    - *opt-in-max-php-rule
    - *check-max-php-version-rule
    - when: always
  variables:
    PHP_VERSION: $CORE_PHP_MAX
    DRUPAL_CORE: $CORE_STABLE
    IGNORE_PROJECT_DRUPAL_CORE_VERSION: 1

composer (previous minor):
  extends: .composer-base
  rules:
    - *opt-in-previous-minor-rule
    - when: always
  variables:
    DRUPAL_CORE: $CORE_PREVIOUS_MINOR
    IGNORE_PROJECT_DRUPAL_CORE_VERSION: 1
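For example, a project's own .gitlab-ci.yml could toggle a few of these variables. The names below follow the templates' documented SKIP_*/OPT_IN_* conventions, but check the templates documentation for the exact set:

variables:
  # Skip a job the project does not want.
  SKIP_ESLINT: '1'
  # Opt in to also test against the previous minor core release.
  OPT_IN_TEST_PREVIOUS_MINOR: '1'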

We always keep up with the latest core releases, so maintainers don't need to change anything to test the latest core versions. But if they want to "fix" the versions tested so these don't change, they can pin the version of the templates that they are using with just one line of code.

They can also choose which PHP version or database engine to run tests with.

External integrations

The contrib templates can be used in external GitLab instances. This is actually a five-line file (similar to the one mentioned above), and the integration remains the same. We have several community members using the templates in their own GitLab instances with their own company projects, and everything works the same.

The future

Ever since we made the switch, we have positively shaped contribution to Drupal. Module standards are very much aligned with core standards. We get really quick in-browser feedback about what to fix; we no longer need to upload extra (test-only) patches, etc.

The possibilities are endless, and we continue looking to the future as well. We are always open to hearing about improvements. For example, only recently, thanks to suggestions from the community, we added a Drupal CMS compatibility check and support for recipes.

We are also checking if we can convert some of the jobs to reusable GitLab CI components (they weren't stable when we launched the templates).

All in all, the future looks bright, and we are really glad that we made this move as part of our broader GitLab initiative.

How other open source projects can adopt a similar solution (aka "implementation details")

Whether you have an open source project and want to do something similar, or you are just curious, here are some of the details about how we implemented this for the Drupal Ecosystem.

We had several goals in mind, some of them as must-haves, some of them as nice-to-haves. The must-haves were that it needed to be easy to adopt, and that it should allow the same functionality as the previous system. The nice-to-haves were that it would be easy to iterate and push changes to all projects using it, without project interaction, and that we could easily add new features and turn them on/off from a central place.

At the time, GitLab components were still in the works and didn't have a timeline to become stable, so we needed to consider which other options were available. GitLab has the include functionality, which allows including external YAML files in a project's CI configuration. This was our starting point.

Template inheritance

We control the templates centrally at the GitLab Templates project. In there, you can see a folder called "includes", and those are the files that projects include. That's it! To make this easier, we provide a default template that gets prepopulated in GitLab and that contains the right "includes" in the right places. The six-line template is here.

You can create a ".gitlab-ci.yml" file in the repo and add these:

include:
  - project: $_GITLAB_TEMPLATES_REPO
    ref: $_GITLAB_TEMPLATES_REF
    file:
      - '/includes/include.drupalci.main.yml'
      - '/includes/include.drupalci.variables.yml'
      - '/includes/include.drupalci.workflows.yml'

From that moment on, the project "inherits" all the templates (that we control centrally) and will start running the above CI jobs automatically.

You can see that there are three main files: one with variables, one with global workflow rules, and one containing all the jobs.

That is just the base. Each project can deviate, configure, or override any part of the template as desired, giving them flexibility that we might not be able to accommodate centrally.

We created extensive documentation and generated a GitLab Pages site to help with this: https://project.pages.drupalcode.org/gitlab_templates.

Should you want to include this in any other external GitLab instance, you just need to adapt the above to be fully qualified links as explained in our documentation page here.

As mentioned before, we can push a change (e.g. a bug fix or new feature) centrally, and as long as the projects reference our files, they will automatically receive the changes. This gives us great flexibility and extensibility, and best of all, maintainers don't need to worry about it, as it is automatic for their projects.

We define the variables that control the Drupal versions to test against, the workflow rules that determine which jobs run and under which conditions, and most important of all, the logic for all the jobs run in the pipelines.

We did it this way because it was the solution that would give us all the must-haves and all the nice-to-haves. It allows literally thousands of projects to benefit instantly from shared CI checks and integration while barely writing any code.

Versioning

We don't need a complex system for this, as the code is relatively small and straightforward compared to other projects, but we realised early that we needed a system because pushing the "latest" to everybody was risky, should a bug or unplanned issue arise.

We document our versioning system in the "Templates version" page. We use semver tagging, but we only maintain one branch. Depending on the changes introduced since the last tag, we increment X.Y.Z (X for breaking changes, Y for new features, Z for bug fixes), and we also generate a set of tags that will allow maintainers to pin specific versions, or moving-tags within the same major or minor. You can see the tagging script we use here.

Excerpt:

# Compute tags.
TAG="$1"
IFS=. read major minor micro <<<"${TAG}"
MINOR_TAG="${major}.${minor}.x-latest"
MAJOR_TAG="${major}.x-latest"

...
echo "Setting tag: $TAG"
git tag $TAG
git push origin $TAG

...
echo "Setting latest minor tag: $MINOR_TAG"
git tag -d $MINOR_TAG || true
git push origin --delete $MINOR_TAG || true
git tag $MINOR_TAG
git push origin $MINOR_TAG

...
echo "Setting latest major tag: $MAJOR_TAG"
git tag -d $MAJOR_TAG || true
git push origin --delete $MAJOR_TAG || true
git tag $MAJOR_TAG
git push origin $MAJOR_TAG
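For example, tagging 3.1.2 with this script creates or moves three tags: 3.1.2 itself, plus the moving tags 3.1.x-latest and 3.x-latest, which maintainers can reference to stay on the latest release within that minor or major.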

This process has been working well for us for around 2 years already.

Pushing changes to all contributed projects

Once the above versioning system was implemented, it was easier and quicker to iterate, and it also gave maintainers a chance to pin things. We normally push changes to the "main" branch, so all users wanting the latest changes can both benefit from them and also help us discover any possible regressions.

Once we are happy with the set of changes since the last tag, we can create new tags that maintainers can reference. And once we are confident that a tag is stable enough, we have a special tag named "default-ref": all we need to do is change that tag to point to the specific stable version we want. Once we do, that version will automatically be used by all the contributed projects on the default setup.

The script that we use to set the default tag can be seen here.

Excerpt:

TAG="$1"
DEFAULT_TAG="default-ref"

echo "Setting default tag to be the same as: $TAG"

# Checkout the tag.
git checkout $TAG

# Override the default one.
git tag -d $DEFAULT_TAG || true
git push origin --delete $DEFAULT_TAG || true
git tag $DEFAULT_TAG
git push origin $DEFAULT_TAG

# Back to the main branch.
git checkout main

Implement it in your project

In the spirit of open source, we've documented the overarching strategy we used so that other teams fostering open source projects can adopt similar principles. We wanted to share how we did it, in case it helps any other project.

The key is to have a central place where you can control the default setup, and from there on, let projects decide what's best for their needs. They can stick to the default and recommended setup, but they could deviate from it should they need to.

15 Dec 2025 9:02pm GMT

Talking Drupal: Talking Drupal #532 - AI Marketing and Stuff

Today we are talking about AI Marketing, Marketing Trends, and the caber toss with guest Hayden Baillio. We'll also cover Drupal core 11.3 as our module of the week.

For show notes visit: https://www.talkingDrupal.com/532


Guests

Hayden Baillio - hounder.co hgbaillio

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Fei Lauren - feilauren

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

15 Dec 2025 7:00pm GMT

Freelock Blog: Catching Accessibility Issues While You Edit: Editoria11y

Day 15 - Editoria11y


We've spent the past two weeks discussing accessibility standards - what they mean, why they matter, and how to implement them. But there's a gap between knowing what to do and actually doing it consistently. Content editors add images without alt text. Headings get used for styling instead of structure. Links say "click here" instead of describing their destination.


15 Dec 2025 4:00pm GMT

The Drop Times: Genero Builds on DrupalCon Nara to Drive Digital Transformation Through Open Source in Japan

Genero Inc. positioned its sponsorship of DrupalCon Nara as part of a larger strategy to promote digital transformation and open-source adoption in Japan. More than just event support, their involvement emphasized cross-cultural knowledge sharing, community engagement, and visibility for Drupal beyond technical circles. Through DXTimes and global partnerships, the company continues to amplify Drupal's relevance for both business and development audiences locally and internationally. Weeks after the event, Genero's commitment endures, reinforcing Nara's potential as a growing hub for open tech talent.

15 Dec 2025 3:08pm GMT

LakeDrops Drupal Consulting, Development and Hosting: ECA Use Case: Notifications

ECA Use Case: Notifications

Silver call bell

Jürgen Haas

This article discusses how Drupal, despite its mature and robust APIs, lacks a unified notification framework - leaving site builders to navigate hundreds of separate modules for different notification types (new content, comments, user registrations, form submissions, etc.), each with its own configuration and limitations. ECA (Event-Condition-Action) offers a natural solution because notifications inherently involve events, conditions for determining who should be notified, and actions for message delivery, all of which ECA handles well. By using ECA, sites can consolidate notification management into a single module, reducing technical debt and ensuring consistent messaging across all delivery channels. The piece concludes by calling for the development of pre-built ECA recipes and a more intuitive UI to make this approach accessible to all Drupal site builders.

15 Dec 2025 2:10pm GMT

Drupal AI Initiative: Reflections from the Drupal AI Summit in Paris

On December 9th, 2025, the Drupal community gathered in Paris for a special AI summit.

As part of the larger apidays / FOST conference, we held the inaugural Drupal AI Summit designed specifically for end customers. Despite a compressed four-week organization window and a marketing campaign that ran for only three weeks, the event was oversubscribed with 170+ registered attendees.

We saw a standing-room-only crowd, peaking at around 120 people, with a sustained audience of 80-90 highly engaged delegates throughout the day. Crucially, many attendees spilled in from the wider conference, people who hadn't considered Drupal before but were drawn in by the energy and the promise of our open ecosystem.

Watch the highlights reel here! (Credit: Dan Lemon, Amazee.io)

Attention is on Drupal AI

Why the World is Watching Drupal AI

The summit attracted a diverse mix of end users, agencies, and AI specialists from the UK, Switzerland, Germany, the Netherlands, Italy, and beyond. We saw heavy representation from Higher Education, Government, Manufacturing, and Enterprise sectors.

Why the sudden surge in interest? Because the Drupal AI Initiative is moving at incredible speed and momentum, leading to attracting many developers back into the community.

Want to be notified when the next Drupal AI events are coming? Follow us on LinkedIn.

over 50 agents now supported

One moment captured this velocity perfectly: a slide presented during the opening remarks claimed Drupal AI supported 50 AI providers, a figure that was already outdated a week later, after another few providers had been added to the list. This is the power of the open web.

While proprietary systems lock you in, the Drupal AI ecosystem expands daily, offering sovereignty, flexibility, and rapid innovation that closed platforms simply cannot match.

Showcasing the Ecosystem

The day featured eleven leading experts who demonstrated that Drupal is the most robust platform for building AI-driven digital experiences. The sessions proved that whether you need autonomous agents, secure enterprise data handling, or next-generation search, Drupal is ready today.

All eyes on Drupal

The Architecture of the Future

We explored the foundational power of Drupal as an orchestration layer. Alex Moreno (Pantheon) showcased how our modular architecture makes Drupal the ideal bridge between content, data, and intelligence. Giorgi Jibladze (Omedia) took this further by demonstrating Drupal as a Model Context Protocol (MCP) server, integrating seamlessly with tools like ChatGPT for interactive UI components.

Giorgi Jibladze

Agents and Automation

The Summit highlighted that we are moving beyond simple chatbots. Marcus Johansson (FreelyGive) and Jamie Abrahams (FreelyGive) were standout voices here. Jamie introduced the Drupal AI Agent Framework, showing how we can build autonomous agents that create complex pages while keeping humans in the loop.

Marcus demonstrated the power of AI Automators, proving that complex, chained AI workflows can be built without writing a single line of code. Andrew Mallis (Kalamuna) showed how these agents can even revolutionize our workflows, using AI to "chat" with the migration process to speed up complex site upgrades.

Andrew on AI assisted migrations

Trust, Sovereignty, and UX

For our Enterprise and Government attendees, security was paramount. Dan Lemon (amazee.io) presented a privacy-first AI assistant that ensures data sovereignty when interacting with internal documents. Moritz Arendt (Open Social) discussed the ethics of AI in community platforms, focusing on maintaining trust. Emma Horrell (University of Edinburgh) proved that the success of these tools relies on people, sharing research on how AI-assisted editorial guidance can reshape web publishing in Higher Education.

Amazee IO's Dan Lemon on sovereignty

Optimization and Search

We also looked at how content is consumed by machines. Christoph Breidert (1xINTERNET) introduced the concept of AIO (Artificial Intelligence Optimisation), teaching us how to prepare Drupal content for AI-generated search results. Artem Dmitriiev (1xINTERNET) showed how to build intelligent, semantic search tools without writing code, and Kristof Van Tomme (PRONOVIX) highlighted how AI automates documentation and "answer engine" optimization.

Configuring Drupal AI Search

A Community Like No Other

This event was a huge success, not just in numbers, but in the connections made. It was a powerful way to promote Drupal to a non-Drupal audience, proving that our open-source ecosystem is the safest and most powerful bet for the future of AI.

After a day of high-speed innovation, we reconnected with our roots - human connection. We wrapped up the summit at the Paris Christmas market, sharing ideas over glühwein. It was a fitting end to a day about the future: high-tech solutions grounded in a warm, collaborative community.

Here's to wishing we all get an upgraded Drupal site with AI for Christmas!

Want to be notified when the next Drupal AI events are coming? Follow us on LinkedIn.

15 Dec 2025 12:37pm GMT