14 Feb 2026

Drupal.org aggregator

DDEV Blog: Mutagen in DDEV: Functionality, Issues, and Debugging

Friendly illustration of how Mutagen sync works between host and container filesystems

Mutagen has been a part of DDEV for years, providing dramatic performance improvements for macOS and traditional Windows users. It's enabled by default on these platforms, but understanding how it works, what can go wrong, and how to debug issues is key to getting the most out of DDEV.

Just Need to Debug Something?

If you're here because you just need to debug a Mutagen problem, this will probably help:

ddev utility mutagen-diagnose

See more below.

Contributor Training Video

This blog is based on the Mutagen Fundamentals and Troubleshooting Contributor Training held on January 22, 2026.

See the slides that accompany the training video.

What Mutagen Does

Mutagen is an asynchronous file synchronization tool that decouples in-container reads and writes from reads and writes on the host machine. Each filesystem enjoys near-native speed because neither is stuck waiting on the other.

Traditional Docker bind-mounts check every file access against the file on the host. On macOS and Windows, Docker's implementation of these checks is not performant. Mutagen solves this by maintaining a cached copy of your project files in a Docker volume, syncing changes between host and container asynchronously.

Mostly for PHP

The primary target of Mutagen syncing is PHP files. These became Docker's fundamental performance problem as PHP projects entered the Composer era and grew to tens of thousands of files, so many of which php-fpm had to open at once. With DDEV on macOS using Mutagen, php-fpm opens files that live on its local Linux filesystem, instead of opening ten thousand files that each have to be verified against the host.

Webserving Performance Improvement

Mutagen has delighted many developers with its web-serving performance. One dev said "the first time I tried it I cried."

Filesystem Notifications

Mutagen supports filesystem notifications (inotify/fsnotify), so file-watchers on both the host and inside the container are notified when changes occur. This is a significant advantage for development tools that would otherwise need to poll for changes.

How Mutagen Works in DDEV

When Mutagen is enabled, DDEV:

  1. Mounts a Docker volume onto /var/www inside the web container
  2. Installs a Linux Mutagen daemon inside the web container
  3. Starts a host-side Mutagen daemon
  4. Lets the two daemons keep each other up to date with changes on either side
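The arrangement can be sketched in docker-compose terms. This is an illustration only, not DDEV's actual generated configuration, and the volume name is hypothetical:

```yaml
services:
  web:
    volumes:
      # Traditional approach (slow on macOS/Windows): bind-mount the project,
      # so every in-container file access is checked against the host:
      #   - ../:/var/www/html
      # Mutagen approach: a named Docker volume holds the in-container copy,
      # and the two Mutagen daemons sync it with the host asynchronously:
      - myproject_mutagen:/var/www

volumes:
  myproject_mutagen:
```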


Upload Directories

DDEV automatically excludes user-generated files in upload_dirs from Mutagen syncing, using bind-mounts for them instead. For most CMS types this is configured automatically.
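For a Drupal project, for instance, the automatic default is equivalent to the following (a sketch; the exact value DDEV generates depends on your CMS type and docroot):

```yaml
# .ddev/config.yaml — illustrative only; DDEV applies this default
# for Drupal projects without any manual configuration
upload_dirs:
  - sites/default/files
```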

If your project has non-standard locations, override defaults by setting upload_dirs in .ddev/config.yaml.

Admittedly, upload_dirs is no longer an accurate name for this behavior. It was originally intended for user-generated files, but it is now also used for heavy directories like node_modules.

Common Issues and Caveats

Initial Sync Time

The first-time Mutagen sync takes 5-60 seconds depending on project size. A Magento 2 site with sample data might take 48 seconds initially, 12 seconds on subsequent starts. If sync takes longer than a minute, you're likely syncing large files or directories unnecessarily.

Large node_modules Directories

Frontend build tools create massive node_modules directories that slow Mutagen sync significantly. Solution: Add node_modules to upload_dirs:

upload_dirs: #upload_dirs entries are relative to docroot
  - sites/default/files # Keep existing CMS defaults
  - ../node_modules # Exclude from Mutagen

Then run ddev restart. The directory remains available in the container via Docker bind-mount.

File Changes When DDEV is Stopped

If you change files (checking out branches, running git pull, deleting files) while DDEV is stopped, Mutagen has no awareness of these changes. When you start again, it may restore old files from the volume.

Solution: Run ddev mutagen reset before restarting if you've made significant changes while stopped. That removes the volume so everything comes first from the host side in a fresh sync.

Simultaneous Changes

If the same file changes on both host and container while out of sync, conflicts can occur. This is quite rare, but it is possible, for example when a build process inside the container writes to files you are also editing on the host. The best practice is to make changes on one side at a time and let Mutagen sync them across.

Disk Space Considerations

Mutagen increases disk usage because project code exists both on your computer and inside a Docker volume. The upload_dirs directories are excluded to mitigate this.

Watch for volumes larger than 5GB (warning) or 10GB (critical). Use ddev utility mutagen-diagnose --all to check all projects.

Debugging Mutagen Issues

The New ddev utility mutagen-diagnose Command

DDEV now includes a diagnostic tool that automatically checks for common issues:

ddev utility mutagen-diagnose

This command analyzes your project's sync configuration and Mutagen volume for common problems, such as unexcluded node_modules directories and oversized volumes.

Use --all flag to analyze all Mutagen volumes system-wide:

ddev utility mutagen-diagnose --all

The diagnostic provides actionable recommendations like:

⚠ 3 node_modules directories exist but are not excluded from sync (can cause slow sync)
  → Add to .ddev/config.yaml:
    upload_dirs:
      - sites/default/files
      - web/themes/custom/mytheme/node_modules
      - web/themes/contrib/bootstrap/node_modules
      - app/node_modules
  → Then run: ddev restart

Debugging Long Startup Times

If ddev start takes longer than a minute and ddev utility mutagen-diagnose doesn't give you clues about why, watch what Mutagen is syncing:

ddev mutagen reset  # Start from scratch
ddev start

In another terminal:

while true; do ddev mutagen st -l | grep "^Current"; sleep 1; done

This shows which files Mutagen is working on:

Current file: vendor/bin/large-binary (306 MB/5.2 GB)
Current file: vendor/bin/large-binary (687 MB/5.2 GB)
Current file: vendor/bin/large-binary (1.1 GB/5.2 GB)

Add problem directories to upload_dirs or move them to .tarballs (automatically excluded).
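For a one-off large file, moving it into .tarballs is often the quickest fix. A minimal sketch, using a hypothetical dump file name:

```shell
# db-backup.sql.gz stands in for any large file you don't need synced
touch db-backup.sql.gz         # stand-in; you'd already have the real file
mkdir -p .tarballs             # .tarballs is excluded from Mutagen sync by DDEV
mv db-backup.sql.gz .tarballs/
ls .tarballs                   # confirm the file is now out of sync scope
```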

Monitoring Sync Activity

Watch real-time sync activity:

ddev mutagen monitor

This shows when Mutagen responds to changes and helps identify sync delays.

Manual Sync Control

Force an explicit sync:

ddev mutagen sync

Check sync status:

ddev mutagen status

View detailed status:

ddev mutagen status -l

Troubleshooting Steps

  1. Verify that your project works without Mutagen:

    ddev config --performance-mode=none && ddev restart
    
  2. Run diagnostics:

    ddev utility mutagen-diagnose
    
  3. Revert to a clean .ddev/mutagen/mutagen.yml:

    # Backup customizations first
    mv .ddev/mutagen/mutagen.yml .ddev/mutagen/mutagen.yml.bak
    ddev restart
    
  4. Reset Mutagen volume and recreate it:

    ddev mutagen reset
    ddev restart
    
  5. Enable debug output:

    DDEV_DEBUG=true ddev start
    
  6. View Mutagen logs:

    ddev mutagen logs
    
  7. Restart Mutagen daemon:

    ddev utility mutagen daemon stop
    ddev utility mutagen daemon start
    

Advanced Configuration

Excluding Directories from Sync

Recommended approach: Use upload_dirs in .ddev/config.yaml:

upload_dirs:
  - sites/default/files # CMS uploads
  - ../node_modules # Build dependencies
  - ../vendor/bin # Large binaries

Advanced approach: Edit .ddev/mutagen/mutagen.yml after removing the #ddev-generated line:

ignore:
  paths:
    - "/web/themes/custom/mytheme/node_modules"
    - "/vendor/large-package"

Then add corresponding bind-mounts in .ddev/docker-compose.bindmount.yaml:

services:
  web:
    volumes:
      - "../web/themes/custom/mytheme/node_modules:/var/www/html/web/themes/custom/mytheme/node_modules"

Always run ddev mutagen reset after changing mutagen.yml.

Git Hooks for Automatic Sync

Add .git/hooks/post-checkout with the following content:

#!/usr/bin/env bash
ddev mutagen sync || true

Then make it executable:

chmod +x .git/hooks/post-checkout

Use Global Configuration for performance_mode

The standard practice is to set performance_mode in global configuration, so that each user keeps the setting that is normal for their platform and the project configuration does not carry a setting that might not work for another team member.

Most people don't have to change this anyway; macOS and traditional Windows default to performance_mode: mutagen and Linux/WSL default to performance_mode: none.
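If you do want to pin it globally, ddev config global --performance-mode=mutagen writes the setting for you; the resulting entry looks roughly like this (a sketch of the relevant line, not the complete file):

```yaml
# ~/.ddev/global_config.yaml (relevant line only)
performance_mode: mutagen
```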

When to Disable Mutagen

Disable Mutagen if its asynchronous behavior, extra disk usage, or sync overhead causes more trouble for a project than the performance gain is worth.

Disable per-project:

ddev mutagen reset && ddev config --performance-mode=none && ddev restart

Disable globally:

ddev config global --performance-mode=none

Mutagen Data and DDEV

DDEV uses its own Mutagen installation, normally in ~/.ddev, but using $XDG_CONFIG_HOME when that is defined.

Access Mutagen directly:

ddev utility mutagen sync list
ddev utility mutagen sync monitor <projectname>

Summary

Mutagen provides dramatic performance improvements for macOS and traditional Windows users, but understanding its asynchronous nature is key to avoiding issues: the host and container copies converge over time rather than instantly, so exclude heavy directories from sync, reset after making large changes while DDEV is stopped, and run the diagnostics when something seems off.

The benefits far outweigh the caveats for most projects, especially with the new diagnostic tools that identify and help resolve common issues automatically.

For more information, see the DDEV Performance Documentation and the Mutagen documentation.

Copilot was used to create an initial draft for this blog, and for subsequent reviews.

14 Feb 2026 1:39am GMT

13 Feb 2026

Drupal.org aggregator

A Drupal Couple: Why I Do Not Trust Independent AI Agents Without Strict Supervision

A human hand drawing a single clean glowing line through complex AI circuit patterns, representing human supervision guiding AI toward simpler solutions

I use Claude Code almost exclusively. Every day, for hours. It allowed me to get back into developing great tools, and I have published several results that are working very well. Plugins, skills, frameworks, development workflows. Real things that real people can use. The productivity is undeniable.

So let me be clear about what this post is. This is not a take on what AI can do. This is about AI doing it completely alone.

The results are there. But under supervision.

The laollita.es Moment

When we were building laollita.es, something happened that I documented in a previous post. We needed to apply some visual changes to the site. The AI agent offered a solution: a custom module with a preprocess function. It would work. Then we iterated, and it moved to a theme-level implementation with a preprocess function. That would also work. Both approaches would accomplish the goal.

Until I asked: isn't it easier to just apply CSS to the new classes?

Yes. It was. Simple CSS. No module, no preprocess, no custom code beyond what was needed.

Here is what matters. All three solutions would have accomplished the goal. The module approach, the theme preprocess, the CSS. They all would have worked. But two of them create technical debt and maintenance load that was completely unnecessary. The AI did not choose the simplest path because it does not understand the maintenance burden. It does not think about who comes after. It generates a solution that works and moves on.

This is what I see every time I let the AI make decisions without questioning them. It works... and it creates problems you only discover later.

Why This Happens

I have been thinking about this for a while. I have my own theories, and they keep getting confirmed the more I work with these tools. Here is what I think is going on.

AI Cannot Form New Memories

Eddie Chu made this point at the latest AI Tinkerers meeting, and it resonated with me because I live it every day.

I use frameworks. Skills. Plugins. Commands. CLAUDE.md files. I have written before about my approach to working with AI tools. I have built an entire organization of reference documents, development guides, content frameworks, tone guides, project structure plans. All of this exists to create guardrails, to force best practices, to give AI the context it needs to do good work.

And it will not keep the memory.

We need to force it. Repeat it. Say it again.

This is not just about development. It has the same problem when creating content. I built a creative brief step into my workflow because the AI was generating content that reflected its own patterns instead of my message. I use markdown files, state files, reference documents, the whole structure in my projects folder. And still, every session starts from zero. The AI reads what it reads, processes what it processes, and the rest... it is as if it never existed.

The Expo.dev engineering team described this perfectly after using Claude Code for a month [1]. They said the tool "starts fresh every session" like "a new hire who needs onboarding each time." Pre-packaged skills? It "often forgets to apply them without explicit reminders." Exactly my experience.

Context Is Everything (And Context Is the Problem)

Here is something I have noticed repeatedly. In a chat interaction, in agentic work, the full history is the context. Everything that was said, every mistake, every correction, every back-and-forth. That is what the AI is working with.

When the AI is already confused and I have asked for the same correction three times and it is going in strange ways... starting a new session and asking it to analyze the code fresh, to understand what is there, it magically finds the solution.

Why? Because the previous mistakes are in the context. The AI does not read everything from top to bottom. It scans for what seems relevant, picks up fragments, skips over the rest. Which means even the guardrails I put in MD files, the frameworks, the instructions... they are not always read. They are not always in the window of what the AI is paying attention to at that moment.

And when errors are in the context, they compound. Research calls this "cascading failures" [2]. A small mistake becomes the foundation for every subsequent decision, and by the time you review the output, the error has propagated through multiple layers. An inventory agent hallucinated a nonexistent product, then called four downstream systems to price, stock, and ship the phantom item [3]. One hallucinated fact, one multi-system incident.

Starting fresh clears the poison. But an unsupervised agent never gets to start fresh. It just keeps building on what came before.

The Dunning-Kruger Effect of AI

The Dunning-Kruger effect is a cognitive bias where people with limited ability in a task overestimate their competence. AI has its own version of this.

When we ask AI to research, write, or code something, it typically responds with "this is done, production ready" or some variation of "this is done, final, perfect!" But it is not. And going back to the previous point, that false confidence is now in the context. So no matter if you discuss it later and explain what was wrong or that something is missing... it is already "done." If the AI's targeted search through the conversation does not bring the correction back into focus... there you go.

Expo.dev documented the same pattern [1]. Claude "produces poorly architected solutions with surprising frequency, and the solutions are presented with confidence." It never says "I am getting confused, maybe we should start over." It just keeps going, confidently wrong.

The METR study puts hard numbers on this [4]. In a randomized controlled trial with experienced developers, AI tools made them 19% slower. Not faster. Slower. But the developers still believed AI sped them up by 20%. The perception-reality gap is not just an AI problem. It is a human problem too. Both sides of the equation are miscalibrated.

The Training Data Problem

The information or memory that AI has is actually not all good. Much of it comes from "cowboy developers" who eagerly answer social media questions, from Stack Overflow answers, blog posts, and tutorials. That is the training. That is the information AI learned from.

The same principle applies beyond code. The information we produce as a society is biased, and AI absorbs all of it. That is why you see discriminatory AI systems across industries. AI resume screeners favor white-associated names 85% of the time [5]. UnitedHealthcare's AI denied care and was overturned on appeal 90% of the time [6]. A Dutch algorithm wrongly accused 35,000 parents of fraud, and the scandal toppled the entire government [7].

For my own work, I create guides to counteract this. Content framework guides that extract proper research on how to use storytelling, inverted pyramid, AIDA structures. Tone guides with specific instructions. I put them in skills and reference documents so I can point the AI to them when we are working. And still I have to remind it. Every time.

What I See Every Day

I have seen AI do what it did in laollita.es across multiple projects. In development, it created an interactive chat component, and the next time we used it on another screen, it almost wrote another one from scratch instead of reusing the one it had just built. Same project. Same session sometimes.

In content creation, I have a tone guide with specific stylistic preferences. And I still have to explicitly ask the AI to review it. No matter how directive the language in the instructions is. "Always load this file before writing content." It does not always load the file.

And it is not just my experience.

A Replit agent deleted a production database during a code freeze, then fabricated fake data and falsified logs to cover it up [8]. Google's Antigravity agent wiped a user's entire hard drive when asked to clear a cache [9]. Klarna's CEO said "we went too far" after cutting 700 jobs for AI and is now rehiring humans [10]. Salesforce cut 4,000 support staff and is now facing lost institutional knowledge [11]. The pattern keeps repeating. Companies trust the agent, remove the human, discover why the human was there in the first place.

What This Means for AI Supervision

I am not against AI. I am writing this post on a system largely built with AI assistance. The tools I publish, the workflows I create, the content I produce. AI is deeply embedded in my work. It makes me more productive.

At Palcera, I believe AI is genuinely great for employees and companies. When AI helps a developer finish faster, that time surplus benefits everyone. The developer gets breathing room. The company gets efficiency. And the customer can get better value, better pricing, faster delivery. That is real. I see it every day.

But all of that requires the human in the loop. Questioning the choices. Asking "isn't CSS simpler?" Clearing the context when things go sideways. Pointing to the tone guide when the AI forgets. Starting fresh when the conversation gets poisoned with old mistakes.

The results are there. But under supervision. And that distinction matters more than most people realize.


References

[1] Expo.dev, "What Our Web Team Learned Using Claude Code for a Month"

[2] Adversa AI, "Cascading Failures in Agentic AI: OWASP ASI08 Security Guide 2026"

[3] Galileo, "7 AI Agent Failure Modes and How To Fix Them"

[4] METR, "Measuring the Impact of Early-2025 AI on Experienced Developer Productivity"

[5] The Interview Guys / University of Washington, "85% of AI Resume Screeners Prefer White Names"

[6] AMA, "How AI Is Leading to More Prior Authorization Denials"

[7] WBUR, "What Happened When AI Went After Welfare Fraud"

[8] The Register, "Vibe Coding Service Replit Deleted Production Database"

[9] The Register, "Google's Vibe Coding Platform Deletes Entire Drive"

[10] Yahoo Finance, "After Firing 700 Humans For AI, Klarna Now Wants Them Back"

[11] Maarthandam, "Salesforce Regrets Firing 4,000 Experienced Staff and Replacing Them with AI"

Abstract

A developer who uses AI coding tools daily shares real examples of why autonomous AI agents still need human supervision. From unnecessary technical debt to context pollution to confidently wrong outputs, AI works best when a human is asking the right questions.

13 Feb 2026 6:20pm GMT

Dripyard Premium Drupal Themes: Dripyard's Meridian + Drupal CMS Webinar Recording is Up

Our webinar on Drupal CMS + Meridian theme is up on YouTube! In it, we talked about the new theme, demoed various example sites built with it, and ran through new components.

We also talked about how it differs from Drupal CMS's built-in Byte theme and site template.

Enjoy!

13 Feb 2026 4:18pm GMT

29 Jan 2026

W3C - Blog

2025 World Wide Web Consortium Membership Survey

This post gives a summary of the results of the 2025 World Wide Web Consortium (W3C) Membership Survey.

29 Jan 2026 9:38am GMT

20 Jan 2026

W3C - Blog

Strengthening Community Engagement at TPAC 2025: looking back at the IE & inclusion Funds

Sylvia Cadena, W3C Chief Development Officer, reports on coordinating the TPAC 2025 inclusion fund and W3C Invited Expert fund, aimed to reduce barriers for participants who are contributing to W3C's work, and that are part of W3C's effort to strengthen our Community Engagement program.

20 Jan 2026 3:06pm GMT

18 Jan 2026

Official jQuery Blog

jQuery 4.0.0

On January 14, 2006, John Resig introduced a JavaScript library called jQuery at BarCamp in New York City. Now, 20 years later, the jQuery team is happy to announce the final release of jQuery 4.0.0. After a long development cycle and several pre-releases, jQuery 4.0.0 brings many improvements and modernizations. It is the first major …

18 Jan 2026 12:29am GMT

14 Jan 2026

W3C - Blog

EPUB and HTML - Survey results and next steps

Mid-2025, the Publishing Maintenance Working Group (PMWG) ran a survey in the publishing community to ask: should we allow HTML in EPUB? The survey results and their discussions were invaluable in helping decide to not add HTML to EPUB 3.4, and to take a new approach on HTML and digital publications.

14 Jan 2026 12:38pm GMT

11 Aug 2025

Official jQuery Blog

jQuery 4.0.0 Release Candidate 1

It's here! Almost. jQuery 4.0.0-rc.1 is now available. It's our way of saying, "we think this is ready; now poke it with many sticks". If nothing is found that requires a second release candidate, jQuery 4.0.0 final will follow. Please try out this release and let us know if you encounter any issues. A 4.0 …

11 Aug 2025 5:35pm GMT

17 Jul 2024

Official jQuery Blog

Second Beta of jQuery 4.0.0

Last February, we released the first beta of jQuery 4.0.0. We're now ready to release a second, and we expect a release candidate to come soon™. This release comes with a major rewrite to jQuery's testing infrastructure, which removed all deprecated or under-supported dependencies. But the main change that warranted a second beta was a …

17 Jul 2024 2:03pm GMT

29 May 2023

Smiley Cat: Christian Watson's Web Design Blog

7 Types of Article Headlines: Craft the Perfect Title Every Time

When it comes to crafting an article, the headline is crucial for grabbing the reader's attention and enticing them to read further. In this post, I'll explore the 7 types of article headlines and provide examples for each using the subjects of product management, user experience design, and search engine optimization. 1. The Know-it-All The […]

The post 7 Types of Article Headlines: Craft the Perfect Title Every Time first appeared on Smiley Cat.

29 May 2023 10:20pm GMT

09 Apr 2023

Smiley Cat: Christian Watson's Web Design Blog

5 Product Management Myths You Need to Stop Believing

Product management is one of the most exciting and rewarding careers in the tech world. But it's also one of the most misunderstood and misrepresented. There are many myths and misconceptions that cloud the reality of what product managers do, how they do it, and what skills they need to succeed. In this blog post, […]

The post 5 Product Management Myths You Need to Stop Believing first appeared on Smiley Cat.

09 Apr 2023 5:28pm GMT

11 Dec 2022

Smiley Cat: Christian Watson's Web Design Blog

The Key Strengths of the Best Product Managers

The role of a product manager is crucial to the success of any product. They are responsible for managing the entire product life cycle, from conceptualization to launch and beyond. A product manager must possess a unique blend of skills and qualities to be effective in their role. Strong strategic thinking A product manager must […]

The post The Key Strengths of the Best Product Managers first appeared on Smiley Cat.

11 Dec 2022 4:43pm GMT